Launch of the CyberPeace Institute in Geneva

Thursday 26 September 2019 saw the launch of the CyberPeace Institute, an independent NGO that will address the growing impact of major cyberattacks, assist vulnerable communities, promote transparency, and advance global discussions on acceptable behavior in cyberspace. EPFL President Martin Vetterli will be sitting on the Executive Board, and the Center for Digital Trust is named as a scientific partner.

The C4DT is looking forward to working with the @cyberpeaceinst led by @DuguinStephane and @MarietjeSchaake and supporting its mission to enhance the stability of #cyberspace. Please click below to access the official announcement.

C4DT Distinguished Lecture: Talk by Dr. Dan Bogdanov, Cybernetica, Estonia

In this talk, Dan Bogdanov will start by introducing secure computing technologies and their potential in enterprise and government use. He will then turn to a focus-group study of the barriers to adopting such technologies, based on interviews across many industries.

September 4, 2019 @ 14:15 in BC 410

C4DT Distinguished Lecture: Hidden Backdoors in Deep Learning Systems

by Prof. Ben Zhao, Univ. of Chicago
The lack of transparency in today’s deep learning systems has paved the way for a new type of threat, commonly referred to as backdoor or Trojan attacks. In this talk, Ben Zhao will describe two recent results on detecting and understanding backdoor attacks on deep learning systems.
September 24th, 2019 @ 14:15, room BC 420

DataShare: Decentralized Privacy-Preserving Search Engine for Investigative Journalists

Kasra Edalatnejad presents DataShare, a decentralized and privacy-preserving global search system that enables journalists worldwide to find documents via a dedicated network of peers. This work stems from the need of the International Consortium of Investigative Journalists (ICIJ) to secure its search and discovery platform.
Wednesday, July 3rd 2019 @16:15, room BC 410

All Your Clicks Belong to Me: Investigating Click Interception on the Web

By Prof. Wei Meng, Chinese University of Hong Kong
Clicking is the primary way users interact with web applications. Attackers aim to intercept genuine user clicks, either to send malicious commands to another application on behalf of the user or to fabricate realistic ad-click traffic. In this talk, Prof. Wei Meng investigates click-interception practices on the Web.
Tuesday July 23rd, 2019 @10:00, room BC 420

The Summer Research Institute on Security and Privacy

EPFL’s IC School invites you to the 2019 edition of the IC Summer Research Institute (SuRI), held in Lausanne (EPFL, BC 420) on June 13–14. The conference brings together renowned researchers and experts from academia and industry who will present their latest research in cybersecurity, privacy, and cryptography. The event is open to everyone and attendance is free of charge. For more information and to register, please click here…

AMLD Intelligence Summit 2026 – AI & Media, how to secure and verify info?

AI empowers journalists by enabling rapid access to and analysis of vast document sets, but it also brings risks: it can be misused to unmask anonymous sources or to fabricate convincing misinformation. Without strong governance, AI may hallucinate, producing false or defamatory claims. In this track, co‑organized by C4DT, we highlight the needs and tools for robust safeguards to ensure that AI strengthens, rather than undermines, journalistic integrity.

On Evaluating Cognitive Capabilities in Machines (and Other “Alien” Intelligences)

I always appreciate this author’s talent for breaking down her research for laypersons like me. This article gives an overview of the current state of evaluating cognitive capabilities in machines and the shortcomings of these methods. Most importantly, it also provides suggestions for improving them. Definitely a recommendation for enthusiasts, sceptics and everyone in between!

The AI race is creating a new world order

This article caught my attention for several reasons. Firstly, Russia’s surprising absence from the AI race. Secondly, there is the strategic positioning of Middle Eastern players, such as the UAE, who are using their immense investment capabilities to manoeuvre between superpowers. Thirdly, this is not just about AI; it’s also about broader digital sovereignty. Unlike (…)

6 Scary Predictions for AI in 2026

This article is interesting because it links digital trust to systemic AI dangers—not just small tech glitches. It predicts how in 2026 AI might spread lies, spy on people, or disrupt jobs and markets faster than today’s content‑moderation and governance mechanisms can manage. This leaves us to question who should control key AI technology.

The US Invaded Venezuela and Captured Nicolás Maduro. ChatGPT Disagrees

I find this article interesting because it puts a spotlight on how the technical limits and product decisions of LLMs can shape, and sometimes distort, people’s perception of real-world events. It’s striking to see the same news prompt produce authoritative, up-to-date answers from some models and a blunt, incorrect denial from others, simply because of (…)

Airbus aims to migrate workloads to European cloud

I found this article particularly interesting because it highlights how major European companies like Airbus are now treating cloud sovereignty as a strategic criterion in their procurement processes. By explicitly requiring ‘a European provider’ in their tenders, they set an important precedent for other enterprises and even governments across Europe. This move reinforces the idea (…)

C4DT Focus #10: Swiss democracy faces its digital crossroads

C4DT Focus #10, entitled “Swiss democracy faces its digital crossroads”, was written by C4DT in collaboration with Gregory Wicky. Fake signature collection, fake ID scandals, and a razor-thin vote on the new federal e-ID have presented the country with an uncomfortable question: how do our institutions and the trust they are built on evolve in a digital world? (…)

Digital Omnibus – First Legal Analysis

Although the target audience of this legal analysis from noyb of the European Commission’s ‘Digital Omnibus’ proposal is clearly lawyers, it still gives the layperson a good overview of the proposed changes to the GDPR and their practical implications.

Large language mistake

AI is often anthropomorphized, with terms like ‘hallucinations’ having entered technical jargon. It is all the more striking that when the capabilities of today’s popular models are discussed, ‘intelligence’ is reduced to one set of benchmarks or another, but seldom considered in a holistic way. I liked this article for putting the human, and not the machine, (…)

Disrupting the first reported AI-orchestrated cyber espionage campaign

Improvements in agentic AI and increasing competency of hackers are driving more sophisticated cyber attacks, from mere AI-augmented intrusions to ever more autonomous operations. While this is deeply concerning, such efforts can only succeed when infrastructure is vulnerable. Investing in the cybersecurity of our infrastructures has never been more pressing.

Hacker Paragraph: BSI Chief Calls for Decriminalization of Security Researchers

The ‘hacker paragraph’ in Germany is a law that prohibits breaking into third-party IT systems, even for research, and even as a white-hat hacker who discloses their findings responsibly. The development or distribution of software for such purposes is also prohibited. For researchers and white-hat hackers alike, this is of (…)

What we lose when we surrender care to algorithms

Although written from the point of view of the U.S. healthcare system, quite a few of the issues raised in this essay are universal. I appreciated this in-depth discussion of the impact of AI on the healthcare system because it not only points out the detrimental consequences these technologies have, but also puts these consequences (…)

AI bubble: “70% of the cloud is controlled by three American companies,” a conversation with Meredith Whittaker, president of Signal

Bubble or no bubble? In this interview, the president of Signal shares her analysis: dominant AI is not just a neutral technological advance, but the result of an economic model of platforms that concentrate data and computing power among a few giants, creating monopolies, geopolitical and security risks—and requiring strict regulation (e.g., enforcement of the (…)

‘People thought I was a communist doing this as a non-profit’: is Wikipedia’s Jimmy Wales the last decent tech baron?

I really appreciate Jimmy Wales’s insistence that sticking to Wikipedia’s core principles of neutrality, factuality and fostering a diverse community around the globe will eventually prevail in today’s polarized digital landscape. As a nerd myself I also relate very strongly to his enthusiasm for working on Wikipedia for the sake of doing something interesting over (…)

New prompt injection papers: Agents Rule of Two and The Attacker Moves Second

I found Simon Willison’s blog post interesting because he self-critically builds on his lethal trifecta concept. He clearly explains why prompt injection remains an unsolved risk for AI agents and highlights the practical “Rule of Two” for safer design (proposed by Meta AI). He also discusses new research showing that technical defenses consistently fail. His (…)