Skip to content

Launch of the CyberPeace Institute in Geneva

Thursday 26 September 2019 saw the launch of the CyberPeace Institute, an independent NGO that will address the growing impact of major cyberattacks, assist vulnerable communities, promote transparency, and advance global discussions on acceptable behavior in cyberspace. EPFL President Martin Vetterli will be sitting on the Executive Board, and the Center for Digital Trust is named as a scientific partner.

The C4DT is looking forward to working with the @cyberpeaceinst led by @DuguinStephane and @MarietjeSchaake and supporting its mission to enhance the stability of #cyberspace. Please click below to access the official announcement.

C4DT Distinguished Lecture : Talk by Dr. Dan Bogdanov, Cybernetica, Estonia

In this talk, Dan Bogdanov will start by introducing secure computing technologies and their potential in enterprise and government use. He will then look at a focus group study of the barriers of adopting such technologies based on interviews in many industries.

September 4, 2019 @ 14:15 in BC 410

C4DT Distinguished Lecture : Hidden Backdoors in Deep Learning Systems

by Prof. Ben Zhao, Univ. of Chicago
The lack of transparency in today’s deep learning systems has paved the way for a new type of threats, commonly referred to as backdoor or Trojan attacks. In this talk, Ben Zhao will describe two recent results on detecting and understanding backdoor attacks on deep learning systems.
September 24th, 2019 @ 14:15, room BC 420

DataShare: Decentralized Privacy-Preserving Search Engine for Investigative Journalists

Kasra Edalatnejad presents DataShare, a decentralized and privacy-preserving global search system that enables journalists worldwide to find documents via a dedicated network of peers. This work stems from the need of the International Consortium of Investigative Journalists (ICIJ) for securing their search and discovery platform.
Wednesday, July 3rd 2019 @16:15, room BC 410

All Your Clicks Belong to Me: Investigating Click Interception on the Web

By Prof. Wei Meng, Chinese University of Hong Kong
Click is the prominent way that users interact with web applications. Attackers aim to intercept genuine user clicks to either send malicious commands to another application on behalf of the user or fabricate realistic ad click traffic. In this talk, Prof. Wei Meng investigates the click interception practices on the Web.
Tuesday July 23rd, 2019 @10:00, room BC 420

The Summer Research Institute on Security and Privacy

EPFL’s IC School invites you to the 2019 edition of the IC Summer Research Institute (SuRI), held in Lausanne (EPFL, BC 420) on June 13-14. The conference brings together renowned researchers and experts from academia and industry who will present their latest research in cybersecurity, privacy, and cryptography. The event is open to everyone and attendance is free of charge. For more information and to register please click here…

‘People thought I was a communist doing this as a non-profit’: is Wikipedia’s Jimmy Wales the last decent tech baron?

I really appreciate Jimmy Wales’s insistence that sticking to Wikipedia’s core principals of neutrality, factuality and fostering a diverse community around the globe will eventually prevail in today’s polarized digital landscape. As a nerd myself I also relate very strongly to his enthusiasm of working on Wikipedia for the sake of doing something interesting over (…)

New prompt injection papers: Agents Rule of Two and The Attacker Moves Second

I found Simon Willison’s blog post interesting because he self-critically builds on his lethal trifecta concept. He clearly explains why prompt injection remains an unsolved risk for AI agents and highlights the practical “Rule of Two” for safer design (proposed by Meta AI). He also discusses new research showing that technical defenses consistently fail. His (…)

Why Signal’s post-quantum makeover is an amazing engineering achievement

Here is another post-quantum implementation of a popular protocol: Signal announced the addition of a quantum-safe algorithm to increase the protection of the messages sent between two Signal users. Like other quantum-safe algorithms, it doesn’t replace the currently used cryptographic base, but rather enhances it. Interestingly, the biggest hurdle was the size of the new (…)

Privacy-preserving and distributed processing of public data in hybrid trust networks

One of the increasingly popular paradigms for managing the growing size and complexity of modern ML models is the adoption of collaborative and decentralized approaches. While this has enabled new possibilities in privacy-preserving and scalable frameworks for distributed data analytics and model training over large-scale real-world models, current approaches often assume a uniform trust-levels among (…)

ARFON : Adversarial Robustness of Foundation Models

State-of-the-art architectures in many software applications and critical infrastructures are based on deep learning models. These models have been shown to be quite vulnerable to very small and carefully crafted perturbations, which pose fundamental questions in terms of safety, security, or performance guarantees at large. Several defense mechanisms have been developed in the last years (…)

Microsoft ‘illegally’ tracked students via 365 Education, says data watchdog

noyb’s latest victory may sound like a technicality – who is responsible for complying with the GDPR – but it is actually very important, because if no one knows who is responsible, no one really is responsible. All the more important that the ruling clearly holds Microsoft U.S. as the company actually selling the product (…)

Autonomous AI hacking and the future of cybersecurity

With our conference on “Assessing the Disruptions by AI Agents” in mind, I found this article compelling because it documents the alarming acceleration of cyberattack capabilities thanks to AI agents. This raises the critical question of whether we are approaching a tipping point at which defence becomes structurally impossible. However, the authors offer cautious optimism, (…)

Governments are spending billions on their own ‘sovereign’ AI technologies – is it a big waste of money?

Many countries are investing in “sovereign” AI to compensate for the linguistic, cultural, and security shortcomings of American and Chinese models. Examples include SEA-LION in Singapore, ILMUchat in Malaysia, and Apertus in Switzerland, which are designed for local uses and characteristics. However, these initiatives face major obstacles: very high costs, massive computing power and talent (…)

Personal data storage is an idea whose time has come

An important part of digital trust is being in charge of your own data. Unfortunately, nowadays this is not at all the case. Some of your data resides in the cloud, e.g., with cloud-drives like Google Drive, directly accessible to you. But most of your data is hidden from you, saved on the servers of (…)

Microsoft cuts off some services used by Israeli military unit

Without defending any party or attacking the other, I find this article interesting because it somehow presents a new situation whose implications we should carefully consider: First, can Microsoft’s logic be extended from the use of cloud storage and AI to the use of operating systems? What about communications services or even hardware? Can this (…)

The AI coding trap

While we programmers are still figuring out where and how LLMs can help us get our work done, it’s worth taking a step back to reflect on what we know so far. I like this piece, which compares LLMs to “lightning-fast” junior programmers and describes how we can deploy them to deliver results. Although not (…)

China bans its biggest tech companies from acquiring Nvidia chips, says report — Beijing claims its homegrown AI processors now match H20 and RTX Pro 6000D

This ban is notable because, rather than targeting cutting-edge AI chips, it focuses on mass-produced processors that are essential for wider industry use. By disrupting the supply of equipment and forcing Chinese tech giants to innovate internally, it raises the stakes in the US–China tech conflict. This will likely accelerate the development of domestic production (…)

New C4DT Focus #9, entitled “Smart surveillance on the rise: A legal and ethical crossroads”, is out!

This issue explores the rapid spread of AI-powered video surveillance, from supermarkets to large public gatherings, and examines the legal, ethical, and societal challenges it raises. With insights from Sébastien Marcel (Idiap Research Institute) and Johan Rochel (EPFL CDH), it looks at Switzerland’s sectoral approach versus the EU’s new AI Act, and what this means (…)

C4DT FOCUS 9 Smart surveillance on the rise: A legal and ethical crossroads

From supermarket checkouts to Olympic stadiums, smart surveillance technologies are spreading rapidly, raising new questions about privacy, trust, and oversight. How should societies balance the benefits of AI-powered cameras with the risks of bias, misuse, and erosion of democratic freedoms? And how will the upcoming European AI Act reshape the governance of biometric surveillance, both in the EU and in Switzerland? This edition of C4DT Focus examines these pressing issues by offering a legal and ethical perspective on intelligent video surveillance, with insights from Sébastien Marcel (Idiap Research Institute) and Johan Rochel (EPFL CDH).

Quantum Scientists Have Built a New Math of Cryptography

While one of my previous weekly picks showed that there is currently no mathematical proof for the reliability of today’s cryptographic algorithms, this article shows a way out: if a quantum computer is used as a basis to build a cryptographic algorithm, the foundation can be shown to protect against attacks to the system. While (…)

Hackers Hijacked Google’s Gemini AI With a Poisoned Calendar Invite to Take Over a Smart Home

This article is fascinating because it exposes how indirect prompt injection attacks against LLM assistants like Google Gemini are not just theoretical—they have real-world implications, enabling hackers to hijack smart homes through poisoned data. This highlights a fundamental security flaw: current LLMs cannot reliably distinguish trusted commands from untrusted, external data.