Skip to content

C4DT Distinguished Lecture : Hidden Backdoors in Deep Learning Systems

by Prof. Ben Zhao, Univ. of Chicago
The lack of transparency in today’s deep learning systems has paved the way for a new type of threats, commonly referred to as backdoor or Trojan attacks. In this talk, Ben Zhao will describe two recent results on detecting and understanding backdoor attacks on deep learning systems.
September 24th, 2019 @ 14:15, room BC 420

DataShare: Decentralized Privacy-Preserving Search Engine for Investigative Journalists

Kasra Edalatnejad presents DataShare, a decentralized and privacy-preserving global search system that enables journalists worldwide to find documents via a dedicated network of peers. This work stems from the need of the International Consortium of Investigative Journalists (ICIJ) for securing their search and discovery platform.
Wednesday, July 3rd 2019 @16:15, room BC 410

All Your Clicks Belong to Me: Investigating Click Interception on the Web

By Prof. Wei Meng, Chinese University of Hong Kong
Click is the prominent way that users interact with web applications. Attackers aim to intercept genuine user clicks to either send malicious commands to another application on behalf of the user or fabricate realistic ad click traffic. In this talk, Prof. Wei Meng investigates the click interception practices on the Web.
Tuesday July 23rd, 2019 @10:00, room BC 420

The Summer Research Institute on Security and Privacy

EPFL’s IC School invites you to the 2019 edition of the IC Summer Research Institute (SuRI), held in Lausanne (EPFL, BC 420) on June 13-14. The conference brings together renowned researchers and experts from academia and industry who will present their latest research in cybersecurity, privacy, and cryptography. The event is open to everyone and attendance is free of charge. For more information and to register please click here…

Microsoft ‘illegally’ tracked students via 365 Education, says data watchdog

noyb’s latest victory may sound like a technicality – who is responsible for complying with the GDPR – but it is actually very important, because if no one knows who is responsible, no one really is responsible. All the more important that the ruling clearly holds Microsoft U.S. as the company actually selling the product (…)

Autonomous AI hacking and the future of cybersecurity

With our conference on “Assessing the Disruptions by AI Agents” in mind, I found this article compelling because it documents the alarming acceleration of cyberattack capabilities thanks to AI agents. This raises the critical question of whether we are approaching a tipping point at which defence becomes structurally impossible. However, the authors offer cautious optimism, (…)

Governments are spending billions on their own ‘sovereign’ AI technologies – is it a big waste of money?

Many countries are investing in “sovereign” AI to compensate for the linguistic, cultural, and security shortcomings of American and Chinese models. Examples include SEA-LION in Singapore, ILMUchat in Malaysia, and Apertus in Switzerland, which are designed for local uses and characteristics. However, these initiatives face major obstacles: very high costs, massive computing power and talent (…)

Personal data storage is an idea whose time has come

An important part of digital trust is being in charge of your own data. Unfortunately, nowadays this is not at all the case. Some of your data resides in the cloud, e.g., with cloud-drives like Google Drive, directly accessible to you. But most of your data is hidden from you, saved on the servers of (…)

Microsoft cuts off some services used by Israeli military unit

Without defending any party or attacking the other, I find this article interesting because it somehow presents a new situation whose implications we should carefully consider: First, can Microsoft’s logic be extended from the use of cloud storage and AI to the use of operating systems? What about communications services or even hardware? Can this (…)

The AI coding trap

While we programmers are still figuring out where and how LLMs can help us get our work done, it’s worth taking a step back to reflect on what we know so far. I like this piece, which compares LLMs to “lightning-fast” junior programmers and describes how we can deploy them to deliver results. Although not (…)

China bans its biggest tech companies from acquiring Nvidia chips, says report — Beijing claims its homegrown AI processors now match H20 and RTX Pro 6000D

This ban is notable because, rather than targeting cutting-edge AI chips, it focuses on mass-produced processors that are essential for wider industry use. By disrupting the supply of equipment and forcing Chinese tech giants to innovate internally, it raises the stakes in the US–China tech conflict. This will likely accelerate the development of domestic production (…)

New C4DT Focus #9, entitled “Smart surveillance on the rise: A legal and ethical crossroads”, is out!

This issue explores the rapid spread of AI-powered video surveillance, from supermarkets to large public gatherings, and examines the legal, ethical, and societal challenges it raises. With insights from Sébastien Marcel (Idiap Research Institute) and Johan Rochel (EPFL CDH), it looks at Switzerland’s sectoral approach versus the EU’s new AI Act, and what this means (…)

C4DT FOCUS 9 Smart surveillance on the rise: A legal and ethical crossroads

From supermarket checkouts to Olympic stadiums, smart surveillance technologies are spreading rapidly, raising new questions about privacy, trust, and oversight. How should societies balance the benefits of AI-powered cameras with the risks of bias, misuse, and erosion of democratic freedoms? And how will the upcoming European AI Act reshape the governance of biometric surveillance, both in the EU and in Switzerland? This edition of C4DT Focus examines these pressing issues by offering a legal and ethical perspective on intelligent video surveillance, with insights from Sébastien Marcel (Idiap Research Institute) and Johan Rochel (EPFL CDH).

Quantum Scientists Have Built a New Math of Cryptography

While one of my previous weekly picks showed that there is currently no mathematical proof for the reliability of today’s cryptographic algorithms, this article shows a way out: if a quantum computer is used as a basis to build a cryptographic algorithm, the foundation can be shown to protect against attacks to the system. While (…)

Hackers Hijacked Google’s Gemini AI With a Poisoned Calendar Invite to Take Over a Smart Home

This article is fascinating because it exposes how indirect prompt injection attacks against LLM assistants like Google Gemini are not just theoretical—they have real-world implications, enabling hackers to hijack smart homes through poisoned data. This highlights a fundamental security flaw: current LLMs cannot reliably distinguish trusted commands from untrusted, external data.

U.S. targets Brazil’s payments platform Pix in trade spat

I find this article interesting because it highlights the tension between digital sovereignty and the expansion of global technology. With 75% market penetration compared to the single-digit presence of US alternatives, Pix demonstrates how public digital goods can effectively challenge the dominance of Big Tech. This case raises the question of whether payment systems constitute (…)

Bring Back the Blue-Book Exam

This article talks about deepening digital estrangement, digital intrusion, and digital distraction from the perspective of a teacher who has seen the harm that overreliance on AI has caused to her students’ educational attainment. Hers is another testimony to the need for the definition of responsible and trustworthy AI to include when it should be (…)

Addressing the unauthorized issuance of multiple TLS certificates for 1.1.1.1

This is a nice reminder of the state of the foundation upon which our public key infrastructure stands. Depending on the angle you’re looking at, it is either stable or shaky. The incident in question was a certificate authority that emitted a rogue certificate for “test purposes.” What ensued and how Cloudflare responded shows how (…)

Second Call for Vaud Projects

The collaboration between the Swiss Data Science Center (SDSC) and the Canton of Vaud aims to generate a tangible and lasting impact on the economy and public community of the Vaud region. In this context, the SDSC supports collaborative projects in the field of data science, bringing together the strengths of academic excellence, companies, particularly SMEs and public actors.

“Anyway” – Distributed LLM Hands-on Workshop

While public LLM APIs are convenient, they store all queries on providers’ servers. Running open LLMs locally offers privacy and offline access, though setup can be challenging depending on hardware and model requirements. ‘Anyway’ addresses this by distributing queries across multiple GPUs with dynamic scaling. Professor Guerraoui’s lab is developing “Anyway”, a tool that can (…)

Humans are being hired to make AI slop look less sloppy

After many thought that LLMs and image-generators will remove jobs from writers and image artists, the pendulum swings back: clients realize that these tools only get you halfway to a useful result. So they turn to the ones they wanted to replace, and ask them to fix the half-baken results. I find i interesting how (…)

Anthropic will start training its AI models on chat transcripts

While it was to be expected that Anthropic will also use the users’ chats for training, I think the way they’re approaching this is not too bad. Perhaps the pop-up is not clear enough, but at least past chats will not get in the LLM training grinder. One of the big question will be of (…)