by Prof. Ben Zhao, Univ. of Chicago
The lack of transparency in today’s deep learning systems has paved the way for a new type of threats, commonly referred to as backdoor or Trojan attacks. In this talk, Ben Zhao will describe two recent results on detecting and understanding backdoor attacks on deep learning systems.
September 24th, 2019 @ 14:15, room BC 420
Kasra Edalatnejad presents DataShare, a decentralized and privacy-preserving global search system that enables journalists worldwide to find documents via a dedicated network of peers. This work stems from the need of the International Consortium of Investigative Journalists (ICIJ) for securing their search and discovery platform.
Wednesday, July 3rd 2019 @16:15, room BC 410
By Prof. Wei Meng, Chinese University of Hong Kong
Click is the prominent way that users interact with web applications. Attackers aim to intercept genuine user clicks to either send malicious commands to another application on behalf of the user or fabricate realistic ad click traffic. In this talk, Prof. Wei Meng investigates the click interception practices on the Web.
Tuesday July 23rd, 2019 @10:00, room BC 420
EPFL’s IC School invites you to the 2019 edition of the IC Summer Research Institute (SuRI), held in Lausanne (EPFL, BC 420) on June 13-14. The conference brings together renowned researchers and experts from academia and industry who will present their latest research in cybersecurity, privacy, and cryptography. The event is open to everyone and attendance is free of charge. For more information and to register please click here…
noyb’s latest victory may sound like a technicality – who is responsible for complying with the GDPR – but it is actually very important, because if no one knows who is responsible, no one really is responsible. All the more important that the ruling clearly holds Microsoft U.S. as the company actually selling the product (…)
With our conference on “Assessing the Disruptions by AI Agents” in mind, I found this article compelling because it documents the alarming acceleration of cyberattack capabilities thanks to AI agents. This raises the critical question of whether we are approaching a tipping point at which defence becomes structurally impossible. However, the authors offer cautious optimism, (…)
Many countries are investing in “sovereign” AI to compensate for the linguistic, cultural, and security shortcomings of American and Chinese models. Examples include SEA-LION in Singapore, ILMUchat in Malaysia, and Apertus in Switzerland, which are designed for local uses and characteristics. However, these initiatives face major obstacles: very high costs, massive computing power and talent (…)
An important part of digital trust is being in charge of your own data. Unfortunately, nowadays this is not at all the case. Some of your data resides in the cloud, e.g., with cloud-drives like Google Drive, directly accessible to you. But most of your data is hidden from you, saved on the servers of (…)
This long portrait of one of the leading authorities in the world on AI touches only lightly on concepts that those focused on safe and trustworthy AI prioritize, like explainability. But I think the key take-away from this article is that ‘it is hard, in the current AI race, to separate out purely intellectual inquiry (…)
Without defending any party or attacking the other, I find this article interesting because it somehow presents a new situation whose implications we should carefully consider: First, can Microsoft’s logic be extended from the use of cloud storage and AI to the use of operating systems? What about communications services or even hardware? Can this (…)
While we programmers are still figuring out where and how LLMs can help us get our work done, it’s worth taking a step back to reflect on what we know so far. I like this piece, which compares LLMs to “lightning-fast” junior programmers and describes how we can deploy them to deliver results. Although not (…)
No sooner had I begun to express my bemusement at the apparent popularity of a new app that pays users to record their phone calls and sells the data to AI firms, than the app was summarily shut down after a security flaw was reported that allowed anyone to access the phone numbers, call recordings, (…)
This ban is notable because, rather than targeting cutting-edge AI chips, it focuses on mass-produced processors that are essential for wider industry use. By disrupting the supply of equipment and forcing Chinese tech giants to innovate internally, it raises the stakes in the US–China tech conflict. This will likely accelerate the development of domestic production (…)
This issue explores the rapid spread of AI-powered video surveillance, from supermarkets to large public gatherings, and examines the legal, ethical, and societal challenges it raises. With insights from Sébastien Marcel (Idiap Research Institute) and Johan Rochel (EPFL CDH), it looks at Switzerland’s sectoral approach versus the EU’s new AI Act, and what this means (…)
From supermarket checkouts to Olympic stadiums, smart surveillance technologies are spreading rapidly, raising new questions about privacy, trust, and oversight. How should societies balance the benefits of AI-powered cameras with the risks of bias, misuse, and erosion of democratic freedoms? And how will the upcoming European AI Act reshape the governance of biometric surveillance, both in the EU and in Switzerland? This edition of C4DT Focus examines these pressing issues by offering a legal and ethical perspective on intelligent video surveillance, with insights from Sébastien Marcel (Idiap Research Institute) and Johan Rochel (EPFL CDH).
While one of my previous weekly picks showed that there is currently no mathematical proof for the reliability of today’s cryptographic algorithms, this article shows a way out: if a quantum computer is used as a basis to build a cryptographic algorithm, the foundation can be shown to protect against attacks to the system. While (…)
This article is fascinating because it exposes how indirect prompt injection attacks against LLM assistants like Google Gemini are not just theoretical—they have real-world implications, enabling hackers to hijack smart homes through poisoned data. This highlights a fundamental security flaw: current LLMs cannot reliably distinguish trusted commands from untrusted, external data.
I find this article interesting because it highlights the tension between digital sovereignty and the expansion of global technology. With 75% market penetration compared to the single-digit presence of US alternatives, Pix demonstrates how public digital goods can effectively challenge the dominance of Big Tech. This case raises the question of whether payment systems constitute (…)
This article talks about deepening digital estrangement, digital intrusion, and digital distraction from the perspective of a teacher who has seen the harm that overreliance on AI has caused to her students’ educational attainment. Hers is another testimony to the need for the definition of responsible and trustworthy AI to include when it should be (…)
This is a nice reminder of the state of the foundation upon which our public key infrastructure stands. Depending on the angle you’re looking at, it is either stable or shaky. The incident in question was a certificate authority that emitted a rogue certificate for “test purposes.” What ensued and how Cloudflare responded shows how (…)
The collaboration between the Swiss Data Science Center (SDSC) and the Canton of Vaud aims to generate a tangible and lasting impact on the economy and public community of the Vaud region. In this context, the SDSC supports collaborative projects in the field of data science, bringing together the strengths of academic excellence, companies, particularly SMEs and public actors.
While public LLM APIs are convenient, they store all queries on providers’ servers. Running open LLMs locally offers privacy and offline access, though setup can be challenging depending on hardware and model requirements. ‘Anyway’ addresses this by distributing queries across multiple GPUs with dynamic scaling. Professor Guerraoui’s lab is developing “Anyway”, a tool that can (…)
After many thought that LLMs and image-generators will remove jobs from writers and image artists, the pendulum swings back: clients realize that these tools only get you halfway to a useful result. So they turn to the ones they wanted to replace, and ask them to fix the half-baken results. I find i interesting how (…)
While it was to be expected that Anthropic will also use the users’ chats for training, I think the way they’re approaching this is not too bad. Perhaps the pop-up is not clear enough, but at least past chats will not get in the LLM training grinder. One of the big question will be of (…)