Thursday 26 September 2019 saw the launch of the CyberPeace Institute, an independent NGO that will address the growing impact of major cyberattacks, assist vulnerable communities, promote transparency, and advance global discussions on acceptable behavior in cyberspace. EPFL President Martin Vetterli will be sitting on the Executive Board, and the Center for Digital Trust is named as a scientific partner.
The C4DT is looking forward to working with the @cyberpeaceinst led by @DuguinStephane and @MarietjeSchaake and supporting its mission to enhance the stability of #cyberspace. Please click below to access the official announcement.
In this talk, Dan Bogdanov will start by introducing secure computing technologies and their potential in enterprise and government use. He will then look at a focus group study of the barriers of adopting such technologies based on interviews in many industries.
September 4, 2019 @ 14:15 in BC 410
by Prof. Ben Zhao, Univ. of Chicago
The lack of transparency in today’s deep learning systems has paved the way for a new type of threats, commonly referred to as backdoor or Trojan attacks. In this talk, Ben Zhao will describe two recent results on detecting and understanding backdoor attacks on deep learning systems.
September 24th, 2019 @ 14:15, room BC 420
Kasra Edalatnejad presents DataShare, a decentralized and privacy-preserving global search system that enables journalists worldwide to find documents via a dedicated network of peers. This work stems from the need of the International Consortium of Investigative Journalists (ICIJ) for securing their search and discovery platform.
Wednesday, July 3rd 2019 @16:15, room BC 410
By Prof. Wei Meng, Chinese University of Hong Kong
Click is the prominent way that users interact with web applications. Attackers aim to intercept genuine user clicks to either send malicious commands to another application on behalf of the user or fabricate realistic ad click traffic. In this talk, Prof. Wei Meng investigates the click interception practices on the Web.
Tuesday July 23rd, 2019 @10:00, room BC 420
EPFL’s IC School invites you to the 2019 edition of the IC Summer Research Institute (SuRI), held in Lausanne (EPFL, BC 420) on June 13-14. The conference brings together renowned researchers and experts from academia and industry who will present their latest research in cybersecurity, privacy, and cryptography. The event is open to everyone and attendance is free of charge. For more information and to register please click here…
I really appreciate Jimmy Wales’s insistence that sticking to Wikipedia’s core principals of neutrality, factuality and fostering a diverse community around the globe will eventually prevail in today’s polarized digital landscape. As a nerd myself I also relate very strongly to his enthusiasm of working on Wikipedia for the sake of doing something interesting over (…)
I found Simon Willison’s blog post interesting because he self-critically builds on his lethal trifecta concept. He clearly explains why prompt injection remains an unsolved risk for AI agents and highlights the practical “Rule of Two” for safer design (proposed by Meta AI). He also discusses new research showing that technical defenses consistently fail. His (…)
Here is another post-quantum implementation of a popular protocol: Signal announced the addition of a quantum-safe algorithm to increase the protection of the messages sent between two Signal users. Like other quantum-safe algorithms, it doesn’t replace the currently used cryptographic base, but rather enhances it. Interestingly, the biggest hurdle was the size of the new (…)
One of the increasingly popular paradigms for managing the growing size and complexity of modern ML models is the adoption of collaborative and decentralized approaches. While this has enabled new possibilities in privacy-preserving and scalable frameworks for distributed data analytics and model training over large-scale real-world models, current approaches often assume a uniform trust-levels among (…)
State-of-the-art architectures in many software applications and critical infrastructures are based on deep learning models. These models have been shown to be quite vulnerable to very small and carefully crafted perturbations, which pose fundamental questions in terms of safety, security, or performance guarantees at large. Several defense mechanisms have been developed in the last years (…)
noyb’s latest victory may sound like a technicality – who is responsible for complying with the GDPR – but it is actually very important, because if no one knows who is responsible, no one really is responsible. All the more important that the ruling clearly holds Microsoft U.S. as the company actually selling the product (…)
With our conference on “Assessing the Disruptions by AI Agents” in mind, I found this article compelling because it documents the alarming acceleration of cyberattack capabilities thanks to AI agents. This raises the critical question of whether we are approaching a tipping point at which defence becomes structurally impossible. However, the authors offer cautious optimism, (…)
Many countries are investing in “sovereign” AI to compensate for the linguistic, cultural, and security shortcomings of American and Chinese models. Examples include SEA-LION in Singapore, ILMUchat in Malaysia, and Apertus in Switzerland, which are designed for local uses and characteristics. However, these initiatives face major obstacles: very high costs, massive computing power and talent (…)
An important part of digital trust is being in charge of your own data. Unfortunately, nowadays this is not at all the case. Some of your data resides in the cloud, e.g., with cloud-drives like Google Drive, directly accessible to you. But most of your data is hidden from you, saved on the servers of (…)
This long portrait of one of the leading authorities in the world on AI touches only lightly on concepts that those focused on safe and trustworthy AI prioritize, like explainability. But I think the key take-away from this article is that ‘it is hard, in the current AI race, to separate out purely intellectual inquiry (…)
Without defending any party or attacking the other, I find this article interesting because it somehow presents a new situation whose implications we should carefully consider: First, can Microsoft’s logic be extended from the use of cloud storage and AI to the use of operating systems? What about communications services or even hardware? Can this (…)
While we programmers are still figuring out where and how LLMs can help us get our work done, it’s worth taking a step back to reflect on what we know so far. I like this piece, which compares LLMs to “lightning-fast” junior programmers and describes how we can deploy them to deliver results. Although not (…)
No sooner had I begun to express my bemusement at the apparent popularity of a new app that pays users to record their phone calls and sells the data to AI firms, than the app was summarily shut down after a security flaw was reported that allowed anyone to access the phone numbers, call recordings, (…)
This ban is notable because, rather than targeting cutting-edge AI chips, it focuses on mass-produced processors that are essential for wider industry use. By disrupting the supply of equipment and forcing Chinese tech giants to innovate internally, it raises the stakes in the US–China tech conflict. This will likely accelerate the development of domestic production (…)
This issue explores the rapid spread of AI-powered video surveillance, from supermarkets to large public gatherings, and examines the legal, ethical, and societal challenges it raises. With insights from Sébastien Marcel (Idiap Research Institute) and Johan Rochel (EPFL CDH), it looks at Switzerland’s sectoral approach versus the EU’s new AI Act, and what this means (…)
From supermarket checkouts to Olympic stadiums, smart surveillance technologies are spreading rapidly, raising new questions about privacy, trust, and oversight. How should societies balance the benefits of AI-powered cameras with the risks of bias, misuse, and erosion of democratic freedoms? And how will the upcoming European AI Act reshape the governance of biometric surveillance, both in the EU and in Switzerland? This edition of C4DT Focus examines these pressing issues by offering a legal and ethical perspective on intelligent video surveillance, with insights from Sébastien Marcel (Idiap Research Institute) and Johan Rochel (EPFL CDH).
While one of my previous weekly picks showed that there is currently no mathematical proof for the reliability of today’s cryptographic algorithms, this article shows a way out: if a quantum computer is used as a basis to build a cryptographic algorithm, the foundation can be shown to protect against attacks to the system. While (…)
This article is fascinating because it exposes how indirect prompt injection attacks against LLM assistants like Google Gemini are not just theoretical—they have real-world implications, enabling hackers to hijack smart homes through poisoned data. This highlights a fundamental security flaw: current LLMs cannot reliably distinguish trusted commands from untrusted, external data.