Skip to content

DataShare: Decentralized Privacy-Preserving Search Engine for Investigative Journalists

Kasra Edalatnejad presents DataShare, a decentralized and privacy-preserving global search system that enables journalists worldwide to find documents via a dedicated network of peers. This work stems from the need of the International Consortium of Investigative Journalists (ICIJ) for securing their search and discovery platform.
Wednesday, July 3rd 2019 @16:15, room BC 410

All Your Clicks Belong to Me: Investigating Click Interception on the Web

By Prof. Wei Meng, Chinese University of Hong Kong
Click is the prominent way that users interact with web applications. Attackers aim to intercept genuine user clicks to either send malicious commands to another application on behalf of the user or fabricate realistic ad click traffic. In this talk, Prof. Wei Meng investigates the click interception practices on the Web.
Tuesday July 23rd, 2019 @10:00, room BC 420

The Summer Research Institute on Security and Privacy

EPFL’s IC School invites you to the 2019 edition of the IC Summer Research Institute (SuRI), held in Lausanne (EPFL, BC 420) on June 13-14. The conference brings together renowned researchers and experts from academia and industry who will present their latest research in cybersecurity, privacy, and cryptography. The event is open to everyone and attendance is free of charge. For more information and to register please click here…

New C4DT Focus #9, entitled “Smart surveillance on the rise: A legal and ethical crossroads”, is out!

This issue explores the rapid spread of AI-powered video surveillance, from supermarkets to large public gatherings, and examines the legal, ethical, and societal challenges it raises. With insights from Sébastien Marcel (Idiap Research Institute) and Johan Rochel (EPFL CDH), it looks at Switzerland’s sectoral approach versus the EU’s new AI Act, and what this means (…)

C4DT FOCUS 9 Smart surveillance on the rise: A legal and ethical crossroads

From supermarket checkouts to Olympic stadiums, smart surveillance technologies are spreading rapidly, raising new questions about privacy, trust, and oversight. How should societies balance the benefits of AI-powered cameras with the risks of bias, misuse, and erosion of democratic freedoms? And how will the upcoming European AI Act reshape the governance of biometric surveillance, both in the EU and in Switzerland? This edition of C4DT Focus examines these pressing issues by offering a legal and ethical perspective on intelligent video surveillance, with insights from Sébastien Marcel (Idiap Research Institute) and Johan Rochel (EPFL CDH).

Quantum Scientists Have Built a New Math of Cryptography

While one of my previous weekly picks showed that there is currently no mathematical proof for the reliability of today’s cryptographic algorithms, this article shows a way out: if a quantum computer is used as a basis to build a cryptographic algorithm, the foundation can be shown to protect against attacks to the system. While (…)

Hackers Hijacked Google’s Gemini AI With a Poisoned Calendar Invite to Take Over a Smart Home

This article is fascinating because it exposes how indirect prompt injection attacks against LLM assistants like Google Gemini are not just theoretical—they have real-world implications, enabling hackers to hijack smart homes through poisoned data. This highlights a fundamental security flaw: current LLMs cannot reliably distinguish trusted commands from untrusted, external data.

U.S. targets Brazil’s payments platform Pix in trade spat

I find this article interesting because it highlights the tension between digital sovereignty and the expansion of global technology. With 75% market penetration compared to the single-digit presence of US alternatives, Pix demonstrates how public digital goods can effectively challenge the dominance of Big Tech. This case raises the question of whether payment systems constitute (…)

Bring Back the Blue-Book Exam

This article talks about deepening digital estrangement, digital intrusion, and digital distraction from the perspective of a teacher who has seen the harm that overreliance on AI has caused to her students’ educational attainment. Hers is another testimony to the need for the definition of responsible and trustworthy AI to include when it should be (…)

Addressing the unauthorized issuance of multiple TLS certificates for 1.1.1.1

This is a nice reminder of the state of the foundation upon which our public key infrastructure stands. Depending on the angle you’re looking at, it is either stable or shaky. The incident in question was a certificate authority that emitted a rogue certificate for “test purposes.” What ensued and how Cloudflare responded shows how (…)

Second Call for Vaud Projects

The collaboration between the Swiss Data Science Center (SDSC) and the Canton of Vaud aims to generate a tangible and lasting impact on the economy and public community of the Vaud region. In this context, the SDSC supports collaborative projects in the field of data science, bringing together the strengths of academic excellence, companies, particularly SMEs and public actors.

“Anyway” – Distributed LLM Hands-on Workshop

While public LLM APIs are convenient, they store all queries on providers’ servers. Running open LLMs locally offers privacy and offline access, though setup can be challenging depending on hardware and model requirements. ‘Anyway’ addresses this by distributing queries across multiple GPUs with dynamic scaling. Professor Guerraoui’s lab is developing “Anyway”, a tool that can (…)

Humans are being hired to make AI slop look less sloppy

After many thought that LLMs and image-generators will remove jobs from writers and image artists, the pendulum swings back: clients realize that these tools only get you halfway to a useful result. So they turn to the ones they wanted to replace, and ask them to fix the half-baken results. I find i interesting how (…)

Anthropic will start training its AI models on chat transcripts

While it was to be expected that Anthropic will also use the users’ chats for training, I think the way they’re approaching this is not too bad. Perhaps the pop-up is not clear enough, but at least past chats will not get in the LLM training grinder. One of the big question will be of (…)

Iranian pro-Scottish independence accounts go silent after Israel attacks

The Israeli airstrike campaign against Iranian military and cyber infrastructure on 12 June had an ‘interesting’ side effect. Accounts that had previously been identified as allegedly being managed by the Iranian Revolutionary Guard Corps (IRGC) and that promoted Scottish independence fell silent following the strikes. This resulted in a 4% reduction in all discussion related (…)

AI Could Actually Help Rebuild The Middle Class

I found this article interesting because, rather than perpetuating fear-driven narratives, it provides a thorough analysis backed by demographic realities in the Western world. Labour shortages, it suggests, make it unlikely that AI will ‘take all our jobs’. It emphasises how AI can increase access to specialist roles for a wider range of workers. The (…)

Why AI chatbots lie to us

With all the hype around agentic AI, the industry is rushing to embrace it. However, alarm bells have been sounded again and again concerning misaligned behaviour of LLMs and Large Reasoning Models (LRMs), ranging from ‘harmless’ misinformation to deliberately malicious actions. This raises serious questions whether the current technology is really mature enough to be (…)

Conspiracy Theories About the Texas Floods Lead to Death Threats

Severe floods in Texas sparked a wave of conspiracy theories, with claims circulating online that the disaster was caused by geoengineering or weather weapons. This highlights a growing tension between the speed at which formal institutions can communicate accurate information and the rapid spread of AI-fueled disinformation. The resulting vandalism of radar infrastructure and threats (…)

Anticipating the Agentic Era: Assessing the Disruptions by AI Agents

This full-day conference explores the potential disruptions caused by the rise of AI agents and their impact on existing systems and structures. Bringing together industry leaders, researchers, policymakers, and stakeholders, the event will facilitate in-depth discussions on the challenges and opportunities presented by AI agents. Participants will assess the risks, examine strategies to mitigate emerging threats, and collaborate on establishing resilient frameworks for responsible innovation.

This event is organized by the Center for Digital Trust (C4DT) at EPFL.

Libxml2’s “no security embargoes” policy

Here’s an interesting take on what happens if security bugs are found in Open Source libraries. Now that more and more of Open Source libraries find their way into commercial products from Google, Microsoft, Amazon, and others, the problem of fixing security bugs in a timely manner is becoming a bigger problem. Open Source projects (…)

The NO FAKES Act Has Changed – and It’s So Much Worse

This article highlights significant flaws within the proposed NO FAKES Act, whose repercussions would extend far beyond U.S. borders. I found it particularly insightful because of the parallels it draws between this bill and existing mechanisms for addressing copyright infringement, outlining how the deficiencies within the latter are likely to be mirrored in the former.

What happens when you feed AI nothing

Driven by ethical concerns about using existing artwork to train gen AI models, an artist created his own model that produces output untrained on any data at all. What was interesting to me is that, in exploring whether gen AI could create original art, he also demonstrated a potential path to better understanding how such (…)