The Israeli airstrike campaign against Iranian military and cyber infrastructure on 12 June had an ‘interesting’ side effect. Accounts previously identified as allegedly managed by the Islamic Revolutionary Guard Corps (IRGC) that promoted Scottish independence fell silent following the strikes. This resulted in a 4% reduction in all discussion related (…)
A secure and reliable electronic identity (e-ID) is both a challenge and a crucial issue in today’s digital landscape. EPFL and SICPA are joining forces to design an innovative system of cryptographic algorithms.
To promote research and education in cyber-defence, EPFL and the Cyber-Defence (CYD) Campus launched a rolling call for Master Thesis Fellowships – A Talent Program for Cyber-Defence Research.
This month we introduce you to Hamza Abid, a CYD Master Thesis Fellowship recipient, who is finishing up his Master Thesis in the Laboratory of Sensing and Networking Systems at EPFL.
This article prompts reflection on what we mean by ‘trust’ when we talk about ‘trustworthy’ AI. There are many dimensions to trust, and the author helpfully breaks them down. In human-AI interactions, misalignments can occur when stakeholders interpret ‘trust’ differently. For example, companies might emphasize the epistemic aspect—reliance on knowledge and its acquisition—of trust, while (…)
It often appears as if disinformation were spread by a large number of social media users. However, research suggests that a comparatively small percentage of users is primarily responsible for creating and widely sharing divisive content, with these voices amplified by the platforms’ algorithms. As bleak as this may be, (…)
An impressively large line-up of AI leaders and experts is advocating for more research into ‘chain-of-thought’ (CoT) monitoring of reasoning models. The technique, as the name implies, aims to make the intermediate reasoning of AI models observable. It could become a key method for understanding how AI agents think and what their goals are, and could enhance (…)
With all the hype around agentic AI, the industry is rushing to embrace it. However, alarm bells have been sounded again and again concerning misaligned behaviour of LLMs and Large Reasoning Models (LRMs), ranging from ‘harmless’ misinformation to deliberately malicious actions. This raises serious questions about whether the current technology is really mature enough to be (…)
A must-attend event in Switzerland, the Black Alps conference is a hot spot for cybersecurity professionals and enthusiasts. The event allows you to discuss the latest threats, mitigations and advances in cybersecurity. The 2-day and 2-night program includes a variety of keynotes and technical talks, networking dinners and an ethical hacking contest (CTF). #BlackAlps25
From a cryptographer’s point of view, the big breakthrough in quantum computing would be a machine that can successfully factorize numbers in the 1000-digit range. As it turns out, this is actually quite difficult: the 2012 record of factorizing the number 21 is still unbeaten! And all reports of factorizing bigger numbers used very, very (…)
A lot of cryptographic proofs rely on something called the ‘random oracle model’ and the ‘Fiat-Shamir transformation’. Together, they can create a mathematical proof of the security of a specific zero-knowledge protocol. However, a true random oracle cannot exist in practice – in real algorithms, it is replaced by a hash function. What can go (…)
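The idea behind the Fiat-Shamir transformation fits in a few lines. Below is a toy, non-production sketch of a Schnorr-style proof of knowledge of a discrete logarithm made non-interactive, with SHA-256 standing in for the random oracle; the prime, generator and byte encoding are illustrative assumptions of ours, not any deployed scheme.

```python
import hashlib
import secrets

# Toy parameters (NOT for production): p is prime, g a generator candidate.
# A real deployment would use a standardized prime-order group.
p = 2**255 - 19
g = 2

def challenge(*parts: int) -> int:
    # Fiat-Shamir: the verifier's random challenge is replaced by a hash
    # of the public transcript -- this is the 'random oracle' in practice.
    h = hashlib.sha256()
    for n in parts:
        h.update(n.to_bytes(32, "big"))
    return int.from_bytes(h.digest(), "big")

def prove(x: int) -> tuple[int, int, int]:
    """Prove knowledge of x such that y = g^x mod p, without revealing x."""
    y = pow(g, x, p)
    r = secrets.randbelow(p - 1)   # prover's one-time random nonce
    t = pow(g, r, p)               # commitment
    c = challenge(y, t)            # hash of transcript instead of oracle
    s = (r + c * x) % (p - 1)      # response
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    # Recompute the challenge and check g^s == t * y^c (mod p).
    c = challenge(y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p
```

Anything that weakens the hash weakens the whole proof, which is exactly the failure mode the article explores.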
As a software engineer, I’m looking at LLMs both as a tool for, but potentially also a danger to, my job: will they replace me one day? In this study, the researchers measured the time that seasoned software engineers needed to fix a bug, both with and without the aid of LLMs. The outcome in this specific (…)
This full-day conference explores the potential disruptions caused by the rise of AI agents and their impact on existing systems and structures. Bringing together industry leaders, researchers, policymakers, and stakeholders, the event will facilitate in-depth discussions on the challenges and opportunities presented by AI agents. Participants will assess the risks, examine strategies to mitigate emerging threats, and collaborate on establishing resilient frameworks for responsible innovation.
This event is organized by the Center for Digital Trust (C4DT) at EPFL.
The Center for Digital Trust hosted a successful workshop on Privacy-Preserving eID last week. We welcomed 14 participants from seven partner organizations including Be-Ys, ELCA, FOITT, Kudelski, SICPA, Swiss Post/SwissSign, and Swisscom. The day-long event combined theoretical foundations with hands-on technical demonstrations. Our focus centered on swiyu, Switzerland’s proposed eID project developed by FOITT, and (…)
This article highlights the alarming reliance of critical infrastructure on outdated technology, exposing significant vulnerabilities in essential systems. The need for uninterrupted operation and compatibility requirements presents major challenges to the modernization of these legacy systems, and the costs to upgrade are steep. Yet the potential for catastrophic failure due to obsolete equipment underscores the (…)
This article highlights significant flaws within the proposed NO FAKES Act, whose repercussions would extend far beyond U.S. borders. I found it particularly insightful because of the parallels it draws between this bill and existing mechanisms for addressing copyright infringement, outlining how the deficiencies within the latter are likely to be mirrored in the former.
Driven by ethical concerns about using existing artwork to train gen AI models, an artist created his own model that was trained on no data at all. What was interesting to me is that, in exploring whether gen AI could create original art, he also demonstrated a potential path to better understanding how such (…)
As LLM agents become ‘en vogue’, we need to rethink the attack vectors they open up for malicious third parties. Here Simon Willison describes a combination often seen in such agents that puts your private data at risk. Unfortunately, there is currently not much you can do, except be aware that all the data that agents (…)
Cycle tracking apps are not only helpful for those trying to conceive, but also serve as important tools for keeping track of one’s general reproductive health. As this article discusses, however, such tools can quickly become a double-edged sword due to the high value of the data they collect, which can potentially end up in (…)
This article underscores that neither digital policies nor technologies can be discussed in isolation. Using Indonesia as an example, it lays out how the country’s laws and regulations on internet content are actually implemented by the ISPs and examines how the combination of vaguely worded laws and sweeping filtering methods ultimately impacts citizens’ access to (…)
This article is interesting because it highlights the opportunities and challenges of personal data ownership. Although tools such as dWallet claim to empower users, they can encourage the poorest and least educated people to sell their data without understanding the risks, thereby widening the digital divide. True data empowerment means that everyone must have the (…)
To foster wallets, credentials and trusted infrastructure for the benefit of all humans: leading organizations from across the globe are coming together to shape the future of digital identity, in particular in the realm of secure, interoperable wallets, credentials and trusted infrastructure.
This is a very nice attack on privacy protection in mobile browsers: even if you don’t allow any cookies and don’t consent to being tracked, your browsing behaviour is still tracked. The idea of communicating from the mobile browser to your locally installed app is technically very interesting, and seems to be difficult to avoid (…)
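A minimal sketch of the listener half of such a side channel (the port choice and parameter names are hypothetical; real-world cases are more elaborate): a native app opens a local HTTP endpoint, and any web page’s JavaScript can reach it with a fetch() to 127.0.0.1, linking a cookie-less browser session to the app’s logged-in identity.

```python
import http.server
import threading

class LinkHandler(http.server.BaseHTTPRequestHandler):
    """Local endpoint a page can hit via fetch('http://127.0.0.1:<port>/?sid=...')."""

    def do_GET(self):
        # At this point the app holds both the page-supplied identifier
        # (in self.path) and its own logged-in account identity, and can
        # join the two server-side.
        self.send_response(204)
        # Permissive CORS so the cross-origin request succeeds silently:
        self.send_header("Access-Control-Allow-Origin", "*")
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet

def start_listener() -> http.server.HTTPServer:
    # Port 0 lets the OS pick a free port; real trackers listened on
    # fixed, well-known ports that the page-side script probed.
    srv = http.server.HTTPServer(("127.0.0.1", 0), LinkHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv
```

Blocking cookies does not help here: the identifier travels over plain localhost HTTP, entirely outside the browser’s cookie and consent machinery.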
This atlas of algorithmic systems curated by AlgorithmWatch CH is a nonexhaustive yet revealing list of algorithms currently deployed in Switzerland, whether to ‘predict, recommend, affect or take decisions about human beings’ or to ‘generate content used by or on human beings.’ The atlas is really eye-opening for me – so many systems that we (…)
Agentic AI has only recently emerged, yet it is already being used to commit fraud. This trend is not new; historically, fraudsters have exploited new technologies to target unsuspecting users and weak security systems, as seen with the first instances of voice phishing during the rise of telephony in the early 20th century. These challenges have (…)