The Israeli airstrike campaign against Iranian military and cyber infrastructure on 12 June had an ‘interesting’ side effect: accounts previously identified as allegedly managed by the Iranian Revolutionary Guard Corps (IRGC) to promote Scottish independence fell silent following the strikes. This resulted in a 4% reduction in all discussion related (…)
I found this article interesting because, rather than perpetuating fear-driven narratives, it provides a thorough analysis backed by demographic realities in the Western world. Labour shortages, it suggests, make it unlikely that AI will ‘take all our jobs’. It emphasises how AI can increase access to specialist roles for a wider range of workers. The (…)
With all the hype around agentic AI, the industry is rushing to embrace it. However, alarm bells have been sounded again and again concerning misaligned behaviour of LLMs and Large Reasoning Models (LRMs), ranging from ‘harmless’ misinformation to deliberately malicious actions. This raises serious questions about whether the current technology is really mature enough to be (…)
From a cryptographer’s point of view, the big breakthrough in quantum computing would be if it could successfully factor numbers in the 1000-digit range. As it turns out, this is actually quite difficult: the 2012 record of factoring the number 21 is still unbeaten! And all reports of factoring bigger numbers used very, very (…)
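To see what is at stake in that record, here is a minimal sketch of Shor's algorithm applied to 21. The quantum computer is only needed for one step, finding the multiplicative order of a number; everything else is classical. In this sketch the order is brute-forced classically, which of course removes the quantum speed-up and only works for tiny numbers:

```python
from math import gcd

def order(a, n):
    """Brute-force the multiplicative order of a mod n.
    This is the step a quantum computer would speed up."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(n, a):
    """Classical post-processing of Shor's algorithm:
    derive factors of n from the order r of a mod n."""
    r = order(a, n)
    if r % 2 != 0:
        return None  # need an even order; retry with another a
    x = pow(a, r // 2, n)
    if x == n - 1:
        return None  # trivial square root of 1; retry with another a
    return gcd(x - 1, n), gcd(x + 1, n)

print(shor_classical_part(21, 2))  # order of 2 mod 21 is 6 → factors (7, 3)
```

For a 1000-digit modulus, the order-finding step is hopeless classically, which is exactly why a quantum computer that could do it would break RSA.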
Severe floods in Texas sparked a wave of conspiracy theories, with claims circulating online that the disaster was caused by geoengineering or weather weapons. This highlights a growing tension between the speed at which formal institutions can communicate accurate information and the rapid spread of AI-fueled disinformation. The resulting vandalism of radar infrastructure and threats (…)
As a software engineer, I look at LLMs both as a tool for my job and, potentially, as a danger to it: will they replace me one day? In this study, they measured the time that seasoned software engineers needed to fix a bug, both with and without the aid of LLMs. The outcome in this specific (…)
This full-day conference explores the potential disruptions caused by the rise of AI agents and their impact on existing systems and structures. Bringing together industry leaders, researchers, policymakers, and stakeholders, the event will facilitate in-depth discussions on the challenges and opportunities presented by AI agents. Participants will assess the risks, examine strategies to mitigate emerging threats, and collaborate on establishing resilient frameworks for responsible innovation.
This event is organized by the Center for Digital Trust (C4DT) at EPFL.
Here’s an interesting take on what happens when security bugs are found in Open Source libraries. Now that more and more Open Source libraries find their way into commercial products from Google, Microsoft, Amazon, and others, fixing security bugs in a timely manner is becoming a bigger challenge. Open Source projects (…)
This article highlights significant flaws within the proposed NO FAKES Act, whose repercussions would extend far beyond U.S. borders. I found it particularly insightful because of the parallels it draws between this bill and existing mechanisms for addressing copyright infringement, outlining how the deficiencies within the latter are likely to be mirrored in the former.
Driven by ethical concerns about using existing artwork to train gen AI models, an artist created his own model that was not trained on any data at all. What was interesting to me is that, in exploring whether gen AI could create original art, he also demonstrated a potential path to better understanding how such (…)
Images in the style of Studio Ghibli, the Starter Pack trend: behind their playful appearance, these images generated by generative artificial intelligence raise very concrete environmental questions. Answers from Babak Falsafi, full professor in EPFL’s School of Computer and Communication Sciences and president and founder of the Swiss Datacenter Efficiency Association (SDEA).
This article is interesting because it highlights the opportunities and challenges of personal data ownership. Although tools such as dWallet claim to empower users, they can encourage the poorest and least educated people to sell their data without understanding the risks, thereby widening the digital divide. True data empowerment means that everyone must have the (…)
This is a very nice attack on privacy protection in mobile browsers: even if you don’t allow any cookies and don’t consent to being tracked, your browsing behaviour is still tracked. The idea of communicating from the mobile browser to your locally installed app is technically very interesting, and seems to be difficult to avoid (…)
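The core of the channel is simple: a natively installed app listens on 127.0.0.1, and JavaScript on any web page can then send a browser identifier to it, sidestepping cookie blocking entirely. Here is a minimal sketch of that idea (the port number and message format are illustrative, not the ones used in the reported attack, and a Python client stands in for the page’s JavaScript):

```python
# Sketch of a localhost side channel between a web page and a local app.
# All names and ports below are hypothetical, for illustration only.
import socket
import threading

PORT = 12387  # hypothetical fixed port the installed app listens on

received = []

def app_listener():
    """The 'installed app' side: accept one localhost connection."""
    with socket.socket() as srv:
        srv.bind(("127.0.0.1", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            received.append(conn.recv(1024).decode())

t = threading.Thread(target=app_listener)
t.start()

# The 'browser' side: a page script sending its session identifier to the
# app, which can then link it to the user's logged-in identity.
with socket.create_connection(("127.0.0.1", PORT)) as browser:
    browser.sendall(b"session-id=abc123")

t.join()
print(received)  # ['session-id=abc123']
```

Because the traffic never leaves the device and looks like ordinary loopback communication, neither cookie settings nor network-level blockers see anything to block, which is why this is so hard to defend against.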
This atlas of algorithmic systems curated by AlgorithmWatch CH is a nonexhaustive yet revealing list of algorithms currently deployed in Switzerland, whether to ‘predict, recommend, affect or take decisions about human beings’ or to ‘generate content used by or on human beings.’ The atlas is really eye-opening for me – so many systems that we (…)
Agentic AI has only recently emerged, yet it is already being used to commit fraud. This trend is not new; historically, fraudsters have exploited new technologies to target unsuspecting users and weak security systems, as seen with the first instances of voice phishing during the rise of telephony in the early 20th century. These challenges have (…)
This policy paper highlights the crucial trade-off between privacy and utility in data sharing and calls for a shift from technology-centric solutions to purpose-driven policies. The paper formulates eight actionable recommendations to guide realistic, privacy-preserving data-sharing practices in Europe.
To promote research and education in cyber-defence, the EPFL and the Cyber-Defence (CYD) Campus have jointly launched the “CYD Fellowships – A Talent Program for Cyber-Defence Research.”
The 12th call for applications is now open, with a rolling call for Master Thesis Fellowship applications and Proof of Concept Fellowship applications, and with a deadline of 20 August 2025 (17:00 CEST) for Doctoral and Distinguished Postdoctoral Fellowship applications.
ENISA’s new vulnerability database is a significant development in the pursuit of European digital sovereignty. It reduces reliance on US-dominated resources and could lead to better alignment with EU regulations, such as the GDPR and the NIS2 Directive. However, key questions remain about coordination with existing global databases, disclosure policies, and the participation of non-EU (…)
Switzerland’s digital identity (eID) system is ready, and a public referendum on the new law is scheduled for 28 September 2025. C4DT has started working on the recently awarded Innosuisse grant to research privacy-preserving technologies for the eID. We’ll present our first findings, as well as the current Swiyu test environment, during our upcoming hands-on (…)
I call it the ‘AI dilemma’: while AI may threaten many jobs, it also serves as an essential tool to mitigate its own impact by boosting re-skilling and upskilling initiatives. I appreciate this article because it demonstrates how agentic AI can be employed in lifelong learning systems to reduce skill gaps, which are in part (…)
Melanie Kolbe-Guyot and Matthias Finger discuss the need for a comprehensive data policy for Switzerland in order to harness the potential of data through clear rules and incentives and to secure long-term competitiveness. The paper formulates six recommendations for policymakers.
Don’t mess with Texas! This settlement, along with the $1.4 billion agreement Meta reached with Texas last year over privacy violations, will hopefully disincentivize companies from engaging in similar practices. While Google doesn’t admit to any wrongdoing with this settlement, it’s still a significant win for data privacy advocates.