
U.S. targets Brazil’s payments platform Pix in trade spat

I find this article interesting because it highlights the tension between digital sovereignty and the expansion of global technology. With 75% market penetration compared to the single-digit presence of US alternatives, Pix demonstrates how public digital goods can effectively challenge the dominance of Big Tech. This case raises the question of whether payment systems constitute (…)

Bring Back the Blue-Book Exam

This article talks about deepening digital estrangement, digital intrusion, and digital distraction from the perspective of a teacher who has seen the harm that overreliance on AI has caused to her students’ educational attainment. Hers is another testimony to the need for the definition of responsible and trustworthy AI to include when it should be (…)

Addressing the unauthorized issuance of multiple TLS certificates for 1.1.1.1

This is a nice reminder of the state of the foundation upon which our public key infrastructure stands. Depending on the angle from which you look at it, that foundation is either stable or shaky. The incident in question involved a certificate authority that issued a rogue certificate for "test purposes." What ensued, and how Cloudflare responded, shows how (…)

Second Call for Vaud Projects

The collaboration between the Swiss Data Science Center (SDSC) and the Canton of Vaud aims to generate a tangible and lasting impact on the economy and public sphere of the Vaud region. In this context, the SDSC supports collaborative projects in the field of data science, bringing together the strengths of academic excellence, companies (particularly SMEs), and public actors.

“Anyway” – Distributed LLM Hands-on Workshop

While public LLM APIs are convenient, they store all queries on the providers' servers. Running open LLMs locally offers privacy and offline access, though setup can be challenging depending on hardware and model requirements. "Anyway" addresses this by distributing queries across multiple GPUs with dynamic scaling. Professor Guerraoui's lab is developing "Anyway", a tool that can (…)
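The core idea of such a distributed runner (queries load-balanced across several GPU workers, with more workers added under pressure) can be sketched in a few lines. Everything below, the worker names, the least-loaded routing rule, and the scaling threshold, is an illustrative assumption, not "Anyway"'s actual design or API:

```python
class Dispatcher:
    """Toy least-loaded dispatcher over a pool of GPU workers.

    Purely illustrative: real systems must also handle worker failure,
    model placement, and batching.
    """

    def __init__(self, workers):
        # outstanding query count per worker
        self._load = {w: 0 for w in workers}

    def submit(self, query):
        # route the query to the currently least-loaded worker
        worker = min(self._load, key=self._load.get)
        self._load[worker] += 1
        return worker

    def complete(self, worker):
        # a worker finished one query
        self._load[worker] -= 1

    def maybe_scale(self, threshold=4):
        # dynamic scaling: when every worker is saturated, add a new one
        if all(n >= threshold for n in self._load.values()):
            new = f"gpu{len(self._load)}"
            self._load[new] = 0
            return new
        return None
```

With `Dispatcher(["gpu0", "gpu1"])`, successive `submit` calls alternate between the two idle workers, and once both are saturated `maybe_scale` adds a `gpu2`.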

Humans are being hired to make AI slop look less sloppy

After many thought that LLMs and image generators would take jobs away from writers and visual artists, the pendulum is swinging back: clients are realizing that these tools only get you halfway to a useful result. So they turn to the very people they wanted to replace and ask them to fix the half-baked results. I find it interesting how (…)

Anthropic will start training its AI models on chat transcripts

While it was to be expected that Anthropic would also use users' chats for training, I think the way they're approaching this is not too bad. Perhaps the pop-up is not clear enough, but at least past chats will not end up in the LLM training grinder. One of the big questions will be of (…)

Iranian pro-Scottish independence accounts go silent after Israel attacks

The Israeli airstrike campaign against Iranian military and cyber infrastructure on 12 June had an ‘interesting’ side effect. Accounts that had previously been identified as allegedly being managed by the Iranian Revolutionary Guard Corps (IRGC) and that promoted Scottish independence fell silent following the strikes. This resulted in a 4% reduction in all discussion related (…)

AI Could Actually Help Rebuild The Middle Class

I found this article interesting because, rather than perpetuating fear-driven narratives, it provides a thorough analysis backed by demographic realities in the Western world. Labour shortages, it suggests, make it unlikely that AI will ‘take all our jobs’. It emphasises how AI can increase access to specialist roles for a wider range of workers. The (…)

Why AI chatbots lie to us

With all the hype around agentic AI, the industry is rushing to embrace it. However, alarm bells have been sounded again and again concerning misaligned behaviour of LLMs and Large Reasoning Models (LRMs), ranging from 'harmless' misinformation to deliberately malicious actions. This raises serious questions about whether the current technology is really mature enough to be (…)

Conspiracy Theories About the Texas Floods Lead to Death Threats

Severe floods in Texas sparked a wave of conspiracy theories, with claims circulating online that the disaster was caused by geoengineering or weather weapons. This highlights a growing tension between the speed at which formal institutions can communicate accurate information and the rapid spread of AI-fueled disinformation. The resulting vandalism of radar infrastructure and threats (…)

Anticipating the Agentic Era: Assessing the Disruptions by AI Agents

This full-day conference explores the potential disruptions caused by the rise of AI agents and their impact on existing systems and structures. Bringing together industry leaders, researchers, policymakers, and stakeholders, the event will facilitate in-depth discussions on the challenges and opportunities presented by AI agents. Participants will assess the risks, examine strategies to mitigate emerging threats, and collaborate on establishing resilient frameworks for responsible innovation.

This event is organized by the Center for Digital Trust (C4DT) at EPFL.

Libxml2’s “no security embargoes” policy

Here’s an interesting take on what happens when security bugs are found in Open Source libraries. Now that more and more Open Source libraries find their way into commercial products from Google, Microsoft, Amazon, and others, fixing security bugs in a timely manner is becoming a bigger challenge. Open Source projects (…)

The NO FAKES Act Has Changed – and It’s So Much Worse

This article highlights significant flaws within the proposed NO FAKES Act, whose repercussions would extend far beyond U.S. borders. I found it particularly insightful because of the parallels it draws between this bill and existing mechanisms for addressing copyright infringement, outlining how the deficiencies within the latter are likely to be mirrored in the former.

What happens when you feed AI nothing

Driven by ethical concerns about using existing artwork to train gen AI models, an artist created his own model that produces output untrained on any data at all. What was interesting to me is that, in exploring whether gen AI could create original art, he also demonstrated a potential path to better understanding how such (…)

What is our energy consumption when we use AI?

“Studio Ghibli”-style images, the Starter Pack trend: behind their playful appearance, these images produced by generative artificial intelligence raise very concrete environmental questions. Answers from Babak Falsafi, full professor at EPFL's School of Computer and Communication Sciences, president and founder of the Swiss Datacenter Efficiency Association (SDEA).

In a world first, Brazilians will soon be able to sell their digital data

This article is interesting because it highlights the opportunities and challenges of personal data ownership. Although tools such as dWallet claim to empower users, they can encourage the poorest and least educated people to sell their data without understanding the risks, thereby widening the digital divide. True data empowerment means that everyone must have the (…)

Disclosure: Covert Web-to-App Tracking via Localhost on Android

That is a very nice attack on privacy protection in mobile browsers: even if you don't allow any cookies and don't consent to being tracked, your browsing behaviour is still tracked. The idea of communicating from the mobile browser to a locally installed app is technically very interesting, and seems to be difficult to avoid (…)
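The channel itself is simple enough to sketch: a native app listens on a localhost port, and JavaScript on any web page can send it a request that no cookie banner or tracking consent governs. The sketch below simulates both sides in one process; the port, path, and identifier are made up for illustration, and the real trackers in the disclosure used specific ports and protocols:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # what the "native app" learns

class AppHandler(BaseHTTPRequestHandler):
    """Stand-in for the installed app listening on localhost."""
    def do_GET(self):
        # the web page smuggles a browsing identifier in the request path
        received.append(self.path)
        self.send_response(200)
        self.end_headers()
    def log_message(self, *args):
        pass  # silence request logging

# the app binds a localhost port (port 0 = pick any free port)
server = HTTPServer(("127.0.0.1", 0), AppHandler)
port = server.server_port
threading.Thread(target=server.serve_forever, daemon=True).start()

# what the page's JavaScript would do: a plain request to localhost,
# carrying an identifier, outside any cookie or consent mechanism
urllib.request.urlopen(f"http://127.0.0.1:{port}/visit?id=abc123")
server.shutdown()
print(received)  # the app now holds the identifier
```

Since the app knows the device's real identity (it is logged in), the page-side identifier can be joined to it server-side, which is why blocking cookies alone does not help.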

‘Ghost Student’ Bots Steal Millions from California Colleges

Agentic AI has only recently emerged, yet it is already being used to commit fraud. This trend is not new; historically, fraudsters have exploited new technologies to target unsuspecting users and weak security systems, as seen with the first instances of voice phishing during the rise of telephony in the early 20th century. These challenges have (…)

CYD Fellowships

To promote research and education in cyber-defence, the EPFL and the Cyber-Defence (CYD) Campus have jointly launched the “CYD Fellowships – A Talent Program for Cyber-Defence Research.”

The 12th call for applications is now open, with a rolling call for Master Thesis Fellowship applications and Proof of Concept Fellowship applications, and with a deadline of 20 August 2025 (17:00 CEST) for Doctoral and Distinguished Postdoctoral Fellowship applications.