C4DT Focus #10, entitled “Swiss democracy faces its digital crossroads”, is out!

This issue explores how Switzerland’s deeply trusted democratic system is being challenged by digitalisation, from signature collection scandals to the razor-thin vote on the federal e-ID. Through expert interviews with Andrzej Nowak (SICPA) and Imad Aad (C4DT), the publication examines the tensions between speed and trust, technology and legitimacy, and asks how digital tools can strengthen — rather than undermine — democratic participation in an era of deepfakes, platform power, and growing scepticism.

Dtangle & Hafnova join the C4DT through its Start-up Program

We are delighted to announce that two additional start-ups have joined the C4DT community through the C4DT start-up program. For two years, Dtangle and Hafnova will bring their start-up perspectives to the already diverse group of partner companies, collaborating and sharing insights on trust-building technologies. Their agility and innovation capacity have permitted these start-ups (…)

New Ways to Corrupt LLMs

It seems that LLMs are hitting a wall, with new models no longer improving capabilities. At the same time, attacks and cautionary tales about what is wrong with current models keep accumulating and make us wary of using them without sufficient supervision. In fact, little is known about all the quirks these models have when they (…)

Lawmakers Want to Ban VPNs—And They Have No Idea What They’re Doing

While protecting citizens is crucial, this VPN ban proposal exemplifies the consequences of non-technical politicians legislating technology. The goal of child safety is valid, but the proposed measures are technically impossible and would harm businesses, students and vulnerable groups. To avoid such issues, impact assessments and structured dialogue between IT experts and lawmakers should be (…)

The Normalization of Deviance in AI

This thought-provoking article challenges the rationale behind the increasing integration of large language models (LLMs) into our daily workflows. Is it the result of a thorough risk-benefit analysis, or rather of us steadily normalising the inherent problems of these systems, to the point of becoming complacent about their potentially disastrous consequences?

Large language mistake

AI is often anthropomorphized, with terms like ‘hallucinations’ having entered technical jargon. It is all the more striking that when the capabilities of today’s popular models are discussed, ‘intelligence’ is reduced to one set of benchmarks or another, but seldom considered in a holistic way. I liked this article for putting the human, and not the machine, (…)

Cloudflare outage causes error messages across the internet

Last month’s outage of Amazon’s AWS US-East-1 had barely become yesterday’s news when another outage, this time chez Cloudflare, took down major parts of the internet again. Not only do these incidents show just how brittle the internet’s underlying infrastructure is becoming, they also serve as a stark reminder of how much it relies on only (…)

FFmpeg to Google: Fund Us or Stop Sending Bugs

Are LLMs helping discover new bugs, or are they merely making the lives of Open Source developers more difficult? Currently it looks more like the latter, with many Open Source projects being overwhelmed by low-quality bug reports generated automatically by LLMs. This is a problem that won’t go away quickly, and adding a fix (…)

‘People thought I was a communist doing this as a non-profit’: is Wikipedia’s Jimmy Wales the last decent tech baron?

I really appreciate Jimmy Wales’s insistence that sticking to Wikipedia’s core principles of neutrality, factuality and fostering a diverse community around the globe will eventually prevail in today’s polarized digital landscape. As a nerd myself, I also relate very strongly to his enthusiasm for working on Wikipedia for the sake of doing something interesting over (…)

New prompt injection papers: Agents Rule of Two and The Attacker Moves Second

I found Simon Willison’s blog post interesting because he self-critically builds on his lethal trifecta concept. He clearly explains why prompt injection remains an unsolved risk for AI agents and highlights the practical “Rule of Two” for safer design (proposed by Meta AI). He also discusses new research showing that technical defenses consistently fail. His (…)
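
To make the “Rule of Two” concrete, here is a minimal Python sketch of the idea as described in the post: within one session, an agent should combine at most two of three risky properties. All names in the sketch are hypothetical illustrations, not Meta AI’s actual terminology or code.

```python
from dataclasses import dataclass

# Illustrative sketch of the "Rule of Two": within one session an agent should
# combine at most two of these three properties. All names are hypothetical,
# not Meta AI's actual terminology or code.

@dataclass
class AgentSession:
    processes_untrusted_input: bool   # e.g. reads web pages or incoming email
    accesses_private_data: bool       # e.g. can read the user's files or inbox
    has_external_side_effects: bool   # e.g. can send messages or call APIs

def violates_rule_of_two(s: AgentSession) -> bool:
    risky = (s.processes_untrusted_input,
             s.accesses_private_data,
             s.has_external_side_effects)
    return sum(risky) > 2  # all three at once is the dangerous combination

# An email assistant that reads incoming mail (untrusted input), sees the
# whole inbox (private data) and can auto-reply (side effects) trips the rule.
assert violates_rule_of_two(AgentSession(True, True, True))
```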

The army chief is resisting Microsoft

Switzerland’s military faces a critical dilemma: 90% of its data seems to be too sensitive for Microsoft’s US cloud, highlighting the tension between the efficiency of a cloud solution and digital sovereignty. This raises questions about whether open-source alternatives can match proprietary solutions, how to balance interoperability with protection from foreign legal jurisdiction (e.g. the (…)

The end of the rip-off economy

I do not agree with the article’s conclusion that the “days of the know-nothing consumer are well and truly over”. The article does discuss potential shortcomings, such as both sides of a negotiation relying on specialised chatbots to conduct it, but fails to point out the root issue, namely the reliability of the information. As (…)

Google Explores Quantum Chaos on Its Most Powerful Quantum Computer Chip

If real-life applications of quantum computing emerge, they could revolutionize chemistry, physics, computer science and more. Despite the apparent progress achieved by Google here, I am cautious about placing full trust in the advances claimed by commercial companies, as their competitive approach may prioritize hype or market value. Given the extent to which scientific research (…)

Autonomous Infrastructure-as-Code: Leveraging Agentic LLM Verifiers for Robust Infrastructure Monitoring

Modern infrastructure management increasingly relies on infrastructure-as-code (IaC), the paradigm of managing computing infrastructure automatically through machine-readable specifications, with tools such as Ansible. Furthermore, there is increasing interest in leveraging Large Language Models (LLMs) to 1) automatically generate the specification code that provisions the desired infrastructure, and 2) periodically check if the infrastructure (…)
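
To give a rough idea of what such an agentic verification loop might look like, here is a minimal Python sketch; the query_llm() helper and the JSON drift-report format are hypothetical placeholders, not the project’s actual design.

```python
import json
import subprocess

# Hypothetical sketch of an LLM-based infrastructure verifier: compare the
# desired specification against the live state and report drift. query_llm()
# and the JSON report format are placeholders, not the project's design.

def fetch_live_state() -> str:
    # Illustrative: dump the live inventory via Ansible's CLI.
    result = subprocess.run(["ansible-inventory", "--list"],
                            capture_output=True, text=True, check=True)
    return result.stdout

def query_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your preferred LLM client here")

def verify(spec_path: str) -> dict:
    with open(spec_path) as f:
        spec = f.read()
    prompt = ("Compare the desired infrastructure specification with the live "
              "state and reply with JSON {\"drift\": bool, \"details\": str}.\n\n"
              f"SPEC:\n{spec}\n\nLIVE STATE:\n{fetch_live_state()}")
    return json.loads(query_llm(prompt))
```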

Privacy-preserving and distributed processing of public data in hybrid trust networks

One of the increasingly popular paradigms for managing the growing size and complexity of modern ML models is the adoption of collaborative and decentralized approaches. While this has enabled new possibilities in privacy-preserving and scalable frameworks for distributed data analytics and model training over large-scale real-world models, current approaches often assume uniform trust levels among (…)
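
To give a flavour of what non-uniform trust could mean in practice, here is a toy Python sketch of trust-weighted aggregation of model updates; the node names, weights and scheme are invented for illustration and are not the project’s actual protocol.

```python
import numpy as np

# Toy illustration of non-uniform trust in decentralized aggregation: updates
# from less-trusted nodes receive a lower weight. Node names, weights and the
# scheme itself are invented for illustration, not the project's protocol.

updates = {
    "node_a": np.array([0.2, -0.1]),  # update from a fully trusted partner
    "node_b": np.array([0.1, 0.0]),   # update from another trusted partner
    "node_c": np.array([0.9, 0.8]),   # update from a less-trusted source
}
trust = {"node_a": 1.0, "node_b": 1.0, "node_c": 0.2}

total_trust = sum(trust.values())
aggregate = sum(trust[n] * u for n, u in updates.items()) / total_trust
print(aggregate)  # trust-weighted average of the updates
```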

ARFON: Adversarial Robustness of Foundation Models

State-of-the-art architectures in many software applications and critical infrastructures are based on deep learning models. These models have been shown to be quite vulnerable to very small, carefully crafted perturbations, which poses fundamental questions in terms of safety, security and performance guarantees at large. Several defense mechanisms have been developed in recent years (…)
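
For intuition on how such perturbations are crafted, here is a self-contained Python sketch of the classic fast gradient sign method (FGSM) on a toy logistic-regression model; this textbook technique is shown purely for illustration and is not taken from the ARFON project.

```python
import numpy as np

# Textbook FGSM sketch on a toy logistic-regression "model"; weights and data
# are random and purely illustrative, not taken from the ARFON project.

rng = np.random.default_rng(0)
w = rng.normal(size=8)  # toy model weights
b = 0.0
x = rng.normal(size=8)  # a clean input
y = 1.0                 # its true label

def score(v):
    return 1.0 / (1.0 + np.exp(-(w @ v + b)))  # sigmoid of the logit

# For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w.
grad_x = (score(x) - y) * w

eps = 0.1                          # L-infinity perturbation budget
x_adv = x + eps * np.sign(grad_x)  # FGSM: one step along the gradient sign

print("clean score:      ", score(x))
print("adversarial score:", score(x_adv))  # pushed away from the true label
```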

This House Believes AI Will Save Democracy: An Oxford-Style Debate

A Geneva Democracy Week Event.
The impact of AI on all levels of society is undeniable and growing, making this debate more timely than ever. On 10 October, as part of Geneva Democracy Week, we warmly invite you to take part in an Oxford-style debate on the motion: “This House believes that AI will save democracy.”

Autonomous AI hacking and the future of cybersecurity

With our conference on “Assessing the Disruptions by AI Agents” in mind, I found this article compelling because it documents the alarming acceleration of cyberattack capabilities enabled by AI agents. This raises the critical question of whether we are approaching a tipping point at which defence becomes structurally impossible. However, the authors offer cautious optimism, (…)

Chat Control and private messages: separating fact from fiction in European surveillance

Following Germany’s recent refusal to adopt the EU law known as “Chat Control”, and ahead of the vote on it at the European level, this article, accompanied by a short video, offers a clear and accessible overview of the diverging views of legislators and scientific experts.

Governments are spending billions on their own ‘sovereign’ AI technologies – is it a big waste of money?

Many countries are investing in “sovereign” AI to compensate for the linguistic, cultural, and security shortcomings of American and Chinese models. Examples include SEA-LION in Singapore, ILMUchat in Malaysia, and Apertus in Switzerland, which are designed for local uses and characteristics. However, these initiatives face major obstacles: very high costs, massive computing power and talent (…)

Personal data storage is an idea whose time has come

An important part of digital trust is being in charge of your own data. Unfortunately, nowadays this is not at all the case. Some of your data resides in the cloud, e.g. in cloud drives like Google Drive, where it is directly accessible to you. But most of your data is hidden from you, saved on the servers of (…)

Agentic AI at Scale: Redefining Management for a Superhuman Workforce

Companies are beginning to incorporate AI agents into their workstreams, even as they play catch-up (or fall behind, depending on how you look at it) in articulating frameworks to assign accountability for AI-driven decisions and to weigh the trade-off between human oversight and explainability. This article nicely summarizes the findings of a survey of (…)