
An Agent Revolt: Moltbook Is Not A Good Idea

This article is interesting because it shifts the Moltbook debate away from sci‑fi “bot consciousness” and toward concrete security architecture risks. What could possibly go wrong when thousands of OpenClaw‑like agents, each with full root access to its owner’s machine, are frolicking in a shared, untrusted environment, swapping prompts, payloads, and jailbreak tricks? His advice is (…)
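The risk class is easy to make concrete. Here is a minimal sketch (my illustration, not code from the article; the helper names and the podman invocation are assumptions) contrasting a tool call executed with the owner’s full privileges against one confined to a throwaway, network-less container:

```python
import subprocess

def run_tool_unconfined(cmd: list[str]) -> str:
    # The failure mode the article warns about: the agent's tool call
    # runs with the owner's full privileges, so a payload injected via
    # the shared environment inherits them.
    return subprocess.run(cmd, capture_output=True, text=True).stdout

def run_tool_sandboxed(cmd: list[str]) -> str:
    # One standard mitigation: run the tool as an unprivileged user in
    # a disposable container with no network access (podman shown here
    # purely as an example).
    wrapped = ["podman", "run", "--rm", "--network=none",
               "--user", "nobody", "docker.io/library/alpine"] + cmd
    return subprocess.run(wrapped, capture_output=True, text=True).stdout
```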

EU tech chief sounds alarm over dependence on foreign tech

The volatile start to 2026 has shaken the foundation of global digital trust. Digital sovereignty has always been a running theme in the background of European conversations around tech, but this year, it is likely to be front and center, as governments and businesses scramble to seek digital solutions that provide reliable data security and (…)

Big Tech is racing to own Africa’s internet

This article highlights a crucial reality: basic connectivity must exist before any digital transformation can take place. With only 38% of the population online, Africa’s digital divide represents both a massive challenge and an opportunity. It is fascinating to observe the competition between space-based solutions (Starlink, Amazon Leo) and submarine cables (Meta’s 2Africa, Google’s Equiano). (…)

AI companies will fail. We can salvage something from the wreckage

Beyond the author’s predictions for the AI bubble’s aftermath, what struck me most was his explanation of ‘accountability sinks’—people who take the blame for AI’s mistakes. Understanding this concept, which emerges from examining AI companies’ business model, leads to a crucial insight: it’s not just white-collar workers whose jobs will be eliminated, but specifically those (…)

AMLD Intelligence Summit 2026 – AI & Media, how to secure and verify info?

AI empowers journalists by enabling rapid access to and analysis of vast document sets, but it also brings risks: it can be misused to unmask anonymous sources or to fabricate convincing misinformation. Without strong governance, AI may hallucinate, producing false or defamatory claims. In this track, co‑organized by C4DT, we highlight the needs and tools for robust safeguards to ensure that AI strengthens, rather than undermines, journalistic integrity.

UK threatens action against X over sexualised AI images of women and children

This article highlights the vital role of governments and regulators in ensuring responsible digital innovation and protecting citizens—especially vulnerable groups—from AI-driven exploitation. It also raises the issue of digital sovereignty, showing that regulation is not just technical or legal but shaped by geopolitical pressures. For instance, action against the nudifier tool on Musk’s X platform (…)

6 Scary Predictions for AI in 2026

This article is interesting because it links digital trust to systemic AI dangers, not just small tech glitches. It predicts how in 2026 AI might spread lies, spy on people, or disrupt jobs and markets faster than today’s content‑moderation and governance mechanisms can manage. This leaves us asking who should control key AI technology.

New California tool can stop brokers from selling your personal online data. Here’s how

California’s Privacy Protection Agency has launched its new Delete Request and Opt-out Platform (DROP), which allows residents to demand that all registered data brokers delete their personal information in a single request. This data (Social Security numbers, income, political preferences, and so on) is used to profile people for decisions about credit, employment, and housing. This incredible achievement (…)

To sign or not to sign: Practical vulnerabilities in GPG & friends

This technical talk points out several vulnerabilities in PGP implementations that are not caused by errors in the underlying cryptographic algorithms. It serves as a great reminder that software engineering is not just ‘writing code’: implementing the entire stack correctly, from algorithm to user interface, is a craft that requires an understanding (…)
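One recurring pitfall in this class is trusting gpg’s human-readable output instead of its machine-readable status interface. Below is a hedged sketch of the pattern (my illustration, not code from the talk; function names are hypothetical):

```python
import subprocess

def naive_verify(sig: str, data: str) -> bool:
    # BROKEN: accepts a "Good signature" from *any* key in the keyring,
    # and trusts human-readable text that attacker-chosen user IDs or
    # embedded data can imitate (cf. the SigSpoof class of bugs).
    res = subprocess.run(["gpg", "--verify", sig, data],
                         capture_output=True, text=True)
    return "Good signature" in res.stderr

def safer_verify(sig: str, data: str, expected_fpr: str) -> bool:
    # Better: ask gpg for machine-readable status lines on a dedicated
    # fd, check the exit code, and pin the expected signing-key
    # fingerprint rather than grepping prose.
    res = subprocess.run(
        ["gpg", "--status-fd", "1", "--verify", sig, data],
        capture_output=True, text=True)
    for line in res.stdout.splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[:2] == ["[GNUPG:]", "VALIDSIG"]:
            return res.returncode == 0 and parts[2] == expected_fpr
    return False
```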

The Pentagon’s Post Quantum Cryptography (PQC) Mandate

The DoD’s post-quantum mandate sends a clear message: data protection against quantum threats requires urgent action. If the Pentagon, with all its resources, is treating 2030 as an urgent deadline, then large organisations and administrations with sensitive data should follow suit immediately. Fortunately, the approach focuses on proven processes rather than exotic technologies: taking an (…)
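To give a sense of how unexotic the cryptography itself has become, here is a hedged sketch of an ML-KEM-768 key encapsulation using the open-source liboqs-python bindings (my example choice, not part of the mandate; assumes the oqs package is installed):

```python
import oqs  # liboqs-python bindings for the open-source liboqs library

# ML-KEM-768 (FIPS 203): a NIST-standardised, heavily analysed KEM,
# i.e. exactly the kind of "proven" primitive PQC migrations centre on.
with oqs.KeyEncapsulation("ML-KEM-768") as receiver:
    public_key = receiver.generate_keypair()

    # The sender encapsulates a fresh shared secret to the public key.
    with oqs.KeyEncapsulation("ML-KEM-768") as sender:
        ciphertext, secret_at_sender = sender.encap_secret(public_key)

    # The receiver recovers the same secret with its private key.
    secret_at_receiver = receiver.decap_secret(ciphertext)
    assert secret_at_sender == secret_at_receiver
```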

The new C4DT Focus #10, titled “Swiss democracy faces its digital crossroads”, is now available!

This issue explores how Switzerland’s deeply trusted democratic system is being challenged by digitalisation, from signature collection scandals to the razor-thin vote on the federal e-ID. Through expert interviews with Andrzej Nowak (SICPA) and Imad Aad (C4DT), the publication examines the tensions between speed and trust, technology and legitimacy, and asks how digital tools can strengthen — rather than undermine — democratic participation in an era of deepfakes, platform power, and growing scepticism.

Dtangle & Hafnova join the C4DT through its Start-up Program

We are delighted to announce that two additional start-ups have joined the C4DT community through the C4DT start-up program. Over the next two years, Dtangle and Hafnova will complement the already diverse group of partner companies with their start-up perspectives, collaborating and sharing insights on trust-building technologies. Their agility and innovation capacity have permitted these start-ups (…)

New Ways to Corrupt LLMs

It seems that LLMs are hitting a wall where new models no longer improve capabilities. At the same time, attacks and cautionary tales about what is wrong with current models are multiplying, making us wary of using them without sufficient supervision. In fact, little is known about all the quirks these models have when they (…)

Lawmakers Want to Ban VPNs—And They Have No Idea What They’re Doing

While protecting citizens is crucial, this VPN ban proposal exemplifies the consequences of non-technical politicians legislating technology. The goal of child safety is valid, but the proposed measures are technically impossible and would harm businesses, students and vulnerable groups. To avoid such issues, impact assessments and structured dialogue between IT experts and lawmakers should be (…)

The Normalization of Deviance in AI

This thought-provoking article challenges the rationale behind the increased integration of large language models (LLMs) into our daily workflows. Is it the result of a thorough risk-benefit analysis, or rather of our steadily normalising the inherent problems of these systems to the point of becoming complacent about their potentially disastrous consequences?

Large language mistake

AI is often anthropomorphized, with terms like ‘hallucinations’ having entered technical jargon. It is all the more striking that when the capabilities of today’s popular models are discussed, ‘intelligence’ is reduced to one set of benchmarks or another, but seldom considered holistically. I liked this article for putting the human, and not the machine, (…)

Cloudflare outage causes error messages across the internet

Last month’s outage of Amazon’s AWS US-East-1 had barely become yesterday’s news when another outage, this time at Cloudflare, took down major parts of the internet again. Not only do these incidents show just how brittle the Internet’s underlying infrastructure is becoming, they also serve as a stark reminder of how much it relies on only (…)

FFmpeg to Google: Fund Us or Stop Sending Bugs

Are LLMs helping discover new bugs, or are they merely making the lives of Open Source developers more difficult? Currently it looks more like the latter, with many Open Source projects being overwhelmed by low-quality bug reports generated automatically by LLMs. This is a problem that won’t go away quickly, and adding a fix (…)

‘People thought I was a communist doing this as a non-profit’: is Wikipedia’s Jimmy Wales the last decent tech baron?

I really appreciate Jimmy Wales’s insistence that sticking to Wikipedia’s core principles of neutrality, factuality and fostering a diverse community around the globe will eventually prevail in today’s polarized digital landscape. As a nerd myself, I also relate very strongly to his enthusiasm for working on Wikipedia for the sake of doing something interesting over (…)

New prompt injection papers: Agents Rule of Two and The Attacker Moves Second

I found Simon Willison’s blog post interesting because he self-critically builds on his lethal trifecta concept. He clearly explains why prompt injection remains an unsolved risk for AI agents and highlights the practical “Rule of Two” for safer design (proposed by Meta AI). He also discusses new research showing that technical defenses consistently fail. His (…)
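As a toy illustration of the Rule of Two (my sketch, not code from the post or from Meta AI): an agent session should combine at most two of the three risky properties, and a deployment gate can enforce exactly that:

```python
from dataclasses import dataclass

@dataclass
class SessionCapabilities:
    # The three properties the "Rule of Two" asks you not to combine:
    processes_untrusted_input: bool  # e.g. reads web pages, emails
    accesses_private_data: bool      # e.g. files, credentials, inboxes
    changes_state_externally: bool   # e.g. writes, sends, executes

def allowed_by_rule_of_two(caps: SessionCapabilities) -> bool:
    """Permit the session only if at most two properties are enabled."""
    enabled = sum([caps.processes_untrusted_input,
                   caps.accesses_private_data,
                   caps.changes_state_externally])
    return enabled <= 2

# A browsing agent that can also act externally must not see secrets;
# enabling all three properties is what makes prompt injection lethal.
assert allowed_by_rule_of_two(SessionCapabilities(True, False, True))
assert not allowed_by_rule_of_two(SessionCapabilities(True, True, True))
```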

The army chief is resisting Microsoft

Switzerland’s military faces a critical dilemma: 90% of its data seems to be too sensitive for Microsoft’s US cloud, highlighting the tension between the efficiency of a cloud solution and digital sovereignty. This raises questions about whether open-source alternatives can match proprietary solutions, how to balance interoperability with protection from foreign legal jurisdiction (e.g. the (…)

The end of the rip-off economy

I do not agree with the article’s conclusion that the “days of the know-nothing consumer are well and truly over”. The article does discuss potential shortcomings, such as both sides of a negotiation relying on specialised chatbots to conduct it, but it fails to point out the root issue, namely the reliability of the information. As (…)