This article is interesting because it shifts the Moltbook debate away from sci‑fi “bot consciousness” and toward concrete security architecture risks. What could possibly go wrong when thousands of OpenClaw‑like agents, with full root access to their owners’ machines, are frolicking in a shared, untrusted environment, swapping prompts, payloads, and jailbreak tricks? His advice is (…)
The volatile start to 2026 has shaken the foundation of global digital trust. Digital sovereignty has always been a running theme in the background of European conversations around tech, but this year, it is likely to be front and center, as governments and businesses scramble to seek digital solutions that provide reliable data security and (…)
This article highlights a crucial reality: basic connectivity must exist before any digital transformation can take place. With only 38% of the population online, Africa’s digital divide represents both a massive challenge and an opportunity. It is fascinating to observe the competition between space-based solutions (Starlink, Amazon Leo) and submarine cables (Meta’s 2Africa, Google’s Equiano). (…)
Beyond the author’s predictions for the AI bubble’s aftermath, what struck me most was his explanation of ‘accountability sinks’—people who take the blame for AI’s mistakes. Understanding this concept, which emerges from examining AI companies’ business model, leads to a crucial insight: it’s not just white-collar workers whose jobs will be eliminated, but specifically those (…)
AI empowers journalists by enabling rapid access to and analysis of vast document sets, but it also brings risks: it can be misused to unmask anonymous sources or to fabricate convincing misinformation. Without strong governance, AI may hallucinate, producing false or defamatory claims. In this track, co‑organized by C4DT, we highlight the needs and tools for robust safeguards to ensure that AI strengthens, rather than undermines, journalistic integrity.
This article highlights the vital role of governments and regulators in ensuring responsible digital innovation and protecting citizens—especially vulnerable groups—from AI-driven exploitation. It also raises the issue of digital sovereignty, showing that regulation is not just technical or legal but shaped by geopolitical pressures. For instance, action against the nudifier tool on Musk’s X platform (…)
This article is interesting because it links digital trust to systemic AI dangers—not just small tech glitches. It predicts how in 2026 AI might spread lies, spy on people, or disrupt jobs and markets faster than today’s content‑moderation and governance mechanisms can manage. This leaves us to question who should control key AI technology.
California’s Privacy Protection Agency has launched its new Delete Request and Opt-out Platform (DROP), which allows residents to demand all registered data brokers delete their personal information in a single request. This data—Social Security numbers, income, political preferences, and so on—is used to profile people for decisions about credit, employment, and housing. This incredible achievement (…)
This technical talk points out several vulnerabilities in PGP implementations that are not caused by errors in the underlying cryptographic algorithms. It serves as a great reminder that software engineering is not just ‘writing code’: actually implementing the entire stack correctly, from algorithm to user interface, is a craft that requires an understanding (…)
The DoD’s post-quantum mandate sends a clear message: data protection against quantum threats requires urgent action. If the Pentagon, with all its resources, is treating 2030 as an urgent matter, then large organisations and administrations with sensitive data should follow suit immediately. Fortunately, the approach focuses on proven processes rather than exotic technologies: taking an (…)
This issue explores how Switzerland’s deeply trusted democratic system is being challenged by digitalisation, from signature collection scandals to the razor-thin vote on the federal e-ID. Through expert interviews with Andrzej Nowak (SICPA) and Imad Aad (C4DT), the publication examines the tensions between speed and trust, technology and legitimacy, and asks how digital tools can strengthen — rather than undermine — democratic participation in an era of deepfakes, platform power, and growing scepticism.
We are delighted to announce that two additional start-ups have joined the C4DT community through the C4DT start-up program. For two years, Dtangle and Hafnova will complement the already diverse group of partner companies, bringing their start-up perspectives as they collaborate and share insights on trust-building technologies. Their agility and innovation capacity have permitted these start-ups (…)
It seems that LLMs are hitting a wall, with new models no longer improving capabilities. At the same time, attacks and cautionary tales about what is wrong with current models are multiplying, making us wary of using them without sufficient supervision. In fact, little is known about all the quirks these models have when they (…)
While protecting citizens is crucial, this VPN ban proposal exemplifies the consequences of non-technical politicians legislating technology. The goal of child safety is valid, but the proposed measures are technically impossible and would harm businesses, students and vulnerable groups. To avoid such issues, impact assessments and structured dialogue between IT experts and lawmakers should be (…)
This thought-provoking article challenges the rationale behind the increased integration of large language models (LLMs) into our daily workflows. Is it the result of a thorough risk-benefit analysis, or rather of us steadily normalising the inherent problems of these systems to the point of becoming complacent about their potentially disastrous consequences?
How does trust translate to social media use by children? Does it mean a ban on their access? Australia decided that children under the age of 16 are not allowed on social media. Companies are complying and installing age verification mechanisms to avoid fines. I’m looking forward to seeing how this large-scale experiment turns out.
AI is often anthropomorphized, with terms like ‘hallucinations’ having entered technical jargon. It is all the more striking, then, that when the capabilities of today’s popular models are discussed, ‘intelligence’ is reduced to one set of benchmarks or another and seldom considered holistically. I liked this article for putting the human, and not the machine, (…)
Last month’s outage of Amazon’s AWS US-East-1 had barely become yesterday’s news when another outage, this time chez Cloudflare, took down major parts of the internet again. Not only do these incidents show just how brittle the internet’s underlying infrastructure is becoming, they also serve as a stark reminder of how much it relies on only (…)
Are LLMs helping discover new bugs, or are they merely making the life of Open Source developers more difficult? Currently, it looks more like the latter, with many Open Source projects being overwhelmed by low-quality bug reports generated automatically by LLMs. This is a problem that won’t go away quickly, and adding a fix (…)
The number of AI-generated images of child sexual abuse is rapidly increasing. But thanks to a new UK law, tech companies and child safety agencies are joining forces and being given legal testing permission, allowing experts to audit models and proactively screen for CSAM risk rather than waiting for illegal content to appear. The law (…)
I really appreciate Jimmy Wales’s insistence that sticking to Wikipedia’s core principles of neutrality, factuality and fostering a diverse community around the globe will eventually prevail in today’s polarized digital landscape. As a nerd myself, I also relate very strongly to his enthusiasm for working on Wikipedia for the sake of doing something interesting over (…)
I found Simon Willison’s blog post interesting because he self-critically builds on his lethal trifecta concept. He clearly explains why prompt injection remains an unsolved risk for AI agents and highlights the practical “Rule of Two” for safer design (proposed by Meta AI). He also discusses new research showing that technical defenses consistently fail. His (…)
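To make the “Rule of Two” concrete, here is a minimal, hypothetical sketch of how such a check might look in practice: an agent should combine at most two of the three risky capabilities (reading untrusted input, accessing private data, and taking external actions). The AgentConfig class, its field names, and the example are illustrative assumptions, not an API from Willison’s post or Meta AI.

```python
# Hypothetical sketch of a "Rule of Two" configuration check.
# The three capability flags mirror the rule's three risky properties;
# an agent that enables all three at once needs redesign or human review.
from dataclasses import dataclass

@dataclass
class AgentConfig:
    processes_untrusted_input: bool       # e.g. reads web pages, emails, user uploads
    accesses_private_data: bool           # e.g. local files, credentials, internal APIs
    changes_state_or_communicates: bool   # e.g. writes files, runs tools, sends requests

def violates_rule_of_two(cfg: AgentConfig) -> bool:
    """Return True if the agent combines all three risky capabilities."""
    risky = [
        cfg.processes_untrusted_input,
        cfg.accesses_private_data,
        cfg.changes_state_or_communicates,
    ]
    return sum(risky) > 2

# Example: a browsing agent that also reads local secrets and can call
# external tools trips the check; dropping one capability (or gating
# state-changing actions behind human approval) brings it back in line.
if violates_rule_of_two(AgentConfig(True, True, True)):
    print("Rule of Two violated: remove one capability or add human-in-the-loop review")
```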
Switzerland’s military faces a critical dilemma: 90% of its data seems to be too sensitive for Microsoft’s US cloud, highlighting the tension between the efficiency of a cloud solution and digital sovereignty. This raises questions about whether open-source alternatives can match proprietary solutions, how to balance interoperability with protection from foreign legal jurisdiction (e.g. the (…)
I do not agree with the article’s conclusion that the “days of the know-nothing consumer are well and truly over”. The article does discuss potential shortcomings, such as both sides of a negotiation relying on specialised chatbots to conduct it, but fails to point out the root issue, namely the reliability of the information. As (…)