Skip to content

AMLD Intelligence Summit 2026 – AI & Media, how to secure and verify info?

AI empowers journalists by enabling rapid access to and analysis of vast document sets, but it also brings risks: it can be misused to unmask anonymous sources or to fabricate convincing misinformation. Without strong governance, AI may hallucinate, producing false or defamatory claims. In this track, co‑organized by C4DT, we highlight the needs and tools for robust safeguards to ensure that AI strengthens, rather than undermines, journalistic integrity.

On Evaluating Cognitive Capabilities in Machines (and Other “Alien” Intelligences)

I always appreciate this author’s talent for breaking down her research for laypersons like me. This article gives an overview of the current state of evaluating the cognitive capabilities and the shortcomings of these methods. Most importantly, it also provides suggestions for improving them. Definitely a recommendation for enthusiasts, sceptics and everyone in-between!

The State of OpenSSL for pyca/cryptography

Another shaky foundation of the internet: OpenSSL is an Open Source cryptographic library. It is used by most programs on the internet for cryptographic operations, including, for example, setting up a secure internet connection and encrypting emails. Like many other Open Source projects, the people behind the project are not paid, so there is a (…)

UK threatens action against X over sexualised AI images of women and children

This article highlights the vital role of governments and regulators in ensuring responsible digital innovation and protecting citizens—especially vulnerable groups—from AI-driven exploitation. It also raises the issue of digital sovereignty, showing that regulation is not just technical or legal but shaped by geopolitical pressures. For instance, action against the nudifier tool on Musk’s X platform (…)

The AI race is creating a new world order

This article caught my attention for several reasons. Firstly, Russia’s surprising absence from the AI race. Secondly, there is the strategic positioning of Middle Eastern players, such as the UAE, who are using their immense investment capabilities to manoeuvre between superpowers. Thirdly, this is not just about AI; it’s also about broader digital sovereignty. Unlike (…)

OpenAI Is Asking Contractors to Upload Work From Past Jobs to Evaluate the Performance of AI Agents

OpenAI is asking third-party contractors to upload real work assignments from their jobs to create human performance baselines for evaluating its AI models. While this approach holds substantial technical promise, it also comes with major legal, ethical, and governance risks, including the risk that contractors may violate their previous employers’ nondisclosure agreements or expose trade (…)

New California tool can stop brokers from selling your personal online data. Here’s how

California’s Privacy Protection Agency has launched its new  Delete Request and Opt-out Platform (DROP), which allows residents to demand all registered data brokers delete their personal information in a single request. This data— Social Security numbers, income, political preferences, and so on—is used to profile people for decisions about credit, employment, and housing. This incredible achievement (…)

Airbus aims to migrate workloads to European cloud

I found this article particularly interesting because it highlights how major European companies like Airbus are now treating cloud sovereignty as a strategic criterion in their procurement processes. By explicitly requiring ‘a European provider’ in their tenders, they set an important precedent for other enterprises and even governments across Europe. This move reinforces the idea (…)

The Pentagon’s Post Quantum Cryptography (PQC) Mandate

The DoD’s post-quantum mandate sends a clear message: data protection against quantum threats requires urgent action. If the Pentagon, with all its resources, is treating 2030 as an urgent matter, then large organisations and administrations with sensitive data should follow suit immediately. Fortunately, the approach focuses on proven processes rather than exotic technologies: taking an (…)

Why AI Keeps Falling for Prompt Injection Attacks

Large language models often seem very human-like, but at the same time can behave in truly baffling ways. Trying to explain this seemingly erratic behaviour can very easily lead one to get lost in technical details. All the more reason why I appreciate articles like this one that provide an accessible explanation, in this case (…)

The new C4DT Focus #10, titled “Swiss democracy faces its digital crossroads”, is now available!

This issue explores how Switzerland’s deeply trusted democratic system is being challenged by digitalisation, from signature collection scandals to the razor-thin vote on the federal e-ID. Through expert interviews with Andrzej Nowak (SICPA) and Imad Aad (C4DT), the publication examines the tensions between speed and trust, technology and legitimacy, and asks how digital tools can strengthen — rather than undermine — democratic participation in an era of deepfakes, platform power, and growing scepticism.

C4DT FOCUS 10 Swiss democracy faces its digital crossroads

C4DT Focus #10, entitled “Swiss democracy faces its digital crossroads”, by C4DT, in collaboration with Gregory Wicky. Fake signature collection, fake ID scandals, and a razor-thin vote on the new federal e-ID have presented the country with an uncomfortable question: how do our institutions and the trust they are built on evolve in a digital world? (…)

Dtangle & Hafnova join the C4DT through its Start-up Program

We are delighted to announce that 2 additional start-ups have joined the C4DT community through the C4DT start-up program. For two years Dtangle and Hafnova will complement the already diverse group of partner companies through their start-up perspectives to collaborate and share insights on trust-building technologies. Their agility and innovation capacity have permitted these start-ups (…)

New Ways to Corrupt LLMs

It seems that LLMs are hitting a wall where new models don’t improve capabilities anymore. At the same time, attacks and cautious tales of what is wrong with current models increase and make us weary of using them without enough supervision. In fact, little is known about all the quirks these models have when they (…)

Lawmakers Want to Ban VPNs—And They Have No Idea What They’re Doing

While protecting citizens is crucial, this VPN ban proposal exemplifies the consequences of non-technical politicians legislating technology. The goal of child safety is valid, but the proposed measures are technically impossible and would harm businesses, students and vulnerable groups. To avoid such issues, impact assessments and structured dialogue between IT experts and lawmakers should be (…)

The Normalization of Deviance in AI

This thought-provoking article challenges the rationale behind the increased integration of large language models (LLMs) into our daily workflows. Is it the result of a thorough risk-benefit analysis, or rather because of us steadily normalising the inherent problems of these systems to the point of becoming complacent to their potentially disastrous consequences?

Digital Omnibus – First Legal Analysis

Although the target audience of this legal analysis from noyb of the European Commission’s ‘Digital Omnibus’ proposal is clearly lawyers, it still gives the layperson a good overview of the proposed changes to the GDPR and their practical implications.

How IT Managers Fail Software Projects

Leading IT projects is a very difficult task, and the difficulty only increases with the size of the project. Sometimes it feels like most of the failures involve government projects, but this article also presents business cases that went awry. There is a lot of evidence on how to do better; unfortunately this requires competent (…)

Large language mistake

AI is often anthropomorphized, with terms like ‘hallucinations’ having entered technical jargon. All the more striking that when the capabilities of popular models today are discussed, ‘intelligence’ is reduced to one set of benchmarks or another, but seldom considered in a holistic way. I liked this article for putting the human, and not the machine, (…)

Disrupting the first reported AI-orchestrated cyber espionage campaign

Improvements in agentic AI and increasing competency of hackers are driving more sophisticated cyber attacks, from mere AI-augmented intrusions to ever more autonomous operations. While this is deeply concerning, such efforts can only succeed when infrastructure is vulnerable. Investing in the cybersecurity of our infrastructures has never been more pressing.

Mozilla Says It’s Finally Done With Two-Faced Onerep

Software developers and users share vulnerability information through standardized formats and processes (e.g., CVEs) to alert affected parties. Users can check their Software Bills of Materials to identify and fix vulnerabilities. I wonder whether the same will eventually happen with governance vulnerabilities such as this one. How can affected parties be notified of such trust (…)