Phishing attacks, increasingly targeted and difficult to detect, affect SMEs, institutions and private individuals alike; in response, HEIG-VD has launched the project « Combattre le Phishing – quelles innovations apporter » ("Fighting Phishing – what innovations to bring") in partnership with the Center for Digital Trust (C4DT) at EPFL, UNIL, the Vaud cantonal police, the DGNSI, (…)
Europe’s economy now largely runs on digital payments, making reliance on U.S. card rails a systemic exposure. Sanctions, policy shifts, outages, and data-access demands can all cause economic shocks. This is precisely why Wero is both interesting and timely: it is a bank-led, pan-European layer that can reduce strategic dependency. I’m eager to see transparent (…)
OpenClaw is just the latest in a series of AI-powered tools that turn out to be an absolute security nightmare. It is easy, and to a certain point justified, to blame individual developers for lowering their guard and abandoning good security practices. On the other hand, there is enormous pressure on developers nowadays (…)
This article is interesting because it shifts the Moltbook debate away from sci-fi "bot consciousness" and toward concrete security architecture risks. What could possibly go wrong when thousands of OpenClaw-like agents, with full root access to their owners' machines, are frolicking in a shared, untrusted environment, swapping prompts, payloads, and jailbreak tricks? His advice is (…)
Talk of digital sovereignty in Europe is becoming more and more common. Even if building a counterweight to the big American companies will take many years, alternative solutions are emerging and organizing themselves. Once money starts flowing into these alternatives, open source or not, their usefulness will increase drastically.
Beyond the author’s predictions for the AI bubble’s aftermath, what struck me most was his explanation of ‘accountability sinks’—people who take the blame for AI’s mistakes. Understanding this concept, which emerges from examining AI companies’ business model, leads to a crucial insight: it’s not just white-collar workers whose jobs will be eliminated, but specifically those (…)
Encrypted messaging app Threema, which is used by the Swiss army and cantonal police forces, was acquired by a German private company. Two questions come to mind. First, what kind of implications could a private acquisition such as this have on a country’s sovereignty and critical operations, and how does it deal with them? Second, and (…)
AI empowers journalists by enabling rapid access to and analysis of vast document sets, but it also brings risks: it can be misused to unmask anonymous sources or to fabricate convincing misinformation. Without strong governance, AI may hallucinate, producing false or defamatory claims. In this track, co‑organized by C4DT, we highlight the needs and tools for robust safeguards to ensure that AI strengthens, rather than undermines, journalistic integrity.
I always appreciate this author’s talent for breaking down her research for laypersons like me. This article gives an overview of the current state of evaluating AI models’ cognitive capabilities and of the shortcomings of these evaluation methods. Most importantly, it also offers suggestions for improving them. Definitely a recommendation for enthusiasts, sceptics and everyone in between!
This article caught my attention for several reasons. Firstly, there is Russia’s surprising absence from the AI race. Secondly, there is the strategic positioning of Middle Eastern players, such as the UAE, who are using their immense investment capabilities to manoeuvre between superpowers. Thirdly, this is not just about AI; it’s also about broader digital sovereignty. Unlike (…)
This article is interesting because it links digital trust to systemic AI dangers—not just small tech glitches. It predicts how in 2026 AI might spread lies, spy on people, or disrupt jobs and markets faster than today’s content‑moderation and governance mechanisms can manage. This leaves us to question who should control key AI technology.
How do our institutions and the trust they are built on evolve in a digital world? An interview with Imad Aad, project manager at the Center for Digital Trust (C4DT) at EPFL.
I find this article interesting because it puts a spotlight on how the technical limits and product decisions of LLMs can shape, and sometimes distort, people’s perception of real-world events. It’s striking to see the same news prompt produce authoritative, up-to-date answers from some models and a blunt, incorrect denial from others, simply because of (…)
I found this article particularly interesting because it highlights how major European companies like Airbus are now treating cloud sovereignty as a strategic criterion in their procurement processes. By explicitly requiring ‘a European provider’ in their tenders, they set an important precedent for other enterprises and even governments across Europe. This move reinforces the idea (…)
C4DT Focus #10, entitled “Swiss democracy faces its digital crossroads”, by C4DT, in collaboration with Gregory Wicky. Fake signature collection, fake ID scandals, and a razor-thin vote on the new federal e-ID have presented the country with an uncomfortable question: how do our institutions and the trust they are built on evolve in a digital world? (…)
Although this legal analysis by noyb of the European Commission’s ‘Digital Omnibus’ proposal is clearly aimed at lawyers, it still gives the layperson a good overview of the proposed changes to the GDPR and their practical implications.
AI is often anthropomorphized, with terms like ‘hallucinations’ having entered technical jargon. It is all the more striking that, when the capabilities of today’s popular models are discussed, ‘intelligence’ is reduced to one set of benchmarks or another but seldom considered in a holistic way. I liked this article for putting the human, and not the machine, (…)
Improvements in agentic AI and increasing competency of hackers are driving more sophisticated cyber attacks, from mere AI-augmented intrusions to ever more autonomous operations. While this is deeply concerning, such efforts can only succeed when infrastructure is vulnerable. Investing in the cybersecurity of our infrastructures has never been more pressing.
In this paper, Mélanie Kolbe-Guyot, Head of Data Governance and Compliance at Statistisches Amt Basel-Stadt, and formerly C4DT’s Head of Policy, examines the adoption of artificial intelligence (AI) in Swiss public administration and provides recommendations for its responsible, trustworthy and effective use.
The number of AI-generated images of child sexual abuse is rising rapidly. But thanks to a new UK law, tech companies and child-safety agencies are joining forces and being given legal permission for testing, allowing experts to audit models and proactively screen for CSAM risk rather than wait for illegal content to appear. The law (…)
The ‘hacker paragraph’ in Germany is a law stating that you are not allowed to break into third-party IT systems, not even for research, nor as a white-hat hacker who discloses their findings responsibly. Developing or distributing software for such purposes is also prohibited. For researchers and white-hat hackers alike, this is of (…)
Although written from the point of view of the U.S. healthcare system, quite a few of the issues raised in this essay are universal. I appreciated this in-depth discussion of the impact of AI on the healthcare system because it not only points out the detrimental consequences these technologies have, but also puts these consequences (…)
Bubble or no bubble? In this interview, the president of Signal shares her analysis: dominant AI is not just a neutral technological advance, but the result of an economic model of platforms that concentrate data and computing power among a few giants, creating monopolies, geopolitical and security risks—and requiring strict regulation (e.g., enforcement of the (…)
I really appreciate Jimmy Wales’s insistence that sticking to Wikipedia’s core principles of neutrality, factuality and fostering a diverse community around the globe will eventually prevail in today’s polarized digital landscape. As a nerd myself, I also relate very strongly to his enthusiasm for working on Wikipedia for the sake of doing something interesting over (…)