6 Scary Predictions for AI in 2026

This article is interesting because it links digital trust to systemic AI dangers—not just small tech glitches. It predicts how in 2026 AI might spread lies, spy on people, or disrupt jobs and markets faster than today’s content‑moderation and governance mechanisms can manage. This leaves us to question who should control key AI technology.

The US Invaded Venezuela and Captured Nicolás Maduro. ChatGPT Disagrees

I find this article interesting because it puts a spotlight on how the technical limits and product decisions of LLMs can shape, and sometimes distort, people’s perception of real-world events. It’s striking to see the same news prompt produce authoritative, up-to-date answers from some models and a blunt, incorrect denial from others, simply because of (…)

To sign or not to sign: Practical vulnerabilities in GPG & friends

This technical talk points out several vulnerabilities in PGP implementations that are not caused by errors in the underlying cryptographic algorithms. It serves as a great reminder that software engineering is not just ‘writing code’: actually implementing the entire stack correctly, from algorithm to user interface, is a craft that requires an understanding (…)
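
As one example of the kind of implementation-level pitfall this is about (my illustration, not necessarily one from the talk): the cryptography can correctly report a signature as valid while the application forgets to check who signed. A minimal sketch using the python-gnupg bindings, with a hypothetical message file and key fingerprint:

```python
# Sketch of a signer-identity check; message.asc and EXPECTED_FPR are
# hypothetical, and this is an illustration rather than the talk's example.
import gnupg

EXPECTED_FPR = "0123456789ABCDEF0123456789ABCDEF01234567"  # hypothetical key

gpg = gnupg.GPG()
with open("message.asc", "rb") as f:
    verified = gpg.verify_file(f)

# Buggy check: any valid signature from *any* key in the keyring passes.
if verified.valid:
    print("buggy code would accept this message")

# Better check: the signature must also come from the key we expect.
if verified.valid and verified.fingerprint == EXPECTED_FPR:
    print("signed by the expected key")
```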

C4DT Insight #5: From cautious experimentation to coherent strategy: Harnessing AI’s potential in the Swiss public administration

This white paper examines the adoption of artificial intelligence (AI) in Swiss public administration and provides recommendations for its responsible, trustworthy and effective use. Drawing on original research as well as Swiss and international studies, it outlines motivations and potential use cases and examines applications in sensitive domains such as welfare, taxation, and automated decision-making, assessing risks and safeguards. It then maps current AI practice across federal and cantonal levels, identifies the principal barriers to effective adoption, and proposes measures to overcome them, including strategic prioritization, regulatory and governance reforms, and organizational actions.

At CHUV, a generative artificial intelligence undergoes its first clinical trial

Here is a good example of how to build trust with new tools such as LLMs. At CHUV, Prof. Marie-Ann Hartley of EPFL leads a project to support physicians in the care of emergency-department patients. I like the project's carefully designed approach, which involves physicians at every stage. (…)

French court probes TikTok on algorithms’ risks regarding suicide

This case highlights the conflict between platform profitability and user safety, as TikTok’s algorithm prioritises engagement over the welfare of its teenage users. Notably, the Chinese version of the app shows that technical solutions exist — stricter moderation is feasible when providers choose to implement it. This divergence reveals that while China enforces protective measures, (…)

AI bubble: “70% of the cloud is controlled by three American companies,” a conversation with Meredith Whittaker, president of Signal

Bubble or no bubble? In this interview, the president of Signal shares her analysis: dominant AI is not just a neutral technological advance, but the result of an economic model of platforms that concentrate data and computing power among a few giants, creating monopolies, geopolitical and security risks—and requiring strict regulation (e.g., enforcement of the (…)

The army chief is resisting Microsoft

Switzerland’s military faces a critical dilemma: 90% of its data seems to be too sensitive for Microsoft’s US cloud, highlighting the tension between the efficiency of a cloud solution and digital sovereignty. This raises questions about whether open-source alternatives can match proprietary solutions, how to balance interoperability with protection from foreign legal jurisdiction (e.g. the (…)

Autonomous Infrastructure-as-Code: Leveraging Agentic LLM Verifiers for Robust Infrastructure Monitoring

Modern infrastructure management increasingly relies on infrastructure-as-code (IaC), the paradigm of automatically managing computing infrastructure using programming languages such as Ansible. Furthermore, there is increasing interest in leveraging Large Language Models (LLMs) to 1) automatically generate the specification code that provisions the desired infrastructure, and 2) periodically check if the infrastructure (…)
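
As a rough illustration of the second idea, here is a minimal sketch (not the project's implementation) of a periodic drift check, where `query_llm` and `gather_facts` are hypothetical placeholders for an LLM client and an Ansible fact-gathering step:

```python
# Minimal sketch of LLM-assisted drift checking; query_llm and
# gather_facts are hypothetical placeholders, not the project's code.
import json

def query_llm(prompt: str) -> str:
    """Placeholder: swap in a call to any LLM API here."""
    return "PASS"  # canned verdict so the sketch runs standalone

def gather_facts() -> dict:
    """Placeholder for live state collection (e.g. `ansible -m setup`)."""
    return {"nginx": {"state": "started"}}

def infrastructure_in_sync(desired_spec: str) -> bool:
    """Ask the LLM verifier whether observed facts satisfy the spec."""
    prompt = (
        "Desired state (Ansible):\n" + desired_spec +
        "\n\nObserved facts:\n" + json.dumps(gather_facts(), indent=2) +
        "\n\nAnswer PASS if the facts satisfy the desired state, else FAIL."
    )
    return query_llm(prompt).strip().upper().startswith("PASS")

desired = "- name: ensure nginx runs\n  service: {name: nginx, state: started}"
# A real deployment would run this periodically (cron, systemd timer, ...).
print("in sync" if infrastructure_in_sync(desired) else "drift detected")
```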

Privacy-preserving and distributed processing of public data in hybrid trust networks

One of the increasingly popular paradigms for managing the growing size and complexity of modern ML models is the adoption of collaborative and decentralized approaches. While this has enabled new possibilities in privacy-preserving and scalable frameworks for distributed data analytics and model training over large-scale real-world models, current approaches often assume uniform trust levels among (…)
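
As a minimal illustration of non-uniform trust (my sketch, not the project's design), consider aggregating model updates weighted by a per-party trust level instead of treating every participant as equally trustworthy; the trust values below are hypothetical:

```python
# Toy trust-weighted aggregation: parties below a trust floor are dropped,
# the rest contribute proportionally to their (hypothetical) trust level.
import numpy as np

def trust_weighted_average(updates: list[np.ndarray],
                           trust: list[float],
                           min_trust: float = 0.2) -> np.ndarray:
    """Aggregate updates, weighting each party by its trust level."""
    kept = [(u, t) for u, t in zip(updates, trust) if t >= min_trust]
    weights = np.array([t for _, t in kept])
    stacked = np.stack([u for u, _ in kept])
    return (weights[:, None] * stacked).sum(axis=0) / weights.sum()

# Three parties: one fully trusted, one partially, one excluded outright.
updates = [np.array([1.0, 1.0]), np.array([0.9, 1.1]), np.array([9.0, -9.0])]
print(trust_weighted_average(updates, trust=[1.0, 0.8, 0.1]))
```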

ANORA: Anomalous regime detection in dynamic networks

This project proposes to design metrics, methods, and scalable algorithms for detecting anomalies in dynamic networks. The temporal evolution of the structure of dynamic networks carries critical information about the development of complex systems in various applications, from biology to social networks. Deviations from regular network structure evolution may also provide critical information about anomalies (…)
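
To make the idea concrete, here is a toy sketch (not ANORA's actual method): score each snapshot of a dynamic network by how much its edge set changes, then flag scores that deviate strongly from the typical regime.

```python
# Toy anomaly flagging over a sequence of network snapshots (edge sets).
from statistics import mean, stdev

def jaccard_distance(a: set, b: set) -> float:
    """Structural change between two consecutive edge sets."""
    union = a | b
    return 1 - len(a & b) / len(union) if union else 0.0

def flag_anomalies(snapshots: list[set], k: float = 3.0) -> list[int]:
    """Return snapshot indices whose change score exceeds mean + k*std."""
    scores = [jaccard_distance(snapshots[i], snapshots[i + 1])
              for i in range(len(snapshots) - 1)]
    mu, sigma = mean(scores), stdev(scores)
    return [i + 1 for i, s in enumerate(scores) if s > mu + k * sigma]

# Toy usage: a stable network whose structure suddenly rewires at t=4.
stable = {(0, 1), (1, 2), (2, 3)}
snaps = [stable, stable, stable | {(3, 4)}, stable, {(0, 3), (1, 4), (2, 4)}]
print(flag_anomalies(snaps, k=1.0))  # -> [4]
```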

Microsoft ‘illegally’ tracked students via 365 Education, says data watchdog

noyb’s latest victory may sound like a technicality – who is responsible for complying with the GDPR – but it is actually very important, because if no one knows who is responsible, no one really is responsible. All the more important, then, that the ruling clearly holds Microsoft U.S. as the company actually selling the product (…)

Three ways formally verified code can go wrong in practice

For security reasons, people want code to be ‘formally verified’, for example for libraries doing cryptographic operations. But what does this actually mean? And is ‘formally verified’ the panacea for secure and correct code in all situations? Of course not. Hillel gives some simple examples where even the definition of ‘correct’ is not easy (…)
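
A classic illustration of the specification problem (in the spirit of the talk, not taken from it): ‘verified’ only ever means ‘satisfies the written spec’, and a spec can be too weak. If the spec for a sort merely says "the output is sorted", an obviously wrong implementation passes it:

```python
# The spec below forgot to require that the output is a permutation of
# the input, so a data-destroying "sort" satisfies it perfectly.
def is_sorted(xs: list) -> bool:
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

def bogus_sort(xs: list) -> list:
    return []  # vacuously sorted, but loses all the data

assert is_sorted(bogus_sort([3, 1, 2]))            # spec satisfied!
assert sorted([3, 1, 2]) != bogus_sort([3, 1, 2])  # yet clearly not a sort
```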

Agentic AI at Scale: Redefining Management for a Superhuman Workforce

Companies are beginning to incorporate AI agents into their workstreams, even as they play catch-up (or fall behind, depending on how you look at it) in articulating frameworks to assign accountability for AI-driven decisions and to weigh the trade-off between human oversight and explainability. This article nicely summarizes the findings of a survey of (…)

Microsoft cuts off some services used by Israeli military unit

Without defending any party or attacking the other, I find this article interesting because it somehow presents a new situation whose implications we should carefully consider: First, can Microsoft’s logic be extended from the use of cloud storage and AI to the use of operating systems? What about communications services or even hardware? Can this (…)

Privacy-friendly period apps: cycle tracking without tracking

Cycle-tracking apps are a practical aid for observing and better understanding one's own cycle, and can even support family planning. The data shared in the process is highly sensitive but also highly coveted, as it signals that consumption habits may be about to change when a pregnancy is on the horizon. This is why there is a risk (…)

New C4DT Focus #9, entitled “Smart surveillance on the rise: A legal and ethical crossroads”, is out!

This issue explores the rapid spread of AI-powered video surveillance, from supermarkets to large public gatherings, and examines the legal, ethical, and societal challenges it raises. With insights from Sébastien Marcel (Idiap Research Institute) and Johan Rochel (EPFL CDH), it looks at Switzerland’s sectoral approach versus the EU’s new AI Act, and what this means (…)

C4DT Focus #9: Smart surveillance on the rise: A legal and ethical crossroads

From supermarket checkouts to Olympic stadiums, smart surveillance technologies are spreading rapidly, raising new questions about privacy, trust, and oversight. How should societies balance the benefits of AI-powered cameras with the risks of bias, misuse, and erosion of democratic freedoms? And how will the upcoming European AI Act reshape the governance of biometric surveillance, both in the EU and in Switzerland? This edition of C4DT Focus examines these pressing issues by offering a legal and ethical perspective on intelligent video surveillance, with insights from Sébastien Marcel (Idiap Research Institute) and Johan Rochel (EPFL CDH).

The New Digital Sovereignty: Why Payments and Identity Now Shape National Policy

I like this article because it offers a thought-provoking perspective. It discusses how digital payment and identity systems are closely linked and can help countries become more independent, inclusive, and resilient—in other words, more sovereign. By linking digital trust to infrastructure ownership and policy-making, the article encourages reflection on whether societies truly control their digital (…)