This article is interesting because it links digital trust to systemic AI dangers rather than just small tech glitches. It predicts how, in 2026, AI might spread lies, spy on people, or disrupt jobs and markets faster than today’s content-moderation and governance mechanisms can manage, leaving us to ask who should control key AI technology.
I find this article interesting because it puts a spotlight on how the technical limits and product decisions of LLMs can shape, and sometimes distort, people’s perception of real-world events. It’s striking to see the same news prompt produce authoritative, up-to-date answers from some models and a blunt, incorrect denial from others, simply because of (…)
This technical talk points out several vulnerabilities in PGP implementations that are not caused by errors in the underlying cryptographic algorithms. It serves as a great reminder that software engineering is not just ‘writing code’: implementing the entire stack correctly, from algorithm to user interface, is a craft that requires an understanding (…)
How does trust translate to social media use by children? Does it mean a ban on their access? Australia decided that children under the age of 16 are not allowed on social media. Companies are complying and installing age verification mechanisms to avoid fines. I’m looking forward to seeing how this large-scale experiment turns out.
📣 New Publication Alert! 📣
📽️ The recordings of the November 19th Conference on “Anticipating the Agentic Era: Assessing the Disruptions by AI Agents” are now accessible here
This white paper examines the adoption of artificial intelligence (AI) in Swiss public administration and provides recommendations for its responsible, trustworthy and effective use. Drawing on original research as well as Swiss and international studies, it outlines motivations and potential use cases and examines applications in sensitive domains such as welfare, taxation, and automated decision-making, assessing risks and safeguards. It then maps current AI practice across federal and cantonal levels, identifies the principal barriers to effective adoption, and proposes measures to overcome them, including strategic prioritization, regulatory and governance reforms, and organizational actions.
Here is a good example of how to build trust with new tools such as LLMs. At the CHUV, Prof. Marie-Ann Hartley of EPFL leads a project to support physicians in caring for patients in the emergency department. I like this project’s carefully designed approach, which involves physicians at every stage. (…)
This case highlights the conflict between platform profitability and user safety, as TikTok’s algorithm prioritises engagement over the welfare of its teenage users. Notably, the Chinese version of the app shows that technical solutions exist — stricter moderation is feasible when providers choose to implement it. This divergence reveals that while China enforces protective measures, (…)
Bubble or no bubble? In this interview, the president of Signal shares her analysis: dominant AI is not just a neutral technological advance, but the result of an economic model of platforms that concentrate data and computing power among a few giants, creating monopolies, geopolitical and security risks—and requiring strict regulation (e.g., enforcement of the (…)
Switzerland’s military faces a critical dilemma: 90% of its data seems to be too sensitive for Microsoft’s US cloud, highlighting the tension between the efficiency of a cloud solution and digital sovereignty. This raises questions about whether open-source alternatives can match proprietary solutions, how to balance interoperability with protection from foreign legal jurisdiction (e.g. the (…)
Modern infrastructure management increasingly relies on infrastructure-as-code (IaC), the paradigm of automatically managing computing infrastructure with languages and tools such as Ansible. Furthermore, there is growing interest in leveraging Large Language Models (LLMs) to 1) automatically generate the specification code that provisions the desired infrastructure, and 2) periodically check if the infrastructure (…)
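To make the drift-checking idea concrete, here is a minimal Python sketch (my own illustration, not code from the work described): it compares the state an IaC specification declares against the state a monitoring probe actually reports, and flags every mismatch. All hostnames, keys, and values are made up.

```python
# Illustrative sketch of an IaC "drift check": compare the desired state
# declared in a specification against the state actually observed.
# All hosts, packages, and ports below are hypothetical.

DESIRED = {  # what the (Ansible-style) spec declares
    "web-01": {"nginx": "1.24", "open_ports": {80, 443}},
    "db-01": {"postgres": "15", "open_ports": {5432}},
}

OBSERVED = {  # what a monitoring probe reports
    "web-01": {"nginx": "1.24", "open_ports": {80, 443, 22}},
    "db-01": {"postgres": "14", "open_ports": {5432}},
}

def detect_drift(desired, observed):
    """Yield (host, key, expected, actual) for every mismatch."""
    for host, spec in desired.items():
        actual = observed.get(host, {})
        for key, expected in spec.items():
            if actual.get(key) != expected:
                yield host, key, expected, actual.get(key)

for host, key, expected, actual in detect_drift(DESIRED, OBSERVED):
    print(f"{host}: {key} expected {expected!r}, found {actual!r}")
```

In the LLM-assisted setting sketched above, a model would generate the desired-state specification from a natural-language request, and the periodic check would re-run a comparison of this kind.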
One of the increasingly popular paradigms for managing the growing size and complexity of modern ML models is the adoption of collaborative and decentralized approaches. While this has enabled new possibilities in privacy-preserving and scalable frameworks for distributed data analytics and model training over large-scale real-world models, current approaches often assume uniform trust levels among (…)
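As a toy illustration of what non-uniform trust could mean in this setting (my own sketch, not the project’s actual scheme), one could weight each peer’s model update by a trust score during aggregation instead of averaging uniformly:

```python
import numpy as np

# Toy sketch: trust-weighted aggregation of model updates, instead of
# the uniform averaging that assumes all peers are equally trustworthy.
# Peers, trust scores, and updates are all hypothetical.

rng = np.random.default_rng(0)
updates = {  # model updates received from three peers
    "peer_a": rng.normal(0.0, 0.1, size=4),
    "peer_b": rng.normal(0.0, 0.1, size=4),
    "peer_c": rng.normal(3.0, 0.1, size=4),  # outlier, low-trust peer
}
trust = {"peer_a": 1.0, "peer_b": 1.0, "peer_c": 0.1}

def trust_weighted_average(updates, trust):
    """Aggregate updates, down-weighting peers we trust less."""
    total = sum(trust[p] for p in updates)
    return sum(trust[p] * u for p, u in updates.items()) / total

print(trust_weighted_average(updates, trust))
```

With uniform weights the outlier peer would pull the aggregate far away from the honest peers; down-weighting it keeps the result close to them.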
This project proposes to design metrics, methods, and scalable algorithms for detecting anomalies in dynamic networks. The temporal evolution of the structure of dynamic networks carries critical information about the development of complex systems in applications ranging from biology to social networks. Deviations from this regular structural evolution may in turn reveal anomalies (…)
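A minimal sketch of the underlying intuition, with made-up snapshots and a deliberately crude threshold rule: measure how much the edge set changes between consecutive snapshots of the network, and flag changes that deviate strongly from the typical evolution.

```python
from statistics import mean, stdev

# Illustrative sketch: flag snapshots of a dynamic network whose
# structural change deviates strongly from the usual evolution.
# Snapshots are edge sets; the data and threshold rule are made up.

snapshots = [
    {(1, 2), (2, 3), (3, 4)},
    {(1, 2), (2, 3), (3, 4), (4, 5)},
    {(1, 2), (2, 3), (4, 5)},
    {(5, 6), (6, 7), (7, 8), (8, 9)},  # abrupt rewiring: candidate anomaly
    {(5, 6), (6, 7), (7, 8)},
]

def jaccard_distance(a, b):
    """1 - |intersection| / |union|: how different two edge sets are."""
    return 1.0 - len(a & b) / len(a | b)

changes = [jaccard_distance(a, b) for a, b in zip(snapshots, snapshots[1:])]
mu, sigma = mean(changes), stdev(changes)
for t, c in enumerate(changes, start=1):
    if c > mu + sigma:  # crude z-score-style rule
        print(f"snapshot {t}: change {c:.2f} deviates from the usual evolution")
```

Real detectors would of course use richer structural metrics and scalable streaming algorithms, which is exactly what the project proposes to design.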
For the government to retain the public’s trust, it is important to be as transparent as possible. I therefore very much welcome that four of the five 2023 contracts with the officially recognized cloud providers have been disclosed. I find it interesting that the disclosure principle is clearly mentioned in the contracts. Why have (…)
noyb’s latest victory may sound like a technicality – who is responsible for complying with the GDPR – but it is actually very important, because if no one knows who is responsible, no one really is responsible. It is all the more important that the ruling clearly holds Microsoft U.S. as the company actually selling the product (…)
For security reasons, people want code to be ‘formally verified’, for example for libraries doing cryptographic operations. But what does this actually mean? And is ‘formally verified’ the panacea for secure and correct code in all situations? Of course not. Hillel gives some very simple examples where even the definition of ‘correct’ is not easy (…)
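A toy example of the specification problem (my own, not necessarily one from the talk): a sort function can fully satisfy a naive spec, ‘the output is in ascending order’, while being useless.

```python
from collections import Counter

# Toy example: a 'sort' that satisfies a naive spec while being useless.

def bogus_sort(xs):
    return []  # the empty list is trivially in ascending order

def is_ascending(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

assert is_ascending(bogus_sort([3, 1, 2]))  # naive spec: satisfied!

# A stronger spec must also require that the output is a permutation
# of the input; formal verification only proves what the spec states.
def is_permutation(xs, ys):
    return Counter(xs) == Counter(ys)

assert not is_permutation([3, 1, 2], bogus_sort([3, 1, 2]))  # caught
```

The point being: ‘formally verified’ is only as strong as the specification that was verified.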
Companies are beginning to incorporate AI agents into their workstreams, even as they play catch-up (or fall behind, depending on how you look at it) in articulating frameworks to assign accountability for AI-driven decisions and in weighing the trade-off between human oversight and explainability. This article nicely summarizes the findings of a survey of (…)
Without defending any party or attacking the other, I find this article interesting because it presents a genuinely new situation whose implications we should carefully consider: First, can Microsoft’s logic be extended from the use of cloud storage and AI to the use of operating systems? What about communications services or even hardware? Can this (…)
Cycle-tracking apps are a practical aid for observing and better understanding one’s own cycle, and can even support family planning. The data shared in the process is highly sensitive, but at the same time highly coveted, because it hints that consumption habits could change once a pregnancy is on the horizon. This is why there is a risk (…)
This issue explores the rapid spread of AI-powered video surveillance, from supermarkets to large public gatherings, and examines the legal, ethical, and societal challenges it raises. With insights from Sébastien Marcel (Idiap Research Institute) and Johan Rochel (EPFL CDH), it looks at Switzerland’s sectoral approach versus the EU’s new AI Act, and what this means (…)
From supermarket checkouts to Olympic stadiums, smart surveillance technologies are spreading rapidly, raising new questions about privacy, trust, and oversight. How should societies balance the benefits of AI-powered cameras with the risks of bias, misuse, and erosion of democratic freedoms? And how will the upcoming European AI Act reshape the governance of biometric surveillance, both in the EU and in Switzerland? This edition of C4DT Focus examines these pressing issues by offering a legal and ethical perspective on intelligent video surveillance, with insights from Sébastien Marcel (Idiap Research Institute) and Johan Rochel (EPFL CDH).
I like this article because it offers a thought-provoking perspective. It discusses how digital payment and identity systems are closely linked and can help countries become more independent, inclusive, and resilient—in other words, more sovereign. By linking digital trust to infrastructure ownership and policy-making, the article encourages reflection on whether societies truly control their digital (…)