Last month’s outage of Amazon’s AWS US-East-1 had barely become yesterday’s news when another outage, this time chez Cloudflare, took down major parts of the internet again. Not only do these incidents show just how brittle the internet’s underlying infrastructure is becoming, they also serve as a stark reminder of how much it relies on only (…)
Are LLMs helping discover new bugs, or are they merely making the lives of open-source developers more difficult? Currently it looks more like the latter, with many open-source projects being overwhelmed by low-quality bug reports generated automatically by LLMs. This is a problem that won’t go away quickly, and adding a fix (…)
The number of AI-generated child sexual abuse images is rising rapidly. But thanks to a new UK law, tech companies and child safety agencies are joining forces and being given legal testing permission, allowing experts to audit models and proactively screen for CSAM risks rather than wait for illegal content to appear. The law (…)
I really appreciate Jimmy Wales’s insistence that sticking to Wikipedia’s core principles of neutrality, factuality and fostering a diverse community around the globe will eventually prevail in today’s polarized digital landscape. As a nerd myself I also relate very strongly to his enthusiasm for working on Wikipedia for the sake of doing something interesting over (…)
I found Simon Willison’s blog post interesting because he self-critically builds on his lethal trifecta concept. He clearly explains why prompt injection remains an unsolved risk for AI agents and highlights the practical “Rule of Two” for safer design (proposed by Meta AI). He also discusses new research showing that technical defenses consistently fail. His (…)
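To make the “Rule of Two” heuristic a bit more concrete, here is a toy sketch in Python. This is my own paraphrase of the heuristic as Willison describes it, not Meta AI’s code: an agent session should combine at most two of three risky capabilities, and a configuration with all three enabled is flagged.

```python
# Toy sketch of the "Rule of Two" heuristic for safer agent design
# (my paraphrase of the idea credited to Meta AI, not their code).
# An agent should combine at most two of these three capabilities.

def violates_rule_of_two(reads_untrusted_input: bool,
                         accesses_private_data: bool,
                         communicates_externally: bool) -> bool:
    """True when all three risky capabilities are enabled at once."""
    return (reads_untrusted_input
            and accesses_private_data
            and communicates_externally)

# A web-browsing agent that can read your inbox AND send outbound
# requests combines all three capabilities -> flagged.
print(violates_rule_of_two(True, True, True))   # True
# Dropping any one capability brings the design back within the rule.
print(violates_rule_of_two(True, True, False))  # False
```

The point of the heuristic is that prompt injection only becomes lethal when untrusted input, private data, and an outbound channel coexist; removing any one of the three limits the blast radius.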
Switzerland’s military faces a critical dilemma: 90% of its data seems to be too sensitive for Microsoft’s US cloud, highlighting the tension between the efficiency of a cloud solution and digital sovereignty. This raises questions about whether open-source alternatives can match proprietary solutions, how to balance interoperability with protection from foreign legal jurisdiction (e.g. the (…)
I do not agree with the article’s conclusion that the “days of the know-nothing consumer are well and truly over”. The article does discuss potential shortfalls, such as both sides of a negotiation relying on specialised chatbots to conduct it, but fails to point out the root issue, namely the reliability of the information. As (…)
If real-life applications of quantum computing emerge, it could revolutionize chemistry, physics, computer science and more. Despite the apparent progress achieved by Google here, I am cautious about placing full trust in the advances claimed by commercial companies, as their competitive approach may prioritize hype or market value. Given the extent to which scientific research (…)
Modern infrastructure management increasingly relies on infrastructure-as-code (IaC), the paradigm of managing computing infrastructure automatically through machine-readable specifications, using tools such as Ansible. Furthermore, there is increasing interest in leveraging Large Language Models (LLMs) to 1) automatically generate the specification code that provisions the desired infrastructure, and 2) periodically check whether the infrastructure (…)
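To illustrate the second point, here is a minimal, tool-agnostic sketch in Python (the service names and attributes are invented for illustration; real IaC tools such as Ansible do this far more thoroughly): a declarative desired state is compared against the observed state, and any drift is reported so it can be remediated.

```python
# Toy illustration of the IaC drift-checking idea (not real Ansible):
# a declarative "desired state" is compared against the observed
# state, and every mismatch is reported for remediation.

desired = {"nginx":    {"installed": True, "port": 443},
           "postgres": {"installed": True, "port": 5432}}

def detect_drift(desired, observed):
    """Return {service: {key: (wanted, found)}} for every mismatch."""
    drift = {}
    for service, spec in desired.items():
        actual = observed.get(service, {})
        diffs = {k: (v, actual.get(k))
                 for k, v in spec.items() if actual.get(k) != v}
        if diffs:
            drift[service] = diffs
    return drift

# Suppose someone reconfigured nginx by hand to listen on port 80:
observed = {"nginx":    {"installed": True, "port": 80},
            "postgres": {"installed": True, "port": 5432}}

print(detect_drift(desired, observed))
# {'nginx': {'port': (443, 80)}}
```

The appeal of the paradigm is exactly this loop: the specification is the single source of truth, and drift can be detected and corrected mechanically rather than by manual inspection.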
One of the increasingly popular paradigms for managing the growing size and complexity of modern ML models is the adoption of collaborative and decentralized approaches. While this has enabled new possibilities in privacy-preserving and scalable frameworks for distributed data analytics and model training over large-scale real-world models, current approaches often assume uniform trust levels among (…)
State-of-the-art architectures in many software applications and critical infrastructures are based on deep learning models. These models have been shown to be quite vulnerable to very small, carefully crafted perturbations, which raises fundamental questions about safety, security, and performance guarantees at large. Several defense mechanisms have been developed in recent years (…)
A Geneva Democracy Week Event.
The impact of AI on all levels of society is undeniable and growing, making this debate more timely than ever. On 10 October, as part of Geneva Democracy Week, we warmly invite you to take part in an Oxford-style debate on the motion: “This House believes that AI will save democracy.”
With our conference on “Assessing the Disruptions by AI Agents” in mind, I found this article compelling because it documents the alarming acceleration of cyberattack capabilities thanks to AI agents. This raises the critical question of whether we are approaching a tipping point at which defence becomes structurally impossible. However, the authors offer cautious optimism, (…)
Following Germany’s recent refusal to adopt the European law known as “Chat Control”, and in the run-up to its vote at the European level, this article, accompanied by a short video, offers a clear and accessible synthesis of the diverging views of legislators and scientific experts.
Many countries are investing in “sovereign” AI to compensate for the linguistic, cultural, and security shortcomings of American and Chinese models. Examples include SEA-LION in Singapore, ILMUchat in Malaysia, and Apertus in Switzerland, which are designed for local uses and characteristics. However, these initiatives face major obstacles: very high costs, massive computing power and talent (…)
An important part of digital trust is being in charge of your own data. Unfortunately, nowadays this is not at all the case. Some of your data resides in the cloud, e.g., in cloud drives like Google Drive, directly accessible to you. But most of your data is hidden from you, saved on the servers of (…)
Companies are beginning to incorporate AI agents into their workstreams, even as they play catch-up (or fall behind, depending on how you look at it) in articulating frameworks to assign accountability for AI-driven decisions and to weigh the trade-off between human oversight and explainability. This article nicely summarizes the findings of a survey of (…)
Having witnessed the suspense and uncertainties of last Sunday’s voting on the Swiss eID law, and since then, the threat of annulling the result due to alleged interference by Swisscom, I am left pondering what is specifically controversial about the eID. If we compare the eID to other services that have moved from analogue (…)
This heartfelt appeal by Tim Berners-Lee, the inventor of the World Wide Web, is not to complain about all of the problems of today’s internet, but to remind us that they are not set in stone – it started out differently, and we can take it back to its roots if we choose to.
I think it’s really important that we can point to specific examples to explain why misinformation can be harmful. In this report, the researchers used the Meta Content Library, which allows Facebook comments to be analyzed in an anonymized way, to demonstrate four examples of how misinformation caused clear harm in Australia. Countering misinformation is (…)
With the increased use of LLMs in programming, the problem of supply-chain attacks multiplies: first, programmers need to make sure that the libraries proposed by an LLM are secure, maintained, and trustworthy. Now it turns out that LLMs even change the quality of the code depending on the indicated goal of (…)
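One cheap guard against this first risk, sketched below in Python (the allowlist contents are purely illustrative, not a recommendation), is to check an LLM-suggested dependency against a vetted list before installing it, which catches typo-squatted package names outright.

```python
# Hedged sketch: before installing a dependency suggested by an LLM,
# check it against a locally vetted allowlist instead of trusting the
# name blindly. The allowlist entries here are illustrative only.

VETTED = {"requests", "numpy", "cryptography"}

def safe_to_install(package: str, allowlist=VETTED) -> bool:
    """Accept only exact, case-normalized matches against the allowlist."""
    return package.strip().lower() in allowlist

print(safe_to_install("Requests"))   # True  (vetted, case-insensitive)
print(safe_to_install("requezts"))   # False (typo-squatted name)
```

An allowlist obviously does not assess whether a library is secure or maintained, but it turns “the LLM named a package” from an install decision into a review decision.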
This ban is notable because, rather than targeting cutting-edge AI chips, it focuses on mass-produced processors that are essential for wider industry use. By disrupting the supply of equipment and forcing Chinese tech giants to innovate internally, it raises the stakes in the US–China tech conflict. This will likely accelerate the development of domestic production (…)
This issue explores the rapid spread of AI-powered video surveillance, from supermarkets to large public gatherings, and examines the legal, ethical, and societal challenges it raises. With insights from Sébastien Marcel (Idiap Research Institute) and Johan Rochel (EPFL CDH), it looks at Switzerland’s sectoral approach versus the EU’s new AI Act, and what this means (…)
Today’s identity security faces challenges like misuse and tracking. Our goal is to enable secure, anonymous, unlinkable E-ID interactions by researching novel cryptographic algorithms. This boosts user trust, creates new business opportunities, and cuts financial losses after data breaches.