One of the increasingly popular paradigms for managing the growing size and complexity of modern ML models is the adoption of collaborative and decentralized approaches. While this has enabled new possibilities in privacy-preserving and scalable frameworks for distributed data analytics and model training over large-scale real-world models, current approaches often assume uniform trust levels among (…)
State-of-the-art architectures in many software applications and critical infrastructures are based on deep learning models. These models have been shown to be highly vulnerable to very small, carefully crafted perturbations, which raises fundamental questions about safety, security, and performance guarantees at large. Several defense mechanisms have been developed in recent years (…)
A Geneva Democracy Week Event.
The impact of AI on all levels of society is undeniable and growing, making this debate more timely than ever. On 10 October, as part of Geneva Democracy Week, we warmly invite you to take part in an Oxford-style debate on the motion: ‘This House believes that AI will save democracy.’
With our conference on “Assessing the Disruptions by AI Agents” in mind, I found this article compelling because it documents the alarming acceleration of cyberattack capabilities enabled by AI agents. This raises the critical question of whether we are approaching a tipping point at which defence becomes structurally impossible. However, the authors offer cautious optimism, (…)
Following Germany’s recent refusal to adopt the European law known as “Chat Control”, and with its vote at the European level approaching, this article, accompanied by a short video, offers a clear and accessible synthesis of the diverging views between legislators and scientific experts.
Many countries are investing in “sovereign” AI to compensate for the linguistic, cultural, and security shortcomings of American and Chinese models. Examples include SEA-LION in Singapore, ILMUchat in Malaysia, and Apertus in Switzerland, which are designed for local uses and characteristics. However, these initiatives face major obstacles: very high costs, massive computing power and talent (…)
An important part of digital trust is being in charge of your own data. Unfortunately, nowadays this is far from the case. Some of your data resides in the cloud, e.g., in cloud drives like Google Drive, directly accessible to you. But most of your data is hidden from you, saved on the servers of (…)
Companies are beginning to incorporate AI agents into their workstreams, even as they play catch-up (or fall behind, depending on how you look at it) in articulating frameworks to assign accountability for AI-driven decisions and weigh the trade-off between human oversight and explainability. This article nicely summarizes the findings of a survey of (…)
Having witnessed the suspense and uncertainty of last Sunday’s vote on the Swiss eID law, and, since then, the threat to annul the result over alleged interference by Swisscom, I am left pondering what is specifically controversial about the eID. If we compare the eID to other services that have moved from analogue (…)
This heartfelt appeal by Tim Berners-Lee, the inventor of the World Wide Web, does not complain about all of the problems of today’s internet, but reminds us that they are not set in stone – it started out differently, and we can take it back to its roots if we choose to.
I think it’s really important that we can point to specific examples to explain why misinformation can be harmful. In this report, the researchers used the Meta Content Library, which allows Facebook comments to be analyzed in an anonymized way, to demonstrate four examples of how misinformation caused clear harm in Australia. Countering misinformation is (…)
With the increased use of LLMs in programming, the problem of supply-chain attacks multiplies: first, programmers need to make sure that the libraries proposed by the LLM are secure, maintained, and trustworthy. Now it turns out that LLMs even change the quality of the code depending on the indicated goal of (…)
This ban is notable because, rather than targeting cutting-edge AI chips, it focuses on mass-produced processors that are essential for wider industry use. By disrupting the supply of equipment and forcing Chinese tech giants to innovate internally, it raises the stakes in the US–China tech conflict. This will likely accelerate the development of domestic production (…)
This issue explores the rapid spread of AI-powered video surveillance, from supermarkets to large public gatherings, and examines the legal, ethical, and societal challenges it raises. With insights from Sébastien Marcel (Idiap Research Institute) and Johan Rochel (EPFL CDH), it looks at Switzerland’s sectoral approach versus the EU’s new AI Act, and what this means (…)
Today’s identity security faces challenges like misuse and tracking. Our goal is to enable secure, anonymous, unlinkable E-ID interactions by researching novel cryptographic algorithms. This boosts user trust, creates new business opportunities, and cuts financial losses after data breaches.
Even Homer sometimes nods. I chose this article for two key reasons. First, it shows that phishing isn’t just a threat to non-technical users—even seasoned IT professionals can fall victim, despite using multi-factor authentication (MFA). Second, this incident was part of a larger supply chain attack with potentially catastrophic consequences. The takeaway? Think a thousand (…)
I find this article interesting because it highlights the tension between digital sovereignty and the expansion of global technology. With 75% market penetration compared to the single-digit presence of US alternatives, Pix demonstrates how public digital goods can effectively challenge the dominance of Big Tech. This case raises the question of whether payment systems constitute (…)
This article talks about deepening digital estrangement, digital intrusion, and digital distraction from the perspective of a teacher who has seen the harm that overreliance on AI has caused to her students’ educational attainment. Hers is another testimony to the need for the definition of responsible and trustworthy AI to include when it should be (…)
Semiconductors power nearly all modern devices, so controlling their production is strategically crucial. By revoking TSMC’s authorization to export advanced US chipmaking tools to China, the US hinders China’s ability to produce state-of-the-art chips (though TSMC only makes less advanced chips there). While this may curb China’s capacities in the short run, in the long-term, (…)
Using the infamous example of the backdoor in the xz library, this piece astutely dissects the systematic failure of the software economy to properly support open-source software development, leaving our so-called software ‘supply’ chain vulnerable to attacks. I agree wholeheartedly with the author that if we do not stop treating open-source software as a free (…)
While public LLM APIs are convenient, they store all queries on providers’ servers. Running open LLMs locally offers privacy and offline access, though setup can be challenging depending on hardware and model requirements. To address this, Professor Guerraoui’s lab is developing “Anyway”, which distributes queries across multiple GPUs with dynamic scaling – a tool that can (…)
I follow the advances of quantum computers with great interest, mainly because I’m curious when, or if, they will ever be able to break current cryptographic algorithms. The holy grail here is Shor’s algorithm, which can factorize large numbers quickly. Already in 2001, a quantum computer factorized 15! Yet since then, no quantum (…)
I particularly enjoyed this article because it challenges today’s automation-at-all-costs mindset, urging us to prioritize human-AI collaboration over replacement, with the goal that AI plus human expertise exceeds what AI can achieve alone. Learning when to collaborate versus automate is vital for more trustworthy and effective outcomes.
The Cybercrime Atlas project is a successful, almost uplifting, example of collaboration across law enforcement, government agencies, and businesses against cybercrime. In a sweeping INTERPOL-coordinated operation, authorities across Africa arrested 1,209 cybercriminals who targeted nearly 88,000 victims. The crackdown recovered USD 97.4 million and dismantled 11,432 malicious infrastructures. This operation demonstrates how cross-border collaboration (…)