Many countries are investing in “sovereign” AI to compensate for the linguistic, cultural, and security shortcomings of American and Chinese models. Examples include SEA-LION in Singapore, ILMUchat in Malaysia, and Apertus in Switzerland, which are designed for local uses and characteristics. However, these initiatives face major obstacles: very high costs, massive computing power and talent (…)
An important part of digital trust is being in charge of your own data. Unfortunately, nowadays this is not at all the case. Some of your data resides in the cloud, e.g., with cloud-drives like Google Drive, directly accessible to you. But most of your data is hidden from you, saved on the servers of (…)
This long portrait of one of the leading authorities in the world on AI touches only lightly on concepts that those focused on safe and trustworthy AI prioritize, like explainability. But I think the key take-away from this article is that ‘it is hard, in the current AI race, to separate out purely intellectual inquiry (…)
Without defending any party or attacking the other, I find this article interesting because it somehow presents a new situation whose implications we should carefully consider: First, can Microsoft’s logic be extended from the use of cloud storage and AI to the use of operating systems? What about communications services or even hardware? Can this (…)
While we programmers are still figuring out where and how LLMs can help us get our work done, it’s worth taking a step back to reflect on what we know so far. I like this piece, which compares LLMs to “lightning-fast” junior programmers and describes how we can deploy them to deliver results. Although not (…)
No sooner had I begun to express my bemusement at the apparent popularity of a new app that pays users to record their phone calls and sells the data to AI firms, than the app was summarily shut down after a security flaw was reported that allowed anyone to access the phone numbers, call recordings, (…)
This ban is notable because, rather than targeting cutting-edge AI chips, it focuses on mass-produced processors that are essential for wider industry use. By disrupting the supply of equipment and forcing Chinese tech giants to innovate internally, it raises the stakes in the US–China tech conflict. This will likely accelerate the development of domestic production (…)
This issue explores the rapid spread of AI-powered video surveillance, from supermarkets to large public gatherings, and examines the legal, ethical, and societal challenges it raises. With insights from Sébastien Marcel (Idiap Research Institute) and Johan Rochel (EPFL CDH), it looks at Switzerland’s sectoral approach versus the EU’s new AI Act, and what this means (…)
From supermarket checkouts to Olympic stadiums, smart surveillance technologies are spreading rapidly, raising new questions about privacy, trust, and oversight. How should societies balance the benefits of AI-powered cameras with the risks of bias, misuse, and erosion of democratic freedoms? And how will the upcoming European AI Act reshape the governance of biometric surveillance, both in the EU and in Switzerland? This edition of C4DT Focus examines these pressing issues by offering a legal and ethical perspective on intelligent video surveillance, with insights from Sébastien Marcel (Idiap Research Institute) and Johan Rochel (EPFL CDH).
While one of my previous weekly picks showed that there is currently no mathematical proof for the reliability of today’s cryptographic algorithms, this article shows a way out: if a quantum computer is used as a basis to build a cryptographic algorithm, the foundation can be shown to protect against attacks to the system. While (…)
This article is fascinating because it exposes how indirect prompt injection attacks against LLM assistants like Google Gemini are not just theoretical—they have real-world implications, enabling hackers to hijack smart homes through poisoned data. This highlights a fundamental security flaw: current LLMs cannot reliably distinguish trusted commands from untrusted, external data.
I find this article interesting because it highlights the tension between digital sovereignty and the expansion of global technology. With 75% market penetration compared to the single-digit presence of US alternatives, Pix demonstrates how public digital goods can effectively challenge the dominance of Big Tech. This case raises the question of whether payment systems constitute (…)
This article talks about deepening digital estrangement, digital intrusion, and digital distraction from the perspective of a teacher who has seen the harm that overreliance on AI has caused to her students’ educational attainment. Hers is another testimony to the need for the definition of responsible and trustworthy AI to include when it should be (…)
This is a nice reminder of the state of the foundation upon which our public key infrastructure stands. Depending on the angle you’re looking at, it is either stable or shaky. The incident in question was a certificate authority that emitted a rogue certificate for “test purposes.” What ensued and how Cloudflare responded shows how (…)
The collaboration between the Swiss Data Science Center (SDSC) and the Canton of Vaud aims to generate a tangible and lasting impact on the economy and public community of the Vaud region. In this context, the SDSC supports collaborative projects in the field of data science, bringing together the strengths of academic excellence, companies, particularly SMEs and public actors.
While public LLM APIs are convenient, they store all queries on providers’ servers. Running open LLMs locally offers privacy and offline access, though setup can be challenging depending on hardware and model requirements. ‘Anyway’ addresses this by distributing queries across multiple GPUs with dynamic scaling. Professor Guerraoui’s lab is developing “Anyway”, a tool that can (…)
After many thought that LLMs and image-generators will remove jobs from writers and image artists, the pendulum swings back: clients realize that these tools only get you halfway to a useful result. So they turn to the ones they wanted to replace, and ask them to fix the half-baken results. I find i interesting how (…)
While it was to be expected that Anthropic will also use the users’ chats for training, I think the way they’re approaching this is not too bad. Perhaps the pop-up is not clear enough, but at least past chats will not get in the LLM training grinder. One of the big question will be of (…)
The Israeli airstrike campaign against Iranian military and cyber infrastructure on 12 June had an ‘interesting’ side effect. Accounts that had previously been identified as allegedly being managed by the Iranian Revolutionary Guard Corps (IRGC) and that promoted Scottish independence fell silent following the strikes. This resulted in a 4% reduction in all discussion related (…)
I found this article interesting because, rather than perpetuating fear-driven narratives, it provides a thorough analysis backed by demographic realities in the Western world. Labour shortages, it suggests, make it unlikely that AI will ‘take all our jobs’. It emphasises how AI can increase access to specialist roles for a wider range of workers. The (…)
With all the hype around agentic AI, the industry is rushing to embrace it. However, alarm bells have been sounded again and again concerning misaligned behaviour of LLMs and Large Reasoning Models (LRMs), ranging from ‘harmless’ misinformation to deliberately malicious actions. This raises serious questions whether the current technology is really mature enough to be (…)
From a cryptographer’s point of view, the big breakthrough in quantum computing would be if it can successfully factorize numbers in the 1000-digit range. As it turns out, this is actually quite difficult. The record from 2012 of factorizing the number 21 is still unbeaten! And all reports of factorizing bigger numbers used very, very (…)
Severe floods in Texas sparked a wave of conspiracy theories, with claims circulating online that the disaster was caused by geoengineering or weather weapons. This highlights a growing tension between the speed at which formal institutions can communicate accurate information and the rapid spread of AI-fueled disinformation. The resulting vandalism of radar infrastructure and threats (…)
As a software engineer, I’m looking at LLMs both as a tool for, but potentially also a danger to, my job: will it replace me one day? In this study, they measured the time that seasoned software needed to fix a bug, both with and without the aid of LLMs. The outcome in this specific (…)