Many countries are investing in “sovereign” AI to compensate for the linguistic, cultural, and security shortcomings of American and Chinese models. Examples include SEA-LION in Singapore, ILMUchat in Malaysia, and Apertus in Switzerland, each designed around local languages and needs. However, these initiatives face major obstacles: very high costs, massive computing power and talent (…)
	 
	
	
	
	
		An important part of digital trust is being in charge of your own data. Unfortunately, today this is far from the case. Some of your data resides in the cloud, e.g., in cloud drives like Google Drive, where it is directly accessible to you. But most of your data is hidden from you, stored on the servers of (…)
	 
	
	
	
	
		Companies are beginning to incorporate AI agents into their workstreams, even as they play catch-up (or fall behind, depending on how you look at it) in articulating frameworks that assign accountability for AI-driven decisions and weigh the trade-off between human oversight and explainability. This article nicely summarizes the findings of a survey of (…)
	 
	
	
	
	
		Having witnessed the suspense and uncertainty of last Sunday’s voting on the Swiss eID law, and since then the threat of the result being annulled over alleged interference by Swisscom, I am left pondering what is specifically controversial about the eID. If we compare the eID to other services that have moved from analogue (…)
	 
	
	
	
	
		This heartfelt appeal by Tim Berners-Lee, the inventor of the World Wide Web, does not simply complain about all of the problems of today’s internet, but reminds us that they are not set in stone – the web started out differently, and we can take it back to its roots if we choose to.
	 
	
	
	
	
		I think it’s really important that we can point to specific examples to explain why misinformation can be harmful. In this report, the researchers used the Meta Content Library, which allows Facebook comments to be analyzed in an anonymized way, to demonstrate four examples of how misinformation caused clear harm in Australia. Countering misinformation is (…)
	 
	
	
	
	
		With the increasing use of LLMs in programming, the problem of supply chain attacks multiplies: first of all, programmers need to make sure that the libraries an LLM proposes are secure, maintained, and trustworthy. Now it turns out that LLMs even change the quality of their code depending on the stated goal of (…)
	 
	
	
	
	
		This ban is notable because, rather than targeting cutting-edge AI chips, it focuses on mass-produced processors that are essential for wider industry use. By disrupting the supply of equipment and forcing Chinese tech giants to innovate internally, it raises the stakes in the US–China tech conflict. This will likely accelerate the development of domestic production (…)
	 
	
	
	
		This issue explores the rapid spread of AI-powered video surveillance, from supermarkets to large public gatherings, and examines the legal, ethical, and societal challenges it raises. With insights from Sébastien Marcel (Idiap Research Institute) and Johan Rochel (EPFL CDH), it looks at Switzerland’s sectoral approach versus the EU’s new AI Act, and what this means (…)
	 
	
	
	
	
		Today’s identity security faces challenges like misuse and tracking. Our goal is to enable secure, anonymous, unlinkable E-ID interactions by researching novel cryptographic algorithms. This boosts user trust, creates new business opportunities, and cuts financial losses after data breaches.
	 
	
	
	
	
		Even Homer sometimes nods. I chose this article for two key reasons. First, it shows that phishing isn’t just a threat to non-technical users—even seasoned IT professionals can fall victim, despite using multi-factor authentication (MFA). Second, this incident was part of a larger supply chain attack with potentially catastrophic consequences. The takeaway? Think a thousand (…)
	 
	
	
	
	
		I find this article interesting because it highlights the tension between digital sovereignty and the expansion of global technology. With 75% market penetration compared to the single-digit presence of US alternatives, Pix demonstrates how public digital goods can effectively challenge the dominance of Big Tech. This case raises the question of whether payment systems constitute (…)
	 
	
	
	
	
		This article examines deepening digital estrangement, digital intrusion, and digital distraction from the perspective of a teacher who has seen the harm that overreliance on AI has done to her students’ educational attainment. Hers is another testimony to the need for the definition of responsible and trustworthy AI to include when it should be (…)
	 
	
	
	
	
		Semiconductors power nearly all modern devices, so controlling their production is strategically crucial. By revoking TSMC’s authorization to export advanced US chipmaking tools to China, the US hinders China’s ability to produce state-of-the-art chips (though TSMC only makes less advanced chips there). While this may curb China’s capabilities in the short run, in the long term, (…)
	 
	
	
	
	
		Using the infamous example of the backdoor in the xz library, this piece astutely dissects the systematic failure of the software economy to properly support open-source software development, leaving our so-called software ‘supply’ chain vulnerable to attacks. I agree wholeheartedly with the author that if we do not stop treating open-source software as a free (…)
	 
	
	
	
		While public LLM APIs are convenient, they store all queries on providers’ servers. Running open LLMs locally offers privacy and offline access, though setup can be challenging depending on hardware and model requirements. Professor Guerraoui’s lab is developing “Anyway”, a tool that addresses this by distributing queries across multiple GPUs with dynamic scaling, and that can (…)
	 
	
	
	
	
		I follow the advances of quantum computers with great interest, mainly because I’m curious when, or if, they will ever be able to break current cryptography algorithms. The holy grail here is Shor’s algorithm, which can factorize large numbers quickly. Already in 2001, a quantum computer factorized the number 15! Yet since then, no quantum (…)
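To see why factoring 15 was the milestone, the number-theoretic core of Shor’s algorithm can be sketched classically. This minimal Python example (my own illustration, not from the article) brute-forces the period that a quantum computer finds exponentially faster via the quantum Fourier transform, then uses it to recover the factors:

```python
from math import gcd

def order(a, n):
    # Find the multiplicative order r of a mod n by brute force --
    # this is the step a quantum computer speeds up with the QFT.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    # Classical post-processing of Shor's algorithm: if the order r
    # is even and a^(r/2) != -1 (mod n), then gcd(a^(r/2) +/- 1, n)
    # yields non-trivial factors of n.
    assert gcd(a, n) == 1, "a must be coprime to n"
    r = order(a, n)
    if r % 2 != 0:
        return None  # odd order: retry with another a
    half = pow(a, r // 2, n)
    if half == n - 1:
        return None  # a^(r/2) == -1 mod n: retry with another a
    return gcd(half - 1, n), gcd(half + 1, n)

print(shor_classical(15, 7))  # → (3, 5)
```

The quantum speed-up lies entirely in the period-finding step; the gcd post-processing shown here is classical and cheap.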
	 
	
	
	
	
		I particularly enjoyed this article because it challenges today’s automation-at-all-costs mindset, urging us to prioritize human-AI collaboration over replacement, with the goal that AI plus human expertise exceeds what AI can achieve alone. Learning when to collaborate versus automate is vital for more trustworthy and effective outcomes.
	 
	
	
	
	
		The Cybercrime Atlas project offers a successful, and almost uplifting, example of collaboration across law enforcement, government agencies, and businesses against cybercrime. In a sweeping INTERPOL-coordinated operation, authorities across Africa arrested 1,209 cybercriminals who had targeted nearly 88,000 victims. The crackdown recovered USD 97.4 million and dismantled 11,432 pieces of malicious infrastructure. This operation demonstrates how cross-border collaboration (…)
	 
	
	
	
	
		Whether you’re for or against the proposed E-ID, public discussion is healthiest when founded on factually correct arguments. While this piece is clearly opinionated, it also tries to examine the opposition’s main arguments as neutrally as possible, providing a good explanation and discussion of each of them, pointing (…)
	 
	
	
	
	
		The Israeli airstrike campaign against Iranian military and cyber infrastructure on 12 June had an ‘interesting’ side effect. Accounts that had previously been identified as allegedly managed by the Iranian Revolutionary Guard Corps (IRGC), and that promoted Scottish independence, fell silent following the strikes. This resulted in a 4% reduction in all discussion related (…)
	 
	
	
	
		A secure and reliable electronic identity (e-ID) is both a challenge and a crucial issue in today’s digital landscape. EPFL and SICPA are joining forces to design an innovative system of cryptographic algorithms.
	 
	
	
	
		To promote research and education in cyber-defence, EPFL and the Cyber-Defence (CYD) Campus launched a rolling call for Master Thesis Fellowships – A Talent Program for Cyber-Defence Research.
This month we introduce you to Hamza Abid, a CYD Master Thesis Fellowship recipient, who is finishing up his Master Thesis in the Laboratory of Sensing and Networking Systems at EPFL.
	 
	
	
	
	
		This article prompts reflection on what we mean by ‘trust’ when we talk about ‘trustworthy’ AI. There are many dimensions to trust, and the author helpfully breaks them down. In human-AI interactions, misalignments can occur when stakeholders interpret ‘trust’ differently. For example, companies might emphasize the epistemic aspect of trust—reliance on knowledge and its acquisition—while (…)