Digital Threat Modeling Under Authoritarianism
Even if you’re not living in the U.S., this article provides some useful insights about how to make informed decisions about protecting your data.
Having witnessed the suspense and uncertainty of last Sunday’s vote on the Swiss eID law, and since then the threat of annulling the result due to alleged interference by Swisscom, I am left pondering what is specifically controversial about the eID. If we compare the eID to other services that have moved from analogue (…)
While we programmers are still figuring out where and how LLMs can help us get our work done, it’s worth taking a step back to reflect on what we know so far. I like this piece, which compares LLMs to “lightning-fast” junior programmers and describes how we can deploy them to deliver results. Although not (…)
No sooner had I begun to express my bemusement at the apparent popularity of a new app that pays users to record their phone calls and sells the data to AI firms, than the app was summarily shut down after a security flaw was reported that allowed anyone to access the phone numbers, call recordings, (…)
I think it’s really important that we can point to specific examples to explain why misinformation can be harmful. In this report, the researchers used the Meta Content Library, which allows Facebook comments to be analyzed in an anonymized way, to demonstrate four examples of how misinformation caused clear harm in Australia. Countering misinformation is (…)
I appreciate this article because it demonstrates—through use cases and comparisons—how applying democratic processes, such as referendums, to digital transformation can lead to more democracy-friendly digitalization. The Swiss e-ID referendum is a prime example: although the process was relatively slow, it ultimately resulted in a privacy-by-design solution with a high level of transparency.
A short and nice definition of agents: ‘An LLM agent runs tools in a loop to achieve a goal.’ Of course, this is only the technical description, and the applications are also very important. But for the moment we need to be clear that, as long as agents do not possess the ability to make (…)
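That one-line definition can be made concrete in a few lines of code. The sketch below is purely illustrative: `call_llm` and `TOOLS` are hypothetical placeholders, not any real API, and the stub model call simply finishes on the first step.

```python
def call_llm(history):
    # Placeholder for a real model call; a real LLM would either
    # request a tool ({"type": "tool", "name": ..., "args": ...})
    # or declare the goal reached. This stub finishes immediately.
    return {"type": "final", "content": "done"}

TOOLS = {
    # Hypothetical tool registry the agent may call.
    "search": lambda query: f"results for {query}",
}

def run_agent(goal, max_steps=5):
    """An LLM agent: run tools in a loop until the goal is reached."""
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        reply = call_llm(history)
        if reply["type"] == "final":      # goal reached: leave the loop
            return reply["content"]
        tool = TOOLS[reply["name"]]       # otherwise run the requested tool
        history.append({"role": "tool", "content": tool(reply["args"])})
    return None                           # give up after max_steps
```

The loop, the tool registry, and the stopping condition are the whole technical content of the definition; everything else is model quality.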
With the increasing use of LLMs in programming, the problem of supply chain attacks multiplies: first of all, programmers must ensure that the libraries proposed by the LLM are secure, maintained, and trustworthy. Now it turns out that LLMs even vary the quality of the code they produce depending on the indicated goal of (…)
This ban is notable because, rather than targeting cutting-edge AI chips, it focuses on mass-produced processors that are essential for wider industry use. By disrupting the supply of equipment and forcing Chinese tech giants to innovate internally, it raises the stakes in the US–China tech conflict. This will likely accelerate the development of domestic production (…)
Today’s identity security faces challenges like misuse and tracking. Our goal is to enable secure, anonymous, unlinkable E-ID interactions by researching novel cryptographic algorithms. This boosts user trust, creates new business opportunities, and cuts financial losses after data breaches.
Phishing is a growing societal threat and requires urgent, effective solutions. Through a collaborative process involving field feedback, ideation workshops, and state-of-the-art analysis, this open innovation project will explore technical, organizational, and legal solutions. The results will be prototypes, practical guidelines, and recommendations to strengthen digital security against phishing.
While one of my previous weekly picks showed that there is currently no mathematical proof for the reliability of today’s cryptographic algorithms, this article shows a way out: if a quantum computer is used as the basis for building a cryptographic algorithm, the foundation can be proven to protect against attacks on the system. While (…)
This article is fascinating because it exposes how indirect prompt injection attacks against LLM assistants like Google Gemini are not just theoretical—they have real-world implications, enabling hackers to hijack smart homes through poisoned data. This highlights a fundamental security flaw: current LLMs cannot reliably distinguish trusted commands from untrusted, external data.
Even Homer sometimes nods. I chose this article for two key reasons. First, it shows that phishing isn’t just a threat to non-technical users—even seasoned IT professionals can fall victim, despite using multi-factor authentication (MFA). Second, this incident was part of a larger supply chain attack with potentially catastrophic consequences. The takeaway? Think a thousand (…)
I find this article interesting because it highlights the tension between digital sovereignty and the expansion of global technology. With 75% market penetration compared to the single-digit presence of US alternatives, Pix demonstrates how public digital goods can effectively challenge the dominance of Big Tech. This case raises the question of whether payment systems constitute (…)
The latest C4DT Insight, written by Dr. Paola Daniore, is out! In this issue paper, she outlines the urgent need to foster the secondary use of health data as a strategic priority for Switzerland’s health and innovation ecosystem.
This article talks about deepening digital estrangement, digital intrusion, and digital distraction from the perspective of a teacher who has seen the harm that overreliance on AI has caused to her students’ educational attainment. Hers is another testimony to the need for the definition of responsible and trustworthy AI to include when it should be (…)
Using the infamous example of the backdoor in the xz library, this piece astutely dissects the systematic failure of the software economy to properly support open-source software development, leaving our so-called software ‘supply’ chain vulnerable to attacks. I agree wholeheartedly with the author that if we do not stop treating open-source software as a free (…)
The collaboration between the Swiss Data Science Center (SDSC) and the Canton of Vaud aims to generate a tangible and lasting impact on the economy and public community of the Vaud region. In this context, the SDSC supports collaborative projects in the field of data science, bringing together academic excellence with the strengths of companies, particularly SMEs, and public actors.
While public LLM APIs are convenient, they store all queries on providers’ servers. Running open LLMs locally offers privacy and offline access, though setup can be challenging depending on hardware and model requirements. ‘Anyway’ addresses this by distributing queries across multiple GPUs with dynamic scaling. Prof. Guerraoui works on fault tolerance in distributed systems. This (…)
EPFL, ETH Zurich and the Swiss National Supercomputing Centre (CSCS) released Apertus today, Switzerland’s first large-scale, open, multilingual language model — a milestone in generative AI for transparency and diversity.
I follow the advances of quantum computers with great interest, mainly because I’m curious when, or if, they will ever be able to break current cryptography algorithms. The holy grail among these algorithms is Shor’s algorithm, which can factorize numbers quickly. Already in 2001, a quantum computer factored the number 15. Yet since then, no quantum (…)
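For readers curious what "factoring 15" actually involves: Shor's algorithm only needs the quantum computer for one subroutine, finding the period of a^x mod n; the rest is classical number theory. The toy sketch below brute-forces that period classically (which is exactly the part that is exponentially slow without a quantum computer) and then applies the classical post-processing to recover the factors of 15.

```python
from math import gcd

def classical_period(a, n):
    # Brute-force the multiplicative order of a mod n -- the one step
    # a quantum computer performs exponentially faster in Shor's algorithm.
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, a):
    # Classical post-processing of Shor's algorithm for a chosen base a.
    r = classical_period(a, n)
    if r % 2:                 # the method needs an even period
        return None
    half = pow(a, r // 2)
    return gcd(half - 1, n), gcd(half + 1, n)

# Base a=2 modulo 15 has period 4 (2, 4, 8, 1), yielding factors 3 and 5.
```

For a 2048-bit RSA modulus the period-finding step is hopeless classically, which is why the question of scalable quantum hardware matters so much for today's cryptography.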
I particularly enjoyed this article because it challenges today’s automation-at-all-costs mindset, urging us to prioritize human-AI collaboration over replacement, with the goal that AI plus human expertise exceeds what AI can achieve alone. Learning when to collaborate versus automate is vital for more trustworthy and effective outcomes.