This article highlights significant flaws within the proposed NO FAKES Act, whose repercussions would extend far beyond U.S. borders. I found it particularly insightful because of the parallels it draws between this bill and existing mechanisms for addressing copyright infringement, outlining how the deficiencies within the latter are likely to be mirrored in the former.
Driven by ethical concerns about using existing artwork to train gen AI models, an artist created his own model that produces output untrained on any data at all. What was interesting to me is that, in exploring whether gen AI could create original art, he also demonstrated a potential path to better understanding how such (…)
As LLM agents become ‘en vogue’, we need to rethink the attacks they open to malicious third parties. Here Simon Willison describes a combination often seen in such agents that will put your private data at risk. Unfortunately, there is currently not much you can do, except be aware that all the data that agents (…)
Cycle tracking apps are not only helpful for those trying to conceive, but also serve as important tools for keeping track of one’s general reproductive health. As this article discusses, however, such tools can quickly become a double-edged sword due to the high value of the data they collect, which can potentially end up in (…)
This article underscores that neither digital policies nor technologies can be discussed in isolation. Using Indonesia as an example, it lays out how the country’s laws and regulations on internet content are actually implemented by the ISPs and examines how the combination of vaguely worded laws and sweeping filtering methods ultimately impacts citizens’ access to (…)
This article is interesting because it highlights the opportunities and challenges of personal data ownership. Although tools such as dWallet claim to empower users, they can encourage the poorest and least educated people to sell their data without understanding the risks, thereby widening the digital divide. True data empowerment means that everyone must have the (…)
To foster wallets, credentials and trusted infrastructure for the benefit of all humans. Leading organizations from across the globe coming together to shape the future of digital identity, in particular in the realm of secure, interoperable wallets, credentials and trusted infrastructure.
That is a very nice attack on privacy-protection in the mobile browsers: even if you don’t allow any cookies and don’t consent on being tracked, you’re browsing behaviour is still tracked. The idea of communicating from the mobile browser to your locally installed app is technically very interesting, and seems to be difficult to avoid (…)
This atlas of algorithmic systems curated by AlgorithmWatch CH is a nonexhaustive yet revealing list of algorithms currently deployed in Switzerland, whether to ‘predict, recommend, affect or take decisions about human beings’ or to ‘generate content used by or on human beings.’ The atlas is really eye-opening for me – so many systems that we (…)
Agentic AI has only recently emerged, yet it is already being used to commit fraud. This trend is not new; historically, fraudsters have exploited new technologies to target unsuspecting users and weak security systems, as seen with the first instances of voice phishing during the rise of telephony in the early 20th-century. These challenges have (…)
This policy paper highlights the crucial trade-off between privacy and utility in data sharing and call for a shift from technology-centric solutions to purpose-driven policies. The paper formulates eight actionable recommendations to guide realistic, privacy-preserving data-sharing practices in Europe.
Join the movement! Swiss {ai} Weeks calls on researchers, developers, businesses, and citizens to come together and build the future of AI — hands-on, ethical, and open. This isn’t just exploration, it’s collaboration in action.
To promote research and education in cyber-defence, the EPFL and the Cyber-Defence (CYD) Campus have jointly launched the “CYD Fellowships – A Talent Program for Cyber-Defence Research.”
The 12th call for applications is now open, with a rolling call for Master Thesis Fellowship applications and Proof of Concept Fellowship applications, and with a deadline of 20 August 2025 (17:00 CEST) for Doctoral and Distinguished Postdoctoral Fellowship applications.
In this paper, the authors highlight the crucial trade-off between privacy and utility in data sharing and call for a shift from technology-centric solutions to purpose-driven policies. The paper formulates eight actionable recommendations to guide realistic, privacy-preserving data-sharing practices in Europe.
It’s fascinating to see the tightrope dance Microsoft is doing with open source. While most of its operating system is closed source, Microsoft actively participates in several open source projects and provides some of its programs under an open source license. Open sourcing is beneficial because it allows security researchers to examine the source code (…)
ENISA’s new vulnerability database is a significant development in the pursuit of European digital sovereignty. It reduces reliance on US-dominated resources and could lead to better alignment with EU regulations, such as the GDPR and the NIS2 Directive. However, key questions remain about coordination with existing global databases, disclosure policies, and the participation of non-EU (…)
‘I call it the ‘AI dilemma’: while AI may threaten many jobs, it also serves as an essential tool to mitigate its own impact by boosting re-skilling and upskilling initiatives. I appreciate this article because it demonstrates how agentic AI can be employed in lifelong learning systems to reduce skill gaps, which are in part (…)
Should we use the tools that can destroy us to help us? This high-school student developed a tool to flag potential extremists on Reddit and then engage with them to de-radicalize them. According to the student, he never actually employed the chat function on real persons, only on fake accounts. Reddit’s terms forbid using AI (…)
Supply chain attacks are improving through automation. Adding new libraries to a software project has always been a point of vulnerability, but now that tools like ‘Cursor’ can add libraries automatically, developers are paying less attention to what gets installed. Some tools add libraries that send API keys of LLMs to attackers and load other (…)
Interested in this training? Reach out to us to discuss Teaser Step into the future of digital identity with our immersive training, “SWIYU Integration – Privacy-Preserving Solutions for the Swiss eID.”Tailored for engineers, this course provides a comprehensive exploration of the Swiss eID system, offering an in-depth understanding of its authentication protocols, security measures, and (…)
November 7th, 2025, 09h00-12h30, EPFL Introduction Agentic AI—capable of autonomously handling tasks, coordinatingworkflows, and interacting with people and systems—continues totransform how organizations operate. Unlock the potential of agentic AI with our course, “Agentic AI Unveiled: Myth, Reality, and Trust.” Designed specifically for decision makers, this 2.75-hour program provides a clear and comprehensive overview of AI (…)
I like this insight into how a cybersecurity company works to defend against adversaries trying to infiltrate them. According to this article, one of the most important attack vectors is job applicants who want to infiltrate the company. Which of course makes sense, now that other attacks become more difficult, going back to the good (…)
Large language models (LLMs) are trained on huge amounts of data, but companies rarely explain exactly what data they use. This makes it hard to trust these models, since bad data can lead to wrong answers. There’s also a legal problem: Is it allowed to use free online content (like books or articles) for training, (…)
This article reveals how North Korean agents are finding jobs in IT by exploiting remote working opportunities and AI-powered interviews. This combination of espionage and “legitimate” job hunting creates a worrying new security challenge, as companies cannot verify who they are actually hiring. This sophisticated operation illustrates the evolution of cybersecurity threats, which are no (…)