Skip to content

Huawei could seize China’s AI chip crown in 2026 as Nvidia’s H200 shipments stall in regulatory limbo — Beijing pushes homegrown AI hardware dominance in a market projected to hit $67 billion by 2030

This text is compelling because it confirms a core geopolitical truth that I often emphasise: digital sovereignty cannot exist without physical infrastructure. Remarkably, China has swiftly progressed from AI policy intent to fabrication plants, chips and deployment, initially for inference purposes, but already on a strategic scale. The export-clearance imbroglio also shows how regulation can (…)

Wie Tesla Unfälle verheimlichte, um seinen Autopiloten zu testen

Es ist schon lange bekannt, dass der Autopilot von Tesla nicht so gut funktioniert wie beworben. Diese Daten von 2022 zeigen aber das große Ausmaß des Risikos, das dieser Autopilot für die Insassen und die Umgebung darstellt. Brisant ist auch, dass Tesla behauptet, nicht von diesen Fehlfunktionen zu wissen, selbst im Falle eines Unfalls. Und (…)

US builds website that will allow Europeans to view blocked content

Unsurprisingly, the cyberspace reflects the geopolitical conflicts of the real world. We see western democracies criticising authoritarian regimes for cutting off their populations from the Internet for fear of external interference and fuelling demonstrations. Similarly, within the western democracies, the current US administration criticises the EU for its regulations and censorship of hate speeches, fake (…)

Can social media age verification really protect kids?

I found this article interesting because it highlights the tension between protecting children online — not just on social media, but also on shopping, gambling and adult sites — and preserving privacy. The challenge of enforcing age laws without collecting sensitive data remains, regardless of whether the burden is placed on users or platforms. eID (…)

US cyber defense chief accidentally uploaded secret government info to ChatGPT

Apparently, it happens even to the best of us — even seasoned professionals with over 24 years of IT experience and a ‘deep understanding of both the complexities and practical realities of infrastructure security!’ Jokes aside, this incident is fascinating: it exposes elite-level lapses in AI tool governance despite regulatory warnings, underscoring the enduring risks (…)

Why AI Keeps Falling for Prompt Injection Attacks

Large language models often seem very human-like, but at the same time can behave in truly baffling ways. Trying to explain this seemingly erratic behaviour can very easily lead one to get lost in technical details. All the more reason why I appreciate articles like this one that provide an accessible explanation, in this case (…)

Mozilla Says It’s Finally Done With Two-Faced Onerep

Software developers and users share vulnerability information through standardized formats and processes (e.g., CVEs) to alert affected parties. Users can check their Software Bills of Materials to identify and fix vulnerabilities. I wonder whether the same will eventually happen with governance vulnerabilities such as this one. How can affected parties be notified of such trust (…)

Hacker Paragraph: BSI Chief Calls for Decriminalization of Security Researchers

The ‘hacker paragraph’ in Germany is a law saying that you are not allowed to break into foreign IT systems, not even for research, nor as a white hat hacker who discloses their findings responsibly. The development or distribution of such software is also prohibited. For researchers and white hat hackers alike, this is of (…)

Microsoft ‘illegally’ tracked students via 365 Education, says data watchdog

noyb’s latest victory may sound like a technicality – who is responsible for complying with the GDPR – but it is actually very important, because if no one knows who is responsible, no one really is responsible. All the more important that the ruling clearly holds Microsoft U.S. as the company actually selling the product (…)

Three ways formally verified code can go wrong in practice

For security reasons, people want code to be ‘formally verified’, for example for libraries doing cryptographic operations. But what does this actually mean? And is ‘formally verified’ the panacea for secure and correct code in all situations? Of course not. Hillel gives some very easy examples where even the definition of ‘correct’ is not easy (…)

Second Call for Vaud Projects

The collaboration between the Swiss Data Science Center (SDSC) and the Canton of Vaud aims to generate a tangible and lasting impact on the economy and public community of the Vaud region. In this context, the SDSC supports collaborative projects in the field of data science, bringing together the strengths of academic excellence, companies, particularly SMEs and public actors.

AI Could Actually Help Rebuild The Middle Class

I found this article interesting because, rather than perpetuating fear-driven narratives, it provides a thorough analysis backed by demographic realities in the Western world. Labour shortages, it suggests, make it unlikely that AI will ‘take all our jobs’. It emphasises how AI can increase access to specialist roles for a wider range of workers. The (…)

Japan enacts new Active Cyberdefense Law allowing for offensive cyber operations

The new bill shifts Japan’s strategy from defensive cybersecurity to active threat disruption, similar to approaches in other countries like the U.S. However, it uniquely empowers military and law enforcement to take preemptive actions, including deploying ‘cyber harm prevention officers’ to disrupt enemy servers without explicit oversight during critical incidents, raising concerns about potential ‘vigilante (…)

Nintendo says your Switch 2 isn’t really yours even if you paid for it

Companies are taking advantage of the digital world to keep control over physical devices, even after you buy them. The latest licensing terms of the Nintendo Switch 2 contains wording that allows the company to permanently disable the console if it determines you’ve violated their terms. This highlights a serious trust concern: even after paying (…)

Democratizing large-scale AI for the benefit of society: Open calls for disruptive ideas

In addition to its core research activities as outlined below, the Swiss AI Initiative is distributing 10-20 million GPU hours in 2025 for disruptive research projects through open calls. We look for research projects that aim to contribute to advances in AI fundamentals or impactful applications of AI. Researchers outside of Switzerland are encouraged to apply if they team up with at least one of our PIs and aim to create novel open science artifacts that benefit the Swiss, European or global ecosystem and societal context.