EPFL researchers have developed new software – now spun off into a start-up – that eliminates the need for data to be sent to third-party cloud services when AI is used to complete a task. This could challenge the business model of Big Tech.
Cyber-threats have been accelerating due to the exponential growth of network connectivity. This expanded connectivity provides myriad opportunities for malicious hackers to wreak significant damage for commercial, political, or other gains.
Software developers and users share vulnerability information through standardized formats and processes (e.g., CVEs) to alert affected parties. Users can check their Software Bills of Materials to identify and fix vulnerabilities. I wonder whether the same will eventually happen with governance vulnerabilities such as this one. How can affected parties be notified of such trust (…)
The ‘hacker paragraph’ in Germany is a law saying that you are not allowed to break into other people’s IT systems, not even for research, nor as a white hat hacker who discloses their findings responsibly. The development or distribution of software for such purposes is also prohibited. For researchers and white hat hackers alike, this is of (…)
EPFL researchers have developed and tested Votegral, a complete e-voting pipeline, demonstrating for the first time that there is a plausible and practical approach to coercion-resistant electronic voting in elections.
noyb’s latest victory may sound like a technicality – who is responsible for complying with the GDPR – but it is actually very important, because if no one knows who is responsible, no one really is responsible. All the more important, then, that the ruling clearly identifies Microsoft U.S. as the company actually selling the product (…)
For security reasons, people want code to be ‘formally verified’, for example for libraries doing cryptographic operations. But what does this actually mean? And is ‘formally verified’ the panacea for secure and correct code in all situations? Of course not. Hillel gives some very easy examples where even the definition of ‘correct’ is not easy (…)
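To make the difficulty concrete, here is a classic illustration of the point (my own sketch, not an example taken from Hillel’s article): even for something as simple as sorting, a naive specification of ‘correct’ admits obviously wrong programs.

```python
# Naive spec: "the output is in nondecreasing order".
def is_ordered(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

# A clearly wrong 'sort' that nevertheless satisfies the naive spec.
def bogus_sort(xs):
    return []

def real_sort(xs):
    return sorted(xs)

# A fuller spec must also require the output to be a permutation of the input.
def is_permutation(xs, ys):
    return sorted(xs) == sorted(ys)

data = [3, 1, 2]
assert is_ordered(bogus_sort(data))                  # naive spec: passes!
assert is_ordered(real_sort(data))
assert not is_permutation(bogus_sort(data), data)    # fuller spec catches it
assert is_permutation(real_sort(data), data)
```

Formally verifying code only guarantees that it meets its specification; if the specification itself is incomplete, as above, the guarantee is hollow.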
No sooner had I begun to express my bemusement at the apparent popularity of a new app that pays users to record their phone calls and sells the data to AI firms, than the app was summarily shut down after a security flaw was reported that allowed anyone to access the phone numbers, call recordings, (…)
A short and nice definition of agents: ‘An LLM agent runs tools in a loop to achieve a goal.’ Of course, this is only the technical description, and the applications are also very important. But for the moment we need to be clear that, as long as agents do not possess the ability to make (…)
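That one-sentence definition can be sketched in a few lines of code. This is a minimal illustration with a stubbed-out model standing in for a real LLM API (the function and tool names here are hypothetical, not from any particular framework):

```python
# 'An LLM agent runs tools in a loop to achieve a goal.'

def calculator(expression: str) -> str:
    """A tool the agent may invoke."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def stub_model(goal: str, history: list) -> dict:
    """Stand-in for an LLM deciding the next action.
    A real agent would send `goal` and `history` to a model API."""
    if not history:                              # first turn: call a tool
        return {"action": "tool", "name": "calculator", "input": goal}
    return {"action": "finish", "answer": history[-1]}

def run_agent(goal: str) -> str:
    history = []
    for _ in range(10):                          # the loop, with a step budget
        step = stub_model(goal, history)
        if step["action"] == "finish":
            return step["answer"]
        result = TOOLS[step["name"]](step["input"])
        history.append(result)                   # feed tool output back in
    return "step budget exhausted"

print(run_agent("2 + 3 * 4"))                    # prints 14
```

Everything interesting in a real agent lives in the model’s decision step; the loop itself really is this simple.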
The collaboration between the Swiss Data Science Center (SDSC) and the Canton of Vaud aims to generate a tangible and lasting impact on the economy and public community of the Vaud region. In this context, the SDSC supports collaborative projects in the field of data science, bringing together academic excellence, companies (particularly SMEs), and public actors.
I found this article interesting because, rather than perpetuating fear-driven narratives, it provides a thorough analysis backed by demographic realities in the Western world. Labour shortages, it suggests, make it unlikely that AI will ‘take all our jobs’. It emphasises how AI can increase access to specialist roles for a wider range of workers. The (…)
Partner with the World Bank to develop digital solutions for Ghana Tax Administration this autumn.
Using the recent data leak affecting Swiss federal politicians as an example, this article clearly illustrates why we as a society should not trivialize such leaks.
Contrary to what the author posits, I don’t think there is anything to fix about social media: it gives people the adrenaline rush they want, and reassures VC investors by showing exponential growth. What can be changed is the way we prioritize seeking spaces to hang out with people we like. Mastodon is one such (…)
The new bill shifts Japan’s strategy from defensive cybersecurity to active threat disruption, similar to approaches in other countries like the U.S. However, it uniquely empowers military and law enforcement to take preemptive actions, including deploying ‘cyber harm prevention officers’ to disrupt enemy servers without explicit oversight during critical incidents, raising concerns about potential ‘vigilante (…)
Companies are taking advantage of the digital world to keep control over physical devices, even after you buy them. The latest licensing terms of the Nintendo Switch 2 contain wording that allows the company to permanently disable the console if it determines you’ve violated its terms. This highlights a serious trust concern: even after paying (…)
Join us on 13 May for the launch conference of the 4th edition of Trust4SMEs, the Trust Valley’s support programme for digital trust and cybersecurity.
In addition to its core research activities as outlined below, the Swiss AI Initiative is distributing 10-20 million GPU hours in 2025 for disruptive research projects through open calls. We are looking for research projects that aim to contribute to advances in AI fundamentals or impactful applications of AI. Researchers outside of Switzerland are encouraged to apply if they team up with at least one of our PIs and aim to create novel open science artifacts that benefit the Swiss, European or global ecosystem and societal context.
In our latest edition of the C4DT Digital Governance Book Review, we discuss Marietje Schaake’s book which explores the deep intertwining of Big Tech with politics, highlighting its threat to democracy and proposing practical regulatory solutions to reclaim democratic processes and safeguard state sovereignty.
Schaake, Marietje (2024). The Tech Coup – How to Save Democracy from Silicon Valley. Princeton University Press, 336 pages. By Melanie Kolbe-Guyot It is safe to say that no other book belongs more on your reading list this year than Marietje Schaake’s 2024 “The Tech Coup – How to Save Democracy from Silicon Valley”. (…)
Welcome to the Factory Update for Fall 2024. Twice a year we take the time to present some of the projects we see coming out of our affiliated labs and give you a short summary of what we’ve been doing the past 12 months. Please also give us some short feedback on what you most (…)
I found this article fascinating since it highlights the complexity of defining what ‘open-source AI’ should mean. Today, it is a confusing topic, with many models being ‘open-source’ in name only. For neural networks, training data and source code are inseparable; together, they define how the model is programmed. Therefore, should we be allowed to (…)
Wow – this is a counter-attack done right! Sophos explains how they tracked hackers of their firewall product by adding code which tags attacks and reports them back to Sophos HQ. They managed to get a lot of information about the hackers, including their whereabouts. What I really liked about the article is how it shows (…)
The leaked internal TikTok documents confirm a long-held suspicion: we urgently need to stop entrusting social media companies with putting up their own safety rails. Dampening addictive features, bursting filter bubbles and moderating content directly contradict maximising user engagement, the metric by which such companies live and die. We need binding regulations with real teeth to protect (…)