C4DT collaborated with NCSC on implementation of Motion 23.3002. It organized the first of a series of three workshops, actively participated in two additional workshops, and delivered a summary of the workshop results from C4DT’s perspective for the final report.”
This full-day conference explores the potential disruptions caused by the rise of AI agents and their impact on existing systems and structures. Bringing together industry leaders, researchers, policymakers, and stakeholders, the event will facilitate in-depth discussions on the challenges and opportunities presented by AI agents. Participants will assess the risks, examine strategies to mitigate emerging threats, and collaborate on establishing resilient frameworks for responsible innovation.
This event is organized by the Center for Digital Trust (C4DT) at EPFL.
Am Beispiel deutscher Medien wird die Entwicklung des Online-Journalismus seit den 90ern bis heute nachvollzogen. Besonders interessant finde ich die Diskussion der jeweiligen Problematiken denen die Verlage im Web 1.0, in den sozialen Medien und heute mit generativer AI konfrontiert sind/waren.
Here’s an interesting take on what happens if security bugs are found in Open Source libraries. Now that more and more of Open Source libraries find their way into commercial products from Google, Microsoft, Amazon, and others, the problem of fixing security bugs in a timely manner is becoming a bigger problem. Open Source projects (…)
There has been a lot of discussion about the high cost of cyber-attacks, and the recent announcement that the credentials of Swiss parliamentarians are circulating on the Darknet serves as a reminder of this. Interestingly, stolen access credential is the main intrusion vector (16%) in companies and remains undetected for the longest time (an average (…)
Dieser Beitrag erläutert anschaulich am Beispiel des jüngsten Datenleaks der Schweizer Bundespolitiker:innen warum wir als Gesellschaft solche Leaks nicht bagatellisieren sollten.
Contrary to what the author posits, I don’t think there is anything to fix about social media: it gives people the adrenaline rush they want, and comforts VC money by showing exponential growth. What can be changed is the way we prioritize seeking spaces to hang out with people we like. Mastodon is one such (…)
The Center for Digital Trust hosted a successful workshop on Privacy-Preserving eID last week. We welcomed 14 participants from seven partner organizations including Be-Ys, ELCA, FOITT, Kudelski, SICPA, Swiss Post/SwissSign, and Swisscom. The day-long event combined theoretical foundations with hands-on technical demonstrations. Our focus centered on swiyu, Switzerland’s proposed eID project developed by FOITT, and (…)
This report reveals new vulnerabilities in the equipment used in solar power grids and smart homes. It shows that attackers can exploit flaws in the supply chain and insecure protocols to disrupt power generation or grid stability. Clear infographics illustrate the network structure, risks and the worldwide share of solar use. As solar energy continues (…)
This article highlights the alarming reliance of critical infrastructure on outdated technology, exposing significant vulnerabilities in essential systems. The need for uninterrupted operation and compatibility requirements presents major challenges to the modernization of these legacy systems, and the costs to upgrade are steep. Yet the potential for catastrophic failure due to obsolete equipment underscores the (…)
This essay stands out because it moves beyond sensational fears about AI replacing humans and instead offers a practical framework for understanding when AI genuinely outperforms humans and when human skills remain irreplaceable… (at least for now).
This article highlights significant flaws within the proposed NO FAKES Act, whose repercussions would extend far beyond U.S. borders. I found it particularly insightful because of the parallels it draws between this bill and existing mechanisms for addressing copyright infringement, outlining how the deficiencies within the latter are likely to be mirrored in the former.
Driven by ethical concerns about using existing artwork to train gen AI models, an artist created his own model that produces output untrained on any data at all. What was interesting to me is that, in exploring whether gen AI could create original art, he also demonstrated a potential path to better understanding how such (…)
Images façon “studio Ghibli”, tendance Starter Pack. derrière leur aspect ludique, ces images générées par l’intelligence artificielle générative posent des questions environnementales très concrètes. Réponses avec Babak Falsafi, professeur ordinaire à la faculté d’Informatique et de Communications de l’EPFL, président et fondateur de l’Association suisse pour l’efficacité énergétique dans les centres de données (SDEA).
I was intrigued by this article, as it highlights how war impacts a country’s digital assets – something that is very relevant, yet little discussed, in today’s digitalized world.
This article underscores that neither digital policies nor technologies can be discussed in isolation. Using Indonesia as an example, it lays out how the country’s laws and regulations on internet content are actually implemented by the ISPs and examines how the combination of vaguely worded laws and sweeping filtering methods ultimately impacts citizens’ access to (…)
This article is interesting because it highlights the opportunities and challenges of personal data ownership. Although tools such as dWallet claim to empower users, they can encourage the poorest and least educated people to sell their data without understanding the risks, thereby widening the digital divide. True data empowerment means that everyone must have the (…)
As LLM agents become ‘en vogue’, we need to rethink the attacks they open to malicious third parties. Here Simon Willison describes a combination often seen in such agents that will put your private data at risk. Unfortunately, there is currently not much you can do, except be aware that all the data that agents (…)
Cycle tracking apps are not only helpful for those trying to conceive, but also serve as important tools for keeping track of one’s general reproductive health. As this article discusses, however, such tools can quickly become a double-edged sword due to the high value of the data they collect, which can potentially end up in (…)
To foster wallets, credentials and trusted infrastructure for the benefit of all humans. Leading organizations from across the globe coming together to shape the future of digital identity, in particular in the realm of secure, interoperable wallets, credentials and trusted infrastructure.
That is a very nice attack on privacy-protection in the mobile browsers: even if you don’t allow any cookies and don’t consent on being tracked, you’re browsing behaviour is still tracked. The idea of communicating from the mobile browser to your locally installed app is technically very interesting, and seems to be difficult to avoid (…)
This atlas of algorithmic systems curated by AlgorithmWatch CH is a nonexhaustive yet revealing list of algorithms currently deployed in Switzerland, whether to ‘predict, recommend, affect or take decisions about human beings’ or to ‘generate content used by or on human beings.’ The atlas is really eye-opening for me – so many systems that we (…)
Cybercriminals are using U.S. cloud providers like AWS and Azure to hide their activities, creating an “infrastructure laundering” trend that complicates cybersecurity. The whac-a-mole responses highlight the urgent need for better coordination and reveal that current strategies can’t keep up with criminals’ quick adaptation and evasion tactics.
Agentic AI has only recently emerged, yet it is already being used to commit fraud. This trend is not new; historically, fraudsters have exploited new technologies to target unsuspecting users and weak security systems, as seen with the first instances of voice phishing during the rise of telephony in the early 20th-century. These challenges have (…)