From data theft to interference in democratic processes, we’ve often warned in our weekly picks of the negative consequences of digitalization. To end the year on a more positive note, I like the way this site lets you visualize the migration flows of different bird species. Beautiful and peaceful.
Looking for something different this holiday season, but don’t want to go overboard? Consider exploring social media feeds without relying on “The Algorithm” to dictate what you see. This option isn’t available on every platform and doesn’t make equal sense on all of them, but the EU has made it possible for you to have this choice. We’re (…)
The 2025 edition of the AI-days will take place from 27 to 29 January 2025, in Geneva (27 and 28 January) and Lausanne (29 January). The HES-SO AI-days are organised by the Swiss Artificial Intelligence Centre for SMEs (CSIA-PMEs), which is funded by the Engineering and Architecture Domain of HES-SO. The aim of the event is to provide a forum for discussing the practical use of new AI technologies in the economy.
On the occasion of Data Protection Day, the Faculty of Law, Criminal Justice and Public Administration of the University of Lausanne is organizing a public conference on the theme “Freedom of choice in the digital age”, in collaboration with the Federal Data Protection and Information Commissioner, the Centre universitaire d’informatique of the University of Geneva, and ThinkServices.
Privacy Enhancing Technologies, or PETs for short, is an umbrella term for a wide range of technologies and tools designed to protect our privacy online. You may not realize it, but you probably already use PETs on a daily basis. Some common examples [1] include:
- HTTPS, securing connections between you and websites
- End-to-end encryption, ensuring (…)
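For the curious, here is a toy illustration of the kind of protection encryption-based PETs provide, using symmetric authenticated encryption from Python’s cryptography package. A real end-to-end messenger layers key exchange and much more on top of a primitive like this, so treat it as a sketch of the idea, not a protocol.

```python
# Toy sketch: authenticated encryption with the "cryptography" package.
# A real E2E system (e.g. Signal) adds key exchange, ratcheting, and more.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # shared secret held only by the two parties
box = Fernet(key)

ciphertext = box.encrypt(b"meet at the usual place")
print(ciphertext)                  # unreadable without the key
print(box.decrypt(ciphertext))     # b'meet at the usual place'
```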
This report provides an overview of cybersecurity maturity across the EU, highlighting that each member state has developed its own cybersecurity strategy. What stands out most is the term “heterogeneous” used in the findings. In response, the report uses the terms “common,” “unified,” “harmonized,” “comprehensive,” “coherent,” and “coordinated” to describe the recommended policy efforts.
I like the step-by-step discovery of a potential supply chain vulnerability described in this article. It’s easy to follow and shows some of the dead ends the author ran into, and how they solved the problem in the end. Bonus points to OpenWRT for being very responsive and fixing their systems in a timely manner!
For the next two years, Brightside AI will complement the already diverse group of partner companies with its start-up perspective, collaborating and sharing insights on trust-building technologies.
Brightside prepares a company’s employees for cyberattacks. Using employees’ digital footprints, it runs AI-driven phishing simulations to prepare teams for potential attacks, and it gives employees tools to control their online presence in compliance with the GDPR.
On the one hand, I love the idea of having a personal shopper, someone who can tick off my shopping list while finding me the best deals. On the other hand, just thinking about the level of information the AI shopping agent has to collect and consolidate about me (and millions of other consumers) – (…)
This article gives an overview of how AI-generated content, most notably images, is used by far-right parties across Europe. It also incidentally highlights how lacking current abuse safeguards and strategies truly are, as almost all of the content presented in the article would surely fall under most companies’ prohibited-use policies.
If you think it’s hard to keep systems safe in your datacenter, think about the poor people who have to do the same in space! I have always marveled at the fact that NASA can keep in contact with the Voyager probes. But things will get more and more complicated with people having unauthorized access to (…)
If you want to attack the network of an organization that has a good firewall, a good entry point is the wireless network. But what do you do if the organization is on the other side of the globe? Easy: you hack a nearby organization with a bad firewall, then use their wireless network to (…)
Researchers at Google DeepMind are making progress on the interpretability of AI models using “sparse autoencoders”, a tool that helps shine a light on the inner workings of a model’s logic and on when it errs. Once these features of a model are identified, the model can, in principle, be steered away from undesirable outcomes like (…)
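To give a flavour of what a sparse autoencoder is, here is a minimal PyTorch sketch: it expands a model’s internal activations into a much larger set of features, and an L1 penalty pushes most of them to zero, which is what makes the surviving features candidates for interpretation. The sizes and names are illustrative assumptions, not DeepMind’s actual code.

```python
# Minimal sparse autoencoder sketch (illustrative, not DeepMind's code).
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model=768, d_features=8192):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # expand into many features
        self.decoder = nn.Linear(d_features, d_model)  # reconstruct activations

    def forward(self, activations):
        # ReLU keeps features non-negative; the L1 term below drives
        # most of them to exactly zero, hence "sparse".
        features = torch.relu(self.encoder(activations))
        return self.decoder(features), features

sae = SparseAutoencoder()
acts = torch.randn(32, 768)        # stand-in for a model's internal activations
recon, feats = sae(acts)
loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()
loss.backward()                    # train to reconstruct while staying sparse
```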
This article discusses a study suggesting algorithmic bias on the social media platform X favoring Republican-leaning content, and posts by its owner Elon Musk in particular. The study further claims that this bias dates back to when Musk officially started supporting Donald Trump. While it is of course impossible to prove these allegations without access to (…)
The ongoing journalistic investigation into a data set obtained as a free sample from a US-based data broker continues to show how problematic the global data market really is. The latest article, focusing on US Army bases in Germany, reveals that not only can critical personnel be tracked by identifying their movement profiles, but that (…)
The collaboration between Bryan Ford’s DEDIS lab and the C4DT Factory team has culminated in the successful development and deployment of D-Voting, a privacy-preserving, secure, and auditable e-voting system for EPFL’s internal elections. Building on a six-year journey that began with the initial e-voting platform in 2018, the D-Voting system incorporates DEDIS’s latest (…)
Arkworks started under the supervision of Prof. Alessandro Chiesa. It is a collection of fundamental algorithms and data structures used in various types of zero-knowledge proofs. Several libraries working with zero-knowledge proofs use arkworks as a foundation for the cryptographic part. It is used, for example, by the Horizon project, which creates a (…)
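Zero-knowledge systems are assembled from simpler building blocks. As a toy illustration of one of them, here is a hash-based commitment scheme in Python: you can commit to a value now and prove later what it was, without revealing it in the meantime. This is illustration only; arkworks itself is a Rust ecosystem built on far richer primitives (finite fields, elliptic curves, SNARKs).

```python
# Toy hash-based commitment, a basic building block in many proof systems.
import hashlib
import secrets

def commit(value: bytes) -> tuple[bytes, bytes]:
    nonce = secrets.token_bytes(32)                    # random nonce hides the value
    return hashlib.sha256(nonce + value).digest(), nonce

def verify(commitment: bytes, nonce: bytes, value: bytes) -> bool:
    return hashlib.sha256(nonce + value).digest() == commitment

c, n = commit(b"my vote: yes")
# ... later, reveal (value, nonce) and anyone can check the commitment:
assert verify(c, n, b"my vote: yes")
assert not verify(c, n, b"my vote: no")
```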
The Orchard project is developed at EPFL’s HexHive research lab under the supervision of Prof. Mathias Payer, in collaboration with the Center for Digital Trust. The project aims to provide a standardized platform for the software and systems security community to submit the software artifacts of their research papers. These submissions will be automatically built and assessed, (…)
aiFlows is developed in EPFL’s Data Science Lab (dlab) under the supervision of Prof. Robert West. Originally a code-to-code translation tool leveraging feedback loops between LLMs, it evolved organically into a broader framework for defining interactions between AI agents and other agents. Such collaborations between AI agents, non-AI agents and humans will become increasingly common (…)
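To give a flavour of the agents-calling-agents pattern such a framework enables, here is a generic Python sketch. The names and structure are invented for illustration; this is not the aiFlows API.

```python
# Generic sketch of agents passing work along a pipeline (not the aiFlows API).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    act: Callable[[str], str]      # message in, message out

def run_chain(agents: list[Agent], task: str) -> str:
    """Pass a task through a pipeline of agents, each refining the result."""
    message = task
    for agent in agents:
        message = agent.act(message)
        print(f"[{agent.name}] {message}")
    return message

# Stand-ins: in practice each agent would wrap an LLM call, a tool, or a human.
drafter = Agent("drafter", lambda t: f"draft({t})")
critic  = Agent("critic",  lambda d: f"critique({d})")
reviser = Agent("reviser", lambda c: f"revise({c})")

run_chain([drafter, critic, reviser], "translate this module from C to Rust")
```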
Welcome to the Factory Update for Fall 2024. Twice a year we take the time to present some of the projects we see coming out of our affiliated labs and to give you a short summary of what we’ve been doing over the past 12 months. Please also give us some short feedback on what you most (…)
Perhaps you’ve heard of iPhones rebooting if they are not used for some time. This addresses security vulnerabilities where a phone has been unlocked, so the decryption keys are in memory, but the screen lock is currently active. In this article you’ll learn which parts of the iPhone are responsible for triggering the reboot. (…)
Interesting to see that 12 out of the 15 top vulnerabilities published by CISA, America’s cyber defense agency, are from 2023. Log4j2 from 2021 is also still on the list! So make sure that your systems are up-to-date with regard to these vulnerabilities, even if they’re no longer 0-days. (Bleeping Computer)
The Markup’s guide to protecting your privacy when seeking abortion care thankfully has less urgency in Switzerland than in the United States. It serves nonetheless as a stark reminder that preserving privacy in the digital age is neither an academic exercise nor an end in itself, but a means to protect vulnerable people should the (…)
Following on the heels of our conference on “Deepfakes, Distrust and Disinformation: The Impact of AI on Elections and Public Perception”, held on October 1st 2024, C4DT proposes to shift the spotlight to the strategic and operational implications of deepfakes and disinformation for organizations. For our C4DT partners we are also hosting a hands-on workshop on Tuesday, 26 November, aimed at engineers, software developers and cybersecurity experts, which will allow participants to develop skills and expertise in identifying and combating cyberattacks that use deepfakes.