Interesting work from OpenAI, who are testing how good their models are at convincing people to change their minds. Currently, they’re running the tests only internally on pre-selected human testers. But who knows where this will eventually be used, and whether in the open or hidden? For that matter, what about the LLM-generated messages Meta (…)
While the ‘Code of conduct on countering illegal hate speech online’ that the European Commission included into the Digital Services Act (DSA) is work in progress, the fact that even companies such as Meta and X feel compelled to sign shows that regulations are far from the toothless tigers that they are often made out (…)
Friday, February 7th, 2025, 14h-17h, BC 410, EPFL Introduction Artificial Intelligence has the potential to revolutionize also software development and IT in general. To explore the implications of AI on these domains, we organize a roundtable discussion. The objective of this roundtable is to gather insights from visionaries and experts to understand the impact of (…)
I find this article interesting because it reveals how popular apps are being used to collect personal location data through real-time bidding (RTB), all without the knowledge of the app developers. The hacked Gravy Analytics files prove how apps, even those that are supposed to be private, can accidentally become part of this data supply (…)
Here is an article, in Cory Doctorow’s signature style, discussing social networks and what drives them and what makes people leave or stay. I like specifically how he dissects the way the once-good services these platforms used to provide got untethered from the profits their creators and CEOs were chasing over the years. Towards the (…)
The awesome Molly White throws light upon how to calculate the market cap of a crypto coin. I still think that decentralized systems like blockchains are very useful in some cases. However, the run for the coin with the most money seems very sad to me, and not just because of all the investors who (…)
The Ethics of Privacy and Surveillance by Carissa Véliz, Oxford University Press – 256 pages by Hector Garcia Morales “Privacy matters because it shields us from possible abuses of power”. Such a strong statement opens the introduction of the book, setting the grounds for the following pages. The thesis is that, in digital societies, there (…)
Meta lays out in this blog post their rationale behind axing third-party fact checking and sweeping changes in content moderation on Facebook, Instagram and Threads. It is important to read this (or watch Mark Zuckerberg’s video) with recent company history in mind: Facebook’s failure to properly moderate content helped fan the flames in the Rohingya (…)
In a sea of unsettling news, the US’s new Cyber Trust Mark labelling program is a welcome beacon of light. With consumers’ personal and home office spaces increasingly populated by connected devices, from door locks and doorbells, to baby monitors, vacuums, and TVs, the security of “smart home ecosystems” has never been more important. The (…)
Privacy Enhancing Technologies, or PETs for short, is an umbrella term for a wide range of technologies and tools designed to protect our privacy online. You may not realize it, but you probably already use PETs on a daily basis. Some common examples [1] include HTTPS, securing connections between you and websites End-to-end encryption, ensuring (…)
This article reveals how smart devices gather more information than typically required for their functions, including personal data, location, audio recordings … and this even for devices (like air fryers) that clearly do not require those. This (again) shows a lack of transparency and poses critical questions about consumer privacy.
If you want to attack a network of an organization which has a good firewall, a good entry point is the wireless network. But what do you do if the organization is on the other side of the globe? Easy: you hack a nearby organization with a bad firewall, then use their wireless network to (…)
This article discusses a study suggesting algorithmic bias favoring Republican-leaning content, and its owner Elon Musk’s posts in particular, on the social media platform X. The study further claims that this bias dates back to when Musk officially started supporting Donald Trump. While it is of course impossible to prove these allegations without access to (…)
The on-going journalist investigation into a data set obtained as a free sample from a US-based data broker continues to show how problematic the global data market really is. The latest article focusing on US Army bases in Germany reveals that not only can critical personnel be tracked by identifying their movement profiles, but that (…)
This publication revisits key questions with speakers from the Oct 1st conference on “Deepfakes, Distrust and Disinformation” through the lens of public perception and seeks to advance the debate surrounding AI, misinformation, and disinformation, especially in political contexts.
Welcome to the Factory Update for Fall 2024. Twice a year we take the time to present some of the projects we see coming out of our affiliated labs and give you a short summary of what we’ve been doing the past 12 months. Please also give us a short feedback on what you most (…)
Interesting to see that 12 out of the 15 top vulnerabilities published by CISA, America’s cyber defense agency, are from 2023. Log4j2 from 2021 is also still in the list! So make sure that your systems are up-to-date with regard to these vulnerabilities, even if it’s not 0-days anymore.Bleeping Computer
Following on the heels of our conference on “Deepfakes, Distrust and Disinformation: The Impact of AI on Elections and Public Perception”, which was held on October 1st 2024, C4DT proposes to shift the spotlight to the strategic and operational implications of deepfakes and disinformation for organizations. For our C4DT partners we are also hosting a hands-on workshop aimed at engineers, software developers and cybersecurity experts on Tuesday, 26th of November, which will allow the participants to develop skills and expertise in identifying and combating cyberattacks through deepfakes.
Following on the heels of our conference on “Deepfakes, Distrust and Disinformation: The Impact of AI on Elections and Public Perception”, which was held on October 1st 2024, C4DT proposes to shift the spotlight to the strategic and operational implications of deepfakes and disinformation for organizations. For our C4DT partners we are hosting a high-level roundtable for executives, senior managers and project managers on Tuesday, 19th of November, during which strategies to address the challenges posed by deepfakes, and collaboration opportunities and projects to counter them will be discussed.
An interesting question about how current LLMs are answering our questions: do they have an accurate world model which they use to answer? Or are they more like a ‘stochastical parrot’, and simply answer using previously seen reasoning? The difference is important, because it indicates the maximum precision these models can attain – the better (…)
The temporal evolution of the structure of dynamic networks carries critical information about the development of complex systems in various applications, from biology to social networks. While this topic is of importance, the literature in network science, graph theory, or network machine learning, still lacks of relevant models for dynamic networks, proper metrics for comparing network structures, as well as scalable algorithms for anomaly detection. This project exactly aims at bridging these gaps.
Large Language Models (LLMs) have gained widespread adoption for their ability to generate coherent text, and perform complex tasks. However, concerns around their safety such as biases, misinformation, and user data privacy have emerged. Using LLMs to automatically perform red-teaming has become a growing area of research. In this project, we aim to use techniques like prompt engineering or adversarial paraphrasing to force the victim LLM to generate drastically different, often undesirable responses.
In this project we introduce a new family of prompt injection attacks, termed Neural Exec. Unlike known attacks that rely on handcrafted strings (e.g., “Ignore previous instructions and…”), we show that it is possible to conceptualize the creation of execution triggers as a differentiable search problem and use learning-based methods to autonomously generate them.
Aircraft and their ground counterparts have been communicating via the ACARS data-link protocol for more than five decades. Researchers discovered that some actors encrypt ACARS messages using an insecure, easily reversible encryption method. In this project, we propose BRUTUS, a decision-support system that support human analysts to detect the use of insecure ciphers in the ACARS network in an efficient and scalable manner. We propose and evaluate three different methods to automatically label ACARS messages that are likely to be encrypted with insecure ciphers.