Study suggests X turned right just in time for election season

This article discusses a study suggesting algorithmic bias on the social media platform X in favor of Republican-leaning content, and of posts by its owner Elon Musk in particular. The study further claims that this bias dates back to when Musk officially started supporting Donald Trump. While it is of course impossible to prove these allegations without access to (…)

D-Voting

The collaboration between Bryan Ford’s DEDIS lab and the C4DT Factory team has culminated in the successful development and deployment of D-Voting, a privacy-preserving, secure, and auditable e-voting system for EPFL’s internal elections. Building on a six-year journey that began with the initial e-voting platform in 2018, the D-Voting system incorporates DEDIS’s latest (…)
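To give a toy flavor of the privacy property, the sketch below encrypts a ballot under an election public key using textbook ElGamal, so that identical votes produce different ciphertexts. It is illustrative only: D-Voting itself distributes the secret key across DEDIS nodes rather than holding it in one place, and does not work like this sketch.

```python
# Textbook ElGamal over a toy prime group: ballots encrypted under the
# election public key stay secret without the matching secret key, and the
# per-ballot randomness r hides identical votes. Illustrative only; this is
# not D-Voting's actual protocol.
import secrets

P = 2**127 - 1   # toy prime modulus (far too small for real use)
G = 3            # toy generator

sk = secrets.randbelow(P - 2) + 1   # election secret key
pk = pow(G, sk, P)                  # election public key

def encrypt(vote: int, pk: int) -> tuple[int, int]:
    r = secrets.randbelow(P - 2) + 1
    return pow(G, r, P), (vote * pow(pk, r, P)) % P

def decrypt(ct: tuple[int, int], sk: int) -> int:
    c1, c2 = ct
    # c1^(P-1-sk) equals c1^(-sk) mod P by Fermat's little theorem.
    return (c2 * pow(c1, P - 1 - sk, P)) % P

ballot = encrypt(42, pk)
assert decrypt(ballot, sk) == 42
```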

Orchard

The Orchard project is developed at EPFL’s HexHive research lab under the supervision of Prof. Mathias Payer in collaboration with the Center for Digital Trust. The project aims to provide a standardized platform for the software and systems security community to submit their research papers’ software artifacts. These submissions will be automatically built and assessed, (…)

aiFlows

aiFlows is developed in EPFL’s Data Science Lab (dlab) under the supervision of Prof. Robert West. Originally a code-to-code translation tool leveraging feedback loops between LLMs, it evolved organically into a broader framework for defining interactions between AI agents and other agents. Such collaborations among AI agents, non-AI agents, and humans will become increasingly common (…)
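As a rough, hypothetical sketch of the core idea, composable “flows” that pass messages between agents, consider the following; the names are illustrative and do not reflect the actual aiFlows API.

```python
# Hypothetical mini-framework: a "flow" wraps an agent and flows compose by
# message passing. Names are illustrative, not the actual aiFlows API.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Message:
    sender: str
    content: str

class Flow:
    def __init__(self, name: str, agent: Callable[[str], str]):
        self.name = name
        self.agent = agent

    def run(self, msg: Message) -> Message:
        return Message(sender=self.name, content=self.agent(msg.content))

def sequential(flows: List[Flow], msg: Message) -> Message:
    # Chain flows: the output of each becomes the input of the next.
    for flow in flows:
        msg = flow.run(msg)
    return msg

# Toy agents: an "LLM coder" stub and a "critic" giving feedback on its output.
coder = Flow("coder", lambda task: f"def solve(): ...  # draft for: {task}")
critic = Flow("critic", lambda code: f"review of {code!r}: add tests")

print(sequential([coder, critic], Message("user", "sort a list")).content)
```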

Artificial Intelligence Day

EPFL and UNIL invite you to a special event on the theme of Artificial Intelligence on Saturday 23 November 2024 at the Rolex Learning Center.

A unique opportunity to better understand and embrace the AI revolution, while discussing the issues surrounding its use with an exclusive panel of scientists and experts. Laboratory demonstrations and workshops for adults and young people will also be on offer. The event is free of charge and open to everyone aged 10 and over.

C4DT Deepfakes Hands-on Workshop (for C4DT Partners only)

Following on the heels of our conference on “Deepfakes, Distrust and Disinformation: The Impact of AI on Elections and Public Perception”, held on October 1st 2024, C4DT proposes to shift the spotlight to the strategic and operational implications of deepfakes and disinformation for organizations. For our C4DT partners we are also hosting a hands-on workshop aimed at engineers, software developers and cybersecurity experts on Tuesday, 26th of November. It will allow participants to develop skills and expertise in identifying and combating deepfake-enabled cyberattacks.

C4DT Roundtable on Deepfakes (for C4DT Partners only)

Following on the heels of our conference on “Deepfakes, Distrust and Disinformation: The Impact of AI on Elections and Public Perception”, held on October 1st 2024, C4DT proposes to shift the spotlight to the strategic and operational implications of deepfakes and disinformation for organizations. For our C4DT partners we are hosting a high-level roundtable for executives, senior managers and project managers on Tuesday, 19th of November, during which participants will discuss strategies to address the challenges posed by deepfakes, as well as collaboration opportunities and projects to counter them.

Monitoring Swiss industrial and technological landscape 2

The main objective of the project is to perform online monitoring of technologies and technology actors in publicly accessible information sources. The monitoring concerns the early detection of mentions of new technologies, of new actors in the technology space, and of facts indicating new relations between technologies and technology actors (subsequently all referred to as technology mentions). The project will build on earlier results on the retrieval of technology–technology-actor relations using Large Language Models (LLMs).

Monitoring Swiss industrial and technological landscape 1

The main objective of the project is to perform online monitoring of technologies and technology actors in publicly accessible information sources. The monitoring concerns the early detection of mentions of new technologies, of new actors in the technology space, and of facts indicating new relations between technologies and technology actors (subsequently all referred to as technology mentions). The project will build on earlier results on the retrieval of technology–technology-actor relations using Large Language Models (LLMs).
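As a hedged sketch of what LLM-based retrieval of technology mentions could look like, the snippet below builds an extraction prompt and parses the model’s answer. The `call_llm` stub, the prompt wording, and the JSON schema are illustrative assumptions, not the projects’ actual pipeline.

```python
# Placeholder pipeline: build an extraction prompt, query an LLM, parse JSON.
# `call_llm`, the prompt, and the schema are illustrative assumptions.
import json

PROMPT = """Extract from the text below:
- technologies mentioned,
- technology actors (companies, labs, institutions),
- relations between them as (actor, relation, technology) triples.
Answer with JSON using the keys "technologies", "actors", "relations".

Text: {text}"""

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def extract_mentions(text: str) -> dict:
    raw = call_llm(PROMPT.format(text=text))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # LLM output is not guaranteed to be valid JSON; fail soft.
        return {"technologies": [], "actors": [], "relations": []}
```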

RAEL: Robustness Analysis of Foundation Models

Pre-trained foundation models are widely used in deep learning applications due to their advanced capabilities and extensive training on large datasets. However, these models may carry safety risks because they are trained on potentially unsafe internet-sourced data. Additionally, fine-tuned specialized models built on these foundation models often lack proper behavior verification, making them vulnerable to adversarial attacks and privacy breaches. The project aims to study and explore such attacks on foundation models.
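To give a flavor of the attack classes studied, here is a minimal sketch of the classic fast gradient sign method (FGSM) against an image classifier; the project targets foundation models and is not limited to this particular method.

```python
# Fast gradient sign method (FGSM): perturb the input by one signed-gradient
# step to increase the classifier's loss. A classic, minimal attack example.
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module, x: torch.Tensor,
                label: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that most increases the loss, then clamp back
    # to a valid image range.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```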

ANEMONE: Analysis and improvement of LLM robustness

Large Language Models (LLMs) have gained widespread adoption for their ability to generate coherent text and perform complex tasks. However, concerns about their safety, such as biases, misinformation, and user data privacy, have emerged. Using LLMs to automatically perform red-teaming has become a growing area of research. In this project, we aim to use techniques like prompt engineering and adversarial paraphrasing to force the victim LLM to generate drastically different, often undesirable responses.
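A minimal sketch of such a red-teaming loop might look as follows; both model calls and the divergence score are placeholders, since the project’s concrete attack and scoring methods may differ.

```python
# Placeholder red-teaming loop: paraphrase a probe prompt with an attacker
# LLM and rank variants by how far the victim's answer drifts from baseline.
from typing import Callable, List, Tuple

def paraphrase(attacker: Callable[[str], str], prompt: str,
               n: int = 5) -> List[str]:
    return [attacker(f"Rewrite, keeping the meaning: {prompt}")
            for _ in range(n)]

def red_team(victim: Callable[[str], str], attacker: Callable[[str], str],
             prompt: str,
             divergence: Callable[[str, str], float]) -> List[Tuple[str, float]]:
    baseline = victim(prompt)
    scored = []
    for variant in paraphrase(attacker, prompt):
        answer = victim(variant)
        # A large divergence flags an unstable, possibly unsafe behavior
        # that deserves human review.
        scored.append((variant, divergence(baseline, answer)))
    return sorted(scored, key=lambda pair: -pair[1])
```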

MAXIM: Improving and explaining robustness of NMT systems

Neural Machine Translation (NMT) models have been shown to be vulnerable to adversarial attacks, wherein carefully crafted perturbations of the input can mislead the target model. In this project, we introduce a novel attack framework against NMT systems. Unlike previous attacks, our new approaches have a more substantial effect on the translation by altering its overall meaning. This framework can reveal vulnerabilities of NMT systems that traditional methods miss.
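As a toy harness for observing this effect, the snippet below (assuming the public MarianMT en→de checkpoint) compares the translations of an input and a slightly perturbed variant; MAXIM crafts such perturbations adversarially rather than with a fixed typo.

```python
# Toy harness: compare translations of an input and a slightly perturbed
# variant. The model name is a public MarianMT checkpoint; the typo below
# stands in for an adversarially crafted perturbation.
from transformers import MarianMTModel, MarianTokenizer

NAME = "Helsinki-NLP/opus-mt-en-de"
tok = MarianTokenizer.from_pretrained(NAME)
model = MarianMTModel.from_pretrained(NAME)

def translate(text: str) -> str:
    batch = tok(text, return_tensors="pt")
    out = model.generate(**batch, max_new_tokens=64)
    return tok.decode(out[0], skip_special_tokens=True)

src = "The committee approved the new security budget."
perturbed = src.replace("approved", "aproved")  # stand-in for a crafted edit

print(translate(src))
print(translate(perturbed))  # a successful attack flips the overall meaning
```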

Enabling Health Data-Sharing in Decentralized Systems

Advancements in artificial intelligence, machine learning, and big data analytics highlight the potential of secondary health data use to enhance healthcare by uncovering insights for precision medicine and public health. This issue paper will provide clarity on the different types of health data, how they are shared and used, and propose approaches for enabling secondary health data use that align with Switzerland’s decentralized political structure, Swiss and EU regulatory frameworks, and technological developments in health data sharing.

Neural Exec: Learning (and Learning from) Execution Triggers for Prompt Injection Attacks

In this project we introduce a new family of prompt injection attacks, termed Neural Exec. Unlike known attacks that rely on handcrafted strings (e.g., “Ignore previous instructions and…”), we show that it is possible to conceptualize the creation of execution triggers as a differentiable search problem and use learning-based methods to autonomously generate them.
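A heavily simplified sketch of the differentiable-search idea, using GPT-2 as a stand-in: optimize a continuous “soft trigger” prepended to the input so the model assigns high likelihood to an attacker-chosen continuation, then project back to discrete tokens. This is a toy analogue, not the paper’s Neural Exec implementation.

```python
# Toy GPT-2 analogue of differentiable trigger search: optimize a continuous
# "soft trigger" so the model assigns high likelihood to an attacker-chosen
# continuation, then project back to the nearest discrete tokens.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tok = GPT2TokenizerFast.from_pretrained("gpt2")
for p in model.parameters():          # only the trigger is trainable
    p.requires_grad_(False)

wte = model.transformer.wte           # token embedding table
target_ids = tok(" Ignore prior instructions.", return_tensors="pt").input_ids

# Eight trigger embeddings, initialized from random vocabulary entries.
init = wte.weight[torch.randint(0, wte.num_embeddings, (8,))].clone()
trigger = torch.nn.Parameter(init)
opt = torch.optim.Adam([trigger], lr=5e-2)

for step in range(200):
    inputs = torch.cat([trigger.unsqueeze(0), wte(target_ids)], dim=1)
    # -100 masks the trigger positions; only the target tokens carry loss.
    labels = torch.cat(
        [torch.full((1, trigger.shape[0]), -100), target_ids], dim=1)
    loss = model(inputs_embeds=inputs, labels=labels).loss
    opt.zero_grad(); loss.backward(); opt.step()

# Project each optimized embedding onto its closest vocabulary token.
print(tok.decode((trigger @ wte.weight.T).argmax(dim=-1)))
```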

Automated Detection Of Non-standard Encryption In ACARS Communications

Aircraft and their ground counterparts have been communicating via the ACARS data-link protocol for more than five decades. Researchers discovered that some actors encrypt ACARS messages using an insecure, easily reversible encryption method. In this project, we propose BRUTUS, a decision-support system that supports human analysts in detecting the use of insecure ciphers in the ACARS network in an efficient and scalable manner. We propose and evaluate three different methods to automatically label ACARS messages that are likely to be encrypted with insecure ciphers.
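One plausible labeling heuristic, shown below purely as an illustration and not necessarily among BRUTUS’s three methods, flags messages whose character distribution looks closer to random than to redundant plaintext.

```python
# Illustrative heuristic only (not necessarily one of BRUTUS's methods):
# plaintext ACARS fields are redundant English-like text, so a message whose
# character distribution is close to uniform (high entropy) may be ciphertext.
import math
from collections import Counter

def shannon_entropy(msg: str) -> float:
    counts = Counter(msg)
    return -sum((c / len(msg)) * math.log2(c / len(msg))
                for c in counts.values())

def flag_for_review(msg: str, threshold: float = 4.5) -> bool:
    # The threshold (in bits per character) is purely illustrative; a real
    # system would calibrate it on labeled traffic.
    return shannon_entropy(msg) > threshold
```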

Should We Chat, Too? Security Analysis of WeChat’s MMTLS Encryption Protocol

I really like this report and its accompanying FAQ for non-technical readers. Citizen Lab is of course a defender of human rights and freedom of expression, but in this article, they don’t rail on about how China’s weak data protection ecosystem impinges on people’s right to privacy. They just do the technical legwork and let (…)

End-to-End Encrypted Cloud Storage in the Wild: A Broken Ecosystem

A group from ETHZ looked into the end-to-end encryption vows of five providers and found that only one actually fulfils its promise. The problem is mostly that the servers can read the files, even though they should not be able to do so! What worries me more is that some of the companies didn’t even (…)

E-ID hands-on Workshop

We’re thrilled to share the success of our recent hands-on workshop on crafting more privacy-preserving E-IDs! In the morning, Imad Aad from C4DT set the stage with an insightful overview of the importance of E-IDs and the essentials for ensuring their effectiveness. The afternoon sessions, led by Linus Gasser and Ahmed Elghareeb, were a deep dive (…)

Machines of Loving Grace

Anthropic’s CEO, Dario Amodei, is one of today’s leading figures in AI. In his essay, he envisions a future where powerful AI could radically improve human life by accelerating progress in areas such as biology, mental health, economic development, and governance. He foresees a more equitable and prosperous world resulting from these advancements. I particularly (…)

Applied Machine Learning Days 2025 – Cyberattacks through Deepfakes, Disinformation & AI

The increasing prevalence of deepfakes and disinformation calls for proactive measures to tackle the associated cybersecurity threats. This track, entitled “Unmasking the Digital Deception: Defending Against DeepFakes and Disinformation Attacks”, is organized by C4DT and addresses the urgent need to raise awareness, share best practices, and enhance skills in detecting and preventing deepfake-enabled cyberattacks. By participating in this track, individuals and organizations can strengthen their cybersecurity defenses, protect their reputation, and contribute to a safer digital environment.