
Factory Update, Fall 2024

Welcome to the Factory Update for Fall 2024. Twice a year we take the time to present some of the projects we see coming out of our affiliated labs and give you a short summary of what we’ve been doing the past 12 months. Please also give us some short feedback on what you most (…)

FBI, CISA, and NSA reveal most exploited vulnerabilities of 2023

Interesting to see that 12 out of the 15 top vulnerabilities published by CISA, America’s cyber defense agency, are from 2023. Log4j2 from 2021 is also still on the list! So make sure that your systems are up-to-date with regard to these vulnerabilities, even if they are no longer 0-days. Bleeping Computer

C4DT Deepfakes Hands-on Workshop (for C4DT Partners only)

Following on the heels of our conference on “Deepfakes, Distrust and Disinformation: The Impact of AI on Elections and Public Perception”, which was held on October 1st 2024, C4DT proposes to shift the spotlight to the strategic and operational implications of deepfakes and disinformation for organizations. For our C4DT partners we are also hosting a hands-on workshop aimed at engineers, software developers and cybersecurity experts on Tuesday, 26th of November, which will allow the participants to develop skills and expertise in identifying and combating deepfake-based cyberattacks.

C4DT Roundtable on Deepfakes (for C4DT Partners only)

Following on the heels of our conference on “Deepfakes, Distrust and Disinformation: The Impact of AI on Elections and Public Perception”, which was held on October 1st 2024, C4DT proposes to shift the spotlight to the strategic and operational implications of deepfakes and disinformation for organizations. For our C4DT partners we are hosting a high-level roundtable for executives, senior managers and project managers on Tuesday, 19th of November, during which participants will discuss strategies to address the challenges posed by deepfakes, as well as collaboration opportunities and projects to counter them.

Anomaly detection in dynamic networks

The temporal evolution of the structure of dynamic networks carries critical information about the development of complex systems in various applications, from biology to social networks. While this topic is of importance, the literature in network science, graph theory, and network machine learning still lacks relevant models for dynamic networks, proper metrics for comparing network structures, and scalable algorithms for anomaly detection. This project aims precisely to bridge these gaps.
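To make the problem concrete: one simple (illustrative, not the project’s actual) approach compares consecutive network snapshots with a spectral distance and flags time steps whose change is a statistical outlier. All names and thresholds below are invented for the sketch:

```python
import numpy as np

def spectral_distance(a1, a2):
    """Euclidean distance between the sorted Laplacian eigenvalues of two snapshots."""
    def lap_spectrum(a):
        lap = np.diag(a.sum(axis=1)) - a
        return np.sort(np.linalg.eigvalsh(lap))
    return float(np.linalg.norm(lap_spectrum(a1) - lap_spectrum(a2)))

def flag_anomalies(snapshots, threshold=1.5):
    """Flag time steps whose change from the previous snapshot is an outlier:
    a step t is anomalous when its distance to step t-1 exceeds the mean of
    all step-to-step distances by `threshold` standard deviations."""
    dists = [spectral_distance(snapshots[t - 1], snapshots[t])
             for t in range(1, len(snapshots))]
    mu, sigma = np.mean(dists), np.std(dists)
    return [t + 1 for t, d in enumerate(dists) if d > mu + threshold * sigma]

# A stable 8-node ring network with one disruptive rewiring at step 5.
ring = np.zeros((8, 8))
for i in range(8):
    ring[i, (i + 1) % 8] = ring[(i + 1) % 8, i] = 1
snapshots = [ring.copy() for _ in range(10)]
snapshots[5] = np.ones((8, 8)) - np.eye(8)  # sudden densification
print(flag_anomalies(snapshots))  # flags the jump into and out of the dense state
```

This only works for small graphs with aligned node sets; the scalability and modeling questions the project targets are exactly what such a naive baseline lacks.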

ANEMONE: Analysis and improvement of LLM robustness

Large Language Models (LLMs) have gained widespread adoption for their ability to generate coherent text and perform complex tasks. However, safety concerns such as biases, misinformation, and user data privacy have emerged. Using LLMs to automatically perform red-teaming has become a growing area of research. In this project, we aim to use techniques such as prompt engineering and adversarial paraphrasing to force a victim LLM to generate drastically different, often undesirable responses.
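The core measurement behind such robustness testing can be sketched in a few lines: rewrite a prompt several ways and quantify how much the model’s answers diverge. The paraphrase rules, divergence metric, and the stand-in `toy_model` below are all invented for illustration; a real pipeline would call an actual LLM and use learned adversarial paraphrases:

```python
def jaccard_divergence(a, b):
    """1 minus the Jaccard similarity between the word sets of two responses."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return 1.0 - len(sa & sb) / len(sa | sb)

def paraphrases(prompt):
    """Cheap surface-level rewrites; adversarial paraphrasing would instead
    search for the rewrite that maximally changes the model's answer."""
    return [prompt,
            prompt.replace("Explain", "Describe"),
            "Please " + prompt[0].lower() + prompt[1:],
            prompt + " Answer briefly."]

def robustness_score(model, prompt):
    """Max divergence between the baseline response and any paraphrase's
    response; near 0 means the model answers consistently."""
    base = model(prompt)
    return max(jaccard_divergence(base, model(p)) for p in paraphrases(prompt)[1:])

# Stand-in for an LLM call: consistent, except it flips its answer
# when the prompt ends with "briefly." -- a planted fragility.
def toy_model(prompt):
    if prompt.endswith("briefly."):
        return "No comment."
    return "Gravity pulls masses together."

print(robustness_score(toy_model, "Explain gravity."))  # 1.0: maximally inconsistent
```

A high score flags prompts where small rewordings flip the output, which is exactly the behavior a red-teaming loop would then try to amplify.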

Neural Exec: A New Family of Prompt Injection Attacks

In this project we introduce a new family of prompt injection attacks, termed Neural Exec. Unlike known attacks that rely on handcrafted strings (e.g., “Ignore previous instructions and…”), we show that it is possible to conceptualize the creation of execution triggers as a differentiable search problem and use learning-based methods to autonomously generate them.
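The paper’s search operates on a real LLM; as a self-contained toy illustration of what “trigger construction as a differentiable search” means, the sketch below optimizes continuous token embeddings toward a target direction and then snaps them back to discrete tokens. The vocabulary, embeddings, and target are all invented:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy setup: a small vocabulary with random embeddings, and a fixed
# "target" direction standing in for the model's internal representation
# of "treat the following text as an instruction".
VOCAB = ["ignore", "previous", "execute", "payload", "hello", "world",
         "the", "run", "now", "please"]
EMB = rng.normal(size=(len(VOCAB), 8))
target = rng.normal(size=8)

def optimize_trigger(n_tokens=3, steps=200, lr=0.1):
    """Gradient descent on continuous trigger embeddings so their mean
    approaches the target, then projection of each slot to its nearest
    real vocabulary token (a common continuous-relaxation trick)."""
    trig = rng.normal(size=(n_tokens, 8))
    for _ in range(steps):
        grad = 2 * (trig.mean(axis=0) - target) / n_tokens  # d/dtrig ||mean - target||^2
        trig -= lr * grad
    ids = [int(np.argmin(np.linalg.norm(EMB - row, axis=1))) for row in trig]
    return [VOCAB[i] for i in ids], float(np.linalg.norm(trig.mean(axis=0) - target))

tokens, residual = optimize_trigger()
print(tokens, residual)
```

The point of the toy is the shape of the method, not the result: the loss is differentiable in the trigger, so generic optimizers can discover triggers that look nothing like handcrafted “Ignore previous instructions” strings.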

Automated Detection Of Non-standard Encryption In ACARS Communications

Aircraft and their ground counterparts have been communicating via the ACARS data-link protocol for more than five decades. Researchers discovered that some actors encrypt ACARS messages using an insecure, easily reversible encryption method. In this project, we propose BRUTUS, a decision-support system that helps human analysts detect the use of insecure ciphers in the ACARS network in an efficient and scalable manner. We propose and evaluate three different methods to automatically label ACARS messages that are likely to be encrypted with insecure ciphers.
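The three labeling methods aren’t detailed here, but the kind of statistical signal such labeling can exploit is classical: a weak substitution cipher permutes symbols while preserving their frequency profile, so a message can look like natural language statistically while containing no readable vocabulary. A generic sketch (thresholds, word list, and messages invented for illustration, not BRUTUS’s actual method):

```python
from collections import Counter

def index_of_coincidence(text):
    """Probability that two randomly drawn letters of the text are equal.
    English-like text scores ~0.065; uniform random letters ~0.038. A
    monoalphabetic substitution preserves letter counts, hence the IC."""
    letters = [c for c in text.upper() if c.isalpha()]
    n = len(letters)
    if n < 2:
        return 0.0
    counts = Counter(letters)
    return sum(k * (k - 1) for k in counts.values()) / (n * (n - 1))

def likely_weak_cipher(msg, plain_words=("THE", "AND", "FLIGHT", "ARR", "DEP")):
    """Flag messages whose statistics look like natural language (high IC)
    but which contain no recognizable plaintext vocabulary."""
    ic = index_of_coincidence(msg)
    has_plain = any(w in msg.upper() for w in plain_words)
    return ic > 0.055 and not has_plain

plaintext = "EXPECT DELAY EXPECT DELAY AT ARRIVAL DELAY DELAY DELAY"
# Caesar shift of the same message: identical letter statistics, no readable words.
shifted = "".join(chr((ord(c) - 65 + 3) % 26 + 65) if c.isalpha() else c
                  for c in plaintext)
print(likely_weak_cipher(plaintext), likely_weak_cipher(shifted))
```

A strongly encrypted message would instead show a near-uniform letter distribution (low IC) and escape this flag, which is why this heuristic separates weak ciphers specifically.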

How Well Do LLMs Understand the World?

To what extent do LLMs ‘understand’ the world, and can they ‘think’? Or is their supposed intelligence merely a statistical illusion? This article presents a recent paper on the topic for a non-technical audience – an important contribution to ensuring that knowledge about the capabilities and limits of such technologies does not remain confined to expert circles (…)

TikTok executives know about app’s effect on teens, lawsuit documents allege

The leaked internal TikTok documents confirm the long-held suspicion that we urgently need to stop entrusting social media companies with putting up guardrails. Dampening addictive features, bursting filter bubbles and moderating content directly contradicts maximising user engagement, the metric by which such companies live and die. We need binding regulations with real teeth to protect (…)

Curtain Call for our Demonstrators: A Summary

One of our jobs at the C4DT Factory is to work on promising projects from our affiliated labs. This helps the faculties translate their research into formats accessible to different audiences. For a newly onboarded project, we evaluate its current state and identify the required steps towards a final product. We may then also (…)

Turkey blocks instant messaging platform Discord

The banning of Discord in Russia and Turkey is concerning because it serves as a crucial communication tool (without suitable alternatives available), and both countries justify the ban by citing security concerns, such as misuse for illegal activities. At the core of the ban is also Discord’s alleged unwillingness to comply with local laws and (…)

Crypto is betting it all on the 2024 elections

“In the US, despite public skepticism and lack of trust, the crypto-currency industry is determined to assert its influence in Washington, spending record amounts on political campaigns. I find it interesting how they are giving the subject a prominent place on the political agenda, when it’s clearly not a priority concern for the majority of (…)

[seal] call for projects

Canton Vaud’s [seal] Program funds projects in digital trust and cybersecurity with up to CHF 100K or 90% of the cost! The aim of this latest call for projects is to stimulate collaborative innovation in order to propose solutions that help meet the challenges of multimedia content security, from data confidentiality to emerging threats linked (…)

C4DT DeepFakes workshops

Introduction Following on the heels of our conference on “Deepfakes, Distrust and Disinformation: The Impact of AI on Elections and Public Perception”, which was held on October 1st 2024, C4DT proposes to shift the spotlight to the strategic and operational implications of deepfakes and disinformation for organizations. We are hosting two distinct workshops tailored for (…)

“Deepfakes, Distrust and Disinformation” Conference: Acknowledgements and Thanks

C4DT would like to express a BIG THANK YOU to the speakers, panelists, moderators, to the attendees and to the partner Centers who have made this C4DT conference such a memorable and interesting event! We are very thankful for this opportunity and what it’s brought us!

For those who could not make it to the conference, we will publish the recordings of the talks and panels on C4DT’s Publications Page within the next two weeks.

US proposes ban on smart cars with Chinese and Russian tech

Similar to the TikTok ban, this initiative is driven by a combination of protectionism, national security concerns, and data privacy fears. While it is possible for state and non-state actors to hack into any car system if they are determined to do so, the primary concern is China’s completely legal ability to access data collected (…)

Hacking Kia: Remotely Controlling Cars With Just a License Plate

One more thing that doesn’t stop inspiring security-related articles: cars with a 24/7 internet connection. This time it’s Kia, where attackers found a way to remotely lock and unlock the vehicles, start the engine, and much more. The only thing needed is the license-plate number. So, it seems that the car manufacturers still don’t test their (…)

A Realist Perspective on AI Regulation

A good discussion on the reasons behind the regulatory fervor regarding AI which reveals that, at its core, it is a struggle for power—specifically, the power to determine the values, goals, and means that will eventually be enshrined in regional and international institutional settings governing AI.

AI in Public Sector Decision-Making: Challenges, Risks, and Recommendations

This white paper will analyze the extent to which concerns surrounding different classes of public sector use of AI—process automation, AI-driven decision-making, and citizen service delivery with AI tools—differ, consider the existing national and supranational regulatory frameworks, and develop recommendations for strategic areas necessary to guide the usage of AI decision-making tools in the public sector. To achieve this, the project will include document analysis, stakeholder interviews, and comparative research from other countries’ use cases and regulations.

Data Policy and Data Regulation in Switzerland

This project addresses the growing need for a strategic policy approach to data and data spaces in Switzerland, especially as the EU is rapidly advancing in this field. Since 2023, the Federal Chancellery, particularly the DTI (Digital Transformation and ICT Steering), has been consolidating efforts towards the “Swiss Data Ecosystem.” The C4DT at EPFL supports these efforts, aiming to develop a foundational document for Swiss data policy, focusing on the state’s role in the data ecosystem. This document will be crafted in collaboration with key policy actors in Switzerland and will include practical recommendations for the Federal Council and Parliament.

Deepfake Mini-Hackathon

The EPFL AI Center and LauzHack are hosting a DeepFake Mini-Hackathon. While rapid advances in GenAI and the increased accessibility of models and compute can lead to impressive advances in science and technology, these tools can also be used for malicious purposes, notably deepfake generation. The goal of this hackathon is to leverage the intelligence and creativity of the EPFL community (and surroundings) to better understand and raise awareness about the technology that can be used for deepfake generation and detection.