
How well do LLMs understand the world?

To what extent do LLMs 'understand' the world, and can they 'think'? Or is their apparent intelligence merely an illusion of statistics? This article presents a recent paper on the topic for a non-technical audience – an important contribution to ensuring that knowledge about the capabilities and limits of such technologies does not remain only within the circle of experts (…)

TikTok executives know about app’s effect on teens, lawsuit documents allege

The leaked internal TikTok documents confirm the long-held suspicion that we urgently need to stop entrusting social media companies with putting up safety rails. Dampening addictive features, bursting filter bubbles, and moderating content directly contradict maximising user engagement, the metric by which such companies live and die. We need binding regulations with real teeth to protect (…)

Curtain Call for our Demonstrators: A Summary

One of our jobs at the C4DT Factory is to work on promising projects from our affiliated labs. This helps the faculties translate their research into formats accessible to different audiences. For a newly on-boarded project, we evaluate its current state and identify the required steps towards a final product. We may then also (…)

Turkey blocks instant messaging platform Discord

The banning of Discord in Russia and Turkey is concerning because the platform serves as a crucial communication tool (without suitable alternatives available), and both countries justify the ban by citing security concerns, such as misuse for illegal activities. At the core of the ban is also Discord’s alleged unwillingness to comply with local laws and (…)

Crypto is betting it all on the 2024 elections

In the US, despite public skepticism and lack of trust, the cryptocurrency industry is determined to assert its influence in Washington, spending record amounts on political campaigns. I find it interesting how they are giving the subject a prominent place on the political agenda, when it’s clearly not a priority concern for the majority of (…)

[seal] call for projects

Canton Vaud’s [seal] Program funds projects in digital trust and cybersecurity with up to CHF 100K or 90% of cost! The aim of this latest call for projects is to stimulate collaborative innovation in order to propose solutions that help meet the challenges of multimedia content security, from data confidentiality to emerging threats linked (…)

C4DT DeepFakes workshops

Introduction

Following on the heels of our conference on “Deepfakes, Distrust and Disinformation: The Impact of AI on Elections and Public Perception”, which was held on October 1st 2024, C4DT proposes to shift the spotlight to the strategic and operational implications of deepfakes and disinformation for organizations. We are hosting two distinct workshops tailored for (…)

“Deepfakes, Distrust and Disinformation” Conference: Acknowledgements and Thanks

C4DT would like to express a BIG THANK YOU to the speakers, panelists, moderators, to the attendees and to the partner Centers who have made this C4DT conference such a memorable and interesting event! We are very thankful for this opportunity and what it’s brought us!

For those who could not make it to the conference, we will publish the recordings of the talks and panels on C4DT’s Publications Page within the next two weeks.

US proposes ban on smart cars with Chinese and Russian tech

Similar to the TikTok ban, this initiative is driven by a combination of protectionism, national security concerns, and data privacy fears. While it is possible for state and non-state actors to hack into any car system if they are determined to do so, the primary concern is China’s completely legal ability to access data collected (…)

Hacking Kia: Remotely Controlling Cars With Just a License Plate

One more topic that doesn’t stop inspiring security-related articles: cars with a 24/7 internet connection. This time it’s Kia, where attackers found a way to remotely lock and unlock vehicles, start the engine, and much more. The only thing needed is the license-plate number. So it seems that car manufacturers still don’t test their (…)

A Realist Perspective on AI Regulation

A good discussion on the reasons behind the regulatory fervor regarding AI which reveals that, at its core, it is a struggle for power—specifically, the power to determine the values, goals, and means that will eventually be enshrined in regional and international institutional settings governing AI.

AI in Public Sector Decision-Making: Challenges, Risks, and Recommendations

This white paper will analyze the extent to which concerns surrounding different classes of public sector use of AI—process automation, AI-driven decision-making, and citizen service delivery with AI tools—differ, consider the existing national and supranational regulatory frameworks, and develop recommendations for strategic areas necessary to guide the usage of AI decision-making tools in the public sector. To achieve this, the project will include document analysis, stakeholder interviews, and comparative research from other countries’ use cases and regulations.

Data Policy and Data Regulation in Switzerland

This project addresses the growing need for a strategic policy approach to data and data spaces in Switzerland, especially as the EU is rapidly advancing in this field. Since 2023, the Federal Chancellery, particularly the DTI (Digital Transformation and ICT Steering), has been consolidating efforts towards the “Swiss Data Ecosystem.” The C4DT at EPFL supports these efforts, aiming to develop a foundational document for Swiss data policy, focusing on the state’s role in the data ecosystem. This document will be crafted in collaboration with key policy actors in Switzerland and will include practical recommendations for the Federal Council and Parliament.

Deepfake Mini-Hackathon

The EPFL AI Center and LauzHack are hosting a DeepFake Mini-Hackathon. While rapid advances in GenAI and the increased accessibility of models and compute can lead to impressive advances in science and technology, these tools can also be used for malicious purposes, notably deepfake generation. The goal of this hackathon is to leverage the intelligence and creativity of the EPFL community (and surroundings) to better understand and raise awareness about the technology that can be used for deepfake generation and detection.

ADAN: Adaptive Adversarial Training for Robust Machine Learning (2024)

State-of-the-art modulation-recognition architectures rely on deep learning models. These models are vulnerable to adversarial perturbations: imperceptible additive noise crafted to induce misclassification, raising serious questions about safety, security, and performance guarantees at large. One of the best ways to make a model robust is adversarial training, in which the model is fine-tuned on these adversarial perturbations. However, this method has several drawbacks: it is computationally costly, suffers from convergence instabilities, and does not protect against multiple types of corruption at the same time. The objective of this project is to develop improved and effective adversarial training solutions that tackle these drawbacks.
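To make the idea of adversarial training concrete, here is a minimal, self-contained sketch of one common variant (FGSM-style fine-tuning) on a toy logistic-regression classifier. It is purely illustrative: the data, model, and step sizes are assumptions for the example, not part of the ADAN project, which targets deep modulation-recognition models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs (illustrative only).
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

w, b = np.zeros(2), 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grads(X, y, w, b):
    # Gradients of the cross-entropy loss w.r.t. the model parameters.
    err = sigmoid(X @ w + b) - y     # dL/dlogit
    return X.T @ err / len(y), err.mean()

# 1) Standard training on clean data.
for _ in range(200):
    gw, gb = grads(X, y, w, b)
    w -= 0.5 * gw
    b -= 0.5 * gb

# 2) FGSM: perturb each input in the sign of the loss gradient w.r.t. x,
#    i.e. the direction that most increases the loss.
eps = 0.3
grad_x = (sigmoid(X @ w + b) - y)[:, None] * w[None, :]   # dL/dx per sample
X_adv = X + eps * np.sign(grad_x)

# 3) Adversarial fine-tuning: continue training on the perturbed inputs.
for _ in range(200):
    gw, gb = grads(X_adv, y, w, b)
    w -= 0.5 * gw
    b -= 0.5 * gb

acc_clean = ((sigmoid(X @ w + b) > 0.5) == y).mean()
acc_adv = ((sigmoid(X_adv @ w + b) > 0.5) == y).mean()
print(f"clean accuracy: {acc_clean:.2f}, adversarial accuracy: {acc_adv:.2f}")
```

Even this toy version shows the cost structure the project targets: every fine-tuning pass requires recomputing perturbations and extra gradient steps, which is what makes adversarial training expensive at the scale of deep networks.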

Pitfalls in Fine-Tuning LLMs

On 19 June 2024, the C4DT Factory organized a hands-on workshop to show what can go wrong when Large Language Models (LLMs) are fine-tuned. It was a pleasure working with our partners from armasuisse, FOITT (BIT), ELCA, ICRC, Kudelski Security, SICPA, Swiss Post, and Swissquote. LLMs have taken the world by storm, but for (…)

Empowering Digital Identities: The SSI Protocol Landscape

In this fourth part of the blog series “Swiss e-ID journey”, we give an overview of the Self-Sovereign Identity (SSI) [10] landscape in CH, EU, and beyond. This allows the reader to put the current effort of the e-ID in context with international efforts in references and implementations regarding e-ID systems. You can read the (…)

The Swiss Confederation E-ID Public Sandbox Trust Infrastructure – Part 3

C4DT Demonstrator using the Swiss Public Sandbox Trust Infrastructure

This is our third article about the Swiss e-ID Journey. An overview of the system can be found in our first article, Switzerland’s e-ID journey so far [1a], and an introduction to the first steps of using the sandbox [8] in our second article, The Swiss (…)

Secure, Transparent, and Verifiable Elections: The D-Voting Project

The d-voting project from the DEDIS lab at EPFL is a decentralized voting system that leverages blockchain technology and cryptographic algorithms to ensure secure, transparent, and verifiable elections. By using a blockchain as the storage medium, the system allows multiple entities to oversee the election process and enables easy access for public verifiers to ensure (…)

C4DT Conference on Disinformation, Elections and AI

October 1st, 2024, 09h30-17h30, SwissTech Convention Center, EPFL

Introduction

In 2024, more than 50 national elections are taking place or have already taken place across the globe, from Taiwan’s presidential elections in January, to India’s Lok Sabha election staged over seven phases from April to June, to the US presidential elections in November. Meanwhile, the (…)