
C4DT Roundtable on Deepfakes (for C4DT Partners only)

Following on the heels of our conference “Deepfakes, Distrust and Disinformation: The Impact of AI on Elections and Public Perception”, held on October 1st 2024, C4DT is shifting the spotlight to the strategic and operational implications of deepfakes and disinformation for organizations. For our C4DT partners we are hosting a high-level roundtable for executives, senior managers and project managers on Tuesday, 19th of November, to discuss strategies for addressing the challenges posed by deepfakes, as well as collaboration opportunities and projects to counter them.

RAEL: Robustness Analysis of Foundation Models

Pre-trained foundation models are widely used in deep learning applications due to their advanced capabilities and extensive training on large datasets. However, these models may carry safety risks because they are trained on potentially unsafe internet-sourced data. Additionally, fine-tuned specialized models built on these foundation models often lack proper behavior verification, making them vulnerable to adversarial attacks and privacy breaches. The project aims to study and explore these attacks on foundation models.

ANEMONE: Analysis and improvement of LLM robustness

Large Language Models (LLMs) have gained widespread adoption for their ability to generate coherent text and perform complex tasks. However, concerns around their safety, such as biases, misinformation, and user data privacy, have emerged. Using LLMs to automatically perform red-teaming has become a growing area of research. In this project, we aim to use techniques like prompt engineering or adversarial paraphrasing to force the victim LLM to generate drastically different, often undesirable responses.

Enabling Health Data-Sharing in Decentralized Systems

Advancements in artificial intelligence, machine learning, and big data analytics highlight the potential of secondary health data use to enhance healthcare by uncovering insights for precision medicine and public health. This issue paper will provide clarity on the different types of health data, how they are shared and used, and propose approaches for enabling secondary health data use that align with Switzerland’s decentralized political structure, Swiss and EU regulatory frameworks, and technological developments in health data sharing.

Applied Machine Learning Days 2025 – Journalism in the Era of AI and Cyber Threats

Journalists provide crucial insights into digital trust issues prevalent in our time, such as deepfakes and cyberattacks, by documenting these trends and their impacts. Their work demands a strong focus on cybersecurity, data privacy, and information trustworthiness, ensuring protection for themselves and their sources while verifying material authenticity. This real-world experience is invaluable to academia, providing essential context for discussions and research. A global dialogue will cover digital trust and trust-building technologies, highlighting topics like AI in military operations, AI-generated disinformation in elections, IoT cybersecurity, citizen surveillance, and algorithmic decision-making.

How Well Do LLMs Understand the World?

To what extent do LLMs ‘understand’ the world, and can they ‘think’? Or is their supposed intelligence merely an illusion of statistics? This article presents a recent paper on the topic for a non-technical audience – an important contribution to ensuring that knowledge of the capabilities and limits of such technologies is not confined to a circle of experts (…)

Should We Chat, Too? Security Analysis of WeChat’s MMTLS Encryption Protocol

I really like this report and its accompanying FAQ for non-technical readers. Citizen Lab is of course a defender for human rights and freedom of expression, but in this article, they don’t rail on about how China’s weak data protection ecosystem impinges on people’s right to privacy. They just do the technical legwork and let (…)

Here’s the paper no one read before declaring the demise of modern cryptography

Not sure if you heard of the latest misinterpretation of a paper describing an attack on symmetric encryption using quantum computers. It has been hyped by some journals as ‘the end of encryption’. But it is at best a demonstration that future quantum computers might be as fast as classical computers. For hacking a very (…)

The Disinformation Warning Coming From the Edge of Europe

The razor-thin victory for E.U. supporters in Moldova is an uneasy one, with the heavy Russian influence campaign on social media in the back of everyone’s mind. Platforms like Meta have repeatedly demonstrated their inability to effectively tackle influence campaigns, no matter their target or scale. With the U.S. election just days away, American democracy (…)

TikTok executives know about app’s effect on teens, lawsuit documents allege

The leaked internal TikTok documents confirm the long-held suspicion that we urgently need to stop entrusting social media companies with putting up safety rails. Dampening addictive features, bursting filter bubbles and moderating content directly contradict maximising user engagement, the metric by which such companies live and die. We need binding regulations with real teeth to protect (…)

E-ID hands-on Workshop

We’re thrilled to share the success of our recent hands-on workshop on crafting more privacy-preserving E-IDs! In the morning, Imad Aad from C4DT set the stage with an insightful overview of the importance of E-IDs and the essentials for ensuring their effectiveness. The afternoon sessions, led by Linus Gasser and Ahmed Elghareeb, were a deep dive (…)

Machines of Loving Grace

Anthropic’s CEO, Dario Amodei, is one of today’s leading figures in AI. In his essay, he envisions a future where powerful AI could radically improve human life by accelerating progress in areas such as biology, mental health, economic development, and governance. He foresees a more equitable and prosperous world resulting from these advancements. I particularly (…)

Applied Machine Learning Days 2025 – Cyberattacks through Deepfakes, Disinformation & AI

The increasing prevalence of deepfakes and disinformation calls for proactive measures to tackle the associated cybersecurity threats. This track, entitled “Unmasking the Digital Deception: Defending Against DeepFakes and Disinformation Attacks”, is organized by the C4DT and addresses the urgent need to raise awareness, share best practices, and enhance skills in detecting and preventing cyberattacks induced through deepfakes. By participating in this track, individuals and organizations can strengthen their cybersecurity defenses, protect their reputation, and contribute to a safer digital environment.

Applied Machine Learning Days 2025 – AI & Software Development Life Cycle

The integration of AI into the SDLC has the potential to revolutionize software development by automating tasks, improving efficiency, and enhancing decision-making. However, it also introduces risks and challenges that need to be addressed. This track, entitled “AI-Driven Software Development: Transforming the Life Cycle with Intelligent Automation”, is organized by the C4DT and is motivated by the need to explore the transformative potential of AI in the SDLC while ensuring responsible and ethical use. By understanding the advantages, risks, and best practices, participants can harness the power of AI to drive innovation, improve software quality, and optimize development processes.

Curtain Call for our Demonstrators: A Summary

One of our jobs at the C4DT Factory is to work on promising projects from our affiliated labs. This helps the faculties translate their research into formats accessible to different audiences. For a newly on-boarded project, we evaluate its current state and identify the required steps towards a final product. We may then also (…)

Meta’s going to put AI-generated images in your Facebook and Instagram feeds

Social networks began by enabling us to connect online with our family and friends and with communities of interest. Influencers then helped generate the growth that was “missing” on the personal side. AI-generated content, which Mark Zuckerberg sees as the next “logical jump” in the engagement race, seems very creepy to me if you consider (…)

Facebook: Censorship Rather Than Data Protection

“A highly recommended overview article on the ongoing EU proceedings against Meta. As if that alone were not enough negative press, the piece contains a worrying observation: Meta apparently reserves the right to censor posts that criticize the platform. Should this be standard practice beyond the individual cases mentioned in the article, this abuse of their position (…)

Turkey blocks instant messaging platform Discord

The banning of Discord in Russia and Turkey is concerning because the platform serves as a crucial communication tool (without suitable alternatives available), and both countries justify the ban by citing security concerns, such as misuse for illegal activities. At the core of the ban is also Discord’s alleged unwillingness to comply with local laws and (…)

The Disappearance of an Internet Domain

A very popular top-level domain (.io) is facing a strange situation I never knew could happen. This particular domain is, in fact, a ccTLD, meaning it is tied to a country code—specifically that of the British Indian Ocean Territory—whose ownership is about to be transferred from the UK to a neighboring nation. This transfer could result in (…)

Newag admits: Dragon Sector hackers did not modify software in Impuls trains

Remember the trains which stopped working after being maintained by a third-party repair shop, and the hackers who then ‘unlocked’ them? Here is a follow-up on the legal struggle between the train company and the hackers. It makes me uncomfortable to think about the power companies have nowadays over the things we buy. And much of buying is more like (…)