C4DT Conference on Disinformation, Elections and AI

October 1st, 2024, SwissTech Convention Center (STCC), EPFL

Introduction

The concept of digital sovereignty is gaining traction in political discourse across Europe, including in Switzerland. Digital sovereignty is invoked in discussions of AI development, cloud computing, data and data spaces, hardware and software development, and the provision of digital services and platforms, to name but a few aspects. Against this backdrop, this event will explore secure and trustworthy cloud computing in the context of digital sovereignty.

As such, cloud computing offers a useful prism through which to view several digital trust challenges and opportunities at once: as cloud adoption increases, great potential for efficiency, elasticity and scale emerges. Yet the incentives for malicious actors to exploit vulnerabilities, be they related to identity and access, data security and privacy, or hardware and software security, will increase simultaneously. Given that two-thirds of the cloud market is in the hands of only three cloud infrastructure service providers, the so-called hyperscalers, the question of digital sovereignty is raised more and more in the context of cloud computing, including issues such as data localization and access and the legal frameworks that apply.

With a variety of public and private actors working on digital sovereignty specifically, a strong digital trust ecosystem overall, and a unique political history of neutrality, Switzerland is well positioned to explore and test new models and approaches to what digital sovereignty may mean in the 21st century, especially in the context of cloud computing, a key enabler of the current and future digital economy.


Objectives

  • To present observations and existing evidence on the political and broader effects of disinformation, and to anticipate its evolving role and impact
  • To spotlight cutting-edge research whose applications have led to promising technical solutions for identifying deepfakes and combating their online spread
  • To explore stakeholders’ roles and responsibilities in addressing deepfakes and disinformation, from citizens, civil society and the media to platforms and governments

(Tentative) Program


Part 1: Elections (political science / political economy / real-world cases)

  1. What disinformation campaigns are out there? Examples and observations
    1. From the past, e.g.: Brexit / Carole Cadwalladr (The Observer)
    2. Recent, e.g.: India elections / Urvashi Aneja (Digital Futures Lab); Taiwan elections / Min. Audrey Tang (Taiwan Ministry of Digital Affairs)
    3. Upcoming, e.g.: US elections / TBD
  2. Does disinformation impact election outcomes and the fragmentation of society?
    1. How different audiences (young, old) interact with media / Mounir Krichane (EPFL)
    2. How AI-generated content acts on our emotions, amplified by the human predisposition toward online disinhibition / TBD “cyberpsychologist”?
    3. Is the fear overblown? To what extent is generative AI contributing to polarization? / political scientist TBD
  3. Motivations of actors: domestic opponents vs. external nation-state actors
    1. Political destabilization of foreign adversaries (session on geopolitics: China, Russia, the Middle East?)
    2. Political gain against domestic opponents
    3. Commercial gain
  4. What is the role of AI – is AI a game changer for better or for worse?

Part 2: Technical section (with a focus on AI technologies wherever possible)

  1. Multimedia – deepfakes / Touradj Ebrahimi (EPFL) and Sabine Süsstrunk (EPFL) (a minimal detection sketch follows this list)
  2. LLMs, text generation, generative AI / Yash Raj Shrestha (UNIL)
  3. Social media networks / Karl Aberer (EPFL)
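
To make the deepfake-detection topic concrete, here is a minimal sketch of one common approach: a fine-tuned image classifier that scores a single frame as real or synthetic. This is an illustrative assumption, not the speakers’ actual method; the checkpoint file, the 2-class head convention and the file paths are hypothetical, and only the PyTorch/torchvision calls are real APIs.

```python
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

# Standard ImageNet-style preprocessing for the ResNet backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def load_detector(checkpoint_path: str) -> nn.Module:
    """Load a ResNet-18 whose final layer is replaced by a 2-class
    (real vs. fake) head. The checkpoint itself is hypothetical."""
    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, 2)
    model.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
    model.eval()
    return model

def fake_probability(model: nn.Module, image_path: str) -> float:
    """Return the model's estimated probability that the image is synthetic."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)
    return probs[0, 1].item()  # index 1 = "fake" class, by convention here

if __name__ == "__main__":
    detector = load_detector("deepfake_resnet18.pt")  # hypothetical checkpoint
    print(f"P(fake) = {fake_probability(detector, 'suspect_frame.jpg'):.3f}")
```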

Part 3: Stakeholder solutions and responsibilities

  1. Governments and Policymakers
    1. Implementation of existing regulations
      1. DSA and AI Act – making platforms accountable for the results of their algorithms
      2. GDPR – using it to order a halt to non-compliant data processing
    2. Creation of new agencies to detect deepfakes or implement evidence-based interventions
    3. Development of new regulations, e.g., mandatory watermarks (see the watermark-detection sketch at the end of this program), requiring platforms to share data with researchers, whistleblower protections, defining corporate accountability and who is liable for harms/damages caused by AI, and defining rights to ownership of data and IP
    4. Standards setting and safety requirements
    5. Support for research on the social and behavioral effects of generative AI
  2. Digital platforms
    1. New policies on content moderation and the presentation of newsfeeds, incorporating some version of the “fairness doctrine” or “equal time” policy into algorithms…
    2. Can AI be harnessed to fight disinformation? Use of AI to detect deepfakes, etc.
  3. Journalists and media outlets
    1. Informing the public about AI, disinformation and its consequences (political, social, physical violence, erosion of trust in institutions), as well as about what we still don’t know, e.g., gaps in our understanding and in the laws
    2. Prebunking, debunking, etc. 
  4. Academia
    1. Evaluating the impact of deepfakes on elections and commercial profits, down to the individual level
  5. Civil society
    1. Education and awareness raising
    2. Building resilience
  6. Citizens and their responsibility
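
To make the “mandatory watermarks” item under Governments and Policymakers concrete, here is a minimal sketch of how a statistical text watermark could be verified, loosely following the published “green-list” idea (Kirchenbauer et al., 2023). The secret key, green fraction, whitespace tokenization and z-score threshold are all illustrative assumptions, not a production or legally mandated scheme.

```python
import hashlib
import math

GREEN_FRACTION = 0.5  # assumed share of the vocabulary marked "green"

def is_green(prev_token: str, token: str, key: str = "shared-secret") -> bool:
    """Pseudorandomly assign (previous token, token) pairs to the green list,
    keyed by a shared secret. A watermarking generator would bias its sampling
    toward green tokens; the detector only re-derives the same partition."""
    digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return digest[0] / 256.0 < GREEN_FRACTION

def watermark_z_score(text: str, key: str = "shared-secret") -> float:
    """z-score of the observed green-token count against the null hypothesis
    of unwatermarked text (each token green with prob. GREEN_FRACTION)."""
    tokens = text.split()  # toy whitespace tokenization
    if len(tokens) < 2:
        return 0.0
    n = len(tokens) - 1
    greens = sum(is_green(p, t, key) for p, t in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / std

if __name__ == "__main__":
    suspect = "example output from a model suspected of being watermarked"
    z = watermark_z_score(suspect)
    # A large positive z (e.g. > 4) would suggest the text is watermarked.
    print(f"z = {z:.2f}")
```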