Deepfakes, Distrust and Disinformation: The Impact of AI on Elections and Public Perception

October 1st, 2024, 09h00–18h00
SwissTech Convention Center, EPFL

Introduction

In 2024, more than 50 national elections are taking place or have already taken place across the globe, from Taiwan’s presidential election in January, to India’s Lok Sabha election staged over seven phases from April to June, to the US presidential election in November. Meanwhile, the rapidly evolving state of generative AI (gen AI) has drastically lowered the barriers for anyone, including individuals and state actors with malicious intent, to create spam, deepfakes and other synthetic content. In combination with scantly regulated social media platforms whose algorithms exploit user behavior to gain clicks, the rapid and widespread dissemination of mis- and disinformation has never been more accessible. As a result, it has never been easier to manipulate voter attitudes and behaviors and to undermine the integrity of the electoral process. The World Economic Forum’s Global Risks Report 2024 identifies misinformation and disinformation as the top risk over the next two years.

This full-day conference will explore the impact of gen AI on disinformation campaigns – specifically, deepfakes and disinformation created by gen AI, but also the proliferation of disinformation through AI – and how such campaigns may affect election outcomes and trust in electoral systems. It critically engages with three questions: to what extent has AI transformed disinformation and its potential impact on electoral processes and trust; what technical approaches exist for detecting and combating AI- and non-AI-generated disinformation; and what are the roles and responsibilities of different stakeholders (media, digital platforms, government, civil society and consumers) in combating disinformation?

This event is organized by the Center for Digital Trust (C4DT), EPFL, in collaboration with EPFL’s AI Center and the Initiative for Media Innovation (IMI).

Objectives

  • To present observations and existing evidence on the political effects (and beyond) of disinformation and to predict its evolving role and impact
  • To spotlight cutting-edge research whose applications have led to promising technical solutions for identifying deepfakes and combating their online spread
  • To explore stakeholders’ roles and responsibilities in addressing deepfakes and misinformation, from citizens, civil society, and media, to platforms and governments.

Content

Part 1: Artificial Influence? The Intersection of AI, Disinformation, and Politics

Part 1 will critically explore the impact of AI on disinformation, and of AI-generated disinformation on political outcomes and public perception. It will examine recent cases of disinformation in political contexts and assess the extent to which AI is amplifying political polarization and eroding public trust.

We will explore the following questions:

  • What is misinformation/disinformation? Presentation of recent cases of disinformation, including AI-generated disinformation, in connection with political events (in particular, but not only, elections) and public perception. How (and why) do different audiences interact with media and respond to the disinformation they encounter?
  • What is the role of AI? Has AI changed disinformation and influence operations, and if so, how? What are the motivations of the actors involved? How does AI-generated content act on our emotions, amplified by the human predisposition toward online disinhibition? Are there positive sides to AI in political and public communication?
  • Do disinformation campaigns impact election outcomes and public trust? What are recent examples? To what extent is gen AI contributing to political polarization or loss of trust?
  • Is AI’s influence on information, media, public opinion and public trust indeed as profound as it is often made out to be? Are the fears surrounding AI-driven disinformation’s influence on electoral processes justified? Can AI also contribute in positive ways to political communication and help build trust in electoral processes?

Part 2: Research Frontiers: AI Technologies in Disinformation Creation and Control

Part 2 will delve into the technical aspects of generating, detecting, and combating disinformation, with a specific emphasis on AI technologies.

Potential topics can include:

  • Multimedia deepfakes and other types of synthetic content,
  • Large language models (LLMs) generating text,
  • The role of social media networks in spreading and mitigating false information.

Part 3: Counteracting Disinformation: Stakeholders’ Roles, Responsibilities, and Strategies

Part 3 will address the multifaceted and collaborative approaches needed to effectively counteract disinformation, involving regulatory measures, technical solutions, public information, and social resilience. It will explore the roles and responsibilities of different stakeholders (media, digital platforms, government, civil society and consumers).

Potential topics can include:

  • Implementation and enforcement of regulations,
  • Necessity of new agencies,
  • Standard-setting,
  • Responsibilities of digital platforms in content moderation and the potential for harnessing AI to combat disinformation,
  • Strategies like prebunking and debunking,
  • Evaluation of the impacts of deepfakes and disinformation,
  • How civil society can contribute through education, awareness-raising, and building public resilience against disinformation.

Speakers and Panelists

The list of speakers and panelists will be announced shortly…