Projects

Jul 2023 – Jun 2025

Status: Ongoing

DISCO-DHRIVE: Distributed Collaborative Learning for Data-driven Humanitarian Response in Insecure and Volatile Environments

DISCO-DHRIVE is developing a privacy-preserving collaborative learning platform using AI. It allows AI models to be built across different locations without the need to share sensitive data. Tailored to the ICRC's unique challenges, including resource scarcity and stringent data confidentiality requirements, the project integrates federated and distributed learning. This approach enables valuable insights to be extracted from sensitive data without compromising its security.
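
As a rough illustration of the federated learning idea referred to above, the sketch below shows federated averaging (FedAvg) on a toy linear model: each site trains locally and only model parameters are shared with the server. The names and toy data are invented for illustration; this is not the DISCO-DHRIVE implementation.

```python
# Minimal FedAvg sketch: clients train locally, the server averages weights.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a linear model locally with plain gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # squared-loss gradient
        w -= lr * grad
    return w

def fed_avg(global_w, client_data, rounds=10):
    """Each round: clients train locally, the server averages the weights."""
    for _ in range(rounds):
        local_ws = [local_update(global_w, X, y) for X, y in client_data]
        sizes = np.array([len(y) for _, y in client_data], dtype=float)
        # Weighted average: only model parameters leave each client.
        global_w = np.average(local_ws, axis=0, weights=sizes)
    return global_w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three simulated sites with private data
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))
print(fed_avg(np.zeros(2), clients))  # should approach [2, -1]
```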

Type Machine Learning, Government & Humanitarian
Partner ICRC
Partner contact Fabrice Lauper, Dr. Javier Elkin
EPFL Laboratory Machine Learning and Optimization Laboratory (MLO)
Apr 2024 – Mar 2025

Status: Ongoing

RAEL: Robustness Analysis of Foundation Models

Pre-trained foundation models are widely used in deep learning applications due to their advanced capabilities and extensive training on large datasets. However, these models may carry safety risks because they are trained on potentially unsafe internet-sourced data. Additionally, fine-tuned specialized models built on these foundation models often lack proper behavior verification, making them vulnerable to adversarial attacks and privacy breaches. The project aims to study and explore these attacks on foundation models.
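
To make the notion of adversarial attacks concrete, here is a minimal projected gradient descent (PGD) sketch for probing a classifier's robustness, assuming a PyTorch model and inputs scaled to [0, 1]. The model and data names are placeholders, not the project's actual evaluation setup.

```python
# Illustrative PGD attack: maximize the loss within an L-infinity ball.
import torch

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Iterated gradient-sign steps, projected back into the eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

# Usage (assuming `model`, `images`, `labels` already exist):
# adv = pgd_attack(model, images, labels)
# robust_acc = (model(adv).argmax(1) == labels).float().mean()
```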

Type Privacy Protection & Cryptography, Machine Learning
Partner armasuisse
Partner contact Gerome Bovet
EPFL Laboratory Signal Processing Laboratory 4
Apr 2024 – Jan 2025

Status: Ongoing

Monitoring Swiss industrial and technological landscape 2

The main objective of the project is to perform online monitoring of technologies and technology actors in publicly accessible information sources. The monitoring concerns the early detection of mentions of new technologies, of new actors in the technology space, and of new relations between technologies and technology actors (subsequently, all of these are called technology mentions). The project will build on earlier results on the retrieval of technology–technology actor relations using Large Language Models (LLMs).
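
A minimal sketch of what prompt-based extraction of technology mentions could look like; `call_llm` is only a placeholder for whichever LLM endpoint is used, and the prompt wording is purely illustrative, not the project's pipeline.

```python
# Sketch: ask an LLM for structured technology mentions and parse the answer.
import json

PROMPT = (
    "Extract every technology and technology actor mentioned in the text "
    "below, plus any relation between them. Answer as JSON with keys "
    "'technologies', 'actors', 'relations'.\n\nText:\n{text}"
)

def extract_mentions(text, call_llm):
    """Return a dict of technology mentions, or empty lists if parsing fails."""
    raw = call_llm(PROMPT.format(text=text))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return {"technologies": [], "actors": [], "relations": []}
```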

Type Machine Learning
Partner armasuisse
Partner contact Alain Mermoud
EPFL Laboratory Distributed Information Systems Laboratory
Apr 2024 – Dec 2024

Status: Ongoing

Anomaly detection in dynamic networks

The temporal evolution of the structure of dynamic networks carries critical information about the development of complex systems in various applications, from biology to social networks. Despite the importance of this topic, the literature in network science, graph theory, and network machine learning still lacks relevant models for dynamic networks, proper metrics for comparing network structures, and scalable algorithms for anomaly detection. This project aims to bridge exactly these gaps.
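
For illustration, a naive baseline for the kind of problem described here: compare the adjacency spectra of consecutive (undirected) snapshots and flag unusually large jumps. This is a generic toy approach, not the models or metrics the project develops.

```python
# Toy anomaly detector for a sequence of networkx graph snapshots.
import numpy as np
import networkx as nx

def spectral_distance(g1, g2, k=10):
    """L2 distance between the top-k adjacency eigenvalues of two graphs."""
    def top_eigs(g):
        a = nx.to_numpy_array(g)
        vals = np.sort(np.linalg.eigvalsh(a))[::-1]
        vals = np.pad(vals, (0, max(0, k - len(vals))))  # pad small graphs
        return vals[:k]
    return float(np.linalg.norm(top_eigs(g1) - top_eigs(g2)))

def flag_anomalies(snapshots, z_thresh=3.0):
    """Flag time steps whose change from the previous snapshot is an outlier."""
    d = np.array([spectral_distance(a, b)
                  for a, b in zip(snapshots, snapshots[1:])])
    z = (d - d.mean()) / (d.std() + 1e-9)
    return [t + 1 for t, score in enumerate(z) if score > z_thresh]
```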

Type Machine Learning
Partner armasuisse
Partner contact Etienne Voutaz
EPFL Laboratory Signal Processing Laboratory 4
Mar 2022 – Dec 2024

Status: Ongoing

Unified Accelerators for Post-Moore Machine Learning

The slowdown in Moore’s Law has pushed high-end GPUs towards narrow number formats to improve logic density. This introduces new challenges for accurate Deep Neural Network (DNN) training and inference. Our research aims to bring novel solutions to the challenges introduced by ubiquitous, ever-growing DNN models and datasets. Our proposal targets building DNN platforms that are optimal in performance/Watt across a broad class of workloads and that improve utility by unifying the infrastructure for training and inference.
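
To illustrate why narrow number formats are challenging for accuracy, the toy sketch below simulates a shared-exponent, few-mantissa-bit quantization of a weight tensor and reports the resulting error. The format parameters are arbitrary examples, not the formats studied in this project.

```python
# Simulated narrow-format quantization and its numerical error.
import numpy as np

def quantize(x, mantissa_bits=4):
    """Quantize values to a shared-exponent grid with few mantissa bits."""
    scale = 2.0 ** np.floor(np.log2(np.max(np.abs(x)) + 1e-30))
    step = scale / (2 ** mantissa_bits)
    return np.round(x / step) * step

w = np.random.default_rng(0).normal(size=1000).astype(np.float32)
for bits in (8, 6, 4, 2):
    err = np.abs(quantize(w, bits) - w).mean()
    print(f"{bits}-bit mantissa: mean abs error {err:.5f}")
```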

Type Machine Learning
Partner Microsoft
EPFL Laboratory Parallel Systems Architecture Laboratory (PARSA)
Apr 2024 – Dec 2024

Status: Ongoing

ADAN: Adaptive Adversarial Training for Robust Machine Learning (2024)

State-of-the-art modulation recognition architectures use deep learning models. These models are vulnerable to adversarial perturbations, i.e. imperceptible additive noise crafted to induce misclassification, which raises serious questions about safety, security, and performance guarantees at large. One of the best ways to make a model robust is adversarial training, in which the model is fine-tuned on these adversarial perturbations. However, this method has several drawbacks: it is computationally costly, suffers from convergence instabilities, and does not protect against multiple types of corruption at the same time. The objective of this project is to develop improved and effective adversarial training solutions that tackle these drawbacks.
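
As context, a minimal sketch of the standard adversarial training baseline the project aims to improve, here with single-step FGSM perturbations in PyTorch; the model, data loader, and optimizer are placeholders.

```python
# One epoch of FGSM-based adversarial training (illustrative baseline only).
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Craft a single-step adversarial perturbation of x."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def adversarial_epoch(model, loader, optimizer, eps=8 / 255):
    model.train()
    for x, y in loader:
        x_adv = fgsm(model, x, y, eps)           # craft perturbed inputs
        loss = F.cross_entropy(model(x_adv), y)  # train on the perturbed batch
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```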

Type Device & System Security, Machine Learning
Partner armasuisse
Partner contact Gérôme Bovet
EPFL Laboratory Signal Processing Laboratory (LTS4)
Apr 2024 – Dec 2024

Status: Ongoing

ANEMONE: Analysis and improvement of LLM robustness

Large Language Models (LLMs) have gained widespread adoption for their ability to generate coherent text and perform complex tasks. However, concerns around their safety, such as biases, misinformation, and user data privacy, have emerged. Using LLMs to automatically perform red-teaming has become a growing area of research. In this project, we aim to use techniques like prompt engineering or adversarial paraphrasing to force the victim LLM to generate drastically different, often undesirable responses.

Type Privacy Protection & Cryptography, Machine Learning
Partner armasuisse
Partner contact Ljiljana Dolamic, Gerome Bovet
EPFL Laboratory Signal Processing Laboratory 4
Sep 2020 – Aug 2024

ADHes: Attacks and Defenses on FPGA-CPU Heterogeneous Systems

FPGAs are essential components in many computing systems. Alongside conventional CPUs, FPGAs are deployed in various critical systems, such as wireless base stations, satellites, radars, electronic warfare platforms, and data centers. Both FPGAs and CPUs have security vulnerabilities, and integrating them closely presents new attack opportunities on both sides. In this project, we investigate the attacks made possible by closely integrating FPGAs with CPUs in heterogeneous computing platforms.

Type Device & System Security
Partner armasuisse
Partner contact Vincent Lenders
EPFL Laboratory Parallel Systems Architecture Laboratory (PARSA)
Apr 2023 – Mar 2024

MAXIM: Improving and explaining robustness of NMT systems

Neural Machine Translation (NMT) models have been shown to be vulnerable to adversarial attacks, wherein carefully crafted perturbations of the input can mislead the target model. In this project, we introduce a novel attack framework against NMT models. Unlike previous attacks, our new approaches have a more substantial effect on the translation by altering its overall meaning. This new framework can reveal vulnerabilities of NMT systems that traditional methods do not.

Type Privacy Protection & Cryptography, Machine Learning
Partner armasuisse
Partner contact Ljiljana Dolamic, Gerome Bovet
EPFL Laboratory Signal Processing Laboratory 4
Feb 2022 – Feb 2024

Graph Embedding Methods for Scalable Knowledge Graph Completion

Knowledge graphs have recently attracted significant attention in scenarios that require exploiting large-scale heterogeneous data collections. When graph sizes reach high orders of magnitude, a delicate balance between prediction performance and computational cost is required. This project presents an approach to constructing a model that generates meaningful graph representations while preserving scalability and prediction performance as much as possible.
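
To make "graph representations" concrete, here is a toy TransE-style scoring function, one common family of knowledge graph embeddings: a triple (head, relation, tail) is plausible when head + relation lands close to tail. The entities, relations, and dimensions are invented, and the project's actual model may differ.

```python
# Toy TransE scoring: lower distance = more plausible triple.
import numpy as np

rng = np.random.default_rng(0)
dim = 32
entities = {e: rng.normal(size=dim) for e in ["EPFL", "Swisscom", "Lausanne"]}
relations = {r: rng.normal(size=dim) for r in ["partner_of", "located_in"]}

def score(head, rel, tail):
    """Distance between (head + relation) and tail in embedding space."""
    return float(np.linalg.norm(entities[head] + relations[rel] - entities[tail]))

print(score("EPFL", "located_in", "Lausanne"))
```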

Type Machine Learning
Partner Swisscom
Partner contact Samuel Benz, Daniel Dobos
EPFL Laboratory Laboratory for Information and Inference Systems (LIONS)
Jan 2022 – Dec 2023

Invariant Federated Learning: Decentralized Training of Robust Privacy-Preserving Models

As machine learning (ML) models are becoming more complex, there has been a growing interest in making use of decentrally generated data (e.g., from smartphones) and in pooling data from many actors. At the same time, however, privacy concerns about organizations collecting data have risen. As an additional challenge, decentrally generated data is often highly heterogeneous, thus breaking assumptions needed by standard ML models. Here, we propose to “kill two birds with one stone” by developing Invariant Federated Learning, a framework for training ML models without directly collecting data, while not only being robust to, but even benefiting from, heterogeneous data.
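
One concrete way to be "robust to, and even benefit from, heterogeneous data" is an invariance penalty in the spirit of Invariant Risk Minimization (IRM); whether the project uses exactly this penalty is an assumption, so the sketch below is only illustrative.

```python
# IRM-style invariance penalty: gradient of the per-environment risk with
# respect to a dummy classifier scale fixed at 1.0, squared.
import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    """Penalty is large when rescaling the classifier would reduce the risk."""
    scale = torch.tensor(1.0, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * scale, y)
    grad = torch.autograd.grad(loss, scale, create_graph=True)[0]
    return grad.pow(2)

# In a federated setting, each client could add lambda * irm_penalty(...) to
# its local objective before the server averages the resulting updates.
```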

Type Machine Learning
Partner Microsoft
Partner contact Dimitrios Dimitriadis, Emre Kıcıman, Robert Sim, Shruti Tople
EPFL Laboratory Data Science Lab (dlab)
Mar 2022 – Dec 2023

TMM – Leveraging Language Models for Technology Landscape Monitoring

The objective of the project is to perform online monitoring of technologies and technology actors in publicly accessible information sources. The monitoring concerns the early detection of mentions of new technologies, of new actors in the technology space, and of new relations between technologies and technology actors (subsequently, all of these are called technology mentions). The project will build on earlier results on the retrieval of technology–technology actor relations using state-of-the-art NLP approaches.

Type Machine Learning
Partner armasuisse
Partner contact Alain Mermoud
EPFL Laboratory Distributed Information Systems Laboratory (LSIR)
Jan 2022 – Dec 2023

Tyche: Confidential Computing on Yesterday’s Hardware

Confidential computing is an increasingly popular means to wider Cloud adoption. By offering confidential virtual machines and enclaves, Cloud service providers now host organizations, such as banks and hospitals, that abide by stringent legal requirements with regard to their clients’ data confidentiality. Unfortunately, confidential computing solutions depend on bleeding-edge emerging hardware that (1) takes long to roll out at Cloud scale and (2), as a recent technology, is bound to frequent changes and potential security vulnerabilities. This proposal leverages existing commodity hardware, combined with new programming language and formal methods techniques, to identify how to provide similar or even stronger confidentiality and integrity guarantees than existing confidential hardware.

Type Privacy Protection & Cryptography
Partner Microsoft
Partner contact Adrien Ghosn, Marios Kogias
EPFL Laboratory Data Center Systems Laboratory (DCSL), HexHive Laboratory
Jan 2022 – Dec 2023

PAIDIT: Private Anonymous Identity for Digital Transfers

To serve the 80 million forcibly-displaced people around the globe, direct cash assistance is gaining acceptance. ICRC’s beneficiaries often do not have, or do not want, the ATM cards or mobile wallets normally used to spend or withdraw cash digitally, because issuers would subject them to privacy-invasive identity verification and potential screening against sanctions and counterterrorism watchlists. On top of that, existing solutions increase the risk of data leaks or surveillance induced by the many third parties having access to the data generated in the transactions. The proposed research focuses on the identity, account, and wallet management challenges in the design of a humanitarian cryptocurrency or token intended to address the above problems. This project is funded by Science and Technology for Humanitarian Action Challenges (HAC).

Type Privacy Protection & Cryptography, Blockchains & Smart Contracts, Device & System Security, Finance, Government & Humanitarian
Partner ICRC
Partner contact TBD
EPFL Laboratory Decentralized Distributed Systems Laboratory (DEDIS)
Jan 2023 – Dec 2023

Using Artificial Intelligence to Explore the Prognostic Value of Macroscopy in Liver Cancer

Liver cancer ranks third in terms of cancer-related mortality. Hepatocellular carcinoma (HCC) accounts for 90% of primary liver cancers. Tremendous efforts have been made to establish HCC prognostic tools, including clinical, radiological, pathological and even molecular readouts. Regardless of the strategy, the performance of these tools remains modest. Recent data using artificial intelligence (AI) on HCC histology (microscopy) have revealed promising results. We aim to submit images of liver cancer specimens to AI models in order to generate algorithms for establishing prognosis, in a large-scale study including centers from North America, Europe and Asia.

Type Machine Learning, Health
Partner CHUV
Partner contact Ismail Labgaa
EPFL Laboratory Machine Learning and Optimization Laboratory (MLO), Intelligent Global Health Research Group
Jul 2023 – Dec 2023

Automated Detection Of Non-standard Encryption In ACARS Communications

In this project we introduce a new family of prompt injection attacks, termed Neural Exec. Unlike known attacks that rely on handcrafted strings (e.g., "Ignore previous instructions and..."), we show that it is possible to conceptualize the creation of execution triggers as a differentiable search problem and use learning-based methods to autonomously generate them.

Type Machine Learning
Partner armasuisse
Partner contact Martin Strohmeier
EPFL Laboratory Security and Privacy Engineering Lab (SPRING)
Jan 2023 – Dec 2023

Machine-Learning Prognostication in Patients Undergoing Surgery for Hepatocellular Carcinoma (Liver Cancer)

Liver cancer is the second deadliest malignancy, consisting essentially of hepatocellular carcinoma (HCC). Surgery with liver resection is the main curative option but, unfortunately, it is only recommended in patients with early HCC. Prognosis of HCC is particularly challenging, and the results of numerous attempts using various strategies remain relatively poor. Artificial intelligence (AI) has demonstrated unmatched value in deciphering complex traits and mechanisms. This multicentric effort will include 8 academic centers from the United States, Europe and Asia, generating a large-scale dataset of patients undergoing liver resection for HCC. We aim to investigate the contribution of AI to improving prognostication for these patients.

Type Machine Learning, Health
Partner CHUV
Partner contact Ismail Labgaa
EPFL Laboratory Machine Learning and Optimization Laboratory (MLO), Intelligent Global Health Research Group
Jan 2023 – Dec 2023

Exploring Artificial Intelligence to Predict Complications after Major Digestive Surgery

Major digestive surgery is associated with high morbidity (i.e. a high risk of complications after surgery). Anticipating postoperative complications (POC) may help guide clinicians in the postoperative management of surgical patients. Unfortunately, the tools available in clinical practice are of limited value due to their limited accuracy. Recently, artificial intelligence (AI) has seen a meteoric rise in medicine, with numerous clinical applications, but its role in predicting POC remains unknown. We aim to use AI to develop new models that improve the prediction of POC in a dataset of >2000 patients undergoing major digestive surgery.

Type Machine Learning, Health
Partner CHUV
Partner contact Ismail Labgaa
EPFL Laboratory Machine Learning and Optimization Laboratory (MLO), Intelligent Global Health Research Group
Jan 2023 – Nov 2023

Monitoring Swiss industrial and technological landscape 1

The main objective of the project is to perform online monitoring of technologies and technology actors in publicly accessible information sources. The monitoring concerns the early detection of mentions of new technologies, of new actors in the technology space, and of new relations between technologies and technology actors (subsequently, all of these are called technology mentions). The project will build on earlier results on the retrieval of technology–technology actor relations using Large Language Models (LLMs).

Type Machine Learning
Partner armasuisse
Partner contact Alain Mermoud
EPFL Laboratory Distributed Information Systems Laboratory
Oct 2021 – Oct 2023

RuralUS: Ultrasound adapted to resource limited settings

Point-of-Care Ultrasound (PoCUS) is a powerfully versatile and virtually consumable-free clinical tool for the diagnosis and management of a range of diseases. While the promise of this tool in resource-limited settings may seem obvious, its implementation is limited by inter-user bias and requires specific training and standardisation. This makes PoCUS a good candidate for computer-aided interpretation support. Our study proposes the development of a PoCUS training program adapted to resource-limited settings and the particular needs of the ICRC.

Type Machine Learning, Health
Partner CHUV, ICRC
Partner contact Mary-Anne Hartley
EPFL Laboratory Machine Learning and Optimization Laboratory (MLO), Intelligent Global Health Research Group
Oct 2020 – Sep 2023

Multi-Task Learning for Customer Understanding

Customer understanding is a ubiquitous and multifaceted business application whose mission lies in providing better experiences to customers by recognising their needs. A multitude of tasks, ranging from churn prediction to accepting upselling recommendations, fall under this umbrella. Common approaches model each task separately and neglect the common structure some tasks may share. The purpose of this project is to leverage multi-task learning to better understand the behaviour of customers by modeling similar tasks into a single model. This multi-objective approach utilises the information of all involved tasks to generate a common embedding that can be beneficial to all and provide insights into the connection between different user behaviours, i.e. tasks. The project will provide data-driven insights into customer needs leading to retention as well as revenue maximisation while providing a better user experience.
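
A minimal sketch of the multi-task idea described above: one shared customer encoder feeding several task-specific heads, so that related tasks inform a common embedding. The feature size and task names (churn, upsell) are invented for illustration, not Swisscom's actual setup.

```python
# Shared-encoder, multi-head architecture for multi-task learning.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, n_features, tasks=("churn", "upsell")):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 32), nn.ReLU()
        )
        # One binary-classification head per task, all sharing the encoder.
        self.heads = nn.ModuleDict({t: nn.Linear(32, 1) for t in tasks})

    def forward(self, x):
        z = self.encoder(x)                        # shared customer embedding
        return {t: head(z) for t, head in self.heads.items()}

model = MultiTaskNet(n_features=20)
out = model(torch.randn(8, 20))
print({t: o.shape for t, o in out.items()})
```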

Type Machine Learning, Digital Information
Partner Swisscom
Partner contact Dan-Cristian Tomozei
EPFL Laboratory Signal Processing Laboratory (LTS4)
May 2023 – Aug 2023

Assessment of image hashing technologies – Visual Hash

In the Visual Hash project, EPFL partners with SICPA to provide guidance and leverage the technical expertise of scientists from the Multimedia Signal Processing Group in assessing the performance of novel imaging technologies for security, privacy and digital identity.
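
For readers unfamiliar with image hashing, the toy "average hash" below shows what is being assessed: a compact fingerprint that should stay stable under benign edits and differ for unrelated images. Real assessments use more elaborate hashes and metrics; this is only a generic illustration.

```python
# Toy perceptual hash: downscale, threshold at the mean, compare by Hamming distance.
import numpy as np
from PIL import Image

def average_hash(img, size=8):
    """Return a size*size boolean fingerprint of the image."""
    g = np.asarray(img.convert("L").resize((size, size)), dtype=float)
    return (g > g.mean()).flatten()

def hamming(h1, h2):
    """Number of differing hash bits; small for near-duplicates."""
    return int(np.count_nonzero(h1 != h2))

# Usage: a robust visual hash keeps hamming() small under benign edits
# (compression, resizing) and large for genuinely different images.
# h = average_hash(Image.open("photo.jpg"))
```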

Type Digital Information
Partner SICPA
Partner contact Víctor Martínez Jurado
EPFL Laboratory Multimedia Signal Processing Group (MMSPG)
May 2021 – May 2023

Harmful Information Against Humanitarian Organizations

In this project, we are working with the ICRC to develop technical methods to combat social media-based attacks against humanitarian organizations. We are uncovering how the phenomenon of weaponizing information impacts humanitarian organizations and developing methods to detect and prevent such attacks, primarily via natural language processing and machine learning methods.

Type Machine Learning, Government & Humanitarian
Partner ICRC
Partner contact Fabrice Lauper
EPFL Laboratory Distributed Information Systems Laboratory (LSIR)
Apr 2022 – Mar 2023

MULAN: Adversarial Attacks in Neural Machine Translation Systems

Recently, deep neural networks have been applied in many different domains due to their strong performance. However, it has been shown that these models are highly vulnerable to adversarial examples. Adversarial examples differ only slightly from the original input but can mislead the target model into generating wrong outputs. Various methods have been proposed to craft such examples for image data; however, these methods are not readily applicable to Natural Language Processing (NLP). In this project, we aim to propose methods for generating adversarial examples for NLP models, such as neural machine translation models, in different languages. Moreover, through adversarial attacks, we aim to analyze the vulnerability and interpretability of these models.
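
As a rough illustration of one way such text adversarial examples can be crafted, the sketch below greedily substitutes words to degrade translation quality. The `translate` and `quality` callables and the candidate dictionary are placeholders, not the project's actual attack or tooling.

```python
# Greedy word-substitution attack against a translation model (illustrative).
def greedy_substitution_attack(sentence, candidates, translate, quality, reference):
    """Swap one word at a time, keeping substitutions that hurt translation most."""
    words = sentence.split()
    best = list(words)
    best_score = quality(translate(" ".join(best)), reference)
    for i, _ in enumerate(words):
        for cand in candidates.get(words[i], []):
            trial = best[:i] + [cand] + best[i + 1:]
            score = quality(translate(" ".join(trial)), reference)
            if score < best_score:          # lower score = worse translation
                best, best_score = trial, score
    return " ".join(best), best_score
```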

Type Device & System Security, Machine Learning
Partner armasuisse
Partner contact Ljiljana Dolamic
EPFL Laboratory Signal Processing Laboratory (LTS4)