Projects
Analysis of encryption techniques in ACARS communications
In this collaboration, structured as two projects, we develop an automated tool that flags aircraft messages suspected of using weak encryption mechanisms. We focus mainly on detecting the use of classical ciphers, such as substitution and transposition ciphers. The tool flags suspicious messages and identifies the cipher family, and we also aim to develop automated decryption techniques for the weakest ciphers.
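The detection idea can be illustrated with standard frequency statistics: a transposition cipher preserves the English-like letter histogram, while a monoalphabetic substitution keeps the index of coincidence of English but skews the histogram. A minimal sketch in Python, with illustrative thresholds that are assumptions rather than the project's actual parameters:

```python
from collections import Counter

# Approximate English letter frequencies (fractions of all letters).
ENGLISH_FREQ = {
    'e': .127, 't': .091, 'a': .082, 'o': .075, 'i': .070, 'n': .067,
    's': .063, 'h': .061, 'r': .060, 'd': .043, 'l': .040, 'c': .028,
    'u': .028, 'm': .024, 'w': .024, 'f': .022, 'g': .020, 'y': .020,
    'p': .019, 'b': .015, 'v': .010, 'k': .008, 'j': .002, 'x': .002,
    'q': .001, 'z': .001,
}

def index_of_coincidence(text: str) -> float:
    """Probability that two randomly drawn letters are equal (~0.066 for English)."""
    letters = [c for c in text.lower() if c.isalpha()]
    n = len(letters)
    if n < 2:
        return 0.0
    counts = Counter(letters)
    return sum(k * (k - 1) for k in counts.values()) / (n * (n - 1))

def chi_squared_vs_english(text: str) -> float:
    """Chi-squared distance between the observed histogram and English frequencies."""
    letters = [c for c in text.lower() if c.isalpha()]
    n = len(letters)
    counts = Counter(letters)
    return sum((counts.get(ch, 0) - n * p) ** 2 / (n * p) for ch, p in ENGLISH_FREQ.items())

def flag_cipher_family(text: str) -> str:
    """Very rough heuristic; the thresholds are illustrative, not the tool's."""
    ic = index_of_coincidence(text)
    chi2 = chi_squared_vs_english(text)
    if ic < 0.05:
        return "polyalphabetic cipher or random-looking data"
    if chi2 < 150:
        return "plaintext or transposition cipher (English-like histogram)"
    return "monoalphabetic substitution cipher (English-like IC, skewed histogram)"
```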
| Type | Privacy Protection & Cryptography, Critical Infrastructure |
| Partner | armasuisse |
| Partner contact | Martin Strohmeier |
| EPFL Laboratory | Security and Privacy Engineering Laboratory (SPRING) |
Distributed Privacy-Preserving Insurance Insight-Sharing Platform
The collection and analysis of risk data are essential to the insurance business model. The models for evaluating risk and predicting the events that trigger insurance policies are based on knowledge derived from risk data.
The purpose of this project is to assess the scalability and flexibility of software-based secure-computing techniques in an insurance benchmarking scenario and to demonstrate the range of analytics capabilities they provide. These techniques offer provable technological guarantees that only authorized users can access the global models (fraud and loss models) built from the data of a network of collaborating organizations. The system relies on a fully distributed architecture without a centralized database, and implements advanced privacy-protection techniques based on multiparty homomorphic encryption, which makes it possible to efficiently compute machine-learning models on encrypted distributed data.
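The core idea of computing on data that never leaves encrypted form can be sketched with a toy additively homomorphic scheme (textbook Paillier with tiny, insecure parameters). This is only a conceptual illustration and not the project's actual multiparty homomorphic-encryption stack:

```python
from math import gcd
import random

# Textbook Paillier with tiny, insecure parameters -- conceptual illustration only.

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p=10007, q=10009):
    """Tiny primes for illustration; real keys use thousands of bits."""
    n = p * q
    lam = lcm(p - 1, q - 1)
    g = n + 1                      # standard simplification
    mu = pow(lam, -1, n)
    return (n, g), (lam, mu, n)

def encrypt(pub, m):
    n, g = pub
    while True:
        r = random.randrange(2, n)
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    lam, mu, n = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n

def add_encrypted(pub, c1, c2):
    """Homomorphic addition: the product of ciphertexts decrypts to the sum."""
    n, _ = pub
    return (c1 * c2) % (n * n)

# Each insurer encrypts a local total; the aggregate is computed
# without any party seeing the others' plaintext values.
pub, priv = keygen()
local_totals = [1200, 845, 3310]               # hypothetical per-insurer values
ciphertexts = [encrypt(pub, v) for v in local_totals]
agg = ciphertexts[0]
for c in ciphertexts[1:]:
    agg = add_encrypted(pub, agg, c)
assert decrypt(priv, agg) == sum(local_totals)
```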
| Type | Privacy Protection & Cryptography, Machine Learning, Finance |
| Partner | Swiss RE |
| Partner contact | Sebastian Eckhardt |
| EPFL Laboratory | Laboratory for Data Security (LDS) |
ROBIN – Robust Machine Learning
In communication systems, Deep Neural Networks (DNNs) have achieved promising performance on many tasks, such as modulation recognition. However, these models have been shown to be susceptible to adversarial perturbations, namely imperceptible additive noise crafted to induce misclassification. This raises questions about their security and, more generally, about the trust we can place in model predictions. In this project, we propose to use adversarial training, which consists of fine-tuning the model with adversarial perturbations, to increase the robustness of automatic modulation classification (AMC) models. We show that current state-of-the-art models benefit from adversarial training, which mitigates the robustness issues for some families of modulations. We also use adversarial perturbations to visualize the learned features, and we find that in robust models the signal symbols are shifted towards the nearest classes in constellation space, as in maximum-likelihood methods. This confirms that robust models are not only more secure but also more interpretable, basing their decisions on signal statistics that are relevant to modulation recognition.
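As a rough illustration of the adversarial-training step described above, a minimal PyTorch sketch using FGSM perturbations; the model, data loader, optimizer, and perturbation budget are placeholders, not the project's actual architecture or hyperparameters:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps):
    """Fast Gradient Sign Method: one-step perturbation within an L-inf ball."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_training_epoch(model, loader, optimizer, eps=0.05):
    """Fine-tune on adversarially perturbed batches (eps is an assumed budget)."""
    model.train()
    for x, y in loader:
        x_adv = fgsm_perturb(model, x, y, eps)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```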
| Type | Device & System Security, Machine Learning |
| Partner | armasuisse |
| Partner contact | Gérôme Bovet |
| EPFL Laboratory | Signal Processing Laboratory (LTS4) |
Auditable Sharing and Management of Sensitive Data Across Jurisdictions
This work aims at creating a proof of concept for storing and managing data on a blockchain. It addresses the following two use cases: (i) compliant storage, transfer and access management of (personal) sensitive data, and (ii) compliant cross-border or cross-jurisdiction data sharing.
DEDIS brings a permissioned blockchain and distributed ledger with a fast catch-up mechanism that allows requests to be processed very quickly while remaining secure. It also includes Calypso, a novel approach to encryption and decryption in which no central point of failure can expose the documents to outsiders. Swiss Re brings use cases that will require DEDIS to extend Calypso with data-location policies.
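As a conceptual illustration of the data-location policies mentioned above, access to decryption material could be gated on both the reader's identity and the jurisdiction from which the request originates. The names and structure below are assumptions for illustration, not Calypso's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class DataLocationPolicy:
    """Illustrative policy: who may read a document, and from which jurisdictions."""
    allowed_readers: set = field(default_factory=set)
    allowed_jurisdictions: set = field(default_factory=set)

    def permits(self, reader: str, jurisdiction: str) -> bool:
        return (reader in self.allowed_readers
                and jurisdiction in self.allowed_jurisdictions)

# A write transaction would record the encrypted document together with its policy;
# the secret-management layer would only release decryption material when the
# policy check passes. All names below are hypothetical.
policy = DataLocationPolicy(
    allowed_readers={"swissre:claims-analyst"},
    allowed_jurisdictions={"CH", "EU"},
)
print(policy.permits("swissre:claims-analyst", "CH"))   # True
print(policy.permits("swissre:claims-analyst", "US"))   # False
```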
| Type | Privacy Protection & Cryptography, Blockchains & Smart Contracts, Software Verification |
| Partner | Swiss RE |
| Partner contact | Stephan Schreckenberg |
| EPFL Laboratory | Decentralized Distributed Systems Laboratory (DEDIS) |
MedCo: Collective Protection of Medical Data
MedCo, developed in the LDS lab of Professor Jean-Pierre Hubaux in collaboration with Professor Bryan Ford's DEDIS lab and the Lausanne University Hospital (CHUV), is the first operational system that makes sensitive medical data available for research in a simple, privacy-conscious and secure way. It enables hundreds of clinical sites to collectively protect their data and to securely share them with investigators, without single points of failure. MedCo applies advanced privacy-enhancing techniques such as multiparty homomorphic encryption, secure distributed protocols, and differential privacy.
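One of the listed techniques, differential privacy, can be sketched with the standard Laplace mechanism applied to a count query. The epsilon value and the query are illustrative only; MedCo's actual parameters and noise mechanisms are not reproduced here:

```python
import numpy as np

def noisy_count(true_count: int, epsilon: float = 1.0) -> float:
    """Laplace mechanism for a count query: sensitivity 1, noise scale 1/epsilon."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# A clinical site returns the noisy count instead of the exact patient count,
# bounding what any single query can reveal about one individual.
print(noisy_count(42, epsilon=0.5))
```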
| Type | Privacy Protection & Cryptography, Health |
| Partner | CHUV |
| Partner contact | Nicolas Rosat, Jean-Louis Raisaro |
| EPFL Laboratory | Laboratory for Data Security (LDS) |
Production-Readiness Timeline for Skipchains with onChain secrets
The DEDIS team created a first version of the onChain secrets implementation using its skipchain blockchain. This implementation allows a client to store encrypted documents on a public but permissioned blockchain and to change the access rights to those documents after they have been written to the blockchain. The first implementation has been extensively tested by ByzGen and is ready to be used in a PoC demo.
This project aims to increase the implementation's performance and stability and to make it production-ready. It will also add a more realistic testing platform, making it possible to validate new functionality in a real-world setting and to catch regressions before they are pushed to the stable repository.
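The key property of onChain secrets, changing access rights after a document has been written, can be illustrated with a toy append-only log from which the current access set is derived by replay. The names and structure are assumptions for illustration, not the DEDIS implementation:

```python
from typing import List, Set, Tuple

# Toy model: the ledger is an append-only list of (doc_id, action, reader)
# transactions; the current access set for a document is derived by replay.
Log = List[Tuple[str, str, str]]

def current_readers(log: Log, doc_id: str) -> Set[str]:
    readers: Set[str] = set()
    for d, action, reader in log:
        if d != doc_id:
            continue
        if action == "grant":
            readers.add(reader)
        elif action == "revoke":
            readers.discard(reader)
    return readers

log: Log = [
    ("doc-1", "grant", "alice"),
    ("doc-1", "grant", "bob"),
    ("doc-1", "revoke", "alice"),   # access changed after the document was written
]
print(current_readers(log, "doc-1"))   # {'bob'}
```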
| Type | Privacy Protection & Cryptography, Blockchains & Smart Contracts, Software Verification |
| Partner | ByzGen |
| Partner contact | Marcus Ralph |
| EPFL Laboratory | Decentralized Distributed Systems Laboratory (DEDIS) |
SafeAI
SafeAI aims to develop cyber-security solutions in the context of Artificial Intelligence (AI). With the advent of generative AI, it is possible both to attack AI-enhanced applications with targeted cyberattacks and to generate cyberattacks that are automated and enhanced by AI. The main goal of SafeAI is to develop software that enables the automated generation of adversarial attacks and defences using AI.
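A minimal sketch of the automated-attack side, using projected gradient descent (PGD) against a PyTorch classifier; the model, data, and hyperparameters are placeholders rather than SafeAI's actual tooling:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.01, steps=10):
    """Iterative L-inf attack: gradient-sign steps projected back into an eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(x_adv, x - eps, x + eps)   # projection step
    return x_adv.detach()
```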
| Type | Device & System Security, Machine Learning |
| Partner | CISCO |
| Partner contact | Frank Michaud |
| EPFL Laboratory | Signal Processing Laboratory (LTS4) |
ICRC: Digitalization as Enabler to Re-Connect Families in Time of War
More than one million families are separated due to conflicts. Through the C4DT partnership, the ICRC and EPFL have set themselves the challenge of analysing publicly available data with analytics techniques to identify missing persons who would arguably not have been identified using current, conventional methods. The goal of this project is to facilitate the search for missing individuals by building scalable, accurate systems tailored for that purpose.
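As a small illustration of the kind of analytics involved, fuzzy matching between a tracing request and candidate records can be sketched with the standard library's string similarity. The fields, threshold, and data are hypothetical, and the actual systems are far more involved:

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] based on longest matching subsequences."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def candidate_matches(query_name: str, records, threshold: float = 0.85):
    """Return records whose name is close to the query, best matches first."""
    scored = [(name_similarity(query_name, r["name"]), r) for r in records]
    matches = [t for t in scored if t[0] >= threshold]
    return sorted(matches, key=lambda t: t[0], reverse=True)

records = [
    {"name": "Amina Yusuf", "source": "public list A"},
    {"name": "Aminah Yousef", "source": "public list B"},
    {"name": "Jean Dupont", "source": "public list A"},
]
print(candidate_matches("Amina Yousef", records))
```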
| Type | Machine Learning, Government & Humanitarian |
| Partner | ICRC |