Nov 2018 – Dec 2021

Digitalizing search for missing persons

Partner: ICRC, FLO

Partner contact: Fabrice Lauper

EPFL laboratory: Distributed Information Systems Laboratory (LSIR)

EPFL contact: Prof. Karl Aberer, Rémi Lebret

Armed conflicts, violence and migration cause large-scale family separation, the breakdown of family links and missing persons. Affected people need help learning what happened to their relatives and reconnecting with them as rapidly as possible. Through this partnership, the ICRC and LSIR have set themselves the challenge of analysing publicly available data with analytics techniques to identify missing persons who would arguably not be found using current, conventional methods. The goal of the project is to facilitate the search for missing individuals by building scalable, accurate systems tailored to that purpose.

Type Machine Learning, Government & Humanitarian
Jan 2019 – Dec 2021

TTL-MSR: Taming Tail-Latency for Microsecond-scale RPCs

Partner: Microsoft

Partner contact: Irene Zhang, Dan Ports, Marios Kogias

EPFL laboratory: Data Center Systems Laboratory (DCSL)

EPFL contact: Prof. Edouard Bugnion, Konstantinos Prasopoulos

We consider a web-scale application within a datacenter that comprises hundreds of software components deployed on thousands of servers. These components communicate with each other via Remote Procedure Calls (RPCs), with the cost of an individual RPC service typically measured in microseconds. The end-user performance, availability and overall efficiency of the entire system depend largely on the efficient delivery and scheduling of these RPCs. We propose to make RPCs first-class citizens of datacenter deployment. This requires revisiting the overall architecture, the application API and the network protocols. We are also building the tools necessary to scientifically evaluate microsecond-scale services.
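Why microsecond-scale scheduling matters can be illustrated with a toy fan-out simulation (all numbers hypothetical, not project measurements): when a request fans out to many RPCs and completes only when the slowest returns, even a rare slow server dominates end-to-end latency.

```python
import random

def fanout_latency(n_rpcs, p_slow=0.01, fast_us=10, slow_us=1000):
    """A request finishes only when the slowest of its fan-out RPCs returns."""
    return max(slow_us if random.random() < p_slow else fast_us
               for _ in range(n_rpcs))

random.seed(1)
trials = [fanout_latency(100) for _ in range(10_000)]
frac_slow = sum(t == 1000 for t in trials) / len(trials)
# With a 1% chance of a slow RPC, roughly 1 - 0.99**100, i.e. about 63%,
# of 100-way fan-out requests are dragged to the 1000 us tail.
```

This is the classic "tail at scale" effect: per-RPC p99 behavior becomes the median behavior of the fanned-out request.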

Type Digital Information
Jan 2019 – Dec 2021

Monitoring, Modelling, and Modifying Dietary Habits and Nutrition Based on Large-Scale Digital Traces

Partner: Microsoft

Partner contact: Ryen W. White

EPFL laboratory: Data Science Lab

EPFL contact: Prof. Robert West, Kristina Gligoric

The overall goal of this project is to develop methods for monitoring, modeling, and modifying dietary habits and nutrition based on large-scale digital traces. We will leverage data from both EPFL and Microsoft to shed light on dietary habits from different angles and at different scales.
Our agenda broadly decomposes into three sets of research questions: (1) monitoring and modeling, (2) quantifying and correcting biases, and (3) modifying dietary habits.
Applications of our work will include new methods for conducting population nutrition monitoring, recommending better-personalized eating practices, optimizing food offerings, and minimizing food waste.

Type Machine Learning, Health
Apr 2018 – Dec 2021

Data Protection in Personalized Health

Partner: CHUV, ETH

Partner contact: Prof. Jacques Fellay (EPFL/CHUV), Prof. Effy Vayena (ETH)

EPFL laboratory: Laboratory for Data Security (LDS)

EPFL contact: Prof. Jean-Pierre Hubaux

P4 (Predictive, Preventive, Personalized and Participatory) medicine promises to revolutionize healthcare by providing better diagnoses and targeted preventive and therapeutic measures. To enable effective P4 medicine, DPPH defines an optimal balance between usability, scalability and data protection, and develops the required computing tools. The target result of the project is a platform of software packages that seamlessly enables clinical and genomic data sharing and exploitation across a federation of medical institutions throughout Switzerland. The platform is scalable, secure, responsible and privacy-conscious, and can integrate widespread cohort-exploration tools (e.g., i2b2 and TranSMART).

Type Privacy Protection & Cryptography, Machine Learning, Health
Sep 2020 – Dec 2021

Secure Distributed-Learning on Threat Intelligence

Partner: armasuisse

Partner contact: Alain Mermoud

EPFL laboratory: Laboratory for Data Security (LDS)

EPFL contact: Prof. Jean-Pierre Hubaux, Juan Troncoso, Romain Bouyé

Cyber-security information is often extremely sensitive and confidential, which introduces a tradeoff between the benefits of improved threat-response capabilities and the drawbacks of disclosing national-security-related information to foreign agencies or institutions. The result is the retention of valuable information (the free-rider problem), which considerably limits the efficacy of data sharing. The purpose of this project is to resolve this information-sharing tradeoff by enabling more accurate insights on larger amounts of more relevant, collective threat-intelligence data.
This project will enable institutions to build better models by securely collaborating on valuable sensitive data that is not normally shared. This will expand the range of available intelligence, leading to new and better threat analyses and predictions.
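One common pattern behind such collaboration is federated learning: each institution trains on its own data and only model parameters are exchanged. A minimal sketch of federated averaging on a toy least-squares problem (data and names invented; the project's actual protocols add cryptographic protection on top of this idea):

```python
import numpy as np

def local_step(w, X, y, lr=0.05):
    """One local gradient step on least squares; raw data never leaves the party."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def federated_round(w_global, parties, lr=0.05):
    """Each party refines the global model locally; only the models are averaged."""
    local_models = [local_step(w_global.copy(), X, y, lr) for X, y in parties]
    return np.mean(local_models, axis=0)

# Two parties hold disjoint slices of data generated by y = 2x.
parties = [(np.array([[1.0], [2.0]]), np.array([2.0, 4.0])),
           (np.array([[3.0], [4.0]]), np.array([6.0, 8.0]))]
w = np.zeros(1)
for _ in range(300):
    w = federated_round(w, parties)
# w converges towards the shared underlying coefficient 2.
```

Averaging models instead of pooling records is what keeps each party's raw threat data on its own premises.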

Type Privacy Protection & Cryptography, Machine Learning
Sep 2019 – Nov 2021

Analysis of encryption techniques in ACARS communications

Partner: armasuisse

Partner contact: Martin Strohmeier

EPFL laboratory: Security and Privacy Engineering Laboratory (SPRING)

EPFL contact: Prof. Carmela Troncoso, Wouter Lueks

In this collaboration (structured as two projects), we develop an automated tool to flag messages sent by aircraft that are suspected of using weak encryption mechanisms. We focus mainly on detecting the use of classical ciphers such as substitution and transposition ciphers. The tool flags suspicious messages and identifies the cipher family. We also aim to develop automated decryption techniques for the weakest ciphers.
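As a rough illustration of the statistics such a tool can rely on (a hypothetical sketch with guessed thresholds, not the actual detector): the index of coincidence separates classical ciphers from stronger ones, and a chi-squared distance to English letter frequencies separates substitution ciphers (which permute the frequency histogram) from transposition ciphers (which preserve it).

```python
import string
from collections import Counter

# Reference English letter frequencies in percent (standard published table).
ENGLISH_FREQ = {
    'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7, 's': 6.3,
    'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8, 'u': 2.8, 'm': 2.4,
    'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0, 'p': 1.9, 'b': 1.5, 'v': 1.0,
    'k': 0.8, 'j': 0.15, 'x': 0.15, 'q': 0.1, 'z': 0.07,
}

def _letters(text):
    return [c for c in text.lower() if c in string.ascii_lowercase]

def index_of_coincidence(text):
    """~0.067 for English-like letter histograms, ~0.038 for uniform ones."""
    counts = Counter(_letters(text))
    n = sum(counts.values())
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

def chi_squared_per_letter(text):
    """Chi-squared distance to English frequencies, normalized by text length."""
    counts = Counter(_letters(text))
    n = sum(counts.values())
    stat = sum((counts.get(ch, 0) - n * pct / 100) ** 2 / (n * pct / 100)
               for ch, pct in ENGLISH_FREQ.items())
    return stat / n

def classify(ciphertext, ioc_floor=0.055, chi_cut=1.0):
    """Heuristic cipher-family flag; thresholds here are illustrative guesses."""
    if index_of_coincidence(ciphertext) < ioc_floor:
        return "polyalphabetic or stronger"      # flat letter histogram
    if chi_squared_per_letter(ciphertext) > chi_cut:
        return "monoalphabetic substitution"     # spiky but permuted histogram
    return "transposition or plaintext"          # English frequencies preserved

sample = ("the committee met in the morning to review the latest reports from "
          "the field and agreed that the current methods were reliable but slow "
          "and that further automation of the analysis would save them time")
shifted = "".join(chr((ord(c) - 97 + 7) % 26 + 97) if c.islower() else c
                  for c in sample)   # Caesar shift: a substitution cipher
```

A real detector over ACARS traffic would need to cope with short messages and mixed alphanumeric payloads, but the same statistics are the natural starting point.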

Type Privacy Protection & Cryptography, Critical Infrastructure
Jul 2021 – Nov 2021

Causal Inference Using Observational Data: A Review of Modern Methods

Partner: armasuisse

Partner contact: Albert Blarer

EPFL laboratory: Chair of Biostatistics

EPFL contact: Prof. Mats J. Stensrud

In this report, we consider several real-life scenarios that may provoke causal research questions. As we introduce concepts in causal inference, we refer back to these case studies and other examples to clarify ideas and to illustrate how researchers approach such topics using clear causal thinking.
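A central idea such reviews cover is backdoor adjustment (standardization). A toy worked example with invented numbers: binary treatment A, outcome Y, and a confounder L that drives both treatment choice and outcome, so the naive contrast hides the true effect.

```python
# Hypothetical conditional means E[Y | A=a, L=l] and confounder marginal P(L=l).
cond_mean = {(1, 0): 0.6, (0, 0): 0.5,   # stratum L=0
             (1, 1): 0.9, (0, 1): 0.7}   # stratum L=1
p_l = {0: 0.5, 1: 0.5}
p_a1_given_l = {0: 0.8, 1: 0.2}          # treatment assignment depends on L

def adjusted_ate(cond_mean, p_l):
    """g-formula: ATE = sum_l [E(Y|A=1,L=l) - E(Y|A=0,L=l)] * P(L=l)."""
    return sum((cond_mean[(1, l)] - cond_mean[(0, l)]) * p
               for l, p in p_l.items())

def naive_diff(cond_mean, p_l, p_a1_given_l):
    """Unadjusted contrast E[Y|A=1] - E[Y|A=0], confounded by L."""
    num1 = sum(cond_mean[(1, l)] * p_a1_given_l[l] * p_l[l] for l in p_l)
    den1 = sum(p_a1_given_l[l] * p_l[l] for l in p_l)
    num0 = sum(cond_mean[(0, l)] * (1 - p_a1_given_l[l]) * p_l[l] for l in p_l)
    den0 = sum((1 - p_a1_given_l[l]) * p_l[l] for l in p_l)
    return num1 / den1 - num0 / den0

# Adjustment recovers the true effect (0.15); the naive contrast shows 0.0.
```

With these numbers the treated and untreated groups have identical mean outcomes (0.66 each), yet within every stratum of L the treatment helps: exactly the kind of paradox that clear causal thinking resolves.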

Type Machine Learning
Dec 2020 – Jun 2021

Distributed Privacy-Preserving Insurance Insight-Sharing Platform

Partner: Swiss Re

Partner contact: Sebastian Eckhardt

EPFL laboratory: Laboratory for Data Security (LDS)

EPFL contact: Prof. Jean-Pierre Hubaux, Juan Troncoso, Romain Bouyé

The collection and analysis of risk data are essential to the insurance business model. The models for evaluating risk and predicting events that trigger insurance policies are built on knowledge derived from risk data.
The purpose of this project is to assess the scalability and flexibility of software-based secure-computing techniques in an insurance benchmarking scenario and to demonstrate the range of analytics capabilities they provide. These techniques offer provable technological guarantees that only authorized users can access the global models (fraud and loss models) built from the data of a network of collaborating organizations. The system relies on a fully distributed architecture without a centralized database and implements advanced privacy-protection techniques based on multiparty homomorphic encryption, which makes it possible to efficiently compute machine-learning models on encrypted, distributed data.
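The flavour of such techniques can be conveyed with additive secret sharing, a simpler relative of the multiparty homomorphic encryption the project actually uses (this sketch is purely illustrative): each insurer splits its confidential figure into random-looking shares, and only the aggregate is ever reconstructed.

```python
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(value, n_parties):
    """Split value into n random shares that sum to value mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Three insurers share their (confidential) loss figures.
losses = [120, 340, 560]
matrix = [share(v, len(losses)) for v in losses]   # row i: shares of insurer i
# Participant j sums the shares it received (column j); each partial sum
# is individually uniformly random and reveals nothing on its own.
partials = [sum(row[j] for row in matrix) % PRIME for j in range(len(losses))]
total = sum(partials) % PRIME
# total reconstructs 1020 without any single loss figure being revealed.
```

Homomorphic encryption achieves the same "compute on hidden values" property with ciphertexts instead of shares, and additionally supports multiplications needed for richer fraud and loss models.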

Type Privacy Protection & Cryptography, Machine Learning, Finance
Mar 2020 – Feb 2021

ROBIN – Robust Machine Learning

Partner: armasuisse

Partner contact: Gérôme Bovet

EPFL laboratory: Signal Processing Laboratory (LTS4)

EPFL contact: Prof. Pascal Frossard

In communication systems there are many tasks, such as modulation recognition, for which Deep Neural Networks (DNNs) achieve promising performance. However, these models have been shown to be susceptible to adversarial perturbations: imperceptible additive noise crafted to induce misclassification. This raises questions about security, but also about the general trust in model predictions. In this project, we propose to use adversarial training, which consists of fine-tuning the model with adversarial perturbations, to increase the robustness of automatic modulation recognition (AMC) models. We show that current state-of-the-art models benefit from adversarial training, which mitigates the robustness issues for some families of modulations. We use adversarial perturbations to visualize the learned features, and we find that in robust models the signal symbols are shifted towards the nearest classes in constellation space, as in maximum-likelihood methods. This confirms that robust models are not only more secure but also more interpretable, as they build their decisions on signal statistics that are relevant to modulation recognition.
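The core mechanics can be sketched with the Fast Gradient Sign Method (FGSM), a standard way of crafting such perturbations, applied here to a toy logistic-regression classifier rather than the project's modulation-recognition DNNs (everything below is illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, b, y, eps):
    """FGSM: nudge x by eps in the direction that maximally raises the loss."""
    grad_x = (sigmoid(x @ w + b) - y) * w    # d(log-loss)/dx for a linear model
    return x + eps * np.sign(grad_x)

def adversarial_train(X, Y, eps=0.3, lr=0.5, steps=400):
    """Fit on adversarially perturbed inputs instead of clean ones."""
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=X.shape[1]), 0.0
    for _ in range(steps):
        X_adv = np.array([fgsm(x, w, b, y, eps) for x, y in zip(X, Y)])
        p = sigmoid(X_adv @ w + b)
        w -= lr * X_adv.T @ (p - Y) / len(Y)
        b -= lr * np.mean(p - Y)
    return w, b
```

Training against the perturbed batch pushes the decision boundary away from the data, trading a little clean accuracy for robustness within the eps-ball; the project applies the same recipe to AMC models.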

Type Device & System Security, Machine Learning
Apr 2019 – Apr 2020

Auditable Sharing and Management of Sensitive Data Across Jurisdictions

Partner: Swiss Re

Partner contact: Stephan Schreckenberg

EPFL laboratory: Decentralized Distributed Systems Laboratory (DEDIS)

EPFL contact: Prof. Bryan Ford

This work aims at creating a proof of concept for storing and managing data on a blockchain. It addresses the following two use cases: (i) compliant storage, transfer and access management of (personal) sensitive data and (ii) compliant cross-border or cross-jurisdiction data sharing.

DEDIS brings to the table a permissioned blockchain and distributed ledger with a fast catch-up mechanism that allows very fast processing of requests while staying secure. It also includes a novel approach to encryption and decryption (Calypso), in which no central point of failure can let documents be disclosed to outsiders. Swiss Re brings interesting use cases that will require DEDIS to extend Calypso to implement data-location policies.

Type Privacy Protection & Cryptography, Blockchains & Smart Contracts, Software Verification
Mar 2019 – Mar 2020

MedCo: Collective Protection of Medical Data

Partner: CHUV

Partner contact: Nicolas Rosat, Jean-Louis Raisaro

EPFL laboratory: Laboratory for Data Security (LDS)

EPFL contact: Prof. Jean-Pierre Hubaux

MedCo, developed in the LDS lab of Professor Jean-Pierre Hubaux in collaboration with Professor Bryan Ford's DEDIS lab and the Lausanne University Hospital (CHUV), is the first operational system that makes sensitive medical data available for research in a simple, privacy-conscious and secure way. It enables hundreds of clinical sites to collectively protect their data and to securely share them with investigators, without single points of failure. MedCo applies advanced privacy-enhancing techniques such as multiparty homomorphic encryption, secure distributed protocols and differential privacy.
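Of these techniques, differential privacy is the easiest to illustrate in a few lines. A minimal sketch (hypothetical, not MedCo's implementation) of the Laplace mechanism for releasing a patient count, whose sensitivity is 1 because adding or removing one patient changes it by at most 1:

```python
import random

def laplace_noise(scale):
    """Laplace(0, scale) sampled as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(true_count, epsilon):
    """Release a count with epsilon-differential privacy.
    A count query has sensitivity 1, so the noise scale is 1/epsilon."""
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller epsilon means stronger privacy and noisier answers; systems like MedCo also track the privacy budget spent across successive cohort-exploration queries.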

Type Privacy Protection & Cryptography, Health
Nov 2018 – Oct 2019

Production-Readiness Timeline for Skipchains with onChain secrets

Partner: ByzGen

Partner contact: Marcus Ralphs

EPFL laboratory: Decentralized Distributed Systems Laboratory (DEDIS)

EPFL contact: Prof. Bryan Ford

The DEDIS team created a first version of the onChain-secrets implementation using its skipchain blockchain. This implementation allows a client to store encrypted documents on a public but permissioned blockchain and to change the access rights to those documents after they have been written to the blockchain. The first implementation has been extensively tested by ByzGen and is ready to be used in a PoC demo.
This project aims at increasing the implementation's performance and stability and at making it production-ready. Further, it will add a more realistic testing platform that allows checking the validity of new functionality in a real-world setting and finding regressions before they are pushed to the stable repository.
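The append-only property that makes such access-right changes auditable can be sketched with a minimal hash-linked chain (illustrative only; DEDIS skipchains add forward links, collective signatures and much more): a policy update is appended as a new block, never edited in place.

```python
import hashlib
import json

def make_block(prev_hash, payload):
    """Append a block whose hash commits to the payload and the previous hash."""
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    digest = hashlib.sha256(body.encode()).hexdigest()
    return {"hash": digest, "prev": prev_hash, "payload": payload}

def verify(chain):
    """Recompute every hash and link; any edit to past blocks breaks the chain."""
    prev = ""
    for blk in chain:
        body = json.dumps({"prev": blk["prev"], "payload": blk["payload"]},
                          sort_keys=True)
        if blk["prev"] != prev or \
           hashlib.sha256(body.encode()).hexdigest() != blk["hash"]:
            return False
        prev = blk["hash"]
    return True

# Changing who may read a document is a new block, so the old policy
# remains visible and auditable forever.
chain = [make_block("", {"doc": "report.enc", "readers": ["alice"]})]
chain.append(make_block(chain[-1]["hash"],
                        {"doc": "report.enc", "readers": ["alice", "bob"]}))
```

onChain secrets combines this auditable ledger with threshold encryption so that the documents themselves stay confidential even though the access log is public.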

Type Privacy Protection & Cryptography, Blockchains & Smart Contracts, Software Verification
Jul 2018 – Oct 2018

SafeAI

Partner: Cisco

Partner contact: Frank Michaud

EPFL laboratory: Signal Processing Laboratory (LTS4)

EPFL contact: Prof. Pascal Frossard, Apostolos Modas

SafeAI aims to develop cyber-security solutions in the context of Artificial Intelligence (AI). With the advent of generative AI, it is possible to mount targeted cyberattacks on AI-enhanced applications, and also to generate automated cyberattacks that are themselves enhanced by AI. The main goal of SafeAI is to develop software that enables the automated generation of adversarial attacks and defences using AI.

Type Device & System Security, Machine Learning