ROBIN – Robust Machine Learning

Date: 01/03/2020 - 28/02/2021
Type: Device & System Security, Machine Learning

Partner: armasuisse
Partner contact: Gérôme Bovet
EPFL laboratory: Signal Processing Laboratory (LTS4)
EPFL contact: Prof. Pascal Frossard

Communication systems are important for both civil and military applications. Deep learning has not only proved its usefulness across multiple fields of research in the last decade, but also presents numerous advantages that make it attractive for wireless communication systems. Compared with previous state-of-the-art approaches, which are mainly based on feature extraction from the signals, Deep Neural Networks (DNNs) scale well with large quantities of data and are capable of end-to-end learning, which simplifies the model architecture and improves performance. DNNs have been successful in multiple tasks such as wireless resource allocation, anomaly detection, and automatic modulation recognition (AMC), which is the main focus of this project. AMC is required to interpret received data and enables applications ranging from detecting radio stations and managing spectrum resources to eavesdropping on and interfering with radio communications.
However, recent studies have highlighted security issues in DNN models. Specifically, they have been shown to be vulnerable to adversarial examples: real data samples modified by a carefully crafted yet almost imperceptible perturbation, known as an adversarial perturbation. This security issue, combined with the black-box nature of DNNs, raises a question: can we trust the predictions of neural networks? The fact that a negligible change in the input can change the prediction implies that DNNs base their decisions on features that do not seem to be aligned with the target task. Understanding the reasons for such vulnerabilities and making systems more robust form an active line of research.
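To make the idea concrete, the following is a minimal sketch (not the project's actual code) of how an adversarial example can be crafted with the fast gradient sign method (FGSM) against a toy logistic-regression classifier; the weights, labels, and the `fgsm_perturbation` helper are illustrative assumptions:

```python
import numpy as np

def fgsm_perturbation(x, w, b, y, eps):
    """Craft an FGSM perturbation for a logistic-regression classifier.

    x : input sample, w/b : model weights, y : true label in {0, 1},
    eps : L-infinity budget of the perturbation (illustrative helper).
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid output
    grad = (p - y) * w             # gradient of cross-entropy loss w.r.t. x
    return eps * np.sign(grad)     # small step that increases the loss

# Toy example: a 2-D sample correctly classified as class 1.
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.5, -0.5])          # w @ x = 1.0 -> predicted class 1
delta = fgsm_perturbation(x, w, b, y=1, eps=0.6)
x_adv = x + delta

print(w @ x > 0)       # True: the clean sample is classified correctly
print(w @ x_adv > 0)   # False: the small perturbation flips the prediction
```

Even though each component of the perturbation is bounded by `eps`, the classifier's decision changes, which is exactly the vulnerability described above.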
In this project, we achieved the following:

  • Successfully trained a model that is robust against adversarial examples in AMC, using a novel methodology based on adversarial training;
  • Designed a new framework, based on the specific properties of communication systems, that better captures and measures how secure AMC models are against adversarial attacks in realistic settings, thereby differentiating between practical security and robustness in AMC;
  • Performed an extensive analysis of the robustness and security of state-of-the-art AMC models on popular modulation recognition datasets;
  • Promoted good practices for tackling robustness in AMC (e.g., constraining the perturbation relative to the signal energy).
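The last point, constraining the perturbation relative to the signal energy, can be sketched as rescaling a perturbation to a target perturbation-to-signal ratio (PSR) in dB, so that attack budgets stay comparable across signals of different power. This is an illustrative sketch; the function name and exact formulation are assumptions, not the project's implementation:

```python
import numpy as np

def scale_to_psr(signal, perturbation, psr_db):
    """Rescale a perturbation so its average power sits psr_db below
    the signal's average power (illustrative helper).
    """
    sig_power = np.mean(signal ** 2)
    pert_power = np.mean(perturbation ** 2)
    target_power = sig_power * 10 ** (psr_db / 10.0)   # dB -> linear ratio
    return perturbation * np.sqrt(target_power / pert_power)

rng = np.random.default_rng(0)
signal = rng.standard_normal(1024)   # stand-in for a received signal
pert = rng.standard_normal(1024)     # unconstrained perturbation

# Force the perturbation 20 dB below the signal energy.
pert_scaled = scale_to_psr(signal, pert, psr_db=-20.0)
ratio_db = 10 * np.log10(np.mean(pert_scaled ** 2) / np.mean(signal ** 2))
print(round(ratio_db, 6))   # -20.0
```

Tying the budget to signal energy, rather than to a fixed norm, reflects how perceptibility works in communication systems: the same absolute perturbation is far more noticeable on a weak signal than on a strong one.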