State-of-the-art modulation recognition architectures rely on deep learning models. These models are vulnerable to adversarial perturbations: imperceptible additive noise crafted to induce misclassification, which raises serious concerns about safety, security, and performance guarantees. One of the most effective ways to make a model robust is adversarial training, in which the model is fine-tuned on adversarial perturbations. However, this method has several drawbacks: it is computationally costly, suffers from convergence instabilities, and does not protect against multiple types of corruption at the same time. The objective of this project is to develop improved and effective adversarial training methods that address these drawbacks.
ADAN: Adaptive Adversarial Training for Robust Machine Learning
| Date | 01/03/2021 - 28/02/2022 |
| Type | Device & System Security, Machine Learning |
| Partner | armasuisse |
| Partner contact | Gérôme Bovet |
| EPFL Laboratory | Signal Processing Laboratory (LTS4) |
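For context, below is a minimal sketch of the standard adversarial training loop the project builds on: adversarial examples are generated on the fly and the model is fine-tuned on them. The FGSM attack, the toy CNN, the synthetic I/Q batch, and all hyper-parameters are illustrative assumptions, not the project's actual model, dataset, or attack budget.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for a modulation-recognition classifier:
# 2-channel (I/Q) input of length 128, 4 modulation classes.
model = nn.Sequential(
    nn.Conv1d(2, 16, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(16, 4),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def fgsm_perturb(x, y, eps):
    """Craft an FGSM perturbation with L-infinity budget eps (illustrative attack choice)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).detach()

# Synthetic batch as a placeholder for real I/Q samples.
x = torch.randn(32, 2, 128)
y = torch.randint(0, 4, (32,))

for step in range(100):
    # Generate adversarial examples and fine-tune the model on them.
    x_adv = fgsm_perturb(x, y, eps=0.05)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
```

This single-attack loop also illustrates the drawbacks noted above: each training step pays the extra cost of crafting perturbations, and robustness is only encouraged against the one corruption type used during training.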