ARNO: Adversarial Robustness via Knowledge Distillation

State-of-the-art architectures for modulation recognition are typically based on deep learning models. Recently, however, these models have been shown to be vulnerable to very small, carefully crafted perturbations of their inputs, which raises serious questions about safety, security, and performance guarantees. While adversarial training can improve a network's robustness, a large gap remains between the model's performance on clean samples and on perturbed ones. Recent experiments suggest that the data used during training is an important factor in a model's susceptibility. The objective of this project is therefore to study how the selection, cleaning, and preprocessing of the samples used during training affect robustness.
| Date | 01/03/2022 - 28/02/2023 |
| Type | Device & System Security, Machine Learning |
| Partner | armasuisse |
| Partner contact | Gérôme Bovet |
| EPFL Laboratory | Signal Processing Laboratory (LTS4) |
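To illustrate the kind of vulnerability described above, the sketch below applies a standard one-step attack, the Fast Gradient Sign Method (FGSM), to a toy modulation classifier. This is a minimal, hedged example, not the project's actual code: `ToyModulationNet`, `fgsm_perturb`, the I/Q frame shapes, and the random data are all illustrative placeholders, and FGSM stands in for whatever attacks the project actually evaluates.

```python
# Minimal sketch (assumed PyTorch setup): a toy 1D-CNN over I/Q samples
# and an FGSM perturbation showing how a small, carefully crafted change
# to the input can flip the predicted modulation class.
import torch
import torch.nn as nn

class ToyModulationNet(nn.Module):
    """Placeholder classifier: 2 input channels (I and Q) -> class logits."""
    def __init__(self, num_classes: int = 11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).squeeze(-1))

def fgsm_perturb(model, x, y, eps=0.01):
    """One-step FGSM: nudge each input in the direction that maximally
    increases the classification loss, with per-element magnitude eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

model = ToyModulationNet()
x = torch.randn(8, 2, 128)       # batch of 8 placeholder I/Q frames
y = torch.randint(0, 11, (8,))   # placeholder labels
x_adv = fgsm_perturb(model, x, y)
# Fraction of predictions that survive the perturbation:
print((model(x).argmax(1) == model(x_adv).argmax(1)).float().mean())
```

Adversarial training, mentioned above, amounts to feeding such perturbed samples (e.g., `x_adv` in place of `x`) back into the training loss; the gap this project targets is that models trained this way still perform markedly worse on perturbed inputs than on clean ones.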