ARFON: Adversarial Robustness of Foundation Models

State-of-the-art architectures in many software applications and critical infrastructures are based on deep learning models. These models have been shown to be highly vulnerable to small, carefully crafted perturbations, which raises fundamental questions about safety, security, and performance guarantees. Several defense mechanisms have been developed in recent years to make models more robust against data perturbations and targeted attacks; the most effective to date is adversarial training, in which the model is fine-tuned on a training set augmented with suitably perturbed samples.
| Date | 07/03/2025 - 02/11/2025 |
| Type | Device & System Security, Machine Learning |
| Partner | armasuisse |
| Partner contact | Gérôme Bovet |
| EPFL Laboratory | Signal Processing Laboratory |
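The adversarial training described in the summary can be sketched in a minimal, self-contained form. The example below is a toy illustration (not the project's actual method): it trains a logistic-regression classifier on synthetic 2-D data and, at each epoch, augments the batch with FGSM-style adversarial samples, i.e. inputs perturbed along the sign of the loss gradient with respect to the input. All names, the dataset, and the hyperparameters (`eps`, `lr`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dataset: two features, label from a linear rule.
X = rng.normal(size=(200, 2)) + np.array([[1.0, 1.0]])
y = (X[:, 0] + X[:, 1] > 2.0).astype(float)

w = np.zeros(2)
b = 0.0
lr, eps = 0.1, 0.2  # eps bounds the L-inf adversarial perturbation

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(100):
    # FGSM-style adversarial examples: for logistic loss, the gradient
    # of the loss w.r.t. the input x is (p - y) * w.
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]
    X_adv = X + eps * np.sign(grad_x)

    # Adversarial training: fit on clean + perturbed samples together.
    X_aug = np.vstack([X, X_adv])
    y_aug = np.concatenate([y, y])
    p_aug = sigmoid(X_aug @ w + b)
    w -= lr * (X_aug.T @ (p_aug - y_aug)) / len(y_aug)
    b -= lr * (p_aug - y_aug).mean()

acc = float(((sigmoid(X @ w + b) > 0.5) == y).mean())
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

In practice the same loop is run with a deep model and autograd (e.g. computing the input gradient with backpropagation), and stronger multi-step attacks such as PGD are typically used to craft the perturbed samples.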