armasuisse

TMM – Leveraging Language Models for Technology Landscape Monitoring

The objective of the project is to perform online monitoring of technologies and technology actors in publicly accessible information sources. The monitoring concerns the early detection of mentions of new technologies, of new actors in the technology space, and of new relations between technologies and technology actors…
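
As a minimal sketch of how such monitoring could start (the model, entity labels, and example text below are assumptions for illustration, not the project's actual pipeline), an off-the-shelf language model can flag organisation- and technology-like mentions in a news snippet:

```python
# Hypothetical sketch: flag organisation/technology-like mentions in a news snippet
# with a generic off-the-shelf NER model (model choice and example text are assumptions).
from transformers import pipeline

ner = pipeline("token-classification",
               model="dslim/bert-base-NER",      # placeholder general-purpose NER model
               aggregation_strategy="simple")    # merge word pieces into entity spans

text = ("QuantumLeap AG announced a partnership with ETH Zurich "
        "to develop post-quantum key-exchange hardware.")

for ent in ner(text):
    # Each span carries a label (ORG, MISC, ...) and a confidence score; ORG/MISC
    # spans are candidate technology actors and technologies for further tracking.
    print(f"{ent['entity_group']:>5}  {ent['score']:.2f}  {ent['word']}")
```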

ADHes: Attacks and Defenses on FPGA-CPU Heterogeneous Systems

FPGAs are essential components in many computing systems. Together with conventional CPUs, they are deployed in various critical systems, such as wireless base stations, satellites, radars, electronic warfare platforms, and data centers. Both FPGAs and CPUs have security vulnerabilities, and integrating them presents new attack opportunities on both sides. In this…

ARNO: Adversarial Robustness via Knowledge Distillation

State-of-the-art architectures for modulation recognition are typically based on deep learning models. However, these models have recently been shown to be vulnerable to very small, carefully crafted perturbations, which raises serious questions about safety, security, and performance guarantees at large. While adversarial training can improve the…
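
A minimal FGSM-style sketch, using a stand-in PyTorch classifier (illustrative only; the project's distillation-based approach is not shown here), of how an imperceptible additive perturbation is crafted:

```python
# Illustrative FGSM sketch with a stand-in classifier (not the project's method):
# a small additive perturbation aligned with the loss gradient can change the output.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))  # stand-in model
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 128, requires_grad=True)  # stand-in input features
y = torch.tensor([3])                        # placeholder true class

loss_fn(model(x), y).backward()              # gradient of the loss w.r.t. the input

eps = 0.05                                   # perturbation budget (small by design)
x_adv = x + eps * x.grad.sign()              # FGSM step: follow the gradient sign

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```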

MULAN: Adversarial Attacks in Neural Machine Translation Systems

Recently, deep neural networks have been applied in many different domains thanks to their strong performance. However, these models have been shown to be highly vulnerable to adversarial examples: inputs that differ only slightly from the original yet mislead the target model into producing wrong outputs. Various…
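
A toy sensitivity probe, assuming an off-the-shelf translation model (the model and sentences are placeholders, not the project's attack method), showing how even a one-character edit to the source can change the translation output:

```python
# Toy sensitivity probe with a placeholder translation model (not the project's
# attack method): a single-character edit in the source may alter the translation.
from transformers import pipeline

translate = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")  # placeholder model

clean = "The commander approved the new encryption protocol."
perturbed = "The commandr approved the new encryption protocol."  # one-character edit

print("clean    :", translate(clean)[0]["translation_text"])
print("perturbed:", translate(perturbed)[0]["translation_text"])
```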

Technology Monitoring and Management (TMM)

The objective of the TMM project is to identify, at an early stage, the risks associated with new technologies and develop solutions to ward off such threats. It also aims to assess existing products and applications to pinpoint vulnerabilities. In that process, artificial intelligence and machine learning will play an…

ADAN: Adaptive Adversarial Training for Robust Machine Learning

State-of-the-art architectures for modulation recognition use deep learning models. These models are vulnerable to adversarial perturbations, imperceptible additive noise crafted to induce misclassification, which raises serious questions about safety, security, and performance guarantees at large. One of the best ways to make the model robust is to use…
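
A sketch of one standard FGSM-based adversarial-training step with a stand-in PyTorch model (assumed here for illustration; the project's adaptive scheme is not described in this summary):

```python
# Sketch of one standard FGSM-based adversarial-training step with a stand-in
# PyTorch model; the project's adaptive variant is not specified in this summary.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
eps = 0.05  # perturbation budget

def train_step(x, y):
    # 1) craft adversarial examples against the current model
    x = x.clone().requires_grad_(True)
    loss_fn(model(x), y).backward()
    x_adv = (x + eps * x.grad.sign()).detach()

    # 2) update the model on the adversarial batch
    opt.zero_grad()
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()

print(train_step(torch.randn(32, 128), torch.randint(0, 10, (32,))))
```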

UNA: Universal Adversarial Perturbations in NLP

Recently, deep neural networks have been applied in many different domains thanks to their strong performance. However, these models have been shown to be highly vulnerable to adversarial examples: inputs that differ only slightly from the original yet mislead the target model into producing wrong outputs. Various…
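
A concept-only sketch of the universal setting, in which one fixed, input-agnostic trigger is reused across many inputs (the trigger, model, and sentences below are placeholders; finding an effective trigger requires a gradient-guided search not shown here):

```python
# Concept-only sketch: a universal attack prepends one fixed, input-agnostic trigger
# to many inputs. Trigger and model below are placeholders; finding an effective
# trigger requires a gradient-guided search that is not shown here.
from transformers import pipeline

clf = pipeline("sentiment-analysis",
               model="distilbert-base-uncased-finetuned-sst-2-english")  # placeholder model

trigger = "xq zr blit"  # illustrative placeholder, not a working trigger
inputs = ["The new radio performed flawlessly in the field test.",
          "Deployment was delayed by a minor firmware issue."]

for text in inputs:
    clean = clf(text)[0]["label"]
    attacked = clf(trigger + " " + text)[0]["label"]
    print(f"{clean:>8} -> {attacked:<8} | {text}")
```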
