armasuisse

ADAN: Adaptive Adversarial Training for Robust Machine Learning (2024)

State-of-the-art architectures for modulation recognition are based on deep learning models. These models are vulnerable to adversarial perturbations, that is, imperceptible additive noise crafted to induce misclassification, which raises serious questions about safety, security, and performance guarantees at large. One of the best ways to make the model robust is to use…
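As a minimal illustration of how such perturbations are crafted, the sketch below applies the fast gradient sign method to a hypothetical PyTorch classifier; the model, tensor shapes, and epsilon value are assumptions for illustration, not details of the ADAN project.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.01):
    # Craft imperceptible additive noise that pushes x toward misclassification.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    return (x + epsilon * x.grad.sign()).detach()

# Adversarial training then mixes such perturbed samples into each batch, e.g.:
# loss = F.cross_entropy(model(fgsm_perturb(model, x, y)), y)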

TMM – Leveraging Language Models for Technology Landscape Monitoring

The objective of the project is to perform online monitoring of technologies and technology actors in publicly accessible information sources. The monitoring concerns the early detection of mentions of new technologies, of new actors in the technology space, and of facts indicating new relations between technologies and technology actors…
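A toy sketch of the underlying idea, with purely illustrative names: extract candidate technology mentions from a document and flag those missing from a registry of known entries. In the project, a language model would take the place of the naive pattern-based extractor used here.

import re

KNOWN_TECHNOLOGIES = {"software-defined radio", "quantum key distribution"}  # illustrative registry

def extract_candidates(text):
    # Naive stand-in for language-model entity extraction:
    # capitalised multi-word phrases as candidate technology terms.
    return {m.lower() for m in re.findall(r"[A-Z][\w-]+(?:\s+[A-Z][\w-]+)+", text)}

def new_mentions(text):
    # Candidate mentions not yet present in the registry.
    return extract_candidates(text) - KNOWN_TECHNOLOGIES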

ADHes: Attacks and Defenses on FPGA-CPU Heterogeneous Systems

FPGAs are essential components in many computing systems. Alongside conventional CPUs, FPGAs are deployed in various critical systems, such as wireless base stations, satellites, radars, electronic warfare platforms, and data centers. Both FPGAs and CPUs have security vulnerabilities, and integrating them presents new attack opportunities on both sides. In this…

ARNO: Adversarial Robustness via Knowledge Distillation

State-of-the-art architectures for modulation recognition are typically based on deep learning models. However, these models have recently been shown to be quite vulnerable to very small, carefully crafted perturbations, which raises serious questions about safety, security, and performance guarantees at large. While adversarial training can improve the…
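The sketch below shows one common form of knowledge distillation, blending soft teacher targets with the usual hard-label loss; the temperature, weighting, and the assumption of an adversarially trained teacher are illustrative, not the project's exact formulation.

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft targets from the (robust) teacher, softened by temperature T.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Standard cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard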

MULAN: Adversarial Attacks in Neural Machine Translation Systems

Recently, deep neural networks have been applied in many different domains due to their strong performance. However, these models have been shown to be highly vulnerable to adversarial examples. Adversarial examples differ only slightly from the original input, yet they can mislead the target model into producing wrong outputs. Various…

Technology Monitoring and Management (TMM)

The objective of the TMM project is to identify, at an early stage, the risks associated with new technologies and develop solutions to ward off such threats. It also aims to assess existing products and applications to pinpoint vulnerabilities. In that process, artificial intelligence and machine learning will play an…

Causal Inference Using Observational Data: A Review of Modern Methods

In this report, we consider several real-life scenarios that may provoke causal research questions. As we introduce concepts in causal inference, we refer back to these case studies and other examples to clarify ideas and to illustrate how researchers approach such topics with clear causal thinking.

Analysis of encryption techniques in ACARS communications

In this collaboration (structured in two projects), we develop an automated tool that flags messages sent by aircraft which are suspected of using weak encryption mechanisms. We mainly focus on detecting the use of classical ciphers, such as substitution and transposition ciphers. The tool flags messages and identifies the family of…
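As a minimal, purely illustrative heuristic for this kind of flagging: mono-alphabetic substitution and transposition ciphers preserve the plaintext letter-frequency profile, so the index of coincidence of a weakly encrypted message stays close to natural-language values (roughly 0.066 for English), while strong encryption pushes it toward the uniform value (about 0.038). The threshold below is an assumption, not the tool's actual decision rule.

from collections import Counter

def index_of_coincidence(text):
    # Probability that two randomly chosen letters of the text are equal.
    letters = [c for c in text.upper() if c.isalpha()]
    n = len(letters)
    if n < 2:
        return 0.0
    counts = Counter(letters)
    return sum(k * (k - 1) for k in counts.values()) / (n * (n - 1))

def flag_weak_cipher(message, threshold=0.055):
    # A high IC on a supposedly encrypted payload hints at a classical cipher.
    return index_of_coincidence(message) >= threshold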

UNA: Universal Adversarial Perturbations in NLP

Recently, deep neural networks have been applied in many different domains due to their strong performance. However, these models have been shown to be highly vulnerable to adversarial examples. Adversarial examples differ only slightly from the original input, yet they can mislead the target model into producing wrong outputs. Various…

ROBIN – Robust Machine Learning

In communication systems there are many tasks, such as modulation recognition, for which Deep Neural Networks (DNNs) have achieved promising performance. However, these models have been shown to be susceptible to adversarial perturbations, namely imperceptible additive noise crafted to induce misclassification. This raises questions about the security but also the general…

Secure Distributed-Learning on Threat Intelligence

Cyber security information is often extremely sensitive and confidential. This introduces a tradeoff between the benefits of improved threat-response capabilities and the drawbacks of disclosing national-security-related information to foreign agencies or institutions. The result is the retention of valuable information (the so-called free-rider problem), which considerably limits the efficacy…
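One building block that allows such collaboration without disclosure is additive secret sharing, sketched below under purely illustrative assumptions; it is not the project's actual protocol. Each organisation splits its sensitive value into random shares, and only the aggregate is ever reconstructed.

import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(value, n_parties):
    # Split a secret integer into n additive shares that sum to it modulo PRIME.
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Example: three organisations pool incident counts without disclosing them.
counts = [12, 7, 30]
per_party = list(zip(*[share(c, 3) for c in counts]))  # party i keeps share i of each count
partial_sums = [sum(p) % PRIME for p in per_party]     # each party publishes only a partial sum
total = sum(partial_sums) % PRIME                      # equals 49; individual counts stay hidden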