armasuisse
Anomaly detection in dynamic networks
The temporal evolution of the structure of dynamic networks carries critical information about the development of complex systems in various applications, from biology to social networks. While this topic is of importance, the literature in network science, graph theory, and network machine learning still lacks relevant models for dynamic…
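One common baseline for this kind of problem is to compare consecutive snapshots of the network with a graph distance and flag steps whose change is an outlier. The sketch below (an illustration, not the project's actual model; the function names and the spectral distance choice are assumptions) uses the Laplacian spectrum and a z-score rule:

```python
import numpy as np

def spectral_distance(a, b, k=5):
    """Distance between two graphs given as adjacency matrices:
    L2 distance between the k largest Laplacian eigenvalues."""
    def top_eigs(adj):
        lap = np.diag(adj.sum(axis=1)) - adj           # combinatorial Laplacian
        return np.sort(np.linalg.eigvalsh(lap))[::-1][:k]
    return float(np.linalg.norm(top_eigs(a) - top_eigs(b)))

def flag_anomalies(snapshots, z_thresh=2.0):
    """Flag time steps whose change from the previous snapshot is an
    outlier relative to the typical step-to-step change."""
    dists = np.array([spectral_distance(snapshots[t - 1], snapshots[t])
                      for t in range(1, len(snapshots))])
    mu, sigma = dists.mean(), dists.std() + 1e-12
    return [t + 1 for t, d in enumerate(dists) if (d - mu) / sigma > z_thresh]
```

A structural break (e.g. a burst of new edges) then shows up as one flagged time index, while a stationary sequence yields no flags.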
RAEL: Robustness Analysis of Foundation Models
Pre-trained foundation models are widely used in deep learning applications due to their advanced capabilities and extensive training on large datasets. However, these models may carry safety risks, as they are trained on potentially unsafe internet-sourced data. Additionally, fine-tuned specialized models built on these foundation models often lack proper behavior…
ANEMONE: Analysis and improvement of LLM robustness
Large Language Models (LLMs) have gained widespread adoption for their ability to generate coherent text and perform complex tasks. However, concerns around their safety, such as biases, misinformation, and user data privacy, have emerged. Using LLMs to automatically perform red-teaming has become a growing area of research. In this project,…
MAXIM: Improving and explaining robustness of NMT systems
Neural Machine Translation (NMT) models have been shown to be vulnerable to adversarial attacks, wherein carefully crafted perturbations of the input can mislead the target model. In this project, we introduce a novel attack framework against NMT. Unlike previous attacks, our new approaches have a more substantial effect on the translation…
Neural Exec: Prompt Injection Attacks
In this project we introduce a new family of prompt injection attacks, termed Neural Exec. Unlike known attacks that rely on handcrafted strings (e.g., "Ignore previous instructions and..."), we show that it is possible to conceptualize the creation of execution triggers as a differentiable search problem and use learning-based methods…
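The core idea of treating trigger creation as a search problem can be illustrated with a toy coordinate-wise optimizer (a sketch only: the real attack uses gradients of a differentiable loss on the target LLM, not the exhaustive search and stand-in scoring function assumed here):

```python
import numpy as np

def greedy_trigger_search(score_fn, vocab_size, trigger_len, n_rounds=3, seed=0):
    """Toy sketch of learning-based trigger optimization: treat the trigger
    as a sequence of token ids and greedily maximize a scalar objective,
    one position at a time."""
    rng = np.random.default_rng(seed)
    trigger = rng.integers(0, vocab_size, size=trigger_len)
    for _ in range(n_rounds):
        for pos in range(trigger_len):            # coordinate-wise update
            best_tok, best_score = trigger[pos], score_fn(trigger)
            for tok in range(vocab_size):         # exhaustive here; gradient-guided in practice
                cand = trigger.copy()
                cand[pos] = tok
                s = score_fn(cand)
                if s > best_score:
                    best_tok, best_score = tok, s
            trigger[pos] = best_tok
    return trigger
```

Replacing the exhaustive inner loop with a gradient-based candidate ranking is what makes the search tractable over a real vocabulary.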
Automated Detection Of Non-standard Encryption In ACARS Communications
Aircraft and their ground counterparts have been communicating via the ACARS data-link protocol for more than five decades. Researchers discovered that some actors encrypt ACARS messages using an insecure, easily reversible encryption method. In this project, we propose BRUTUS, a decision-support system that supports human analysts in detecting the use…
ADAN: Adaptive Adversarial Training for Robust Machine Learning (2024)
State-of-the-art modulation recognition architectures rely on deep learning models. These models are vulnerable to adversarial perturbations, which are imperceptible additive noise crafted to induce misclassification, posing serious questions in terms of safety, security, or performance guarantees at large. One of the best ways to make the model robust is to use…
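Adversarial training, the standard defense alluded to here, trains on perturbed rather than clean inputs. A minimal sketch on a logistic-regression stand-in (assumed for illustration; the project targets deep modulation classifiers), using the FGSM perturbation x' = x + ε·sign(∇ₓ loss):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, eps=0.3, lr=0.5, epochs=200):
    """Adversarial training sketch: at every step, craft FGSM perturbations
    against the current weights and train on the perturbed inputs."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        # FGSM: the gradient of the logistic loss w.r.t. x is (p - y) * w
        X_adv = X + eps * np.sign(np.outer(p - y, w))
        p_adv = sigmoid(X_adv @ w)
        w -= lr * X_adv.T @ (p_adv - y) / len(y)   # gradient step on the adversarial batch
    return w
```

The resulting model keeps correct predictions even under the same worst-case ε-bounded perturbation at test time, which is exactly the robustness property adversarial training buys.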
TMM – Leveraging Language Models for Technology Landscape Monitoring
The objective of the project is to perform online monitoring of technologies and technology actors in publicly accessible information sources. The monitoring concerns the early detection of mentions of new technologies, of new actors in the technology space, and of new relations between technologies and technology actors…
ADHes: Attacks and Defenses on FPGA-CPU Heterogeneous Systems
FPGAs are essential components in many computing systems. Together with conventional CPUs, FPGAs are deployed in various critical systems, such as wireless base stations, satellites, radars, electronic warfare platforms, and data centers. Both FPGAs and CPUs have security vulnerabilities; integrating them presents new attack opportunities on both sides. In this…
ARNO: Adversarial robustness via Knowledge Distillation
State-of-the-art architectures for modulation recognition are typically based on deep learning models. However, these models have recently been shown to be vulnerable to small, carefully crafted perturbations, which pose serious questions in terms of safety, security, or performance guarantees at large. While adversarial training can improve the…
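Knowledge distillation, named in the project title, trains a student model to match the temperature-softened output distribution of a teacher (here, plausibly a robust teacher). A minimal sketch of the standard distillation objective (function names, temperature, and blend weight are illustrative assumptions):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Knowledge-distillation objective: blend the usual cross-entropy on
    hard labels with a soft-target term that pulls the student's
    temperature-softened distribution toward the teacher's
    (cross-entropy with soft targets, i.e. KL up to a constant)."""
    p_t = softmax(teacher_logits, T)
    log_p_s = np.log(softmax(student_logits, T))
    kd = -(p_t * log_p_s).sum(axis=-1).mean() * T * T   # T^2 rescales soft gradients
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels]).mean()
    return alpha * kd + (1 - alpha) * ce
```

A student that matches the teacher's distribution gets a strictly lower loss than one that contradicts it, which is what drives robustness transfer in distillation-based defenses.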
MULAN: Adversarial Attacks in Neural Machine Translation Systems
Recently, deep neural networks have been applied in many different domains due to their strong performance. However, it has been shown that these models are highly vulnerable to adversarial examples. Adversarial examples differ only slightly from the original input yet can mislead the target model into generating wrong outputs. Various…
Technology Monitoring and Management (TMM)
The objective of the TMM project is to identify, at an early stage, the risks associated with new technologies and develop solutions to ward off such threats. It also aims to assess existing products and applications to pinpoint vulnerabilities. In that process, artificial intelligence and machine learning will play an…
Causal Inference Using Observational Data: A Review of Modern Methods
In this report we consider several real-life scenarios that motivate causal research questions. As we introduce concepts in causal inference, we reference these case studies and other examples to clarify ideas and show how researchers approach such topics with clear causal thinking.
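The central pitfall these methods address, confounding, is easy to demonstrate with simulated data: a naive comparison of treated vs. untreated units is biased, while backdoor adjustment (stratifying on the confounder) recovers the true effect. A self-contained illustration (the data-generating process is invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Confounder Z affects both treatment A and outcome Y;
# the true causal effect of A on Y is exactly 1.0.
z = rng.binomial(1, 0.5, n)
a = rng.binomial(1, np.where(z == 1, 0.8, 0.2))     # treatment depends on Z
y = 1.0 * a + 2.0 * z + rng.normal(0, 1, n)         # outcome depends on A and Z

# Naive contrast is confounded by Z and overestimates the effect (~2.2).
naive = y[a == 1].mean() - y[a == 0].mean()

# Backdoor adjustment: contrast within strata of Z, then average by P(Z).
adjusted = sum(
    (y[(a == 1) & (z == v)].mean() - y[(a == 0) & (z == v)].mean()) * (z == v).mean()
    for v in (0, 1)
)
```

The adjusted estimate lands near the true effect of 1.0, while the naive one absorbs the confounder's contribution.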
Analysis of encryption techniques in ACARS communications
In this collaboration (structured in two projects) we develop an automated tool to flag messages sent by aircraft that are suspected of using weak encryption mechanisms. We mainly focus on detecting the use of classical ciphers like substitution and transposition ciphers. The tool flags messages and identifies the family of…
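The classical statistics behind this kind of triage are the index of coincidence and a chi-squared test against expected letter frequencies: transposition preserves both, monoalphabetic substitution preserves the IC but skews the frequencies, and strong encryption flattens everything. A minimal sketch (thresholds and labels are illustrative assumptions, not the tool's actual decision rules):

```python
from collections import Counter

ENGLISH_FREQ = {  # expected relative letter frequencies in English text
    'E': .127, 'T': .091, 'A': .082, 'O': .075, 'I': .070, 'N': .067,
    'S': .063, 'H': .061, 'R': .060, 'D': .043, 'L': .040, 'C': .028,
    'U': .028, 'M': .024, 'W': .024, 'F': .022, 'G': .020, 'Y': .020,
    'P': .019, 'B': .015, 'V': .010, 'K': .008, 'J': .002, 'X': .002,
    'Q': .001, 'Z': .001}

def index_of_coincidence(text):
    counts = Counter(c for c in text.upper() if c.isalpha())
    n = sum(counts.values())
    return sum(k * (k - 1) for k in counts.values()) / (n * (n - 1))

def chi_squared(text):
    counts = Counter(c for c in text.upper() if c.isalpha())
    n = sum(counts.values())
    return sum((counts.get(c, 0) - n * f) ** 2 / (n * f)
               for c, f in ENGLISH_FREQ.items())

def classify(text, ic_threshold=0.05, chi_cutoff=150.0):
    """Rough cipher-family triage:
    - strong/modern encryption flattens the distribution (IC near 1/26);
    - monoalphabetic substitution keeps the IC but skews the frequencies;
    - transposition keeps both statistics of the plaintext."""
    if index_of_coincidence(text) < ic_threshold:
        return "random-looking / modern cipher"
    return "transposition-like" if chi_squared(text) < chi_cutoff else "substitution-like"
```

Because substitution is a bijection on letters, it leaves the IC exactly unchanged while inflating the chi-squared statistic, which is the fingerprint the classifier keys on.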
UNA: Universal Adversarial Perturbations in NLP
Recently, deep neural networks have been applied in many different domains due to their strong performance. However, it has been shown that these models are highly vulnerable to adversarial examples. Adversarial examples differ only slightly from the original input yet can mislead the target model to generate wrong outputs. Various…
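A universal perturbation, the object named in the project title, is a single input-agnostic shift that fools the model on most inputs at once. The sketch below illustrates the idea on a linear classifier sign(w·x + b), accumulating minimal flipping steps and projecting back onto an ε-ball (a simplified toy, with an added reachability check, not the project's actual algorithm for deep NLP models):

```python
import numpy as np

def universal_perturbation(X, y, w, b, eps, n_epochs=10):
    """Universal adversarial perturbation sketch for a linear classifier:
    accumulate, over the dataset, the minimal shifts that flip
    still-correctly-classified points, keeping the result in an
    L2 ball of radius eps."""
    v = np.zeros_like(w)
    w_norm = np.linalg.norm(w)
    for _ in range(n_epochs):
        for x, label in zip(X, y):
            margin = label * (w @ (x + v) + b)
            # only try to flip points that are still correct and reachable within eps
            if 0 < margin and margin / w_norm < eps:
                v -= label * (margin + 1e-3) * w / (w @ w)   # minimal boundary-crossing step
                norm = np.linalg.norm(v)
                if norm > eps:
                    v *= eps / norm                          # project back onto the eps-ball
    return v
```

One fixed vector then misclassifies every reachable point simultaneously, which is what makes universal perturbations a more practical threat than per-input attacks.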
ROBIN – Robust Machine Learning
In communication systems, there are many tasks, like modulation recognition, for which Deep Neural Networks (DNNs) have achieved promising performance. However, these models have been shown to be susceptible to adversarial perturbations, namely imperceptible additive noise crafted to induce misclassification. This raises questions about security, but also about the general…
Secure Distributed-Learning on Threat Intelligence
Cyber security information is often extremely sensitive and confidential, which introduces a tradeoff between the benefits of improved threat-response capabilities and the drawbacks of disclosing national-security-related information to foreign agencies or institutions. This results in the retention of valuable information (a.k.a. the free-rider problem), which considerably limits the efficacy…
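One building block that resolves exactly this tradeoff is secure aggregation: each party masks its contribution so the aggregator learns only the sum, never any individual's data. A minimal sketch using pairwise additive masks that cancel in the sum (the function name and mask distribution are illustrative assumptions; real protocols also handle dropouts and key agreement):

```python
import numpy as np

def masked_updates(updates, seed=42):
    """Secure-aggregation sketch: each party adds pairwise random masks to
    its contribution. The masks cancel in the sum, so the server learns
    the aggregate but no individual input."""
    n = len(updates)
    rng = np.random.default_rng(seed)
    # masks[(i, j)] is a secret shared between parties i and j (i < j)
    masks = {(i, j): rng.normal(size=updates[0].shape)
             for i in range(n) for j in range(i + 1, n)}
    out = []
    for i, u in enumerate(updates):
        m = (sum(masks[(i, j)] for j in range(i + 1, n))
             - sum(masks[(j, i)] for j in range(i)))
        out.append(u + m)   # what party i actually sends to the server
    return out
```

Each mask appears once with a plus sign and once with a minus sign across the parties, so summing the masked messages recovers the exact aggregate while every individual message looks random.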