This article reveals how smart devices gather more information than their functions typically require, including personal data, location and audio recordings … and this even for devices (like air fryers) that clearly have no need for it. This (again) shows a lack of transparency and raises critical questions about consumer privacy.
If you want to attack the network of an organization that has a good firewall, a good entry point is its wireless network. But what do you do if the organization is on the other side of the globe? Easy: you hack a nearby organization with a bad firewall, then use their wireless network to (…)
This article discusses a study suggesting algorithmic bias on the social media platform X favoring Republican-leaning content, and posts by its owner Elon Musk in particular. The study further claims that this bias dates back to when Musk officially started supporting Donald Trump. While it is of course impossible to prove these allegations without access to (…)
The ongoing journalistic investigation into a data set obtained as a free sample from a US-based data broker continues to show how problematic the global data market really is. The latest article, focusing on US Army bases in Germany, reveals that not only can critical personnel be tracked by identifying their movement profiles, but that (…)
This publication revisits key questions with speakers from the October 1st conference on “Deepfakes, Distrust and Disinformation” through the lens of public perception, and seeks to advance the debate surrounding AI, misinformation, and disinformation, especially in political contexts.
Welcome to the Factory Update for Fall 2024. Twice a year we take the time to present some of the projects we see coming out of our affiliated labs and give you a short summary of what we’ve been doing over the past 12 months. Please also give us brief feedback on what you most (…)
Interesting to see that 12 out of the 15 top vulnerabilities published by CISA, America’s cyber defense agency, are from 2023. Log4j2 from 2021 is also still on the list! So make sure that your systems are up-to-date with regard to these vulnerabilities, even if they are no longer 0-days. (Bleeping Computer)
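As a purely illustrative example of the kind of check this advice implies, here is a minimal Python sketch that scans a directory tree for outdated `log4j-core` jars. The scanned path and the 2.17.1 version boundary are assumptions made for this sketch; consult the CISA list for the exact affected versions of each product.

```python
# Minimal sketch, assuming vulnerable Log4j versions are those below 2.17.1
# (the release that completed the Log4Shell fixes) and that applications
# live under /opt/apps -- both assumptions for illustration only.
import re
from pathlib import Path

PATCHED = (2, 17, 1)

def parse_version(filename: str):
    """Extract (major, minor, patch) from a log4j-core jar name, if present."""
    m = re.search(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$", filename)
    return tuple(map(int, m.groups())) if m else None

def find_vulnerable_jars(root: str):
    """Yield (path, version) for every log4j-core jar older than PATCHED."""
    for jar in Path(root).rglob("log4j-core-*.jar"):
        version = parse_version(jar.name)
        if version and version < PATCHED:
            yield jar, version

for jar, version in find_vulnerable_jars("/opt/apps"):
    print(f"outdated Log4j {'.'.join(map(str, version))}: {jar}")
```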
Following on the heels of our conference on “Deepfakes, Distrust and Disinformation: The Impact of AI on Elections and Public Perception”, which was held on October 1st 2024, C4DT proposes to shift the spotlight to the strategic and operational implications of deepfakes and disinformation for organizations. For our C4DT partners we are also hosting a hands-on workshop aimed at engineers, software developers and cybersecurity experts on Tuesday, 26th of November, which will allow participants to develop skills and expertise in identifying and combating deepfake-based cyberattacks.
Following on the heels of our conference on “Deepfakes, Distrust and Disinformation: The Impact of AI on Elections and Public Perception”, which was held on October 1st 2024, C4DT proposes to shift the spotlight to the strategic and operational implications of deepfakes and disinformation for organizations. For our C4DT partners we are hosting a high-level roundtable for executives, senior managers and project managers on Tuesday, 19th of November, during which participants will discuss strategies to address the challenges posed by deepfakes, as well as collaboration opportunities and projects to counter them.
An interesting question about how current LLMs answer our questions: do they have an accurate world model which they use to answer? Or are they more like a ‘stochastic parrot’, simply answering using previously seen reasoning? The difference is important, because it indicates the maximum precision these models can attain – the better (…)
The temporal evolution of the structure of dynamic networks carries critical information about the development of complex systems in various applications, from biology to social networks. Despite the importance of this topic, the literature in network science, graph theory, and network machine learning still lacks relevant models for dynamic networks, proper metrics for comparing network structures, and scalable algorithms for anomaly detection. This project aims to bridge exactly these gaps.
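As a toy illustration of one of these gaps (and not the project’s actual models or metrics), the following Python sketch flags anomalous time steps in a dynamic network by measuring how much the edge set changes between consecutive snapshots; the Jaccard distance and the threshold are assumptions made for this example.

```python
# Minimal sketch: detect anomalous snapshots in a dynamic network via the
# Jaccard distance between consecutive edge sets (an assumed, simplistic metric).
from typing import FrozenSet, List, Tuple

Edge = Tuple[int, int]

def jaccard_distance(a: FrozenSet[Edge], b: FrozenSet[Edge]) -> float:
    """1 - |A ∩ B| / |A ∪ B|: 0.0 for identical edge sets, 1.0 for disjoint ones."""
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def flag_anomalies(snapshots: List[FrozenSet[Edge]], threshold: float = 0.5) -> List[int]:
    """Return the indices of snapshots whose structure jumped by more than `threshold`."""
    return [
        t for t in range(1, len(snapshots))
        if jaccard_distance(snapshots[t - 1], snapshots[t]) > threshold
    ]

# Toy usage: a stable triangle, then a sudden rewiring at t = 2.
g0 = frozenset({(1, 2), (2, 3), (1, 3)})
g1 = frozenset({(1, 2), (2, 3), (1, 3)})
g2 = frozenset({(4, 5), (5, 6), (1, 2)})
print(flag_anomalies([g0, g1, g2]))  # -> [2]
```

Real dynamic networks would call for metrics that are robust to noise and scale to large graphs, which is precisely what the project investigates.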
Large Language Models (LLMs) have gained widespread adoption for their ability to generate coherent text and perform complex tasks. However, concerns about their safety, such as bias, misinformation, and user data privacy, have emerged. Using LLMs to automatically perform red-teaming has become a growing area of research. In this project, we aim to use techniques like prompt engineering or adversarial paraphrasing to force the victim LLM to generate drastically different, often undesirable responses.
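To give a feel for how such a red-teaming loop might be wired up, here is a hedged Python sketch in which every model call is a stub: `query_victim` and `paraphrase` are hypothetical placeholders for real LLM APIs, and response divergence is approximated with simple string similarity.

```python
# Minimal sketch of an automated red-teaming loop. query_victim() and
# paraphrase() are stubs standing in for real LLM calls; divergence is
# approximated with difflib string similarity -- all assumptions.
from difflib import SequenceMatcher

def query_victim(prompt: str) -> str:
    """Placeholder for the victim LLM; replace with a real API call."""
    return "stubbed response to: " + prompt

def paraphrase(prompt: str) -> list[str]:
    """Placeholder for an adversarial paraphraser; replace with a real model."""
    return [prompt + " (rephrased)", "Put differently: " + prompt]

def divergence(a: str, b: str) -> float:
    """0.0 for identical responses, 1.0 for completely different ones."""
    return 1.0 - SequenceMatcher(None, a, b).ratio()

def red_team(prompt: str, threshold: float = 0.5) -> list[tuple[str, float]]:
    """Flag paraphrases whose responses diverge sharply from the baseline."""
    baseline = query_victim(prompt)
    return [
        (variant, d)
        for variant in paraphrase(prompt)
        if (d := divergence(baseline, query_victim(variant))) > threshold
    ]

print(red_team("How do I reset a forgotten password?"))
```

Flagged (prompt, divergence) pairs would then be handed to human reviewers, since a large divergence only signals brittle behavior, not necessarily an unsafe response.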
In this project we introduce a new family of prompt injection attacks, termed Neural Exec. Unlike known attacks that rely on handcrafted strings (e.g., “Ignore previous instructions and…”), we show that it is possible to conceptualize the creation of execution triggers as a differentiable search problem and use learning-based methods to autonomously generate them.
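As a toy illustration of the general idea (emphatically not the Neural Exec implementation itself), the following PyTorch sketch treats a trigger as continuous “soft token” embeddings, optimizes them by gradient descent against a stand-in objective, and projects the result back to discrete tokens. The model, objective, and sizes are all assumptions for this example; a real attack would optimize against an actual LLM’s loss.

```python
# Toy sketch of differentiable trigger search (assumed setup, not Neural Exec):
# optimize continuous "soft token" embeddings, then snap them to real tokens.
import torch

torch.manual_seed(0)
vocab_size, embed_dim, trigger_len = 100, 16, 4

embedding = torch.nn.Embedding(vocab_size, embed_dim)  # frozen stand-in embeddings
scorer = torch.nn.Linear(embed_dim, 1)                 # stand-in attack objective
for p in list(embedding.parameters()) + list(scorer.parameters()):
    p.requires_grad_(False)

# The trigger lives in embedding space, which makes the search differentiable.
soft_trigger = torch.randn(trigger_len, embed_dim, requires_grad=True)
optimizer = torch.optim.Adam([soft_trigger], lr=0.1)

for step in range(200):
    optimizer.zero_grad()
    loss = -scorer(soft_trigger).mean()  # maximize the stand-in objective
    loss.backward()
    optimizer.step()

# Project each optimized embedding onto its nearest vocabulary token so the
# trigger can be fed to the model as ordinary text.
with torch.no_grad():
    dists = torch.cdist(soft_trigger, embedding.weight)  # (trigger_len, vocab_size)
    trigger_tokens = dists.argmin(dim=1)
print(trigger_tokens.tolist())
```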
Aircraft and their ground counterparts have been communicating via the ACARS data-link protocol for more than five decades. Researchers discovered that some actors encrypt ACARS messages using an insecure, easily reversible encryption method. In this project, we propose BRUTUS, a decision-support system that supports human analysts in detecting the use of insecure ciphers in the ACARS network in an efficient and scalable manner. We propose and evaluate three different methods to automatically label ACARS messages that are likely to be encrypted with insecure ciphers.
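One classical heuristic that could support such labelling (offered here as a hypothetical illustration, not necessarily one of the three methods the project evaluates) is the index of coincidence: English-like plaintext (IC ≈ 0.066) keeps its letter statistics under a monoalphabetic substitution, while a strong cipher produces near-uniform output (IC ≈ 0.038 over A–Z).

```python
# Minimal sketch: flag messages whose letter statistics look like substituted
# plaintext, suggesting an insecure, easily reversible cipher. The 0.055
# threshold is an assumption for illustration.
from collections import Counter

def index_of_coincidence(text: str) -> float:
    """Probability that two randomly chosen letters of `text` are equal."""
    letters = [c for c in text.upper() if c.isalpha()]
    n = len(letters)
    if n < 2:
        return 0.0
    counts = Counter(letters)
    return sum(k * (k - 1) for k in counts.values()) / (n * (n - 1))

def likely_weak_cipher(message: str, threshold: float = 0.055) -> bool:
    """High IC on supposedly encrypted traffic hints at a weak classical cipher."""
    return index_of_coincidence(message) > threshold

# Toy usage: a Caesar-shifted English sentence keeps its high IC;
# random-looking text does not.
print(likely_weak_cipher("WKLV LV D VHFUHW DLUFUDIW PDLQWHQDQFH PHVVDJH"))  # True
print(likely_weak_cipher("XQZJK PLRVB MCDYW TGHNE"))                        # False
```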
To what extent do LLMs ‘understand’ the world, and can they ‘think’? Or is their supposed intelligence merely an illusion of statistics? This article presents a recent paper on this topic for a non-technical audience – an important contribution to ensuring that knowledge about the capabilities and limits of such technologies is not confined to a circle of experts (…)
Wow – this is a counter-attack done right! Sophos explains how they tracked the hackers of their firewall product by adding code which tags attacks and reports them back to Sophos HQ. They managed to gather a lot of information about the hackers, including their whereabouts. What I really liked about the article is how it shows (…)
The leaked internal TikTok documents confirm the long-held suspicion that we urgently need to stop entrusting social media companies with putting up safety rails. Dampening addictive features, bursting filter bubbles and moderating content directly contradict maximising user engagement, the metric by which such companies live and die. We need binding regulations with real teeth to protect (…)
One of our jobs at the C4DT Factory is to work on promising projects from our affiliated labs. This helps the faculties translate their research into formats accessible to different audiences. For a newly onboarded project, we evaluate its current state and identify the steps required towards a final product. We may then also (…)
The banning of Discord in Russia and Turkey is concerning because the platform serves as a crucial communication tool (without suitable alternatives available), and both countries justify the ban by citing security concerns, such as misuse for illegal activities. At the core of the ban is also Discord’s alleged unwillingness to comply with local laws and (…)
In the US, despite public skepticism and lack of trust, the cryptocurrency industry is determined to assert its influence in Washington, spending record amounts on political campaigns. I find it interesting how they are giving the subject a prominent place on the political agenda, when it’s clearly not a priority concern for the majority of (…)
Canton Vaud’s [seal] Program funds projects in digital trust and cybersecurity with up to CHF 100K or 90% of cost! The aim of this latest call for projects is to stimulate collaborative innovation in order to propose solutions that help meet the challenges of multimedia content security, from data confidentiality to emerging threats linked (…)
Following on the heels of our conference on “Deepfakes, Distrust and Disinformation: The Impact of AI on Elections and Public Perception”, which was held on October 1st 2024, C4DT proposes to shift the spotlight to the strategic and operational implications of deepfakes and disinformation for organizations. We are hosting two distinct workshops tailored for (…)
C4DT would like to express a BIG THANK YOU to the speakers, panelists, moderators, attendees and partner Centers who have made this C4DT conference such a memorable and interesting event! We are very thankful for this opportunity and for what it has brought us!
For those who could not make it to the conference, we will publish the recordings of the talks and panels on C4DT’s Publications Page within the next two weeks.
Similar to the TikTok ban, this initiative is driven by a combination of protectionism, national security concerns, and data privacy fears. While it is possible for state and non-state actors to hack into any car system if they are determined to do so, the primary concern is China’s completely legal ability to access data collected (…)