Factory Update Spring 2025 Subjects II

This is an additional set of hands-on workshops and projects for the upcoming year for our C4DT partners. You can find the chosen selection here: Main Subjects. The projects are split in two categories: hands-on workshops, which are a 1-day training on a given subject, and project suggestions, based on current research of our affiliated labs.


Summary of the Additional Proposals

Potential Hands-on Workshops

The hands-on workshop proposed by the C4DT Factory allow you to take a deep-dive for one day into a specific subject. It is geared towards project leaders in software engineering, and also more manager roles for the morning session. Besides learning a new topic, this is an excellent way to get in contact with other of our partners. And of course, getting in contact with our labs for discussion about your challenges and ways to do research on those.


In the morning, one of our affiliated professors presents the subject to the audience. In the afternoon, a hands-on workshop takes place, prepared by the C4DT Factory. We split the proposed hands-up workshops in two categories:

  • Research deep-dive: cutting-edge, published research subject to show upcoming trends
  • Research application: mature research result which can be applied to a concrete use-case with a partner. This means setting up a project to verify the research results in a real-life setting

LLM and ML

  • Research deep-diveRobustness of ML tools – classifiers, multi-model, explainability – Prof. Andrea Cavallaro of IDIAP
    This workshop explores techniques to enhance machine learning model robustness against adversarial attacks and methods to make ML decisions more transparent and explainable.
    • What you will learn: Techniques to defend ML models against adversarial attacks that can fool classifiers, methods to develop robust multi-model systems, and approaches to make ML decision processes transparent and interpretable.
    • Where it is used: Critical infrastructure security systems, autonomous vehicles, healthcare diagnostics, financial fraud detection, and hiring/loan approval systems where explaining algorithmic decisions is legally required.
    • Collaborations: Partners can work with IDIAP researchers to enhance existing ML models with adversarial training, implement explainability frameworks tailored to specific applications, or develop new evaluation metrics for model robustness in commercial settings.

  • Research application – Security aspects of the “SwissAI LLM” model coming out in mid-May – Prof. Martin Jaggi
    SwissLLM is a large language model developed specifically for Swiss contexts and languages. This workshop explores its security implications, vulnerabilities, and safeguards.
    • What you will learn: The source data of SwissAI LLM, its training methodology, inherent limitations, potential security vulnerabilities, and techniques for alignment and safety. You’ll understand how to identify prompt injection risks, data leakage concerns, and implementation of security guardrails.
    • Where it is used: The SwissAI LLM, scheduled for release in mid-May, will form the foundation for Swiss-specific AI applications in government services, healthcare, financial sectors, and multilingual customer service systems requiring privacy and data sovereignty.
    • Collaborations: Participants can collaborate with EPFL labs to conduct security audits of the model, develop fine-tuning techniques that maintain security, create detection systems for misuse, and implement custom guardrails for domain-specific applications.

  • Research deep-diveProtect Vision Transformer models against adversarial attack – Prof. Sabine Süsstrunk
    • What you will learn: Understand how Vision Transformer (ViT) models work and their vulnerabilities to adversarial attacks. Learn state-of-the-art defense mechanisms including adversarial training, input preprocessing, and model robustification techniques. Gain hands-on experience implementing and evaluating these defenses against various attack vectors.
    • Where it is used: Vision Transformer models are increasingly deployed in security-critical applications including autonomous vehicles, surveillance systems, medical imaging, and content moderation. Robust defenses are crucial for applications where adversaries might attempt to manipulate AI decisions, such as authentication systems, security screening, and automated visual inspection in manufacturing.
    • Collaborations: Partners can engage in joint research to evaluate the robustness of their existing vision systems, develop custom defense strategies tailored to their specific threat models, or create evaluation frameworks for new vision-based products. Research could extend to building industry-specific benchmarks for vision model security or integrating these defense mechanisms into existing ML pipelines.

Security

  • Research applicationFormally verifying Regular Expressions used in network appliances to avoid DoS attacks – Prof. Clément Pit-Claudel
    • What you will learn: what parts of Regular expressions are dangerous when being applied to user-controlled content. How to verify the engines running the regular expressions.
    • Where it is used: In modern firewalls packet filtering is often done using Regular expressions. However, this creates an attack surface on the firewall itself, as specially crafted packets can overwhelm the firewall.
    • Collaboration possibilities: Clément is looking for partners who are using regular expressions in their network appliances, to apply his work and improve security.

Cryptography

  • Research deep-dive: Coercion-resistant e-voting system – Prof. Bryan Ford
    A technical exploration of advanced cryptographic techniques that enable secure electronic voting while preventing voter intimidation or vote selling.
    • What you will learn: the different ways of protecting e-voting systems against coercions by a third party. Challenges for the user interaction, as well as the cryptographic part.
    • Where it is used: in modern e-voting systems where preventing intimidation and vote buying is critical. It is most critical in high-stake elections where surveillance, or vote buying, might influence the result of the election.
    • Possible collaboration: evaluating the coercion-resistance in a Swiss setting, doing user testing to enhance their existing e-voting solutions with stronger anti-coercion guarantees.

Other

  • Research application: Rating is the new ranking – comparison-based recommender systems – Prof. Matthias Grossglauser
    • What you will learn: This workshop explores how traditional rating systems often introduce cognitive biases and inconsistencies. You’ll learn techniques for implementing comparison-based systems where users compare items directly instead of assigning absolute ratings. The workshop covers practical algorithms for efficiently converting comparisons into rankings and handling partial information.
    • Where it is used: Recommendation systems for content platforms, e-commerce product suggestions, talent recruitment ranking, scientific paper reviews, and prioritization systems where subjective assessment is required. Companies like Netflix, Spotify, and Amazon have explored comparison-based elements to improve recommendation quality.
    • Collaboration: Partners can work with EPFL labs to implement ranking algorithms in existing systems, conduct A/B testing to measure improvements in recommendation quality, or develop custom comparison-based interfaces for specific domains.

Projects Suggestions

These are more abstract than the hands-on workshops, as the subject is just hot out of the published paper. For these ideas, we propose you to discuss with our professors if you are interested. We will set up the discussion, and accompany you if you decide to set up a partnership, with or without a grant.


We split the suggested projects in two categories:

  • Research suggestion: multi-year research ideas to be explored by a PhD student paid by the partner or by a grant.
  • Research application: published research which needs further use-cases from industry to verify its usability. Either paid by the partner or by a grant.

Privacy and Security with Cryptography

  • Research suggestion: Stewardship for digital identities – protecting and recovering our digital secrets without a centrally trusted service. – Prof. Bryan Ford
    • Where it is used: Digital identity stewardship enables secure password and key recovery for users who have lost access to their authentication credentials, without requiring a central authority that could become a security vulnerability. This technology can be implemented in password managers, digital wallet solutions, federated identity systems, and enterprise authentication frameworks to reduce account lockouts while maintaining robust security.
    • Collaboration: Support a PhD student in their research through a grant and providing use-cases and real-world data for their studies. Partners would gain early insights into novel cryptographic methods for distributed key recovery and threshold-based authentication systems.

Others

  • Research application: Formally verifying Regular Expressions used in network appliances to avoid DoS attacks – Prof. Clément Pit-Claudel
    • Where it is used: In modern firewalls packet filtering is often done using Regular expressions. However, this creates an attack surface on the firewall itself, as specially crafted packets can overwhelm the firewall.
    • Collaboration: If you have real-world RegExps used in network appliances, and would like to make sure that they are denial-of-service safe, contact us.

  • Research application: Performance verification of large scale systems by simulating their behaviour before building. – Prof. George Candea
    • Where it is used: When building data centers or searching for hardware accelerators in specific tasks, you should start by simulating the performance gain. This ensures that the system will behave as announced.
    • Collaboration: Create a project to run the optimized simulations for an upcoming large-scale installation to measure the performance and scale it accordingly.