The increasing prevalence of AI-powered systems and autonomous agents requires a shift in how we approach software development. It is critical to explore technologies, policies, and collaborations that enhance trust in software applications, particularly in an era where AI agents play an active role in decision-making.
Key areas of focus include:
- Trustworthy LLM infrastructure – Ensuring that the infrastructure on which LLM projects run can be trusted, through distributed inference for privacy-preserving and cost-effective model deployment on commodity hardware
- AI agents & their alignment – Ensuring AI agents operate safely and ethically, including trusting AI agents to maintain legacy code.
- Secure AI software development & AI model security – Developing robust frameworks for secure AI systems.
- Education & training – Bridging the AI skills gap through specialized training programs.
Public Conference

Anticipating the Agentic Era: Assessing the Disruptions Caused by AI Agents
As we enter the agentic era, AI agents are increasingly integrated into various aspects of modern life, performing tasks that range from personal assistance and financial management to complex decision-making across industries. These AI agents, driven by powerful algorithms, are transforming how we interact with technology and each other. However, with their rise come significant threats, such as cybersecurity vulnerabilities, ethical dilemmas, privacy concerns, and societal impacts. This conference aims to delve into how AI agents could challenge existing systems and structures while exploring strategies to mitigate these threats effectively. Industry leaders, researchers, policymakers, and stakeholders will convene to discuss the implications of AI agents and collaborate on building a resilient future that harnesses their potential responsibly.
Projects

Startup proof-of-concept: Anyway – Distributed AI Inference for Privacy-Preserving and Cost-Effective Model Deployment
Anyway is a solution developed by C4DT-affiliated Prof. Guerraoui’s lab that fully automates and optimizes the distributed deployment of ML models for training and inference, dynamically leveraging available local machines. This high-performance, secure solution is ideal for companies seeking local ML usage with sovereignty and scalability. C4DT partners can propose industry-specific use cases to guide the platform’s evolution, participate in pilot deployments to validate performance and security in real environments, or include the solution in a local project.
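To make the underlying idea concrete, the sketch below shows one common way to distribute inference across commodity machines: splitting a model layer-wise into shards and running them as a pipeline. This is an illustration only; Anyway’s actual partitioning strategy, protocol, and APIs are not described here, and the function names (`split_model`, `run_pipeline`) are hypothetical.

```python
# Minimal sketch of layer-wise model partitioning for distributed inference.
# Assumes PyTorch; in a real deployment each shard would run on a separate
# host and only intermediate activations would cross the network.
import torch
import torch.nn as nn

def split_model(model: nn.Sequential, num_shards: int) -> list[nn.Sequential]:
    """Split a sequential model into contiguous shards, one per machine."""
    layers = list(model.children())
    shard_size = (len(layers) + num_shards - 1) // num_shards
    return [nn.Sequential(*layers[i:i + shard_size])
            for i in range(0, len(layers), shard_size)]

def run_pipeline(shards: list[nn.Sequential], x: torch.Tensor) -> torch.Tensor:
    """Run inference shard by shard (each step is a network hop in practice)."""
    with torch.no_grad():
        for shard in shards:
            x = shard(x)
    return x

if __name__ == "__main__":
    model = nn.Sequential(
        nn.Linear(64, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, 10),
    )
    shards = split_model(model, num_shards=3)
    out = run_pipeline(shards, torch.randn(1, 64))
    print(out.shape)  # torch.Size([1, 10])
```

Because no single machine ever holds the full model, this kind of split is also the starting point for the privacy and cost arguments: each commodity host only sees its own layers and the activations passing through them.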
Trainings

Training for Decision-makers: AI Agents Unveiled – Myth, Reality, and Trust
Unlock the potential of AI agents with our course, “AI Agents Unveiled: Myth, Reality, and Trust.” Designed specifically for decision-makers, this 2.75-hour program provides a clear and comprehensive overview of AI agents, their functionalities, and their impact. You’ll gain insights into the operational challenges, ethical considerations, and security issues surrounding AI agents, all explained in an accessible and engaging manner. This course empowers you to make informed decisions about AI technologies, ensuring you understand the key concepts without needing a technical background.

Hands-on Training for Engineers: AI Agents – Best Practices and Security Issues from Deceptive Agents
In this hands-on workshop, you will learn how LLM agents can develop hidden objectives contrary to user instructions, identify techniques for detecting deceptive behavior patterns, and explore defensive mechanisms that keep agents aligned with their intended goals.
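One simple detection idea in this space is to compare what an agent says it will do with what it actually does. The sketch below flags tool calls that were never declared in the agent’s stated plan; the data structures and names (`AgentStep`, `detect_undeclared_actions`) are hypothetical illustrations, not the workshop’s material or API.

```python
# Minimal sketch: flag agent actions that do not appear in the declared plan,
# one possible signal of deceptive or misaligned behavior.
from dataclasses import dataclass

@dataclass
class AgentStep:
    declared_plan: set[str]    # tools the agent said it would use
    executed_tools: list[str]  # tools it actually called

def detect_undeclared_actions(step: AgentStep) -> list[str]:
    """Return the tool calls that were not part of the declared plan."""
    return [tool for tool in step.executed_tools
            if tool not in step.declared_plan]

if __name__ == "__main__":
    step = AgentStep(
        declared_plan={"search_docs", "summarize"},
        executed_tools=["search_docs", "send_email", "summarize"],
    )
    suspicious = detect_undeclared_actions(step)
    if suspicious:
        print("Potentially deceptive behavior, undeclared tools:", suspicious)
```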

Hands-on Training for Engineers: Distributed AI Inference – Enabling Privacy-Preserving and Cost-Effective Model Deployment
In this hands-on workshop, you will learn how to distribute large machine learning models across multiple standard computers to run inference without expensive GPUs, explore techniques for preserving privacy when splitting models, and practice implementing distributed inference protocols in adversarial (availability) settings.
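The adversarial (availability) setting means some of the commodity machines holding shards may be offline at inference time. The sketch below illustrates one basic mitigation, replicating each shard and falling back to the next replica on failure; the class and function names (`ShardHost`, `run_with_failover`) are hypothetical and stand in for whatever protocol the workshop actually teaches.

```python
# Minimal sketch of fault-tolerant shard execution under availability failures:
# each shard is replicated on several hosts, and we retry until one answers.
import random

class ShardHost:
    """A machine holding one model shard; may be temporarily unavailable."""
    def __init__(self, name: str, fail_prob: float = 0.3):
        self.name = name
        self.fail_prob = fail_prob

    def run_shard(self, activations: list[float]) -> list[float]:
        if random.random() < self.fail_prob:
            raise ConnectionError(f"{self.name} unavailable")
        # Stand-in for the shard's real computation on the activations.
        return [a * 2.0 for a in activations]

def run_with_failover(replicas: list[ShardHost],
                      activations: list[float]) -> list[float]:
    """Try each replica of a shard in turn until one responds."""
    for host in replicas:
        try:
            return host.run_shard(activations)
        except ConnectionError:
            continue  # fall back to the next replica
    raise RuntimeError("all replicas of this shard are unavailable")

if __name__ == "__main__":
    replicas = [ShardHost("worker-a"), ShardHost("worker-b"), ShardHost("worker-c")]
    print(run_with_failover(replicas, [1.0, 2.0, 3.0]))
```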