The Summer Research Institute (SuRI) is an annual event that takes place at the School of Computer and Communication Sciences of the École polytechnique fédérale de Lausanne, Switzerland. The workshop brings together researchers and experts from academia and industry for research talks and informal discussions.
The goal of this webinar, organised by the Swiss Support Center for Cybersecurity, is to shed light on cyber resilience from different angles. We will cover technical measures that aim to improve the resilience of systems themselves, as well as organisational measures designed to absorb the impact of an attack even when technical security measures have been successfully circumvented.
Join us as we continue exploring the Swiss E-ID journey. This time we’re diving into the technical details of the Swiss government’s current Public Sandbox Trust Infrastructure. You’ll learn how the different parts work together, and why self-sovereign IDs help preserve your privacy.
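For readers curious about how self-sovereign IDs protect privacy, the core idea is that the holder, not the issuer, decides which attributes a verifier gets to see. The sketch below is a deliberately simplified, hypothetical illustration of salted-hash selective disclosure (in the spirit of SD-JWT-style credentials); it is not the Swiss Public Sandbox Trust Infrastructure, and the HMAC merely stands in for a real digital signature.

```python
# Simplified selective-disclosure sketch (hypothetical; not the Swiss
# Public Sandbox Trust Infrastructure). The issuer commits to salted
# attribute digests; the holder later reveals only the attributes a
# verifier actually needs, and the verifier checks them against the
# issuer-signed digest list.
import hashlib, hmac, json, secrets

ISSUER_KEY = secrets.token_bytes(32)      # stand-in for a real signing key

def digest(salt: str, name: str, value: str) -> str:
    return hashlib.sha256(f"{salt}|{name}|{value}".encode()).hexdigest()

# --- Issuer: build a credential with one salted digest per attribute ---
attributes = {"name": "Alice", "birthdate": "1990-05-01", "nationality": "CH"}
salts = {k: secrets.token_hex(16) for k in attributes}
digests = sorted(digest(salts[k], k, v) for k, v in attributes.items())
signature = hmac.new(ISSUER_KEY, json.dumps(digests).encode(),
                     hashlib.sha256).hexdigest()   # placeholder signature

# --- Holder: disclose only "nationality", keep the rest private ---
disclosed = {"nationality": (salts["nationality"], attributes["nationality"])}

# --- Verifier: check the signature, then each disclosed attribute ---
assert hmac.compare_digest(
    signature,
    hmac.new(ISSUER_KEY, json.dumps(digests).encode(), hashlib.sha256).hexdigest())
for name, (salt, value) in disclosed.items():
    assert digest(salt, name, value) in digests
print("verified: holder is Swiss, without revealing name or birthdate")
```

In a real deployment the issuer would use a public-key signature and the verifier would never hold an issuer secret; the point is only that the verifier can check one attribute without ever seeing the others.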
This article is #2 in our series on the Swiss E-ID system. An overview of the system can be found in our first article, Switzerland’s E-ID journey so far.
Registration is open, with an early-bird price of CHF 290 (until June 21) and a regular price of CHF 340 afterwards. Please register early to help us plan.
CPDP is a non-profit platform originally founded in 2007 by research groups from the Vrije Universiteit Brussel, the Université de Namur and Tilburg University. The platform was joined in the following years by the Institut National de Recherche en Informatique et en Automatique and the Fraunhofer Institut für System und Innovationsforschung and has now grown into a platform carried by 20 academic centers of excellence from the EU, the US and beyond.
[Lang : Fr][ictjournal.ch] According to the Edelman barometer, people trust innovation less and less. In an interview with the editorial team, Olivier Crochat, director of EPFL’s Center for Digital Trust (C4DT), shares his diagnosis and possible solutions.
Lennart Heim, Research Fellow at the Centre for the Governance of AI (GovAI), is coming to EPFL to present the field of AI Governance! His research focuses on the role of compute for advanced AI systems and how compute can be leveraged as an instrument for AI governance, with an emphasis on policy development and security implications. Registration is mandatory; register here!
In the upcoming Innovation Week, we will delve into the multifaceted aspects of digital trust. Through engaging workshops, insightful panel discussions, and hands-on activities, we aim to explore topics such as digital self-determination, cybersecurity, data privacy, automated decision-making and ethical practices, and to empower stakeholders to navigate the complexities of digital interactions.
Jointly organized by Trust Valley and Groupe Mutuel, the Symposium Valais is part of the Tech4Trust Roadshow event series, which supports startups in the field of cybersecurity and digital trust. C4DT’s own Prof. Jean-Pierre Hubaux will be keynoting the event. During this Roadshow, expect insightful panel discussions and engaging startup pitches, and discover the issues and challenges related to ethical and explainable AI, the data economy, data privacy and data security.
DISCO-DHRIVE is developing a privacy-preserving collaborative learning platform using AI. It allows AI models to be built across different locations without the need to share sensitive data. Tailored to meet the ICRC’s unique challenges, including resource scarcity and stringent data confidentiality, the project integrates federated and distributed learning. This approach enables the extraction of valuable insights from sensitive data without compromising its security.
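At its core this relies on federated learning: each partner trains on its own data locally and only model updates are exchanged. The following is a minimal sketch of federated averaging with a toy linear model, meant purely as an illustration of the principle; the sites, model and training loop are hypothetical and not DISCO-DHRIVE’s actual code.

```python
# Minimal federated-averaging sketch (illustrative only, not DISCO's code).
# Each "site" fits a linear model on its private data; only the learned
# weights leave the site, never the raw records.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One site's local training: plain gradient descent on MSE."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(local_weights, sizes):
    """Server step: weight each site's model by its dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(local_weights, sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Two hypothetical sites with private data that is never pooled.
sites = []
for _ in range(2):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    sites.append((X, y))

global_w = np.zeros(2)
for _ in range(10):                        # 10 communication rounds
    locals_ = [local_update(global_w, X, y) for X, y in sites]
    global_w = federated_average(locals_, [len(y) for _, y in sites])

print("recovered weights:", global_w)      # approaches true_w
```

In the real platform the models are far larger and the exchanged updates can be further protected (for example with secure aggregation or differential privacy), but the raw records still never leave their location.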
When digital technologies intersect with humanitarian crises, understanding the risks and opportunities they can bring is paramount. Digital Dilemmas: Building Digital Resilience in Humanitarian Crises will explore the real-life consequences technology can have for vulnerable populations in conflict zones. Organized by the EPFL EssentialTech Centre and the ICRC, in partnership with the EPFL Center for Digital Trust (C4DT), it builds on the themes explored by Digital Dilemmas: Humanitarian Consequences, an immersive exhibition at EPFL Pavilions (May 3 – July 14).
Six years ago, EPFL rolled out an e-voting platform developed by Bryan Ford’s DEDIS lab for its internal elections [2]. The then newly-formed Center for Digital Trust (C4DT) brought this project as one of the first under the umbrella of its Digital-Trust Open Platform, the precursor to what is today the C4DT Factory – (…)
The WSIS+20 Forum High-Level Event will mark a significant milestone of twenty years of progress made in the implementation of the outcomes of the World Summit on the Information Society, which took place in two phases — Geneva in 2003 and Tunis in 2005. Twenty years ago WSIS set the framework for global digital cooperation with a vision to build people-centric, inclusive, and development-oriented information and knowledge societies.
[Lang : Fr][letemps.ch] Censored in India, at odds with the Brazilian judicial system, and at the heart of the American presidential campaign, Elon Musk’s social network is closely tied to the democratic debate. Two experts analyse its influence.
Inspired by this year’s “AI House” panel session on “Transparency in Artificial Intelligence”, this write-up very informally summarizes Imad Aad’s thoughts about transparency and trust in AI. It is aimed at readers of all backgrounds, including those who have had little or no exposure to AI so far.
A new EPFL study has demonstrated the persuasive power of Large Language Models, finding that participants who debated a GPT-4 given access to their personal information were far more likely to change their opinion than those who debated humans.
This project underscores the need for a paradigm shift in data privacy policies, acknowledging the inherent trade-off between data utility and privacy that current Privacy Enhancing Technologies (PETs) cannot fully mitigate. It highlights the limitations of PETs and the systemic responsibility issues within the data supply chain, where technology producers often evade accountability. Consequently, a shift towards a data-use-case-centric evaluation framework is recommended, one that prioritizes utility while minimizing leakage through nuanced risk assessments. Finally, the project calls for greater transparency and a redefined accountability structure in the data-sharing ecosystem.
The Trust & Finance Forum will take place at the Geneva Centre for Security Policy (GCSP) in Geneva on May 1st, 2024, to delve into the intersections of trust and finance. We’ll discuss digital trust trends, and explore new opportunities through real success stories in finance.
Our new C4DT Digital Governance Book Review is out! This time: Anu Bradford (2023) “Digital Empires. The Global Battle to Regulate Technology.” In her new book, Bradford offers an in-depth, objective analysis of the three dominant regulatory models for digitalization—market-driven by the US, state-driven by China, and citizen-driven by the EU—and their global impacts on (…)
This book offers an in-depth, objective analysis of the three dominant regulatory models for digitalization—market-driven by the US, state-driven by China, and citizen-driven by the EU—and their global impacts on data, digital platforms, and the internet, highlighting the current geo-political struggles and possible future scenarios.
The slowdown in Moore’s Law has pushed high-end GPUs towards narrow number formats to improve logic density. This introduces new challenges for accurate Deep Neural Network (DNN) training and inference. Our research aims to bring novel solutions to the challenges introduced by ubiquitous, ever-growing DNN models and datasets. Our proposal targets DNN platforms that are optimal in performance per watt across a broad class of workloads and that improve utility by unifying the infrastructure for both training and inference.
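To give a concrete feel for the issue, the sketch below shows one common workaround, mixed-precision training in PyTorch: the forward pass runs in a narrow format (bfloat16) to save bandwidth and logic, while the weights and optimizer state stay in float32 so that small updates are not rounded away. This is a generic illustration under those assumptions, with a placeholder model and data, not the platform proposed in this project.

```python
# Mixed-precision training sketch (illustrative, not the project's platform).
# Narrow formats (here bfloat16) save bandwidth and logic, but small updates
# can underflow, so the weights and optimizer state stay in float32.
import torch
import torch.nn as nn

model = nn.Linear(64, 1)                 # tiny placeholder model, fp32 weights
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

X = torch.randn(256, 64)
y = X.sum(dim=1, keepdim=True)

for step in range(100):
    opt.zero_grad()
    # Forward pass under autocast: eligible ops run in bfloat16.
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        loss = loss_fn(model(X), y)
    loss.backward()                      # gradients accumulate into fp32 weights
    opt.step()                           # update happens in full precision

print(f"final fp32 loss: {loss_fn(model(X), y).item():.4f}")
```

The same pattern (narrow compute, wide accumulation) is what makes today’s low-precision hardware usable for training at all, and it is exactly the kind of trade-off that gets harder as formats shrink further.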
The C4DT Factory team selected some Privacy Enhancing Technology (PET) links for you. They are all related to digital trust: security, privacy, and trust in general. We have you covered!
The Open-Source AI Models track draws attention to the pivotal role that open-source AI models play in the responsible development of artificial intelligence and highlights the challenges that this field faces, including ethical and responsible usage of AI models, sustainability, and licensing and legal issues.
The AI Safety track addresses the pressing need for responsible AI usage beyond sensationalized risks. While global leaders address extreme threats, the track spotlights often-overlooked but crucial challenges, such as bias mitigation, individuals’ privacy protection, generation of inaccurate or fabricated information, and AI alignment with human values. It serves as a platform for experts from diverse fields to share insights, tackle challenges, and suggest solutions for a safer, human-centric, and trustworthy AI future.