Machine learning technologies have seen tremendous progress over the past decade, owing to the availability of massive and diverse data, rapid growth in computing and storage power, and novel techniques such as deep learning and sequence-to-sequence models. On several central cognitive tasks, including image and speech recognition, ML algorithms have now surpassed human performance. This enables new applications and levels of automation that seemed out of reach only a few years ago. For example, fully autonomous self-driving cars in the real world are now technically feasible; smart assistants integrate speech recognition and synthesis, natural language understanding, and reasoning into full-blown dialog systems; AI systems have beaten humans at Jeopardy, Go, and several other tasks.
Yet taking such functions out of human hands raises a number of concerns and fears, which if not addressed could easily erode our trust in ML technology.
First, ML algorithms can exhibit biases and generate discriminatory decisions, inherited from training data. There is currently a strong research effort under way to define notions of fairness and methods to ascertain that ML algorithms conform to these notions. More broadly, the issue of how to teach machines to act ethically, e.g., self-driving cars needing to make split-second decisions about an impending accident, is critical.
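One widely studied fairness notion is demographic parity, which asks that a model's positive-prediction rate be comparable across protected groups. A minimal sketch of how such a check might look, using purely hypothetical predictions and group labels:

```python
import numpy as np

# Hypothetical model predictions (1 = favourable decision) and a
# binary protected-group attribute for the same ten individuals.
y_pred = np.array([1, 1, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

# Demographic parity compares positive-prediction rates per group.
rate_a = y_pred[group == 0].mean()  # 4/5 = 0.8
rate_b = y_pred[group == 1].mean()  # 2/5 = 0.4
gap = abs(rate_a - rate_b)

print(f"parity gap = {gap:.1f}")  # a large gap suggests disparate treatment
```

Auditing a deployed system amounts to running such checks, for this and stricter notions such as equalized odds, on held-out decisions.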
Second, in many scenarios, ML algorithms and human decision-makers have to work in concert. This is true, for example, in medical diagnostics, where we are not (yet) ready to make completely automated decisions, but doctors want to rely on ML to augment their own understanding and improve their decisions. A major challenge is to explain ML predictions to humans, especially with the advent of “black-box” techniques like deep learning. How can we convince a sceptical human operator that a prediction is plausible and accurate? We need interpretable ML algorithms that can mimic the way a doctor explains a diagnosis to another doctor.
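One well-known family of techniques approximates the black box around a single prediction with a simple, interpretable surrogate (in the spirit of LIME). A minimal sketch of that idea, with a hypothetical black-box function standing in for a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box classifier: in truth, only feature 0 matters much.
def black_box(X):
    return (2.0 * X[:, 0] - 0.1 * X[:, 2] > 0).astype(float)

x0 = np.array([0.0, -1.0, 0.5])  # the individual prediction to explain

# Sample small perturbations around x0 and fit a local linear surrogate.
X = x0 + 0.1 * rng.normal(size=(500, 3))
y = black_box(X)
A = np.c_[X - x0, np.ones(len(X))]  # perturbation features + intercept
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# The surrogate's weights rank features by local influence on the decision.
print("most influential feature:", int(np.argmax(np.abs(w[:3]))))
```

The surrogate's weights can then be presented to the human operator as "which features drove this particular decision", much as a doctor would single out the decisive symptoms.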
Third, while ML algorithms manage to outperform human subjects in various cognitive tasks, many of these algorithms still lack robustness in adversarial settings: for example, small adversarial modifications of images (a few pixels) have been shown to lead to misclassification, while human performance would be unaffected. This lack of robustness is a vulnerability that may be exploited to attack ML systems, and consequently undermine trust in their decisions. Additionally, ML models (e.g., for medical applications) are often trained on sensitive data one would ideally not reveal to third parties, thus creating the need for privacy-sensitive ML algorithms that can learn to make predictions without access to raw sensitive data.
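The image-perturbation attack mentioned above can be made concrete with a toy model: an FGSM-style step that nudges every "pixel" by a tiny amount in the direction that hurts the classifier most. A minimal sketch with a hypothetical linear classifier standing in for an image model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "image" classifier: the predicted class is sign(w . x).
w = rng.normal(size=100)           # fixed model weights
x = 0.1 * w / np.linalg.norm(w)    # an input correctly classified as +1
print(np.sign(w @ x))              # 1.0

# FGSM-style perturbation: move each pixel by at most eps against the score.
eps = 0.05
x_adv = x - eps * np.sign(w)
print(np.max(np.abs(x_adv - x)))   # per-pixel change bounded by eps
print(np.sign(w @ x_adv))          # -1.0: the predicted class flips
```

The per-pixel change is imperceptibly small, yet the aligned perturbation accumulates across all dimensions and overwhelms the original score; deep networks exhibit the same vulnerability.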
The public acceptance of a much greater level of automation in many areas of business and life, of ML algorithms making decisions affecting people’s health, careers, relationships, etc., requires a much stronger level of trust. ML technology has to evolve to be fair, accountable, and transparent (FAT). Today’s research agenda does not sufficiently reflect these requirements and remains strongly focused on pushing performance on tasks such as those outlined above. C4DT will drive a research program that focuses explicitly on trust as a goal of next-generation ML frameworks.
Conversely, ML technology is itself an indispensable layer in the architecture of trust of any sufficiently complex system. Despite decades of research in security technologies, from cryptography to verification and blockchains, human behaviour is often the weakest link and the culprit behind successful attacks. Social engineering has played at least some role in almost all major recent attacks. AI has the potential to bring higher-level reasoning and adaptively learned behavioural patterns to bear on distributed systems of trust. The long-term ambition is to identify and counter attacks that have not previously been observed and explicitly modelled.
In summary, ML, and AI more broadly, are transformative technologies that will reshape our economy and our lives. Trust in these systems is crucial to integrating them without provoking public resistance and a potential backlash, and they need to reflect and encode the values and principles of our societies. At the same time, there is an opportunity for AI technologies to become central in fostering trust in complex digital infrastructures, by detecting and preventing attacks and by proactively analysing complex systems and identifying weaknesses.
Geneva Solutions reports on C4DT joint event on “Manipulating elections in cyberspace: are democracies in danger?”
A little less than one month before the election, one wonders: how are fake news and disinformation affecting relations within a democracy, and, more importantly, what threat do they pose to democracy itself? It is precisely this topic that the Center for Digital Trust (C4DT), housed at EPFL, brought to the…
News type: Press reviews
EPFL’s Predikon: predicting voting results with machine learning
On September 27, Switzerland votes for the first time since the COVID-19 pandemic began, including on a contentious initiative to end the free movement of workers with the European Union. Predikon will be predicting the final outcome within minutes of the release of the first partial municipal results from the…
News type: News
Prof. Carmela Troncoso interviewed by “Le Temps” on her contribution to the SwissCovid app
The French-language newspaper 'Le Temps' interviewed C4DT-affiliated Carmela Troncoso, professor and head of the SPRING lab at EPFL, on her contribution to the SwissCovid app and on her passion for privacy protection in the digital world. Read the article in French on 'www.letemps.ch' by clicking the following link.
News type: Press reviews
Carmela Troncoso: named one of 2020’s global top young tech leaders
Carmela Troncoso, head of the Security and Privacy Engineering Lab (SPRING) in EPFL’s School of Computer and Communication Sciences (IC), helped lead the push to build the Decentralized Privacy-Preserving Proximity Tracing system (DP-3T), now used in COVID-19 tracing apps around the world. She’s just joined the ranks of Fortune Magazine’s…
News type: News
Sabine Süsstrunk elected President of the Swiss Science Council
The Federal Council has elected Sabine Süsstrunk, Professor at the EPFL School of Computer and Communication Sciences (IC) and affiliated with C4DT, as President of the Swiss Science Council (SSC). As of January 1, 2021, Sabine Süsstrunk will succeed Gerd Folkers, who has been President of the SSC since 2016. Please click below for more…
News type: News
Vaud and Geneva join forces to create the Trust Valley
Building on the expertise of 300 companies and 500 experts, the Swiss cantons of Vaud and Geneva are launching the Trust Valley, a public-private cooperation for safe digital transformation, cybersecurity and innovation. Among the founding partners are C4DT members ELCA, Kudelski Group and SICPA. For more information please click…
News type: News
Roche joins the C4DT
We are proud to announce that Roche has just joined the C4DT as a partner. Its arrival adds the pharmaceutical industry to the already broad range of economic sectors represented within the C4DT. We are convinced that this partnership will add new perspectives and insights to digital trust and lead to…
News type: News
ESL developed a new app that can help detect the coronavirus
Five researchers at EPFL’s Embedded Systems Laboratory (ESL), headed by C4DT-affiliated Prof. David Atienza, have developed an artificial intelligence-based system, called Coughvid, that can listen to your cough and indicate whether you might have COVID-19. Click here to access the app. For more information access EPFL's announcement below.
News type: News
C4DT’s Lead Developer participates in LauzHack Against COVID-19
C4DT's lead developer, Linus Gasser, participated in last weekend's LauzHack Against COVID-19, a 72-hour online hackathon dedicated to fighting the coronavirus crisis, where he helped develop an app called Indie-Pocket. It uses various smartphone sensor data and a supervised classification technique to determine in which pocket/body location the…
News type: News
CYD and EPFL launch the CYD Fellowships
Cyber-threats have been accelerating due to the exponential growth of network connectivity. These new capabilities provide myriad opportunities for security hackers to wreak significant damage for commercial, political, or other gains. To promote research and education in cyber-defence, EPFL, the Swiss Federal Institute of Technology in Lausanne, and the Cyber-Defence…
News type: News
C4DT mentioned in “Le Temps” as an initiative against cybercrime
Initiatives against cybercrime, online harassment and spying are increasing at an impressive rate, and Switzerland wants to position itself as a world center of excellence. The French-language newspaper 'Le Temps' asked Olivier Crochat, executive director of the Center for Digital Trust, about the center's focus. Read the article in French on…
News type: Press reviews
C4DT mentioned in RTS French radio show Alter Eco
C4DT is mentioned in the RTS French radio show 'Alter Eco', broadcast on January 6th in French and entitled "Lausanne, 'capital mondial de la confiance'" ("Lausanne, 'world capital of trust'"). Please click below to access the broadcast.
News type: Press reviews
Switzerland: launch of a label to protect SMEs from cyber risks
Protecting your SME from cyberattacks is often complicated: the costs of IT security audits, absent or overly complex standards, and a lack of internal skills have discouraged more than one company from confronting these risks. Born from a participative approach, the cyber-safe.ch label helps SMEs and other small organizations to manage their cyber…
News type: News
EPFL aims to build trust in fintechs
A new research program will combine the specialist knowledge of the Swiss Finance Institute @ EPFL with insights from the School’s data scientists and digital trust experts. The Swiss online bank Swissquote, sponsor of the Chair in Quantitative Finance and founding member of EPFL’s Center for Digital Trust (C4DT), is…
News type: News
Launch of the CyberPeace Institute in Geneva
Thursday 26 September 2019 saw the launch of the CyberPeace Institute, an independent NGO that will address the growing impact of major cyberattacks, assist vulnerable communities, promote transparency, and advance global discussions on acceptable behavior in cyberspace. EPFL President Martin Vetterli will be sitting on the Executive Board, and the…
News type: News
Prof. Ebrahimi and Quantum Integrity awarded an Innosuisse grant
The Multimedia Signal Processing Group, led by C4DT-affiliated Prof. Touradj Ebrahimi, has been working with Quantum Integrity, a startup based at the EPFL Innovation Park, on a deepfake detection solution for the past two years. The research team has already completed two pilot tests and recently obtained a grant from Innosuisse, Switzerland’s…
News type: News
C4DT’s academic director on e-ID in “Le Temps” daily newspaper
On the 4th of June, the Council of States debated the Swiss law on e-ID (Federal Act on Electronic Identification Services, LSIE). C4DT’s academic director Prof. Jean-Pierre Hubaux wrote an article on the topic for the Swiss French-language daily newspaper 'Le Temps', in which he favors state control of all…
News type: Press reviews
The daily newspaper “Le Temps” interviews the Center for Digital Trust
"Many SMEs are discovering digitalization but are not armed to deal with the threats that accompany this process." The Swiss French-language daily newspaper “Le Temps” interviewed C4DT's executive director, Dr. Olivier Crochat, and academic director, Prof. Jean-Pierre Hubaux, on the mission and ambitions of this new center, based at EPFL,…
News type: Press reviews
C4DT Holds First General Assembly
The founding General Assembly of C4DT was held on Friday, 2 November, in the presence of the President of EPFL, Martin Vetterli, and of 50 guests. The 12 partners of the Center said they are keen to apply research to their business needs and regulatory requirements, at a time when digitalization…