Machine learning technologies have seen tremendous progress over the past decade, owing to the availability of massive and diverse data, rapid growth in computing and storage power, and novel techniques such as deep learning and sequence-to-sequence models. ML algorithms for several central cognitive tasks, including image and speech recognition, have now surpassed human performance. This enables new applications and levels of automation that seemed out of reach only a few years ago. For example, fully autonomous self-driving cars in the real world are now technically feasible; smart assistants integrate speech recognition and synthesis, natural language understanding, and reasoning into full-blown dialog systems; and AI systems have beaten humans at Jeopardy, Go, and several other tasks.
Yet taking such functions out of human hands raises a number of concerns and fears which, if not addressed, could easily erode our trust in ML technology.
First, ML algorithms can exhibit biases, inherited from their training data, and generate discriminatory decisions. A strong research effort is currently under way to define notions of fairness and methods to verify that ML algorithms conform to them. More broadly, the question of how to teach machines to act ethically, e.g., self-driving cars needing to make split-second decisions about an impending accident, is critical.
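To make such fairness notions concrete, here is a minimal sketch in Python (assuming NumPy is available) of one widely used criterion, demographic parity: a classifier’s positive-prediction rate should not depend on a protected attribute. The data below is an illustrative toy example, not a method prescribed by the text.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred : binary predictions (0/1) from the classifier
    group  : binary protected attribute (0/1) per sample
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # P(prediction = 1 | group 0)
    rate_b = y_pred[group == 1].mean()  # P(prediction = 1 | group 1)
    return abs(rate_a - rate_b)

# Illustrative toy data: a gap of 0.0 would satisfy demographic parity.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.2f}")
```

Auditing pipelines compute such gaps on held-out data; other definitions (e.g., equalized odds or calibration across groups) lead to different, and sometimes mutually incompatible, checks.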
Second, in many scenarios, ML algorithms and human decision-makers have to work in concert. This is true, for example, in medical diagnostics, where we are not (yet) ready to make completely automated decisions, but doctors want to rely on ML to augment their own understanding and improve their decisions. A major challenge is to explain ML predictions to humans, especially with the advent of “black-box” techniques like deep learning. How can we convince a sceptical human operator that a prediction is plausible and accurate? We need interpretable ML algorithms with the ability to mimic the way a doctor explains a diagnosis to another doctor.
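One simple route to such explanations, sketched below in Python under the assumption that scikit-learn is available, is a global surrogate: train a small, readable decision tree to imitate the black-box model’s predictions, then present the tree’s rules as an approximate explanation. This is one technique among many (alongside, e.g., local attribution methods), not the approach the text commits to.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# A stand-in "black box" (in practice this could be a deep network).
data = load_breast_cancer()
X, y = data.data, data.target
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global surrogate: a shallow tree fitted to the black box's *predictions*
# (not the true labels), so its rules approximate the black box's behaviour.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=list(data.feature_names)))
print("fidelity to black box:", surrogate.score(X, black_box.predict(X)))
```

The fidelity score indicates how faithfully the readable rules mimic the black box; a low value means the explanation itself should not be trusted.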
Third, while ML algorithms manage to outperform humans on various cognitive tasks, many of these algorithms still lack robustness in adversarial settings: for example, small adversarial modifications of images (a few pixels) have been shown to cause misclassification, while human performance would be unaffected. This lack of robustness is a vulnerability that may be exploited to attack ML systems and consequently undermine trust in their decisions. Additionally, ML models (e.g., for medical applications) are often trained on sensitive data one would ideally not reveal to third parties, creating the need for privacy-preserving ML algorithms that can learn to make predictions without access to raw sensitive data.
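The pixel-level fragility described above can be illustrated with the fast gradient sign method (FGSM), one of the simplest adversarial attacks: nudge each input pixel a small step in the direction that increases the model’s loss. The sketch below assumes PyTorch and a placeholder `model`; it shows the core idea only, not a specific attack from the text.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: perturb inputs to maximally increase the loss.

    model   : any differentiable classifier (placeholder nn.Module)
    x, y    : input batch (pixels in [0, 1]) and true labels
    epsilon : perturbation budget; small values are near-invisible to humans
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the sign of the gradient, then stay in the valid pixel range.
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1)
    return x_adv.detach()

# Hypothetical usage: a model that classifies `images` correctly may
# misclassify fgsm_attack(model, images, labels), even though the two
# batches are nearly indistinguishable to a human observer.
```

The privacy concern in the same paragraph is typically addressed by complementary techniques such as federated learning, differential privacy, or homomorphic encryption.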
The public acceptance of a much greater level of automation in many areas of business and life, with ML algorithms making decisions affecting people’s health, careers, relationships, etc., requires a much stronger level of trust. ML technology has to evolve to be fair, accountable, and transparent (FAT). Today’s research agenda does not sufficiently reflect these requirements and remains strongly focused on pushing the performance of tasks such as those outlined above. C4DT will drive a research program that focuses explicitly on trust as a goal of next-generation ML frameworks.
Conversely, ML technology is itself an indispensable layer in the architecture of trust of any sufficiently complex system. Despite decades of research in security technologies, from cryptography to verification to blockchains, human behaviour is often the weakest link and the culprit in successful attacks. Social engineering has played at least some role in almost all major recent attacks. AI has the potential to bring higher-level reasoning and adaptively learned behavioural patterns to bear on distributed systems of trust. The long-term ambition is to identify and counter attacks that have not previously been identified and explicitly modelled.
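As a toy illustration of this ambition, one might flag previously unseen behaviour with an unsupervised anomaly detector such as scikit-learn’s IsolationForest; the behavioural features below are hypothetical stand-ins for real telemetry, and this sketch is far simpler than the adaptive systems envisioned here.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-session features:
# [login attempts, kilobytes transferred, distinct hosts contacted]
normal_sessions = rng.normal(loc=[3, 500, 5], scale=[1, 100, 2], size=(1000, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

# A session pattern never seen (or explicitly modelled) before.
suspicious = np.array([[40, 50_000, 90]])
print(detector.predict(suspicious))  # -1 flags an anomaly, +1 looks normal
```

The detector learns only what “normal” looks like, so it can flag attacks that were never explicitly modelled, which is precisely the property this paragraph calls for.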
In summary, ML and, more broadly, AI are transformative technologies that will reshape our economy and our lives. Trust in these systems is crucial to integrating them without provoking public resistance and a potential backlash, and they need to reflect and encode the values and principles of our societies. At the same time, there is an opportunity for AI technologies to become central in fostering trust in complex digital infrastructures, by detecting and preventing attacks and by proactively analysing complex systems to identify weaknesses.
AI’s “Gut Feeling”: Should Society Trust It?
Inspired by this year’s “AI House” panel session on “Transparency in Artificial Intelligence”, this write-up very informally summarizes Imad Aad’s thoughts on transparency and trust in AI. It is aimed at readers of all backgrounds, including those who have had little or no exposure to AI…
News type: Blog posts
[FR] The Taylor Swift affair: the proliferation of “deepfakes” is deemed “alarming and terrible”. How can they be fought?
Created with software that uses artificial intelligence, pornographic photos and videos of the star have been viewed millions of times. Faced with this scourge, technical and legal solutions are being sketched out, but their effectiveness is uncertain.
News type: News
EPFL’s new Large Language Model for Medical Knowledge
EPFL researchers have just released Meditron, the world’s best-performing open-source Large Language Model tailored to the medical field, designed to help guide clinical decision-making.
News type: News
[FR] Artificial intelligence: the great replacement?
The artificial intelligence service ChatGPT is fascinating the planet, and for good reason: ask it any question you like and the machine will answer well enough to pass a university exam in your place, write your emails or even, soon, conduct your defence in court. What an upheaval for schools…
News type: News
[FR] The economic promises of AI
Audio podcast, on the programme: the economic horizons of artificial intelligence; rail and air in agreement to reduce the travel footprint; and a report with the protesters in Peru.
News type: News
ChatGPT and the future of digital health
How we interact with computers has just changed overnight, forever. A new class of generative AI has emerged that will revolutionize communication and information – and health along with it.
News type: Blog posts
Using the matrix to help Meta gear up
Just 12 months after it was created, in December 2004, one million people were active on Facebook. As of December 2021, it had an average of 1.93 billion daily active users. EPFL is in a unique collaboration with its parent company Meta on distributed deep learning research.
News type: News
“Deepfake generation and detection is like an arms race”
Two EPFL computer scientists have taken home a major prize in Singapore’s Trusted Media Challenge, a five-month-long competition aimed at cracking the code of deepfakes.
News type: Press reviews
[FR] Solutions to thwart fake images created by deepfakes
Prof. Touradj Ebrahimi, head of the C4DT-affiliated Multimedia Signal Processing Group, presented solutions to uncover deepfakes today on the RTS CQFD radio show.
News type: Press reviews
[FR] Data and AI: how can companies build more trust with their clients and users?
Olivier Crochat heads the Center for Digital Trust at the École polytechnique fédérale de Lausanne. He revisits the concept of trust applied to the digital world, with an overview of the questions facing companies today as they develop digital services based on data and AI.
News type: Press reviews
DuoKey, Futurae and Nym join the C4DT through its associate partner program
We are delighted to announce that three additional start-ups have joined the C4DT community through the C4DT start-up program. For two years, DuoKey SA, Futurae Technologies AG and Nym Technologies SA will complement the already diverse group of partner companies with their start-up perspectives, collaborating and sharing insights on…
News type: News
Ruag AG joins the C4DT
We are pleased to announce that Ruag AG, Switzerland, has just joined the C4DT as a partner. Owned by the Confederation, Ruag AG is the technology partner of the Swiss Armed Forces. Together with armasuisse, Ruag’s presence strengthens C4DT’s expertise in cybersecurity and cyber defense. We are convinced that this partnership…
News type: News
Reward for learning with a twist of real-life research
Martin Jaggi, C4DT-affiliated Tenure Track Assistant Professor in the School of Computer and Communication Sciences (IC), has won the 2021 Credit Suisse Award for Best Teaching for introducing two novel, hands-on science challenges into his Machine Learning course, the largest master’s-level class on campus.
News type: Press reviews
Tune Insight secures pre-seed round from Wingman Ventures
Tune Insight’s B2B software enables organizations to make better decisions by collaborating securely on their sensitive data to extract collective insights. Incubated at the EPFL Laboratory for Data Security, with a deployment in Swiss university hospitals and customer-funded projects in the insurance and cybersecurity sectors, Tune Insight will use the…
News type: News
The EPFL Tech Launchpad awards two new Ignition grants
We are delighted to announce that the startups MinWave and Predikon have each been awarded a CHF 30k Ignition grant as part of EPFL’s Tech Launchpad, a leading incubator dedicated to supporting groundbreaking and innovative startups.
News type: Press reviews
Deepfake Arms Race
Stories of fakes, forgeries, fortunes and folly have intrigued people throughout the ages, from the Athenian Onomacritus, who around 500 BC was said to have been a forger of old oracles and poems, to Shaun Greenhalgh, who between 1978 and 2006 infamously created hundreds of Renaissance, Impressionist and other art…
News type: Press reviews
A Journey With Predikon (3/3)
On March 7, the Swiss population voted on a ban on full face coverings, the e-ID Act, and an economic partnership agreement with Indonesia. As with all Swiss referendums since 2019, the EPFL election prediction tool Predikon generated real-time predictions for the vote outcomes.
News type: Blog posts
Deepfakes wreak havoc
Take a look at the RTS documentary on the impacts of deepfakes, featuring an interview with C4DT-affiliated professor Touradj Ebrahimi.
News type: Press reviews
A Journey With Predikon (2/3)
On March 7, the next Swiss referendum will be held, with votes on a ban on full face coverings, the e-ID Act, and the economic partnership agreement with Indonesia. As this national vote approaches, the EPFL election prediction tool Predikon is rolling out improvements with new features.
News type: Blog posts
“My focus is on teasing information out of networks”
EPFL Professor Matthias Grossglauser has been awarded the grade of Institute of Electrical and Electronics Engineers (IEEE) Fellow, a notable recognition given to fewer than 0.1% of voting members annually.
News type: News
A Journey With Predikon (1/3)
While votes seem to yield increasingly surprising results, such as the election of Donald Trump in 2016 or the UK’s Brexit vote, both of which defied pre-vote polls and initial vote-count predictions, Swiss vote results are swiftly predicted by Predikon. We will follow the evolution of the project until…
News type: Blog posts
Quantum Integrity went home with the prize at October’s EIC ePitching
October’s EIC ePitching with Investors brought together eight highly innovative EIC-backed SMEs and six investor firms to pitch on Fintech, Insurtech and Blockchain applications. This month’s edition awarded Swiss-based Quantum Integrity for its efforts in AI-powered deepfake and image forgery detection. CEO Anthony Sahakian’s startup is located at…
News type: News
Manipulating elections in cyberspace: are democracies in danger?
A recent conference held by EPFL’s C4DT (Center for Digital Trust) and live streamed, featuring some of the world’s leading experts on cyber security, fake news and democracy, heard that citizens and governments should regain their sense of alarm and urgently do something to address what we all…
News type: News
Trust Valley sets off at EPFL
An alliance for excellence supported by multiple public, private and academic actors, the “TRUST VALLEY” was launched on Thursday, October 8, 2020. Cantons, the Confederation, academic institutions and captains of industry such as ELCA, Kudelski and SICPA came together to co-construct this pool of expertise and encourage the emergence of innovative…