It was a great pleasure to share insights into privacy problems related to machine learning (ML) models. Professor Carmela Troncoso from the SPRING lab gave the presentation, while her PhD student Bogdan Kulynych helped set up the hands-on training part of this ML leakage day.
In the morning, Carmela Troncoso started with an overview of why companies should care about ML leakage, and what can go wrong if they don’t. She also introduced techniques for avoiding the leakage of information, and discussed the limitations of these techniques.
In the afternoon, Linus Gasser gave the participants hands-on training on how to measure these problems and how to avoid them. A simple example let them experience first-hand what can go wrong. They are now equipped to test their own ML models for potential leakage, and they have tools to reduce it. Thanks also to our new RSE Ahmed Elghareeb for helping the participants with the exercises.
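To give a flavour of what such a leakage test can look like, here is a minimal, purely illustrative sketch of a membership inference check — not the actual exercise from the training. It uses a hypothetical toy "model" that is overconfident on points it has memorised, and a simple confidence-threshold attack that guesses whether a point was in the training set:

```python
# A minimal sketch of a confidence-threshold membership inference test.
# The "model" below is a hypothetical stand-in, not a real trained model.
import random

random.seed(0)

# Toy "training set" and "held-out set" of feature tuples.
train_data = [(random.random(), random.random()) for _ in range(50)]
test_data = [(random.random(), random.random()) for _ in range(50)]

def model_confidence(point, train_set):
    """Toy stand-in for a model's confidence: an overfitted model is
    far more confident on points it has memorised during training."""
    return 0.99 if point in train_set else random.uniform(0.4, 0.8)

def infer_membership(point, train_set, threshold=0.9):
    """Threshold attack: guess 'member' when the confidence is high."""
    return model_confidence(point, train_set) > threshold

# Attack accuracy: fraction of correct member / non-member guesses.
correct = sum(infer_membership(p, train_data) for p in train_data)
correct += sum(not infer_membership(p, train_data) for p in test_data)
accuracy = correct / (len(train_data) + len(test_data))
print(f"membership inference accuracy: {accuracy:.2f}")
```

An attack accuracy well above the 0.5 chance level indicates that the model leaks information about which points it was trained on; defences such as regularisation or differentially private training aim to push this back towards chance.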
The 23 participants came from our partners: CHUV, Futurae, ICRC, Kudelski, NYM, RUAG, and SICPA.