Machine learning has found its way into a broad variety of sensitive applications, such as health care, hiring processes, and autonomous services. It therefore has a direct impact on our daily lives, and potential malfunctions could cause severe damage to individuals and to society as a whole.
In this seminar, we will therefore set out to study what it means for machine learning to be trustworthy. We will cover several aspects of trustworthiness, such as security, privacy, and fairness, and study recent work from the respective communities to gain an understanding of new research directions in the field.
This includes but is not limited to studying:
- Training- and test-time attacks against the integrity of ML models, such as data poisoning and adversarial machine learning (see the first sketch after this list)
- Privacy attacks against machine learning models and their training data, such as membership inference attacks, model inversion attacks, and property inference attacks (see the second sketch after this list)
- Algorithmic fairness in machine learning
- Confidentiality of machine learning models and their training data
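To give a concrete flavor of a test-time integrity attack, the following minimal sketch implements the Fast Gradient Sign Method (FGSM), a classic evasion attack. It assumes a differentiable PyTorch classifier `model` and a labelled input batch `(x, y)` with pixel values in [0, 1]; the function and parameter names are illustrative and not taken from the seminar material.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """Return adversarial examples for (x, y) within an L-infinity ball of radius eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    # Compute the classification loss on the current input.
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels in a valid range.
    x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()
```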
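The privacy attacks listed above can likewise be surprisingly simple in their basic form. The second sketch below outlines a loss-threshold membership inference attack: examples on which the target model achieves unusually low loss are guessed to have been part of its training set. Again, `model`, `x`, `y`, and `threshold` are placeholders for illustration only.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def infer_membership(model, x, y, threshold):
    """Guess True (training-set member) where the per-sample loss falls below the threshold."""
    logits = model(x)
    per_sample_loss = F.cross_entropy(logits, y, reduction="none")
    return per_sample_loss < threshold  # boolean tensor of membership guesses
```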
The seminar requires a basic understanding of machine learning. Additionally, students are expected to familiarize themselves with the scientific papers listed in the pre-course reading list below.