Colloquium: Towards differentially private machine learning
14 January 2021
Colloquium by Prof. Dr. Esfandiar Mohammadi, University of Lübeck
Thursday, 14 January 2021 17:15, Zoom Meeting
Abstract:
Neural networks can tackle a wide variety of non-trivial problems when they are trained on massive amounts of data. As this training data can contain sensitive information, the question naturally arises: do neural networks leak information about their training data? A rich body of literature has answered this question: yes, they can indeed leak information about their training data, even if only black-box access is possible, e.g., in a cloud service (MLaaS). It has been shown that the privacy notion of differential privacy can help against this kind of attack.
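For context, the guarantee referred to here is the standard notion of (ε, δ)-differential privacy (a textbook formulation, not specific to this talk): a randomized training mechanism M is (ε, δ)-differentially private if, for all datasets D and D' differing in a single record and all sets S of possible outputs,

    \Pr[M(D) \in S] \le e^{\varepsilon} \cdot \Pr[M(D') \in S] + \delta .

Smaller ε and δ mean that the trained model reveals less about any individual training record.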
In this talk, I will discuss methods for differentially private training of neural networks, present our improved analysis techniques, discuss the limitations of current training methods, and give a glimpse into our ongoing work on improving them.
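To illustrate the kind of training method in question, the following is a minimal sketch of one noisy gradient-aggregation step in the spirit of DP-SGD (per-example gradient clipping plus Gaussian noise). The function and parameter names are illustrative only and do not reflect the speaker's implementation.

    import numpy as np

    def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
        """Clip each example's gradient, sum, add Gaussian noise, and average."""
        clipped = []
        for g in per_example_grads:
            norm = np.linalg.norm(g)
            # Scale down gradients whose L2 norm exceeds the clipping bound.
            clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
        summed = np.sum(clipped, axis=0)
        # Gaussian noise calibrated to the clipping bound (the sensitivity).
        noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
        return (summed + noise) / len(per_example_grads)

    # Example: aggregate three toy per-example gradients.
    grads = [np.array([0.5, -2.0]), np.array([3.0, 1.0]), np.array([-0.2, 0.4])]
    print(dp_sgd_step(grads))

The clipping bound limits how much any single example can influence the update, and the noise scale together with the number of steps determines the resulting (ε, δ) guarantee via a privacy accountant.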
Zoom access information: Informatics Colloquium WS 20/21.