Master’s Thesis Opportunity: Differential Privacy for Interpretability

Introduction

Understanding how machine learning models learn and organize concepts is a key challenge in interpretability research. This thesis investigates Differential Privacy (DP) as a constraint for analyzing the hierarchical nature of concept learning, with the goal of improving model interpretability. In particular, the research explores how DP affects the learning of subpopulations and concepts.

Research Questions

Primary Question: How can DP be leveraged as a constraint to understand the hierarchical nature of concept learning in machine learning models? Can DP be used to improve model interpretability?

Secondary Questions:

  • How can we effectively identify and analyze the learning of specific concepts under DP constraints?
  • What metrics can be used to quantify concept acquisition? (A toy illustration follows this list.)
  • How does the level of privacy influence the order and granularity of learned concepts?
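
As a toy illustration of one possible metric (the names, threshold, and numbers below are illustrative assumptions, not a fixed design), concept acquisition could be summarized as the first training epoch at which accuracy on a concept's subpopulation crosses a threshold:

    from typing import Optional, Sequence

    def acquisition_epoch(
        subpop_accuracy_per_epoch: Sequence[float],
        threshold: float = 0.9,
    ) -> Optional[int]:
        """Return the first epoch at which accuracy on the concept's
        subpopulation reaches `threshold`, or None if the concept is
        never acquired (e.g., under strong DP noise)."""
        for epoch, acc in enumerate(subpop_accuracy_per_epoch):
            if acc >= threshold:
                return epoch
        return None

    # Illustrative accuracy curves under weak vs. strong privacy:
    weak_dp = [0.30, 0.60, 0.85, 0.93, 0.95]    # acquired at epoch 3
    strong_dp = [0.30, 0.40, 0.50, 0.55, 0.60]  # not acquired in 5 epochs
    print(acquisition_epoch(weak_dp), acquisition_epoch(strong_dp))

Comparing such acquisition times across privacy levels and across concepts of different granularity is one way the secondary questions could be made quantitative.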

Methods & Tools

The thesis will involve a combination of theoretical analysis and empirical evaluation. Possible methodologies include the following (a minimal DP training sketch follows the list):

  • Implementation of experiments in Python.
  • Qualitative analysis of learned concepts and identification of relevant subpopulations.
  • Development of quantitative metrics to evaluate concept acquisition under varying DP constraints.
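
As a concrete starting point, the sketch below shows what DP-constrained training could look like. It is a minimal sketch assuming PyTorch with the Opacus library and toy data; the architecture, noise_multiplier, and delta are placeholder choices, not a prescribed setup. Varying the noise multiplier (and hence epsilon) while logging per-subpopulation metrics is the kind of experiment the thesis would build on.

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    from opacus import PrivacyEngine

    # Toy data standing in for a real dataset with annotated subpopulations.
    X = torch.randn(1024, 20)
    y = (X[:, 0] > 0).long()
    loader = DataLoader(TensorDataset(X, y), batch_size=64)

    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    criterion = nn.CrossEntropyLoss()

    # Attach DP-SGD: per-sample gradient clipping plus Gaussian noise.
    privacy_engine = PrivacyEngine()
    model, optimizer, loader = privacy_engine.make_private(
        module=model,
        optimizer=optimizer,
        data_loader=loader,
        noise_multiplier=1.0,  # higher noise -> stronger privacy
        max_grad_norm=1.0,     # per-sample gradient clipping bound
    )

    for epoch in range(5):
        for xb, yb in loader:
            optimizer.zero_grad()
            loss = criterion(model(xb), yb)
            loss.backward()
            optimizer.step()
        # Privacy budget spent so far; concept-acquisition metrics would
        # be logged alongside epsilon at each epoch.
        print(f"epoch {epoch}: eps = {privacy_engine.get_epsilon(delta=1e-5):.2f}")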

Prerequisites

We are looking for students with an interest in privacy, interpretability, and machine learning. Required and preferred skills include:

Required:

  • Strong motivation
  • Strong programming skills in Python
  • Background in machine learning and deep learning

Preferred:

  • Experience with privacy-preserving ML frameworks
  • Familiarity with differential privacy fundamentals
  • Knowledge of interpretable AI techniques
  • Good mathematical foundations

What We Offer

  • Access to computing resources and datasets for experimentation
  • Support from experienced researchers in DP and interpretability
  • Opportunities to publish findings at AI/ML conferences

Application Process

Interested students should send the following to sarah.lockfisch@tum.de and jonas.kuntzer@tum.de:

  • A brief motivation letter (1 page max)
  • Your CV
  • Academic transcripts
  • Any relevant project samples or publications (if available)

For further questions, feel free to reach out to Sarah Lockfisch or Jonas Kuntzer.

Sarah Lockfisch
PhD Student

Jonas Kuntzer
PhD Student

My main interests are understanding how neural networks learn, and differential privacy.