MSc Thesis: Bias and Fairness in Healthcare AI-based Algorithms

We are looking for a motivated Master’s student to join our research team and work on a thesis project on bias in healthcare AI-based algorithms. The successful candidate will have access to a large-scale biomedical database and research resource containing in-depth genetic and health information from half a million participants. The position involves designing a use case for evaluating bias, implementing machine learning models, and generating a fairness report.

This is an exciting opportunity to work on a project with real-world implications for healthcare. There may also be the possibility of publishing the results in a peer-reviewed scientific journal, an excellent opportunity to showcase the student’s research skills and contribute to the field of healthcare AI.

Project Background:

Healthcare AI-based algorithms are rapidly becoming ubiquitous in medical settings, from assisting clinicians in diagnosing diseases to personalizing treatments. While AI has the potential to revolutionize healthcare, it can also perpetuate bias and lead to disparities in care. For example, an AI-based algorithm may diagnose certain diseases more accurately in some racial or ethnic groups than in others, leading to differential treatment and outcomes. Such bias can have serious consequences for patient care, particularly for people from marginalized or underrepresented groups. Given the importance of fair and equitable healthcare, it is crucial to incorporate fairness checks into the design and implementation of healthcare AI-based algorithms. Fairness checks help identify and mitigate bias and ensure that an algorithm does not unfairly discriminate against any group. The successful candidate will design and implement a use case that evaluates bias in healthcare AI-based algorithms and generate a fairness report.
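As a purely illustrative example of what such a fairness check can look like in practice, the sketch below trains a simple classifier on synthetic data and compares accuracy and true-positive rate (sensitivity) across a hypothetical demographic group. The data, model, and group attribute are placeholders chosen for illustration only and are not part of the project’s dataset or methodology.

  # Minimal, illustrative sketch of a group-wise fairness check.
  # The dataset, model, and "group" attribute are synthetic placeholders.
  import numpy as np
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split
  from sklearn.metrics import accuracy_score, recall_score

  rng = np.random.default_rng(0)

  # Synthetic stand-in data: two features, a binary diagnosis label,
  # and a binary sensitive attribute (e.g., a demographic group).
  X = rng.normal(size=(1000, 2))
  group = rng.integers(0, 2, size=1000)
  y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=1000) > 0).astype(int)

  X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
      X, y, group, test_size=0.3, random_state=0
  )

  model = LogisticRegression().fit(X_tr, y_tr)
  y_pred = model.predict(X_te)

  # Compare accuracy and true-positive rate per group; large gaps
  # indicate that the model performs unevenly across groups.
  for g in np.unique(g_te):
      mask = g_te == g
      acc = accuracy_score(y_te[mask], y_pred[mask])
      tpr = recall_score(y_te[mask], y_pred[mask])
      print(f"group {g}: accuracy={acc:.3f}, TPR={tpr:.3f}")

In the thesis itself, checks of this kind would be applied to the actual prediction task and extended to established fairness criteria such as demographic parity or equalized odds, and the results would feed into the fairness report.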

Your Responsibilities:

  • Design a use case for evaluating bias in healthcare AI-based algorithms
  • Implement machine learning models to test the use case
  • Generate a fairness report that identifies any bias and suggests ways to mitigate it

Your Qualifications:

  • Strong programming skills in Python
  • Experience with machine learning algorithms and data analysis
  • Excellent written and verbal communication skills
  • Ability to work independently and as part of a team

How to Apply:

Fill in the form below or send an email to haifa.beji@tum.de with your CV. We will get back to you within a few days.

Haifa Beji
PhD Student

My main research interests lie in assessing and addressing bias and fairness in medical algorithms.