MSc Thesis: Disentangled Implicit Neural Representations

Implicit Neural Representations (INRs) have emerged as powerful and compact representations for audio, shapes, and images, and have recently been translated to medical applications such as super-resolution, reconstruction, and shape completion [1]. While INRs have traditionally been trained on a single scene or image, novel approaches such as [2] make it possible to capture an entire dataset of similar entities with the help of latent representations. Building on this, INRs are emerging in generative settings and other areas of deep learning, making them a novel and rapidly developing field [3].

Building on these concepts, our current research focuses on efficient and interpretable implicit representations that generalize to diverse datasets and settings. Our aim is to find alternative methods and algorithms to [2] that disentangle image features from their spatial occurrence.
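To illustrate the kind of latent-conditioned INR this project builds on, below is a minimal PyTorch sketch of a sine-activated network whose hidden features are modulated by a per-sample latent code, a simplified variant of the idea in [2]. The class name, hyperparameters (e.g. ModulatedSiren, w0 = 30, the latent dimension), and the toy fitting step are illustrative assumptions, not part of the existing codebase.

import torch
import torch.nn as nn

class ModulatedSiren(nn.Module):
    """Sine-activated MLP whose hidden features are scaled by modulations
    predicted from a per-sample latent code (simplified variant of [2])."""

    def __init__(self, in_dim=2, hidden_dim=128, out_dim=1, latent_dim=64, n_layers=3, w0=30.0):
        super().__init__()
        self.w0 = w0
        self.synthesis = nn.ModuleList(
            [nn.Linear(in_dim if i == 0 else hidden_dim, hidden_dim) for i in range(n_layers)]
        )
        # One modulation vector per hidden layer, predicted from the latent code.
        self.modulator = nn.ModuleList(
            [nn.Linear(latent_dim, hidden_dim) for _ in range(n_layers)]
        )
        self.out = nn.Linear(hidden_dim, out_dim)

    def forward(self, coords, latent):
        # coords: (N, in_dim) spatial coordinates; latent: (latent_dim,) code of one sample.
        h = coords
        for syn, mod in zip(self.synthesis, self.modulator):
            scale = torch.relu(mod(latent))          # feature-wise modulation from the latent
            h = torch.sin(self.w0 * syn(h)) * scale  # sine activation, modulated per feature
        return self.out(h)

# Toy fitting step: coordinates in [-1, 1]^2, image intensities as regression targets.
model = ModulatedSiren()
latent = torch.zeros(64, requires_grad=True)  # optimized jointly with the network weights
coords = torch.rand(1024, 2) * 2 - 1
targets = torch.rand(1024, 1)
optim = torch.optim.Adam(list(model.parameters()) + [latent], lr=1e-4)
loss = ((model(coords, latent) - targets) ** 2).mean()
loss.backward()
optim.step()

In this setting, disentanglement would mean structuring the latent code so that content-related factors are separated from where they appear in the coordinate domain; how best to achieve this is the open question of the project.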

Your qualifications:

We are looking for a highly motivated Master’s student in Computer Science, Physics, Engineering, or Mathematics. Your goal is to extend the existing PyTorch codebase with novel algorithms and methods and to work on signal processing methodology. Importantly, we aim to publish the results of this work, together with you, in a follow-up study at a high-impact machine learning conference or in an academic journal.

  1. Strong motivation and interest in machine learning.
  2. Advanced programming skills in C++, Python or C.
  3. Strong interest in teamwork and interdisciplinary research.

What we offer:

  • An exciting research project with many possibilities to bring in your own ideas.
  • Close supervision and access to state-of-the-art computer hardware.
  • The chance to work in a team of highly qualified experts in machine learning, computer vision and deep learning.



How to apply:

Please send us a short e-mail with your CV and grade report to julian.mcginnis@tum.de and suprosanna.shit@tum.de.

References

[1] McGinnis, J., Shit, S., Li, H. B., Sideri-Lampretsa, V., Graf, R., Dannecker, M., Pan, J., Ansó, N. S., Mühlau, M., Kirschke, J. S., and Rueckert, D. “Multi-contrast MRI Super-resolution via Implicit Neural Representations.” arXiv preprint arXiv:2303.15065 (2023). https://arxiv.org/pdf/2303.15065.pdf
[2] Mehta, I., Gharbi, M., Barnes, C., Shechtman, E., Ramamoorthi, R., and Chandraker, M. “Modulated Periodic Activations for Generalizable Local Functional Representations.” In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 14214-14223, 2021. https://openaccess.thecvf.com/content/ICCV2021/papers/Mehta_Modulated_Periodic_Activations_for_Generalizable_Local_Functional_Representations_ICCV_2021_paper.pdf
[3] Dupont, E., Kim, H., Eslami, S. M., Rezende, D., and Rosenbaum, D. “From data to functa: Your data point is a function and you can treat it like one.” In Proceedings of the 39th International Conference on Machine Learning (ICML), PMLR 162, 2022. https://proceedings.mlr.press/v162/dupont22a/dupont22a.pdf

Julian McGinnis
PhD Student

My research interests focus on deep learning methods and medical applications.