Multimodal image segmentation for computer-aided radiotherapy
30 November 2022
Category: Internship
Two internships in the context of MRI-guided radiotherapy, part of the CEMMTAUR project / CominLabs Labex. The main task will be the design and deployment of neural networks for the segmentation of the target and organs at risk. The two internships will each deal with the specificities of a different anatomy: brain or prostate.
• Keywords: Deep learning, medical image analysis, multimodal imaging, radiotherapy
• Supervisor: firstname.lastname@example.org
• Team and Lab: SIMS team, LS2N Lab, Nantes, France
• Starting date: Jan-2023 (flexible), duration ~6 months
In the context of image-guided radiotherapy, this project will focus on the automatic segmentation of multiple organs from Magnetic Resonance (MR; T1, T2) and Computed Tomography (CT) images. We will target two anatomies, the prostate and the brain, as well as their respective surrounding Organs At Risk (OARs). The reason to consider multiple modalities (MR and CT) is that MR images allow for better visualisation of soft tissues, while CT is required for radiotherapy dose computation. Segmentation from multimodal images (here MR and CT) is therefore necessary for clinical assessment, diagnosis and treatment planning [Ackaouy20, Ouyang19]. Extensive literature has shown the effectiveness of convolutional neural networks in segmenting multiple organs [Li21, Painchaud20]. Yet, without proper adaptation, these models fail when deployed across modalities, new populations or different clinical sites, mainly due to domain shift. Designing models that perform well across domains is thus critical, as labels are scarce and expensive. Two internships are proposed, each working on one of the two target anatomies.

The first internship will be dedicated to the prostate dataset. The mission will be to design and implement algorithms that learn the organ's shape distribution from MR images (single sequence) and then use these learned shapes to help segment CT images, where contours are less visible. We will start by studying our prior work on Optimal Latent Vector Alignment (OLVA) [AlChanti21a] for segmentation under unsupervised or weakly supervised domain adaptation (DA). Then, we will explore how to adapt this work or similar techniques to the prostate dataset.

The second internship will address the brain dataset and focus on two aspects: the multi-sequence nature of brain images (multiple MR images for a single brain) and partial annotations (not all organs are annotated for every patient). The intern will study existing methods for handling inhomogeneous or incomplete annotations and multi-target domain adaptation [Saporta21], and integrate them into OLVA or other unsupervised DA (UDA) methods for segmentation.
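To make the latent-alignment idea concrete, the toy NumPy sketch below simulates encoder outputs for the two modalities and measures their mismatch with a sliced 1-D Wasserstein divergence, which drops after a simple mean-matching step. This is an illustrative stand-in under made-up data, not the actual OLVA method (which learns the alignment jointly with a variational segmentation model and an optimal-transport loss); all names and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent vectors: imagine an encoder mapping MR (source) and CT
# (target) image patches into a shared 2-D latent space. The domain
# shift appears here as a translation between the two point clouds.
z_src = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(500, 2))   # MR latents
z_tgt = rng.normal(loc=[3.0, -2.0], scale=1.0, size=(500, 2))  # CT latents

def sliced_wasserstein(a, b, n_proj=50, proj_rng=None):
    """Monte-Carlo sliced 1-D Wasserstein distance between point clouds."""
    if proj_rng is None:
        proj_rng = np.random.default_rng(1)
    total = 0.0
    for _ in range(n_proj):
        d = proj_rng.normal(size=a.shape[1])
        d /= np.linalg.norm(d)                  # random unit direction
        pa, pb = np.sort(a @ d), np.sort(b @ d) # 1-D projections
        total += np.mean(np.abs(pa - pb))       # 1-D Wasserstein-1
    return total / n_proj

before = sliced_wasserstein(z_src, z_tgt)

# Simplest possible "alignment": shift the target cloud so its mean
# matches the source mean (a crude stand-in for a learned alignment).
z_tgt_aligned = z_tgt - z_tgt.mean(axis=0) + z_src.mean(axis=0)
after = sliced_wasserstein(z_src, z_tgt_aligned)

print(f"divergence before alignment: {before:.3f}")
print(f"divergence after alignment:  {after:.3f}")
```

In a real UDA pipeline the divergence term would be minimised by gradient descent together with a supervised segmentation loss on the labelled (MR) domain, so that the decoder trained on MR latents transfers to the aligned CT latents.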
Both internships will be part of the CEMMTAUR (CT synthEsis from Multicentric and MultiSequence MRI daTA with qUality assessment for image-guided Radiotherapy) project.
[AlChanti21a] D. Al Chanti and D. Mateus. Optimal Latent Vector Alignment for Unsupervised Domain Adaptation in Medical Image Segmentation. Int. Conf. on Medical Image Computing and Computer-Assisted Interventions (MICCAI), 2021.
[Ackaouy20] A. Ackaouy et al. Unsupervised domain adaptation with optimal transport in multi-site segmentation of multiple sclerosis lesions from MRI data. Frontiers in Computational Neuroscience, 14:19, 2020.
[Gonzalez20] V. Gonzalez Duque, D. Al Chanti, M. Crouzier, A. Nordez, L. Lacourpaille, and D. Mateus. Spatio-temporal consistency and negative label transfer for 3D freehand US segmentation. Int. Conf. on Medical Image Computing and Computer-Assisted Interventions (MICCAI), 2020.
[Islam21] M. Islam and B. Glocker. Spatially varying label smoothing: Capturing uncertainty from expert annotations. International Conference on Information Processing in Medical Imaging (IPMI), pp. 677-688, Springer, 2021.
[Ouyang19] C. Ouyang et al. Data efficient unsupervised domain adaptation for cross-modality image segmentation. MICCAI 2019.
[Saporta21] A. Saporta et al. Multi-Target Adversarial Frameworks for Domain Adaptation in Semantic Segmentation. ICCV 2021.
[Zhang21] Y. Zhang et al. Deep multimodal fusion for semantic image segmentation: A survey. Image and Vision Computing, Elsevier, 2021. doi:10.1016/j.imavis.2020.104042.
• Background studies in: signal and image processing, biomedical imaging, computer science or related fields
• Prior experience in machine or deep learning and/or medical image analysis
• Strong programming skills (Python)
• Good verbal communication and writing skills (in French/English)
Application: send to email@example.com
• motivation letter
• contact details for two references