Announcement


PhD position at LaTIM - Cross-modality learning for prostate tumor segmentation in dosimetric planning

13 July 2022


Category: PhD student


The laboratory of medical information processing (LaTIM UMR 1101, Inserm) is opening a PhD position on multi-modal medical image segmentation using artificial intelligence (AI).

 

Context. With more than 1 million new diagnoses and 350,000 deaths per year worldwide, prostate cancer is the second most common cancer in men [1]. Multi-parametric magnetic resonance (MR) imaging is, along with digital rectal examination, prostate-specific antigen (PSA) testing and prostate biopsies, one of the cornerstones of both diagnosis and therapeutic management [2]. It enables the selection of patients eligible for active surveillance, supports surgical and therapy planning, and allows local assessment in case of recurrence. Its use in radiotherapy is expected to increase significantly in the coming years, since boosting the focal dose delivered to the intra-prostatic tumor volume has been shown to reduce the risk of recurrence. Such planning requires a precise definition of the gross tumor volume (GTV), but two issues arise: (1) the GTV is under-estimated on prostate MR images compared to surgical specimens, and (2) its delineation varies considerably between experts. These difficulties stem from the lack of methodological consensus on the definition of the GTV, but also from the intrinsic complexity of the task. Automatic and robust GTV segmentation based on multiple multi-modal acquisitions, including T2-weighted MR, ADC, B2000, perfusion and contrast-enhanced computed tomography images, is therefore needed to standardize dosimetric planning while reducing the inter-expert variability induced by the delineation task [3].

Segmentation techniques based on convolutional neural networks (CNNs) have become popular in medical imaging, but they often rely on a single imaging modality [4]. Integrating multi-sequence and multi-modal information through deep learning would make it possible to exploit the complementarity between modalities and to build compact, more efficient segmentation models. Given the recent breakthrough of Transformers [5], the use of hybrid CNN/Transformer models [6] for multi-modal information fusion remains largely unexplored.
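To make the hybrid CNN/Transformer idea more concrete, the following minimal PyTorch sketch fuses co-registered sequences by stacking them as input channels (early fusion), extracts features with a small CNN stem and refines them with Transformer layers, in the spirit of TransUNet [6]. All module names, sizes and the fusion strategy are illustrative assumptions, not the architecture targeted by the thesis.

import torch
import torch.nn as nn

class HybridEncoder(nn.Module):
    """Early fusion of co-registered sequences + CNN stem + Transformer layers."""
    def __init__(self, in_modalities=4, embed_dim=256, num_heads=8, depth=4):
        super().__init__()
        # CNN stem: the sequences (e.g. T2, ADC, B2000, perfusion) are stacked
        # as input channels and progressively down-sampled.
        self.cnn = nn.Sequential(
            nn.Conv2d(in_modalities, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, embed_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Transformer layers model long-range dependencies between the
        # CNN feature-map locations, flattened into a token sequence.
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):                            # x: (B, modalities, H, W)
        feats = self.cnn(x)                          # (B, C, H/8, W/8)
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)    # (B, h*w, C)
        tokens = self.transformer(tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)

# Example: four co-registered 2D sequences of a 128x128 slice.
encoder = HybridEncoder()
features = encoder(torch.randn(1, 4, 128, 128))
print(features.shape)  # torch.Size([1, 256, 16, 16])

A segmentation decoder (not shown) would then up-sample these features back to the image resolution; intermediate or late fusion strategies could be compared against this early-fusion baseline.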

 

Main objective. This work focuses on the development of new AI-based methods for multi-modal prostate tumor segmentation, relying on innovative architectures that combine convolutional layers and Transformers and exploit both the redundancy and the complementarity between modalities.

 

Description of work. The PhD thesis will be structured in three main steps. First, we will perform a meta-analysis of existing hybrid segmentation models in order to identify, for each available modality, the most efficient way to combine convolutional layers and Transformers [6]. The second objective will be to extend the identified hybrid architectures to a cross-modal learning scenario. Cross-modality learning is usually performed with networks containing many modality-specific layers, which prevents the potentially valuable cross-modal information from being fully exploited [7]. We will propose more compact computational models by largely re-using network parameters across the different modalities. Finally, the proposed methodological contributions will be applied to concrete use cases in oncology and radiotherapy [8], in particular prostate tumor detection, segmentation [3] and grading. Experiments will rely on public datasets (PROSTATEx, PI-CAI) as well as images collected at CHRU Brest. To assess the genericity of the proposed contributions, multi-modal tumor segmentation tasks from other clinical applications will also be investigated.
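As an illustration of the parameter sharing mentioned in the second step, the following PyTorch sketch re-uses the same convolution kernels for every modality and keeps only the normalization layers modality-specific, in the spirit of [7]. The layer sizes and the two-modality setting are illustrative assumptions.

import torch
import torch.nn as nn

class SharedConvBlock(nn.Module):
    """Convolution shared across modalities, normalization kept modality-specific."""
    def __init__(self, in_ch, out_ch, num_modalities=2):
        super().__init__()
        # A single convolution is shared by all modalities, which keeps the
        # model compact and lets cross-modal information shape the same kernels.
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        # One BatchNorm per modality absorbs intensity-distribution differences
        # (e.g. MR vs. contrast-enhanced CT).
        self.norms = nn.ModuleList(
            [nn.BatchNorm2d(out_ch) for _ in range(num_modalities)])
        self.act = nn.ReLU()

    def forward(self, x, modality):                  # modality: integer index
        return self.act(self.norms[modality](self.conv(x)))

# Example: the same kernels process an MR slice (index 0) and a CT slice (index 1).
block = SharedConvBlock(in_ch=1, out_ch=32, num_modalities=2)
mr = block(torch.randn(2, 1, 64, 64), modality=0)
ct = block(torch.randn(2, 1, 64, 64), modality=1)
print(mr.shape, ct.shape)  # torch.Size([2, 32, 64, 64]) twice

Stacking such blocks yields a single backbone whose weights are shared across modalities with only a small number of modality-specific parameters, which is the kind of compactness targeted in this step.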

 

Bibliography.
[1] H. Sung et al., “Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries,” CA: A Cancer Journal for Clinicians, 2021.
[2] N. Mottet et al., “Guidelines on prostate cancer—2020 update. Part 1: Screening, diagnosis, and local treatment with curative intent,” European Urology, 2021.
[3] A. Saha et al., “End-to-end prostate cancer detection in bpMRI via 3D CNNs: Effects of attention mechanisms, clinical priori and decoupled false positive reduction,” Medical Image Analysis, 2021.
[4] P.-H. Conze et al., “Abdominal multi-organ segmentation with cascaded convolutional and adversarial deep networks,” Artificial Intelligence in Medicine, 2021.
[5] A. Dosovitskiy et al., “An image is worth 16x16 words: Transformers for image recognition at scale,” arXiv preprint 2010.11929, 2020.
[6] J. Chen et al., “TransUNet: Transformers make strong encoders for medical image segmentation,” arXiv preprint 2102.04306, 2021.
[7] Q. Dou et al., “Unpaired multi-modal segmentation via knowledge distillation,” IEEE Transactions on Medical Imaging, 2020.
[8] D. Huang et al., “The application and development of deep learning in radiotherapy: A systematic review,” Technology in Cancer Research & Treatment, 2021.

 

How to apply? Applications should be sent before August 30, 2022 by e-mail to pierre-henri.conze[at]imt-atlantique.fr with the following documents: a full curriculum vitae, recommendation letter(s) from former teacher(s)/advisor(s), a cover letter stating your motivation and fit for this project, and the grades obtained in M2 and/or engineering school.


Candidate profile. The following skills are required: strong theoretical and practical knowledge of applied mathematics, image processing and machine/deep learning; proficiency in Python programming; organizational skills; fluent English for reading and writing scientific articles; and a strong interest in the fields of health and AI.