The LaTIM laboratory in Brest, France, is offering a 24-month postdoc position on computational anatomical models with deep cross-modality learning.
Many medical imaging protocols involve multiple imaging modalities, such as computed tomography (CT), magnetic resonance (MR) or ultrasound (US) imaging. However, while deep learning techniques based on convolutional neural networks (CNN) have become extremely popular in medical image analysis [1], a common practice is to develop modality-specific (i.e. marginal) computational anatomical models, without explicitly taking into account the multi-modal nature of the underlying imaging data. As a consequence, marginal processing may leave potentially valuable cross-modality information unused and hamper performance in downstream applications such as segmentation. By leveraging multi-modal learning, this project aims at exploiting cross-modal information to improve image processing performance in a large number of medical imaging applications.

The project will focus on the development of innovative methodologies for medical image analysis dedicated to multi-modal imaging. Priority will be given to the development of innovative neural network architectures that can take advantage of learned redundancies and/or complementarities across modalities through cross-modality learning [2]. Such an integrative approach will facilitate the development of multi-modal computational anatomical models that make better use of limited dataset sizes, a typical constraint in medical imaging. The main target application will be modality-blind automatic segmentation of anatomical structures, i.e. segmentation performed independently of the input modality (e.g. MR alone, CT alone or both). This work will complement initial work at LaTIM on muscle [3], bone [4] and organ [5] segmentation using deep learning. Moreover, the methodological contributions will integrate aspects related to domain adaptation and image synthesis using generative modelling [6], for which LaTIM has recognized expertise.
Cross-modality learning is usually performed with deep architectures containing many layers specific to each modality, which prevents fully exploiting potentially valuable inter-modal information. We will therefore propose more compact models [7] that widely re-use network parameters (e.g. by sharing convolution kernels between modalities). Through cross-modality learning, the contributions will aim at obtaining more comprehensive models, towards an augmented vision of the patient-specific anatomy. Deep cross-modality learning represents a promising perspective for various applications in musculoskeletal analysis and oncology: pre-operative planning of arthroplasty for patients with knee osteoarthritis (see the FollowKnee focus below), optimization of therapeutic follow-up for patients with hepatic metastases from colorectal cancer, and zero/low-radiation treatment planning in radiotherapy.
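As a minimal, framework-free illustration of this parameter-sharing idea (the function names, patch sizes and per-modality affine normalization below are hypothetical choices for the sketch, not part of the project description), the following applies one shared convolution kernel to patches from two modalities, so that only a lightweight normalization remains modality-specific:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation via an explicit sliding window."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def modality_branch(x, kernel, scale, shift):
    """Modality-specific affine normalization, then the SHARED kernel."""
    return conv2d(scale * x + shift, kernel)

rng = np.random.default_rng(0)
shared_kernel = rng.standard_normal((3, 3))  # one set of weights for both modalities

ct = rng.standard_normal((8, 8))  # stand-in for a CT patch
mr = rng.standard_normal((8, 8))  # stand-in for an MR patch

# Both branches re-use the same convolution weights; only scale/shift differ.
feat_ct = modality_branch(ct, shared_kernel, 1.0, 0.0)
feat_mr = modality_branch(mr, shared_kernel, 0.8, 0.1)
assert feat_ct.shape == feat_mr.shape == (6, 6)
```

In a full architecture the same principle would apply to entire encoder stages rather than a single kernel, cutting the parameter count roughly in half compared with two fully separate modality-specific streams.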
The goal of the FollowKnee project (https://followknee.com) is to propose an innovative workflow for knee replacement surgery in response to projected demographic changes over the next 20 years [8]. It aims at defining a set of technological solutions to decrease the number of revisions: a personalized implant design adapted to each patient's anatomy, the use of augmented reality in the operating room to optimize surgical techniques, and the integration of a new generation of implant sensors to improve post-operative follow-up. These innovations will have a significant impact on several stakeholders: patients, surgeons, manufacturers and insurers. FollowKnee brings together 10 partners from academic and industrial research. The development of patient-specific knee prostheses requires taking into account all the morpho-functional aspects of the knee joint. Today, pre-operative planning is based on CT acquisitions, from which mainly bone surfaces are extracted. Conversely, MR imaging is increasingly used since it allows better identification of soft tissues such as muscles, ligaments and cartilage. Our aim is to develop approaches that exploit the complementarity of both modalities to improve the surgical planning of knee arthroplasty.
Candidate profile:
— PhD in machine learning, biomedical imaging or computer science
— interest in the fields of health and artificial intelligence
— strong theoretical and practical knowledge in applied mathematics and (medical) image processing
— strong theoretical and practical knowledge in machine and deep learning
— very good level of programming (Python)
— ability to communicate in English; fluency in reading and writing scientific articles
— must have spent a minimum of 18 months outside France between 1st May, 2017 and the beginning of the project
— start date / duration: as soon as possible, for (at least) 24 months
— advisors: Pierre-Henri Conze (IMT Atlantique, LaTIM), Vincent Jaouen (IMT Atlantique, LaTIM)
— laboratory: LaTIM (http://latim.univ-brest.fr), UMR 1101, Inserm
— postal address: IBRBS, 22 avenue Camille Desmoulins, 29200 Brest, France
Applications can be sent by email to firstname.lastname@example.org and email@example.com with the following documents:
— a full curriculum vitæ including a list of scientific contributions
— up to two representative scientific articles or conference papers
— recommendation letters or contacts from former teachers/advisors
— a cover letter stating your motivation and fit for this project
Deadline for application: 1st March, 2021.
[1] G. Litjens et al., A survey on deep learning in medical image analysis. Medical Image Analysis, 2017.
[2] V. Valindria et al., Multi-modal learning from unpaired images: application to multi-organ segmentation in CT and MRI. Winter Conference on Applications of Computer Vision, 2018.
[3] P.-H. Conze et al., Healthy versus pathological learning transferability in shoulder muscle MRI segmentation using deep convolutional encoder-decoders. Computerized Medical Imaging and Graphics, 2020.
[4] A. Boutillon et al., Combining shape priors with conditional adversarial networks for improved scapula segmentation in MR images. International Symposium on Biomedical Imaging, 2020.
[5] P.-H. Conze et al., Abdominal multi-organ segmentation with cascaded convolutional and adversarial deep networks. arXiv, 2020.
[6] C. Hognon et al., Standardization of multicentric image datasets with generative adversarial networks. IEEE Nuclear Science Symposium and Medical Imaging Conference, 2019.
[7] Q. Dou et al., Unpaired multi-modal segmentation via knowledge distillation. IEEE Transactions on Medical Imaging, 2020.
[8] S. Kurtz et al., Future young patient demand for primary and revision joint replacement: National projections from 2010 to 2030. Clinical Orthopaedics and Related Research, 2009.
(c) GdR 720 ISIS - CNRS - 2011-2020.