
MRI-to-CT synthesis by unsupervised deep learning for radiotherapy planning

23 August 2023


Category: Postdoctoral position



 

Postdoctoral Position

Contact: Jean-Claude Nunes jean-claude.nunes@univ-rennes.fr

Duration: 12 months. Starting date: September 2023.

 

Laboratoire Traitement du Signal et de l’Image (LTSI), INSERM U1099, Université de Rennes, Rennes, France.

 

Background

CT scans are nowadays the reference imaging modality for dose planning in radiotherapy, because they provide the electron density of tissues required for dose calculation. Due to CT’s poor contrast in soft tissues, MRI is the reference modality for soft-tissue imaging and delineation; MR images, however, do not provide electron density information. To enable dose calculation from MRI, several methods have recently been developed that generate a synthetic CT (sCT) mimicking CT characteristics (intensities in Hounsfield units (HU)) [Boulanger21]. In general, deep learning methods (DLMs), including prior LTSI work [Largent19, Tahri22], have shown better performance, leading to lower MR-to-CT synthesis and dosimetric errors. Moreover, DLMs have the advantages of being fast at sCT generation (< 1 min), more robust, and not requiring deformable inter-patient registration (in regions such as the pelvis) [Largent19]. DL architectures for MRI-to-CT synthesis [Boulanger21] are generator-only networks, GANs (generative adversarial networks), or variants of the two.

Firstly, most DL architectures use only one MRI sequence (T1 or T2) as input and generate one sCT as output, an architecture generally called single-input single-output (SISO). Better sCT quality can be obtained with multiple inputs (multi-input single-output, MISO) [Koike20] or multi-input multi-output (MIMO) [Sharma20].
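As an illustration, a MISO network consumes several co-registered sequences stacked as input channels. The sketch below shows one plausible input preparation; the function name, the z-score normalization, and the channels-first ordering are assumptions for illustration, not a prescribed pipeline:

```python
import numpy as np

def miso_input(t1: np.ndarray, t2: np.ndarray) -> np.ndarray:
    """Stack co-registered T1 and T2 volumes as input channels for a
    multi-input single-output (MISO) network. Each sequence is z-score
    normalized independently, since MRI intensities are not standardized
    across scanners or sequences."""
    def zscore(v: np.ndarray) -> np.ndarray:
        return (v - v.mean()) / (v.std() + 1e-8)
    # Channels-first layout: (n_sequences, depth, height, width)
    return np.stack([zscore(t1), zscore(t2)], axis=0)
```

A SISO model would receive a single-channel version of the same tensor; the only architectural change for MISO is the number of input channels of the first convolution.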

Secondly, a major limitation of DLMs for generating sCT in daily practice is their dependency on training and validation cohorts that are specific to one center or CT/MRI device [Boulanger21], impeding the generalization of the sCT approach. An intuitive solution to the lack of large paired datasets is to reuse a DLM pre-trained on a related domain. Domain adaptation has attracted increasing attention in the field, aiming to minimize the distribution gap between different but related domains. Some methods have also been proposed for cross-modality image synthesis by unsupervised domain adaptation on medical data (such as MR-to-CT) [Xu20].

Thirdly, recent computer vision studies have adopted diffusion processes as a promising alternative to improve sample fidelity in unconditional generative modeling tasks [Dhariwal21, Özbey22]. However, the potential of diffusion methods in medical image translation remains largely unexplored, partly owing to the computational burden of image sampling and the difficulty of unpaired training with regular diffusion models.

 

Mission

Our project focuses on developing an MRI-to-sCT algorithm for accurate planning with the following objectives in mind:

1) Proposing unsupervised (unpaired) learning methods for multi-center use, accounting for the variety of image acquisition systems (manufacturers, calibration, acquisition parameters, magnetic field strength, etc.).

2) Developing a multimodal non-rigid registration approach (MRI/CT), used either for paired training or to reduce registration uncertainty during evaluation.

3) Improving tumor targeting by exploiting multi-sequence MRI data (T1, T2, resolution, etc.) in a MISO MRI-to-CT generation method.

4) Standardizing the sCT evaluation: the HU values of the sCT images will be evaluated by voxel-to-voxel comparison to the CT images [Boulanger21, Tahri22]. Dose distributions calculated on the sCT images will be compared against the reference CT (CTref) dose distributions [Chourak22].
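The voxel-wise image criterion above amounts to comparing HU values inside a body mask. A minimal sketch, where the function name and the mask convention (boolean array, True inside the body) are illustrative assumptions:

```python
import numpy as np

def sct_mae_hu(sct: np.ndarray, ct_ref: np.ndarray,
               body_mask: np.ndarray) -> float:
    """Mean absolute error (in HU) between a synthetic CT and the
    reference CT, restricted to voxels inside the body mask so that
    background air does not dilute the score."""
    diff = np.abs(sct.astype(float) - ct_ref.astype(float))
    return float(diff[body_mask].mean())
```

Dosimetric comparison against the CTref dose distribution is a separate step, performed with a treatment planning system rather than a voxel-wise intensity metric.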

 

She/he will work in close collaboration with other researchers from the IMPACT research team and the CLCC Eugène Marquis (cancer treatment clinical centre).

The postdoc will also have the opportunity to supervise undergraduate students.

 

Hosting environment

The position will be hosted by the laboratory of digital sciences of Rennes: LTSI (INSERM U1099, University of Rennes). The candidate will join the IMPACT research team, which brings together biomedical engineering researchers, physicians and physicists. The team has developed expertise in image-guided radiation therapy with a highly translational approach, from fundamental to clinical studies, and a strong collaboration has been built with the CLCC Eugène Marquis (cancer treatment clinical centre).

Rennes, the gateway to Brittany, is a European city with a reputation for excellence and quality of life. Just 1 hour 25 minutes from Paris and 50 minutes from the coast (Saint-Malo), it is a great location.

Requirements

- A PhD in image processing, computer vision, biomedical engineering or related fields.

- Good programming skills (e.g., in Python).

- Good written and spoken scientific communication skills (English).

- A solid background in the following fields:

  • Deep learning,
  • Signal and image processing, computer vision,
  • Multimodal and non-rigid image registration,
  • Diffusion models,
  • Medical image analysis, in particular MRI and CT.

Salary and duration

The position is for 12 months. Remuneration and social benefits are based on the collective wage agreement for public-sector employees at the national French level, considering previous years of experience.

How to apply: Send an e-mail to jean-claude.nunes@univ-rennes.fr with your CV, publication list, references and motivation letter.

 

Bibliography

[Boulanger21] Boulanger M., Nunes J.-C., et al., Deep learning methods to generate synthetic CT from MRI in radiotherapy: A literature review, Physica Medica, Vol. 89, 2021, 265-281.

[Chourak22] H. Chourak, et al., Quality assurance for MRI-only radiation therapy: A voxel-wise population-based methodology for image and dose assessment of synthetic CT generation methods, Front. Oncol., Vol. 12, 2022.

[Dhariwal21] P. Dhariwal et al., Diffusion models beat GANs on image synthesis, in Adv. Neural Inf. Process. Syst., Vol. 34, 2021.

[Koike20] Y. Koike, et al., Feasibility of synthetic computed tomography generated with an adversarial network for multi-sequence magnetic resonance-based brain radiotherapy, J. Radiat. Res., 61, 2020.

[Largent19] Largent, A., Barateau, A., Nunes, et al., Pseudo-CT Generation for MRI-Only Radiation Therapy Treatment Planning: Comparison Among Patch-Based, Atlas-Based, and Bulk Density Methods. Int. Journal of Rad. Onc. Biology Physics 103(2) 2019.

[Özbey22] Özbey, et al., Unsupervised Medical Image Translation with Adversarial Diffusion Models, 2022, arXiv:2207.08208.

[Sharma20] A. Sharma, et al., Missing MRI Pulse Sequence Synthesis Using Multi-Modal Generative Adversarial Network, IEEE Trans. Med. Imag., 39, 2020.

[Tahri22] S. Tahri, et al., A high-performance method of deep learning for prostate MR-only radiotherapy planning using an optimized Pix2Pix architecture, Physica Medica, Volume 103, November 2022, Pages 108-118.

[Xu20] L. Xu, et al., BPGAN: Bidirectional CT-to-MRI prediction using multi-generative multi-adversarial nets with spectral normalization and localization, Neural Networks, Vol. 128, 2020, 82-96.