



June 26, 2020

Multimodal information fusion for diabetic retinopathy diagnosis

Category: PhD position

The Laboratory of medical information processing (LaTIM – UMR 1101 INSERM) is opening a PhD position on multimodal information fusion for the diagnosis of diabetic retinopathy, within the framework of the ANR RHU EviRed project (Intelligent Evaluation of Diabetic Retinopathy).



EviRed Project

Diabetic retinopathy (DR), an ocular complication of diabetes, is a leading cause of blindness in developed countries. A major obstacle in the fight against DR is that its clinical classification relies on an old imaging technique: color fundus photography (CFP). This classification is insufficient to finely predict future evolution: in roughly 50% of cases, ophthalmologists over- or underestimate the advent of complications. Newer imaging techniques are superior: ultra-wide-field color fundus photography (UWF-CFP) provides useful 2-D information on the periphery of the retina, which is not visible on standard CFP. Structural optical coherence tomography (OCT), which produces cross-sectional 3-D images with a resolution of a few microns, is the reference modality for diagnosing diabetic macular edema, a complication of DR. It has been enriched with OCT angiography (OCTA), which can image the vasculature of the retina non-invasively. However, these new imaging modalities produce an ever-growing amount of data whose interpretation requires a high level of human expertise, and any clinical score based on them would be complex and challenging for most ophthalmologists to apply. The purpose of the EviRed project is therefore to replace the current classification with an AI-based expert system that integrates these data to propose a diagnosis and a prognosis.

Members of the EviRed consortium (LaTIM, AP-HP, Evolucare Technologies and ADCIS) have already developed a CE-marked, AI-based DR screening system for CFP. The consortium has been reinforced by a world leader in eye imaging (ZEISS) and its R&D expertise.

LaTIM Laboratory

The PhD student will be hosted by LaTIM, in Brest, France, which leads AI development in the EviRed project. Born from the complementarity between health and communication sciences, LaTIM ("laboratoire de traitement de l'information médicale", the laboratory of medical information processing) conducts multidisciplinary research carried out by members of the University of Western Brittany, IMT Atlantique, INSERM and Brest University Hospitals. Information is at the heart of the unit's research project: multimodal, complex, heterogeneous, shared and distributed by nature, it is integrated by researchers into methodological solutions with the sole purpose of improving actual medical benefit. Hosted within the CHRU (Brest University Hospital), the joint research unit (UMR) has, in addition to its own platforms, privileged access to the hospital's technical platforms as well as to clinical data and patients, in a strong dynamic of translational research.

Research Topic

One critical part of AI development for DR diagnosis in the EviRed project is the ability to combine visual information from various imaging modalities (UWF-CFP, OCT, WF-OCTA) with contextual information collected in ophthalmology departments (e.g. visual acuity) and in diabetology departments (e.g. diabetes stability). The purpose of this PhD position is therefore to develop neural architectures able to jointly process these multimodal sources of information, in order to classify diabetic patients according to their DR status. The investigated architectures will have to jointly process 2-D, 3-D and scalar information. To improve performance, cross-modality image registration will also be investigated, so that the networks can identify local cross-modality features. One challenge is that neural architectures for such multimodal data do not exist yet, which has two main consequences: 1) new architectures need to be designed, and 2) transfer learning, a powerful solution in medical image analysis, will be difficult to apply. Unlike most medical image analysis tasks, the problem is therefore not only one of learning neural weights but also of learning architectures. To enable AI development, thousands of multimodal DR examinations will progressively become available over the course of the project.
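As a rough illustration of the kind of fusion architecture involved, the PyTorch sketch below combines a 2-D convolutional branch (fundus image), a 3-D convolutional branch (OCT volume) and an MLP branch (scalar clinical data) by feature concatenation before a shared classification head. All layer sizes, input shapes and variable names are hypothetical placeholders chosen for illustration, not the project's actual design.

```python
import torch
import torch.nn as nn

class MultimodalDRNet(nn.Module):
    """Toy late-fusion network: each modality is encoded by its own
    branch, the resulting feature vectors are concatenated, and a
    linear head predicts the DR class."""
    def __init__(self, n_clinical=4, n_classes=5):
        super().__init__()
        self.branch_2d = nn.Sequential(          # 2-D branch: UWF-CFP image
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> 32 features
        )
        self.branch_3d = nn.Sequential(          # 3-D branch: OCT/OCTA volume
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),   # -> 16 features
        )
        self.branch_scalar = nn.Sequential(      # scalar branch: e.g. visual acuity
            nn.Linear(n_clinical, 16), nn.ReLU(),    # -> 16 features
        )
        self.head = nn.Linear(32 + 16 + 16, n_classes)

    def forward(self, fundus, oct_volume, clinical):
        fused = torch.cat([self.branch_2d(fundus),
                           self.branch_3d(oct_volume),
                           self.branch_scalar(clinical)], dim=1)
        return self.head(fused)

net = MultimodalDRNet()
logits = net(torch.randn(2, 3, 64, 64),      # batch of two 2-D fundus crops
             torch.randn(2, 1, 16, 32, 32),  # batch of two 3-D OCT patches
             torch.randn(2, 4))              # batch of two clinical vectors
print(logits.shape)  # torch.Size([2, 5])
```

Concatenation is only the simplest fusion strategy; part of the thesis would be exploring where and how modalities should interact (early vs. late fusion, attention across modalities, learned architectures).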



Required Skills

• Experience/courses in AI and image processing

• Python programming

• AI libraries (TensorFlow, Keras, PyTorch, etc.)


Starting date

November/December 2020


Please send a resume, a cover letter, references and available transcripts (Master's degree or equivalent) to:

• Hassan Al Hajj (hassan.alhajj@univ-brest.fr)

• Mathieu Lamard (mathieu.lamard@univ-brest.fr)

• Gwenolé Quellec (gwenole.quellec@inserm.fr)


