
Interpretability and explainability of deep neural networks for radiomics applications

30 June 2022

Category: Post-doctoral position

This position is open within the context of a multidisciplinary project funded by the European Union under a call related to explainable artificial intelligence. The project associates three partners in France (LaTIM, INSERM U1101), Greece (BioemTech and University of Patras) and Poland (University of Warsaw).


Context: Deep neural networks (DNNs) have achieved outstanding performance and broad adoption in computer vision tasks such as classification, denoising, segmentation and image synthesis. However, DNN-based models and algorithms have seen limited adaptation and development within radiomics, which aims to improve the diagnosis or prognosis of cancer.

Traditionally, medical practitioners have relied on expert-derived features such as intensity, shape, and texture. We hypothesize that, despite the potential of DNNs to improve oncological classification performance in radiomics, the lack of interpretability of such models prevents their broad utilization and limits their performance and generalizability.

Therefore, the INFORM consortium proposes to investigate explainable artificial intelligence (XAI) with a dual aim: building high-performance DNN-based classifiers and developing novel interpretability techniques for radiomics. This will involve tackling the interpretability of DNN-based feature engineering and latent variable modeling through innovative developments of saliency maps and related visualization techniques. We propose to build explainable AI models that incorporate both expert-derived and DNN-based features. By quantitatively understanding the interplay between expert-derived and DNN-based features, our models will be readily understood and translated into medical applications. Evaluation will be carried out by clinical collaborators with a focus on various cancer types, for example head and neck cancer with PET/CT imaging.

Location: Laboratory of Medical Information Processing (LaTIM), French Institute of Health and Medical Research (INSERM UMR 1101), Brest, France

Duration: 1 year, renewable for 1 more year, starting in September 2022

Salary: ~2500€ per month

Education: The candidate must hold a PhD in physics, computer science, applied mathematics or equivalent, and have substantial expertise and background in deep learning, machine learning, and image analysis or computer vision, especially activation/saliency maps and medical image processing and analysis, as well as an interest in addressing clinical challenges. Experience in radiomics is a plus but not mandatory.

Skills: Scientific programming, deep learning libraries and implementation

Languages: Good proficiency in English (both written and spoken), French optional

Application: Send your CV, motivation letter and preferably a reference letter, before the 29th of July (at the latest), by e-mail to: