12-month post-doc position at Université Grenoble Alpes - LIG-MRIM
Explainable medical screening from still images, in partnership with an innovative start-up
Supervisors: Jean-Pierre Chevallet (UGA) and Georges Quénot (CNRS)
Contact: Jean-Pierre.Chevallet@imag.fr, Georges.Quenot@imag.fr
Location: Grenoble (France), Laboratoire Informatique de Grenoble
Starting from: beginning of 2021.
MRIM team web site: http://lig-mrim.imag.fr/
LIG web site: http://www.liglab.fr/
Keywords: Medical Image analysis, Deep Learning, Explainable AI, Computer Vision.
The MRIM research team, in collaboration with a start-up, is looking for a post-doc specialized in AI applied to the recognition of objects in images for medical screening. The disease that interests us manifests itself through visible clinical signs. The object recognition system will detect and analyze these clinical signs in "standard" 2D photographs taken with consumer digital cameras (e.g. smartphones), which will be sent to a remote AI platform that performs the image processing for medical screening.
You will work in the MRIM team, which has expertise in the field of AI and image processing for information access. The project addresses a major international health concern and has a strong innovative component that should lead to major publications in the medical domain. Through this project, you will have the opportunity to work on real medical data. This research program aims to demonstrate the feasibility and relevance of our digital approach to large-scale, general-public screening at low cost.
The objective of this post-doc is to develop a system that performs medical screening from images taken with a standard color camera. Given a training collection of images annotated according to the presence or absence of a pathology, and its level of development if present, the system should be trained to make a first prediction on unseen images indicating whether the patient should see a doctor. The goal of the project is not only to produce a system able to perform this pre-diagnosis task but also to provide explanations of how the conclusion was reached. For this, the system should be able to identify the types of elements or attributes it uses to make its decisions, and how it uses them, ideally in terms of visual clues and logical rules over those clues. The scientific aspects will mostly concern the explainability part, while classical deep-learning methods will be used for the decision part. The start-up will provide the training and test images, as well as all the expertise related to the targeted pathology.
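As a rough illustration of the decision-plus-explanation pattern the project targets, the sketch below trains a toy classifier and returns, alongside the prediction, a per-pixel saliency map showing which image regions drove the decision. Everything here is an illustrative assumption (synthetic 8x8 "images", a plain logistic regression, input-gradient attribution), not the project's actual data or method; in practice a deep model (e.g. in PyTorch) and a richer attribution technique would be used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 8x8 grayscale "images" where the clinical sign
# (pure assumption for illustration) is a bright patch in the top-left corner.
def make_image(has_sign):
    img = rng.normal(0.0, 0.1, size=(8, 8))
    if has_sign:
        img[:3, :3] += 1.0
    return img

X = np.array([make_image(i % 2 == 0).ravel() for i in range(200)])
y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(200)])

# Minimal logistic-regression "decision part", trained by gradient descent.
w = np.zeros(64)
b = 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.5 * (X.T @ (p - y)) / len(y)      # gradient step on weights
    b -= 0.5 * np.mean(p - y)                # gradient step on bias

def predict_with_explanation(img):
    """Return (screening score, saliency map) for one image."""
    x = img.ravel()
    score = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    # "Explanation part": for a linear model, the input-gradient
    # attribution of each pixel is simply w * x (a saliency map).
    saliency = (w * x).reshape(8, 8)
    return score, saliency

score, saliency = predict_with_explanation(make_image(True))
```

For a positive example, the score is high and the saliency map concentrates on the top-left patch, i.e. the system can point at the visual clue it used, which is the kind of behavior the explainability work package aims for on real models.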
Profile & Skills:
PhD in Computer Science and/or Applied Mathematics for Computer Science.
Strong knowledge in Machine Learning.
Good programming skills, experience with deep learning frameworks (e.g. PyTorch, TensorFlow).
Image processing and computer vision, in theory and in practice.