

Announcement

25 septembre 2019

Camera / radar data fusion by deep learning for 3D reconstruction of large outdoor environments


Category: Postdoctoral position


Automatic matching, by deep learning, of data from a calibrated rig consisting of a camera and a panoramic MMW radar, for 3D reconstruction of large outdoor scenes. CNNs are used to handle environmental variability by discarding information irrelevant for mapping (artefacts, pedestrians, etc.), and to handle occlusions.

 

Accurate environmental perception is essential for ADAS and L4/L5 autonomous vehicles. Whether for mapping or safety, perception must work under all conditions (rain, snow, darkness, fog/mist, dust, etc.).

Camera/radar fusion is one way to achieve this goal. In ComSee, we developed an RGB-D sensor combining a millimetre-wave radar and a wide-angle camera [1, 2, 3]. Unlike most work on multi-modal data fusion, we adopted a "sensor" approach: building a device that directly outputs a 3D + RGB reconstruction from a single acquisition. After modelling the particular geometry of this system to allow data alignment, we proposed calibration algorithms and hardware.
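To fix ideas, the data-alignment step can be illustrated with a generic sketch: once extrinsics (R, t) between the radar and camera frames are calibrated, radar returns can be projected into the image. This is only a minimal pinhole-camera illustration; the actual system in [1, 2, 3] uses a panoramic radar and a wide-angle camera whose projection geometry differs, and the function and parameter names here are illustrative.

```python
import numpy as np

def project_radar_to_image(points_radar, K, R, t):
    """Project Nx3 radar points (radar frame) into pixel coordinates.

    K is a 3x3 pinhole intrinsic matrix; (R, t) map the radar frame
    into the camera frame. Returns pixel coordinates for the points
    lying in front of the camera, plus the corresponding mask.
    """
    pts_cam = points_radar @ R.T + t      # radar frame -> camera frame
    in_front = pts_cam[:, 2] > 0          # keep points ahead of the camera
    pts_cam = pts_cam[in_front]
    uvw = pts_cam @ K.T                   # apply intrinsics
    uv = uvw[:, :2] / uvw[:, 2:3]         # perspective division
    return uv, in_front

# Demo with identity extrinsics and an illustrative intrinsic matrix.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.zeros(3)
pts = np.array([[0.0, 0.0, 5.0],          # on the optical axis
                [1.0, 0.0, 5.0]])         # 1 m to the side, 5 m ahead
uv, mask = project_radar_to_image(pts, K, R, t)
# the on-axis point lands at the principal point (320, 240)
```

In the real rig, this projection is what makes the camera/radar data association of the next paragraph possible at all: without accurate calibration, radar returns cannot be matched to image content.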

3D reconstruction hinges on a delicate task: data association. Our encouraging results also showed how difficult this task is in real conditions.

It is this task that we intend to improve in this postdoc. Recent advances in the processing of heterogeneous data by AI, particularly with CNNs, should drastically improve matching and reconstruction results.

Initially, the data from each sensor will be processed separately by detection and semantic classification. In the longer term, we plan to implement an end-to-end method that takes the raw data from the two sensors as input and outputs a 3D panorama.

The contribution of CNNs will make it possible to manage environmental variability by eliminating information not relevant for mapping (artefacts, pedestrians, etc.) and by detecting moving objects. The fusion will also make it possible to handle occlusions effectively.
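The filtering idea above can be sketched as a late-fusion step: per-pixel semantic labels from the camera branch are used to discard 3D points falling on classes irrelevant for mapping. The class ids, names, and function below are illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np

# Illustrative class ids (any real segmentation network defines its own).
PEDESTRIAN, VEHICLE, BUILDING = 0, 1, 2
DYNAMIC_CLASSES = [PEDESTRIAN, VEHICLE]   # irrelevant for a static map

def filter_points_by_semantics(points_3d, pixels, seg_map):
    """Keep only 3D points whose image projection lands on a static class.

    points_3d : (N, 3) fused 3D points
    pixels    : (N, 2) their (u, v) projections in the image
    seg_map   : (H, W) per-pixel semantic labels from the camera branch
    """
    u = pixels[:, 0].round().astype(int)
    v = pixels[:, 1].round().astype(int)
    labels = seg_map[v, u]                          # label under each point
    keep = ~np.isin(labels, DYNAMIC_CLASSES)        # drop dynamic classes
    return points_3d[keep]

# Tiny demo: a 4x4 label map with one pedestrian pixel.
seg = np.full((4, 4), BUILDING)
seg[1, 1] = PEDESTRIAN
pts = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 6.0]])
pix = np.array([[1.0, 1.0], [2.0, 2.0]])            # projections of the points
kept = filter_points_by_semantics(pts, pix, seg)
# only the point on a building pixel survives
```

This per-class masking is the simplest instance of the idea; the end-to-end method mentioned above would learn such filtering implicitly rather than applying it as a post-processing rule.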

The candidate must hold a PhD in computer vision and have a background in machine learning.

 

References:

[1] G. Elnatour, O. Ait-Aider, R. Rouveure et al.: Radar and vision sensors calibration for outdoor 3D reconstruction. ICRA 2015.

[2] G. Elnatour, O. Ait-Aider et al.: Toward 3D Reconstruction of Outdoor Scenes Using an MMW Radar and a Monocular Vision Sensor. Sensors, 2015.

[3] G. Elnatour, O. Ait-Aider, R. Rouveure et al.: Range and Vision Sensors Fusion for Outdoor 3D Reconstruction. VISAPP 2015.

 

 

Laboratory:

Institut Pascal (UMR CNRS 6602), team ComSee ( https://comsee.ispr-ip.fr/ )

 

Contact:

Please send your CV, references, and publications by email to:

 

Dr. Omar AIT-AIDER

Associate Professor at Institut Pascal/Univ. Clermont Auvergne

Phone: 04 73 40 55 67

Email: omar.ait-aider@uca.fr

