Announcement

Operational positioning of a drone using computer vision and machine learning

1 September 2022


Category: Postdoctoral researcher


Type of position: Postdoc

Proposal title: Operational positioning of a drone using computer vision and machine learning

Host laboratory: Connaissance et Intelligence Artificielle Distribuées (CIAD) – University of Technology Belfort-Montbéliard (UTBM) – Montbéliard, France

Candidate profile:

- PhD in computer vision, machine learning, robotics, or a related field

- Advanced programming skills and proficiency with machine learning tools

- Knowledge of the ROS framework

- Advanced level of English (spoken and written)

Start and duration of the contract: October/November 2022 for 12 months

Application: CV, PhD manuscript, publications, reference letters, etc. – Deadline: 30 September 2022

Contact: Yassine RUICHEK (yassine.ruichek@utbm.fr)

 

Proposal description:

The postdoc is part of the TECTONIC project, which aims to develop a proof of concept for drone navigation by terrain correlation using visible-domain imagery, in the absence of GNSS, relying on computer vision and artificial intelligence techniques. The drone's mission will be modeled using a generic approach, so that different fields of application can be addressed. The goal is to propose a relatively low-cost autonomous navigation solution based on image sequences acquired from an on-board camera (without GNSS). The basic principle is to servo the trajectory actually flown by the drone during the mission to the trajectory planned before the mission. To this end, our research focuses on two main areas: the operational positioning of the drone from video images, and the semantic representation of the environment and of the mission itself. This semantic approach will make it possible, among other things, to supervise the progress of the mission and decide in real time how to adapt the navigation trajectory according to information coming from image analysis, sensor data, weather conditions, etc.
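
To make the servoing idea concrete, here is a minimal, purely illustrative Python sketch (hypothetical helper names, not part of the proposal) of the quantity such a loop would drive towards zero: the deviation of the estimated position from the planned path.

import numpy as np

# Illustrative only: signed lateral deviation of the estimated drone position
# from one leg (A -> B) of the planned trajectory, in the same 2D units as
# the waypoints. The navigation loop would feed a correction back to the
# flight controller so that this value stays close to zero.
def cross_track_error(position, waypoint_a, waypoint_b):
    a, b, p = map(np.asarray, (waypoint_a, waypoint_b, position))
    ab, ap = b - a, p - a
    # 2D cross product divided by the leg length gives the signed distance
    return float((ab[0] * ap[1] - ab[1] * ap[0]) / np.linalg.norm(ab))

# Example: the drone is estimated 3 m to the left of the leg (0,0) -> (100,0)
print(cross_track_error((50.0, 3.0), (0.0, 0.0), (100.0, 0.0)))  # 3.0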

The postdoc mainly concerns the phase dedicated to operational positioning through image analysis and artificial intelligence. This phase includes two steps: 1) develop a video scene simulator that generates the sensor view, taking weather conditions into account, and 2) compensate for the lack of reliable GNSS information using the information provided by the camera. Non-GNSS navigation techniques based on images estimate the navigation error by comparing what the sensor measures at a given instant with what it should measure if the vehicle were at the position delivered by the navigation system. Conventional techniques often rely on feature points in images. As part of this project, we will focus on an approach based on deep learning (feature extraction or end-to-end). Several strategies can be considered, both at the level of the network architecture and at the computational level (learning paradigm, integration of optical flow in the loss function, etc.).
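
As an illustration of the conventional feature-point baseline mentioned above (not the method to be developed, and assuming OpenCV with a known camera intrinsic matrix K), a minimal sketch of estimating the relative camera motion between two consecutive frames could look like this:

import numpy as np
import cv2

def estimate_relative_motion(img_prev, img_curr, K):
    """Relative rotation R and unit-scale translation t between two grayscale frames."""
    orb = cv2.ORB_create(2000)                        # classic feature detector/descriptor
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    if des1 is None or des2 is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    if len(matches) < 8:
        return None

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC to reject outlier matches
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    if E is None or E.shape != (3, 3):
        return None
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t   # the translation scale is unobservable from images alone

Chaining such relative motions gives an estimate of the flown trajectory that can be compared with the planned one; the deep-learning approach envisaged in the project (learned features or end-to-end pose regression) would replace or complement this hand-crafted pipeline.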

References:

G. Balamurugan, J. Valarmathi and V. P. S. Naidu. Survey on UAV navigation in GPS denied environments. 2016 International Conference on Signal Processing, Communication, Power and Embedded System (SCOPES), 2016, pp. 198-204. doi: 10.1109/SCOPES.2016.7955787.

C. Chen, Y. Tian, L. Lin, S. Chen, H. Li, Y. Wang and K. Su. Obtaining World Coordinate Information of UAV in GNSS Denied Environments. Sensors, 20(8):2241, 2020. doi: 10.3390/s20082241.

R. Gayathri and V. Uma. Ontology based knowledge representation technique, domain modeling languages and planners for robotic path planning: A survey. ICT Express, 4(2):69-74, 2018.

X. Li, S. Bilbao, T. Martín-Wanton, J. Bastos and J. Rodriguez. SWARMs Ontology: A Common Information Model for the Cooperation of Underwater Robots. Sensors, 17(3):569, 2017. doi: 10.3390/s17030569.