Opening of a PhD thesis position on Perception and Deep Understanding of scenes
8 October 2021
Category: PhD student
A 3-year PhD thesis position is open on perception and deep scene understanding for dexterous robotic manipulation within the Imagine team, LIRIS lab at Ecole Centrale de Lyon, under the supervision of Prof. Liming Chen.
We are seeking a highly motivated research fellow in the field of computer vision and machine learning for dexterous robotic manipulation, with an expected starting date of December 1, 2021.
Aristotle noted that the hand is the “tool of tools”, and Anaxagoras held that “man is the most intelligent of the animals because he has hands”. Intelligence has long been understood to go hand in hand with dexterity. As such, dexterous manipulation of objects is a core task in robotics. Because of the complexity of designing robot controllers even for manipulation tasks that are simple for humans, e.g., pouring a cup of tea, robots currently in use are mostly limited to specific tasks within a known environment. While humans learn their dexterity over time and manipulate objects through dynamic hand-eye coordination using visuo-tactile feedback [Johansson & Flanagan 2009], most recent research on robotic manipulation is data-driven and primarily based on visual perception, learning a one-shot manipulation model that does not generalize when objects or environments change [Bohg et al. 2014, Mousavian et al. 2019].
In this research project, we aim to investigate computer vision methods for perception and deep understanding of scenes, to enable AI-empowered, general-purpose, flexible and adaptable robotic systems for dexterous manipulation of objects, so that grasping robots can easily adapt to complex and unknown objects in rapidly changing, dynamic and unpredictable real-world environments. Specifically, given a scene observed by a robot, we aim to develop a general-purpose “all-in-one” computer vision model for various scene understanding tasks, e.g., detecting the objects of interest, segmenting their instances, estimating their pose, tracking them, and predicting one or more grasp locations for effective robotic grasping at a later stage.
Despite the countless potential applications enabled by such a robotic manipulation system, we will focus our attention on three use cases that we have been implementing through four research projects, namely 1) bin-picking, widely required in logistics and industrial assembly lines, 2) waste sorting for better waste recycling and environmental protection, and 3) assistance systems for bedridden patients or elderly people with limited physical ability in their daily object manipulation tasks, e.g., fetching a bottle of water and pouring it into a glass. These use cases are closely related to three ongoing research projects within the group, namely the 3-year LEARN-REAL project, the PSPC FAIR WASTES project and the CHIRON project.
The candidate must:
- be fluent in French or English
- have solid expertise in computer vision and deep learning
- be familiar with Linux, CMake, and GitLab
- have programming skills in Python and C/C++
Prior experience in robot learning using computer vision, deep learning, and data simulation and modelling tools, e.g., Unity 3D, NVIDIA Isaac, Ignition Gazebo, will be appreciated.
The successful candidate will work in direct collaboration with researchers with established expertise in computer vision and machine learning, in partnership with international academic partners (Idiap/EPFL in Switzerland, the Italian Institute of Technology in Italy, Intelligent Autonomous Systems at Darmstadt University of Technology in Germany, and MEIDAI at Nagoya University in Japan). Ecole Centrale de Lyon is one of the top ten engineering schools in France ("Grandes Ecoles"), offering access to excellent graduate and undergraduate students.
Applications should include a detailed curriculum vitae, a brief statement of interest, and two reference letters.
Applications and letters should be sent via electronic mail to:
Prof. Liming Chen ( firstname.lastname@example.org )