Announcement

Modelling Rigid and Deformable Objects for Robotic Manipulation

15 April 2022


Category: PhD student



1. Context and main aims

Dexterous manipulation of objects is a core task in robotics. Because of the complexity of designing robot controllers even for manipulation tasks that are simple for humans, e.g., pouring a cup of tea, robots currently in use are mostly limited to specific tasks within a known environment. While humans learn their dexterity over time and manipulate objects through dynamic hand-eye coordination using visuo-tactile feedback [Johansson & Flanagan 2009], most recent research on robotic manipulation is data-driven and primarily based on visual perception, learning one-shot manipulation models that cannot generalize when objects or environments change [Bohg et al. 2014, Mousavian et al. 2019]. In this research project, we aim to investigate computer vision methods for perception and deep understanding of the scene, enabling AI-empowered, general-purpose, flexible and adaptable robotic systems for dexterous object manipulation, so that grasping robots can easily adapt to complex and unknown objects in rapidly changing, dynamic and unpredictable real-world environments.

Specifically, in this research project, we aim to develop computer vision models for deep understanding of the scene, covering the following objectives:

  • Rigid object detection, segmentation and tracking for grasp location predictions
  • Deformable object modelling
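As a toy illustration of the first objective (a simplified sketch of our own, not the project's actual pipeline), a grasp location can be derived from a predicted segmentation mask, e.g., by taking the mask centroid as the grasp centre and orienting a parallel-jaw gripper perpendicular to the object's principal axis:

```python
import numpy as np

def predict_grasp(mask: np.ndarray):
    """Predict a simple top-down grasp pose from a binary object mask.

    A deliberately minimal illustration (not the project's method): the
    grasp centre is the mask centroid, and the gripper is oriented
    perpendicular to the object's principal axis, estimated by PCA over
    the mask pixels.
    """
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    centre = pts.mean(axis=0)
    # Principal axis via eigen-decomposition of the 2x2 covariance matrix.
    cov = np.cov((pts - centre).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]   # long axis of the object
    angle = np.arctan2(major[1], major[0])   # orientation in radians
    # Grasp perpendicular to the major axis (parallel-jaw gripper).
    return centre, angle + np.pi / 2

# Example: an elongated horizontal bar in a 20x40 image.
mask = np.zeros((20, 40), dtype=np.uint8)
mask[8:12, 5:35] = 1
centre, grasp_angle = predict_grasp(mask)
```

In a full system, the mask would of course come from a learned detection and segmentation model, and the 2D pose would be lifted to 6-DoF using depth.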

Despite the countless potential applications enabled by such a robotic manipulation system, we will focus our attention on three use cases that we have been implementing through our research projects: 1) bin picking, widely required in logistics and on industrial assembly lines; 2) waste sorting, for better waste recycling and environmental protection; and 3) assistance systems helping bedridden patients or elderly people with limited physical ability in their daily object-manipulation tasks, e.g., fetching a bottle of water and pouring it into a glass. These use cases are closely related to three ongoing research projects within the group, namely the 3-year LEARN-REAL project, the PSPC FAIR WASTES project and the CHIRON project.

2. Hypothesis and approach

Rigid objects can be made of very different materials and have unknown shapes and physical properties; occlusions can also occur. Because the targeted manipulation robot is intended for general-purpose use in unstructured environments, we aim to develop an "all-in-one" computer vision model that generalizes well and adapts quickly to variations within a given environment, e.g., lighting, object texture, shape and pose, and to changes of environment, e.g., background, without requiring large amounts of labeled data, which are not available. A starting point could be our recently proposed deep encoder with multicameral decoders [Grard et al., IJCV 2020].

As for deformable objects, the major challenge for a robot involved in object manipulation is to understand how an object deforms when it is subjected to an external force. So far, most manipulation applications have focused on rigid objects, which do not require deformation modelling. To manipulate deformable objects, however, it is essential to model their deformations, which is the goal of this project.
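To make the problem concrete, here is a minimal numerical example of deformation under an external force (a textbook stand-in, not the project's method): a 1-D chain of linear springs fixed at one end, whose static deflection under a tip force is obtained by solving the linear system K u = f.

```python
import numpy as np

def chain_deflection(n_nodes: int, k: float, f_tip: float) -> np.ndarray:
    """Static deflection of a 1-D chain of springs fixed at one end.

    A deliberately minimal deformation model (illustrative only):
    n_nodes free nodes joined by springs of stiffness k, with a force
    f_tip pulling on the last node. Solves K u = f for the displacements.
    """
    # Assemble the standard tridiagonal stiffness matrix of the chain.
    K = np.zeros((n_nodes, n_nodes))
    for i in range(n_nodes):
        K[i, i] = 2 * k if i < n_nodes - 1 else k
        if i > 0:
            K[i, i - 1] = K[i - 1, i] = -k
    f = np.zeros(n_nodes)
    f[-1] = f_tip  # external force applied at the free tip
    return np.linalg.solve(K, f)

# Four nodes, stiffness 100 N/m, 10 N pulling on the tip:
# every spring carries the full force, so node i moves (i+1) * f/k.
u = chain_deflection(n_nodes=4, k=100.0, f_tip=10.0)
```

Real deformable objects call for far richer models (surfaces, volumes, nonlinear materials), which is precisely where the local geometric approach below comes in.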

Depending on their deformations, objects can be categorised as rigid or deformable, elastic or inelastic, volumetric or thin-shell. In our previous work [Parashar et al., CVPR 2017], we showed that deformations can be modelled with high accuracy using the local geometric properties of the objects under consideration. Such modelling has proven fast and accurate, and therefore effective for the 3D reconstruction of various deformable objects, including elastic [Parashar et al., CVPR 2020] and volumetric [Parashar et al., ICCV 2015] objects, from monocular images.
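As one concrete example of such a local geometric constraint (a standard differential-geometric fact, stated here for illustration rather than as the project's exact formulation): an isometric deformation preserves the first fundamental form of the surface, i.e., lengths and angles measured on the surface do not change.

```latex
% Let \mathbf{x}(u,v) parametrise the template surface and
% \mathbf{y} = \varphi \circ \mathbf{x} its deformed counterpart.
% Isometry of the deformation \varphi means the coefficients of the
% first fundamental form are preserved pointwise:
\begin{align}
  E &= \mathbf{x}_u \cdot \mathbf{x}_u, &
  F &= \mathbf{x}_u \cdot \mathbf{x}_v, &
  G &= \mathbf{x}_v \cdot \mathbf{x}_v, \\
  \mathbf{y}_u \cdot \mathbf{y}_u &= E, &
  \mathbf{y}_u \cdot \mathbf{y}_v &= F, &
  \mathbf{y}_v \cdot \mathbf{y}_v &= G.
\end{align}
```

Constraints of this kind hold at every surface point, which is what makes purely local reconstruction of deformable shape from monocular images tractable.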

In this project, we will extend the use of local geometric properties to the robotic context. We will consider common real-life objects available in the YCB dataset. Given a robot equipped with multiple imaging and depth sensors, we will use the local geometric properties of deformation to predict robot-object interactions.

3. Advisors

The project will be jointly supervised by Shaifali Parashar (shaifali.parashar@liris.cnrs.fr) and Prof. Liming Chen (liming.chen@ec-lyon.fr).

4. Application

Interested students with excellent academic records should send an email with their CV and transcripts.

Requirements:

  1. Strong background in computer vision, machine learning and mathematics
  2. Strong programming skills in C++ and python
  3. Fluency in English
  4. Two reference letters

Project duration: 36 months