Announcement
Opening of an industrial Cifre thesis position on occlusion handling in multiple object tracking
10 July 2023
Category: Doctoral student
Occlusions, i.e., when objects are partially or completely obscured by other objects, remain a significant barrier to high performance in scene understanding tasks. This doctoral research project aims to make multiple object tracking (MOT) models (e.g., for pedestrians and vehicles) robust to occlusions.
Context
The Tracking & Scene Understanding research team at Idemia, in collaboration with Ecole Centrale de Lyon (EC Lyon), has opened a 3-year PhD position (Cifre). The research responsibilities are divided equally between Idemia's Courbevoie site in La Défense and EC Lyon; the selected PhD student will work collaboratively at both institutions. At Idemia, the PhD work is centered on road safety and public security applications, with a specific emphasis on tracking pedestrians and vehicles.
Objective
This doctoral research project aims to make multiple object tracking (MOT) models for pedestrians and vehicles robust to occlusions, i.e., situations where objects are partially or completely obscured by other objects. Occlusions are challenging for two main reasons:
(i) Public dataset annotations typically prioritize visible objects, which are easier for humans to annotate. This annotation bias leads to a scarcity of labeled data covering occluded objects.
(ii) Even when the non-visible parts of objects are fully annotated, models struggle to link hidden elements directly to visual patterns and must rely heavily on contextual cues from the spatio-temporal surroundings, which typically requires significantly more training data. The same phenomenon arises in 3D detection/tracking, which likewise requires looking beyond pixel-based visual patterns.
To address the above-mentioned difficulties, training on very large datasets with non-human supervision (or supervision limited to a few examples) is a promising approach. One option is to exploit the implicit signals present in the spatio-temporal context of many unlabeled videos through self-supervised learning. Another is to use synthetic data generated by simulation engines, which comes with perfect labels (thereby also benefiting the aforementioned 3D tasks). Both approaches offer practically unlimited dataset sizes: the first emphasizes the quality/realism of the data, the second the quality of the labels. By combining the two, the goal is to leverage both large-scale realistic data and high-quality labels, thereby enhancing training and ultimately improving the performance of scene understanding algorithms in difficult and dense scenarios; a minimal sketch of such a combined objective follows.
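To make the combination concrete, here is a minimal, purely illustrative PyTorch sketch, not the project's actual method: every name (training_step, encoder, head, ssl_weight, and the toy shapes) is a hypothetical placeholder, the self-supervised term is a simple temporal-consistency loss standing in for richer contrastive or cycle-consistency objectives, and the supervised term uses a plain classification loss in place of a full detection/tracking loss.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def training_step(encoder, head, real_clip, synth_frames, synth_labels,
                      ssl_weight=0.5):
        # --- Self-supervised branch on unlabeled real video ---
        # real_clip: (B, 2, C, H, W), pairs of temporally adjacent frames.
        f_t = encoder(real_clip[:, 0])
        f_t1 = encoder(real_clip[:, 1])
        # Temporal consistency: adjacent frames should map to nearby
        # points in embedding space.
        ssl_loss = 1.0 - F.cosine_similarity(f_t, f_t1, dim=-1).mean()

        # --- Supervised branch on simulator-generated frames ---
        # synth_labels come from the simulation engine, so they can
        # describe fully occluded objects that a human could not label.
        logits = head(encoder(synth_frames))
        sup_loss = F.cross_entropy(logits, synth_labels)

        return sup_loss + ssl_weight * ssl_loss

    # Toy usage with random tensors and a trivial encoder/head.
    encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
    head = nn.Linear(64, 10)
    real_clip = torch.randn(4, 2, 3, 32, 32)
    synth_frames = torch.randn(4, 3, 32, 32)
    synth_labels = torch.randint(0, 10, (4,))
    loss = training_step(encoder, head, real_clip, synth_frames, synth_labels)
    loss.backward()

The weighting between the two terms (ssl_weight here) is one of the design choices the thesis would investigate; the sketch only shows how unlabeled real video and perfectly labeled synthetic data can contribute to a single training objective.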
Profile
Candidates must have completed, or be in the final stages of defending, their MSc degree. They should have a strong foundation in computer vision (including 3D processing) and in machine learning, specifically deep learning, along with proficient PyTorch coding skills.
About Idemia and Ecole Centrale de Lyon
Formed through the 2017 merger of Oberthur Technologies (OT) and Safran Identity & Security (Morpho), Idemia is a world leader in identity technologies, specializing in biometrics, identification and authentication, and data and video analysis. Ecole Centrale de Lyon is a top-ranked French grande école that has developed excellence in engineering research over its more than 160-year history.
Applying
To apply, candidates are required to email Pierre Perrault at pierre.perrault@idemia.com and Liming Chen at liming.chen@ec-lyon.fr. The email should include the following:
• A cover letter demonstrating their interest and suitability for the thesis topic.
• Their CV.
• A transcript of their MSc grades.
• References or recommendation letters.
Applications will be reviewed on a rolling basis. The anticipated starting date is fall 2023.