CIFRE thesis "In-details scene understanding for realistic data generation" between Huawei and the IBISC laboratory of the University of Evry
12 September 2022
Category: PhD student
Huawei and the IBISC laboratory of the University of Evry propose a PhD thesis on dense reconstruction using implicit functions and multi-modal data for novel view synthesis, applied to autonomous driving scenarios.
The core of this research proposal is to push the boundaries of existing dense 3D reconstruction and NeRF performance by leveraging information from multimodal sensors (cameras and LiDAR) to extract deeper information about the objects and materials present in a scene.
In-details scene understanding for realistic data generation
This PhD is a CIFRE fellowship between Huawei and the IBISC laboratory of the University of Evry Paris-Saclay.
Huawei is working on key components of L2-L3 autonomous driving platforms and is progressively shifting its focus to the development of breakthrough technologies required for L4-L5 autonomy. Tomorrow's self-driving cars, powered by AI, will combine edge and cloud compute with a vast number of sensors to safely and autonomously drive customers and deliver merchandise. At Huawei we develop realistic simulators created from crowd-sourced data to continuously improve the localization, perception and prediction algorithms of autonomous vehicles. We are seeking the best candidates for this CIFRE PhD, with a background in computer vision, deep learning, simulation, computer graphics, mapping, perception, sensor fusion, cognition or other related areas, to work as part of the IoV team in the Paris Research Center (PRC). As a member of the IoV team at PRC you will work closely with multiple teams worldwide to grow your expertise and successfully transfer your research results into real products.
The IBISC laboratory (Informatique, BioInformatique, Systèmes Complexes) is a research unit of the University of Evry Paris-Saclay. It is composed of four teams: AROBAS, COSMO, IRA2, and SIAM. Their scientific activities are divided into two axes, ICT & SMART SYSTEM and ICT & LIFE, focused respectively on drones & vehicles and on precision and personalized medicine. The SIAM team focuses on the development of visual perception systems and algorithms, and on control theory for autonomous vehicles.
The core subject of this PhD is to improve a crowd-sourced simulator for realistic data generation. High-quality simulated data are critical for improving various machine learning algorithms, in particular for handling edge cases or rare scenarios that are absent from training datasets gathered from labelled real data. Photorealistic data rendered from crowd-sourced recordings have proved to be even more effective than "hand-crafted" simulated data for localization and perception [4,5] tasks. In order to improve self-driving algorithms, we need to develop a crowd-sourced engine able to simulate urban outdoor scenes under various weather, lighting, and other conditions. Some seminal works on photorealistic rendering have already tackled the modelling of varying conditions from outdoor crowd-sourced data, but the existing approaches do not provide full control over the scene appearance at rendering time. Recent works have focused on view-dependent effects on highly specular objects as well as on modelling global illumination parameters from multimodal inputs. In this PhD research topic, we propose to integrate multi-modal perception from various sensors to obtain a scene representation that is as complete as possible, for fully controllable photorealistic data generation. This includes the usual semantic and geometric scene understanding, as well as finer perception of the objects in the scene, such as material properties, light source localization, light intensity classification, etc. By integrating various observations of a scene into the optimization of the implicit scene representation, our goal is to re-generate photorealistic data under different outdoor (weather, lighting) conditions than the ones present during acquisition.
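As a minimal illustration of the kind of conditionable implicit scene representation discussed above, the sketch below adds a learned per-capture appearance embedding (in the spirit of NeRF in the Wild) to a tiny NeRF-style MLP: density depends only on position, so geometry is shared across captures, while emitted colour additionally depends on a condition code (e.g. weather or lighting) that can be swapped at rendering time. All class names, parameter names, and dimensions here are illustrative assumptions, not taken from any cited implementation.

```python
import torch
import torch.nn as nn

def positional_encoding(x, n_freqs=6):
    # Map 3D points to sin/cos features at increasing frequencies,
    # as in NeRF, so a small MLP can represent high-frequency detail.
    feats = [x]
    for i in range(n_freqs):
        for fn in (torch.sin, torch.cos):
            feats.append(fn((2.0 ** i) * x))
    return torch.cat(feats, dim=-1)

class ConditionedRadianceField(nn.Module):
    """Tiny NeRF-style MLP whose colour output is conditioned on a
    per-capture appearance embedding (hypothetical sketch)."""
    def __init__(self, n_conditions=4, cond_dim=8, hidden=64, n_freqs=6):
        super().__init__()
        in_dim = 3 * (1 + 2 * n_freqs)
        self.n_freqs = n_freqs
        self.appearance = nn.Embedding(n_conditions, cond_dim)
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)      # geometry: condition-free
        self.rgb_head = nn.Sequential(              # appearance: condition-aware
            nn.Linear(hidden + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, cond_id):
        h = self.trunk(positional_encoding(xyz, self.n_freqs))
        sigma = torch.relu(self.sigma_head(h))          # density >= 0
        z = self.appearance(cond_id)                    # (B, cond_dim)
        rgb = self.rgb_head(torch.cat([h, z], dim=-1))  # colour in [0, 1]
        return rgb, sigma
```

Querying the same 3D points with two different condition ids yields identical densities but different colours, which is exactly the property needed to re-render an acquired scene under new weather or lighting.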
The new algorithms developed for in-depth and detailed urban scene understanding will be required to be as scalable as possible, both in terms of the spatial extent of the scene to be described and in terms of the computational resources needed to analyze the scene and to generate the artificial data.
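One common way to address the spatial-scalability requirement is to partition a drive into overlapping spatial blocks and train one compact model per block (in the spirit of Block-NeRF), blending neighbouring renderings where footprints overlap. The helper below is only an illustrative sketch of such a routing step; the function name and the block/overlap sizes are assumptions, not part of the proposal.

```python
import numpy as np

def assign_blocks(xy, block_size=100.0, overlap=10.0):
    """Route 2D positions (e.g. vehicle poses, in metres) to overlapping
    square blocks. A sample near a boundary falls inside several blocks
    and thus trains several models, keeping neighbours consistent."""
    assignments = []
    for p in np.asarray(xy, dtype=float):
        base = np.floor(p / block_size).astype(int)
        blocks = set()
        # Only the 3x3 neighbourhood can contain p when overlap < block_size.
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                b = base + np.array([dx, dy])
                lo = b * block_size - overlap
                hi = (b + 1) * block_size + overlap
                if np.all(p >= lo) and np.all(p < hi):
                    blocks.add(tuple(int(v) for v in b))
        assignments.append(blocks)
    return assignments
```

For example, with 100 m blocks and a 10 m overlap, a pose in the middle of a block maps to a single model, while a pose within 10 m of a block edge maps to the two adjacent models.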
[1] Yun Chen et al. GeoSim: Realistic Video Simulation via Geometry-Aware Composition for Self-Driving
[2] Ben Mildenhall et al. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
[3] Arthur Moreau et al. LENS: Localization Enhanced by NeRF Synthesis
[4] Zuria Bauer et al. NVS-MonoDepth: Improving Monocular Depth Prediction with Novel View Synthesis
[5] Shuaifeng Zhi et al. In-Place Scene Labelling and Understanding with Implicit Scene Representation
[6] Ricardo Martin-Brualla et al. NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections
[7] Wang et al. Neural Light Field Estimation for Street Scenes with Differentiable Virtual Object Insertion
[8] Verbin et al. Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields
[9] Tancik et al. Block-NeRF: Scalable Large Scene Neural View Synthesis
[10] Müller et al. Instant Neural Graphics Primitives with a Multiresolution Hash Encoding
[11] Griffiths et al. OutCast: Outdoor Single-image Relighting with Cast Shadows
Description of research activities:
- Study the state of the art on neural rendering, multi-modal 3D reconstruction, and lighting and shading estimation
- Identify bottlenecks in condition-aware dense reconstruction and promising research directions toward editable scene generation using implicit representations
- Propose new solutions for multi-modal dense scene reconstruction
- Research and develop algorithms based on the proposed solutions
- Apply the proposed algorithms to the domain of self-driving cars using existing or specifically collected datasets
- Publish research results in top conferences and participate in scientific seminars
This PhD will be jointly supervised by Huawei Technologies France and the IBISC laboratory of the University of Evry (Paris area).
The candidate should be motivated to carry out world-class research and should hold a Master's degree in Computer Science, with a focus on Vision, Graphics and/or Robotics. He/She should have solid skills in the following domains:
- Implement code in Python, C++ (CUDA is a plus)
- Apply or use existing libraries for deep learning in project-related tasks (PyTorch is a plus)
- Good knowledge in Computer Vision, Computer Graphics, 3D reconstruction and robotics
- Good knowledge in Git, ROS, OpenCV, Boost, multi-threading, CMake, Make and Linux systems
- Code and algorithm documentation
- Project reporting and planning
- Writing of scientific publications and participation in conferences
- Fluency in spoken and written English; French and/or Chinese is a plus
- Intercultural and coordination skills, hands-on and can-do attitude
- Interpersonal skills, team spirit and independent working style
Désiré Sidibé (IBISC, Université Evry) – email@example.com
Nathan Piasco (Huawei) – firstname.lastname@example.org
Application deadline: October 31st, 2022
CV + motivation letter + transcript of records for the academic years 2020-2021 and 2021-2022 + any other relevant documents.