The candidate will have the opportunity to work on realistic problems involving leading European research centers and companies, with the aim of creating a novel Augmented Reality platform capable of mapping the 3D scene structure of large-scale environments for use on a smartphone. The position is particularly oriented towards advancing the state of the art in large-scale localisation and mapping by exploiting learned 3D representations to improve mapping.
The research topic focuses on 3D scene mapping and builds on the Robot Vision group's recent achievements in dense localisation and mapping, as well as semantic scene mapping using deep learning approaches with multi-view geometric and photometric reasoning on street-view imagery. These approaches will leverage massive worldwide geolocalised data provided by the project partners, with the ultimate goal of deploying a mobile app that understands and localises each semantic element in a generic scene.
The candidate will be expected to write papers and extend the state of the art on the topics of visual learning and 3D mapping.
- Experience programming for Android mobile phones and ARCore: Java, Kotlin, Unity;
- Proficiency with programming languages, in particular, Python, C/C++, and MATLAB;
- Experience with SLAM and multi-view geometry approaches for sparse and dense 3D reconstruction;
- Publications in major Computer Vision conferences/journals (CVPR, ICCV, ECCV, TPAMI, IJCV, etc.);
- Good communication skills and ability to cooperate;
- Proficient in English language (written and oral).
- Knowledge of OpenCV, PCL, OpenGL and Open3D libraries;
- Practical experience with the Robot Operating System (ROS);
- Knowledge of and experience with Deep Learning algorithms and relevant platforms (e.g. TensorFlow, PyTorch);
- Experience programming with the ARCore and ARKit augmented reality frameworks.
This project aims to develop advanced (deep) machine learning techniques for 3D mapping in the context of cultural heritage. The work will be carried out within the EU-funded H2020 project MEMEX: MEMories and EXperiences for inclusive digital storytelling.
The Robot Vision research group at the CNRS-I3S laboratory of Université Côte d'Azur invites qualified applicants to submit their CVs for a Postdoc position in Sophia Antipolis under the supervision of Dr. Andrew Comport.
The scientific goals of our research group lie in the fields of Computer Vision, Signal Processing and Machine Learning, with the primary goal of providing algorithms for:
- Reality Capture: Real-time localisation, 3D reconstruction and mapping from image sequences;
- Scene understanding from video, depth cameras, stereo, omnidirectional and various other sensor inputs;
- Robot learning: Dense 3D mapping, semantic segmentation, visual SLAM, large-scale scenes, dynamic environments, loop-closure;
- Applications to Robotics and Augmented Reality.
This open position is financed in the context of an interdisciplinary European project to provide Augmented Reality (AR) systems with an impact on European society at large.
The I3S laboratory is the largest information and communication science laboratory on the French Riviera and has been part of the Sophia Antipolis science and technology park since its creation. The laboratory comprises nearly 300 people: almost 100 professors and associate professors, 20 CNRS and 13 INRIA researchers, and about twenty support staff. Around 90 PhD students, 10 postdoctoral researchers and 60 trainees from Master's programmes or the engineering school are also members of the laboratory. Affiliated with the CNRS INS2I Institute, I3S's research fields cover many themes of CNU sections 27 (“Computer Science”) and 61 (“Computer Engineering, Automation and Signal Processing”).
The CNRS-I3S/UCA is an Equal Opportunity Employer that actively seeks diversity in the workforce.
Please note that the data you provide will be used exclusively for the evaluation and selection of professional profiles, and in order to meet the requirements of the CNRS-I3S/UCA.