Thesis title: “Smart approach of acquisition, generation and processing for improved HDR image quality”
Keywords: Real-time imaging, Embedded Vision, High Dynamic Range, GPU, Deep Learning
General context of the PhD thesis:
The research proposed in this thesis follows work carried out locally on the design of High Dynamic Range (HDR) vision systems, in particular the European project HIDRALON (2009-2012) and the FUI project Plein-Phare (2014-2018). As part of the HIDRALON project, our team designed and validated an HDR camera prototype capable of generating real-time HDR content from 3 successively acquired images. As part of the Plein-Phare project, this prototype was improved by inserting into the processing pipeline a set of algorithms for removing ghost artifacts caused by moving objects in the scene.
Main objectives of the PhD thesis:
First, we will focus on intelligent management of the successive acquisitions so as to capture and restore the full dynamic range of the scene. For this purpose, we plan to develop an artificial intelligence capable of estimating the adequate number of images required to build the HDR scene, together with their respective exposure times, according to the actual dynamic range of the scene (the higher the dynamic range, the larger the number of images must be) and to the motion in the scene (the greater the motion, the smaller the number of images must be, in order to avoid artifacts). The prototypes designed in the previous HIDRALON and Plein-Phare projects systematically use 2 or 3 images and cannot adapt automatically to the scene.
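To make the trade-off concrete, the rule stated above (more stops of scene dynamic range call for more exposures, while stronger motion caps the bracket size) can be sketched as a simple heuristic. This is a minimal illustrative sketch, not the intelligence to be developed in the thesis; the function name, the 10-stop single-frame assumption, and all thresholds are hypothetical.

```python
def choose_num_exposures(dynamic_range_stops, motion_level, n_min=2, n_max=5):
    """Illustrative heuristic for adaptive exposure bracketing.

    dynamic_range_stops: estimated scene dynamic range, in stops (int).
    motion_level: estimated inter-frame motion, normalized to [0, 1].
    Returns a bracket size between n_min and n_max.
    """
    # Assume a single frame covers ~10 stops; add one exposure
    # for every ~3 extra stops of scene dynamic range.
    needed = n_min + max(0, dynamic_range_stops - 10) // 3
    # Strong motion shrinks the allowed bracket to limit ghosting.
    motion_cap = n_max - int(motion_level * (n_max - n_min))
    return int(max(n_min, min(needed, motion_cap, n_max)))
```

A learned estimator would replace the hand-tuned thresholds, but the input/output contract (scene statistics in, bracket size and exposure times out) would stay the same.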
Secondly, we will improve the management of moving objects, which cause artifacts and thus limit the quality of the HDR content. In the Plein-Phare project, we conducted a thorough analysis of the state of the art and designed a first real-time FPGA implementation. However, for pragmatic reasons, we focused only on the simpler techniques compatible with our FPGA platform. Today, it is necessary to consider more complex algorithms based on conventional image processing or deep learning, even if their complexity makes them a priori incompatible with the constraints of a real-time HDR camera. Our effort will therefore aim to adopt a systematic Algorithm-Architecture Adequacy methodology, combining optimal image acquisition (in particular through the implementation of a multi-sensor camera), algorithmic simplifications/optimizations, and massive exploitation of the parallelism of embedded GPUs (e.g., Nvidia Jetson or Xavier).
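As background for the weight-based merging that the ghost-removal work builds on (cf. the Bouderbane et al. reference below), the classic radiance-domain weighted average can be sketched as follows. This is a minimal sketch assuming a linear camera response and 8-bit pixel values; the function name and the triangle weighting are illustrative, not the project's actual pipeline.

```python
def merge_hdr(exposures, times):
    """Merge bracketed LDR frames into an HDR radiance estimate.

    exposures: list of frames, each a flat list of pixel values in [0, 255].
    times: exposure time of each frame (same order).
    Assumes a linear camera response for simplicity.
    """
    n_pix = len(exposures[0])
    hdr = []
    for i in range(n_pix):
        num = den = 0.0
        for frame, t in zip(exposures, times):
            z = frame[i]
            # Triangle ("hat") weight: trusts mid-range pixels, discards
            # values near the noise floor (0) or saturation (255).
            w = min(z, 255 - z)
            num += w * (z / t)  # radiance estimate from this frame
            den += w
        # Fall back to the longest exposure if every weight is zero.
        hdr.append(num / den if den > 0 else exposures[-1][i] / times[-1])
    return hdr
```

In a real-time GPU implementation, the per-pixel loop becomes a trivially parallel kernel; the ghost-removal algorithms targeted in this thesis operate by adapting these weights where motion is detected.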
Thirdly, we will exploit the HDR stream produced by our camera by embedding specific image processing that takes advantage of the extended dynamic range of the HDR video. Our goal here is to develop two proof-of-concept applications using an HDR camera: the first based on a fixed camera (typical of video surveillance) and the second based on a mobile camera (typical of an autonomous vehicle or mobile robot).
Environment of the PhD thesis
The work will be carried out in the CORES team (COmputer vision for REal time Systems) of the Imaging and Artificial Vision Laboratory (ImViA EA 7535) of the University of Burgundy in Dijon. The research axes of this team revolve around the design of architectures for specific vision and imaging systems, as well as the development of the associated image processing methods, in order to meet the constraints of the targeted application and to analyze/exploit the data produced. The main idea is the possibility of observing, analyzing and characterizing an object, a human being, a scene or an environment using vision systems, generally in real time.
Skills in image processing, GPU programming, deep learning and real-time implementation on hardware systems will also be appreciated, but are considered optional. The proposed thesis is intended for curious, inventive and dynamic applicants with a strong scientific background and good communication skills in a collaborative and multidisciplinary environment.
Doctoral school:
SPIM ‐ Sciences Physiques pour l'Ingénieur et Microtechniques n°37, http://ed‐spim.univ‐fcomte.fr/pages/fr/index.html
UBFC, Université Bourgogne Franche-Comté, Dijon, France
ImVia (Imagerie et Vision Artificielle) laboratory, Team CORES (COmputer vision for REal time Systems), Dijon, https://imvia.u-bourgogne.fr
Barthélémy Heyrman – Associate Professor - CORES team – email@example.com
Dominique Ginhac – Full Professor - CORES team – firstname.lastname@example.org
3-year doctoral contract. The PhD thesis may start in October 2020 (the date may be postponed to November or December depending on the evolution of the COVID-19 pandemic).
Doctoral contract funded by the Ministry of Higher Education, Research and Innovation. Amount: ~€1900/month before tax.
Additional activities such as teaching will be possible (to be discussed).
Please send your application documents (all in one PDF file) to Barthélémy Heyrman (email@example.com), with a copy to Dominique Ginhac (firstname.lastname@example.org), including a detailed CV, a motivation letter dedicated to the proposed position, the marks and ranks obtained during your master's degree or engineering school, and at least one contact person (typically your internship or master's thesis supervisor, or the head of your master's programme).
References:
- Q. Yan, D. Gong, Q. Shi, A. van den Hengel, C. Shen, I. Reid, Y. Zhang, "Attention-guided Network for Ghost-free High Dynamic Range Imaging", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
- S. Wu, J. Xu, Y.-W. Tai, C.-K. Tang, "Deep High Dynamic Range Imaging with Large Foreground Motions", European Conference on Computer Vision (ECCV), September 2018.
- M. Bouderbane, P.-J. Lapray, J. Dubois, B. Heyrman, D. Ginhac, "Real-time ghost free HDR video stream generation using weight adaptation based method", Proceedings of the 10th International Conference on Distributed Smart Cameras, 2016.
- P.-J. Lapray, B. Heyrman, D. Ginhac, "HDR-ARtiSt: an adaptive real-time smart camera for high dynamic range imaging", Journal of Real-Time Image Processing 12(4), 747-762, 2016.
(c) GdR 720 ISIS - CNRS - 2011-2020.