



23 March 2020

PhD in computer vision: Deep learning for archive video colorization

Category: PhD student

PhD position: Deep learning for archive video colorization

Location: LaBRI (UMR CNRS 5800), Université de Bordeaux


Supervisors:
  • Aurélie Bugeau (aurelie.bugeau@labri.fr)
  • Michaël Clément (michael.clement@labri.fr)
  • Vinh-Thong Ta (vinh-thong.ta@labri.fr)

Funding: ANR PostProdLEAP (2019-2023) “rethinking archive Post-Production with LEarning, vAriational, and Patch-based methods”

Starting date: September/October 2020

Candidate profile: The candidate (Master 2 or engineering degree) must have a good knowledge of image processing and computer vision, as well as skills in machine learning and deep learning and strong programming skills (C/C++, Python, PyTorch/TensorFlow). Writing skills and a good level of English are also important. To apply, please send an email to the supervisors with a CV, a cover letter, a copy of transcripts, and recommendation letter(s) or the contact details of referee(s) who can attest to your skills and motivation.



Archive video colorization aims at adding color to grayscale videos to make them more attractive and more realistic. The tools available to professionals enable artists to reach a high level of video quality but require long human intervention. For example, colorizing the 1000 shots of archive footage in Raoul Peck's film “I Am Not Your Negro” (César for Best Documentary 2018, BAFTA for Best Documentary 2018, Oscar nominee in 2017) took the whole crew of Composite Films (15 people) six months of work. Recent state-of-the-art methods, in particular deep learning methods [1,2], achieve very impressive results on many images. Nevertheless, artefacts often appear when processing low-light footage, old photographs, or images with many details and without strong contours.
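Deep colorization methods such as [1] are typically posed as predicting only the missing chrominance channels from the archive luminance, which is kept as-is. A minimal sketch of this decomposition, using a plain numpy YCbCr-style transform; the `predict_chroma` function is a hypothetical stand-in for a trained network, not part of any cited method:

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Split an RGB image (H, W, 3), floats in [0, 1], into luma + chroma."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b      # luminance (the "archive" channel)
    cb = 0.5 + (b - y) * 0.564                  # blue-difference chrominance
    cr = 0.5 + (r - y) * 0.713                  # red-difference chrominance
    return y, np.stack([cb, cr], axis=-1)

def ycbcr_to_rgb(y, chroma):
    """Recombine luminance with (predicted) chrominance into RGB."""
    cb, cr = chroma[..., 0] - 0.5, chroma[..., 1] - 0.5
    r = y + 1.403 * cr
    b = y + 1.773 * cb
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

def predict_chroma(y):
    """Hypothetical colorization network; here it predicts neutral (gray) chroma."""
    return np.full(y.shape + (2,), 0.5)

# Colorize a grayscale frame: keep the archive luminance, predict chroma only.
gray = np.random.rand(4, 4)
colorized = ycbcr_to_rgb(gray, predict_chroma(gray))
```

The point of the decomposition is that the network never has to reproduce the fine luminance detail of the archive footage; it only fills in color, which is why artefacts show up as wrong or bleeding chrominance rather than lost detail.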

Furthermore, these methods are designed to process recent, non-degraded data of fixed resolution, whereas archive videos come in different resolutions depending on whether the data is a scanned film roll (generally 4K or Ultra-HD), a digital video in PAL or NTSC (720x576 or 720x480), or a more recent video (for instance Full-HD). Finally, they are still under-developed for video post-production, as processing videos is much more difficult than processing still images [4]: specific problems of data size and complexity arise. In terms of quality, a video can have an excellent frame-by-frame signal-to-noise ratio but a catastrophic perceptual rendering (e.g., flicker or visual artefacts only visible in motion). It is therefore crucial to handle motion, illumination changes, and temporal consistency.
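The flicker problem can be made concrete with a simple per-pixel heuristic: chrominance that changes between consecutive frames while the underlying luminance stays stable is a strong hint of temporal inconsistency. A rough numpy sketch (the `flicker_score` helper is illustrative only, and ignores the motion compensation a real method would need):

```python
import numpy as np

def flicker_score(luma, chroma, luma_tol=0.02):
    """Mean chroma change on pixels whose luminance is nearly static.

    luma:   (T, H, W) grayscale frames
    chroma: (T, H, W, 2) predicted chrominance per frame
    A high score suggests flicker: colors changing where the image does not.
    """
    d_luma = np.abs(np.diff(luma, axis=0))                    # (T-1, H, W)
    d_chroma = np.abs(np.diff(chroma, axis=0)).mean(axis=-1)  # (T-1, H, W)
    static = d_luma < luma_tol   # pixels with (almost) no motion or lighting change
    if not static.any():
        return 0.0               # everything moved: this heuristic says nothing
    return float(d_chroma[static].mean())

T, H, W = 3, 4, 4
# A temporally stable colorization of a static scene scores exactly 0.
luma = np.tile(np.random.rand(H, W), (T, 1, 1))
chroma = np.tile(np.random.rand(H, W, 2), (T, 1, 1, 1))
stable = flicker_score(luma, chroma)

# Re-colorizing each frame independently (the naive approach) scores higher.
noisy = flicker_score(luma, np.random.rand(T, H, W, 2))
```

This is exactly the failure mode the text describes: each frame can look fine in isolation (good per-frame signal-to-noise ratio) while the sequence flickers, which no single-image metric can detect.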

In this context, the objective of the PhD is to develop techniques that make video post-production more efficient while keeping the quality level required by professionals. The methods will have to be interactive, giving artists the control necessary to ensure historical veracity. The PhD candidate will focus on deep-learning-based approaches, combining them with more traditional patch-based and variational approaches. The results will be applied to data provided by our industrial partner, Composite Films. We will build on recent work on colorization with variational auto-encoders [5] and Generative Adversarial Networks (GANs) [6].


[1] R. Zhang, P. Isola and A. A. Efros. “Colorful Image Colorization”, European Conference on Computer Vision (ECCV), 2016.

[2] S. Gu, R. Timofte, R. Zhang. “NTIRE 2019 challenge on image colorization: Report”, IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2019.

[3] F. Pierre, J.-F. Aujol, A. Bugeau, N. Papadakis and V.-T. Ta. “Luminance-Chrominance Model for Image Colorization”, SIAM Journal on Imaging Sciences 8(1), pp. 536–563, 2015.

[4] F. Pierre, J.-F. Aujol, A. Bugeau, N. Papadakis and V.-T. Ta. “Interactive Video Colorization within a Variational Framework”, SIAM Journal on Imaging Sciences 10(4), pp. 2293–2325, 2017.

[5] A. Deshpande, J. Lu, M.-C. Yeh, M. J. Chong and D. Forsyth. “Learning diverse image colorization”. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

[6] P. Vitoria, L. Raad and C. Ballester. “ChromaGAN: An Adversarial Approach for Picture Colorization”, arXiv preprint arXiv:1907.09837, 2019.



(c) GdR 720 ISIS - CNRS - 2011-2020.