Announcement


3D semantic reconstruction of urban scenes from satellite imagery

4 March 2022


Category: PhD student


Website:
https://recrutement.cnes.fr/fr/annonce/1497439-045-3d-semantic-reconstruction-of-urban-scenes-from-satellite-imagery-alpes-maritimes

Contacts:
David.Youssefi@cnes.fr, Florent.Lafarge@inria.fr

Context:
The automatic reconstruction of 3D urban scenes from satellite images, in the form of compact, precise meshes adapted to the geometry of urban objects, has been a scientific challenge for several decades. Today, the quality of the new stereoscopic satellite images at 30 cm resolution makes it possible to reconstruct urban objects such as buildings with a fine description of roofs and facades. In particular, the reconstruction of buildings at the LOD2 level of detail based on the CityGML formalism [1] is no longer an impossible scientific challenge.
Methods such as [2], [3] and [4] were developed for aerial data and produce detailed reconstructions of urban scenes, but they are computationally expensive and rely on data with many viewing angles and high-density point clouds. Although these methods are considered state-of-the-art, they do not meet the challenge of accurately reconstructing an urban scene from satellite imagery. It is therefore important to design new methods that are robust to satellite data.
Three-dimensional urban scene reconstruction from satellite imagery will make it possible to automatically reconstruct and store entire cities in lightweight files for numerous applications such as urban mobility (aerial or ground), renewable energy deployment, urban planning, simulation, and security.

Objectives:
The objective of this PhD thesis is to design and implement an automatic 3D reconstruction method for buildings under a CityGML LOD2 formalism based on the new generation of stereoscopic satellite images.
Traditional methods usually start with a semantic classification step followed by a 3D reconstruction of the objects composing the urban scene. Too often in these approaches, inaccuracies and errors from the classification phase are propagated and amplified throughout the reconstruction process without any possibility of subsequent correction. A few methods ([5] and [6]) classify satellite images using deep learning tools with good results, but they do not exploit the stereoscopic nature of the input data to extract geometric information that could improve classification performance.
In contrast to these methods, our main idea is to extract the semantic information of the urban scene and reconstruct the geometry of its objects simultaneously. We believe this joint treatment will provide sufficient robustness to the inaccuracies and general constraints of satellite imagery. Processing the semantics and geometry of the scene together should bring a reciprocal coherence between these two fundamental dimensions of the problem, leading to a significant quality gain in the output 3D models.
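As a purely illustrative sketch of what such a joint formulation could look like (this is not the method to be developed in the thesis), semantics and geometry can be coupled in a single labeling energy over a decomposition of the scene into cells:

E(l) = \sum_{i \in C} D_i(l_i) + \lambda \sum_{(i,j) \in A} V_{ij}(l_i, l_j)

where l_i is the semantic label of cell i (e.g. ground, facade, roof, vegetation, empty space), D_i measures the agreement between a label and the photometric and geometric evidence extracted from the stereoscopic images and the Digital Surface Model, V_{ij} penalizes implausible label transitions across the facet shared by adjacent cells i and j, and \lambda balances the two terms. Minimizing such an energy decides the semantics and the occupied volumes, hence the geometry, in a single step.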
In order to extract the scene semantics and geometry simultaneously, we will first study geometric data structures that decompose 3D space into simple atomic volumes, adapted to the raw data (stereoscopic satellite images) and to the derived data (Digital Surface Models) in a more relevant way than a simple voxel grid or an octree. We will then set up a labeling process for these cells, where each label corresponds to a possible semantic class of object, relying in particular on robust metrics that may be learned from data. A minimal sketch of this two-step idea is given after this paragraph.
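The sketch below is only a toy illustration, not the intended pipeline: the partition is induced by a handful of hand-chosen axis-aligned planes on a synthetic Digital Surface Model, and the cells are labeled with a naive occupancy measure, whereas the thesis will investigate partitions adapted to detected planar primitives and learned labeling metrics.

# Illustrative sketch (assumptions: synthetic DSM, hand-chosen planes, naive
# occupancy data term). It decomposes a scene bounding volume into atomic
# cells induced by candidate planes, then labels each cell from the DSM.
import numpy as np

# synthetic DSM: flat ground at 2 m with one box-shaped building (roof at 12 m)
H, W, gsd = 100, 100, 0.5                   # DSM size (pixels), ground sampling distance (m)
dsm = np.full((H, W), 2.0)
dsm[30:60, 40:70] = 12.0                    # building footprint

# candidate planes (normal n, offset d), axis-aligned here for simplicity;
# a real pipeline would detect roof and facade primitives in the data
planes = [
    (np.array([0.0, 0.0, 1.0]), -2.0),      # ground level
    (np.array([0.0, 0.0, 1.0]), -12.0),     # roof level
    (np.array([1.0, 0.0, 0.0]), -30 * gsd), # facades bounding the footprint
    (np.array([1.0, 0.0, 0.0]), -60 * gsd),
    (np.array([0.0, 1.0, 0.0]), -40 * gsd),
    (np.array([0.0, 1.0, 0.0]), -70 * gsd),
]

# sample the scene bounding volume on a coarse grid
xs = np.arange(H) * gsd
ys = np.arange(W) * gsd
zs = np.arange(0.5, 16.0, 1.0)
X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
pts = np.stack([X.ravel(), Y.ravel(), Z.ravel()], axis=1)

# each sample belongs to the atomic cell identified by its sign vector
# with respect to the candidate planes
signs = np.stack([(pts @ n + d) > 0 for n, d in planes], axis=1)
cell_ids = np.packbits(signs, axis=1, bitorder="little")[:, 0]

# naive per-cell data term: fraction of samples lying below the DSM surface
ij = np.floor(pts[:, :2] / gsd).astype(int)
below = pts[:, 2] <= dsm[ij[:, 0], ij[:, 1]]

for cid in np.unique(cell_ids):
    mask = cell_ids == cid
    occupancy = below[mask].mean()
    label = "solid (ground/building)" if occupancy > 0.5 else "empty (air)"
    print(f"cell {cid:2d}: {mask.sum():6d} samples, occupancy {occupancy:.2f} -> {label}")

In an actual system, the candidate planes would come from primitives detected in the satellite data, the per-cell evidence would combine photo-consistency, DSM occupancy and learned semantic scores, and the final labeling would result from a global optimization rather than an independent per-cell decision.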
The implementation of all these processes will have a strong focus on performance and scalability, so as to allow the reconstruction of very large urban scenes, i.e. at the scale of entire cities, from numerous satellite images.

References:
[1] G. Gröger; T. H. Kolbe; C. Nagel; K.-H. Häfele, OGC City Geography Markup Language (CityGML) Encoding Standard, 2012.
[2] L. Zhu; S. Shen; X. Gao; Z. Hu, Large Scale Urban Scene Modeling from MVS Meshes, 2018.
[3] J.-P. Bauchet; F. Lafarge, City Reconstruction from Airborne Lidar: A Computational Geometry Approach, 2019.
[4] V. Bouzas; H. Ledoux; L. Nan, Structure-aware Building Mesh Polygonization, 2020.
[5] L. Zhu; S. Shen; X. Gao; Z. Hu, Urban Scene Vectorized Modeling Based on Contour Deformation, 2020.
[6] H. Li; K. Qiu; L. Chen; X. Mei; L. Hong; C. Tao, SCAttNet: Semantic Segmentation Network with Spatial and Channel Attention Mechanism for High-Resolution Remote Sensing Images, 2020.