Natural sciences such as astrophysics, geophysics and nuclear physics often use numerical simulations to model highly complex physical systems. These simulations are increasingly accurate thanks to the available computational power. For example, 3D convection models can simulate the thermochemical evolution and structure of stars and planets.
However, to discriminate between different models and to estimate physical parameters (e.g., initial conditions), the outputs of these simulations must be compared to observations. This comparison of simulations with observations is a major challenge in the natural sciences. Indeed, numerical simulations can now model quite accurately objects that are impossible to observe directly (e.g., the interiors of stars and planets, the environments of stars and black holes, etc.). As for the observations, although their quality and quantity are increasing rapidly, they are often only indirectly related to the actual parameters of interest (e.g., seismic wave observations are used to construct images of the Earth's mantle, and measured interferometric visibilities are used to characterize planet-forming disks).
Inferring simulation parameters from observations is very challenging. When a single simulation run is computationally intensive, neither stochastic nor continuous optimization methods can be used to infer the parameters. In most cases, one can only search for the best fit on a low-dimensional, pre-computed grid of model parameters.
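As an illustration of this baseline, a minimal grid-search sketch is given below. The "simulation" here is a hypothetical toy (a two-parameter Gaussian signal, not any model from the proposal); in practice each grid point would require an expensive 3D simulation, which is why the grid must remain low-dimensional and coarse.

```python
import numpy as np

# Toy stand-in for an expensive simulation: a 1-D signal controlled by
# two parameters (amplitude a, width w). Purely illustrative.
x = np.linspace(-1.0, 1.0, 100)

def simulate(a, w):
    return a * np.exp(-x**2 / (2.0 * w**2))

# Pre-compute a low-dimensional grid of models (16 x 16 parameter values).
amps = np.linspace(0.5, 2.0, 16)
widths = np.linspace(0.1, 0.5, 16)
grid = np.array([[simulate(a, w) for w in widths] for a in amps])

# Mock observation: true parameters a=1.3, w=0.27, plus small noise.
rng = np.random.default_rng(0)
obs = simulate(1.3, 0.27) + 0.01 * rng.standard_normal(x.size)

# Best fit = grid point minimizing the chi-squared misfit to the observation.
chi2 = ((grid - obs) ** 2).sum(axis=-1)
i, j = np.unravel_index(chi2.argmin(), chi2.shape)
print(f"best-fit parameters: a = {amps[i]:.2f}, w = {widths[j]:.3f}")
```

The recovered parameters are only as accurate as the grid spacing allows, which is precisely the limitation the proposed interpolator is meant to lift.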
The ultimate goal of the proposed thesis is to build a fast interpolation method on a grid of simulated images from computational physics (in a broad sense: these may also be 3D volumes or spectra). With such a method, an approximation of the simulated image for any set of parameters can be obtained quickly, without running the expensive simulation. It will then be possible to use any method (optimization, Bayesian inference) to derive the sought-after distribution of parameters.
The main idea is to use a deep learning framework to build the interpolator. Indeed, all possible modeled images are concentrated on a lower-dimensional subspace or manifold. Deep neural networks such as Generative Adversarial Networks (GANs) appear to be very effective at modeling manifolds and could be much more efficient interpolators than classical polynomial interpolators. Trained on a grid of simulated images, these networks will produce continuous approximations of the images. As a toy example, on a properly defined manifold, images of a single circle vary continuously with the circle's radius. Interpolation between two images of circles with different radii must follow this manifold, whereas any polynomial interpolation will produce an image with two circles rather than an image of a single circle with an intermediate radius.
Grids of models are ubiquitous in physics, so such a project could have a broad impact. To ensure that it is both robust and useful in practice, the deep-learning-based interpolator will be developed for two different applications: (i) characterization of planet-forming disks using the VLTI, in collaboration with J. Kluska (KU Leuven), and (ii) reconstruction of mantle structure from geophysical surface observations.
This thesis will be co-directed by Ferréol Soulez (CRAL) and Thomas Bodin (LGL-TPE).
(c) GdR 720 ISIS - CNRS - 2011-2020.