
Asynchronous algorithms for fast Bayesian inference

20 January 2022

Category: Internship

Job type: internship, M.Sc. (M2) or last-year engineering student.
Keywords: Bayesian inference, continuous optimization, distributed asynchronous algorithms, MCMC methods.
Dates: Starting date between March and April 2022, 4 to 6 months internship.
Full job description:
Laboratory: Centre de Recherche en Informatique, Signal et Automatique de Lille (UMR 9189 CRIStAL), Villeneuve d'Ascq, France.
Contacts: Pierre-Antoine Thouvenin (pierre-antoine(dot)thouvenin(at)centralelille(dot)fr),
Pierre Chainais (pierre(dot)chainais(at)centralelille(dot)fr)


Abstract: Bayesian inference is a standard approach to estimating parameters from a dataset, a typical setting underlying the resolution of inverse problems. An inverse problem consists in estimating a collection of parameters involved in a physical model from degraded, noisy observations, e.g., reconstructing an image of the sky from noisy, incomplete observations in radio astronomy. In many signal and image processing applications, especially in astronomy (Abdulaziz2019, Cai2018) and remote sensing (Ghamisi2019), no ground truth is available. Fast parameter inference under controlled uncertainty is thus critical to guarantee the quality of the resulting predictions. Indeed, different values of a parameter can be associated with different physical processes, for instance in remote sensing source separation in the presence of outliers.
Inference can be computationally expensive, with a cost that increases significantly with both the number of observations (large datasets) and the number of parameters to be inferred (high-dimensional problems). Typical signal and image processing applications lead to the resolution of high-dimensional inverse problems relying on large datasets. Asynchronous (parallel or distributed) optimization algorithms have recently regained interest due to their potential to accelerate the computation of an estimator, in comparison with their synchronous counterparts (Hannah2017).
The project aims at investigating the potential of asynchrony to accelerate distributed optimization algorithms amenable to a Single Program Multiple Data (SPMD) implementation. Several aspects will be studied, such as the convergence of the algorithms, the resulting estimation quality and the inference time. Applications to the resolution of inverse problems in remote sensing or astronomy will be considered.
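To give a flavour of the asynchronous schemes mentioned above (this toy example is purely illustrative and not part of the project description), the following sketch solves a least-squares problem in an SPMD fashion: the data are split row-wise across workers, and each worker repeatedly applies a gradient step on a shared iterate without waiting for the others, so reads may be stale (a lock-free, Hogwild-style scheme). All names and problem sizes here are made up for the example.

```python
import numpy as np
import threading

# Toy problem: minimize f(x) = 0.5 * ||A x - b||^2, with the rows of
# (A, b) split across workers (SPMD style).
rng = np.random.default_rng(0)
n, d, n_workers = 200, 10, 4
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true                      # noiseless data for illustration

x = np.zeros(d)                     # shared iterate, updated in place
blocks = np.array_split(np.arange(n), n_workers)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative step size

def worker(rows, n_iter=2000):
    A_loc, b_loc = A[rows], b[rows]
    for _ in range(n_iter):
        # the read of x may be stale: other workers write concurrently
        grad = A_loc.T @ (A_loc @ x - b_loc)
        x[:] -= step * grad         # in-place update, no lock

threads = [threading.Thread(target=worker, args=(rows,)) for rows in blocks]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(np.linalg.norm(x - x_true))   # distance to the solution; should be small
```

Analyzing when and how fast such lock-free updates converge, despite stale reads and overwritten updates, is precisely the kind of question raised by asynchronous algorithms (Hannah2017).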
Depending on the evolution of the project, the study will be extended to a few selected Markov chain Monte Carlo (MCMC) methods (Durmus2018, Terenin2020, Simsekli2018) to provide estimators with quantified uncertainty, beyond the point estimate provided by optimization algorithms.
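As a minimal illustration of the MCMC side (again, a toy sketch that is not part of the project description), the unadjusted Langevin algorithm below samples a 1D Gaussian posterior: instead of a single point estimate, it returns a cloud of samples from which credible intervals can be computed. The target N(mu, sigma^2) and all parameter values are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 2.0, 0.5                 # toy Gaussian posterior N(mu, sigma^2)

def grad_log_post(x):
    # gradient of log N(mu, sigma^2)
    return -(x - mu) / sigma**2

h = 0.01                             # discretization step size
x = 0.0
samples = []
for k in range(50_000):
    # unadjusted Langevin update: gradient step + Gaussian noise
    x = x + h * grad_log_post(x) + np.sqrt(2 * h) * rng.standard_normal()
    if k > 5_000:                    # discard burn-in
        samples.append(x)

samples = np.asarray(samples)
print(samples.mean(), samples.std())      # approximately mu and sigma
ci = np.percentile(samples, [2.5, 97.5])  # 95% credible interval
```

The credible interval `ci` is exactly the kind of uncertainty quantification that a point estimator cannot provide; proximal variants of this Langevin scheme handle non-smooth posteriors (Durmus2018).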
This M.Sc. project may be continued as a PhD thesis, for which a grant is already secured for the period 2022-2025 thanks to the ANR Chaire IA SHERLOCK.

Scientific environment: The intern will be jointly supervised by Pierre Chainais, professor at Centrale Lille, and Pierre-Antoine Thouvenin, assistant professor at Centrale Lille. The internship will take place in the Centre de Recherche en Informatique, Signal et Automatique de Lille (CRIStAL, UMR 9189), France, within the SigMA team.
This project is part of the ANR Chaire IA SHERLOCK (Fast inference with controlled uncertainty: application to astrophysical observations) led by Pierre Chainais (co-funded by the Agence Nationale de la Recherche (ANR), ISITE, Centrale Lille Institut and Région Hauts-de-France). Participation in a national or international workshop is envisaged.

Period and continuation as a PhD thesis: This 4- to 6-month internship will start between March and April 2022. The precise start and end dates will be adjusted depending on the availability of the candidate. The intern will be granted the usual stipend of ~600 euros/month (3.90 euros/hour).

Profile and requirements: Master 2 or last-year engineering school students with a major in applied mathematics, computer science or electrical engineering. The project requires a strong background in data science and/or machine learning (statistics, optimization) and in signal & image processing. Very good Python coding skills are expected. A B2 English level is mandatory. Knowledge of C++ programming, as well as experience or interest in parallel/distributed code development (MPI, OpenMP, CUDA, ...), will be appreciated.

Application procedure: Applicants are invited to send the following documents in .pdf format to both co-advisors:

  • a detailed curriculum vitae;
  • official transcripts from the institutions you have attended over the last 2 years (in French or in English);
  • references: letters of recommendation or names of two researchers/professors willing to recommend your application.

For further information, please contact both co-advisors of the project: Pierre-Antoine Thouvenin (pierre-antoine(dot)thouvenin(at)centralelille(dot)fr), Pierre Chainais (pierre(dot)chainais(at)centralelille(dot)fr).

Abdulaziz, Abdullah et al. (2019). “Wideband Super-Resolution Imaging in Radio Interferometry via Low Rankness and Joint Average Sparsity Models (HyperSARA)”. In: Monthly Notices of the Royal Astronomical Society 489.1, pp. 1230–1248.
Cai, Xiaohao et al. (2018). “Uncertainty Quantification for Radio Interferometric Imaging – I. Proximal MCMC Methods”. In: Monthly Notices of the Royal Astronomical Society 480.3, pp. 4154–4169.
Durmus, Alain et al. (2018). “Efficient Bayesian Computation by Proximal Markov Chain Monte Carlo: When Langevin Meets Moreau”. In: SIAM J. Imaging Sci. 11.1, pp. 473–506.
Ghamisi, Pedram et al. (2019). “Multisource and Multitemporal Data Fusion in Remote Sensing: A Comprehensive Review of the State of the Art”. In: IEEE Geoscience and Remote Sensing Magazine 7.1, pp. 6–39.
Hannah, Robert et al. (2017). “More Iterations per Second, Same Quality – Why Asynchronous Algorithms May Drastically Outperform Traditional Ones”. In: arXiv: 1708.05136.
Simsekli, Umut et al. (2018). “Asynchronous Stochastic Quasi-Newton MCMC for Non-Convex Optimization”. In: International Conference on Machine Learning, pp. 4674–4683.
Terenin, Alexander et al. (2020). “Asynchronous Gibbs Sampling”. In: arXiv: 1509.08999.