
Neural networks for inverse problems in satellite imaging

Date: 10-09-2021
Location: Videoconference

Scientific themes:
  • B - Image and Vision
  • T - Learning for signal and image analysis

Please note that, in order to guarantee access to the meeting rooms for all registrants, registration for meetings is free but mandatory.



140 GdR ISIS members and 161 non-members are registered for this meeting.

Room capacity: 300 people.


Title: Neural networks for inverse problems in satellite imaging / Réseaux de neurones pour les problèmes inverses en imagerie satellite

Date: 10 September 2021. The Zoom link will be sent by email to registered participants the day before the event.

English abstract:

Neural networks (NNs) have become ubiquitous in computer vision and image understanding. Their interest for image reconstruction tasks, such as denoising or deconvolution, has been assessed more recently and is an active research subject, covering both theory and applications. This workshop will focus on the use of NNs to solve inverse problems in imaging. The goal of this workshop is to propose an overview of the recent methodological results, as well as applications in the context of satellite imagery.

The technical content will include:

  • Model-based architectures with supervised training (including unrolled CNNs)
  • Decoupled regularization methods (plug-and-play, deep generative prior)
  • Unsupervised methods (deep image prior) and self-supervised methods
  • Stability and robustness results for the above methods
  • Applications in remote sensing: super-resolution, denoising, onboard processing, etc.

French abstract:

Neural networks have become essential for computer vision and for extracting information from images. For several years, they have also been transforming lower-level image processing tasks such as denoising and super-resolution. This workshop will be devoted to the use of neural networks for solving inverse problems in imaging. The goal is to survey recent methodological advances and to present applications in satellite imagery.

The program will cover the following topics:

  • Supervised, model-based approaches (unrolling/unfolding)
  • Decoupled regularization approaches (plug-and-play, deep generative prior)
  • Unsupervised methods (deep image prior) and self-supervised methods
  • Stability and robustness of the above methods
  • Applications in satellite imaging: super-resolution, denoising, onboard processing, etc.

Invited Speakers

  • Marie Chabert (Toulouse INP)
  • Emilie Chouzenoux (Inria)
  • Rémi Cresson (INRAE)
  • Loïc Denis (Université Jean Monnet)
  • Gabriele Facciolo (ENS Paris-Saclay)
  • Ulugbek Kamilov (Washington University)
  • Maximilian März (TU Berlin)

Call for contributions: the call for contributions is closed.


This workshop is co-organized by the GdR ISIS and the COMET TSI.


Program of the day

Talks whose speakers gave their permission were recorded; the recordings can be viewed here.

  • 08h45 Introduction [video]
  • 09h00 Maximilian März, Institute for Mathematics, TU Berlin. Solving Inverse Problems With Deep Neural Networks--Robustness Included?
  • 09h50 Gabriele Facciolo, Centre Borelli, ENS Paris-Saclay. Self-supervised multi-image denoising & super-resolution [video]
  • 10h20 Coffee break
  • 10h50 Emilie Chouzenoux, Inria Saclay. Unfolding proximal algorithms [video]
  • 11h40 Loïc Denis, Université de Saint-Etienne. Deep restoration of SAR images [video]
  • 12h10 Valentin Debarnot, ITAV et CNRS. DeepBlur: blind identification of space variant PSF [video]
  • 12h30 Lunch break
  • 13h30 Marie Chabert, Toulouse INP. Variational autoencoder with reduced complexity for on-board compression of satellite images [video]
  • 14h00 François De Vieilleville, AGENIUM. Insights from the spotGEO challenge [video]
  • 14h30 Ronan Fablet, IMT Atlantique. Joint learning of variational models and solvers for the resolution of inverse problems: applications to ocean remote sensing [video]
  • 15h00 Coffee break
  • 15h30 Ulugbek Kamilov, Washington University. Computational Imaging: Reconciling Physical and Learned Models [video]
  • 16h20 Rémi Cresson, INRAE. Time series images pre-processing applications with deep learning [video]
  • 16h50 Youssef Mourchid, ISEN Brest, LSL. Dual Color-Image Discriminators Adversarial Networks for Generating Artificial-SAR Colorized Images from SENTINEL-1 Images [video]
  • 17h10 Conclusion

Abstracts of the contributions

Invited presentations

Marie Chabert, Toulouse INP and IRIT. Variational autoencoder with reduced complexity for on-board compression of satellite images
Abstract: Recently, convolutional neural networks have been successfully applied to lossy image compression. End-to-end optimized, possibly variational, autoencoders are able to outperform traditional encoders in terms of rate-distortion trade-off, albeit at the cost of very high computational complexity. A learning phase on large databases allows autoencoders to learn the image representation and its statistical distribution, possibly using a non-parametric density model or an auxiliary autoencoder. In the context of on-board satellite compression, however, computational resources are subject to drastic limitations. The objective of this work is to design a variational autoencoder with reduced complexity that meets these constraints while maintaining performance close to that of state-of-the-art autoencoders. In addition to a reduction of the network size that targets each parameter of the analysis and synthesis transforms, we propose a simplified entropy model that preserves adaptability to the input image. The resulting compression scheme outperforms the CCSDS (Consultative Committee for Space Data Systems) encoder in terms of rate-distortion trade-off and is competitive with state-of-the-art learned image compression schemes.

Emilie Chouzenoux, Inria Saclay. Unfolding proximal algorithms
Abstract: We show in this talk how proximal algorithms, which constitute a powerful class of optimization methods, can be unfolded in the form of deep neural networks. This yields improved performance and faster implementations, while making it possible to build more explainable, more robust, and more insightful neural network architectures. Application examples in the domain of image restoration will be provided.
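As a rough illustration of the unrolling idea (not the speaker's actual architecture), the sketch below writes each iteration of ISTA, a classical proximal-gradient algorithm for sparse recovery, as one "layer"; in a learned unrolled network, the step size and threshold (here fixed) would become trainable per-layer parameters. The problem sizes and values are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm: shrinks each entry toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unrolled_ista(A, y, n_layers, lam):
    # Each "layer" is one proximal-gradient iteration:
    #   x <- prox_{lam*step*||.||_1}( x - step * A^T (A x - y) )
    # In an unrolled network, step and lam would be learned per layer.
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(x - step * A.T @ (A @ x - y), lam * step)
    return x

# Toy sparse inverse problem: y = A x + noise, x sparse.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 30, 70]] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = unrolled_ista(A, y, n_layers=200, lam=0.05)
```

Stacking a fixed number of such layers and training their parameters end to end is what turns the optimizer into a network.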

Rémi Cresson, INRAE. Time series images pre-processing applications with deep learning
Abstract: More and more remote sensing imagery is available today for earth observation, due to the rise of satellite constellations. In addition, community-based geographic information gathering systems are expanding every day. Given this amount of geospatial data, tackling the complexity of earth observation is emerging as an exciting challenge. Recently, deep learning has had a breakthrough impact in many domains, thanks to state-of-the-art results and the ability to generalize well over very large and diverse datasets. Thanks to the availability of data and computing resources, remote sensing image processing benefits strongly from these techniques. While automatic information retrieval from remote sensing images is well established (for instance, land cover mapping), deep-learning-based techniques are also extremely useful for image pre-processing. In this short presentation, we will introduce two applications for pre-processing optical image time series: (i) super-resolution, which aims to sharpen a low-resolution image, and (ii) optical image reconstruction from the joint fusion of optical and synthetic aperture radar images, which aims to reconstruct images corrupted by clouds.

Loïc Denis, Université de Saint-Etienne. A review of deep-learning techniques for SAR image restoration
Abstract: The speckle phenomenon is a major hurdle for the analysis of synthetic aperture radar images and has driven the development of numerous image restoration techniques since the 1980s. The advent of deep neural networks has offered new ways to tackle this long-standing problem. I will review the different strategies proposed in recent years, in particular plug-and-play techniques and supervised, semi-supervised, and self-supervised methods.

François De Vieilleville, AGENIUM. Insights from the spotGEO challenge
Abstract: In this talk, we show how purely supervised deep learning approaches were not enough to obtain the best possible performance on a seemingly simple segmentation problem: the detection of geostationary satellites in amateur-grade telescope images for the ESA spotGEO challenge.
A short comparison of the pros and cons of a possible variational approach is given, before presenting a hybrid processing chain that combines image processing techniques (denoising, object matching, post-processing) with a supervised deep learning approach.

Gabriele Facciolo, Centre Borelli & ENS Paris-Saclay. Self-supervised multi-image denoising & super-resolution
Abstract: In this talk I will present some works carried out at Centre Borelli on self-supervised image and video restoration. Nowadays, deep-learning techniques represent the state of the art in image restoration. The reason for this success is that data-driven methods can incorporate realistic image priors, leading to improved restoration. However, these methods are data-hungry and rely heavily on the size and quality of the training dataset. The importance of training with realistic data was highlighted in several works showing that models trained on synthetic data generalize poorly to real images. Obtaining realistic datasets of noisy/clean images or videos for supervised training can be challenging in many application scenarios. The recently proposed noise2noise technique showed that it is possible to train a denoising network in a self-supervised way without a noisy/clean image dataset. We extended this concept to training networks for video denoising, demosaicking, and multi-image super-resolution by exploiting the temporal redundancy in videos or image bursts. In these works, the network is trained to predict a frame of a noisy sequence from its neighboring frames, eliminating the need for ground truth.
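The key observation behind this family of methods can be checked numerically: when the noise in the target frame is independent of the input frames, the loss against a noisy target equals the loss against the clean image plus a constant (the noise variance), so minimizing one minimizes the other. The toy below uses a static scene and a mean over neighboring frames standing in for a network; all sizes and noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.3
clean = rng.uniform(0.0, 1.0, size=(64, 64))                 # static scene
frames = clean + sigma * rng.standard_normal((5, 64, 64))    # noisy "video"

# Predict the middle frame from its neighbors (their mean stands in for a
# trained network). The noise in frames[2] is independent of the input
# frames, so the self-supervised loss decomposes as
#   E||pred - noisy_target||^2 = E||pred - clean||^2 + sigma^2.
pred = frames[[0, 1, 3, 4]].mean(axis=0)
mse_noisy_target = np.mean((pred - frames[2]) ** 2)
mse_clean_target = np.mean((pred - clean) ** 2)
```

The two losses differ by almost exactly sigma squared, which is why training against noisy neighbors is a valid surrogate for training against the (unavailable) clean frame.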

Ulugbek S. Kamilov, Washington University in St. Louis. Computational Imaging: Reconciling Physical and Learned Models
Abstract: Computational imaging is a rapidly growing area that seeks to enhance the capabilities of imaging instruments by viewing imaging as an inverse problem. There are currently two distinct approaches for designing computational imaging methods: model-based and learning-based. Model-based methods leverage analytical signal properties and often come with theoretical guarantees and insights. Learning-based methods leverage data-driven representations for best empirical performance through training on large datasets. This talk presents Regularization by Artifact Removal (RARE) as a framework for reconciling both viewpoints by providing a learning-based extension to the classical theory. RARE relies on pre-trained "artifact-removing deep neural nets" for infusing learned prior knowledge into an inverse problem, while maintaining a clear separation between the prior and the physics-based acquisition model. Our results indicate that RARE can achieve state-of-the-art performance in different computational imaging tasks, while also being amenable to rigorous theoretical analysis. We will focus on the applications of RARE in biomedical imaging, including magnetic resonance and tomographic imaging.
Associated references:
J. Liu, Y. Sun, C. Eldeniz, W. Gan, H. An, and U. S. Kamilov, "RARE: Image Reconstruction using Deep Priors Learned without Ground Truth," IEEE J. Sel. Topics Signal Process., vol. 14, no. 6, pp. 1088-1099, October 2020.
Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, and U. S. Kamilov, "SIMBA: Scalable Inversion in Optical Tomography using Deep Denoising Priors," IEEE J. Sel. Topics Signal Process., vol. 14, no. 6, pp. 1163-1175, October 2020.
Y. Sun, J. Liu, Y. Sun, B. Wohlberg, and U. S. Kamilov, "Async-RED: A Provably Convergent Asynchronous Block Parallel Stochastic Method using Deep Denoising Priors," Proc. Int. Conf. Learn. Represent. (ICLR 2021), Vienna, Austria, May 4-8, 2021.
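The separation between a physics-based data term and a denoiser-based prior that these works exploit can be sketched with a RED-style fixed-point iteration, where a simple smoothing filter stands in for the learned artifact-removal network. The 1-D inpainting problem, step sizes, and the hand-made denoiser below are all illustrative, not the speakers' actual setup.

```python
import numpy as np

def denoise(x):
    # 3-tap moving average, standing in for a pre-trained
    # "artifact-removing" network (the learned prior).
    xp = np.pad(x, 1, mode="edge")
    return (xp[:-2] + xp[1:-1] + xp[2:]) / 3.0

rng = np.random.default_rng(2)
n = 200
t = np.linspace(0.0, 1.0, n)
x_true = np.sin(2 * np.pi * 3 * t)      # smooth ground-truth signal
mask = rng.random(n) < 0.4              # observe only 40% of the samples
y = x_true[mask] + 0.05 * rng.standard_normal(int(mask.sum()))

# RED-style iteration: gradient step on the data-fidelity term plus a
# pull toward the denoiser's output, keeping physics and prior separate:
#   x <- x - gamma * ( A^T(Ax - y) + tau * (x - D(x)) )
x = np.zeros(n)
gamma, tau = 0.5, 0.5
for _ in range(300):
    grad_data = np.zeros(n)
    grad_data[mask] = x[mask] - y       # A is a sampling mask here
    x = x - gamma * (grad_data + tau * (x - denoise(x)))

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Swapping `denoise` for a trained network, without touching the data term, is the plug-and-play idea in a nutshell.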

Maximilian März, Institute for Mathematics, TU Berlin. Solving Inverse Problems With Deep Neural Networks--Robustness Included?
Abstract: In the past five years, deep learning methods have become state-of-the-art in solving various inverse problems. Before such approaches can find application in safety-critical fields, a verification of their reliability appears mandatory. Recent works have pointed out instabilities of deep neural networks for several image reconstruction tasks. In analogy to adversarial attacks in classification, it was shown that slight distortions in the input domain may cause severe artifacts.
In this talk, we will shed new light on this concern and present an extensive empirical study of the robustness of deep-learning-based algorithms for solving underdetermined inverse problems. This covers compressed sensing with Gaussian measurements as well as image recovery from Fourier and Radon measurements, including a real-world scenario for magnetic resonance imaging (using the NYU-fastMRI dataset). Our main focus is on computing adversarial perturbations of the measurements that maximize the reconstruction error. In contrast to previous findings, our results reveal that standard end-to-end network architectures are not only surprisingly resilient against statistical noise, but also against adversarial perturbations. Remarkably, all considered networks are trained by common deep learning techniques, without sophisticated defense strategies.
This is joint work with Martin Genzel (Utrecht University) and Jan Macdonald (TU Berlin).
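For a trained network, the worst-case measurement perturbation must be found by gradient ascent, but for a linear reconstruction map it has a closed form that makes the concept concrete: the perturbation maximizing the reconstruction error is the leading right singular vector of the reconstruction operator. The Tikhonov-regularized inverse below is a toy stand-in for a reconstruction method, with illustrative sizes.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 30, 60
A = rng.standard_normal((m, n))

# Linear reconstruction map: Tikhonov-regularized pseudo-inverse,
# x_hat = R y with R = A^T (A A^T + alpha I)^{-1}.
R = A.T @ np.linalg.inv(A @ A.T + 0.1 * np.eye(m))

# Worst-case unit-norm perturbation e of the measurements: the right
# singular vector of R with the largest singular value maximizes ||R e||.
U, s, Vt = np.linalg.svd(R)
e_adv = Vt[0]                                    # ||e_adv|| = 1
e_rand = rng.standard_normal(m)
e_rand /= np.linalg.norm(e_rand)

amp_adv = np.linalg.norm(R @ e_adv)              # equals s[0]
amp_rand = np.linalg.norm(R @ e_rand)
```

Comparing `amp_adv` with `amp_rand` shows why adversarial perturbations can be far more damaging than statistical noise of the same magnitude; for nonlinear networks the talk's study replaces the SVD with iterative attack optimization.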


Valentin Debarnot, ITAV and CNRS. DeepBlur: blind identification of space variant PSF
Abstract: In this talk, I will present a deep-learning-based methodology for solving blind inverse problems, in particular blind deblurring. We start by modeling an optical system as a low-dimensional manifold of operators. The simplest instance of this model expands the point spread function in a low-dimensional vector subspace. Under this hypothesis on the observation model, we propose to train two deep convolutional networks: the first identifies the blur operator within the known subspace from the blurred observation; the second robustly deblurs the image with the estimated blur. Both learning strategies require specific training procedures, which will be discussed in my presentation.
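The subspace observation model underlying this approach can be written down directly: the PSF is a linear combination of a few known basis kernels, and the blurred observation is linear in the coefficients. The sketch below identifies those coefficients by least squares when the sharp signal is known (a calibration setting); in the blind case of the talk, a trained network replaces this step. The Gaussian basis, sizes, and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 128

# Low-dimensional PSF subspace: normalized Gaussian kernels of
# different widths serve as the basis {k_1, k_2, k_3}.
grid = np.arange(-10, 11)
basis = np.stack([np.exp(-grid**2 / (2.0 * s**2)) for s in (1.0, 2.0, 4.0)])
basis /= basis.sum(axis=1, keepdims=True)

c_true = np.array([0.2, 0.5, 0.3])               # subspace coefficients
psf = c_true @ basis                             # PSF = sum_k c_k k_k
x = rng.standard_normal(n)                       # sharp calibration signal
y = np.convolve(x, psf, mode="same") + 0.001 * rng.standard_normal(n)

# Since convolution is linear, y ~= sum_k c_k (x * k_k) = M c, so with a
# known sharp signal the identification is ordinary least squares.
M = np.stack([np.convolve(x, b, mode="same") for b in basis], axis=1)
c_hat, *_ = np.linalg.lstsq(M, y, rcond=None)
```

The low-dimensional parameterization is what makes the identification well posed; the networks in the talk learn to perform it (and the subsequent deblurring) without access to the sharp signal.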

Ronan Fablet, IMT Atlantique, Lab-STICC. Joint learning of variational models and solvers for the resolution of inverse problems: applications to ocean remote sensing
Abstract: This talk addresses physics-informed deep learning schemes for satellite ocean remote sensing data. Recently, learning-based strategies have proven very efficient for solving inverse problems, by learning direct inversion schemes or plug-and-play regularizers from available pairs of true states and observations. Here, we go a step further and design an end-to-end framework that learns actual variational formulations for inverse problems in such a supervised setting. The variational cost and the gradient-based solver are both stated as neural networks, using automatic differentiation for the latter. All trainable components can be learned jointly to minimize a supervised or unsupervised training loss. We illustrate the relevance of the proposed scheme for the reconstruction and forecasting of sea surface dynamics from satellite data, including space-time interpolation, sampling strategies, and multimodal inversion issues.
Associated references:
R. Fablet, L. Drumetz, F. Rousseau. End-to-end learning of variational models and solvers for the resolution of interpolation problems. Proc. IEEE ICASSP, 2021.
R. Fablet, L. Drumetz, F. Rousseau. Joint learning of variational representations and solvers for inverse problems with partially-observed data. arXiv, 2021.
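A drastically simplified stand-in for this idea: parameterize a variational cost, run a gradient-descent solver on it, and select the cost's parameter by the supervised loss against the true state. Here the "learned" component is a single scalar regularization weight chosen over a grid, whereas the talk's framework trains full neural parameterizations of both cost and solver end to end with automatic differentiation; all quantities below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50
A = rng.standard_normal((30, n))                 # underdetermined forward model
x_true = np.cumsum(rng.standard_normal(n)) / 5.0 # smooth-ish ground truth
y = A @ x_true + 0.1 * rng.standard_normal(30)

D = (np.eye(n) - np.eye(n, k=1))[:-1]            # finite-difference prior

def solver(theta, n_steps=200):
    # Gradient-descent solver for the variational cost
    #   U(x) = ||A x - y||^2 + theta * ||D x||^2
    H = A.T @ A + theta * D.T @ D
    step = 1.0 / np.linalg.norm(H, 2)
    x = np.zeros(n)
    for _ in range(n_steps):
        x = x - step * (H @ x - A.T @ y)
    return x

# "Learning" the cost: pick the regularization weight that minimizes the
# supervised loss against the known true state.
thetas = [0.0, 0.01, 0.1, 1.0, 10.0]
losses = [np.linalg.norm(solver(t) - x_true) for t in thetas]
theta_best = thetas[int(np.argmin(losses))]
```

Replacing the grid search with gradients through the unrolled solver, and the scalar weight with network weights, recovers the structure of the talk's end-to-end framework.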

Youssef Mourchid, ISEN Brest / IMS - U. Bordeaux. Dual Color-Image Discriminators Adversarial Networks for Generating Artificial-SAR Colorized Images from SENTINEL-1 Images
Abstract: In this presentation, we address the generation of colorized artificial SAR images from Sentinel-1 SAR images. We introduce a new generative adversarial network (GAN) with two discriminators to generate the colorized artificial SAR images. Building on the standard GAN architecture, we use an additional color discriminator that evaluates differences in brightness, contrast, and color between the generated images and the ground truth, while the image discriminator compares texture and content.
To reach the level of colorization required in the generation process, we use a color loss function dedicated to color comparison, unlike conventional approaches that rely solely on the L1 loss. Moreover, to overcome the vanishing-gradient problem in the deep network architecture and to ensure the flow of low-level information through the network layers, we add residual connections to our generator, which follows the general U-Net design.
The performance of the proposed model was evaluated quantitatively and qualitatively on the SEN1-2 dataset. The results show that the proposed model generates realistic colorized images with fewer artifacts than approaches from the literature. Moreover, the model helps maintain color stability and visual recognizability in large, weakly textured continuous regions, such as plantation and water areas, which are difficult to distinguish in SAR images.
Associated reference: https://hal.archives-ouvertes.fr/hal-03164190