Postdoc position in signal/image processing/machine learning
22 November 2022
Category: Postdoctoral researcher
Contact : firstname.lastname@example.org
Blind and semi-blind unmixing problems are classical inverse problems in a very wide range of scientific domains, from sound processing and medical signal processing to remote sensing and astrophysics. In these domains, the fast development of high-resolution/high-sensitivity multispectral sensors mandates the development of dedicated analysis tools. Figure 1 shows a particularly representative example: a supernova remnant observed in multiple X-ray bands. For this type of data, the observations can be modelled as a linear or non-linear combination of various elementary physical components, which the astrophysicist aims to retrieve.
To estimate these elementary physical components, one needs to tackle an unsupervised (when no information is available) or semi-supervised (when partial information is available) unmixing problem. The exact same mathematical problem arises in remote sensing and Earth observation for the extraction of elementary components such as water, vegetation, buildings, etc.
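As an illustration of the linear case, the sketch below builds a toy mixing model and recovers the components when the mixing matrix is known (the semi-supervised setting); the matrix names and sizes are hypothetical, chosen only to make the model concrete:

```python
import numpy as np

# Toy linear mixing model: X = M @ S + noise (hypothetical sizes).
# M (n_bands x n_components) holds the elementary spectra,
# S (n_components x n_pixels) holds their per-pixel abundances.
rng = np.random.default_rng(0)
n_bands, n_components, n_pixels = 8, 3, 100

M = rng.random((n_bands, n_components))   # elementary spectra
S = rng.random((n_components, n_pixels))  # abundance maps
X = M @ S + 0.01 * rng.standard_normal((n_bands, n_pixels))

# With M known (semi-supervised case), S is recovered by least squares;
# the blind case must estimate M and S jointly, which is far harder.
S_hat, *_ = np.linalg.lstsq(M, X, rcond=None)
```

In the blind or semi-supervised settings targeted by the project, the least-squares step above is replaced by a joint estimation of M and S under suitable priors, which is precisely where the ill-posedness of Challenge 1 enters.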
Although such problems have already been studied extensively over the last decades, the current production of increasingly large and complex data encompasses several new challenges:
- Challenge 1: un/semi-supervised unmixing is an ill-posed inverse problem. Building an efficient algorithm therefore requires an accurate prior model for the components to be retrieved. However, state-of-the-art methods rely on hand-crafted models (e.g. sparse models), which leads to degraded results in real-world applications. In contrast, the vast amount of data now available could be used to build data-driven priors.
- Challenge 2: in real-world applications, the data to be handled are increasingly large. Typical X-ray hyperspectral data will be composed of about 10^11 pixels, which dramatically limits the applicability of standard methods. Overcoming this curse of dimensionality mandates the development of highly computationally efficient unmixing algorithms.
To overcome these limitations, the goal of this postdoc is to investigate novel unmixing approaches based on dedicated (deep) machine learning tools. Several research pathways are to be considered. More precisely, a very promising line of investigation is the use of algorithm unrolling models, introduced in a general setting by Gregor & LeCun. In a nutshell, the core of this class of inverse problem solvers is a recurrent neural network that mimics the structure of standard iterative solvers (e.g. gradient minimisation methods). Their advantage is that the learning procedure can capture data-dependent information, making the solver more adaptive to the data to be handled, which makes it a good candidate to address Challenge 1. Furthermore, only a limited number of algorithmic parameters are learnt during training, so the final solvers have a low computational cost, which would be ideal for overcoming Challenge 2.
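As a minimal illustration of the unrolling idea (not the specific method to be developed in the project), the sketch below unrolls ISTA, a standard iterative sparse solver, into a fixed number of layers; in a learned (LISTA-like) variant, the per-layer step size and threshold, hand-set here, would instead be trained from data:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the l1 norm (the sparsity-inducing step)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def unrolled_ista(x, A, n_layers=500, step=None, lam=0.05):
    """Run a fixed number of ISTA iterations, one 'layer' per iteration.

    In a learned (LISTA-like) version, `step` and `lam` (or full matrices
    replacing A.T) would be trained per layer; here they are hand-set,
    so this sketches only the unrolled structure, not the learning.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant
    z = np.zeros(A.shape[1])
    for _ in range(n_layers):
        # Gradient step on the data fidelity, then soft-thresholding.
        z = soft_threshold(z - step * A.T @ (A @ z - x), lam * step)
    return z

# Toy problem: recover a sparse code from a random dictionary.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
z_true = np.zeros(50)
z_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
x = A @ z_true
z_hat = unrolled_ista(x, A)
```

Because the number of layers is fixed and small compared to the iteration counts of classical solvers, the trained network keeps a low, predictable computational cost at inference time.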
In the scope of unmixing, several algorithm unrolling approaches have been investigated, focusing on specific solvers for non-negative matrix factorization. However, they hardly apply to the kind of data that is typical of astrophysical/remote sensing applications. For that purpose, we recently introduced an algorithm unrolling method to tackle sparse unmixing. Its main limitation is that it operates in a supervised way, since the training process requires examples of the components to be retrieved. While this assumption makes perfect sense when good physical models are available to build meaningful training samples, it clearly lacks flexibility when none exist. To that end, the project will explore:
— the development of dedicated unrolled algorithm architectures to tackle semi-supervised unmixing problems. Different research paths will be explored, including the design of various architectures and of training losses adapted to learning in a semi-supervised manner.
— the role of loss regularization, which is key to building efficient solvers. Rather than relying on a sparse regularization of the sources, another option would be to replace it with a prior better adapted to the sought-after factors. One possibility would be to resort to plug-and-play methods, re-using and improving recently introduced methods that learn a regularization on the sources S directly from a training set.
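To make the plug-and-play idea concrete, the sketch below replaces the proximal (regularization) step of a gradient-based solver with a generic denoiser. The `toy_denoiser` here is a hypothetical stand-in for the learned regularization of the sources S mentioned above; in practice it would be a network trained on representative source examples:

```python
import numpy as np

def pnp_gradient_descent(x, A, denoise, n_iters=100, step=0.2):
    """Plug-and-play proximal gradient: the prox of a hand-crafted prior
    is replaced by an off-the-shelf denoiser `denoise`, used as a black box.
    """
    s = np.zeros(A.shape[1])
    for _ in range(n_iters):
        s = s - step * A.T @ (A @ s - x)  # data-fidelity gradient step
        s = denoise(s)                    # prior step: denoiser as prox
    return s

def toy_denoiser(s, window=3):
    """Hypothetical stand-in denoiser: mild shrinkage toward a moving
    average, playing the role a trained network would play in practice."""
    kernel = np.ones(window) / window
    return 0.5 * s + 0.5 * np.convolve(s, kernel, mode="same")

# Toy problem: recover a smooth 1-D source from underdetermined mixtures.
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 40)) / np.sqrt(30)
s_true = np.sin(np.linspace(0, 3 * np.pi, 40))  # smooth source
x = A @ s_true
s_hat = pnp_gradient_descent(x, A, toy_denoiser)
```

The appeal of this scheme for the project is modularity: the data-fidelity step stays tied to the physical mixing model, while the prior can be swapped for a regularization learned from whatever training data is available.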
Other research paths might be investigated. To assess their interest over traditional blind source separation (BSS) algorithms, the developed methods will be thoroughly evaluated on both simulated and realistic data sets (e.g. remote sensing or astrophysics data).