Multiscale representations, scale-free and multifractal analysis, optimization, proximal algorithms, deep learning architectures, texture segmentation.
The objective of the proposed PhD work is to design deep learning network architectures from proximal optimization iterative schemes.
Nowadays, deep learning methodologies constitute state-of-the-art references for several image processing tasks, such as classification or segmentation. The main limitations of these supervised approaches stem from the annotation of large databases, which can be difficult and time-consuming to perform, as well as from extremely high learning costs.
Alternative strategies, which may prove less demanding in terms of database size or learning cost, rely on constructing functionals that combine data, models, and constraints expected to be satisfied by the targeted solutions, which are then obtained via proximal algorithmic schemes. Proximal optimization methods were initially proposed to minimize nonsmooth convex functions by replacing the gradient operator with a proximal operator, interpreted as an implicit gradient for non-differentiable functions. However, the performance of this class of methods may strongly depend on tunable parameters, such as the algorithm step-size ensuring convergence, or regularization hyperparameters, notably those balancing data/model fidelity against the targeted constraint penalizations.
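For illustration, a proximal-gradient (ISTA-type) scheme can be sketched on the classical $\ell_1$-penalized least-squares problem; the problem instance and all names and values below (A, y, lam, step) are illustrative choices, not taken from the cited works.

```python
import numpy as np

# Sketch of a proximal-gradient (ISTA) scheme on an l1-penalized
# least-squares problem:  min_x 0.5*||A x - y||^2 + lam*||x||_1.
# All variable names and parameter values are illustrative.

def soft_threshold(v, t):
    # proximal operator of t*||.||_1, the "implicit gradient" step
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[:3] = [1.0, -2.0, 1.5]
y = A @ x_true
lam = 0.1                                # regularization hyperparameter
step = 1.0 / np.linalg.norm(A, 2) ** 2   # step-size ensuring convergence

x = np.zeros(50)
for _ in range(500):
    grad = A.T @ (A @ x - y)                         # gradient of smooth term
    x = soft_threshold(x - step * grad, step * lam)  # proximal step

objective = 0.5 * np.sum((A @ x - y) ** 2) + lam * np.sum(np.abs(x))
```

The two kinds of tunable parameters discussed above appear explicitly: `step` (convergence) and `lam` (data-fidelity versus penalization balance); unfolding, discussed next, replaces their manual tuning by learning.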
To take advantage of both worlds, the construction of deep networks relying on unfolded iterations has emerged.
One can refer to LISTA, proposed by Gregor and LeCun as a pioneering work on the subject in the context of ``sparse coding'', where each layer is constructed as the combination of a linear transformation and the application of a proximal operator, seen as an activation function. In this construction, the number of layers of the network corresponds to the number of iterations of the proximal algorithm, and backpropagation of the gradient allows learning the tunable parameters mentioned above (step-size, regularization parameters). The theoretical foundations connecting deep architectures and iterative schemes mostly rest on existing relations between activation functions and proximal operators.
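A minimal forward pass of such an unfolded network can be sketched as follows. For simplicity, the layer weights W, S and the threshold theta are here merely initialized from the ISTA iteration rather than learned by backpropagation; in the original LISTA scheme they are the trainable parameters.

```python
import numpy as np

# LISTA-style unfolded forward pass: each "layer" is one proximal-gradient
# iteration, with soft-thresholding playing the role of the activation.
# W, S, theta would be learned; here they are set to their ISTA values
# (illustrative simplification).

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 30))
L = np.linalg.norm(A, 2) ** 2
W = A.T / L                      # input weights (learnable in LISTA)
S = np.eye(30) - (A.T @ A) / L   # recurrent weights (learnable in LISTA)
theta = 0.05 / L                 # threshold = step * lam (learnable)

def lista_forward(y, n_layers=10):
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):    # number of layers = number of iterations
        x = soft_threshold(W @ y + S @ x, theta)
    return x

y = A @ np.eye(30)[0]            # observation of a 1-sparse code
x_hat = lista_forward(y)
```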
This analogy makes it possible to guide the design of the deep network architecture from the variational formulation of the targeted problem (segmentation, inverse problem solving, etc.), as commonly developed over the last 20 years, so as to ensure understanding and robustness of the network. This combination will increase the explainability of the network structure while ensuring the robustness and reproducibility of its performance.
Workplan: Texture segmentation.
The proposed PhD work will focus on the specific task of texture segmentation.
Automated image segmentation indeed constitutes a crucial task in image processing, with application fields very different in nature, ranging from medical imaging to geophysics. For years, texture segmentation was performed via a classical two-step procedure: first, features driven by prior knowledge or expert choice are computed (e.g., Gabor filters, gradients, differences of oriented Gaussians, $\ldots$); second, these features are combined via a clustering algorithm. Recently, research has focused on combining these two steps into a single one to improve interface detection and thus segmentation performance. This was first envisaged by retaining hand-crafted features while modifying classical optimization frameworks. Deep learning further renewed this topic by jointly performing feature estimation and segmentation, rapidly followed by applications to texture segmentation.
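The classical two-step procedure can be sketched as follows, assuming an illustrative Gabor filter bank, a tiny k-means implementation, and a synthetic two-texture image (all parameter choices below are hypothetical).

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

# Two-step texture segmentation sketch:
#   (1) hand-crafted features (Gabor filter bank, smoothed magnitudes),
#   (2) clustering of pixel feature vectors (minimal k-means, k=2).

def gabor_kernel(freq, angle, size=15, sigma=3.0):
    half = size // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xx * np.cos(angle) + yy * np.sin(angle)
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def kmeans2(X, n_iter=20):
    # deterministic init: extreme points along the first feature
    centers = np.stack([X[X[:, 0].argmin()], X[X[:, 0].argmax()]])
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(2):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

# Synthetic image: left half low-frequency stripes, right half high-frequency.
n = 64
cols = np.arange(n)
img = np.zeros((n, n))
img[:, :n // 2] = np.sin(2 * np.pi * cols[:n // 2] / 16)
img[:, n // 2:] = np.sin(2 * np.pi * cols[n // 2:] / 4)

# Step 1: smoothed Gabor response magnitudes as per-pixel features.
feats = np.stack([uniform_filter(np.abs(convolve(img, gabor_kernel(f, 0.0))), 9)
                  for f in (1 / 16, 1 / 4)], axis=-1)

# Step 2: cluster the pixel feature vectors.
labels = kmeans2(feats.reshape(-1, 2)).reshape(n, n)
```

The single-step approaches discussed above replace this fixed sequential pipeline by a joint estimation of features and labels.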
In our team, we recently developed efficient segmentation tools, relying on both unsupervised and supervised strategies, to perform feature estimation and segmentation simultaneously. The combination of scale-free descriptors (based on multiscale representations) and nonsmooth proximal optimization enabled us to perform unsupervised segmentation of synthetic and real textures [4,5]. We compared and discussed the achieved performance against that obtained with a deep learning architecture based on a Convolutional Neural Network proposed in the literature.
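As a toy illustration of a scale-free descriptor (far simpler than the estimators actually used in [4,5]), a local regularity map can be obtained by regressing the log-magnitude of multiscale increments against log-scale, pixel-wise; scales, window sizes, and the synthetic image below are illustrative choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Toy scale-free descriptor: pixel-wise slope of log2(local increment
# magnitude) versus log2(scale), a crude local regularity estimate.

def local_regularity(img, scales=(1, 2, 4)):
    logs = []
    for s in scales:
        inc = np.abs(np.roll(img, s, axis=1) - img)      # increment at scale s
        logs.append(np.log2(uniform_filter(inc, 2 * s + 1) + 1e-12))
    x = np.log2(np.array(scales, dtype=float))
    y = np.stack(logs)                                   # (n_scales, H, W)
    xm = x - x.mean()
    # per-pixel least-squares slope across scales
    return (xm[:, None, None] * (y - y.mean(0))).sum(0) / (xm ** 2).sum()

rng = np.random.default_rng(2)
# two textures with different regularity: white noise vs cumulated noise
noise = rng.standard_normal((32, 64))
img = np.concatenate([noise[:, :32], np.cumsum(noise[:, 32:], axis=1)], axis=1)
h = local_regularity(img)   # higher slope = locally smoother texture
```

Such regularity maps are precisely the kind of features that can then feed a segmentation functional solved by proximal optimization.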
The objective of this thesis topic is to develop deep network architectures based on the algorithms proposed in these works, which will allow, on the one hand, a better understanding of the network structure and, on the other hand, improved robustness and performance on smaller databases.
The proposed PhD work will elaborate on these results to devise strategies permitting to perform feature discovery and segmentation jointly, by combining unsupervised and supervised strategies via deep unfolded algorithms, as sketched above.
Such a hybrid strategy is expected to permit better explainability of the designed network (and thus of the discovered discriminative features) as well as improved performance, notably for small training databases.
Laboratoire de Physique de l'ENS Lyon,
46 allée d'Italie, 69364 Lyon cedex 07
Patrice Abry (DR CNRS) & Nelly Pustelnik (CRCN CNRS)
Skills: The candidate should have some knowledge in signal and image processing, optimization, and deep learning, as well as good programming skills in Matlab, Python or Julia.
[1] P. L. Combettes and J.-C. Pesquet, Proximal splitting methods in signal processing, in: Fixed-Point Algorithms for Inverse Problems in Science and Engineering (H. H. Bauschke, R. S. Burachik, P. L. Combettes, V. Elser, D. R. Luke, and H. Wolkowicz, Eds.), pp. 185-212, Springer, New York, 2011.
[2] K. Gregor and Y. LeCun, Learning fast approximations of sparse coding, in International Conference on Machine Learning, 2010, pp. 399-406.
[3] M. Jiu and N. Pustelnik, A deep primal-dual proximal network for image restoration, accepted to IEEE JSTSP, 2021.
[4] B. Pascal, N. Pustelnik, and P. Abry, How joint fractal features estimation and texture segmentation can be cast into a strongly convex optimization problem, submitted, 2019.
[5] B. Pascal, N. Pustelnik, P. Abry, M. Serres, and V. Vidal, Joint estimation of local variance and local regularity for texture segmentation. Application to multiphase flow characterization, IEEE ICIP, Athens, Greece, October 7-10, 2018.
[6] B. Pascal, V. Mauduit, P. Abry, and N. Pustelnik, Scale-free texture segmentation: Expert feature-based versus deep learning strategies, European Signal Processing Conference (EUSIPCO), Amsterdam, The Netherlands, January 18-22, 2021.