This project deals with the processing of images recorded by sidescan sonars. Such images reveal detailed texture information about seafloors and may be used to classify any image patch into seafloor types (rocks, sand ripples, mud, sand, etc.) and/or to segment images into homogeneous zones.
Various works in this domain [1]–[6] have studied different approaches, using various signal-processing algorithms to extract discriminative information and various supervised or unsupervised classifiers to classify it.
Recently, deep learning approaches have shown promising progress [7]. They are attractive because they replace the traditional handcrafted-feature stage with an automatic feature-learning stage, extract hierarchical representations at various levels of abstraction, and yield invariant information (insensitive to contrast changes between images and, more generally, to small deformations). The modular structure of deep learning approaches also offers the opportunity to jointly classify and segment images [8], [9].
In this context, the aim of the project is to develop new deep-learning-based methods to characterize and exploit seafloor information recorded by sonar systems. A first effort has already been made with the development of classical supervised and unsupervised architectures.
The successful candidate should propose a deep convolutional neural network architecture that jointly classifies and segments seafloors. A number of key points will have to be addressed in this work: the choice of architecture, the evaluation of results on the available sonar image databases, the comparison with current state-of-the-art approaches, and the interpretation of the learned representations.
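To illustrate the joint classification/segmentation idea mentioned above: in a fully convolutional network, a single forward pass produces a per-pixel map of class scores, so the same network segments the image (pixel-wise argmax) and classifies the whole patch (argmax after global pooling). The following is a minimal NumPy sketch, not a proposed architecture; all layer sizes, filter counts, and the number of seafloor classes are hypothetical, and the "learned" filters are random placeholders.

```python
import numpy as np

def conv2d(x, kernels):
    """Valid 2-D convolution: x is (H, W, C_in), kernels is (k, k, C_in, C_out)."""
    k = kernels.shape[0]
    H, W, _ = x.shape
    out = np.empty((H - k + 1, W - k + 1, kernels.shape[3]))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + k, j:j + k, :]                    # (k, k, C_in)
            out[i, j] = np.tensordot(patch, kernels, axes=3)  # contract -> (C_out,)
    return out

rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32, 1))       # hypothetical sonar intensity patch
w1 = rng.standard_normal((3, 3, 1, 8)) * 0.1   # stand-in for learned 3x3 filters
w2 = rng.standard_normal((1, 1, 8, 4)) * 0.1   # 1x1 conv mapping to 4 classes

features = np.maximum(conv2d(image, w1), 0.0)  # conv + ReLU feature maps
scores = conv2d(features, w2)                  # (30, 30, 4) per-pixel class scores

segmentation = scores.argmax(axis=-1)            # pixel-wise labels (segmentation)
patch_label = scores.mean(axis=(0, 1)).argmax()  # global pooling -> patch label
```

The point of the sketch is that no architectural change is needed to switch between the two tasks: the 1x1 convolution keeps the spatial grid for segmentation, and global average pooling collapses it for patch-level classification, which is the mechanism exploited by fully convolutional approaches such as [8].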
He/she will then be expected to propose subsequent exploratory directions among new supervised and/or unsupervised deep learning architectures.
The successful candidate should have a strong background (PhD or computer engineering degree) in image processing, image classification, and/or deep learning. He/she should have advanced programming skills (Python and/or MATLAB). Experience with one of the available deep learning toolboxes (TensorFlow, (Py)Torch, Keras, MatConvNet, etc.) will be greatly appreciated.
[1] N.-E. Lasmar, A. Baussard, and G. Le Chenadec, “Asymmetric power distribution model of wavelet subbands for texture classification,” Pattern Recognit. Lett., vol. 52, pp. 1–8, 2015.
[2] H.-G. Nguyen, R. Fablet, A. Ehrold, and J.-M. Boucher, “Keypoint-based analysis of sonar images: Application to seabed recognition,” IEEE Trans. Geosci. Remote Sens., vol. 50, pp. 1171–1184, 2012.
[3] G. Le Chenadec, “Analyse de descripteurs énergétiques et statistiques de signaux sonar pour la caractérisation des fonds marins,” Ph.D. dissertation, Université de Bretagne Occidentale, 2004.
[4] L. Picard, A. Baussard, G. Le Chenadec, and I. Quidu, “Potential of the intrinsic dimensionality for characterizing the seabed in the ATR context,” in Proc. IEEE/OES & MTS OCEANS Conf., 2015.
[5] A. N. Chabane and B. Zerr, “Unsupervised knowledge discovery of seabed types using competitive neural network: Application to sidescan sonar images,” in Proc. OCEANS 2014 - St. John’s, 2014, pp. 1–5.
[6] M. Mignotte, C. Collet, P. Perez, and P. Bouthemy, “Sonar image segmentation using an unsupervised hierarchical MRF model,” IEEE Trans. Image Process., vol. 9, no. 7, pp. 1216–1231, Jul. 2000.
[7] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, May 2015.
[8] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2015, pp. 3431–3440.
[9] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Region-based convolutional networks for accurate object detection and segmentation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 38, no. 1, pp. 142–158, Jan. 2016.
(c) GdR 720 ISIS - CNRS - 2011-2015.