IJCNN 2013: special session on "Unsupervised learning: Bayesian regularization and sparsity"

4 February 2013

Category: International conference

Dear colleague,

I am pleased to inform you that paper submission is now open for the special session on

"Unsupervised model-based learning: Bayesian regularization and sparsity" (session number S24)

organized within the International Joint Conference on Neural Networks (IJCNN) 2013. The conference will be held on August 4-9, 2013 in Dallas, Texas, and the paper submission deadline is 22 February 2013. More information is available below as well as on the special session website.

I would appreciate it if you could kindly inform your colleagues and friends who may be interested in submitting a paper.

Please don't hesitate to contact me if you have any questions.

Best Regards,



Unsupervised model-based learning approaches aim at automatically acquiring "knowledge" from data, for representation, analysis, interpretation, etc., by learning a probabilistic model. They are well suited to the many applications where supervision (e.g., expert information) is missing, hidden, or difficult to obtain. In particular, mixture-model-based approaches are among the most popular and successful unsupervised learning approaches. They are widely used in cluster analysis for automatically finding clusters, in discriminant analysis when the classes are dispersed, in regression when the predictor is governed by a hidden process, and in generative topographic learning approaches.

In these unsupervised model-based learning approaches, the model parameters are most often learned in a maximum likelihood (ML) estimation framework using the well-known EM algorithm. The best model can then be selected using an information criterion, generally formulated as a penalized maximum likelihood, such as the BIC. Bayesian regularization also makes it possible to overcome some problems of the ML approach, such as likelihood singularities, and to include prior knowledge. The Bayesian formulation leads to maximum a posteriori (MAP) estimation of the model parameters, which can also be performed using the EM algorithm. Model selection can still be carried out with slightly modified information criteria involving the maximized posterior rather than the maximized likelihood.

Furthermore, unsupervised learning approaches can also be used to represent a dataset before running a supervised learning task. The aim is then to learn dictionaries for sparse data representations. Standard sparse coding algorithms (e.g., pursuit algorithms, the LASSO), which generally optimize a penalized deterministic cost function, can be shown to be a specific constrained case of a general probabilistic Bayesian model optimizing a MAP criterion.
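The LASSO-as-MAP correspondence mentioned above can be made concrete in the simplest setting, an identity design matrix: with Gaussian noise and an i.i.d. Laplace prior on the coefficients, maximizing the posterior is exactly minimizing a squared-error term plus an L1 penalty, and the solution is soft-thresholding. This is a minimal sketch of that special case; the function names and parameter choices are illustrative, not from the session description.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_map(y, sigma2=1.0, prior_scale=1.0):
    """MAP estimate of w in the model y = w + noise, noise ~ N(0, sigma2 I),
    with an iid Laplace(0, prior_scale) prior on each coefficient.

    The negative log-posterior is (up to constants)
        ||y - w||^2 / (2 * sigma2) + ||w||_1 / prior_scale,
    i.e. a LASSO objective with lambda = sigma2 / prior_scale, whose
    closed-form minimizer for this identity design is soft-thresholding.
    """
    lam = sigma2 / prior_scale
    return soft_threshold(np.asarray(y, dtype=float), lam)
```

For a general dictionary the same MAP objective no longer has a closed form and is typically minimized by coordinate descent or proximal methods, but the probabilistic reading is unchanged: the deterministic penalty is the negative log-prior.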

The aims of this session are as follows. First, we aim to investigate how probabilistic (non-Bayesian) model-based approaches can best be regularized from a Bayesian perspective, in particular through the choice of prior hyperparameters appropriate for the task at hand, namely cluster analysis, hidden-process regression analysis, or functional data analysis. Second, we aim to show how Bayesian approaches provide a more general framework for sparse representations than standard deterministic sparse coding algorithms. Finally, since the two Bayesian approaches above are both parametric, in the sense that they control a given model, the third objective is to explore how this can be extended with non-parametric Bayesian approaches, in particular the infinite mixture model and its use in both model-based clustering and Bayesian sparse representation.

Key words:

Unsupervised generative learning; Latent data models; Model-based clustering; Hidden process regression; Functional data analysis; Bayesian regularization; Bayesian sparse representation; Non-parametric Bayesian models; Sparse coding; Applications

Program Committee:

  1. Pr. Geoff McLachlan, University of Queensland, Australia

  2. Pr. Marcel Van Gerven, Radboud University, The Netherlands

  3. Pr. Mustapha Lebbah, Paris 13 University, France

  4. Pr. Nizar Bouguila, Concordia University, Montreal, Canada

  5. Pr. Ian Wood, University of Queensland, Australia

  6. Pr. Hervé Glotin, Southern University of Toulon-Var, France

  7. Dr. Guénael Cabanes, University of Sydney, Australia

  8. Pr. Latifa Oukhellou, IFSTTAR Institute, France

  9. Mingyuan Zhou, Duke University, USA

  10. Dr. Nistor Grozavu, Paris 13 University, France

  11. Dr. Nicoleta Rogovschi, Paris 5 University, France

  12. Dr. Faicel Chamroukhi, Southern University of Toulon-Var, France