Announcement

17 October 2019

Deep Network Training under Uncertainty


Category: Internship


Misclassification is an important issue with many potential negative consequences. It occurs when a learning algorithm assigns a wrong label to a given sample. It can decrease the accuracy of predictions while increasing the complexity of the inferred models and the number of training samples required. Understanding the underlying causes of label errors is one of the main challenges of classification methods.

Indeed, to unveil the reasons for misclassification, one should understand exactly what causes the poor performance.
This helps to prioritize certain directions over others when handling misclassification and thus to resolve the model's under-performance.
In the field of computer vision, for instance in scene classification, circumstances such as a cluttered environment or an unknown scene configuration can lead to poor object localization and consequently to data misclassification.
Mislabeling is one of the sources of misclassification; it is in turn caused by crowdsourcing and/or user input (labelers). Classification algorithms generally perform well when the positive and negative data points are clearly separated, in other words, when the demarcation line between classes is clear.
Otherwise, classification naturally becomes error prone. In such a case, one way to alleviate the problem is to train the algorithm with more labeled data around the demarcation line.
Overfitting or underfitting along a given dimension can also be a source of misclassification. Both happen when a classifier cannot perfectly distinguish between classes:
it does not learn the main differentiator between classes and therefore assigns data points to the wrong class. Underfitting means that the model fails to learn a dimension from the available data; this can be mitigated by training the classifier to rely on the main differentiator more than on other parameters.
Similarly, the model can overfit a particular dimension,
which means the training data did not contain enough examples to protect the model against such misclassification.
Although training the algorithms with more labeled data may help to overcome these issues, providing labels for large amounts of unlabeled data is challenging because labeling is expensive and time-consuming. Moreover, it is an error-prone process, since the required human-in-the-loop interactions naturally produce mislabeled datasets and hence misclassification.
 
Many attempts have been made to make machine learning algorithms robust to label noise (see a detailed survey on this subject in [1]).
Mislabeling can also have a negative impact on the training of a convolutional neural network for image classification: learning these label errors can lead to misclassification and a decrease in overall performance, with lower than expected image detection rates. One approach adds an extra noise layer to the network, which adapts the network outputs to match the noisy label distribution [2].
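As a rough illustration of this noise-layer idea, a minimal PyTorch-style sketch is given below. It only conveys the general mechanism of mapping the clean-class posterior to the noisy-label distribution through a learned transition matrix; the class names, initialization, and training details are assumptions and not the exact construction of [2].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseAdaptationLayer(nn.Module):
    """Maps the clean-class posterior p(y|x) to the noisy-label
    distribution p(y_noisy|x) through a learned confusion matrix.
    Illustrative sketch; initialization and constraints are assumptions."""
    def __init__(self, num_classes):
        super().__init__()
        # Initialized close to the identity so the layer starts as a no-op;
        # a row-wise softmax keeps each row a valid probability distribution.
        self.transition = nn.Parameter(torch.eye(num_classes) * 5.0)

    def forward(self, clean_probs):
        T = F.softmax(self.transition, dim=1)  # T[i, j] ~ p(noisy=j | clean=i)
        return clean_probs @ T

class NoisyTrainingWrapper(nn.Module):
    """Appends the noise layer to a base classifier during training only;
    the noise layer is dropped at test time."""
    def __init__(self, base_model, num_classes):
        super().__init__()
        self.base_model = base_model
        self.noise_layer = NoiseAdaptationLayer(num_classes)

    def forward(self, x):
        clean_probs = F.softmax(self.base_model(x), dim=1)
        # Returns p(noisy label | x); train with nn.NLLLoss on its log
        # against the (possibly noisy) dataset labels.
        return self.noise_layer(clean_probs)
```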
A classification method was proposed in [3], designed especially for cases where multiple objects are present in the scene and where the context of the scene impacts the classification result. Although the method showed better results using a smaller amount of data than state-of-the-art methods, its classification computation time remains high, which can be problematic for time-sensitive classification tasks. There is therefore a need for a robust method that tackles the misclassification problem by producing very high accuracy (≥ 99.97%) together with a very low computation time.
 
The first goal of this internship is to develop an algorithm to detect label errors, which will ultimately improve object recognition on datasets that potentially contain label errors. The next objective is to provide an innovative tool to train a convolutional neural network that is more robust to mislabeled data and that also distinguishes mislabeling errors from misclassification errors.
The training of the convolutional neural network can be based on a label error probability (i.e. a probability, assigned to each image, that it contains a label error). Images suspected of having a label error will have a reduced impact on the training, as in the weighted-loss sketch below.
The developed tool should be able to reach the desired accuracy in the presence of mislabeled or misclassified data points within a very short time.
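A minimal sketch of such a weighting scheme, assuming a per-image label error probability is already available from a separate estimator, is given below; the function name and the particular weighting (1 - p_error) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def weighted_cross_entropy(logits, labels, p_error):
    """Cross-entropy where each image contributes in proportion to the
    probability that its label is correct (1 - p_error).
    `p_error` is assumed to come from a separate label-error estimator."""
    per_sample_loss = F.cross_entropy(logits, labels, reduction="none")
    weights = 1.0 - p_error                      # suspected label errors count less
    weights = weights / (weights.sum() + 1e-12)  # keep the loss scale comparable
    return (weights * per_sample_loss).sum()
```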
The objective function could optimize a metric based on the distance between a data point and the predicted class, the distance of an image to the other classes, and/or the distance between the image and the center of its labeled class. Such a metric may help improve the detection of label errors, especially when two images lie at a similar distance from their expected label but only one of them carries a label error. In other words, it aims at distinguishing correct classifications from errors: a misclassified point lying at a small distance has a higher probability of being wrongly labeled than one lying at a larger distance.
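One possible instantiation of such a distance-based score, assuming a feature embedding of the images is available and every class appears in the training set, is sketched below; the ratio form and the function name are assumptions, not a prescribed metric.

```python
import numpy as np

def label_error_scores(embeddings, labels, num_classes):
    """Distance-based label-error score (illustrative sketch).
    For each image: ratio of the distance to the center of its *labeled*
    class over the distance to the closest *other* class center.
    A large ratio (far from its own label, close to another class)
    suggests a likely label error."""
    centers = np.stack([embeddings[labels == c].mean(axis=0)
                        for c in range(num_classes)])
    # Distances to every class center, shape (n_samples, num_classes).
    d = np.linalg.norm(embeddings[:, None, :] - centers[None, :, :], axis=2)
    d_label = d[np.arange(len(labels)), labels]
    own_class = np.eye(num_classes, dtype=bool)[labels]
    d_other = np.where(own_class, np.inf, d).min(axis=1)
    return d_label / (d_other + 1e-12)
```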
Although regular convolutional neural networks are powerful tools for image classification, their performance is hampered by their vulnerability to adversarial attacks. This is because they assign high confidence to regions with few or even no feature points (i.e. points in a feature space obtained through a non-linear transformation of the input space that extracts a meaningful representation of the input data).
A Radial Basis Function (RBF) network could potentially be used in this project to develop a non-linear classifier.
An RBF network assigns high confidence exclusively to the regions containing enough feature points. As claimed by Buhmann (2003), RBF networks are naturally immune to adversarial and rubbish-class examples, in the sense that they give low confidence to such examples. RBF units are activated within a well-circumscribed region of their input space, so they can make the regions of each class in the feature space finite and narrow.
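For illustration, a minimal sketch of a Gaussian RBF layer that could serve as the building block of such a non-linear classifier is given below; the number of units, the learned per-unit widths, and the final linear read-out are assumptions.

```python
import torch
import torch.nn as nn

class RBFLayer(nn.Module):
    """Gaussian radial basis units: each unit responds only within a
    well-circumscribed region of the feature space around its center,
    so activations decay towards zero far from the training data.
    Sketch; the architecture details are assumptions."""
    def __init__(self, in_features, num_units):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_units, in_features))
        self.log_beta = nn.Parameter(torch.zeros(num_units))  # per-unit width

    def forward(self, x):
        # Squared Euclidean distances between inputs and unit centers.
        dist2 = torch.cdist(x, self.centers).pow(2)
        return torch.exp(-self.log_beta.exp() * dist2)

# A non-linear classifier could then combine a feature extractor with
# RBF units followed by a linear read-out, e.g. (names hypothetical):
# model = nn.Sequential(backbone, RBFLayer(feat_dim, 64), nn.Linear(64, num_classes))
```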

 

 

Ghazaleh Khodabandelou, Amir Nakib

Laboratoire Images, Signaux et Systèmes Intelligents (LISSI), Domaine Chérioux, 122 rue Paul Armangot, 94400 Vitry-sur-Seine

Contact: ghazaleh.khodabandelou@u-pec.fr, amir.nakib@u-pec.fr

 

 

There is the possibility to continue this internship with a PhD thesis.

