Deep Gaussian Uniform Mixtures for Robust Regression


Stéphane Lathuilière, Pablo Mesejo-Santiago, Xavier Alameda-Pineda, and Radu Horaud


Last week, our paper on robust regression was presented at ECCV'18 in Munich. This paper presents a methodological framework for robust regression that combines the representation power of deep architectures with the outlier detection capabilities of probabilistic models, in particular of a Gaussian-Uniform mixture (GUM). I am really proud of this piece of work, which humbly contributes to science at the crossroads of machine learning, pattern recognition and computer vision. The code will soon be available at https://github.com/Stephlat.
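For intuition, the likelihood that such a mixture assigns to a target y given an input x takes roughly the following form (my own notation, with \phi(x; w) the network prediction, \pi the inlier prior, \Sigma the Gaussian covariance and \mathcal{U} a uniform density over the target volume; the paper's exact parameterization may differ):

$$p(y \mid x) \;=\; \pi \, \mathcal{N}\big(y;\, \phi(x; w), \Sigma\big) \;+\; (1 - \pi)\, \mathcal{U}(y; \gamma)$$

In the E-step, the responsibility $r_n = \pi \mathcal{N}_n \,/\, \big(\pi \mathcal{N}_n + (1 - \pi)\, \mathcal{U}_n\big)$ gives the posterior probability that training pair n is an inlier, and it is this quantity that drives the cleaning of the training set before the supervised step.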

Abstract: In this paper we address the problem of how to robustly train a ConvNet for regression, i.e. deep robust regression. Traditionally, deep regression employs the L2 loss function, which is known to be sensitive to outliers: samples that either lie at an abnormal distance from the majority of the training samples, or that correspond to wrongly annotated targets. During back-propagation, outliers may therefore bias the training process because of the high magnitude of their gradients. In this paper we propose DeepGUM: a deep regression model that is robust to outliers thanks to the use of a Gaussian-uniform mixture model. We derive an optimization algorithm that alternates between the unsupervised detection of outliers using expectation-maximization and the supervised training with cleaned samples using stochastic gradient descent. DeepGUM adapts to a continuously evolving outlier distribution, avoiding the need to manually impose a threshold on the proportion of outliers in the training set. Extensive experimental evaluations on four different tasks (facial and fashion landmark detection, age and head pose estimation) lead us to conclude that our novel robust technique provides reliability in the presence of various types of noise and protection against a high percentage of outliers.
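While the official code is not yet released, here is a minimal sketch of the alternation the abstract describes: EM fits a Gaussian-uniform mixture to the current residuals and scores each sample's probability of being an inlier, and those scores are then used in the supervised pass. Everything below (the function names, the plain-numpy linear regressor standing in for the ConvNet, and the per-sample weighting scheme) is my own illustration under those assumptions, not the paper's implementation.

```python
import numpy as np

def em_gaussian_uniform(residuals, volume, n_iter=30):
    """Fit a 1-D Gaussian-uniform mixture to regression residuals with EM.

    residuals : (N,) array of signed errors  y_n - prediction_n
    volume    : size of the region covered by the uniform (outlier) component
    Returns r, the posterior probability that each sample is an inlier.
    """
    pi = 0.9                                   # initial inlier prior
    var = np.var(residuals) + 1e-8             # initial Gaussian variance
    uniform = 1.0 / volume                     # constant outlier density
    r = np.ones_like(residuals)
    for _ in range(n_iter):
        gauss = np.exp(-0.5 * residuals ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pi * gauss / (pi * gauss + (1.0 - pi) * uniform + 1e-12)   # E-step
        pi = r.mean()                                                  # M-step
        var = (r * residuals ** 2).sum() / (r.sum() + 1e-12) + 1e-8
    return r

def robust_fit(X, y, n_rounds=5, epochs=50, lr=1e-2):
    """Alternate EM-based inlier scoring with responsibility-weighted SGD.

    A toy linear regressor stands in for the ConvNet; the point here is the
    alternation between outlier detection and supervised training.
    """
    w, b = np.zeros(X.shape[1]), 0.0
    weights = np.ones(len(y))                  # initially trust every sample
    for _ in range(n_rounds):
        for _ in range(epochs):                # SGD weighted by inlier scores
            for i in np.random.permutation(len(y)):
                err = X[i] @ w + b - y[i]
                w -= lr * weights[i] * err * X[i]
                b -= lr * weights[i] * err
        residuals = X @ w + b - y              # re-score samples with EM
        volume = residuals.max() - residuals.min() + 1e-8
        weights = em_gaussian_uniform(residuals, volume)
    return w, b, weights

# Toy usage: a linear target with 10% of the annotations corrupted.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
y[:20] = rng.uniform(-10, 10, size=20)
w, b, weights = robust_fit(X, y)
```

Note that the sketch weights samples by their inlier probability rather than removing them with a hard cut, which is one way of avoiding a fixed, manually imposed threshold on the proportion of outliers, in the spirit of the abstract.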
