ACMMM’16 Tutorial on learning from noisy and missing data
We will be giving a tutorial at ACM Multimedia’2016: Emerging Topics in Learning from Noisy and Missing Data. The tutorial will take place on October 16th, in collaboration with Dr. Timothy Hospedales, Prof. Elisa Ricci, Prof. Xiaogang Wang and Prof. Nicu Sebe. The abstract reads as follows.
While vital for handling most computer vision problems, collecting large-scale, fully annotated datasets is a resource-consuming, often unaffordable task. Indeed, on the one hand, datasets need to be large and varied enough that learning strategies can successfully exploit the variability inherent in real data; on the other hand, they should be small enough to be fully annotated at a reasonable cost. With the overwhelming success of (deep) learning methods, the traditional problem of balancing dataset size against annotation resources has become a full-fledged dilemma. In this context, methodological approaches able to deal with partially annotated datasets represent a one-of-a-kind opportunity to find the right balance between data variability and annotation cost. These include methods able to handle noisy, weak or partial annotations.
In this tutorial we will present several recent methodologies for addressing different visual tasks under the assumption of noisy or weakly annotated datasets. Special emphasis will be given to methods based on deep architectures for unsupervised domain adaptation, and to low-rank modeling for learning in transductive settings and for zero-shot learning. We will show how these approaches achieve excellent performance on crucial tasks such as pedestrian detection and fine-grained visual recognition. Furthermore, we will discuss emerging application domains that are of great interest to the computer vision community and where handling noisy or missing information is essential. For instance, we will present recent work on complex scene analysis using wearable sensors, on the estimation of physiological signals from face videos in realistic conditions, and on the recognition of emotions elicited by abstract paintings.
And here is a picture from the end of the session: