ICCVW 2017 paper on Exploiting the Complementarity of Audio-Visual Data for Probabilistic Multi-Speaker Tracking


Yutong Ban, Laurent Girin, Xavier Alameda-Pineda and Radu Horaud


We have a paper accepted at the ICCV 2017 Workshop on Computer Vision for Audio-Visual Media, on Exploiting the Complementarity of Audio-Visual Data for Probabilistic Multi-Speaker Tracking [1].

Abstract: Multi-speaker tracking is a central problem in human-robot interaction. In this context, exploiting auditory and visual information is gratifying and challenging at the same time. Gratifying because the complementary nature of auditory and visual information allows us to be more robust against noise and outliers than uni-modal approaches. Challenging because how to properly fuse auditory and visual information for multi-speaker tracking is far from being a solved question. In this paper we propose a probabilistic generative model that tracks multiple speakers by jointly exploiting auditory and visual features in their natural representation spaces. Importantly, the method is robust to missing data and it is thus able to track when only one of the modalities is present. Quantitative and qualitative results on the AVDIAR dataset are reported.
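One reason a probabilistic formulation copes well with missing data is that an absent modality can simply be dropped from the observation likelihood. The sketch below is a minimal, hypothetical illustration of this idea (it is not the model from the paper): a Gaussian position estimate is refined by whichever of the audio and visual observations are available, and falls back to a uni-modal update when one of them is missing. The function name and noise values are assumptions made purely for illustration.

```python
# Hypothetical sketch (not the authors' model): fusing audio and visual
# observations of a speaker's 2-D position under independent Gaussian noise.
# When a modality is missing, its likelihood term is simply omitted, so the
# estimate degrades gracefully to a uni-modal update.
import numpy as np

def fuse_gaussian_observations(prior_mean, prior_cov, obs_list):
    """Posterior mean/covariance given zero or more Gaussian observations,
    each supplied as a (mean, covariance) pair.

    With no observations the prior is returned unchanged; with one
    observation the update is uni-modal; with both modalities the
    information (inverse-covariance) contributions simply add up.
    """
    info = np.linalg.inv(prior_cov)           # prior information matrix
    info_mean = info @ prior_mean
    for obs_mean, obs_cov in obs_list:
        obs_info = np.linalg.inv(obs_cov)
        info += obs_info                      # accumulate information
        info_mean += obs_info @ obs_mean
    post_cov = np.linalg.inv(info)
    return post_cov @ info_mean, post_cov

if __name__ == "__main__":
    prior = (np.array([0.0, 0.0]), np.eye(2) * 4.0)

    visual = (np.array([1.0, 0.5]), np.eye(2) * 0.2)  # precise face detection
    audio = (np.array([1.4, 0.3]), np.eye(2) * 1.0)   # coarser sound localization

    # Both modalities present: the fused estimate is dominated by vision
    # but still pulled toward the audio observation.
    mean_av, _ = fuse_gaussian_observations(*prior, [visual, audio])

    # Visual detection missing (e.g. the speaker is occluded): tracking
    # continues from the audio observation alone.
    mean_a, _ = fuse_gaussian_observations(*prior, [audio])

    print("audio-visual estimate:", mean_av)
    print("audio-only estimate:  ", mean_a)
```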

References:

[1] Y. Ban, L. Girin, X. Alameda-Pineda, and R. Horaud, "Exploiting the Complementarity of Audio-Visual Data for Probabilistic Multi-Speaker Tracking," ICCV 2017 Workshop on Computer Vision for Audio-Visual Media, 2017.