In this paper, we present an approach for speaker change detection in broadcast video using joint audio-visual scene change statistics. Our experiments indicate that using joint a...
Face-to-face meetings usually encompass several modalities including speech, gesture, handwriting, and person identification. Recognition and integration of each of these modalit...
Ralph Gross, Michael Bett, Hua Yu, Xiaojin Zhu, Yu...
In this paper, we present a novel approach for tracking a lecturer during the course of his speech. We use features from multiple cameras and microphones, and process them in a jo...
Kai Nickel, Tobias Gehrig, Rainer Stiefelhagen, Jo...
Abstract. We propose a probabilistic model of single source multimodal generation and show how algorithms for maximizing mutual information can find the correspondences between com...