
Sequential Monte Carlo Fusion of Sound and Vision for Speaker Tracking

Video telephony could be considerably enhanced by a tracking system that allows the speaker freedom of movement while maintaining a well-framed image for transmission over limited bandwidth. Commercial multi-microphone systems already exist that track speaker direction in order to reject background noise. Stereo sound and vision are complementary modalities: sound is good for initialisation (where vision is expensive), whereas vision is good for localisation (where sound is less precise). Using generative probabilistic models and particle filtering, we show that stereo sound and vision can indeed be fused effectively, making a system more capable than with either modality on its own.
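To illustrate the kind of sequential Monte Carlo fusion the abstract describes, the sketch below runs a simple particle filter over the speaker's horizontal image position and multiplies together an audio likelihood and a visual likelihood when each measurement is available. It is a minimal, hypothetical sketch: the random-walk dynamics, Gaussian likelihoods, noise levels, and all variable names are assumptions for illustration, not the paper's actual generative models.

```python
import numpy as np

# Minimal particle-filter sketch of audio-visual fusion (illustrative only).
# Particles represent the speaker's horizontal image position x; each step
# combines a coarse audio measurement (e.g. from a stereo bearing estimate)
# with a precise visual measurement (e.g. from contour/template matching).

N_PARTICLES = 500
MOTION_STD = 5.0    # random-walk dynamics noise (pixels), assumed
AUDIO_STD = 40.0    # audio is imprecise but good for initialisation
VISION_STD = 4.0    # vision is precise once roughly initialised

rng = np.random.default_rng(0)
particles = rng.uniform(0, 640, N_PARTICLES)   # image x-coordinates
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)

def gaussian_likelihood(z, x, std):
    return np.exp(-0.5 * ((z - x) / std) ** 2)

def step(particles, weights, z_audio=None, z_vision=None):
    # 1. Predict: propagate particles through a simple random-walk model.
    particles = particles + rng.normal(0.0, MOTION_STD, particles.size)
    # 2. Update: multiply in whichever measurement likelihoods are available.
    if z_audio is not None:
        weights = weights * gaussian_likelihood(z_audio, particles, AUDIO_STD)
    if z_vision is not None:
        weights = weights * gaussian_likelihood(z_vision, particles, VISION_STD)
    weights = weights / weights.sum()
    # 3. Resample when the effective sample size drops too low.
    if 1.0 / np.sum(weights ** 2) < particles.size / 2:
        idx = rng.choice(particles.size, particles.size, p=weights)
        particles = particles[idx]
        weights = np.full(particles.size, 1.0 / particles.size)
    return particles, weights

# Example: audio gives a coarse fix, then vision sharpens the estimate.
particles, weights = step(particles, weights, z_audio=320.0)
particles, weights = step(particles, weights, z_audio=318.0, z_vision=325.0)
print("estimate:", np.average(particles, weights=weights))
```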
Added 15 Oct 2009
Updated 31 Oct 2009
Type Conference
Year 2001
Where ICCV
Publisher IEEE
Authors Jaco Vermaak, Michel Gangnet, Andrew Blake, Patrick Pérez