Sciweavers

9 search results - page 2 / 2
» Multimodal Phoneme Recognition of Meeting Data
ICIP 2007 (IEEE)
Robust Multi-Modal Group Action Recognition in Meetings from Disturbed Videos with the Asynchronous Hidden Markov Model
The Asynchronous Hidden Markov Model (AHMM) models the joint likelihood of two observation sequences, even if the streams are not synchronised. We explain this concept and how the...
Marc Al-Hames, Claus Lenz, Stephan Reiter, Joachim...
MLMI 2007 (Springer)
Meeting State Recognition from Visual and Aural Labels
In this paper we present a meeting state recognizer based on a combination of multimodal sensor data in a smart room. Our approach is based on the training of a statistical model...
Jan Curín, Pascal Fleury, Jan Kleindienst, ...
ICIP 2003 (IEEE)
On automatic annotation of meeting databases
In this paper, we discuss meetings as an application domain for multimedia content analysis. Meeting databases are a rich data source suitable for a variety of audio, visual and m...
Daniel Gatica-Perez, Hervé Bourlard, Iain M...
TASLP 2008
Recognition of Dialogue Acts in Multiparty Meetings Using a Switching DBN
Abstract: This paper is concerned with the automatic recognition of dialogue acts (DAs) in multiparty conversational speech. We present a joint generative model for DA recognition...
Alfred Dielmann, Steve Renals