MLMI 2007, Springer

Meeting State Recognition from Visual and Aural Labels

In this paper we present a meeting state recognizer based on a combination of multi-modal sensor data in a smart room. Our approach trains a statistical model on semantic cues generated by perceptual components, which produce these cues by processing the output of one or more sensors. The recognizer is designed to work with an arbitrary combination of multi-modal input sensors. We define a set of states representing both meeting and non-meeting situations, and a set of features on which we base our classification. We can thus model situations such as presentation or break, which are important information for many applications. Since appropriate multi-modal corpora are currently scarce, we have hand-annotated a set of meeting recordings to verify our statistical classification. We have also compared several statistical classification methods, which we validated on the hand-annotated corpus of rea...
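The abstract does not specify which statistical classifiers were compared, so as a purely illustrative sketch, the setup can be imagined as a naive Bayes classifier over discrete semantic cues: all cue names, cue values, and state labels below are hypothetical placeholders, not taken from the paper.

```python
import math
from collections import Counter, defaultdict

# Hypothetical discrete semantic cues emitted by perceptual components,
# paired with hand-annotated meeting states (all values are illustrative).
train = [
    ({"speech": "one_speaker", "motion": "low",  "slides": "on"},  "presentation"),
    ({"speech": "one_speaker", "motion": "low",  "slides": "on"},  "presentation"),
    ({"speech": "overlapping", "motion": "high", "slides": "off"}, "break"),
    ({"speech": "overlapping", "motion": "high", "slides": "off"}, "break"),
    ({"speech": "alternating", "motion": "low",  "slides": "off"}, "discussion"),
]

def fit(data, alpha=1.0):
    """Estimate class priors and per-feature likelihoods with Laplace smoothing."""
    priors = Counter(label for _, label in data)
    counts = defaultdict(Counter)   # (label, feature) -> value counts
    values = defaultdict(set)       # feature -> set of observed values
    for cues, label in data:
        for feat, val in cues.items():
            counts[(label, feat)][val] += 1
            values[feat].add(val)
    return priors, counts, values, alpha, len(data)

def predict(model, cues):
    """Return the most probable meeting state for a set of observed cues."""
    priors, counts, values, alpha, n = model
    best, best_lp = None, -math.inf
    for label, prior in priors.items():
        lp = math.log(prior / n)
        for feat, val in cues.items():
            c = counts[(label, feat)]
            lp += math.log((c[val] + alpha) /
                           (sum(c.values()) + alpha * len(values[feat])))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = fit(train)
print(predict(model, {"speech": "one_speaker", "motion": "low", "slides": "on"}))
# -> presentation
```

In the same spirit, the paper's hand-annotated recordings would play the role of `train`, and held-out annotated segments would validate the classifier; the actual feature set and methods are those described in the paper, not this sketch.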
Added: 08 Jun 2010
Updated: 08 Jun 2010
Type: Conference
Year: 2007
Where: MLMI
Authors: Jan Curín, Pascal Fleury, Jan Kleindienst, Robert Kessl