Sciweavers

11 search results (page 1 of 3) for "Meeting State Recognition from Visual and Aural Labels"
MLMI 2007 (Springer)
Meeting State Recognition from Visual and Aural Labels
In this paper we present a meeting state recognizer based on a combination of multi-modal sensor data in a smart room. Our approach is based on the training of a statistical model ...
Jan Curín, Pascal Fleury, Jan Kleindienst, ...
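A minimal illustrative sketch of the general idea in this entry (not the authors' actual system): discrete visual and aural activity labels for a meeting segment are fed to a simple statistical classifier that predicts the meeting state. The label vocabularies, meeting states, and the choice of a naive Bayes model are assumptions made only for illustration.

    # Illustrative only: label names, meeting states, and the naive Bayes
    # model are assumptions, not taken from the paper.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # One observation per meeting segment: discrete labels produced by the
    # visual and aural sensor pipelines of the smart room.
    train_obs = [
        {"visual": "people_seated", "aural": "multiple_speakers"},
        {"visual": "person_at_screen", "aural": "single_speaker"},
        {"visual": "empty_room", "aural": "silence"},
        {"visual": "people_standing", "aural": "crosstalk"},
    ]
    train_states = ["discussion", "presentation", "no_meeting", "break"]

    model = make_pipeline(DictVectorizer(), MultinomialNB())
    model.fit(train_obs, train_states)
    print(model.predict([{"visual": "people_seated", "aural": "single_speaker"}]))
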
ICMCS 2005 (IEEE)
A Multi-Modal Mixed-State Dynamic Bayesian Network for Robust Meeting Event Recognition from Disturbed Data
In this work we present a novel multi-modal mixed-state dynamic Bayesian network (DBN) for robust meeting event classification. The model uses information from lapel microphones,...
Marc Al-Hames, Gerhard Rigoll
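A much simplified stand-in for the model described above: instead of a mixed-state DBN, a plain Gaussian HMM (from the hmmlearn package) over concatenated audio-visual features illustrates the idea of hidden meeting events emitting multi-stream observations. The feature dimensions and the number of events are assumptions.

    # Simplified stand-in for the paper's mixed-state DBN: a single Gaussian
    # HMM over concatenated audio and visual features. Dimensions and the
    # number of hidden meeting events are illustrative assumptions.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(0)
    audio = rng.normal(size=(500, 4))    # e.g. per-frame lapel-mic features
    visual = rng.normal(size=(500, 3))   # e.g. per-frame global motion features
    X = np.hstack([audio, visual])       # one fused observation per frame

    hmm = GaussianHMM(n_components=4, covariance_type="diag", n_iter=20)
    hmm.fit(X)                 # unsupervised estimation of the event dynamics
    events = hmm.predict(X)    # most likely hidden event for every frame
    print(events[:20])
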
CVPR 2007 (IEEE)
Latent-Dynamic Discriminative Models for Continuous Gesture Recognition
Many problems in vision involve the prediction of a class label for each frame in an unsegmented sequence. In this paper, we develop a discriminative framework for simultaneous se...
Louis-Philippe Morency, Ariadna Quattoni, Trevor D...
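For comparison only, a minimal per-frame sequence labeling baseline using a plain linear-chain CRF (sklearn-crfsuite); this is not the latent-dynamic model developed in the paper, and the frame features and gesture labels are invented for illustration.

    # Baseline frame-level labeling with a linear-chain CRF; NOT the
    # latent-dynamic (LDCRF) model of the paper, just the simplest related
    # sequence labeler. Features and labels are invented for illustration.
    import sklearn_crfsuite

    def frame_features(velocity, height):
        # One feature dict per video frame (hypothetical hand-tracking cues).
        return {"velocity": velocity, "hand_above_shoulder": height > 0.0}

    X_train = [
        [frame_features(v, h) for v, h in [(0.1, -0.2), (0.9, 0.3), (0.8, 0.4), (0.1, -0.1)]],
        [frame_features(v, h) for v, h in [(0.0, -0.3), (0.1, -0.2), (0.2, -0.2)]],
    ]
    y_train = [
        ["idle", "gesture", "gesture", "idle"],
        ["idle", "idle", "idle"],
    ]

    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
    crf.fit(X_train, y_train)
    print(crf.predict(X_train[:1]))   # per-frame labels for the first sequence
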
ICMCS 2007 (IEEE)
A Cognitive and Unsupervised Map Adaptation Approach to the Recognition of the Focus of Attention from Head Pose
In this paper, the recognition of the visual focus of attention (VFOA) of meeting participants (as defined by their eye gaze direction) from their head pose is addressed. To this ...
Jean-Marc Odobez, Sileye O. Ba
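A hedged sketch of the general idea, not the authors' exact model: each focus-of-attention target gets a Gaussian over head-pose angles, initialized from assumed meeting geometry and then adapted toward the unlabeled poses of a new meeting with a MAP-style mean update. The prior means, covariance, and relevance factor tau are illustrative assumptions.

    # Hedged sketch of the general approach (not the authors' exact model):
    # one Gaussian over head pose (pan, tilt) per focus target, with means
    # MAP-adapted toward unlabeled poses from a new meeting. Prior means,
    # covariance, and the relevance factor tau are illustrative assumptions.
    import numpy as np
    from scipy.stats import multivariate_normal

    prior_means = np.array([[-30.0, 0.0], [30.0, 0.0], [0.0, -20.0]])  # targets
    cov = np.eye(2) * 50.0
    tau = 10.0  # MAP relevance factor: how strongly to trust the prior

    def map_adapt(poses, n_iter=5):
        means = prior_means.copy()
        for _ in range(n_iter):
            # E-step: responsibility of each target for each observed pose.
            lik = np.stack([multivariate_normal.pdf(poses, m, cov) for m in means], axis=1)
            resp = lik / (lik.sum(axis=1, keepdims=True) + 1e-12)
            # M-step with MAP prior: interpolate data means and prior means.
            n_k = resp.sum(axis=0)
            data_means = (resp.T @ poses) / (n_k[:, None] + 1e-12)
            means = (tau * prior_means + n_k[:, None] * data_means) / (tau + n_k)[:, None]
        return means

    poses = np.random.default_rng(1).normal([28.0, 2.0], 5.0, size=(200, 2))
    print(map_adapt(poses))   # adapted per-target head-pose means
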
ICMCS 2005 (IEEE)
Multimodal Emotion Recognition and Expressivity Analysis
The paper presents the framework of a special session that aims at investigating the best possible techniques for multimodal emotion recognition and expressivity analysis in human...
Stefanos D. Kollias, Kostas Karpouzis