Sciweavers

43 search results - page 2 / 9
» Meeting Modelling in the Context of Multimodal Research
ICIA
2007
Why and How to Model Multi-Modal Interaction for a Mobile Robot Companion
Verbal and non-verbal interaction capabilities for robots are often studied in isolation from each other in current research because they largely contribute to different aspects...
Shuyin Li, Britta Wrede
PAMI
2011
Multiperson Visual Focus of Attention from Head Pose and Meeting Contextual Cues
This paper introduces a novel contextual model for the recognition of people’s visual focus of attention (VFOA) in meetings from audio-visual perceptual cues. More specificall...
Sileye O. Ba, Jean-Marc Odobez
MLMI
2004
Springer
Shallow Dialogue Processing Using Machine Learning Algorithms (or Not)
This paper presents a shallow dialogue analysis model, aimed at human-human dialogues in the context of staff or business meetings. Four components of the model are defined, and ...
Andrei Popescu-Belis, Alexander Clark, Maria Georg...
DSMML
2004
Springer
Multi Channel Sequence Processing
This paper summarizes some of the current research challenges arising from multi-channel sequence processing. Indeed, multiple real-life applications involve simultaneous...
Samy Bengio, Hervé Bourlard
DAGM
2003
Springer
A Computational Model of Early Auditory-Visual Integration
We introduce a computational model of sensor fusion based on the topographic representations of a "two-microphone and one camera" configuration. Our aim is to perform a robust...
Carsten Schauer, Horst-Michael Gross