Sciweavers

28 search results - page 2 / 6
Query: Exploiting multimodal data fusion in robust speech recognition...
MCS 2002, Springer
An Experimental Comparison of Classifier Fusion Rules for Multimodal Personal Identity Verification Systems
In this paper, an experimental comparison between fixed and trained fusion rules for multimodal personal identity verification is reported. We focused on the behaviour of the consi...
Fabio Roli, Josef Kittler, Giorgio Fumera, Daniele...
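The snippet above contrasts fixed and trained fusion rules without room to define them. As a minimal, illustrative sketch only: fixed rules such as sum, product, and max combine per-modality matcher scores with no learning, while a trained rule fits a combiner on labelled genuine/impostor attempts. The scores, labels, and choice of logistic regression below are assumptions for illustration, not the setup of the cited paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical matcher scores: rows = verification attempts,
# columns = modalities (e.g., face, voice); values in [0, 1].
scores = np.array([
    [0.82, 0.91],
    [0.40, 0.35],
    [0.75, 0.20],
    [0.10, 0.15],
])
labels = np.array([1, 0, 1, 0])  # 1 = genuine attempt, 0 = impostor

# Fixed fusion rules: combine the per-modality scores directly.
sum_rule = scores.mean(axis=1)
product_rule = scores.prod(axis=1)
max_rule = scores.max(axis=1)

# Trained fusion rule: learn modality weights from labelled data.
combiner = LogisticRegression().fit(scores, labels)
trained_rule = combiner.predict_proba(scores)[:, 1]

print(sum_rule, product_rule, max_rule, trained_rule)
```

Fixed rules need no training data for the combiner itself, which is one reason comparing the two families experimentally is of practical interest.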
HCI 2007
Unobtrusive Multimodal Emotion Detection in Adaptive Interfaces: Speech and Facial Expressions
Two unobtrusive modalities for automatic emotion recognition are discussed: speech and facial expressions. First, an overview is given of emotion recognition studies based on a com...
Khiet P. Truong, David A. van Leeuwen, Mark A. Nee...
ICMCS 2005, IEEE
A Multi-Modal Mixed-State Dynamic Bayesian Network for Robust Meeting Event Recognition from Disturbed Data
In this work we present a novel multi-modal mixed-state dynamic Bayesian network (DBN) for robust meeting event classification. The model uses information from lapel microphones,...
Marc Al-Hames, Gerhard Rigoll
ICMI 2004, Springer
Analysis of emotion recognition using facial expressions, speech and multimodal information
The interaction between human beings and computers will be more natural if computers are able to perceive and respond to human non-verbal communication such as emotions. Although ...
Carlos Busso, Zhigang Deng, Serdar Yildirim, Murta...
TCSV 2011
Concept-Driven Multi-Modality Fusion for Video Search
As human perception gathers information from different sources in natural, multi-modality forms, learning from multi-modalities has become an effective ...
Xiao-Yong Wei, Yu-Gang Jiang, Chong-Wah Ngo
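The entry above concerns query-dependent (concept-driven) fusion of modalities for video search. As a rough illustration only, the sketch below performs a weighted late fusion of per-modality relevance scores; the modality names, scores, and weights are invented and do not reproduce the paper's concept-driven weighting scheme.

```python
# Minimal late-fusion sketch for multi-modality video search.
def fuse(modality_scores: dict, weights: dict) -> dict:
    """Combine per-modality relevance scores for each video shot
    into a single ranking score via a weighted sum."""
    fused = {}
    for modality, shot_scores in modality_scores.items():
        w = weights.get(modality, 0.0)
        for shot, score in shot_scores.items():
            fused[shot] = fused.get(shot, 0.0) + w * score
    return fused

# Example: text and visual-concept detectors score three shots;
# the query is visually oriented, so visual concepts get more weight.
modality_scores = {
    "text": {"shot1": 0.9, "shot2": 0.2, "shot3": 0.5},
    "visual_concepts": {"shot1": 0.3, "shot2": 0.8, "shot3": 0.6},
}
weights = {"text": 0.4, "visual_concepts": 0.6}

ranking = sorted(fuse(modality_scores, weights).items(),
                 key=lambda kv: kv[1], reverse=True)
print(ranking)
```

In a concept-driven setting the weights would be chosen per query rather than fixed; here they are hard-coded only to keep the example self-contained.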