Sciweavers

9 search results (page 1 of 2)
Query: Multimodal Phoneme Recognition of Meeting Data
TSD 2004 (Springer)
Multimodal Phoneme Recognition of Meeting Data
This paper describes experiments in automatic recognition of context-independent phoneme strings from meeting data using audiovisual features. Visual features are known to improve ...
Petr Motlíček, Jan Černocký
MLMI 2005 (Springer)
Multimodal Integration for Meeting Group Action Segmentation and Recognition
We address the problem of segmentation and recognition of sequences of multimodal human interactions in meetings. These interactions can be seen as a rough structure of a meeting, ...
Marc Al-Hames, Alfred Dielmann, Daniel Gatica-Pere...
EMNLP 2008
Multimodal Subjectivity Analysis of Multiparty Conversation
We investigate the combination of several sources of information for the purpose of subjectivity recognition and polarity classification in meetings. We focus on features from two...
Stephan Raaijmakers, Khiet P. Truong, Theresa Wils...
ICMCS 2005 (IEEE)
A Multi-Modal Mixed-State Dynamic Bayesian Network for Robust Meeting Event Recognition from Disturbed Data
In this work we present a novel multi-modal mixed-state dynamic Bayesian network (DBN) for robust meeting event classification. The model uses information from lapel microphones,...
Marc Al-Hames, Gerhard Rigoll
ICMCS 2010 (IEEE)
Exploiting multimodal data fusion in robust speech recognition
This article introduces automatic speech recognition based on Electro-Magnetic Articulography (EMA). Movements of the tongue, lips, and jaw are tracked by an EMA device, which are...
Panikos Heracleous, Pierre Badin, Gérard Ba...