
Exploiting multimodal data fusion in robust speech recognition

This article introduces automatic speech recognition based on Electro-Magnetic Articulography (EMA). Movements of the tongue, lips, and jaw are tracked by an EMA device and used as features to train Hidden Markov Models (HMMs) and recognize speech from articulation alone, that is, without any audio information. Automatic phoneme recognition experiments are also conducted to examine the contribution of the EMA parameters to robust speech recognition. Noisy audio speech is integrated with EMA data using feature fusion, multistream HMM fusion, and late fusion methods, and recognition experiments are conducted. The results show that integrating the EMA parameters significantly increases an audio speech recognizer's accuracy in noisy environments.
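As a rough illustration of the three fusion strategies named in the abstract, here is a minimal NumPy sketch. The feature dimensions, stream weights, and score-combination rules are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Hypothetical per-frame features (dimensions are illustrative assumptions):
# 39-dim audio MFCCs and 12-dim EMA articulatory coordinates, T frames each.
T = 100
audio = np.random.randn(T, 39)   # noisy-audio feature stream
ema = np.random.randn(T, 12)     # articulatory (EMA) feature stream

# 1) Feature fusion: concatenate the two streams per frame and train a
#    single HMM on the joint 51-dim observation vectors.
fused = np.concatenate([audio, ema], axis=1)  # shape (T, 51)

# 2) Multistream HMM fusion: each stream has its own observation model;
#    per-state log-likelihoods are combined with weights summing to 1.
def multistream_loglik(loglik_audio, loglik_ema, w_audio=0.7):
    """Weighted log-likelihood combination (weight value is an assumption)."""
    return w_audio * loglik_audio + (1.0 - w_audio) * loglik_ema

# 3) Late fusion: decode each stream with its own recognizer, then combine
#    the hypothesis scores (here, per-phoneme log scores) before deciding.
def late_fusion(scores_audio, scores_ema, w_audio=0.7):
    """scores_*: dict mapping phoneme -> log score from each recognizer."""
    combined = {p: w_audio * scores_audio[p] + (1.0 - w_audio) * scores_ema[p]
                for p in scores_audio}
    return max(combined, key=combined.get)
```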
Type: Conference
Year: 2010
Where: ICMCS (IEEE)
Authors: Panikos Heracleous, Pierre Badin, Gérard Bailly, Norihiro Hagita