Sciweavers

160 search results - page 3 / 32
» Exploiting contextual information for improved phoneme recog...
ICMCS 2010 (IEEE)
Exploiting multimodal data fusion in robust speech recognition
This article introduces automatic speech recognition based on Electro-Magnetic Articulography (EMA). Movements of the tongue, lips, and jaw, tracked by an EMA device, are...
Panikos Heracleous, Pierre Badin, Gérard Ba...
ICDIM 2007 (IEEE)
Exploiting contextual handover information for versatile services in NGN environments
Users in ubiquitous and pervasive computing environments will have far more ways to access and control their navigation. Handover, the vital event in which a user c...
Edson dos Santos Moreira, David N. Cottingham, Jon...
TSD 2004 (Springer)
Multimodal Phoneme Recognition of Meeting Data
This paper describes experiments in automatic recognition of context-independent phoneme strings from meeting data using audiovisual features. Visual features are known to improve ...
Petr Motlíček, Jan Černocký
NAACL 2010
Contextual Information Improves OOV Detection in Speech
Out-of-vocabulary (OOV) words represent an important source of error in large vocabulary continuous speech recognition (LVCSR) systems. These words cause recognition failures, whi...
Carolina Parada, Mark Dredze, Denis Filimonov, Fre...
COLING 2010
Unsupervised phonemic Chinese word segmentation using Adaptor Grammars
Adaptor grammars are a framework for expressing and performing inference over a variety of non-parametric linguistic models. These models currently provide state-of-the-art perfor...
Mark Johnson, Katherine Demuth