Sciweavers

1423 search results
» Polyphase speech recognition
INTERSPEECH
2010
Canonical state models for automatic speech recognition
Current speech recognition systems are often based on HMMs with state-clustered Gaussian Mixture Models (GMMs) to represent the context dependent output distributions. Though high...
Mark J. F. Gales, Kai Yu
ICASSP
2011
IEEE
Amplitude modulation spectrogram based features for robust speech recognition in noisy and reverberant environments
In this contribution we present a feature extraction method that relies on the modulation-spectral analysis of amplitude fluctuations within sub-bands of the acoustic spectrum by ...
Niko Moritz, Jörn Anemüller, Birger Koll...
ICASSP
2011
IEEE
Rapid joint speaker and noise compensation for robust speech recognition
For speech recognition, mismatches between training and testing for speaker and noise are normally handled separately. The work presented in this paper aims at jointly applying sp...
K. K. Chin, Haitian Xu, Mark J. F. Gales, Catherin...
ICASSP
2011
IEEE
Non-stationary noise estimation method based on bias-residual component decomposition for robust speech recognition
This paper addresses a noise suppression problem, namely the estimation of non-stationary noise sequences. In this problem, we assume that non-stationary noise can be decomposed i...
Masakiyo Fujimoto, Shinji Watanabe, Tomohiro Nakat...
ICASSP
2009
IEEE
Maximizing global entropy reduction for active learning in speech recognition
We propose a new active learning algorithm to address the problem of selecting a limited subset of utterances to transcribe from a large pool of unlabeled utterances so that ...
Balakrishnan Varadarajan, Dong Yu, Li Deng, Alex A...