ICANN 2005, Springer

Classifying Unprompted Speech by Retraining LSTM Nets

Abstract. We apply Long Short-Term Memory (LSTM) recurrent neural networks to a large corpus of unprompted speech: the German part of the VERBMOBIL corpus. Training first on a fraction of the data, then retraining on another fraction, both reduces time costs and significantly improves recognition rates. For comparison we show recognition rates of Hidden Markov Models (HMMs) on the same corpus, and provide a promising extrapolation for HMM-LSTM hybrids.
Type: Conference
Year: 2005
Where: ICANN
Authors: Nicole Beringer, Alex Graves, Florian Schiel, Jürgen Schmidhuber
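
The abstract describes a two-phase schedule: train an LSTM on one fraction of the corpus, then continue training the same network on another fraction. The sketch below is a minimal illustration of that idea only, not the authors' implementation; PyTorch, the frame-level classifier, the synthetic data, and all names and hyperparameters (NUM_CLASSES, FEATURE_DIM, hidden size, learning rate) are assumptions.

# Minimal sketch of the train-then-retrain schedule described in the abstract.
# PyTorch and all hyperparameters are assumptions; synthetic data stands in
# for VERBMOBIL acoustic features and frame labels.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, random_split

NUM_CLASSES = 40   # hypothetical label inventory size
FEATURE_DIM = 26   # hypothetical acoustic feature dimension per frame
SEQ_LEN = 50       # hypothetical number of frames per utterance

class LSTMClassifier(nn.Module):
    """A small LSTM that assigns a class to every frame of an utterance."""
    def __init__(self, input_dim, hidden_dim, num_classes):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        h, _ = self.lstm(x)   # (batch, time, hidden)
        return self.out(h)    # per-frame class scores

def train(model, loader, epochs, lr):
    """One training phase over the given fraction of the data."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for feats, labels in loader:
            opt.zero_grad()
            logits = model(feats)                        # (B, T, C)
            loss = loss_fn(logits.reshape(-1, NUM_CLASSES),
                           labels.reshape(-1))
            loss.backward()
            opt.step()

# Synthetic corpus: 200 utterances of random features and labels.
feats = torch.randn(200, SEQ_LEN, FEATURE_DIM)
labels = torch.randint(0, NUM_CLASSES, (200, SEQ_LEN))
corpus = TensorDataset(feats, labels)

# Split the corpus into two fractions, mimicking the retraining scheme.
first, second = random_split(corpus, [100, 100])

model = LSTMClassifier(FEATURE_DIM, hidden_dim=64, num_classes=NUM_CLASSES)

# Phase 1: train on the first fraction only.
train(model, DataLoader(first, batch_size=16, shuffle=True), epochs=5, lr=0.1)

# Phase 2: retrain the same weights on the second fraction.
train(model, DataLoader(second, batch_size=16, shuffle=True), epochs=5, lr=0.1)

Calling train twice on the same model object is what makes this a retraining schedule rather than two independent runs: the second phase starts from the weights learned in the first.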