
ICASSP 2011 (IEEE)

Exploiting sparseness of backing-off language models for efficient look-ahead in LVCSR

In this paper, we propose a new method for computing and applying language model look-ahead in a dynamic network decoder, exploiting the sparseness of backing-off n-gram language models. Only partial (sparse) look-ahead tables are computed, whose size depends on the number of words that have an explicit n-gram score in the language model for a specific context, rather than being a constant, vocabulary-dependent size. Since high-order backing-off language models are inherently sparse, this mechanism reduces the runtime and memory effort of computing the look-ahead tables by orders of magnitude. A modified decoding algorithm is required to apply these sparse LM look-ahead tables efficiently. We show that sparse LM look-ahead is much more efficient than the classical method, and that full n-gram look-ahead becomes preferable to lower-order look-ahead even when many distinct LM contexts appear during decoding.
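
The mechanism described in the abstract can be illustrated with a short sketch. The Python fragment below is only an illustration under assumed interfaces (lm_explicit_scores, tree_paths, build_sparse_lookahead, and lookahead_score are hypothetical names, not the paper's actual decoder code): the look-ahead table for a context holds entries only for prefix-tree nodes that lead to words with an explicit n-gram score, and a query falls back to the lower-order table plus the back-off weight otherwise.

    # Minimal sketch (not the paper's implementation). Hypothetical interfaces:
    #   lm_explicit_scores(context) -> {word: log_prob} for words with an explicit
    #       n-gram entry for this context (no back-off applied)
    #   tree_paths[word]            -> prefix-tree node ids on the path to `word`

    def build_sparse_lookahead(context, lm_explicit_scores, tree_paths):
        # Fill entries only for nodes that lead to words with an explicit n-gram
        # score; the table size scales with the number of explicit n-grams for
        # this context, not with the vocabulary size.
        table = {}
        for word, score in lm_explicit_scores(context).items():
            for node in tree_paths[word]:
                if score > table.get(node, float("-inf")):
                    table[node] = score  # best reachable explicit LM score
        return table

    def lookahead_score(node, sparse_table, backoff_weight, lower_order_score):
        # Query during decoding: take the better of the explicit entry (if this
        # node is in the sparse table) and the backed-off lower-order look-ahead.
        explicit = sparse_table.get(node, float("-inf"))
        return max(explicit, backoff_weight + lower_order_score)

Falling back to the backed-off lower-order table at query time mirrors the back-off structure of the language model itself, which is why the sparse tables can stay as small as the set of explicit n-grams for the given context.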
David Nolden, Hermann Ney, Ralf Schlüter
Added: 21 Aug 2011
Updated: 21 Aug 2011
Type: Journal
Year: 2011
Where: ICASSP