
Training Continuous Space Language Models: Some Practical Issues

Using multi-layer neural networks to estimate the probabilities of word sequences is a promising research area in statistical language modeling, with applications in speech recognition and statistical machine translation. However, training such models for large vocabulary tasks is computationally expensive and does not scale easily to the huge corpora that are nowadays available. In this work, we study the performance and behavior of two neural statistical language models so as to highlight some important caveats of the classical training algorithms. The induced word embeddings for extreme cases are also analysed, thus providing insight into the convergence issues. A new initialization scheme and new training techniques are then introduced. These methods are shown to greatly reduce the training time and to significantly improve performance, both in terms of perplexity and on a large-scale translation task.
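To make the setting concrete, below is a minimal sketch (in Python/NumPy, not the authors' code) of the classical feed-forward continuous space language model the abstract refers to: the next word's probability is predicted from the embeddings of the preceding words, and the softmax over the full vocabulary is where the scaling problem arises. All layer sizes and the initialization scale are illustrative assumptions.

```python
# Minimal sketch of a feed-forward continuous space language model
# (Bengio-style); all dimensions and the init scale are illustrative.
import numpy as np

rng = np.random.default_rng(0)

V, d, h, context = 1000, 32, 64, 3   # vocab, embedding, hidden, history sizes

# Parameters; the paper studies how initialization affects convergence,
# so the 0.01 scale here is just a common heuristic, not their scheme.
E  = rng.normal(0.0, 0.01, (V, d))            # word embedding matrix
W1 = rng.normal(0.0, 0.01, (h, context * d))  # input -> hidden
b1 = np.zeros(h)
W2 = rng.normal(0.0, 0.01, (V, h))            # hidden -> output logits
b2 = np.zeros(V)

def next_word_distribution(history):
    """P(w | history) for a list of (n-1) word ids."""
    x = np.concatenate([E[w] for w in history])  # concatenated embeddings
    z = np.tanh(W1 @ x + b1)                     # hidden layer
    logits = W2 @ z + b2                         # one logit per vocab word
    logits -= logits.max()                       # numerical stability
    p = np.exp(logits)
    return p / p.sum()                           # softmax over full vocabulary

# The softmax over all V words is the bottleneck the abstract alludes to:
# its cost grows linearly with vocabulary size, at every training step.
p = next_word_distribution([12, 7, 42])
print(p.shape, p.sum())                          # (1000,) ~1.0
```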
Type Conference
Year 2010
Where EMNLP
Authors Hai Son Le, Alexandre Allauzen, Guillaume Wisniewski, François Yvon