ICASSP 2009, IEEE

Neural network based language models for highly inflective languages

Speech recognition of inflectional and morphologically rich languages like Czech is currently quite a challenging task, because simple n-gram techniques are unable to capture important regularities in the data. Several possible solutions have been proposed, namely class-based models, factored models, decision trees, and neural networks. This paper describes improvements obtained in recognition of spoken Czech lectures using language models based on neural networks. Relative reductions in word error rate are more than 15% over a baseline adapted 4-gram backoff language model with modified Kneser-Ney smoothing.
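To illustrate the general technique the abstract refers to, the following is a minimal sketch of a feedforward neural network language model in the style of Bengio et al.: preceding words are mapped to embeddings, concatenated, passed through a hidden layer, and a softmax produces next-word probabilities. All sizes and parameter names here are illustrative assumptions, not the authors' actual system (which is untrained random initialization for brevity).

```python
import numpy as np

# Minimal feedforward neural LM sketch (illustrative, not the paper's system).
rng = np.random.default_rng(0)

V = 5        # toy vocabulary size
d = 8        # word-embedding dimension
h = 16       # hidden-layer size
context = 2  # number of preceding words used as context

# Parameters are randomly initialized; a real model would train these.
C = rng.normal(scale=0.1, size=(V, d))            # embedding table
W = rng.normal(scale=0.1, size=(context * d, h))  # input -> hidden weights
U = rng.normal(scale=0.1, size=(h, V))            # hidden -> output weights

def next_word_probs(context_ids):
    """Return a probability distribution over the next word."""
    x = np.concatenate([C[i] for i in context_ids])  # concatenated embeddings
    hidden = np.tanh(x @ W)                          # hidden-layer activation
    logits = hidden @ U
    e = np.exp(logits - logits.max())                # numerically stable softmax
    return e / e.sum()

p = next_word_probs([1, 3])  # distribution over the 5-word toy vocabulary
```

Unlike a backoff n-gram model, which stores counts for discrete word sequences, this continuous representation lets the model share statistical strength between morphologically related word forms, which is the key advantage for highly inflective languages such as Czech.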
Added: 17 Aug 2010
Updated: 17 Aug 2010
Type: Conference
Year: 2009
Where: ICASSP
Authors: Tomáš Mikolov, Jiří Kopecký, Lukáš Burget, Ondřej Glembek, Jan Černocký