
ICASSP 2009, IEEE

Unsupervised acoustic and language model training with small amounts of labelled data

We measure the effects of a weak language model, estimated from as little as 100k words of text, on unsupervised acoustic model training, and then explore the best method of using word confidences to estimate n-gram counts for unsupervised language model training. Even with 100k words of text and 10 hours of training data, unsupervised acoustic modeling is robust, recovering 50% of the gain obtained by supervised training. For language model training, multiplying the word confidences together to form a weighted count gives the largest reduction in WER: 2% over the baseline language model and 0.5% absolute over using unweighted transcripts. Oracle experiments show that a larger gain is possible, but better confidence estimation techniques are needed to identify correct n-grams.
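To make the confidence-weighted counting concrete, the sketch below accumulates fractional n-gram counts from decoded hypotheses, weighting each n-gram by the product of its word confidences as the abstract describes. The data layout, function name, and toy values are illustrative assumptions, not taken from the paper.

```python
from collections import Counter

def weighted_ngram_counts(hypotheses, n=3):
    """Accumulate fractional n-gram counts from decoded hypotheses.

    `hypotheses` is a list of utterances; each utterance is a list of
    (word, confidence) pairs with confidence in [0, 1]. Each n-gram
    contributes the product of its word confidences rather than a
    count of 1, so uncertain regions are down-weighted.
    """
    counts = Counter()
    for utt in hypotheses:
        words = [w for w, _ in utt]
        confs = [c for _, c in utt]
        for i in range(len(words) - n + 1):
            ngram = tuple(words[i:i + n])
            weight = 1.0
            for c in confs[i:i + n]:
                weight *= c  # product of word confidences
            counts[ngram] += weight
    return counts

# Toy example: two decoded utterances with per-word confidences.
hyps = [
    [("the", 0.9), ("cat", 0.8), ("sat", 0.95)],
    [("the", 0.7), ("cat", 0.6), ("ran", 0.4)],
]
print(weighted_ngram_counts(hyps, n=2))
```

The resulting weighted counts would then replace the usual integer counts in standard n-gram language model estimation.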
Scott Novotney, Richard M. Schwartz, Jeff Ma
Type: Conference
Year: 2009
Where: ICASSP
Authors: Scott Novotney, Richard M. Schwartz, Jeff Ma