
ICASSP 2009, IEEE

Combining mixture weight pruning and quantization for small-footprint speech recognition

Semi-continuous acoustic models, where the output distributions for all Hidden Markov Model states share a common codebook of Gaussian density functions, are a well-known and proven technique for reducing computation in automatic speech recognition. However, the size of the parameter files, and thus their memory footprint at runtime, can be very large. We demonstrate how non-linear quantization can be combined with a mixture weight distribution pruning technique to halve the size of the models with minimal performance overhead and no increase in error rate.
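To make the combination concrete, here is a minimal Python sketch of the two ideas the abstract describes: pruning small mixture weights and non-linearly quantizing the survivors in the log domain. The function name prune_and_quantize, the pruning threshold, and the 4-bit codebook depth are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of mixture weight pruning plus non-linear (logarithmic)
# quantization for semi-continuous models. The threshold and bit depth
# are illustrative assumptions, not the paper's exact settings.
import numpy as np

def prune_and_quantize(mixw, prune_threshold=1e-4, bits=4):
    """Prune small mixture weights, then quantize the rest in the log domain.

    mixw: array of shape (n_states, n_densities); each row sums to 1.
    Returns (codebook, indices): `codebook` holds 2**bits quantized
    log-weights, `indices` picks one per state/density pair.
    """
    w = np.asarray(mixw, dtype=np.float64)

    # Pruning: zero out weights below the threshold and renormalize each row.
    w = np.where(w < prune_threshold, 0.0, w)
    w /= w.sum(axis=1, keepdims=True)

    # Non-linear quantization: quantize log-weights so small probabilities
    # keep their relative precision; pruned weights map to a log floor.
    floor = np.log(prune_threshold)
    logw = np.where(w > 0.0, np.log(np.maximum(w, prune_threshold)), floor)

    codebook = np.linspace(logw.min(), logw.max(), 2 ** bits)
    indices = np.abs(logw[..., None] - codebook).argmin(axis=-1).astype(np.uint8)
    return codebook, indices

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mixw = rng.dirichlet(np.full(256, 0.1), size=10)  # 10 states, 256 shared densities
    codebook, idx = prune_and_quantize(mixw)
    print(codebook.shape, idx.shape, idx.dtype)       # (16,) (10, 256) uint8
```

Working in the log domain is a natural choice here, since decoders typically score mixture weights as log probabilities and it preserves relative precision for small weights; storing compact indices into a shared codebook in place of full-precision floats is what shrinks the on-disk and in-memory footprint.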
Added: 21 May 2010
Updated: 21 May 2010
Type: Conference
Year: 2009
Where: ICASSP
Authors: David Huggins-Daines, Alexander I. Rudnicky