ICASSP 2011 (IEEE)

Dirichlet Mixture Models of neural net posteriors for HMM-based speech recognition

In this paper, we present a novel technique for modeling the posterior probability estimates obtained from a neural network directly in the HMM framework using Dirichlet Mixture Models (DMMs). Since posterior probability vectors lie on a probability simplex, their distribution can be modeled naturally with DMMs. Because the Dirichlet distribution belongs to the exponential family, the parameters of DMMs can be estimated efficiently. Conventional approaches such as TANDEM attempt to gaussianize the posteriors via suitable transforms and model them with Gaussian Mixture Models (GMMs). This requires more parameters, as it does not exploit the fact that the probability vectors lie on a simplex. We demonstrate through TIMIT phoneme recognition experiments that the proposed technique outperforms the conventional TANDEM approach.
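The idea of modeling simplex-valued posteriors with a Dirichlet mixture can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`dirichlet_logpdf`, `dmm_logpdf`) and the example mixture weights and concentration parameters are hypothetical, and only density evaluation is shown (the paper also covers parameter estimation).

```python
import math

def dirichlet_logpdf(x, alpha):
    """Log-density of a Dirichlet(alpha) distribution at a point x on the simplex."""
    log_norm = math.lgamma(sum(alpha)) - sum(math.lgamma(a) for a in alpha)
    return log_norm + sum((a - 1.0) * math.log(xi) for a, xi in zip(alpha, x))

def dmm_logpdf(x, weights, alphas):
    """Log-density of a K-component Dirichlet mixture at x,
    computed with log-sum-exp for numerical stability."""
    logs = [math.log(w) + dirichlet_logpdf(x, a)
            for w, a in zip(weights, alphas)]
    m = max(logs)
    return m + math.log(sum(math.exp(l - m) for l in logs))

# Hypothetical example: a neural-net posterior over 3 phoneme classes,
# scored under a 2-component Dirichlet mixture.
posterior = [0.7, 0.2, 0.1]
weights = [0.6, 0.4]                          # mixture weights (sum to 1)
alphas = [[8.0, 2.0, 2.0], [2.0, 2.0, 8.0]]   # concentration parameters
print(dmm_logpdf(posterior, weights, alphas))
```

In an HMM system, such a mixture would be fit per state, and `dmm_logpdf` would supply the state emission log-likelihood in place of a GMM score.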
Added 20 Aug 2011
Updated 20 Aug 2011
Type Conference
Year 2011
Where ICASSP
Authors Balakrishnan Varadarajan, Garimella S. V. S. Sivaram, Sanjeev Khudanpur