ICASSP 2009, IEEE

Comparing maximum a posteriori vector quantization and Gaussian mixture models in speaker verification

The Gaussian mixture model with universal background model (GMM-UBM) is a standard reference classifier in speaker verification. We have recently proposed a simplified model using vector quantization (VQ-UBM). In this study, we extensively compare these two classifiers on the NIST 2005, 2006, and 2008 SRE corpora, using a standard discriminative classifier (GLDS-SVM) as a reference point. We focus on the parameter setting for top-N scoring, on model order, and on performance for different amounts of training data. The most interesting result, contrary to common belief, is that GMM-UBM yields better results for short segments, whereas VQ-UBM performs well on long utterances. The results also suggest that maximum-likelihood training of the UBM is sub-optimal, and hence alternative ways to train the UBM should be considered.
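To make the top-N scoring idea mentioned in the abstract concrete, here is a minimal, hypothetical sketch (not the authors' implementation): for each frame, only the N best-scoring UBM components are kept, and the adapted speaker model is evaluated on those same components, giving an average frame-level log-likelihood ratio. The 1-D diagonal Gaussians and the `topn_llr` helper are illustrative assumptions, not from the paper.

```python
import math

def log_gauss(x, mean, var):
    """Log-density of a 1-D Gaussian (toy diagonal case for illustration)."""
    return -0.5 * (math.log(2.0 * math.pi * var) + (x - mean) ** 2 / var)

def logsumexp(vals):
    """Numerically stable log of a sum of exponentials."""
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

def topn_llr(frames, ubm, spk, n=2):
    """Average frame-level log-likelihood ratio: speaker model vs. UBM.

    For each frame, only the n best-scoring UBM components are kept,
    and the speaker model is evaluated on those same components.
    ubm and spk are (weights, means, variances) tuples of equal length.
    """
    total = 0.0
    for x in frames:
        # Score all UBM components for this frame.
        ubm_comp = [math.log(w) + log_gauss(x, m, v)
                    for w, m, v in zip(*ubm)]
        # Keep the indices of the n best-scoring UBM components.
        top = sorted(range(len(ubm_comp)), key=lambda i: ubm_comp[i],
                     reverse=True)[:n]
        # Evaluate the speaker model only on those same components.
        spk_comp = [math.log(spk[0][i]) + log_gauss(x, spk[1][i], spk[2][i])
                    for i in top]
        total += logsumexp(spk_comp) - logsumexp([ubm_comp[i] for i in top])
    return total / len(frames)
```

Frames that lie closer to the speaker model's means than to the UBM's yield a positive score; with the speaker model equal to the UBM, the ratio is zero for any N.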
Added 21 May 2010
Updated 21 May 2010
Type Conference
Year 2009
Where ICASSP
Authors Tomi Kinnunen, Juhani Saastamoinen, Ville Hautamäki, Mikko Vinni, Pasi Fränti