INTERSPEECH 2010

On speaker adaptive training of artificial neural networks

In this paper we present two techniques that improve the recognition accuracy of multilayer perceptron neural networks (MLP ANNs) by adopting Speaker Adaptive Training (SAT). MLP ANNs, usually combined with the TRAPS parametrization, are applied in speech recognition tasks, in the production of discriminative features for GMM-HMM systems, and elsewhere. In the first SAT experiments, we used VTLN as the speaker normalization technique. Moreover, we developed a novel speaker normalization technique called Minimum Error Linear Transform (MELT) that resembles the cMLLR/fMLLR method [1] in that it can be applied either to the model or to the features. We tested these two methods extensively on the SpeechDat-East telephone speech corpus. The results obtained in these experiments suggest that incorporating SAT into the MLP ANN training process is beneficial and, depending on the setup, leads to a significant decrease of the phoneme error rate (3 %
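To make the idea concrete, the following is a minimal numpy sketch of the general scheme the abstract describes: a per-speaker affine feature transform (in the spirit of fMLLR/MELT) placed in front of a fixed, shared MLP, with the transform alone adjusted to minimize the classification error. This is an illustrative toy, not the authors' implementation; all network sizes, weights, learning rates, and the finite-difference optimizer are assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shared MLP (speaker-independent); in practice this is trained on all speakers.
D, H, K = 4, 8, 3                       # feature dim, hidden units, phoneme classes
W1 = rng.normal(scale=0.5, size=(H, D)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(K, H)); b2 = np.zeros(K)

def mlp(x):
    """One hidden layer + softmax: returns class posteriors."""
    h = np.tanh(W1 @ x + b1)
    z = W2 @ h + b2
    e = np.exp(z - z.max())
    return e / e.sum()

# Per-speaker affine transform x' = A @ x + c, initialised to identity.
# During adaptation only A and c change; the MLP weights stay fixed.
A, c = np.eye(D), np.zeros(D)

def loss(x, y):
    """Cross-entropy of the adapted network for target class y."""
    return -np.log(mlp(A @ x + c)[y] + 1e-12)

def num_grad(f, P, eps=1e-5):
    """Central finite-difference gradient of f() w.r.t. array P (toy optimizer)."""
    g = np.zeros_like(P)
    for i in np.ndindex(P.shape):
        old = P[i]
        P[i] = old + eps; hi = f()
        P[i] = old - eps; lo = f()
        P[i] = old
        g[i] = (hi - lo) / (2 * eps)
    return g

# One (hypothetical) adaptation frame for a new speaker.
x, y = rng.normal(size=D), 1

before = loss(x, y)
for _ in range(20):                     # error-minimising steps on the transform only
    A -= 0.1 * num_grad(lambda: loss(x, y), A)
    c -= 0.1 * num_grad(lambda: loss(x, y), c)
after = loss(x, y)
```

Because only `A` and `c` are speaker-specific, the same mechanism can be read either as normalizing the features (feature-space view) or as reparametrizing the first network layer (model-space view), which mirrors the cMLLR/fMLLR duality the abstract attributes to MELT.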
Jan Trmal, Jan Zelinka, Ludek Müller
Added 18 May 2011
Updated 18 May 2011
Type Conference
Year 2010
Where INTERSPEECH