
ICIP 2000, IEEE

Normalized Training for HMM-Based Visual Speech Recognition

This paper presents an approach to estimating the parameters of continuous-density HMMs for visual speech recognition. One of the key issues in image-based visual speech recognition is normalizing lip location and lighting conditions before estimating the HMM parameters. We previously presented a normalized training method in which the normalization process is integrated into model training; this paper extends it to contrast normalization in addition to average-intensity and location normalization. The proposed method provides a theoretically well-defined algorithm based on a maximum-likelihood formulation, so the likelihood of the training data is guaranteed to increase at each iteration of the normalized training. Experiments on the M2VTS database show that the normalized training significantly improves recognition performance.
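The abstract only summarizes the method; the paper gives the exact maximum-likelihood formulation. As a rough, hypothetical sketch of the underlying idea (not the authors' algorithm), the following Python fragment jointly re-estimates a simple appearance model and per-utterance gain/bias (contrast and average-intensity) normalization parameters by coordinate ascent, so the training-data likelihood does not decrease from pass to pass. A single diagonal Gaussian stands in for the continuous-density HMM, location normalization is omitted, and all names (normalized_training, gains, biases) are illustrative.

import numpy as np

# Illustrative sketch only: per-utterance normalization parameters -- a gain
# (contrast) and a bias (average intensity) -- are re-estimated jointly with
# the appearance model under one maximum-likelihood criterion, so each pass
# does not decrease the training-data likelihood.  The continuous-density
# HMM is collapsed to one diagonal Gaussian for brevity.

def log_likelihood(utts, mu, var, gains, biases):
    ll = 0.0
    for x, a, b in zip(utts, gains, biases):
        mean = a * mu + b                                     # transformed model mean
        ll += -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)
    return ll

def normalized_training(utts, n_iter=30):
    dim = utts[0].shape[1]
    mu, var = np.zeros(dim), np.ones(dim)
    gains = np.ones(len(utts))
    biases = np.zeros(len(utts))
    for _ in range(n_iter):
        # Model update given the current gain/bias of every utterance.
        num = sum(a * (x - b).sum(axis=0) for x, a, b in zip(utts, gains, biases))
        den = sum(a * a * len(x) for x, a in zip(utts, gains))
        mu = num / den
        resid = np.vstack([x - (a * mu + b) for x, a, b in zip(utts, gains, biases)])
        var = (resid ** 2).mean(axis=0) + 1e-6
        # Gain/bias update per utterance: weighted least-squares fit of the
        # frames against the model mean (a 2x2 linear system).
        w = 1.0 / var
        for i, x in enumerate(utts):
            t_i = len(x)
            s = x.sum(axis=0)
            A = np.array([[t_i * np.sum(w * mu * mu), t_i * np.sum(w * mu)],
                          [t_i * np.sum(w * mu),      t_i * np.sum(w)]])
            rhs = np.array([np.sum(w * mu * s), np.sum(w * s)])
            gains[i], biases[i] = np.linalg.solve(A, rhs)
    return mu, var, gains, biases

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_mu = rng.uniform(0.3, 0.7, size=16)                  # canonical lip pattern
    utts = []
    for _ in range(5):
        a, b = rng.uniform(0.5, 1.5), rng.uniform(-0.2, 0.2)  # contrast / brightness
        z = true_mu + 0.05 * rng.standard_normal((40, 16))
        utts.append(a * z + b)
    ll0 = log_likelihood(utts, np.zeros(16), np.ones(16), np.ones(5), np.zeros(5))
    mu, var, gains, biases = normalized_training(utts)
    print("log-likelihood before/after:",
          round(float(ll0), 1),
          round(float(log_likelihood(utts, mu, var, gains, biases)), 1))

In the paper's full method, per the abstract, the same kind of alternation is embedded in HMM parameter estimation itself and also covers lip-location normalization, with the maximum-likelihood formulation guaranteeing a non-decreasing training likelihood at each iteration.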
Type Conference
Year 2000
Where ICIP
Authors Yoshihiko Nankaku, Keiichi Tokuda, Tadashi Kitamura, Takao Kobayashi