AMFG
2003
IEEE

Shape and appearance models of talking faces for model-based tracking

This article presents a system that can recover and track the 3D speech movements of a speaker’s face for each image of a monocular sequence. A speaker-specific face model is used for tracking: model parameters are extracted from each image by an analysis-by-synthesis loop. To handle both the individual specificities of the speaker’s articulation and the complexity of the facial deformations during speech, an accurate 3D model of the face geometry and an appearance model are built from real data. The geometric model is linearly controlled by only seven articulatory parameters. Appearance is seen either as a classical texture map or through the local appearance of a relevant subset of 3D points. We compare several appearance models: they are either constant or depend linearly on the articulatory parameters. We evaluate these different appearance models with ground truth data.
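The abstract states that face geometry is controlled linearly by seven articulatory parameters. A minimal sketch of such a linear shape model is given below; the mesh size, the random basis, and all function names are assumptions for illustration — only the count of seven parameters comes from the abstract.

```python
import numpy as np

N_VERTICES = 500   # assumed mesh size (not from the paper)
N_PARAMS = 7       # articulatory parameters, as stated in the abstract

# Placeholder data standing in for a model learned from real measurements:
# a neutral face shape and one 3D deformation mode per articulatory parameter.
rng = np.random.default_rng(0)
mean_shape = rng.normal(size=(N_VERTICES, 3))       # neutral geometry
basis = rng.normal(size=(N_PARAMS, N_VERTICES, 3))  # deformation modes

def synthesize(params):
    """Vertex positions = mean shape + linear combination of deformation modes."""
    params = np.asarray(params, dtype=float)
    return mean_shape + np.tensordot(params, basis, axes=1)

# An analysis-by-synthesis tracker would search for the parameter vector whose
# synthesized shape (and appearance) best matches each video frame; here we
# only show the forward synthesis step.
neutral = synthesize(np.zeros(N_PARAMS))
```

Setting all seven parameters to zero reproduces the neutral shape, and each parameter scales its deformation mode independently — the linearity that makes the analysis-by-synthesis search tractable.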
Matthias Odisio, Gérard Bailly
Type Conference
Year 2003
Where AMFG