Occluded Facial Expression Tracking

The work presented here takes place in the field of computer-aided analysis of facial expressions displayed in sign language videos. We use Active Appearance Models (AAMs) to model a face and the variations of shape and texture caused by expressions. The inverse compositional algorithm is used to accurately fit an AAM to the face seen in each video frame. In sign language communication, the signer's face is frequently occluded, mainly by the hands, so a facial expression tracker must be robust to occlusions. We propose to rely on a robust variant of the AAM fitting algorithm that explicitly models the noise introduced by occlusions. Our main contribution is the automatic detection of hand occlusions. The idea is to model the behavior of the fitting algorithm on unoccluded faces, by means of residual image statistics, and to detect occlusions as whatever is not explained by this model. We use residual parameters with respect to the fitting iteration, i.e., the AAM distance ...
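The occlusion-detection idea described in the abstract can be illustrated with a small sketch. The paper models residual statistics per fitting iteration; the simplified version below (an assumption, not the authors' exact formulation) learns per-pixel mean and standard deviation of the AAM residual image on unoccluded faces, then flags pixels whose residual is not explained by that model as occluded and downweights them for a robust fitting step. All function names here are illustrative.

```python
import numpy as np

def fit_residual_model(residuals):
    """Estimate per-pixel residual statistics from fits on unoccluded faces.

    residuals: (n_frames, n_pixels) array of AAM residual images.
    Returns per-pixel mean and standard deviation.
    """
    mu = residuals.mean(axis=0)
    sigma = residuals.std(axis=0) + 1e-8  # guard against zero variance
    return mu, sigma

def occlusion_mask(residual, mu, sigma, k=3.0):
    """Flag pixels whose residual deviates more than k sigmas from the
    unoccluded-face model; these are treated as occluded (e.g. by a hand)."""
    z = np.abs(residual - mu) / sigma
    return z > k

def robust_weights(residual, mu, sigma, k=3.0):
    """Binary weights for a robust (weighted) AAM fitting update:
    occluded pixels get weight 0 so they do not drive the parameter update."""
    return np.where(occlusion_mask(residual, mu, sigma, k), 0.0, 1.0)
```

In a tracker, the weights would multiply each pixel's contribution in the inverse compositional update, so the fit is driven only by pixels the unoccluded-face model can explain.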
Hugo Mercier, Julien Peyras, Patrice Dalle
Added: 09 Jun 2010
Updated: 09 Jun 2010
Type: Conference
Year: 2007
Where: SCIA