
ICIP 2005 (IEEE)

Fusion of multiple viewpoint information towards 3D face robust orientation detection

This paper presents a novel approach to estimating the head pose and 3D face orientation of several people in low-resolution sequences from multiple calibrated cameras. Spatial redundancy is exploited: the heads of people in the scene are detected and geometrically approximated by an ellipsoid using voxel reconstruction and moment analysis. Skin patches of each detected head are then located in each camera view. Data fusion is performed by back-projecting the skin patches from the individual images onto the estimated 3D head model, providing a synthetic reconstruction of the head's appearance. Finally, these data are processed in a pattern analysis framework, yielding a reliable and robust estimate of face orientation. Tracking over time is performed by Kalman filtering. Results demonstrate the effectiveness of the proposed algorithm in a SmartRoom scenario.
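The ellipsoid approximation mentioned in the abstract can be illustrated with a short moment-analysis sketch: the first-order moment of the occupied voxels gives the centroid, and the eigendecomposition of the second-order central moments gives the ellipsoid's principal axes and semi-axis lengths. This is a minimal illustration of the general technique, not the authors' implementation; the function name and the synthetic point cloud are hypothetical.

```python
import numpy as np

def fit_ellipsoid(voxels):
    """Approximate a voxel cloud (e.g. a detected head) by an ellipsoid
    via moment analysis.

    voxels: (N, 3) array of occupied voxel centers.
    Returns the centroid (first-order moment), the principal axis
    directions, and the semi-axis lengths derived from the
    second-order central moments.
    """
    centroid = voxels.mean(axis=0)                # first-order moment
    centered = voxels - centroid
    cov = centered.T @ centered / len(voxels)     # second-order central moments
    eigvals, eigvecs = np.linalg.eigh(cov)        # principal axes (ascending)
    semi_axes = np.sqrt(np.maximum(eigvals, 0.0)) # std. dev. along each axis
    return centroid, eigvecs, semi_axes

# Illustrative use on a synthetic ellipsoidal cloud (not real voxel data)
rng = np.random.default_rng(0)
pts = rng.normal(size=(5000, 3)) * np.array([3.0, 2.0, 1.0])
c, axes, lengths = fit_ellipsoid(pts)
```

For a Gaussian cloud stretched by factors (3, 2, 1), the recovered semi-axis lengths approximate (1, 2, 3) in ascending order, confirming that the moment analysis recovers the cloud's principal extents.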
Type Conference
Year 2005
Where ICIP
Authors Cristian Canton-Ferrer, Josep R. Casas, Montse Pardàs