3DPVT
2004
IEEE

Markerless Human Motion Transfer

In this paper we develop a computer-vision-based system to transfer human motion from one subject to another. Our system uses a network of eight calibrated and synchronized cameras. We first build detailed kinematic models of the subjects based on our algorithms for extracting shape from silhouette across time [6]. These models are then used to capture the motion (joint angles) of the subjects in new video sequences. Finally, we describe an image-based rendering algorithm to render the captured motion applied to the articulated model of another person. Our rendering algorithm uses an ensemble of spatially and temporally distributed images to generate photo-realistic video of the transferred motion. We demonstrate the performance of the system by rendering throwing and kung fu motions on subjects who did not perform them.
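The shape-from-silhouette step can be illustrated with a minimal visual-hull carving sketch: a candidate 3D point is kept only if it projects inside the silhouette in every calibrated camera view. This is the standard static visual-hull test, not the paper's across-time algorithm; the function and variable names below are illustrative assumptions, not from the paper.

```python
import numpy as np

def carve_visual_hull(silhouettes, projections, grid_points):
    """Keep the 3D points whose projection lands inside every silhouette.

    silhouettes: list of boolean images, sil[row, col] = True on the subject
    projections: list of 3x4 camera projection matrices (one per silhouette)
    grid_points: (N, 3) array of candidate voxel centers
    Returns an (N,) boolean occupancy mask (the visual-hull estimate).
    """
    n = len(grid_points)
    occupied = np.ones(n, dtype=bool)
    homog = np.hstack([grid_points, np.ones((n, 1))])  # homogeneous coords

    for sil, P in zip(silhouettes, projections):
        uvw = homog @ P.T                      # project into this camera
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)  # image column
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)  # image row
        h, w = sil.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        visible = np.zeros(n, dtype=bool)
        visible[inside] = sil[v[inside], u[inside]]
        occupied &= visible                    # intersect across cameras
    return occupied
```

With eight real cameras, as in the paper's setup, the same intersection runs over eight silhouette/projection pairs; the carved voxel set is the raw shape estimate that a kinematic model would then be fitted to.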
Added 20 Aug 2010
Updated 20 Aug 2010
Type Conference
Year 2004
Where 3DPVT
Authors German K. M. Cheung, Simon Baker, Jessica K. Hodgins, Takeo Kanade