Reconstructing non-stationary articulated objects in monocular video using silhouette information

This paper presents an approach to reconstruct non-stationary, articulated objects from silhouettes obtained with a monocular video sequence. We introduce the concept of motion-blurred scene occupancies, a direct analogy of motion-blurred images but in a 3D object scene occupancy space, resulting from the motion/deformation of the object. Our approach starts with an image-based fusion step that combines color and silhouette information from multiple views. To this end we propose a novel construct: the temporal occupancy point (TOP), which is the estimated 3D scene location of a silhouette pixel and carries information about the duration of time it is occupied. Instead of explicitly computing the TOP in 3D space, we directly obtain its imaged (projected) locations in each view. This enables us to handle monocular video and arbitrary camera motion in scenarios where complete camera calibration information may not be available. The result is a set of blurred scene occupancy images ...
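As a rough illustration of the temporal-occupancy idea, the Python sketch below (function and variable names are my own, not from the paper) accumulates binary silhouette masks over a window of frames into a per-pixel occupancy-duration image, an image-space analogue of the blurred scene occupancies described above. The paper's temporal occupancy points additionally carry estimated 3D scene locations obtained by projection, which this toy version omits.

import numpy as np

# Hypothetical sketch: accumulate binary silhouette masks into a
# "motion blurred" occupancy image. Each pixel stores the fraction
# of frames in which the moving object covered it, a crude stand-in
# for the occupancy-duration information carried by the paper's
# temporal occupancy points (TOPs).
def blurred_occupancy(silhouettes):
    stack = np.stack(silhouettes).astype(np.float32)  # shape (T, H, W)
    return stack.mean(axis=0)  # per-pixel occupancy duration in [0, 1]

# Example: a 3x3 square translating rightward across 5 frames.
frames = []
for t in range(5):
    mask = np.zeros((8, 8), dtype=bool)
    mask[2:5, t:t + 3] = True
    frames.append(mask)

occ = blurred_occupancy(frames)
print(occ[3])  # pixels swept by the object for longer have higher values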
Type: Conference
Year: 2008
Where: CVPR (IEEE)
Authors: Saad M. Khan, Mubarak Shah