Recovering Occlusion Boundaries from a Single Image
ICCV 2007, IEEE

Occlusion reasoning, necessary for tasks such as navigation and object search, is an important aspect of everyday life and a fundamental problem in computer vision. We believe that the amazing ability of humans to reason about occlusions from one image is based on an intrinsically 3D interpretation. In this paper, our goal is to recover the occlusion boundaries and depth ordering of free-standing structures in the scene. Our approach is to learn to identify and label occlusion boundaries using the traditional edge and region cues together with 3D surface and depth cues. Since some of these cues require good spatial support (i.e., a segmentation), we gradually create larger regions and use them to improve inference over the boundaries. Our experiments demonstrate the power of a scene-based approach to occlusion reasoning.
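The abstract describes an iterative scheme: start from an oversegmentation, score each boundary with a learned classifier over edge, region, and 3D cues, dissolve boundaries judged non-occluding so that later rounds get larger spatial support, and keep the surviving boundaries as occlusion boundaries. The loop can be sketched as follows; the helper names, the union-find bookkeeping, and the toy feature-difference scoring rule are illustrative assumptions, not the authors' implementation.

```python
def boundary_strength(region_a, region_b, features):
    """Stand-in for the learned boundary classifier: here, just the
    absolute difference of a single per-region cue value."""
    return abs(features[region_a] - features[region_b])


def recover_boundaries(adjacency, features, threshold=0.1, max_rounds=10):
    """Iteratively merge regions across weak boundaries; return the
    surviving boundaries (as sorted region-id pairs) as occlusion boundaries.

    adjacency: list of (region_id, region_id) pairs sharing a boundary
    features:  dict mapping region_id -> scalar cue value (toy assumption)
    """
    # Union-find over region ids, so merged regions act as one larger region.
    parent = {r: r for r in features}

    def find(r):
        while parent[r] != r:
            parent[r] = parent[parent[r]]  # path compression
            r = parent[r]
        return r

    for _ in range(max_rounds):
        merged = False
        for a, b in adjacency:
            ra, rb = find(a), find(b)
            if ra != rb and boundary_strength(ra, rb, features) < threshold:
                parent[rb] = ra  # weak boundary: treat as non-occluding, merge
                merged = True
        if not merged:  # converged: no boundary fell below threshold
            break

    # Boundaries between distinct final regions are kept as occlusion boundaries.
    return {tuple(sorted((find(a), find(b))))
            for a, b in adjacency if find(a) != find(b)}
```

A toy usage: with three regions where regions 0 and 1 share a similar cue value and region 2 differs sharply, the 0–1 boundary is dissolved and only the boundary against region 2 survives:

```python
edges = [(0, 1), (1, 2), (0, 2)]
cues = {0: 0.0, 1: 0.05, 2: 0.9}
recover_boundaries(edges, cues)  # {(0, 2)}
```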
Added: 14 Oct 2009
Updated: 30 Oct 2009
Type: Conference
Year: 2007
Where: ICCV
Authors: Derek Hoiem, Andrew N. Stein, Alexei A. Efros, Martial Hebert