TIP
2008

Geometry-Based Distributed Scene Representation With Omnidirectional Vision Sensors

Abstract: This paper addresses the problem of efficient representation of scenes captured by distributed omnidirectional vision sensors. We propose a novel geometric model to describe the correlation between different views of a 3-D scene. We first approximate the camera images by sparse expansions over a dictionary of geometric atoms. Since the most important visual features are likely to be equivalently dominant in images from multiple cameras, we model the correlation between corresponding features in different views by local geometric transforms. For the particular case of omnidirectional images, we define the multiview transforms between corresponding features based on shape and epipolar geometry constraints. We apply this geometric framework in the design of a distributed coding scheme with side information, which builds an efficient representation of the scene without communication between cameras. The Wyner
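The "sparse expansions over a dictionary of geometric atoms" step mentioned in the abstract can be sketched with a greedy matching-pursuit loop. The code below is a minimal illustration under assumed names (`matching_pursuit`, a toy orthonormal dictionary `D`), not the paper's actual geometric dictionary or implementation.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedily select n_atoms columns of `dictionary` (unit-norm atoms)
    to approximate `signal`; returns (coefficients, atom indices).
    Hypothetical sketch of sparse approximation, not the authors' code."""
    residual = signal.astype(float).copy()
    coeffs, indices = [], []
    for _ in range(n_atoms):
        # Correlate the residual with every atom and pick the best match.
        correlations = dictionary.T @ residual
        k = int(np.argmax(np.abs(correlations)))
        c = correlations[k]
        # Subtract the matched component and record the atom.
        residual -= c * dictionary[:, k]
        coeffs.append(c)
        indices.append(k)
    return np.array(coeffs), indices

# Toy example: with an orthonormal 2-atom dictionary, two iterations
# recover the signal exactly.
D = np.eye(2)                 # assumed toy dictionary (identity atoms)
x = np.array([3.0, 4.0])
coeffs, idx = matching_pursuit(x, D, 2)
approx = D[:, idx] @ coeffs   # sparse reconstruction of x
```

In the paper's setting the dictionary atoms are parameterized geometric features, so corresponding atoms across camera views can be related by local geometric transforms; the greedy selection principle is the same.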
Ivana Tosic, Pascal Frossard
Added: 15 Dec 2010
Updated: 15 Dec 2010
Type: Journal
Year: 2008
Where: TIP
Authors: Ivana Tosic, Pascal Frossard