PRL
2006

Modelling and accuracy estimation of a new omnidirectional depth computation sensor

Depth computation is an attractive feature in computer vision. Using traditional perspective cameras for panoramic perception requires several images, which usually implies either multiple cameras or a sensor with moving parts; moreover, misalignments can appear in non-static scenes. Omnidirectional cameras offer a much wider field of view (FOV) than perspective cameras, capture a full panoramic image at every instant, and alleviate problems due to occlusions. A practical way to obtain depth in computer vision is the use of structured light systems. This paper focuses on combining omnidirectional vision and structured light with the aim of obtaining panoramic depth information. The resulting sensor consists of a single catadioptric camera and an omnidirectional light projector. The model and prototype of this new omnidirectional depth computation sensor are presented, and the sensor's accuracy is estimated by means of laboratory experiments.
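The paper's catadioptric sensor model is not reproduced in the abstract, but structured-light depth recovery rests on triangulation: a camera ray and a projector ray separated by a known baseline intersect at the illuminated scene point. A minimal planar sketch of that principle (a generic illustration, not the authors' model; the function name and parameterization are hypothetical):

```python
import math

def triangulate_depth(baseline, theta_cam, theta_proj):
    """Range from the camera to the illuminated point, by planar triangulation.

    baseline   -- distance between camera and projector centres (same units as result)
    theta_cam  -- angle of the camera viewing ray, measured from the baseline (radians)
    theta_proj -- angle of the projector ray, measured from the baseline (radians)
    """
    # Rays are parallel when the angles coincide; no finite intersection exists.
    denom = math.sin(theta_proj - theta_cam)
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel; no intersection")
    # Law of sines on the camera-projector-point triangle.
    return baseline * math.sin(theta_proj) / denom

# Camera looks straight up (90 deg), projector ray at 135 deg, 1 m baseline:
# the rays meet at the point (0, 1) directly above the camera, so range is 1 m.
r = triangulate_depth(1.0, math.radians(90), math.radians(135))
```

The same geometry underlies the omnidirectional case; the catadioptric mirror and conical projector only change how `theta_cam` and `theta_proj` are recovered from image coordinates.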
Radu Orghidan, Joaquim Salvi, El Mustapha Mouaddib
Added 14 Dec 2010
Updated 14 Dec 2010
Type Journal