Abstract. This paper proposes a new vision-based system that can extract walking parameters from human demonstration. The system uses only a non-calibrated USB webcam connected to ...
Juan Pedro Bandera Rubio, Changjiu Zhou, Francisco...
— For a robot to understand a scene, it must infer and extract meaningful information from vision sensor data. Since scene understanding consists of recognizing several visual...
In this paper, we present a new method for vision-based, reactive robot navigation that enables a robot to move through the middle of free space by exploiting both central and peri...
— This paper presents a sensor fusion model developed for the 2005 Grand Challenge competition, an autonomous ground vehicle race across the Mojave desert organized by DARPA. T...
Alberto Broggi, Stefano Cattani, Pier Paolo Porta,...
This paper describes our approach to topological localization of a mobile robot using visual information. Our method was developed for ImageCL...