High speed obstacle avoidance using monocular vision and reinforcement learning

We consider the task of driving a remote control car at high speed through unstructured outdoor environments. We present an approach in which supervised learning is first used to estimate depths from single monocular images. The learning algorithm can be trained either on real camera images labeled with ground-truth distances to the closest obstacles, or on a training set consisting of synthetic graphics images. The resulting algorithm is able to learn monocular vision cues that accurately estimate the relative depths of obstacles in a scene. Reinforcement learning/policy search is then applied within a simulator that renders synthetic scenes, yielding a control policy that selects a steering direction as a function of the vision system's output. We present results evaluating the predictive ability of the algorithm both on held-out test data and in actual autonomous driving experiments.
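The two-stage pipeline the abstract describes — a learned per-direction depth estimator feeding a steering policy tuned by policy search in a simulator — can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the linear depth model, the stripe-based scene representation, the toy reward, and the random-search optimizer are all simplifying assumptions standing in for the paper's learned monocular cues, rendered synthetic scenes, and policy-search method.

```python
import random

random.seed(0)

def estimate_depths(stripe_features, weights):
    """Linear depth estimate per vertical image stripe (a placeholder for
    the paper's learned monocular depth estimator)."""
    return [sum(w * f for w, f in zip(weights, feats))
            for feats in stripe_features]

def steer(depths):
    """Policy: choose the steering direction (stripe index) whose
    estimated distance to the nearest obstacle is largest."""
    return max(range(len(depths)), key=lambda i: depths[i])

def make_scenes(n_scenes=50, n_stripes=5, n_features=3):
    """Synthetic scenes: per-stripe features plus ground-truth depths from
    a hidden linear model (illustrative stand-in for rendered graphics)."""
    hidden = [1.0, -0.5, 0.25]
    scenes = []
    for _ in range(n_scenes):
        feats = [[random.uniform(0, 1) for _ in range(n_features)]
                 for _ in range(n_stripes)]
        depths = [sum(h * f for h, f in zip(hidden, fs)) for fs in feats]
        scenes.append((feats, depths))
    return scenes

def episode_reward(weights, scenes):
    """Toy reward: +1 whenever the policy steers toward the stripe that
    is truly the deepest. Stands in for the simulator's driving reward."""
    reward = 0
    for feats, true_depths in scenes:
        best_true = max(range(len(true_depths)),
                        key=lambda i: true_depths[i])
        if steer(estimate_depths(feats, weights)) == best_true:
            reward += 1
    return reward

def policy_search(scenes, n_features=3, iters=200):
    """Simple random-search hill climbing over the estimator weights,
    a stand-in for the paper's policy-search procedure."""
    best_w = [random.uniform(-1, 1) for _ in range(n_features)]
    best_r = episode_reward(best_w, scenes)
    for _ in range(iters):
        cand = [w + random.gauss(0, 0.3) for w in best_w]
        r = episode_reward(cand, scenes)
        if r > best_r:
            best_w, best_r = cand, r
    return best_w, best_r
```

In this toy setup, policy search improves the weights until the chosen steering direction usually matches the truly deepest stripe; the real system instead optimizes driving performance in a graphics simulator and then transfers the policy to the physical car.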
Jeff Michels, Ashutosh Saxena, Andrew Y. Ng
Added 17 Nov 2009
Updated 17 Nov 2009
Type Conference
Year 2005
Where ICML
Authors Jeff Michels, Ashutosh Saxena, Andrew Y. Ng