KBS
2006

Robot docking based on omnidirectional vision and reinforcement learning

We present a system for visual robotic docking that couples an omnidirectional camera with the actor-critic reinforcement learning algorithm. The system enables a PeopleBot robot to locate and approach a table so that it can pick up an object from it using its pan-tilt camera. We use a staged approach, since the problem comprises distinct subtasks that rely on different sensors: the robot first wanders randomly until the table is located via a landmark; a network trained by reinforcement then allows the robot to turn towards and approach the table; once at the table, the robot picks up the object. We argue that our approach has considerable potential, as it allows robot control for navigation to be learned and removes the need for internal maps of the environment. This is achieved by allowing the robot to learn couplings between motor actions and the position of a landmark.
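The coupling between motor actions and landmark position that the abstract describes can be illustrated with a toy actor-critic sketch. This is an assumed reconstruction, not the paper's implementation: the one-dimensional state discretization (the landmark's horizontal image position in a few bins), the turn-left/turn-right actions, the reward, and the learning rates below are all illustrative choices.

```python
import math
import random

# Hypothetical docking subtask: the landmark's horizontal position in the
# omnidirectional image is discretized into N_STATES bins; the robot must
# turn until the landmark is centred (bin GOAL). All values are assumptions.
N_STATES, GOAL = 7, 3
ACTIONS = (-1, +1)                    # turn left / turn right (shifts the bin)
ALPHA, BETA, GAMMA = 0.1, 0.1, 0.9    # critic lr, actor lr, discount

V = [0.0] * N_STATES                            # critic: state-value table
prefs = [[0.0, 0.0] for _ in range(N_STATES)]   # actor: action preferences

def policy(s):
    # Sample an action from a softmax over the actor's preferences.
    exps = [math.exp(p) for p in prefs[s]]
    r = random.random() * sum(exps)
    return 0 if r < exps[0] else 1

def step(s, a):
    # Turning shifts the landmark bin; reward 1 when it reaches the centre.
    s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
    reward = 1.0 if s2 == GOAL else 0.0
    return s2, reward, s2 == GOAL

random.seed(0)
for _ in range(500):                  # training episodes
    s = random.randrange(N_STATES)
    for _ in range(20):               # step limit per episode
        a = policy(s)
        s2, reward, done = step(s, a)
        # TD error drives both the critic and the actor.
        delta = reward + (0.0 if done else GAMMA * V[s2]) - V[s]
        V[s] += ALPHA * delta         # critic update
        prefs[s][a] += BETA * delta   # actor update
        s = s2
        if done:
            break
```

After training, the actor's preferences in the bins adjacent to the goal favour the action that centres the landmark, which is the kind of sensor-to-motor coupling the paper argues replaces an internal map.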
David Muse, Cornelius Weber, Stefan Wermter
Added 13 Dec 2010
Updated 13 Dec 2010
Type Journal
Year 2006
Where KBS
Authors David Muse, Cornelius Weber, Stefan Wermter