Planning under Uncertainty for Robotic Tasks with Mixed Observability

Planning under Uncertainty for Robotic Tasks with Mixed Observability
Partially observable Markov decision processes (POMDPs) provide a principled, general framework for robot motion planning in uncertain and dynamic environments, and they have been applied to a variety of robotic tasks. However, solving POMDPs exactly is computationally intractable, and a major challenge is to scale up POMDP algorithms for complex robotic tasks. Robotic systems often have mixed observability: even when a robot's state is not fully observable, some components of the state may still be fully observable. We use a factored model to represent the fully and partially observable components of a robot's state separately and derive a compact, lower-dimensional representation of its belief space. This factored representation can be combined with any point-based algorithm to compute approximate POMDP solutions. Experimental results show that on standard test problems, our approach improves the performance of a leading point-based POMDP algorithm by many times.
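The key idea in the abstract is that when the state factors as s = (x, y), with x fully observable and y hidden, the belief only needs to track y. A minimal sketch of such a factored belief update is below; all names (`momdp_belief_update`, `T_x`, `T_y`, `O`) and the toy door-sensing model are illustrative assumptions, not the authors' code or notation.

```python
# Sketch of a factored (MOMDP-style) belief update, assuming the state
# splits as s = (x, y): x fully observable, y hidden. The belief is
# maintained over y only, so it lives in a lower-dimensional space.

def momdp_belief_update(b_y, x, a, x_next, obs, T_x, T_y, O):
    """Update the belief over the hidden component y only.

    b_y[y]                 : current belief over y
    T_x[(x, y, a)][x']     : transition model of the observable component
    T_y[(x, y, a, x')][y'] : transition model of the hidden component
    O[(x', y', a)][o]      : observation model
    """
    b_next = {}
    for y, p in b_y.items():
        px = T_x[(x, y, a)].get(x_next, 0.0)
        if p == 0.0 or px == 0.0:
            continue
        for y_next, py in T_y[(x, y, a, x_next)].items():
            po = O[(x_next, y_next, a)].get(obs, 0.0)
            b_next[y_next] = b_next.get(y_next, 0.0) + p * px * py * po
    norm = sum(b_next.values())  # normalize over observed (x', o)
    return {y: q / norm for y, q in b_next.items()} if norm > 0 else b_next

# Toy example: robot at a known cell; door state y in {open, closed} unknown.
x0, a = "cell_3", "sense"
T_x = {(x0, "open", a): {x0: 1.0}, (x0, "closed", a): {x0: 1.0}}
T_y = {(x0, "open", a, x0): {"open": 1.0},
       (x0, "closed", a, x0): {"closed": 1.0}}
O = {(x0, "open", a): {"see-open": 0.9, "see-closed": 0.1},
     (x0, "closed", a): {"see-open": 0.2, "see-closed": 0.8}}

b_post = momdp_belief_update({"open": 0.5, "closed": 0.5},
                             x0, a, x0, "see-open", T_x, T_y, O)
# b_post["open"] == 0.45 / 0.55, roughly 0.818
```

Because the summation runs only over the hidden component y (with x and x' observed directly), the update cost scales with |Y| rather than |X|·|Y|, which is the source of the compact belief representation the abstract describes.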
Added: 28 Jan 2011
Updated: 28 Jan 2011
Type: Journal
Year: 2010
Where: IJRR
Authors: Sylvie C. W. Ong, Shao Wei Png, David Hsu, Wee Sun Lee