Partially observable Markov decision processes (POMDPs) have been
successfully applied to various robot motion planning tasks under uncertainty.
However, most existing POMDP algorithms assume a discrete state space, while
the natural state space of a robot is often continuous. This paper presents Monte
Carlo Value Iteration (MCVI) for continuous-state POMDPs. MCVI samples both
a robot’s state space and the corresponding belief space, and avoids inefficient a
priori discretization of the state space as a grid. Both theoretical and preliminary
experimental results indicate that MCVI is a promising new approach for robot
motion planning under uncertainty.
Haoyu Bai, David Hsu, Wee Sun Lee, and Vien A. Ngo
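The belief-space sampling that MCVI builds on can be illustrated with a standard particle-filter belief update over a continuous state: a belief is represented by sampled states rather than a grid, propagated through a motion model, and reweighted by an observation likelihood. The 1-D motion and sensor models below are illustrative assumptions, not models from the paper.

```python
import math
import random

def motion_model(x, u, noise=0.1):
    # Move by control u with Gaussian process noise (illustrative model).
    return x + u + random.gauss(0.0, noise)

def obs_likelihood(z, x, sigma=0.5):
    # Gaussian observation likelihood p(z | x) (illustrative model).
    return math.exp(-0.5 * ((z - x) / sigma) ** 2)

def belief_update(particles, u, z):
    # Propagate each sampled state, weight by the observation, resample.
    propagated = [motion_model(x, u) for x in particles]
    weights = [obs_likelihood(z, x) for x in propagated]
    total = sum(weights)
    if total == 0.0:
        return propagated  # degenerate case: keep propagated samples
    return random.choices(propagated, weights=weights, k=len(particles))

random.seed(0)
# Uniform prior belief over a continuous 1-D state in [0, 10].
belief = [random.uniform(0.0, 10.0) for _ in range(500)]
for _ in range(3):
    # Stationary robot (u = 0) repeatedly observing z = 5.0.
    belief = belief_update(belief, u=0.0, z=5.0)
mean = sum(belief) / len(belief)
```

After a few observations the sampled belief concentrates near the observed state, without any a priori grid over the continuous state space.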