
PKDD 2010 (Springer)

Efficient Planning in Large POMDPs through Policy Graph Based Factorized Approximations

Partially observable Markov decision processes (POMDPs) are widely used for planning under uncertainty. In many applications, the huge size of the POMDP state space makes straightforward optimization of plans (policies) computationally intractable. To address this, we introduce an efficient POMDP planning algorithm. Many current methods store the policy partly through a set of "value vectors" which is updated at each iteration by planning one step further; the size of such vectors scales with the size of the state space, making computation intractable for large POMDPs. We store the policy as a graph only, which allows tractable approximations in each policy update step: for a state space described by several variables, we approximate beliefs over future states with factorized forms, minimizing Kullback-Leibler divergence to the non-factorized distributions. Our other speedup approximations include bounding potential rewards. We demonstrate the advantage of our method in several rei...
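
To illustrate the factorized-belief idea described in the abstract, the following minimal Python sketch (not the authors' implementation; variable names and shapes are illustrative assumptions) approximates a joint belief over several state variables by the product of its marginals, which is the fully factorized distribution minimizing KL(p || q) to the original joint.

import numpy as np

def factorize_belief(joint):
    """Approximate a joint belief (an n-dimensional array over state variables)
    by the product of its marginals, the KL(p||q)-optimal fully factorized form."""
    marginals = []
    for axis in range(joint.ndim):
        other_axes = tuple(a for a in range(joint.ndim) if a != axis)
        marginals.append(joint.sum(axis=other_axes))
    # Outer product of the marginals gives the factorized approximation.
    q = marginals[0]
    for m in marginals[1:]:
        q = np.multiply.outer(q, m)
    return marginals, q

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two distributions of the same shape."""
    p, q = p.ravel(), q.ravel()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps))))

# Example: a belief over two correlated binary state variables.
belief = np.array([[0.4, 0.1],
                   [0.1, 0.4]])
marginals, approx = factorize_belief(belief)
print("marginals:", marginals)
print("KL(joint || factorized) =", kl_divergence(belief, approx))

This sketch only shows the approximation step itself; in the paper's setting such factorized beliefs are propagated through the policy graph during each policy update rather than computed for a single static distribution.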
Added 14 Feb 2011
Updated 14 Feb 2011
Type Conference
Year 2010
Where PKDD
Authors Joni Pajarinen, Jaakko Peltonen, Ari Hottinen, Mikko A. Uusitalo