ICML 2006
Fast direct policy evaluation using multiscale analysis of Markov diffusion processes

Policy evaluation is a critical step in the approximate solution of large Markov decision processes (MDPs), typically requiring O(|S|^3) time to directly solve the Bellman system of |S| linear equations (where |S| is the state-space size). In this paper we apply a recently introduced multiscale framework for analysis on graphs to design a faster algorithm for policy evaluation. For a fixed policy π, this framework efficiently constructs a multiscale decomposition of the random walk P^π associated with the policy π. This enables efficient computation of medium- and long-term state distributions, approximation of value functions, and direct computation of the potential operator (I - P^π)^{-1} needed to solve Bellman's equation. We show that even a preliminary non-optimized version of the solver competes with highly optimized iterative techniques, and can be computed in time O(|S| log^2 |S|).
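The O(|S|^3) baseline the abstract refers to can be sketched as a direct linear solve. The snippet below is not the paper's multiscale solver; it is a minimal illustration of the discounted Bellman system (I - γP_π)V = R_π being solved directly, using a toy transition matrix and reward vector invented for this example (a discount γ < 1 keeps the system invertible).

```python
import numpy as np

def evaluate_policy_direct(P_pi, R_pi, gamma=0.95):
    """Direct policy evaluation: solve (I - gamma * P_pi) V = R_pi.

    This is the O(|S|^3) dense solve that multiscale methods aim to beat.
    P_pi is the |S| x |S| transition matrix of the fixed policy; R_pi is
    the expected one-step reward per state. Both are illustrative here.
    """
    n = P_pi.shape[0]
    return np.linalg.solve(np.eye(n) - gamma * P_pi, R_pi)

# Toy 3-state random walk under a fixed policy; reward only in state 2.
P_pi = np.array([[0.50, 0.50, 0.00],
                 [0.25, 0.50, 0.25],
                 [0.00, 0.50, 0.50]])
R_pi = np.array([0.0, 0.0, 1.0])

V = evaluate_policy_direct(P_pi, R_pi)
```

The returned vector satisfies the Bellman fixed-point equation V = R_π + γP_πV; for state spaces where |S| is large, the cubic cost of this solve is exactly what motivates the multiscale decomposition described above.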
Mauro Maggioni, Sridhar Mahadevan
Added: 17 Nov 2009
Updated: 17 Nov 2009
Type: Conference
Year: 2006
Where: ICML
Authors: Mauro Maggioni, Sridhar Mahadevan