

Random Sampling of States in Dynamic Programming

We combine three threads of research on approximate dynamic programming: sparse random sampling of states, value function and policy approximation using local models, and the use of local trajectory optimizers to globally optimize a policy and its associated value function. Our focus is on finding steady-state policies for deterministic, time-invariant, discrete-time control problems with continuous states and actions, as often found in robotics. In this paper we show that this combination allows us to solve problems we could not solve previously.
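To make the sampled-state idea concrete, below is a minimal sketch (an illustration, not the authors' implementation) of dynamic programming restricted to randomly sampled states: the value function is stored only at the samples and evaluated elsewhere by a simple nearest-neighbor local model, with Bellman backups swept over the samples. The 1-D double-integrator dynamics, quadratic cost, and all constants are assumptions chosen for illustration; the paper's richer local models and local trajectory optimizers are not reproduced here.

# Minimal sketch: approximate dynamic programming over sparsely sampled states.
# The task (1-D double integrator), cost, constants, and nearest-neighbor local
# model are illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

DT, GAMMA = 0.05, 0.99
N_STATES, N_ACTIONS, N_SWEEPS = 500, 21, 200

# Sparse random sampling of states (position, velocity) in the region of interest.
states = rng.uniform(low=[-2.0, -2.0], high=[2.0, 2.0], size=(N_STATES, 2))
actions = np.linspace(-1.0, 1.0, N_ACTIONS)   # discretized control set
V = np.zeros(N_STATES)                        # value stored only at sampled states

def dynamics(s, a):
    # Deterministic double-integrator step: x' = x + v*dt, v' = v + a*dt.
    x, v = s[..., 0], s[..., 1]
    return np.stack([x + v * DT, v + a * DT], axis=-1)

def cost(s, a):
    # Quadratic cost on position, velocity, and control effort.
    return (s[..., 0] ** 2 + 0.1 * s[..., 1] ** 2 + 0.01 * a ** 2) * DT

def value(query):
    # Local model of V: the value at the nearest sampled state.
    d = np.linalg.norm(states[None, :, :] - query[:, None, :], axis=-1)
    return V[np.argmin(d, axis=1)]

# Sweeps of Bellman backups restricted to the sampled states.
for _ in range(N_SWEEPS):
    q = np.empty((N_ACTIONS, N_STATES))
    for i, a in enumerate(actions):
        q[i] = cost(states, a) + GAMMA * value(dynamics(states, a))
    V = q.min(axis=0)   # greedy backup: minimum cost-to-go over actions

print("mean cost-to-go over sampled states:", V.mean())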
Type: Conference
Year: 2007
Venue: NIPS
Authors: Christopher G. Atkeson, Benjamin Stephens