
Policy Search by Dynamic Programming

We consider the policy search approach to reinforcement learning. We show that if a “baseline distribution” is given (indicating roughly how often we expect a good policy to visit each state), then we can derive a policy search algorithm that terminates in a finite number of steps, and for which we can provide non-trivial performance guarantees. We also demonstrate this algorithm on several grid-world POMDPs, a planar biped walking robot, and a double-pole balancing problem.
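The core of the approach is a backward pass: at each time step, fix a policy that maximizes value given the already-chosen later-step policies. A minimal tabular sketch of that backward-induction idea is below; the MDP (3 states, 2 actions, horizon 4) is a made-up toy, and the sketch omits the part that makes PSDP distinctive — in the paper, each per-step policy is fit from a restricted policy class using states sampled from the baseline distribution, rather than computed exactly over all states.

```python
import numpy as np

# Hypothetical tiny MDP: 3 states, 2 actions, horizon 4.
# P[a, s, s2] = transition probability; R[s, a] = immediate reward.
n_states, n_actions, horizon = 3, 2, 4
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
R = rng.random((n_states, n_actions))

# Backward pass (the dynamic-programming core): at step t, pick the
# action maximizing reward plus the value of the already-fixed tail
# policy pi_{t+1}, ..., pi_{H-1}.
V = np.zeros(n_states)                      # value of the empty tail policy
policy = np.zeros((horizon, n_states), dtype=int)
for t in reversed(range(horizon)):
    Q = R + np.einsum('asn,n->sa', P, V)    # Q[s, a] under the tail policy
    policy[t] = Q.argmax(axis=1)            # greedy per-step policy
    V = Q.max(axis=1)                       # value including step t
```

The loop runs for exactly `horizon` steps, matching the finite-termination property the abstract mentions; a nonstationary policy (one decision rule per step) falls out naturally.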
Added: 31 Oct 2010
Updated: 31 Oct 2010
Type: Conference
Year: 2003
Where: NIPS
Authors: J. Andrew Bagnell, Sham Kakade, Andrew Y. Ng, Jeff G. Schneider