Reinforcement Learning in POMDP's via Direct Gradient Ascent

This paper discusses theoretical and experimental aspects of gradient-based approaches to the direct optimization of policy performance in controlled POMDPs. We introduce GPOMDP, a REINFORCE-like algorithm for estimating an approximation to the gradient of the average reward as a function of the parameters of a stochastic policy. The algorithm's chief advantages are that it requires only a single sample path of the underlying Markov chain, it uses only one free parameter β ∈ [0, 1), which has a natural interpretation in terms of a bias-variance trade-off, and it requires no knowledge of the underlying state. We prove convergence of GPOMDP and show how the gradient estimates produced by GPOMDP can be used in a conjugate-gradient procedure to find local optima of the average reward.
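The abstract describes the core GPOMDP loop: accumulate a β-discounted eligibility trace of policy score vectors along a single sample path, and average the reward-weighted trace to estimate the gradient of the average reward. Below is a minimal sketch of that loop in Python, assuming a hypothetical POMDP interface (`env.reset()` returning an observation index, `env.step(action)` returning the next observation and a reward) and a tabular softmax policy; neither interface nor parameterization is specified by the paper.

```python
import numpy as np

def gpomdp_gradient(env, theta, beta=0.9, T=10000, rng=None):
    """One-pass GPOMDP gradient estimate from a single sample path.

    Sketch only: `env` is a hypothetical POMDP exposing reset() -> obs
    and step(action) -> (obs, reward); theta (n_obs x n_actions)
    parameterizes a softmax policy over actions given the observation.
    beta in [0, 1) controls the bias-variance trade-off of the estimate.
    """
    rng = rng or np.random.default_rng()
    n_obs, n_actions = theta.shape
    z = np.zeros_like(theta)      # eligibility trace z_t
    grad = np.zeros_like(theta)   # running gradient estimate

    obs = env.reset()
    for t in range(T):
        # Stochastic policy: softmax over theta[obs, :].
        logits = theta[obs] - theta[obs].max()
        probs = np.exp(logits) / np.exp(logits).sum()
        action = rng.choice(n_actions, p=probs)

        # grad of log mu(action | theta, obs) for the softmax policy:
        # indicator minus probabilities in the row of the current obs.
        score = np.zeros_like(theta)
        score[obs] = -probs
        score[obs, action] += 1.0

        obs, reward = env.step(action)

        # GPOMDP updates: discounted trace, then incremental average
        # of the reward-weighted trace.
        z = beta * z + score
        grad += (reward * z - grad) / (t + 1)
    return grad
```

As the abstract notes, an estimate like the one returned here would then feed a conjugate-gradient ascent procedure to seek a local optimum of the average reward.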
Added 17 Nov 2009
Updated 17 Nov 2009
Type Conference
Year 2000
Where ICML
Authors Jonathan Baxter, Peter L. Bartlett