Sciweavers

61 search results - page 1 / 13
ICML 2004 (IEEE)
Convergence of synchronous reinforcement learning with linear function approximation
Synchronous reinforcement learning (RL) algorithms with linear function approximation are representable as inhomogeneous matrix iterations of a special form (Schoknecht & Merk...
Artur Merke, Ralf Schoknecht
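The abstract above notes that synchronous RL with linear function approximation can be written as an inhomogeneous matrix iteration. A minimal sketch of that reduction, assuming a hypothetical 3-state chain, features, and step size of my own choosing (not the paper's setup):

```python
import numpy as np

# Illustration only (not the paper's code): synchronous TD(0) with linear
# function approximation collapses to the inhomogeneous matrix iteration
#   theta_{k+1} = A theta_k + b.

P = np.array([[0.5, 0.5, 0.0],   # transition matrix of a 3-state chain
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])
r = np.array([0.0, 1.0, 0.0])    # expected one-step rewards
Phi = np.array([[1.0, 0.0],      # feature matrix: 3 states, 2 features
                [1.0, 1.0],
                [0.0, 1.0]])
gamma, alpha = 0.9, 0.1
D = np.diag([1/3, 1/3, 1/3])     # uniform state weighting (stationary for this P)

# Synchronous TD(0), theta <- theta + alpha * Phi^T D (r + gamma*P*Phi*theta - Phi*theta),
# written as theta_{k+1} = A theta_k + b:
A = np.eye(2) + alpha * Phi.T @ D @ (gamma * P @ Phi - Phi)
b = alpha * Phi.T @ D @ r

theta = np.zeros(2)
for _ in range(2000):
    theta = A @ theta + b        # one synchronous update

# The iteration converges iff the spectral radius of A is below 1;
# the fixed point solves (I - A) theta* = b.
theta_star = np.linalg.solve(np.eye(2) - A, b)
print("spectral radius:", max(abs(np.linalg.eigvals(A))))
print("theta:", theta, "fixed point:", theta_star)
```

Viewing the algorithm this way reduces convergence questions to the spectral properties of the iteration matrix A.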
NIPS 2001
Rates of Convergence of Performance Gradient Estimates Using Function Approximation and Bias in Reinforcement Learning
We address two open theoretical questions in Policy Gradient Reinforcement Learning. The first concerns the efficacy of using function approximation to represent the state action ...
Gregory Z. Grudic, Lyle H. Ungar
ICML 2007 (IEEE)
Tracking value function dynamics to improve reinforcement learning with piecewise linear function approximation
Reinforcement learning algorithms can become unstable when combined with linear function approximation. Algorithms that minimize the mean-square Bellman error are guaranteed to co...
Chee Wee Phua, Robert Fitch
ICML 2003 (IEEE)
TD(0) Converges Provably Faster than the Residual Gradient Algorithm
In Reinforcement Learning (RL) there has been some experimental evidence that the residual gradient algorithm converges more slowly than the TD(0) algorithm. In this paper, we use the ...
Ralf Schoknecht, Artur Merke
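The two update rules being compared can be illustrated side by side on a toy problem; the deterministic chain, one-hot features, and step size below are my own assumptions, not the paper's experiments:

```python
import numpy as np

# Toy comparison (my own illustration) of the two updates:
#   TD(0):             theta += alpha * delta * phi(s)
#   residual gradient: theta += alpha * delta * (phi(s) - gamma * phi(s'))
# on a deterministic cycle 0 -> 1 -> 2 -> 0 with tabular (one-hot) features.

gamma, alpha = 0.9, 0.1
next_s = [1, 2, 0]
reward = [0.0, 1.0, 0.0]

def phi(s):
    f = np.zeros(3)
    f[s] = 1.0
    return f

def run(residual, steps=5000):
    theta, s = np.zeros(3), 0
    for _ in range(steps):
        s2 = next_s[s]
        delta = reward[s] + gamma * theta[s2] - theta[s]   # TD error
        grad = phi(s) - gamma * phi(s2) if residual else phi(s)
        theta += alpha * delta * grad
        s = s2
    return theta

td = run(residual=False)
rg = run(residual=True)

# True values: V = (I - gamma P)^{-1} r, since transitions are deterministic.
P = np.eye(3)[next_s]
v_true = np.linalg.solve(np.eye(3) - gamma * P, np.array(reward))
print("TD(0) error:   ", np.max(np.abs(td - v_true)))
print("residual error:", np.max(np.abs(rg - v_true)))
```

With deterministic transitions both rules share the same fixed point here, but after the same number of updates TD(0) ends up much closer to the true values, consistent with the paper's title.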
AAMAS 2007 (Springer)
Parallel Reinforcement Learning with Linear Function Approximation
In this paper, we investigate the use of parallelization in reinforcement learning (RL), with the goal of learning optimal policies for single-agent RL problems more quickly by us...
Matthew Grounds, Daniel Kudenko