Sciweavers

24 search results for "Technical Update: Least-Squares Temporal Difference Learning" (page 3 of 5)
ISDA 2009 (IEEE)
Postponed Updates for Temporal-Difference Reinforcement Learning
This paper presents postponed updates, a new strategy for TD methods that can improve sample efficiency without incurring the computational and space requirements of model-based ...
Harm van Seijen, Shimon Whiteson
COLT 2000 (Springer)
Bias-Variance Error Bounds for Temporal Difference Updates
We give the first rigorous upper bounds on the error of temporal-difference (TD) algorithms for policy evaluation as a function of the amount of experience. These upper bounds pr...
Michael J. Kearns, Satinder P. Singh
AAAI 2011
Differential Eligibility Vectors for Advantage Updating and Gradient Methods
In this paper we propose differential eligibility vectors (DEV) for temporal-difference (TD) learning, a new class of eligibility vectors designed to bring out the contribution of...
Francisco S. Melo
JMLR 2010
A Convergent Online Single Time Scale Actor Critic Algorithm
Actor-critic-based approaches were among the first to address reinforcement learning in a general setting. Recently, these algorithms have gained renewed interest due to their gen...
Dotan Di Castro, Ron Meir
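
None of the snippets above spells out the algorithm named in the search query, so purely as generic background (not taken from any of the listed papers) here is a minimal least-squares temporal-difference (LSTD) policy-evaluation sketch; the transition tuple format, the feature map phi, and the ridge term reg are illustrative assumptions.

import numpy as np

def lstd(transitions, phi, gamma=0.95, reg=1e-6):
    """Estimate weights w with V(s) ~= phi(s) @ w for a fixed policy.

    transitions: list of (s, r, s_next, done) tuples sampled from the policy.
    phi: feature map, state -> 1-D numpy array of length d.
    Solves A w = b with A = sum_t phi(s_t)(phi(s_t) - gamma*phi(s_{t+1}))^T
    and b = sum_t phi(s_t) r_t; the ridge term keeps A invertible on small samples.
    """
    d = len(phi(transitions[0][0]))
    A = reg * np.eye(d)            # regularized feature cross-moment matrix
    b = np.zeros(d)                # feature-weighted reward vector
    for s, r, s_next, done in transitions:
        f = phi(s)
        f_next = np.zeros(d) if done else phi(s_next)
        A += np.outer(f, f - gamma * f_next)
        b += r * f
    return np.linalg.solve(A, b)

With one-hot (tabular) features this closed-form solve recovers the certainty-equivalent value estimate for the sampled transitions, trading extra per-sample computation for the improved sample efficiency that incremental TD updates, discussed in the entries above, do not provide on their own.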