Sciweavers

Off-Policy Temporal Difference Learning with Function Approximation
ML 2002 · ACM
On Average Versus Discounted Reward Temporal-Difference Learning
We provide an analytical comparison between discounted and average reward temporal-difference (TD) learning with linearly parameterized approximations. We first consider the asympt...
John N. Tsitsiklis, Benjamin Van Roy
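For context on the two settings this paper compares, here is a minimal sketch of one TD(0) update in each, assuming linear features phi(s) and hypothetical function and parameter names (illustrative only, not code from the paper):

```python
import numpy as np

def td0_discounted(theta, phi_s, phi_next, r, gamma=0.99, alpha=0.01):
    # Discounted setting: bootstrap through the discounted next-state value.
    delta = r + gamma * phi_next @ theta - phi_s @ theta
    return theta + alpha * delta * phi_s

def td0_average_reward(theta, rho, phi_s, phi_next, r, alpha=0.01, beta=0.01):
    # Average-reward setting: subtract a running estimate rho of the average
    # reward and learn a differential value function instead.
    delta = r - rho + phi_next @ theta - phi_s @ theta
    theta = theta + alpha * delta * phi_s
    rho = rho + beta * (r - rho)
    return theta, rho
```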
ICML 2008 · IEEE
A worst-case comparison between temporal difference and residual gradient with linear function approximation
Residual gradient (RG) was proposed as an alternative to TD(0) for policy evaluation when function approximation is used, but there exists little formal analysis comparing them ex...
Lihong Li
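As a point of reference for the comparison above, a minimal sketch of the two update rules with linear function approximation (hypothetical names; the paper's worst-case analysis itself is not reproduced here):

```python
import numpy as np

def td0_step(theta, phi_s, phi_next, r, gamma=0.99, alpha=0.01):
    # Semi-gradient TD(0): treats the bootstrapped target as a constant.
    delta = r + gamma * phi_next @ theta - phi_s @ theta
    return theta + alpha * delta * phi_s

def residual_gradient_step(theta, phi_s, phi_next, r, gamma=0.99, alpha=0.01):
    # Residual gradient: exact gradient of the squared Bellman residual,
    # so the next-state features also appear in the update direction.
    delta = r + gamma * phi_next @ theta - phi_s @ theta
    return theta - alpha * delta * (gamma * phi_next - phi_s)
```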
CORR 2010 · Springer
Predictive State Temporal Difference Learning
We propose a new approach to value function approximation which combines linear temporal difference reinforcement learning with subspace identification. In practical applications...
Byron Boots, Geoffrey J. Gordon
ICML 2003 · IEEE
The Significance of Temporal-Difference Learning in Self-Play Training: TD-Rummy versus EVO-rummy
Reinforcement learning has been used for training game playing agents. The value function for a complex game must be approximated with a continuous function because the number of ...
Clifford Kotnik, Jugal K. Kalita
ICML 1999 · IEEE
Least-Squares Temporal Difference Learning
Excerpted from: Boyan, Justin. Learning Evaluation Functions for Global Optimization. Ph.D. thesis, Carnegie Mellon University, August 1998. (Available as Technical Report CMU-CS-...
Justin A. Boyan
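A minimal sketch of the batch LSTD solution the title refers to, assuming linear features and a list of (phi_s, r, phi_next) transitions (illustrative only; names are not taken from Boyan's thesis):

```python
import numpy as np

def lstd(transitions, n_features, gamma=0.99, reg=1e-3):
    # Accumulate A = sum phi (phi - gamma * phi')^T and b = sum r * phi,
    # then solve A theta = b in one shot instead of taking gradient steps.
    A = reg * np.eye(n_features)   # small ridge term keeps A invertible
    b = np.zeros(n_features)
    for phi_s, r, phi_next in transitions:
        A += np.outer(phi_s, phi_s - gamma * phi_next)
        b += r * phi_s
    return np.linalg.solve(A, b)
```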