Sciweavers

162 search results - page 3 / 33
» Off-Policy Temporal Difference Learning with Function Approx...
NIPS 2008
Temporal Difference Based Actor Critic Learning - Convergence and Neural Implementation
Actor-critic algorithms for reinforcement learning are achieving renewed popularity due to their good convergence properties in situations where other approaches often fail (e.g.,...
Dotan Di Castro, Dmitry Volkinshtein, Ron Meir
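The entry above concerns actor-critic methods driven by a temporal-difference error. As a rough illustration of the general idea (not the paper's specific algorithm or neural implementation), here is a minimal tabular actor-critic on a made-up 2-state, 2-action MDP; all dynamics, step sizes, and names are illustrative assumptions:

```python
import numpy as np

# Minimal tabular actor-critic sketch on a toy 2-state, 2-action MDP.
# The dynamics, rewards, and hyperparameters below are invented for
# illustration; they are not from the cited paper.
rng = np.random.default_rng(0)
n_states, n_actions = 2, 2
theta = np.zeros((n_states, n_actions))  # actor: softmax policy preferences
V = np.zeros(n_states)                   # critic: state-value estimates
alpha_actor, alpha_critic, gamma = 0.05, 0.1, 0.9

def softmax(prefs):
    e = np.exp(prefs - prefs.max())
    return e / e.sum()

def step(s, a):
    # Toy dynamics: action 1 in state 0 pays off; next state is random.
    r = 1.0 if (s == 0 and a == 1) else 0.0
    return r, int(rng.integers(n_states))

s = 0
for _ in range(5000):
    probs = softmax(theta[s])
    a = int(rng.choice(n_actions, p=probs))
    r, s_next = step(s, a)
    delta = r + gamma * V[s_next] - V[s]   # TD error drives both updates
    V[s] += alpha_critic * delta
    # Policy-gradient step for a softmax policy: grad log pi = onehot(a) - probs
    grad = -probs
    grad[a] += 1.0
    theta[s] += alpha_actor * delta * grad
    s = s_next

print(theta[0])  # preference for action 1 should dominate in state 0
```

The single TD error `delta` is used twice — once to move the critic's value estimate, once to scale the actor's policy-gradient step — which is the core coupling these algorithms share.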
ICML 2009
Regularization and feature selection in least-squares temporal difference learning
We consider the task of reinforcement learning with linear value function approximation. Temporal difference algorithms, and in particular the Least-Squares Temporal Difference (L...
J. Zico Kolter, Andrew Y. Ng
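The snippet above describes regularized least-squares temporal difference learning. Kolter & Ng work with l1 regularization (which yields feature selection); the sketch below uses an l2 (ridge) term instead, purely so the solution stays a one-line closed-form solve. The features, rewards, and constants are synthetic:

```python
import numpy as np

# Sketch of batch LSTD with an l2 (ridge) regularizer. The cited paper
# studies l1 regularization; l2 is substituted here only to keep the
# example to a single closed-form solve. All data below is synthetic.
rng = np.random.default_rng(1)
gamma, ridge, k = 0.9, 1e-3, 4

# Fake batch: n transitions with features phi(s), phi(s'), rewards r.
n = 200
phi = rng.normal(size=(n, k))
phi_next = rng.normal(size=(n, k))
r = phi @ np.array([1.0, -0.5, 0.0, 0.0]) + 0.1 * rng.normal(size=n)

# LSTD normal equations: A w = b, with
#   A = sum_t phi_t (phi_t - gamma * phi'_t)^T,   b = sum_t phi_t r_t
A = phi.T @ (phi - gamma * phi_next)
b = phi.T @ r
w = np.linalg.solve(A + ridge * np.eye(k), b)
print(w.shape)  # (4,) weight vector for the linear value estimate
```

The regularizer matters because `A` can be ill-conditioned when features are many or redundant — the setting where feature selection via l1 becomes attractive.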
JAIR 2010
Kalman Temporal Differences
This paper deals with value (and Q-) function approximation in deterministic Markovian decision processes (MDPs). A general statistical framework based on the Kalman filtering pa...
Matthieu Geist, Olivier Pietquin
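The Kalman-filtering view above treats the value-function parameters as the hidden state of a filter. A heavily simplified linear sketch of that idea (synthetic features, assumed noise levels, and a linear observation model — not the paper's general framework) looks like:

```python
import numpy as np

# Simplified Kalman-style TD sketch: the linear value weights w are the
# hidden state of a Kalman filter, and each reward is observed through
#   r_t ~ (phi(s_t) - gamma * phi(s_{t+1}))^T w + noise.
# Noise levels, features, and true_w are invented for illustration.
rng = np.random.default_rng(2)
k, gamma = 3, 0.9
w = np.zeros(k)            # filter mean: current weight estimate
P = np.eye(k)              # filter covariance over the weights
q, r_noise = 1e-4, 0.1     # assumed process / observation noise

true_w = np.array([0.5, -1.0, 0.25])
for _ in range(500):
    phi_s = rng.normal(size=k)
    phi_next = rng.normal(size=k)
    h = phi_s - gamma * phi_next          # scalar-observation vector
    reward = h @ true_w + 0.1 * rng.normal()
    # Standard Kalman predict/correct with a scalar observation:
    P = P + q * np.eye(k)                 # predict (random-walk weights)
    s_var = h @ P @ h + r_noise           # innovation variance
    K = P @ h / s_var                     # Kalman gain
    w = w + K * (reward - h @ w)          # correct with TD-like innovation
    P = P - np.outer(K, h @ P)

print(w)  # should approach true_w
```

A byproduct of the filtering view is the covariance `P`, which gives per-weight uncertainty that plain TD updates do not provide.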
CDC 2010 (IEEE)
Pathologies of temporal difference methods in approximate dynamic programming
Approximate policy iteration methods based on temporal differences are popular in practice, and have been tested extensively, dating to the early nineties, but the associated conve...
Dimitri P. Bertsekas
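The convergence pathologies referenced above are concrete and easy to reproduce. The classic "w, 2w" construction (in the style of Tsitsiklis & Van Roy, not taken from this paper) shows off-policy TD(0) with linear function approximation diverging even with zero rewards:

```python
# Classic "w, 2w" divergence sketch: two states with linear features
# phi = 1 and phi = 2, zero rewards, and TD(0) updates applied only to
# the transition from the phi=1 state to the phi=2 state (an off-policy
# sampling pattern). Constants chosen for illustration.
gamma, alpha = 0.95, 0.1
w = 1.0
history = [w]
for _ in range(50):
    td_error = 0.0 + gamma * (2 * w) - (1 * w)  # r + gamma*V(s') - V(s)
    w += alpha * td_error * 1                    # gradient wrt w is phi(s)=1
    history.append(w)
print(history[-1] > history[0])  # True: the weight grows without bound
```

Each update multiplies `w` by `1 + alpha*(2*gamma - 1)`, which exceeds 1 whenever `gamma > 0.5`, so the iteration diverges geometrically — the kind of pathology the paper's convergence analysis addresses.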
ATAL 2005 (Springer)
Improving reinforcement learning function approximators via neuroevolution
Reinforcement learning problems are commonly tackled with temporal difference methods, which use dynamic programming and statistical sampling to estimate the long-term value of ta...
Shimon Whiteson
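To make the neuroevolution idea above concrete, here is a toy sketch of evolving a function approximator's weights by black-box search rather than gradient descent, using negative squared Bellman error as the fitness. The approximator is linear (not a neural network), and the data, population sizes, and fitness choice are all illustrative assumptions, not Whiteson's method:

```python
import numpy as np

# Toy evolutionary search over value-function weights, in the spirit of
# combining evolution with TD-style objectives. Everything here (linear
# approximator, synthetic batch, fitness definition) is illustrative.
rng = np.random.default_rng(3)
k, gamma, pop_size, gens = 3, 0.9, 20, 40

phi = rng.normal(size=(100, k))
phi_next = rng.normal(size=(100, k))
reward = (phi - gamma * phi_next) @ np.array([1.0, 0.0, -0.5])

def fitness(w):
    # Negative mean squared TD (Bellman) error of the estimate phi @ w.
    td = reward + gamma * (phi_next @ w) - phi @ w
    return -np.mean(td ** 2)

pop = rng.normal(size=(pop_size, k))
for _ in range(gens):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]   # keep best half
    children = parents + 0.1 * rng.normal(size=parents.shape)
    pop = np.vstack([parents, children])                 # (mu + lambda)

best = pop[np.argmax([fitness(w) for w in pop])]
print(fitness(best) > fitness(np.zeros(k)))  # True: evolved weights beat zeros
```

Because fitness only requires evaluating a candidate, this kind of search sidesteps the gradient pathologies of TD updates, at the cost of sample efficiency — one motivation for hybrid approaches.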