Sciweavers

34 search results - page 6 / 7
» Using Temporal Neighborhoods to Adapt Function Approximators...
NIPS
1996
Multidimensional Triangulation and Interpolation for Reinforcement Learning
Dynamic Programming, Q-learning, and other discrete Markov Decision Process solvers can be applied to continuous d-dimensional state spaces by quantizing the state space into an arr...
Scott Davies
NIPS
1994
Reinforcement Learning with Soft State Aggregation
It is widely accepted that the use of more compact representations than lookup tables is crucial to scaling reinforcement learning (RL) algorithms to real-world problems. Unfortun...
Satinder P. Singh, Tommi Jaakkola, Michael I. Jord...
ATAL
2008
Springer
Sigma point policy iteration
In reinforcement learning, least-squares temporal difference methods (e.g., LSTD and LSPI) are effective, data-efficient techniques for policy evaluation and control with linear v...
Michael H. Bowling, Alborz Geramifard, David Winga...
ICML
2009
ACM
Binary action search for learning continuous-action control policies
Reinforcement Learning methods for controlling stochastic processes typically assume a small and discrete action space. While continuous action spaces are quite common in real-wor...
Jason Pazis, Michail G. Lagoudakis
ECML
2006
Springer
Task-Driven Discretization of the Joint Space of Visual Percepts and Continuous Actions
We target the problem of closed-loop learning of control policies that map visual percepts to continuous actions. Our algorithm, called Reinforcement Learning of Joint Classes (RLJ...
Sébastien Jodogne, Justus H. Piater