Sciweavers

124 search results - page 3 / 25
» Basis function construction for hierarchical reinforcement learning
ICML 2007 (IEEE)
Constructing basis functions from directed graphs for value function approximation
Basis functions derived from an undirected graph connecting nearby samples from a Markov decision process (MDP) have proven useful for approximating value functions. The success o...
Jeffrey Johns, Sridhar Mahadevan
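
For context, a minimal sketch of the undirected-graph construction this abstract builds on: sample states from an MDP, connect nearby samples, and use the smoothest eigenvectors of the normalized graph Laplacian as basis functions for least-squares value-function approximation. The 1-D chain, neighbour count, and number of basis functions below are illustrative assumptions; the paper's actual contribution (directed graphs) is not reproduced here.

```python
import numpy as np

def laplacian_basis(states, k_neighbors=3, num_basis=5):
    """Low-order eigenvectors of the normalized graph Laplacian as basis functions."""
    n = len(states)
    # Adjacency: connect each sampled state to its k nearest neighbours.
    dists = np.abs(states[:, None] - states[None, :])
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(dists[i])[1:k_neighbors + 1]:
            W[i, j] = W[j, i] = 1.0
    d = np.maximum(W.sum(axis=1), 1e-12)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(n) - D_inv_sqrt @ W @ D_inv_sqrt      # normalized Laplacian
    _, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, :num_basis]                    # smoothest eigenvectors

# Illustrative use: fit a value function on a 20-state chain by least squares.
states = np.arange(20, dtype=float)
Phi = laplacian_basis(states)
true_v = np.linspace(0.0, 1.0, 20) ** 2              # placeholder target values
w, *_ = np.linalg.lstsq(Phi, true_v, rcond=None)
v_hat = Phi @ w                                      # approximate value function
```
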
ICML 2007 (IEEE)
Learning state-action basis functions for hierarchical MDPs
This paper introduces a new approach to action-value function approximation by learning basis functions from a spectral decomposition of the state-action manifold. This paper exten...
Sarah Osentoski, Sridhar Mahadevan
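
A rough sketch of the idea named in this abstract, with assumed details: treat (state, action) pairs as graph nodes linked by sampled transitions, and use low-order eigenvectors of the normalized graph Laplacian as basis functions for the action-value function. The toy transitions and all parameter choices are illustrative, not the paper's construction.

```python
import numpy as np

def state_action_basis(transitions, n_states, n_actions, num_basis=4):
    """Laplacian eigenvector features over (state, action) nodes."""
    n = n_states * n_actions
    idx = lambda s, a: s * n_actions + a             # flatten (s, a) to a node id
    W = np.zeros((n, n))
    for (s, a, s_next, a_next) in transitions:       # connect consecutive pairs
        i, j = idx(s, a), idx(s_next, a_next)
        W[i, j] = W[j, i] = 1.0
    d = np.maximum(W.sum(axis=1), 1e-12)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(n) - D_inv_sqrt @ W @ D_inv_sqrt      # normalized Laplacian
    _, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, :num_basis]                    # one feature row per (s, a)

# Illustrative use on a tiny 4-state, 2-action problem with made-up transitions.
sampled = [(0, 0, 1, 0), (1, 0, 2, 1), (2, 1, 3, 0), (3, 0, 0, 1)]
Phi = state_action_basis(sampled, n_states=4, n_actions=2)
q_hat = Phi @ np.zeros(Phi.shape[1])                 # Q(s, a) = Phi(s, a) · w
```
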
AUSAI 2005 (Springer)
Global Versus Local Constructive Function Approximation for On-Line Reinforcement Learning
In order to scale to problems with large or continuous state-spaces, reinforcement learning algorithms need to be combined with function approximation techniques. The majority of...
Peter Vamplew, Robert Ollington
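
As background for this entry, a minimal sketch of the general pattern it discusses: an online temporal-difference update combined with a function approximator over a continuous state space. Here the approximator is a fixed grid of local radial-basis features with linear weights; the constructive global and local approximators the paper compares are not reproduced, and all names and hyperparameters below are illustrative.

```python
import numpy as np

GAMMA, ALPHA, EPSILON = 0.99, 0.1, 0.1               # illustrative hyperparameters
N_ACTIONS, N_FEATURES = 2, 25
centers = np.linspace(0.0, 1.0, N_FEATURES)          # RBF centres covering [0, 1]
WIDTH = 0.05
weights = np.zeros((N_ACTIONS, N_FEATURES))          # one linear model per action
rng = np.random.default_rng(0)

def features(state):
    """Local radial-basis features: only centres near `state` respond strongly."""
    return np.exp(-((state - centers) ** 2) / (2.0 * WIDTH ** 2))

def q_values(state):
    return weights @ features(state)

def epsilon_greedy(state):
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(state)))

def td_update(s, a, reward, s_next, done):
    """One-step Q-learning update on the linear weights for action `a`."""
    phi = features(s)
    target = reward if done else reward + GAMMA * np.max(q_values(s_next))
    weights[a] += ALPHA * (target - weights[a] @ phi) * phi
```
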
ECML 2004 (Springer)
Model Approximation for HEXQ Hierarchical Reinforcement Learning
HEXQ is a reinforcement learning algorithm that discovers hierarchical structure automatically. The generated task hierarchy represents the problem at different levels of abstraction. In ...
Bernhard Hengst