Sciweavers

13 search results - page 1 / 3
» Learning state-action basis functions for hierarchical MDPs
ICML
2007
IEEE
Learning state-action basis functions for hierarchical MDPs
This paper introduces a new approach to action-value function approximation by learning basis functions from a spectral decomposition of the state-action manifold. This paper exten...
Sarah Osentoski, Sridhar Mahadevan
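A minimal sketch of the graph-Laplacian idea behind this line of work, assuming a toy chain MDP: build a graph over sampled state-action pairs, take the smoothest eigenvectors of its normalized Laplacian as basis functions, and fit action values by least squares. The graph construction, the number of basis functions, and the dummy target values below are illustrative assumptions, not the paper's experimental setup.

import numpy as np

def normalized_laplacian(W):
    # L = I - D^{-1/2} W D^{-1/2} for a symmetric weight matrix W
    d = W.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(len(W)) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]

# State-action pairs of a 5-state chain with actions left (-1) and right (+1).
n_states, actions = 5, (-1, +1)
sa_pairs = [(s, a) for s in range(n_states) for a in actions]
idx = {sa: i for i, sa in enumerate(sa_pairs)}

# Connect (s, a) to every (s', a') where s' is the state that a leads to from s.
W = np.zeros((len(sa_pairs), len(sa_pairs)))
for (s, a), i in idx.items():
    s_next = min(max(s + a, 0), n_states - 1)
    for a2 in actions:
        W[i, idx[(s_next, a2)]] = 1.0
W = np.maximum(W, W.T)  # symmetrize before taking the Laplacian

# Basis functions = the k smoothest Laplacian eigenvectors (smallest eigenvalues).
eigvals, eigvecs = np.linalg.eigh(normalized_laplacian(W))
Phi = eigvecs[:, :4]

# Fit a dummy action-value target by least squares in the learned basis.
q_target = np.array([float(s + (a > 0)) for (s, a) in sa_pairs])
w, *_ = np.linalg.lstsq(Phi, q_target, rcond=None)
print(np.round(Phi @ w, 2))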
ICML
2007
IEEE
Constructing basis functions from directed graphs for value function approximation
Basis functions derived from an undirected graph connecting nearby samples from a Markov decision process (MDP) have proven useful for approximating value functions. The success o...
Jeffrey Johns, Sridhar Mahadevan
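A hedged sketch of the directed-graph construction described here, assuming Chung's directed Laplacian: row-normalize a directed adjacency matrix into a random walk, add a small teleportation term so the chain is irreducible, estimate its stationary distribution, and form L = Phi - (Phi P + P^T Phi)/2, whose low-order eigenvectors serve as basis functions. The four-node graph and teleportation constant are illustrative, not taken from the paper.

import numpy as np

def directed_laplacian(A, eta=0.01):
    # Chung's Laplacian for a directed graph with adjacency matrix A.
    n = len(A)
    P = A / A.sum(axis=1, keepdims=True)    # random-walk transition matrix
    P = (1 - eta) * P + eta / n             # teleportation makes the chain irreducible
    psi = np.ones(n) / n
    for _ in range(5000):                   # power iteration for the stationary distribution
        psi = psi @ P
    Phi = np.diag(psi)
    return Phi - (Phi @ P + P.T @ Phi) / 2  # symmetric by construction

# Tiny directed cycle: edges follow the direction of the MDP's transitions.
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
_, vecs = np.linalg.eigh(directed_laplacian(A))
basis = vecs[:, :2]                         # two smoothest basis functions
print(np.round(basis, 3))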
AAAI
2006
Learning Basis Functions in Hybrid Domains
Markov decision processes (MDPs) with discrete and continuous state and action components can be solved efficiently by hybrid approximate linear programming (HALP). The main idea ...
Branislav Kveton, Milos Hauskrecht
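HALP targets hybrid discrete-continuous MDPs; the sketch below illustrates only the underlying approximate-linear-programming idea on a small discrete chain: choose weights w so that V(s) = phi(s)·w minimizes a weighted sum of values subject to the Bellman-inequality constraints phi(s)·w >= r(s,a) + gamma·E[phi(s')·w]. The MDP, the polynomial features, and the state-relevance weights are assumptions for illustration.

import numpy as np
from scipy.optimize import linprog

gamma, n = 0.9, 4
phi = lambda s: np.array([1.0, s / (n - 1), (s / (n - 1)) ** 2])  # polynomial features

def step(s, a):
    # Deterministic 4-state chain; the last state is an absorbing goal with reward 1.
    if s == n - 1:
        return s, 1.0
    return min(max(s + a, 0), n - 1), 0.0

# Bellman inequalities: phi(s)·w - gamma*phi(s')·w >= r(s,a) for every (s, a),
# rewritten as -(phi(s) - gamma*phi(s'))·w <= -r(s,a) for linprog.
A_ub, b_ub = [], []
for s in range(n):
    for a in (-1, +1):
        s_next, r = step(s, a)
        A_ub.append(-(phi(s) - gamma * phi(s_next)))
        b_ub.append(-r)

c = sum(phi(s) for s in range(n))  # uniform state-relevance weights
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=[(None, None)] * 3)
print("weights:", np.round(res.x, 3))
print("values:", np.round([phi(s) @ res.x for s in range(n)], 3))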
ATAL
2010
Springer
Basis function construction for hierarchical reinforcement learning
This paper introduces an approach to automatic basis function construction for Hierarchical Reinforcement Learning (HRL) tasks. We describe some considerations that arise when con...
Sarah Osentoski, Sridhar Mahadevan
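One plausible reading of such a construction, sketched below under the assumption that each subtask is assigned its own graph: restrict the state graph to the states reachable within a subtask and use the Laplacian eigenvectors of that local graph as the subtask's basis functions. The two-room partition of a six-state corridor is invented for illustration, not taken from the paper.

import numpy as np

def laplacian_basis(W, k):
    # k smoothest eigenvectors of the normalized Laplacian of weight matrix W.
    d = W.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    L = np.eye(len(W)) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]
    return np.linalg.eigh(L)[1][:, :k]

# Six-state corridor split into two subtasks; each gets a basis over its own states.
adj = np.zeros((6, 6))
for s in range(5):
    adj[s, s + 1] = adj[s + 1, s] = 1.0

subtasks = {"left_room": [0, 1, 2], "right_room": [3, 4, 5]}
bases = {name: laplacian_basis(adj[np.ix_(states, states)], k=2)
         for name, states in subtasks.items()}
for name, B in bases.items():
    print(name, np.round(B, 3))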
NCI
2004
Hierarchical reinforcement learning with subpolicies specializing for learned subgoals
This paper describes a method for hierarchical reinforcement learning in which high-level policies automatically discover subgoals, and low-level policies learn to specialize for ...
Bram Bakker, Jürgen Schmidhuber
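A hedged sketch of the two-level scheme only, not the paper's subgoal-discovery mechanism: a high-level Q-learner picks a subgoal from a fixed candidate set, a subgoal-specific low-level Q-learner is rewarded intrinsically for reaching it, and the high level receives the discounted environment reward accumulated while the subpolicy ran. The corridor task, candidate subgoals, and learning constants are illustrative assumptions.

import random

N, GOAL, SUBGOALS = 8, 7, (3, 7)     # 8-state corridor, environment goal, candidate subgoals
ALPHA, GAMMA, EPS = 0.5, 0.95, 0.1

q_high = {(s, g): 0.0 for s in range(N) for g in SUBGOALS}
q_low = {(g, s, a): 0.0 for g in SUBGOALS for s in range(N) for a in (-1, +1)}

def eps_greedy(qvals):
    # Epsilon-greedy with random tie-breaking over a {choice: value} dict.
    if random.random() < EPS:
        return random.choice(list(qvals))
    best = max(qvals.values())
    return random.choice([k for k, v in qvals.items() if v == best])

for episode in range(500):
    s = 0
    while s != GOAL:
        g = eps_greedy({g2: q_high[(s, g2)] for g2 in SUBGOALS})   # high level picks a subgoal
        s0, ext_return, discount, t = s, 0.0, 1.0, 0
        while s != g and t < 20:                                   # low level runs until the subgoal
            a = eps_greedy({a2: q_low[(g, s, a2)] for a2 in (-1, +1)})
            s_next = min(max(s + a, 0), N - 1)
            r_env = 1.0 if s_next == GOAL else 0.0
            r_int = 1.0 if s_next == g else 0.0                    # intrinsic reward for reaching the subgoal
            best_next = 0.0 if s_next == g else max(q_low[(g, s_next, b)] for b in (-1, +1))
            q_low[(g, s, a)] += ALPHA * (r_int + GAMMA * best_next - q_low[(g, s, a)])
            ext_return += discount * r_env
            discount *= GAMMA
            s, t = s_next, t + 1
        best_high = max(q_high[(s, g2)] for g2 in SUBGOALS)        # SMDP-style high-level update
        q_high[(s0, g)] += ALPHA * (ext_return + discount * best_high - q_high[(s0, g)])

print(round(q_high[(0, 3)], 2), round(q_high[(0, 7)], 2))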