Sciweavers

77 search results - page 2 / 16
Search: Value Function Approximation in Reinforcement Learning Using...
ICML 2005 (IEEE)
Proto-value functions: developmental reinforcement learning
This paper presents a novel framework called proto-reinforcement learning (PRL), based on a mathematical model of a proto-value function: these are task-independent basis function...
Sridhar Mahadevan
IWANN 1999 (Springer)
Using Temporal Neighborhoods to Adapt Function Approximators in Reinforcement Learning
To avoid the curse of dimensionality, function approximators are used in reinforcement learning rather than learning a separate value for each individual state. In order to make better use of comp...
R. Matthew Kretchmar, Charles W. Anderson
ICRA 2007 (IEEE)
Value Function Approximation on Non-Linear Manifolds for Robot Motor Control
The least squares approach works efficiently in value function approximation, given appropriate basis functions. Because of its smoothness, the Gaussian kernel is a popular an...
Masashi Sugiyama, Hirotaka Hachiya, Christopher To...
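The least-squares approach with Gaussian basis functions mentioned in this abstract can be illustrated with a minimal sketch. The 1-D state space, the noisy return targets, and the kernel centers/width below are all made-up for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical 1-D state space: states sampled in [0, 1], with noisy
# Monte-Carlo-style returns as regression targets (made-up data).
rng = np.random.default_rng(0)
states = rng.uniform(0.0, 1.0, size=200)
returns = np.sin(2 * np.pi * states) + 0.1 * rng.normal(size=200)

# Gaussian kernel basis functions centered on a fixed grid (assumed parameters).
centers = np.linspace(0.0, 1.0, 10)
width = 0.1

def features(s):
    """Gaussian basis features phi(s) for a batch of states s."""
    s = np.atleast_1d(s)
    return np.exp(-((s[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Least-squares fit of the weights: w = argmin ||Phi w - returns||^2.
Phi = features(states)
w, *_ = np.linalg.lstsq(Phi, returns, rcond=None)

def v_hat(s):
    """Approximate value function V(s) ~ phi(s) . w."""
    return features(s) @ w
```

The quality of the fit hinges on the basis: the paper's point is that smooth Gaussian kernels defined in Euclidean space can be a poor match when the true value function lives on a non-linear manifold of the state space.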
IAT 2003 (IEEE)
Asymmetric Multiagent Reinforcement Learning
A gradient-based method for both symmetric and asymmetric multiagent reinforcement learning is introduced in this paper. Symmetric multiagent reinforcement learning addresses the ...
Ville Könönen
ICML 2007 (IEEE)
Constructing basis functions from directed graphs for value function approximation
Basis functions derived from an undirected graph connecting nearby samples from a Markov decision process (MDP) have proven useful for approximating value functions. The success o...
Jeffrey Johns, Sridhar Mahadevan
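The undirected-graph construction this abstract builds on can be sketched in a few lines: take the graph Laplacian of a sample-connectivity graph and use its smoothest eigenvectors as basis functions for value approximation. The chain graph, the number of basis functions, and the target values below are illustrative assumptions, not the paper's setup (which extends this idea to directed graphs):

```python
import numpy as np

# Undirected chain graph over 20 states of a hypothetical MDP:
# each state is connected to its neighbours.
n = 20
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

# Combinatorial graph Laplacian L = D - A.
D = np.diag(A.sum(axis=1))
L = D - A

# Eigenvectors of L ordered by eigenvalue: the smoothest ones
# (smallest eigenvalues) serve as basis functions.
eigvals, eigvecs = np.linalg.eigh(L)
k = 5
basis = eigvecs[:, :k]              # n x k feature matrix Phi

# Approximate a value function by projecting onto the basis
# (the eigenvectors are orthonormal, so projection is a dot product).
V = np.linspace(0.0, 1.0, n) ** 2   # made-up target values
w = basis.T @ V
V_hat = basis @ w
```

Because the eigenvectors are ordered from smooth to oscillatory, truncating to the first k acts as a low-pass filter on the graph, which is why this basis works well for value functions that vary smoothly over connected states.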