IWANN 1999, Springer

Using Temporal Neighborhoods to Adapt Function Approximators in Reinforcement Learning

To avoid the curse of dimensionality, function approximators are used in reinforcement learning to learn value functions for individual states. To make better use of computational resources, many researchers are investigating ways to adapt the basis functions during the learning process so that they better fit the value-function landscape. Here we introduce temporal neighborhoods as small groups of states that experience frequent intragroup transitions during on-line sampling. We then form basis functions along these temporal neighborhoods. Empirical evidence is provided which demonstrates the effectiveness of this scheme. We discuss a class of RL problems for which this method might be plausible.

1 Overview

In reinforcement learning an agent navigates an environment (a state space) by selecting an action in each state. As the agent takes actions, it receives rewards indicating the "goodness" of the action. Reinforcement learning is a methodology which...
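The abstract's idea can be illustrated with a small sketch (this is not the authors' implementation; the merge rule, the `threshold` parameter, and the indicator form of the basis functions are assumptions for illustration): group states that transition between each other frequently during on-line sampling into "temporal neighborhoods", then define one basis function per neighborhood.

```python
# Illustrative sketch, not the paper's algorithm: build temporal
# neighborhoods from observed transitions, then derive indicator
# basis functions (phi_i(s) = 1 if state s lies in neighborhood i).
from collections import defaultdict


def temporal_neighborhoods(transitions, threshold):
    """Greedily merge pairs of states whose observed transition count
    reaches `threshold`.  `transitions` is a list of (s, s') pairs
    collected during on-line sampling.  States never merged with a
    partner form no neighborhood in this simplified sketch."""
    counts = defaultdict(int)
    for s, s2 in transitions:
        counts[frozenset((s, s2))] += 1  # direction-insensitive count

    # Union-find over frequently co-visited states.
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for pair, c in counts.items():
        if c >= threshold and len(pair) == 2:
            a, b = pair
            parent[find(a)] = find(b)

    groups = defaultdict(set)
    for s in parent:
        groups[find(s)].add(s)
    return list(groups.values())


def make_basis(neighborhoods):
    """One indicator basis function per neighborhood."""
    return [lambda s, g=g: 1.0 if s in g else 0.0 for g in neighborhoods]


# Usage: states 0 and 1 swap frequently, so they form one neighborhood;
# the single (2, 3) transition falls below the threshold.
data = [(0, 1), (1, 0), (0, 1), (2, 3)]
hoods = temporal_neighborhoods(data, threshold=2)
phi = make_basis(hoods)
```

The resulting features could then feed a linear value-function approximator, V(s) ≈ Σ_i w_i φ_i(s), with the weights w_i learned by a standard method such as TD(0).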
R. Matthew Kretchmar, Charles W. Anderson
Added 04 Aug 2010
Updated 04 Aug 2010
Type Conference
Year 1999
Where IWANN
Authors R. Matthew Kretchmar, Charles W. Anderson