Using relative novelty to identify useful temporal abstractions in reinforcement learning

Özgür Şimşek (ozgur@cs.umass.edu), Andrew G. Barto (barto@cs.umass.edu)
Department of Computer Science, University of Massachusetts, Amherst, MA 01003-9264

We present a new method for automatically creating useful temporal abstractions in reinforcement learning. We argue that states that allow the agent to transition to a different region of the state space are useful subgoals, and propose a method for identifying them using the concept of relative novelty. When such a state is identified, a temporally extended activity (e.g., an option) is generated that takes the agent efficiently to this state. We illustrate the utility of the method in a number of tasks.
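The abstract describes the core idea: score each state by how novel the states that follow it are relative to those that precede it, and treat high-scoring states as subgoal candidates. Below is a minimal Python sketch of that scoring step under stated assumptions: a discrete state space, a visit-count novelty measure of n^(-1/2), and an illustrative window length "lag". The helper names and the example trajectory are hypothetical illustrations, not taken from the paper.

from collections import defaultdict

def novelty(visits, s):
    # Novelty decays with the number of times a state has been visited.
    return visits[s] ** -0.5

def relative_novelty_scores(trajectory, lag=7):
    # Score each trajectory point: mean novelty of the "lag" states
    # that follow it divided by mean novelty of the "lag" states that
    # precede it. High scores mark candidate subgoals (e.g., doorways)
    # that lead the agent into a relatively unexplored region.
    visits = defaultdict(int)
    novelties = []
    for s in trajectory:
        visits[s] += 1
        novelties.append(novelty(visits, s))

    scores = {}
    for t in range(lag, len(trajectory) - lag):
        before = sum(novelties[t - lag:t]) / lag
        after = sum(novelties[t + 1:t + 1 + lag]) / lag
        scores[trajectory[t]] = max(scores.get(trajectory[t], 0.0),
                                    after / before)
    return scores

# Hypothetical walk through a "doorway" state D: a well-visited room
# on the left, fresh states on the right. States at and just past the
# doorway receive the highest scores.
walk = list("abacbacb") + ["D"] + list("xyzwvuts")
ranked = sorted(relative_novelty_scores(walk).items(), key=lambda kv: -kv[1])
print(ranked[:3])

The paper builds more on top of this scoring (deciding when a score is high enough across many trajectories, and learning an option whose termination condition is the identified subgoal); the sketch above covers only the relative-novelty measure itself.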
Özgür Şimşek, Andrew G. Barto
Type: Conference
Year: 2004
Where: ICML
Authors: Özgür Şimşek, Andrew G. Barto