
ICML
1996

Learning Evaluation Functions for Large Acyclic Domains

Some of the most successful recent applications of reinforcement learning have used neural networks and the TD algorithm to learn evaluation functions. In this paper, we examine the intuition that TD operates by approximating asynchronous value iteration. We note that on the important subclass of acyclic tasks, value iteration is inefficient compared with another graph algorithm, DAG-SP, which assigns values to states by working strictly backwards from the goal. We then present ROUT, an algorithm analogous to DAG-SP that can be used in large stochastic state spaces requiring function approximation. We close by comparing the behavior of ROUT and TD on a simple example domain and on two domains with much larger state spaces.

1 LEARNING CONTROL BACKWARDS

Computing an accurate value function is the key to dynamic-programming-based algorithms for optimal sequential control in Markov Decision Processes. The optimal value function V* specifies, for each state x in the state space X, the expected c...
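The backward value assignment the abstract attributes to DAG-SP can be illustrated in a few lines. This is only a sketch for a tiny deterministic acyclic graph: the graph, edge costs, and the `value` function are illustrative assumptions, not code from the paper.

```python
# Sketch of DAG-SP-style backward value assignment on a small
# deterministic acyclic graph. Edge costs and state names are made up.
# edges[x] = list of (cost, successor) pairs; "goal" has no successors.
edges = {
    "start": [(2.0, "a"), (5.0, "b")],
    "a":     [(2.0, "b"), (4.0, "goal")],
    "b":     [(1.0, "goal")],
    "goal":  [],
}

def value(x):
    """Optimal cost-to-go from x: 0 at the goal, otherwise the cheapest
    one-step cost plus the successor's value. Because the graph is
    acyclic, each state's value is final after one backward sweep
    (realized here as plain recursion toward the goal)."""
    if not edges[x]:
        return 0.0
    return min(c + value(y) for c, y in edges[x])

print(value("start"))  # best path: start -> a -> b -> goal, cost 5.0
```

In contrast to value iteration, which may revisit states many times before values converge, this backward pass touches each state's value once, which is the inefficiency gap the abstract points to on acyclic tasks.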
Justin A. Boyan, Andrew W. Moore