Stable Function Approximation in Dynamic Programming

The success of reinforcement learning in practical problems depends on the ability to combine function approximation with temporal difference methods such as value iteration. Experiments in this area have produced mixed results; there have been both notable successes and notable disappointments. Theory has been scarce, mostly due to the difficulty of reasoning about function approximators that generalize beyond the observed data. We provide a proof of convergence for a wide class of temporal difference methods involving function approximators such as k-nearest-neighbor, and show experimentally that these methods can be useful. The proof is based on a view of function approximators as expansion or contraction mappings. In addition, we present a novel view of fitted value iteration: an approximate algorithm for one environment turns out to be an exact algorithm for a different environment.
Geoffrey J. Gordon
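
The convergence result summarized in the abstract rests on a simple composition argument: the exact value-iteration backup T is a gamma-contraction in the max norm, and an averager such as k-nearest-neighbor (whose output at each state is a fixed convex combination of target values) is a max-norm nonexpansion, so the composite fitted backup M∘T is still a gamma-contraction with a unique fixed point. The sketch below illustrates this on a toy five-state chain MDP. It is an illustrative reconstruction under those assumptions, not code from the paper; the function names, the toy transitions and rewards, and the choice k = 2 are all invented for the example.

```python
# Illustrative sketch only: fitted value iteration with a k-nearest-neighbor
# averager on a toy MDP. Names and the toy problem are assumptions, not the
# paper's own code.
import numpy as np

def knn_averager(states, anchors, k=2):
    """Fixed row-stochastic matrix M: each state's value becomes the mean of
    its k nearest anchors. Every row is a probability distribution, so M is
    a max-norm nonexpansion (the 'averager' property the proof relies on)."""
    M = np.zeros((len(states), len(anchors)))
    for i, s in enumerate(states):
        idx = np.argsort(np.abs(anchors - s))[:k]  # nearest in a 1-D metric
        M[i, idx] = 1.0 / k
    return M

def fitted_value_iteration(P, R, M, gamma=0.9, iters=200):
    """Iterate V <- M T(V), where T(V) = max_a (R + gamma * P[a] V) is the
    usual Bellman backup. T is a gamma-contraction in the max norm and M is
    a nonexpansion, so M T is a gamma-contraction with a unique fixed point."""
    V = np.zeros(R.shape[0])
    for _ in range(iters):
        backup = np.max([R + gamma * P[a] @ V for a in range(P.shape[0])], axis=0)
        V = M @ backup  # project the backed-up values through the averager
    return V

# Toy five-state chain: action 0 stays put, action 1 steps right.
states = np.linspace(0.0, 1.0, 5)
stay = np.eye(5)
right = np.zeros((5, 5))
right[np.arange(5), np.minimum(np.arange(5) + 1, 4)] = 1.0
P = np.stack([stay, right])
R = states  # reward increases toward the right end of the chain
M = knn_averager(states, states, k=2)
print(fitted_value_iteration(P, R, M))
```

Because each row of M sums to one with nonnegative entries, ||Mv - Mw||∞ ≤ ||v - w||∞, which is exactly the nonexpansion property the proof needs; running the snippet shows V settling to the same fixed point regardless of the initial guess.
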
Type: Conference
Year: 1995
Where: ICML
Authors: Geoffrey J. Gordon