Reinforcement Learning with Soft State Aggregation

It is widely accepted that the use of more compact representations than lookup tables is crucial to scaling reinforcement learning (RL) algorithms to real-world problems. Unfortunately, almost all of the theory of reinforcement learning assumes lookup table representations. In this paper we address the pressing issue of combining function approximation and RL, and present 1) a function approximator based on a simple extension to state aggregation (a commonly used form of compact representation), namely soft state aggregation, 2) a theory of convergence for RL with arbitrary, but fixed, soft state aggregation, 3) a novel intuitive understanding of the effect of state aggregation on online RL, and 4) a new heuristic adaptive state aggregation algorithm that finds improved compact representations by exploiting the non-discrete nature of soft state aggregation. Preliminary empirical results are also presented.
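The core idea can be illustrated with a short sketch: under soft state aggregation, each state belongs to every cluster with some probability, the approximate value of a state is the membership-weighted sum of cluster values, and a TD-style update spreads the temporal-difference error over clusters according to those memberships. This is a minimal illustration, not the paper's exact algorithm; the membership matrix `P`, weight vector `w`, and helper names below are hypothetical, and the random memberships stand in for the fixed (or adaptively chosen) memberships discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_clusters = 10, 3

# Soft aggregation: row s gives the probability that state s belongs to
# each cluster (illustrative random memberships; rows normalized to sum to 1).
P = rng.random((n_states, n_clusters))
P /= P.sum(axis=1, keepdims=True)

# One learned value per cluster, rather than one per state.
w = np.zeros(n_clusters)

def value(s):
    """Approximate V(s) as the membership-weighted sum of cluster values."""
    return P[s] @ w

def td_update(s, r, s_next, alpha=0.1, gamma=0.9):
    """TD(0)-style update: the error is distributed over clusters
    in proportion to state s's membership probabilities."""
    global w
    delta = r + gamma * value(s_next) - value(s)
    w += alpha * delta * P[s]

# A few illustrative updates on a toy transition (state 0 -> state 1, reward 1).
for _ in range(100):
    td_update(s=0, r=1.0, s_next=1)
```

Because the memberships are probabilities rather than hard assignments, a single update adjusts several cluster values at once, which is what the adaptive algorithm in the paper exploits when searching for improved aggregations.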
Added: 02 Nov 2010
Updated: 02 Nov 2010
Type: Conference
Year: 1994
Where: NIPS
Authors: Satinder P. Singh, Tommi Jaakkola, Michael I. Jordan