Localizing Search in Reinforcement Learning

Reinforcement learning (RL) can be impractical for many high dimensional problems because of the computational cost of doing stochastic search in large state spaces. We propose a new RL method, Boundary Localized Reinforcement Learning (BLRL), which maps RL into a mode switching problem where an agent deterministically chooses an action based on its state, and limits stochastic search to small areas around mode boundaries, drastically reducing computational cost. BLRL starts with an initial set of parameterized boundaries that partition the state space into distinct control modes. Reinforcement reward is used to update the boundary parameters using the policy gradient formulation of Sutton et al. (2000). We demonstrate that stochastic search can be limited to regions near mode boundaries, thus greatly reducing search, while still guaranteeing convergence to a locally optimal deterministic mode switching policy. Further, we give conditions under which the policy gradient can be arbitra...
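The core idea, a deterministic mode-switching policy whose stochasticity is confined to a narrow band around the boundary, can be illustrated with a minimal sketch. This is not the authors' implementation: the sigmoid boundary policy, the temperature `temp` controlling band width, and the single-sample REINFORCE-style update are illustrative assumptions consistent with the policy gradient formulation the abstract cites.

```python
import math
import random

def pi_action1(state, theta, temp=0.05):
    """Probability of choosing mode 1 for a scalar state.

    Far from the boundary theta the policy is near-deterministic;
    it is stochastic only in a band of width ~temp around theta,
    which is where BLRL localizes its search (illustrative sketch).
    """
    return 1.0 / (1.0 + math.exp(-(state - theta) / temp))

def sample_action(state, theta, temp=0.05):
    """Sample mode 0 or 1 from the boundary policy."""
    return 1 if random.random() < pi_action1(state, theta, temp) else 0

def grad_log_pi(state, action, theta, temp=0.05):
    """d/d(theta) of log pi(action | state) for the sigmoid policy.

    The gradient is near zero away from the boundary, so reward only
    moves theta when the agent was sampled inside the boundary band.
    """
    p = pi_action1(state, theta, temp)
    return (-(1.0 - p) if action == 1 else p) / temp

def update_boundary(theta, state, action, reward, lr=0.01, temp=0.05):
    """One REINFORCE-style update of the boundary parameter."""
    return theta + lr * reward * grad_log_pi(state, action, theta, temp)
```

Note how the gradient magnitude decays to zero away from `theta`: states outside the boundary band contribute essentially nothing to the update, which is the sense in which search is "localized."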
Gregory Z. Grudic, Lyle H. Ungar
Added: 01 Nov 2010
Updated: 01 Nov 2010
Type: Conference
Year: 2000
Where: AAAI
Authors: Gregory Z. Grudic, Lyle H. Ungar