
State Space Reduction For Hierarchical Reinforcement Learning

This paper provides new techniques for abstracting the state space of a Markov Decision Process (MDP). These techniques extend one of the recent minimization models, known as ε-reduction, to construct a partition space that has a smaller number of states than the original MDP. As a result, learning policies on the partition space should be faster than on the original state space. The technique presented here extends ε-reduction to Semi-Markov Decision Processes (SMDPs) by executing a policy instead of a single action, and by grouping all states that have a small difference in transition probabilities and reward function under a given policy. When the reward structure is not known, a two-phase method for state aggregation is introduced, and a theorem in this paper shows the solvability of tasks using the partitions produced by the two-phase method. These partitions can be further refined when the complete reward structure is available. Simulations of different state spaces show that the policies on both the original MDP and this representation achieve similar ...
Mehran Asadi, Manfred Huber
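
The core aggregation step described in the abstract, grouping states whose transition probabilities and rewards under a fixed policy differ by at most ε, can be sketched as follows. This is a minimal illustrative sketch rather than the authors' implementation: the dictionaries P and R, the function names, and the greedy merging scheme are all hypothetical stand-ins for the paper's formal construction.

# Minimal sketch of epsilon-style state aggregation (hypothetical names).
# P[s] maps next states to transition probabilities under a fixed policy;
# R[s] is the expected reward for state s under that policy.

def close(s1, s2, P, R, epsilon):
    """Return True if two states are epsilon-similar in reward and dynamics."""
    if abs(R[s1] - R[s2]) > epsilon:
        return False
    next_states = set(P[s1]) | set(P[s2])
    return all(abs(P[s1].get(n, 0.0) - P[s2].get(n, 0.0)) <= epsilon
               for n in next_states)

def epsilon_partition(states, P, R, epsilon):
    """Greedily merge epsilon-similar states into blocks of a partition."""
    blocks = []
    for s in states:
        for block in blocks:
            if all(close(s, t, P, R, epsilon) for t in block):
                block.append(s)
                break
        else:
            blocks.append([s])
    return blocks

# Toy usage: states 'a' and 'b' have nearly identical dynamics and rewards,
# so they collapse into one abstract state; 'c' stays separate.
P = {'a': {'c': 1.0}, 'b': {'c': 0.95, 'a': 0.05}, 'c': {'c': 1.0}}
R = {'a': 1.0, 'b': 1.02, 'c': 0.0}
print(epsilon_partition(['a', 'b', 'c'], P, R, epsilon=0.1))  # [['a', 'b'], ['c']]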
Type: Conference
Year: 2004
Where: FLAIRS
Authors: Mehran Asadi, Manfred Huber