AAMAS 2007 · Springer

Continuous-State Reinforcement Learning with Fuzzy Approximation

Abstract. Reinforcement learning (RL) is a widely used learning paradigm for adaptive agents. There exist several convergent and consistent RL algorithms which have been intensively studied. In their original form, these algorithms require that the environment states and agent actions take values in a relatively small discrete set. Fuzzy representations for approximate, model-free RL have been proposed in the literature for the more difficult case where the state-action space is continuous. In this work, we propose a fuzzy approximation architecture similar to those previously used for Q-learning, but we combine it with the model-based Q-value iteration algorithm. We prove that the resulting algorithm converges. We also give a modified, asynchronous variant of the algorithm that converges at least as fast as the original version. An illustrative simulation example is provided.
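The abstract describes combining a fuzzy approximation of the Q-function with model-based Q-value iteration: Q-values are stored at the centers of fuzzy membership functions and interpolated elsewhere. Below is a minimal, hypothetical sketch of this idea on a toy 1-D problem; the dynamics `f`, reward `rho`, the triangular membership functions, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical toy model (illustrative only, not from the paper):
# the state drifts in the direction of the action; reward favors
# states near the origin.
def f(x, u):
    """Deterministic dynamics, state clipped to [-1, 1]."""
    return np.clip(x + 0.1 * u, -1.0, 1.0)

def rho(x, u):
    """Reward: higher closer to the origin."""
    return -x ** 2

gamma = 0.9                                   # discount factor
centers = np.linspace(-1.0, 1.0, 11)          # fuzzy set centers
actions = np.array([-1.0, 0.0, 1.0])          # small discrete action set

def phi(x):
    """Triangular membership degrees over the centers, normalized to sum to 1."""
    width = centers[1] - centers[0]
    w = np.maximum(0.0, 1.0 - np.abs(x - centers) / width)
    return w / w.sum()

# One parameter per (membership function, action) pair;
# Q(x, u_j) is approximated as phi(x) @ theta[:, j].
theta = np.zeros((len(centers), len(actions)))

# Synchronous (model-based) fuzzy Q-value iteration.
for _ in range(200):
    new_theta = np.empty_like(theta)
    for i, xi in enumerate(centers):
        for j, u in enumerate(actions):
            x_next = f(xi, u)
            q_next = phi(x_next) @ theta      # interpolated Q(x_next, ·)
            new_theta[i, j] = rho(xi, u) + gamma * q_next.max()
    theta = new_theta

# Greedy policy at the centers.
policy = actions[theta.argmax(axis=1)]
```

Because the interpolation weights are non-negative and sum to one, each sweep is a contraction in the parameter space (for discount factor below 1), which is the kind of structure the paper's convergence proof exploits; the asynchronous variant mentioned in the abstract would update `theta` in place instead of building `new_theta`.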
Added 06 Jun 2010
Updated 06 Jun 2010
Type Conference
Year 2007
Where AAMAS
Authors Lucian Busoniu, Damien Ernst, Bart De Schutter, Robert Babuska