
IROS 2007 (IEEE)

Autonomous blimp control using model-free reinforcement learning in a continuous state and action space

— In this paper, we present an approach that applies the reinforcement learning principle to the problem of learning height control policies for aerial blimps. In contrast to previous approaches, our method does not require sophisticated hand-tuned models, but rather learns the policy online, which makes the system easily adaptable to changing conditions. The blimp we apply our approach to is a small-scale vehicle equipped with an ultrasound sensor that measures its elevation relative to the ground. The major problem in the context of learning control policies lies in the high-dimensional state-action space that needs to be explored in order to identify the values of all state-action pairs. In this paper, we propose a solution to learning continuous control policies based on the Gaussian process model. In practical experiments carried out on a real robot, we demonstrate that the system is able to learn a policy online within only a few minutes.
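
The abstract describes learning continuous control policies online with a Gaussian process model. The sketch below is one plausible reading of that idea, not the authors' implementation: a GP regressor (here scikit-learn's, an assumption) fitted over joint state-action inputs as a Q-value approximator, with actions chosen by maximizing the predicted value over a small candidate set. The state layout (height error, vertical velocity), the discretized action grid, and the Q-learning-style target are all illustrative assumptions.

```python
# Hypothetical sketch of GP-based Q-value approximation for continuous
# state-action control; variable names and structure are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF


class GPQController:
    def __init__(self, action_candidates, gamma=0.9):
        self.gamma = gamma
        self.actions = np.asarray(action_candidates)   # candidate thrust values for the argmax
        self.gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
        self.X, self.y = [], []                         # observed (state, action) inputs and targets
        self._fitted = False

    def q_values(self, state):
        """Predict Q(state, a) for every candidate action a."""
        if not self._fitted:
            return np.zeros(len(self.actions))
        sa = np.array([np.concatenate([state, [a]]) for a in self.actions])
        return self.gp.predict(sa)

    def select_action(self, state, eps=0.1):
        """Epsilon-greedy action selection over the GP's Q estimates."""
        if np.random.rand() < eps:
            return np.random.choice(self.actions)
        return self.actions[int(np.argmax(self.q_values(state)))]

    def update(self, state, action, reward, next_state):
        """Online update: add one sample with a Q-learning-style target and refit the GP."""
        target = reward + self.gamma * np.max(self.q_values(next_state))
        self.X.append(np.concatenate([state, [action]]))
        self.y.append(target)
        self.gp.fit(np.array(self.X), np.array(self.y))
        self._fitted = True
```

For a height-control task of the kind described, the state could be the ultrasound-measured elevation error and its rate of change, with the reward penalizing deviation from the target altitude; these specifics are assumptions made only to ground the example.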
Type: Conference
Year: 2007
Where: IROS
Authors: Axel Rottmann, Christian Plagemann, Peter Hilgers, Wolfram Burgard