ICANN 2010, Springer

Reinforcement Learning Based Neural Controllers for Dynamic Processes without Exploration

Abstract. In this paper we present a Reinforcement Learning (RL) approach capable of training neural adaptive controllers for complex control problems without expensive online exploration. The neural controller is based on Neural Fitted Q-Iteration (NFQ). The network is trained on data from the observed example set, enriched with artificial data. With this training scheme, unlike most existing approaches, the controller can learn offline from observed training data of an already closed-loop controlled process, even when the training samples are sparse and uninformative. The suggested neural controller is evaluated on a modified, more advanced cart-pole simulator and on the combustion control of a real waste-incineration plant, where it successfully demonstrates its superiority.
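The offline NFQ scheme the abstract describes can be illustrated with a minimal sketch: learn Q from a fixed batch of logged transitions (no online exploration) by repeatedly regressing a neural network onto bootstrapped targets. The toy chain task, the use of scikit-learn's `MLPRegressor` as the Q-network, and all names below are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical toy task (not from the paper): state s in [0, 1],
# action 0 moves left, action 1 moves right, reward 1 at the right end.
def step(s, a):
    s_next = float(np.clip(s + (0.1 if a == 1 else -0.1), 0.0, 1.0))
    r = 1.0 if s_next >= 1.0 else 0.0
    return s_next, r

# Fixed batch of observed transitions -- mimics learning purely from
# logged data of an already closed-loop controlled process.
states = rng.uniform(0, 1, size=200)
acts = rng.integers(0, 2, size=200)
batch = [(s, a, *step(s, a)) for s, a in zip(states, acts)]

gamma = 0.95
actions_all = np.array([0, 1])

def q_values(model, s_arr):
    """Predict Q(s, a) for every state in s_arr and both actions."""
    feats = np.array([[s, a] for s in s_arr for a in actions_all])
    return model.predict(feats).reshape(len(s_arr), 2)

S = np.array([[s, a] for s, a, _, _ in batch])       # (state, action) inputs
R = np.array([r for _, _, _, r in batch])            # observed rewards
S_next = np.array([sn for _, _, sn, _ in batch])     # successor states

# Neural fitted Q-iteration: refit the network on the whole batch each sweep.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(S, R)                                      # iteration 0: Q := reward
for _ in range(10):
    targets = R + gamma * q_values(model, S_next).max(axis=1)
    model.fit(S, targets)

greedy = q_values(model, np.array([0.5])).argmax(axis=1)
print("greedy action at s=0.5:", greedy[0])
```

Each sweep recomputes the targets `r + gamma * max_a' Q(s', a')` from the same fixed batch, so the whole procedure is offline; the paper's contribution of enriching this batch with artificial data is not shown here.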
Added 09 Nov 2010
Updated 09 Nov 2010
Type Conference
Year 2010
Where ICANN
Authors Frank-Florian Steege, André Hartmann, Erik Schaffernicht, Horst-Michael Gross