AUSAI
1999
Springer

Q-Learning in Continuous State and Action Spaces

Abstract. Q-learning can be used to learn a control policy that maximises a scalar reward through interaction with the environment. Q-learning is commonly applied to problems with discrete states and actions. We describe a method suitable for control tasks which require continuous actions in response to continuous states. The system consists of a neural network coupled with a novel interpolator. Simulation results are presented for a non-holonomic control task. Advantage Learning, a variation of Q-learning, is shown to enhance learning speed and reliability for this task.
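For context, the abstract builds on the standard Q-learning update, which the paper generalises to continuous spaces via a neural network and interpolator. A minimal sketch of the underlying tabular update is below; the grid sizes, learning rate, and discount factor are illustrative choices, not values from the paper.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One Q-learning step: move Q[s, a] toward the bootstrapped target
    r + gamma * max_a' Q[s', a']."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

# Usage: a single update on a hypothetical 2-state, 2-action table.
Q = np.zeros((2, 2))
Q = q_update(Q, s=0, a=1, r=1.0, s_next=1)
print(Q[0, 1])  # 0.1 * (1.0 + 0.9 * 0 - 0) = 0.1
```

The paper's contribution is replacing the table with a neural network whose outputs are combined by an interpolator, so that both `s` and `a` can be continuous; Advantage Learning modifies the target to scale the difference between action values, which the authors report improves speed and reliability.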
Added 03 Aug 2010
Updated 03 Aug 2010
Type Conference
Year 1999
Where AUSAI
Authors Chris Gaskett, David Wettergreen, Alexander Zelinsky