ESANN
2008

Multilayer Perceptrons with Radial Basis Functions as Value Functions in Reinforcement Learning

Using multilayer perceptrons (MLPs) to approximate the state-action value function in reinforcement learning (RL) algorithms can become a nightmare due to the constant possibility of unlearning past experiences. Moreover, since the target values in the training examples are bootstrapped values, that is, estimates of other estimates, the chances of getting stuck in a local minimum are increased. These problems occur very often in the mountain car task, as shown by Boyan and Moore [2]. In this paper we present empirical evidence showing that MLPs augmented with one layer of radial basis functions (RBFs) can avoid these problems. Our experimental testbeds are the mountain car task and a robot control problem.
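The architecture the abstract describes — an MLP whose input is first passed through a layer of Gaussian radial basis functions — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the number of centers, hidden units, widths, and the TD-style squared-error update are all assumptions chosen for clarity.

```python
import numpy as np

class RBFMLPQ:
    """Illustrative Q-function approximator: a fixed Gaussian RBF feature
    layer followed by a one-hidden-layer MLP (sizes are assumptions)."""

    def __init__(self, n_centers=8, n_hidden=16, n_actions=3,
                 state_dim=2, seed=0):
        rng = np.random.default_rng(seed)
        # RBF layer: fixed centers spread over a normalized state space.
        self.centers = rng.uniform(-1.0, 1.0, size=(n_centers, state_dim))
        self.sigma = 0.5
        # Trainable MLP weights on top of the RBF features.
        self.W1 = rng.normal(0.0, 0.1, size=(n_centers, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, size=(n_hidden, n_actions))
        self.b2 = np.zeros(n_actions)

    def rbf(self, s):
        # Gaussian activations: exp(-||s - c||^2 / (2 * sigma^2)).
        d2 = np.sum((self.centers - s) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def q_values(self, s):
        # Forward pass: RBF features -> tanh hidden layer -> linear output.
        h = np.tanh(self.rbf(s) @ self.W1 + self.b1)
        return h @ self.W2 + self.b2

    def td_update(self, s, a, target, lr=0.01):
        # One gradient step on 0.5 * (Q(s, a) - target)^2 via backprop.
        phi = self.rbf(s)
        h = np.tanh(phi @ self.W1 + self.b1)
        q = h @ self.W2 + self.b2
        err = q[a] - target
        one_hot = np.eye(q.size)[a]
        dW2 = np.outer(h, one_hot) * err
        db2 = one_hot * err
        dz1 = self.W2[:, a] * err * (1.0 - h ** 2)   # backprop through tanh
        self.W2 -= lr * dW2
        self.b2 -= lr * db2
        self.W1 -= lr * np.outer(phi, dz1)
        self.b1 -= lr * dz1
        return err
```

The RBF layer localizes each state's representation, so a gradient step mostly affects weights tied to nearby states, which is the mechanism the paper credits with reducing the unlearning of past experiences that plain MLPs exhibit.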
Victor Uc Cetina
Type Conference