NPL
2000

Towards the Optimal Learning Rate for Backpropagation

A backpropagation learning algorithm for feedforward neural networks with an adaptive learning rate is derived. The algorithm is based upon minimising the instantaneous output error and does not include any of the simplifications encountered in the corresponding Least Mean Square (LMS) algorithms for linear adaptive filters. The backpropagation algorithm with an adaptive learning rate, derived from the Taylor series expansion of the instantaneous output error, is shown to exhibit behaviour similar to that of the Normalised LMS (NLMS) algorithm. Indeed, for a linear neuron activation function, the derived optimal adaptive learning rate of a network trained by backpropagation degenerates to the learning rate of the NLMS. By continuity, the optimal adaptive learning rate imposes additional stabilisation effects on the traditional backpropagation learning algorithm.

Key words: adaptive learning rate, backpropagation, feedforward neural networks, optim...
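The NLMS connection mentioned in the abstract can be illustrated with a minimal sketch (an assumption-laden illustration, not the paper's exact derivation): for a single neuron y = phi(w·x), a first-order Taylor expansion of the instantaneous error e = d − y suggests a step size of roughly 1 / (phi'(w·x)² · ||x||²), and for a linear activation (phi' = 1) this reduces to the NLMS rate 1 / ||x||².

```python
import numpy as np

# Hedged sketch: a single linear neuron trained with an input-normalised
# (NLMS-style) adaptive learning rate. The teacher weights w_true and all
# constants here are illustrative, not from the paper.
rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.8])   # hypothetical target weights
w = np.zeros(3)                       # neuron weights to be learned

for _ in range(200):
    x = rng.standard_normal(3)        # input pattern
    d = w_true @ x                    # desired (teacher) output, noiseless
    e = d - w @ x                     # instantaneous output error
    eta = 1.0 / (x @ x + 1e-8)        # NLMS-style normalised learning rate
    w += eta * e * x                  # gradient step with adaptive rate
```

With this normalised step each update exactly cancels the error on the current sample, so on noiseless data the weights converge to the teacher weights; with a fixed learning rate, convergence would instead depend on the input power, which is the stabilisation effect the abstract alludes to.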
Danilo P. Mandic, Jonathon A. Chambers
Type Journal