Learning Curves for Stochastic Gradient Descent in Linear Feedforward Networks

Gradient-following learning methods can encounter problems of implementation in many applications, and stochastic variants are frequently used to overcome these difficulties. We derive quantitative learning curves for three online training methods used with a linear perceptron: direct gradient descent, node perturbation, and weight perturbation. The maximum learning rate for the stochastic methods scales inversely with the first power of the dimensionality of the noise injected into the system; with sufficiently small learning rate, all three methods give identical learning curves. These results suggest guidelines for when these stochastic methods will be limited in their utility, and considerations for architectures in which they will be effective.
Type: Conference
Year: 2003
Where: NIPS
Authors: Justin Werfel, Xiaohui Xie, H. Sebastian Seung