Optimal Stochastic Search and Adaptive Momentum

Stochastic optimization algorithms typically use learning rate schedules that behave asymptotically as μ(t) = μ₀/t. The ensemble dynamics (Leen and Moody, 1993) for such algorithms provides an easy path to results on mean squared weight error and asymptotic normality. We apply this approach to stochastic gradient algorithms with momentum. We show that at late times, learning is governed by an effective learning rate μ_eff = μ₀/(1 − β), where β is the momentum parameter. We describe the behavior of the asymptotic weight error and give conditions on μ_eff that ensure optimal convergence speed. Finally, we use the results to develop an adaptive form of momentum that achieves optimal convergence speed independent of μ₀.
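The setting the abstract describes can be sketched in a few lines: stochastic gradient descent with heavy-ball momentum and a μ(t) = μ₀/t annealing schedule. This is a minimal illustration, not the authors' implementation; the quadratic toy objective, the function names, and the parameter values are assumptions chosen for demonstration.

```python
import numpy as np

def sgd_momentum(grad, w0, mu0, beta, steps, rng):
    """SGD with momentum parameter beta and a mu(t) = mu0 / t schedule.

    grad(w, rng) returns a noisy sample of the gradient at w.
    At late times the dynamics behave like plain SGD with an
    effective learning rate mu0 / (1 - beta).
    """
    w = np.array(w0, dtype=float)
    v = np.zeros_like(w)  # momentum (velocity) accumulator
    for t in range(1, steps + 1):
        mu = mu0 / t                  # asymptotic 1/t annealing
        v = beta * v - mu * grad(w, rng)
        w = w + v
    return w

# Toy problem (assumed for illustration): minimize E[(w - 1)^2 / 2]
# from noisy gradient samples (w - 1) + noise.
rng = np.random.default_rng(0)
noisy_grad = lambda w, rng: (w - 1.0) + 0.1 * rng.standard_normal()
w_final = sgd_momentum(noisy_grad, w0=0.0, mu0=1.0, beta=0.5, steps=5000, rng=rng)
```

With β = 0.5 and μ₀ = 1.0 the effective late-time learning rate is μ₀/(1 − β) = 2/t, which satisfies the usual condition for optimal 1/t convergence of the mean squared weight error on this toy problem.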
Todd K. Leen, Genevieve B. Orr
Added 02 Nov 2010
Updated 02 Nov 2010
Type Conference
Year 1993
Where NIPS
Authors Todd K. Leen, Genevieve B. Orr