Sciweavers

3 search results for "A Fast and Scalable Recurrent Neural Network Based on Stochastic Meta Descent"
TNN 2008
A Fast and Scalable Recurrent Neural Network Based on Stochastic Meta Descent
This brief presents an efficient and scalable online learning algorithm for recurrent neural networks (RNNs). The approach is based on the real-time recurrent learning (RTRL) algorithm...
Zhenzhen Liu, Itamar Elhanany
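The abstract above is cut off after naming RTRL. As a rough illustration of the forward sensitivity propagation that RTRL is built on (not code from the paper; the cell definition, parameter layout, and function names are assumptions), a JAX sketch for a vanilla tanh RNN could look like this:

```python
import jax
import jax.numpy as jnp

def cell(params, h, x):
    # Vanilla tanh RNN cell -- purely illustrative, not the model from the paper.
    W_hh, W_xh = params
    return jnp.tanh(W_hh @ h + W_xh @ x)

def rtrl_step(params, h, J, x, dloss_dh):
    # One RTRL step: carry the sensitivity J = dh/dparams forward in time
    # and combine it with the instantaneous loss gradient dloss/dh.
    h_new = cell(params, h, x)
    dh_dhprev = jax.jacobian(lambda hh: cell(params, hh, x))(h)   # (n, n)
    dh_dparams = jax.jacobian(lambda p: cell(p, h, x))(params)    # pytree of Jacobians
    # Chain rule: J_t = (dh_t/dh_{t-1}) J_{t-1} + dh_t/dparams
    J_new = jax.tree_util.tree_map(
        lambda Jp, Jd: jnp.tensordot(dh_dhprev, Jp, axes=1) + Jd, J, dh_dparams)
    # Online gradient of the current per-step loss w.r.t. all parameters.
    grads = jax.tree_util.tree_map(
        lambda Jn: jnp.tensordot(dloss_dh, Jn, axes=1), J_new)
    return h_new, J_new, grads
```

J is initialised to zeros with the same shapes as dh_dparams. The cost of carrying J forward at every step is what makes plain RTRL expensive, which is the scalability issue the entry above targets.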
ICANN 2001, Springer
Fast Curvature Matrix-Vector Products
The method of conjugate gradients provides a very effective way to optimize large, deterministic systems by gradient descent. In its standard form, however, it is not amenable to ...
Nicol N. Schraudolph
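The title above refers to multiplying a curvature matrix by an arbitrary vector without ever forming the matrix. As a hedged illustration of the general idea rather than the paper's own construction, an exact Hessian-vector product takes only a few lines of JAX (the loss and parameters here are toy placeholders):

```python
import jax
import jax.numpy as jnp

def hvp(loss, w, v):
    # Exact Hessian-vector product H @ v without materializing H:
    # forward-mode differentiation applied to the reverse-mode gradient.
    return jax.jvp(jax.grad(loss), (w,), (v,))[1]

# Toy check against a quadratic loss, whose Hessian 2 * A.T @ A is known exactly.
A = jnp.array([[1.0, 2.0], [3.0, 4.0]])
loss = lambda w: jnp.sum((A @ w) ** 2)
w = jnp.array([1.0, -1.0])
v = jnp.array([0.5, 2.0])
print(hvp(loss, w, v))   # matches 2 * A.T @ A @ v
```

Similar combinations of forward- and reverse-mode passes yield Gauss-Newton and Fisher matrix-vector products, which is the family of curvature matrices this line of work is concerned with.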
IVC 2007
Fast stochastic optimization for articulated structure tracking
Recently, an optimization approach for fast visual tracking of articulated structures based on Stochastic Meta-Descent (SMD) [7] has been presented. SMD is a gradient descent with...
Matthieu Bray, Esther Koller-Meier, Nicol N. Schraudolph
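The snippet above describes SMD as gradient descent with adaptive, per-parameter step sizes. Below is a minimal sketch of that style of gain adaptation as it appears in Schraudolph's SMD papers; the exact constants, signs, and the way the curvature-vector product hv is obtained are assumptions here, not taken from this IVC article:

```python
import jax.numpy as jnp

def smd_step(w, p, v, grad, hv, mu=1e-3, lam=0.99):
    # One SMD-style update. p holds per-parameter gains (step sizes),
    # v tracks the long-term effect of gain changes, and hv ~= H @ v is a
    # curvature matrix-vector product (e.g. computed as in the hvp sketch above).
    p = p * jnp.maximum(0.5, 1.0 - mu * grad * v)   # meta-descent on the gains
    w = w - p * grad                                # ordinary descent step
    v = lam * v - p * (grad + lam * hv)             # auxiliary vector update
    return w, p, v
```

The max(0.5, ...) clamp keeps a single noisy sample from collapsing a gain to zero, and the decay lam controls how far back the gain adaptation looks; both are standard ingredients of SMD rather than details reported in this entry.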