Sciweavers

202 search results for "Privacy-preservation for gradient descent methods" (page 5 of 41)
COLT
2010
Springer
13 years 3 months ago
Composite Objective Mirror Descent
We present a new method for regularized convex optimization and analyze it under both online and stochastic optimization settings. In addition to unifying previously known first-order...
John Duchi, Shai Shalev-Shwartz, Yoram Singer, Amb...
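
For the common special case of COMID with a Euclidean Bregman divergence and an l1 regularizer, the update reduces to soft-thresholding a plain gradient step. A minimal Python sketch of that special case, with hypothetical data and step sizes not taken from the paper:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (closed form)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def comid_l1(grad_fn, x0, lam=0.1, eta=0.1, steps=100):
    """COMID with the Euclidean Bregman divergence and an l1 regularizer:
    a gradient step on the smooth part followed by the exact proximal
    (soft-thresholding) step on the l1 part."""
    x = x0.copy()
    for _ in range(steps):
        g = grad_fn(x)                              # gradient of the smooth loss
        x = soft_threshold(x - eta * g, eta * lam)  # composite update
    return x

# Hypothetical usage: sparse least squares on random data.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 20)), rng.normal(size=50)
x = comid_l1(lambda w: A.T @ (A @ w - b) / len(b), np.zeros(20))
```
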
EOR
2000
13 years 5 months ago
Training the random neural network using quasi-Newton methods
Training in the random neural network (RNN) is generally specified...
Aristidis Likas, Andreas Stafylopatis
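
The random neural network model itself is not reproduced here, but the quasi-Newton training pattern the paper studies can be sketched with SciPy's BFGS on a stand-in error function (the tiny tanh network below is purely illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in error surface: a least-squares fit of a tiny tanh network,
# used only to show the quasi-Newton training pattern.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(100, 3)), rng.normal(size=100)

def error(w):
    """Sum-of-squares training error as a function of the weight vector."""
    W, v = w[:9].reshape(3, 3), w[9:]
    return 0.5 * np.sum((np.tanh(X @ W) @ v - y) ** 2)

# BFGS approximates the Hessian from successive gradient differences,
# avoiding the explicit second derivatives a full Newton step would need.
res = minimize(error, np.zeros(12), method="BFGS")
print(res.fun, res.nit)
```
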
ICML
2006
IEEE
14 years 6 months ago
Accelerated training of conditional random fields with stochastic gradient methods
We apply Stochastic Meta-Descent (SMD), a stochastic gradient optimization method with gain vector adaptation, to the training of Conditional Random Fields (CRFs). On several large...
S. V. N. Vishwanathan, Nicol N. Schraudolph, Mark ...
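
A minimal sketch of SMD's gain-vector adaptation, assuming a toy deterministic objective in place of a CRF likelihood and a finite-difference Hessian-vector product in place of the exact one used in the paper:

```python
import numpy as np

def smd(grad_fn, theta, eta0=0.05, mu=0.01, lam=0.99, steps=200, eps=1e-6):
    """Stochastic Meta-Descent: each parameter gets its own gain eta,
    adapted from the correlation between the current gradient and the
    trace v of past parameter changes."""
    eta = np.full_like(theta, eta0)
    v = np.zeros_like(theta)
    for _ in range(steps):
        g = grad_fn(theta)
        Hv = (grad_fn(theta + eps * v) - g) / eps  # finite-difference Hessian-vector product
        eta *= np.maximum(0.5, 1.0 - mu * g * v)   # multiplicative gain adaptation
        theta = theta - eta * g                    # gradient step with per-parameter gains
        v = lam * v - eta * (g + lam * Hv)         # update the step-history trace
    return theta

# Toy usage on a quadratic bowl (illustrative; not a CRF objective).
Q = np.diag([1.0, 10.0])
theta = smd(lambda t: Q @ t, np.array([5.0, 5.0]))
print(theta)   # should approach the minimizer at the origin
```
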
JMLR
2012
11 years 7 months ago
Generic Methods for Optimization-Based Modeling
“Energy” models for continuous domains can be applied to many problems, but often suffer from high computational expense in training, due to the need to repeatedly minimize t...
Justin Domke
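
The expense comes from nesting: each training step needs an (approximate) energy minimization. A toy sketch of truncated inner optimization, with the outer gradient taken by finite differences as a stand-in for the paper's differentiation through the inner optimizer:

```python
def predict(theta, x, inner_steps=20, lr=0.1):
    """Truncated inner optimization: approximate argmin_y E(y; x, theta)
    for the toy energy E = (y - theta*x)**2 with a fixed number of
    gradient steps instead of a full minimization."""
    y = 0.0
    for _ in range(inner_steps):
        y -= lr * 2.0 * (y - theta * x)   # gradient step on dE/dy
    return y

def train_loss(theta, xs, ys):
    return sum((predict(theta, x) - y) ** 2 for x, y in zip(xs, ys))

# Hypothetical data; outer gradient by central finite differences.
xs, ys = [1.0, 2.0, 3.0], [2.1, 3.9, 6.2]
theta, h = 0.0, 1e-5
for _ in range(50):
    g = (train_loss(theta + h, xs, ys) - train_loss(theta - h, xs, ys)) / (2 * h)
    theta -= 0.05 * g
print(theta)   # close to 2, the slope of the toy data
```
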
IJIT
2004
13 years 6 months ago
A Comparison of First and Second Order Training Algorithms for Artificial Neural Networks
Minimization methods for training feed-forward networks with backpropagation are compared. Feed-forward network training is a special case of functional minimization, where no explicit...
Syed Muhammad Aqil Burney, Tahseen Ahmed Jilani, C...
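
To make the first-order versus second-order contrast concrete, a small sketch comparing steepest descent against a single Newton step on an ill-conditioned least-squares problem (a stand-in for the feed-forward training task; not code from the paper):

```python
import numpy as np

# Ill-conditioned least-squares problem standing in for network training.
rng = np.random.default_rng(2)
A = rng.normal(size=(40, 10)) * np.linspace(1.0, 10.0, 10)  # skewed column scales
b = rng.normal(size=40)
grad = lambda w: A.T @ (A @ w - b)

# First order: steepest descent with a fixed step size.
w, steps = np.zeros(10), 0
g = grad(w)
while np.linalg.norm(g) > 1e-6 and steps < 100000:
    w -= 2e-4 * g
    steps += 1
    g = grad(w)

# Second order: for a quadratic error, one Newton step with the exact
# Hessian A^T A lands on the minimizer directly.
w_newton = np.linalg.solve(A.T @ A, A.T @ b)
print(steps, np.linalg.norm(w - w_newton))  # far more work than the single Newton solve
```
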