Sciweavers

56 search results (page 1 of 12)
Query: Stochastic methods for L1 regularized loss minimization
ICML 2009 (IEEE)
Stochastic Methods for L1 Regularized Loss Minimization
Shai Shalev-Shwartz, Ambuj Tewari
CORR 2011 (Springer)
Parallel Coordinate Descent for L1-Regularized Loss Minimization
We propose Shotgun, a parallel coordinate descent algorithm for minimizing L1-regularized losses. Though coordinate descent seems inherently sequential, we prove convergence bounds...
Joseph K. Bradley, Aapo Kyrola, Danny Bickson, Car...
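The sequential coordinate descent that Shotgun parallelizes can be sketched for L1-regularized least squares (the lasso): each step exactly minimizes the objective along one coordinate via soft-thresholding. This is an illustrative sketch, not the paper's implementation; function names are ours.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: closed-form single-coordinate minimizer."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_coordinate_descent(X, y, lam, n_iters=100):
    """Sequential coordinate descent for min_w 0.5*||Xw - y||^2 + lam*||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)  # precomputed squared column norms
    residual = y - X @ w           # maintained residual y - Xw
    for _ in range(n_iters):
        for j in range(d):
            if col_sq[j] == 0.0:
                continue
            # correlation of column j with the partial residual (excluding j)
            rho = X[:, j] @ residual + col_sq[j] * w[j]
            w_new = soft_threshold(rho, lam) / col_sq[j]
            residual += X[:, j] * (w[j] - w_new)  # update residual incrementally
            w[j] = w_new
    return w
```

Shotgun's observation is that many such coordinate updates can be run in parallel with provable convergence as long as features are not too correlated; the sequential loop above is the baseline it speeds up.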
NIPS 2008
An interior-point stochastic approximation method and an L1-regularized delta rule
The stochastic approximation method is behind the solution to many important, actively-studied problems in machine learning. Despite its far-reaching application, there is almost n...
Peter Carbonetto, Mark Schmidt, Nando de Freitas
ECML 2007 (Springer)
Fast Optimization Methods for L1 Regularization: A Comparative Study and Two New Approaches
L1 regularization is effective for feature selection, but the resulting optimization is challenging due to the non-differentiability of the L1-norm. In this paper we compare state...
Mark Schmidt, Glenn Fung, Rómer Rosales
ACL 2009
Stochastic Gradient Descent Training for L1-regularized Log-linear Models with Cumulative Penalty
Stochastic gradient descent (SGD) uses approximate gradients estimated from subsets of the training data and updates the parameters in an online fashion. This learning framework i...
Yoshimasa Tsuruoka, Jun-ichi Tsujii, Sophia Anania...
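The cumulative-penalty idea this abstract refers to can be sketched roughly as follows: take plain SGD steps on the unregularized loss, track the total L1 penalty each weight could have received, and clip each weight toward zero by the portion not yet applied. Names and the squared-loss setup are ours; this is an illustration of the scheme, not the authors' code.

```python
import numpy as np

def sgd_l1_cumulative(grad_fn, data, d, lam, eta=0.1, epochs=5):
    """SGD with a cumulative L1 penalty applied by clipping.

    grad_fn(w, example) returns the gradient of the unregularized loss
    on one example; the L1 penalty is applied lazily after each step.
    """
    w = np.zeros(d)
    u = 0.0          # total L1 penalty each weight *could* have received so far
    q = np.zeros(d)  # penalty actually applied to each weight so far
    for _ in range(epochs):
        for ex in data:
            w -= eta * grad_fn(w, ex)  # plain stochastic gradient step
            u += eta * lam             # accumulate this step's penalty budget
            for j in range(d):
                z = w[j]
                if z > 0:
                    w[j] = max(0.0, z - (u + q[j]))   # clip toward zero, never past it
                elif z < 0:
                    w[j] = min(0.0, z + (u - q[j]))
                q[j] += w[j] - z       # record the penalty actually applied
    return w
```

Clipping at zero rather than subtracting the full penalty keeps weights from oscillating across the origin, which is what makes the learned models sparse.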