Sciweavers

9 search results - page 1 / 2
CORR 2011 (Springer)
Parallel Coordinate Descent for L1-Regularized Loss Minimization
We propose Shotgun, a parallel coordinate descent algorithm for minimizing L1-regularized losses. Though coordinate descent seems inherently sequential, we prove convergence bounds...
Joseph K. Bradley, Aapo Kyrola, Danny Bickson, Car...
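
The Shotgun idea is easy to sketch for the Lasso: pick a small batch of coordinates, compute each soft-thresholded update from the same residual snapshot, and apply the batch together. The sketch below simulates the parallel round serially; the function names, step rule, and fixed iteration count are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * |.|."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def shotgun_lasso(X, y, lam, n_parallel=4, n_iters=200, seed=0):
    """Shotgun-style coordinate descent for 0.5*||y - Xw||^2 + lam*||w||_1 (sketch)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    col_sq = (X ** 2).sum(axis=0)        # per-coordinate curvature ||x_j||^2
    w = np.zeros(d)
    r = y - X @ w                        # residual, kept in sync with w
    for _ in range(n_iters):
        # One "parallel" round: all updates read the same residual snapshot.
        batch = rng.choice(d, size=n_parallel, replace=False)
        r_snap = r.copy()
        deltas = {}
        for j in batch:
            g = X[:, j] @ r_snap + col_sq[j] * w[j]
            deltas[j] = soft_threshold(g, lam) / col_sq[j] - w[j]
        for j, dj in deltas.items():     # apply the round's updates together
            w[j] += dj
            r -= dj * X[:, j]
    return w
```

The interesting question the paper answers is how large the batch may be before concurrent updates conflict; the fixed `n_parallel` above sidesteps that analysis entirely.
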
IJCNN 2007 (IEEE)
Optimizing 0/1 Loss for Perceptrons by Random Coordinate Descent
The 0/1 loss is an important cost function for perceptrons. Nevertheless, it cannot be easily minimized by most existing perceptron learning algorithms. In this paper, we propose...
Ling Li, Hsuan-Tien Lin
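
Because the 0/1 loss is piecewise constant along any single weight, an exact one-dimensional search only needs to test the finitely many breakpoints where a sample's activation changes sign. Here is a minimal sketch of that idea, restricted to axis-aligned coordinates (the paper considers more general random directions); all names and tolerances are assumptions.

```python
import numpy as np

def zero_one_loss(w, X, y):
    """Fraction of samples where sign(w.x) disagrees with labels y in {-1, +1}."""
    return np.mean(np.sign(X @ w) != y)

def rcd_perceptron(X, y, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    best = zero_one_loss(w, X, y)
    for _ in range(n_iters):
        j = rng.integers(d)
        xj = X[:, j]
        nz = xj != 0
        # Activations cross zero at these values of w[j]; the 0/1 loss is
        # constant between consecutive breakpoints, so testing just beyond
        # each breakpoint covers every attainable loss value.
        breaks = -((X @ w)[nz] - w[j] * xj[nz]) / xj[nz]
        candidates = np.concatenate(([w[j]], breaks + 1e-8, breaks - 1e-8))
        best_c = w[j]
        for c in candidates:
            w_try = w.copy()
            w_try[j] = c
            loss = zero_one_loss(w_try, X, y)
            if loss < best:
                best, best_c = loss, c
        w[j] = best_c
    return w
```
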
JMLR 2012
Lifted coordinate descent for learning with trace-norm regularization
We consider the minimization of a smooth loss with trace-norm regularization, which is a natural objective in multi-class and multi-task learning. Even though the problem is convex...
Miroslav Dudík, Zaïd Harchaoui, Jé...
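
The "lifting" replaces the matrix variable with a combination of rank-one atoms u v^T, so each descent step only has to produce one new atom: the top singular pair of the negative gradient. The sketch below captures that atom-generation step with a crude diminishing step size; `grad_fn`, the stopping test, and the update weights are all simplifying assumptions rather than the paper's procedure.

```python
import numpy as np

def lifted_cd_trace(grad_fn, shape, lam, n_atoms=20):
    """Greedy rank-one updates for min_W loss(W) + lam * ||W||_tr (sketch)."""
    W = np.zeros(shape)
    for k in range(n_atoms):
        G = grad_fn(W)
        U, s, Vt = np.linalg.svd(-G)
        if s[0] <= lam:                      # no atom beats the trace-norm penalty
            break
        atom = np.outer(U[:, 0], Vt[0, :])   # best rank-one descent direction
        eta = 2.0 / (k + 2)                  # diminishing step (assumption)
        W = (1 - eta) * W + eta * (s[0] - lam) * atom
    return W

# Illustrative use with a squared loss toward a target matrix Y:
# W = lifted_cd_trace(lambda W: W - Y, Y.shape, lam=0.5)
```
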
JMLR 2008
Coordinate Descent Method for Large-scale L2-loss Linear Support Vector Machines
Linear support vector machines (SVMs) are useful for classifying large-scale sparse data. Problems with sparse features are common in applications such as document classification a...
Kai-Wei Chang, Cho-Jui Hsieh, Chih-Jen Lin
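
For the L2-loss SVM, f(w) = 0.5*||w||^2 + C * sum_i max(0, 1 - y_i w.x_i)^2, each one-variable subproblem is differentiable, so a coordinate step can use a Newton update built from the currently active samples. The sketch below caches the activations X w and takes a plain Newton step per coordinate; the safeguarding line search of the actual method is omitted here as a simplification.

```python
import numpy as np

def cd_l2svm(X, y, C=1.0, n_epochs=20):
    """Primal coordinate descent for the L2-loss linear SVM (sketch)."""
    n, d = X.shape
    w = np.zeros(d)
    z = X @ w                            # cached activations w.x_i
    for _ in range(n_epochs):
        for j in range(d):
            margin = 1.0 - y * z
            active = margin > 0          # samples with nonzero loss
            g = w[j] - 2 * C * np.sum(y[active] * X[active, j] * margin[active])
            h = 1.0 + 2 * C * np.sum(X[active, j] ** 2)
            step = -g / h                # one-variable Newton step
            w[j] += step
            z += step * X[:, j]          # keep activations in sync with w
    return w
```

Caching `z` is what makes each coordinate step cheap on sparse data: only the nonzero entries of column j need touching, which is the regime the abstract highlights.
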
JMLR 2010
A Quasi-Newton Approach to Nonsmooth Convex Optimization Problems in Machine Learning
We extend the well-known BFGS quasi-Newton method and its limited-memory variant LBFGS to the optimization of nonsmooth convex objectives. This is done in a rigorous fashion by ge...
Jin Yu, S. V. N. Vishwanathan, Simon Günter, ...
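
One way to picture the extension is to feed a subgradient oracle into the usual LBFGS two-loop recursion. The paper does considerably more (a principled descent-direction search and an exact line search for piecewise-linear objectives), so the naive substitution below, with a hinge loss and a fixed step size, is only an illustrative assumption.

```python
import numpy as np
from collections import deque

def hinge_subgrad(w, X, y):
    """One subgradient of mean(max(0, 1 - y_i * w.x_i))."""
    active = 1.0 - y * (X @ w) > 0
    return -(y[active, None] * X[active]).sum(axis=0) / len(y)

def two_loop(g, mem):
    """Standard LBFGS two-loop recursion: approximates H^{-1} @ g."""
    q = g.copy()
    alphas = []
    for s, yv, rho in reversed(mem):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * yv
    if mem:
        s, yv, _ = mem[-1]
        q *= (s @ yv) / (yv @ yv)        # initial Hessian scaling
    for (s, yv, rho), a in zip(mem, reversed(alphas)):
        q += (a - rho * (yv @ q)) * s
    return q

def sub_lbfgs(X, y, n_iters=100, m=10, lr=0.5):
    w = np.zeros(X.shape[1])
    mem = deque(maxlen=m)                # (s, y, rho) curvature pairs
    g = hinge_subgrad(w, X, y)
    for _ in range(n_iters):
        w_new = w - lr * two_loop(g, list(mem))   # fixed step (assumption)
        g_new = hinge_subgrad(w_new, X, y)
        s, yv = w_new - w, g_new - g
        if s @ yv > 1e-10:               # keep only curvature-positive pairs
            mem.append((s, yv, 1.0 / (s @ yv)))
        w, g = w_new, g_new
    return w
```

The curvature guard `s @ yv > 1e-10` matters more here than in the smooth case: with subgradients the secant pairs can easily violate positive curvature, which would corrupt the inverse-Hessian approximation.
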