
Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization

We introduce a proximal version of the stochastic dual coordinate ascent method and show how to accelerate the method using an inner-outer iteration procedure. We analyze the runtime of the framework and obtain rates that improve state-of-the-art results for various key machine learning optimization problems, including SVM, logistic regression, ridge regression, Lasso, and multiclass SVM. Experiments validate our theoretical findings.
Shai Shalev-Shwartz, Tong Zhang 0001
Added: 08 Apr 2016
Updated: 08 Apr 2016
Type: Journal
Year: 2016
Where: MP
Authors: Shai Shalev-Shwartz, Tong Zhang 0001
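The accelerated proximal method summarized in the abstract builds on the basic stochastic dual coordinate ascent (SDCA) update. As a rough illustration only, here is a minimal sketch of the plain (non-accelerated, non-proximal) SDCA loop for ridge regression; all function names, parameter values, and the synthetic data are hypothetical and not taken from the paper:

```python
import numpy as np

def sdca_ridge(X, y, lam=1e-2, epochs=50, seed=0):
    """Illustrative plain-SDCA sketch for ridge regression (hypothetical code).

    Objective: (1/n) * sum_i 0.5*(X[i] @ w - y[i])**2 + (lam/2)*||w||^2.
    The paper's method adds a proximal term and an inner-outer
    acceleration loop on top of coordinate updates like these.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)  # dual variables, one per training example
    w = np.zeros(d)      # primal iterate, kept equal to X.T @ alpha / (lam*n)
    for _ in range(epochs):
        for i in rng.permutation(n):
            # Closed-form maximizer of the dual objective in coordinate i
            # for the squared loss.
            delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + X[i] @ X[i] / (lam * n))
            alpha[i] += delta
            w += delta * X[i] / (lam * n)  # keep w in sync with alpha
    return w

# Tiny usage example on synthetic data (hypothetical).
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.01 * rng.standard_normal(200)
w_hat = sdca_ridge(X, y)
print(np.round(w_hat, 2))
```

At the ridge optimum the dual variables equal the residuals, so maintaining `w = X.T @ alpha / (lam * n)` makes the primal iterate converge to the usual closed-form ridge solution; the accelerated variant analyzed in the paper wraps such inner passes in an outer extrapolation loop to improve the rate.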