Smoothing Proximal Gradient Method for General Structured Sparse Learning

We study the problem of learning high-dimensional regression models regularized by a structured-sparsity-inducing penalty that encodes prior structural information on either the input or the output side. We consider two widely adopted types of such penalties as our motivating examples: 1) the overlapping group lasso penalty, based on the ℓ1/ℓ2 mixed-norm, and 2) the graph-guided fusion penalty. For both types of penalties, due to their non-separability, developing an efficient optimization method has remained a challenging problem. In this paper, we propose a general optimization approach, called the smoothing proximal gradient method, which can solve structured sparse regression problems with a smooth convex loss and a wide spectrum of structured-sparsity-inducing penalties. Our approach is based on a general smoothing technique of Nesterov [17]. It achieves a convergence rate faster than the standard first-order method, the subgradient method, and is much more scalable than the most widely used...
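
The core idea is concrete enough to sketch: replace the non-separable penalty with Nesterov's smooth approximation, whose gradient has a closed form, and run an accelerated gradient method on the result. Below is a minimal Python sketch of that idea for a squared loss with an overlapping group lasso penalty; the function name, the groups-as-index-arrays format, the conservative Lipschitz bound, and all parameter defaults are illustrative assumptions, not the authors' implementation.

import numpy as np

def spg_overlapping_group_lasso(X, y, groups, lam=1.0, mu=1e-3, max_iter=500):
    """Smoothing proximal gradient sketch for least squares with an
    overlapping group lasso penalty lam * sum_g ||beta_g||_2."""
    n, p = X.shape
    beta = np.zeros(p)
    w = beta.copy()  # auxiliary point carrying the momentum
    # Conservative Lipschitz bound (an assumption for this sketch):
    # ||X||^2 / n for the loss, plus lam^2 * d_max / mu for the smoothed
    # penalty, where d_max counts how often one coordinate appears
    # across groups.
    d_max = np.bincount(np.concatenate(groups), minlength=p).max()
    L = np.linalg.norm(X, 2) ** 2 / n + lam ** 2 * d_max / mu
    t = 1.0
    for _ in range(max_iter):
        # Gradient of the smoothed penalty: the optimal dual variable
        # alpha_g is the Euclidean projection of lam * w_g / mu onto
        # the unit l2 ball; scatter lam * alpha_g back into group g.
        grad_pen = np.zeros(p)
        for g in groups:
            a = lam * w[g] / mu
            a /= max(np.linalg.norm(a), 1.0)
            grad_pen[g] += lam * a
        grad = X.T @ (X @ w - y) / n + grad_pen
        beta_new = w - grad / L  # gradient step on the smoothed objective
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        w = beta_new + ((t - 1.0) / t_new) * (beta_new - beta)  # momentum
        beta, t = beta_new, t_new
    return beta

# Example with three overlapping groups on 10 features:
# groups = [np.arange(0, 5), np.arange(3, 8), np.arange(6, 10)]
# beta_hat = spg_overlapping_group_lasso(X, y, groups, lam=0.1)

The smoothing parameter mu trades approximation quality against step size: smaller mu tracks the original penalty more tightly but inflates L. The paper's analysis balances this trade-off to obtain an O(1/ε) iteration complexity, versus O(1/ε²) for the plain subgradient method.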
Added 20 Apr 2012
Updated 20 Apr 2012
Type Journal
Year 2012
Where CoRR
Authors Xi Chen, Qihang Lin, Seyoung Kim, Jaime G. Carbonell, Eric P. Xing