A Feasible Nonconvex Relaxation Approach to Feature Selection

Variable selection problems are typically addressed under a penalized optimization framework. Nonconvex penalties such as the minimax concave penalty (MCP) and the smoothly clipped absolute deviation (SCAD) penalty have been shown, both theoretically and empirically, to yield sparse solutions. In this paper we propose a new nonconvex penalty that we call the exponential-type penalty. It is characterized by a positive parameter that bridges the ℓ0 and ℓ1 penalties. We apply this new penalty to sparse supervised learning problems. To solve the resulting optimization problem, we resort to a reweighted ℓ1 minimization method. Moreover, we devise an efficient scheme for adaptively updating the tuning parameter. Our experimental results are encouraging: they show that the exponential-type penalty is competitive with MCP and SCAD.
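
The abstract outlines a reweighted ℓ1 scheme for fitting models under the exponential-type penalty. Below is a minimal sketch (not the authors' code) of how such a scheme could look for least-squares regression. The specific parameterization pen(|w_j|) = λ(1 − e^(−γ|w_j|))/(1 − e^(−γ)), the ISTA inner solver, and all parameter values are assumptions made purely for illustration.

# Sketch only: reweighted l1 minimization with an assumed exponential-type
# penalty pen(|w_j|) = lam * (1 - exp(-gamma*|w_j|)) / (1 - exp(-gamma)).
# Not the authors' implementation; parameterization and solver are assumptions.
import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding: proximal operator of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weighted_lasso_ista(X, y, weights, lam, n_iter=500):
    # Solve min_w 0.5*||y - Xw||^2 + lam * sum_j weights_j * |w_j| via ISTA.
    p = X.shape[1]
    L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the smooth gradient
    w = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        w = soft_threshold(w - grad / L, lam * weights / L)
    return w

def exp_penalty_reweighted_l1(X, y, lam=0.1, gamma=5.0, n_outer=10):
    # Outer loop: majorize the concave exponential-type penalty at the current
    # iterate; the per-feature l1 weight is its derivative at |w_j|, i.e.
    # gamma * exp(-gamma*|w_j|) / (1 - exp(-gamma)).
    p = X.shape[1]
    w = np.zeros(p)
    for _ in range(n_outer):
        weights = gamma * np.exp(-gamma * np.abs(w)) / (1.0 - np.exp(-gamma))
        w = weighted_lasso_ista(X, y, weights, lam)
    return w

if __name__ == "__main__":
    # Toy usage: recover a 3-sparse coefficient vector from noisy measurements.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 20))
    w_true = np.zeros(20)
    w_true[:3] = [2.0, -1.5, 1.0]
    y = X @ w_true + 0.1 * rng.standard_normal(100)
    print(np.round(exp_penalty_reweighted_l1(X, y), 2))

As γ shrinks, the assumed weights flatten toward a constant and the scheme behaves like an ordinary lasso, while large γ drives the weights of nonzero coefficients toward zero, mimicking an ℓ0-style penalty; this is one concrete way to read the abstract's claim that the positive parameter connects the ℓ0 and ℓ1 penalties.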
Added 12 Dec 2011
Updated 12 Dec 2011
Type Conference
Year 2011
Where AAAI
Authors Cuixia Gao, Naiyan Wang, Qi Yu, Zhihua Zhang