SDM 2010 (SIAM)

Convex Principal Feature Selection

A popular approach for dimensionality reduction and data analysis is principal component analysis (PCA). A limiting factor of PCA is that it does not inform us which of the original features are important. There has been recent interest in sparse PCA (SPCA): by applying an L1 regularizer to PCA, a sparse transformation is achieved. However, true feature selection may not be achieved, as nonzero coefficients may be distributed over several features. Feature selection is an NP-hard combinatorial optimization problem. This paper relaxes and re-formulates the feature selection problem as a convex continuous optimization problem that minimizes a mean-squared reconstruction error (the criterion optimized by PCA) and takes feature redundancy into account (an important property in PCA and feature selection). We call this new method Convex Principal Feature Selection (CPFS). Experiments show that CPFS performed better than SPCA in selecting features that maximize variance or minimize the ...
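
As a rough illustration of the kind of formulation the abstract describes, the sketch below is an assumption, not the paper's exact CPFS objective: it poses feature selection as a convex program that minimizes the reconstruction error ||X - XW||_F^2 plus a group-sparsity penalty on the rows of a feature-weight matrix W, solved by proximal gradient descent, so that features whose rows shrink to zero can be discarded. The function name and toy data are illustrative only.

```python
import numpy as np

def convex_feature_selection(X, lam=1.0, n_iter=500):
    """Sketch of a convex, reconstruction-based feature selector.

    Minimizes  ||X - X W||_F^2 + lam * sum_i ||W[i, :]||_2
    by proximal gradient descent. Rows of W that shrink to zero
    correspond to features not needed to reconstruct X.
    NOTE: an illustrative group-sparse relaxation in the spirit of
    the abstract, not the exact CPFS formulation from the paper.
    """
    n, d = X.shape
    W = np.zeros((d, d))
    # Lipschitz constant of the gradient of the smooth term 2 X^T (X W - X)
    L = 2.0 * np.linalg.norm(X.T @ X, 2)
    step = 1.0 / L
    for _ in range(n_iter):
        grad = 2.0 * X.T @ (X @ W - X)      # gradient of reconstruction error
        V = W - step * grad                  # gradient step
        # proximal step: row-wise group soft-thresholding
        row_norms = np.linalg.norm(V, axis=1, keepdims=True)
        shrink = np.maximum(0.0, 1.0 - step * lam / np.maximum(row_norms, 1e-12))
        W = shrink * V
    return W

# Toy usage: features whose rows of W stay (near) zero are candidates to drop.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
X[:, 5] = X[:, 0] + 0.01 * rng.standard_normal(100)   # redundant copy of feature 0
W = convex_feature_selection(X - X.mean(axis=0), lam=5.0)
selected = np.where(np.linalg.norm(W, axis=1) > 1e-3)[0]
print("selected features:", selected)
```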
Type Conference
Year 2010
Where SDM
Authors Mahdokht Masaeli, Yan Yan, Ying Cui, Glenn Fung, Jennifer G. Dy