
Variable Sparsity Kernel Learning

This paper presents novel algorithms and applications for a particular class of mixed-norm regularization based Multiple Kernel Learning (MKL) formulations. The formulations assume that the given kernels are grouped and employ ℓ_1 norm regularization for promoting sparsity within the RKHS norms of each group, and ℓ_s (s ≥ 2) norm regularization for promoting non-sparse combinations across groups. Various sparsity levels in combining the kernels can be achieved by varying the grouping of kernels; hence we name the formulations Variable Sparsity Kernel Learning (VSKL) formulations. While previous attempts led to non-convex formulations, here we present a convex formulation which admits efficient Mirror-Descent (MD) based solving techniques. The proposed MD based algorithm optimizes over a product of simplices and has a computational complexity of O(m² n_tot log(n_max) / ε²), where m is the number of training data points, and n_max and n_tot are the maximum number of kernels in any group and the total number of kernels, respectively, and ...
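To make the algorithmic claim above concrete, here is a minimal sketch of entropic mirror descent over a product of simplices, the kind of feasible set the abstract describes. Everything here is a hypothetical illustration under assumed conventions: the toy quadratic objective, the step size eta, and the function name mirror_descent_product_simplex are not from the paper and this is not the authors' VSKL solver.

import numpy as np

def mirror_descent_product_simplex(grad_fn, group_sizes, steps=200, eta=0.1):
    # Start each group of kernel weights at the uniform distribution
    # on its own simplex.
    blocks = [np.full(n, 1.0 / n) for n in group_sizes]
    for _ in range(steps):
        x = np.concatenate(blocks)
        g = grad_fn(x)
        offset = 0
        for j, n in enumerate(group_sizes):
            # Exponentiated-gradient step induced by the negative-entropy
            # mirror map; renormalizing within the group keeps each block
            # on its simplex.
            w = blocks[j] * np.exp(-eta * g[offset:offset + n])
            blocks[j] = w / w.sum()
            offset += n
    return np.concatenate(blocks)

# Toy usage on a hypothetical smooth objective (a quadratic), not the
# VSKL dual: minimize x'Ax over two simplices of sizes 3 and 2.
A = np.diag([3.0, 1.0, 2.0, 0.5, 4.0])
x_opt = mirror_descent_product_simplex(lambda x: 2.0 * A @ x, [3, 2])
print(x_opt)  # mass concentrates on the smallest-curvature coordinate of each block

The multiplicative update is what makes mirror descent natural over simplices: iterates stay strictly feasible without an explicit Euclidean projection, and the entropic geometry is what yields the log(n_max) dependence quoted in the complexity bound.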
Type: Journal
Year: 2011
Where: JMLR
Authors: Jonathan Aflalo, Aharon Ben-Tal, Chiranjib Bhattacharyya, Jagarlapudi Saketha Nath, Sankaran Raman