Multi Kernel Learning with Online-Batch Optimization

In recent years there has been a lot of interest in designing principled classification algorithms over multiple cues, based on the intuitive notion that using more features should lead to better performance. In the domain of kernel methods, a principled way to use multiple features is the Multi Kernel Learning (MKL) approach. Here we present an MKL optimization algorithm based on stochastic gradient descent that has a guaranteed convergence rate. We directly solve the MKL problem in the primal formulation. Using a p-norm formulation of MKL, we introduce a parameter that controls the level of sparsity of the solution, while leading to an easier optimization problem. We show, theoretically and experimentally, that 1) our algorithm has a faster convergence rate as the number of kernels grows; 2) the training complexity is linear in the number of training examples; 3) very few iterations are sufficient to reach good solutions. Experiments on standard benchmark databases support our c...
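
The abstract describes solving p-norm MKL directly in the primal with stochastic gradient descent, where the p-norm controls the sparsity of the kernel combination. Below is a minimal sketch of that idea only, not the paper's two-stage online-batch algorithm: a p-norm-regularized multi-kernel objective with hinge loss, optimized by plain stochastic subgradient descent over explicit (hypothetical) feature maps standing in for kernels. The feature maps, hinge loss, and 1/(lambda*t) step size are illustrative assumptions.

import numpy as np

def pnorm_mkl_sgd(feature_maps, X, y, p=1.5, lam=0.01, epochs=5, seed=0):
    # Sketch objective: lam/2 * (sum_k ||w_k||^p)^(2/p)
    #                   + average hinge loss of y_i * sum_k <w_k, phi_k(x_i)>.
    # One weight vector per kernel / feature map.
    rng = np.random.default_rng(seed)
    n = len(X)
    w = [np.zeros(len(fm(X[0]))) for fm in feature_maps]

    def reg_grad(w):
        # Gradient of 0.5 * (sum_k ||w_k||^p)^(2/p) with respect to each w_k.
        norms = np.array([np.linalg.norm(wk) for wk in w])
        s = np.sum(norms ** p)
        if s == 0.0:
            return [np.zeros_like(wk) for wk in w]
        scale = s ** ((2.0 - p) / p)
        return [scale * (nk ** (p - 2.0)) * wk if nk > 0 else np.zeros_like(wk)
                for wk, nk in zip(w, norms)]

    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)                  # assumed 1/(lambda*t) step size
            feats = [fm(X[i]) for fm in feature_maps]
            margin = y[i] * sum(float(wk @ fk) for wk, fk in zip(w, feats))
            g_reg = reg_grad(w)
            for k in range(len(w)):
                g = lam * g_reg[k]
                if margin < 1.0:                   # hinge-loss subgradient
                    g = g - y[i] * feats[k]
                w[k] = w[k] - eta * g
    return w

# Toy usage with hypothetical data: two feature maps playing the role of two kernels.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])
maps = [lambda x: x, lambda x: x ** 2]             # linear and squared features
weights = pnorm_mkl_sgd(maps, X, y, p=1.2)

Taking p close to 1 pushes many per-kernel weight vectors toward zero (a sparse kernel combination), while larger p keeps the problem smoother and easier to optimize, which is the trade-off the parameter controls in the abstract.
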
Added: 27 Sep 2012
Updated: 27 Sep 2012
Type: Journal
Year: 2012
Where: JMLR
Authors: Francesco Orabona, Jie Luo, Barbara Caputo