Sparse Kernel SVMs via Cutting-Plane Training

We explore an algorithm for training SVMs with kernels that can represent the learned rule using arbitrary basis vectors, not just the support vectors (SVs) from the training set. This results in two benefits. First, the added flexibility makes it possible to find sparser solutions of good quality, substantially speeding up prediction. Second, the improved sparsity can also make training of kernel SVMs more efficient, especially for high-dimensional and sparse data (e.g., text classification). This has the potential to make training of kernel SVMs tractable for large training sets, where conventional methods scale quadratically due to the linear growth of the number of SVs. In addition to a theoretical analysis of the algorithm, we also present an empirical evaluation.
Thorsten Joachims, Chun-Nam John Yu
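
The key object behind the claimed speed-up is a kernel classifier whose expansion ranges over a small set of arbitrary basis vectors rather than over every support vector, so prediction cost grows with the basis size instead of the training-set size. The sketch below is only an illustration of that predictor form, not the paper's cutting-plane training procedure; the basis vectors, coefficients, and RBF kernel parameter are placeholder values chosen for the example.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=0.5):
    """Gaussian RBF kernel matrix between the rows of X and the rows of Z."""
    sq_dists = (
        np.sum(X ** 2, axis=1)[:, None]
        + np.sum(Z ** 2, axis=1)[None, :]
        - 2.0 * X @ Z.T
    )
    return np.exp(-gamma * sq_dists)

def predict(X, basis, beta, bias=0.0, gamma=0.5):
    """Evaluate f(x) = sum_j beta_j * K(b_j, x) + bias for each row x of X.

    Each prediction needs only len(basis) kernel evaluations, so a small
    basis set translates directly into faster classification than an
    expansion over all support vectors.
    """
    return rbf_kernel(X, basis, gamma) @ beta + bias

# Toy usage with a handful of (hypothetical) basis vectors and coefficients.
rng = np.random.default_rng(0)
X_test = rng.normal(size=(5, 2))
basis = rng.normal(size=(3, 2))   # placeholder basis vectors
beta = rng.normal(size=3)         # placeholder expansion coefficients
print(predict(X_test, basis, beta))
```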
Type: Conference
Year: 2009
Where: PKDD
Publisher: Springer