TNN
2008

Training Hard-Margin Support Vector Machines Using Greedy Stagewise Algorithm

Hard-margin support vector machines (HM-SVMs) are prone to overfitting in the presence of noise. Soft-margin SVMs address this problem by introducing a regularization term and achieve state-of-the-art performance; however, this remedy incurs a relatively high computational cost. In this paper, an alternative method, a greedy stagewise algorithm for SVMs named GS-SVMs, is presented to cope with the overfitting of HM-SVMs without employing a regularization term. The most attractive property of GS-SVMs is that its worst-case computational complexity scales only quadratically with the number of training samples. Experiments on large data sets with up to 400,000 training samples demonstrate that GS-SVMs can be faster than LIBSVM 2.83 without sacrificing accuracy. Finally, we employ statistical learning theory to analyze the empirical results, which shows that the success of GS-SVMs lies in its early stopping rule, which acts as an implicit regularization term.
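To illustrate the general idea of greedy stagewise fitting with an early stopping rule, here is a minimal sketch. It is a hypothetical simplification, not the authors' exact GS-SVMs procedure: it uses a squared-loss surrogate and an RBF kernel, greedily adds one kernel basis function per stage, and stops when the loss stalls, which is the role early stopping plays as implicit regularization in the abstract. The function names (`greedy_stagewise_fit`, `predict`) and the patience-based stopping criterion are assumptions for illustration.

```python
# Hypothetical sketch of greedy stagewise kernel fitting with early stopping.
# NOT the published GS-SVMs algorithm: squared loss and the patience rule
# are simplifications chosen to keep the example short and runnable.
import numpy as np

def rbf(x, z, gamma=1.0):
    """Gaussian (RBF) kernel between a point x and point(s) z."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(z)) ** 2, axis=-1))

def greedy_stagewise_fit(X, y, n_stages=20, patience=3, gamma=1.0):
    """Greedily add one kernel basis function per stage.

    Each stage picks the training sample whose kernel column is most
    correlated with the current residual, fits its coefficient by least
    squares, and stops early once the loss stops improving.
    """
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    n = len(X)
    K = np.array([[rbf(X[i], X[j], gamma) for j in range(n)] for i in range(n)])
    f = np.zeros(n)            # current decision values on the training set
    model = []                 # list of (sample index, coefficient)
    best_loss, stall = np.inf, 0
    for _ in range(n_stages):
        r = y - f                                   # residual under squared loss
        scores = K @ r                              # correlation with each basis
        j = int(np.argmax(np.abs(scores)))          # greedy basis selection
        alpha = scores[j] / (K[:, j] @ K[:, j])     # least-squares step size
        f = f + alpha * K[:, j]
        model.append((j, alpha))
        loss = float(np.mean((y - f) ** 2))
        if loss < best_loss - 1e-6:
            best_loss, stall = loss, 0
        else:
            stall += 1
            if stall >= patience:                   # early stopping rule
                break
    return model

def predict(model, X_train, x, gamma=1.0):
    """Decision value for a new point x given the stagewise model."""
    return sum(a * rbf(X_train[j], x, gamma) for j, a in model)
```

Because each stage touches every training sample once through the precomputed kernel column, the per-stage cost is linear in the sample count, which is consistent with the quadratic worst-case scaling the abstract claims when the number of stages is bounded by the sample count.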
Liefeng Bo, Ling Wang, Licheng Jiao
Added 15 Dec 2010
Updated 15 Dec 2010
Type Journal
Year 2008
Where TNN
Authors Liefeng Bo, Ling Wang, Licheng Jiao