A simpler unified analysis of budget perceptrons

The kernel Perceptron is an appealing online learning algorithm with a drawback: whenever it makes an error it must add the example to its support set, which slows both training and testing when the number of errors is large. The Forgetron and the Randomized Budget Perceptron algorithms overcome this problem by restricting the number of support vectors the Perceptron is allowed to keep. These algorithms have regret bounds whose proofs are dissimilar. In this paper we propose a unified analysis of both algorithms by observing that the way in which they remove support vectors can be seen as a form of L2-regularization. By casting these algorithms as instances of online convex optimization problems and applying a variant of Zinkevich's theorem for noisy and incorrect gradients, we can bound the regret of these algorithms more easily than before. Our bounds are similar to the existing ones, but the proofs are less technical.
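For intuition, here is a minimal Python sketch of a budgeted kernel Perceptron in the spirit of the Randomized Budget Perceptron described above: on a mistake the example joins the support set, and once the budget is exceeded a uniformly random support vector is evicted. The RBF kernel, the budget parameter, and all names here are illustrative assumptions, not the paper's exact formulation.

import random
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    # Gaussian (RBF) kernel; an illustrative choice, not prescribed by the paper
    return np.exp(-gamma * np.sum((x - z) ** 2))

class RandomizedBudgetPerceptron:
    def __init__(self, budget, kernel=rbf_kernel):
        self.budget = budget    # maximum number of support vectors allowed
        self.kernel = kernel
        self.support = []       # (x_i, y_i) pairs defining the predictor

    def predict(self, x):
        # sign of the kernel expansion over the current support set
        score = sum(y * self.kernel(xs, x) for xs, y in self.support)
        return 1 if score >= 0 else -1

    def update(self, x, y):
        # mistake-driven update: only errors change the hypothesis
        if self.predict(x) != y:
            self.support.append((x, y))
            if len(self.support) > self.budget:
                # enforce the budget: discard a uniformly random support vector
                self.support.pop(random.randrange(len(self.support)))

Evicting a support vector perturbs the kernel expansion; it is this removal step that the paper's analysis views as a form of L2-regularization and absorbs into the noisy-and-incorrect-gradient variant of Zinkevich's theorem.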
Type: Conference
Year: 2009
Where: ICML
Authors: Ilya Sutskever