On the Generalization Ability of On-Line Learning Algorithms

In this paper, it is shown how to extract a hypothesis with small risk from the ensemble of hypotheses generated by an arbitrary on-line learning algorithm run on an independent and identically distributed (i.i.d.) sample of data. Using a simple large deviation argument, we prove tight data-dependent bounds for the risk of this hypothesis in terms of an easily computable statistic associated with the on-line performance of the ensemble. Via sharp pointwise bounds on this statistic, we then obtain risk tail bounds for kernel Perceptron algorithms in terms of the spectrum of the empirical kernel matrix. These bounds reveal that the linear hypotheses found via our approach achieve optimal tradeoffs between hinge loss and margin size over the class of all linear functions, an issue that was left open by previous results. A distinctive feature of our approach is that the key tools for our analysis come from the model of prediction of individual sequences; i.e., a model making no probabilistic assumptions...
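The online-to-batch conversion described in the abstract can be made concrete. The following is a minimal Python sketch, not the authors' code: one kernel Perceptron pass over the sample yields the ensemble of hypotheses h_0, ..., h_{n-1}, and the extracted hypothesis is the ensemble member with the smallest empirical error on the suffix of the sample it has not seen, plus a large-deviation confidence penalty. The Gaussian kernel, the confidence level delta, and the exact penalty form sqrt(ln((n+1)/delta) / (2(n-t))) are illustrative assumptions, not the paper's precise statistic.

import numpy as np

def gaussian_kernel(x, z, gamma=1.0):
    # RBF kernel; gamma is an illustrative choice, not from the paper.
    return np.exp(-gamma * np.sum((x - z) ** 2))

def predict(supports, X, y, x, gamma=1.0):
    # Sign of the kernel expansion over the current support set.
    score = sum(y[i] * gaussian_kernel(X[i], x, gamma) for i in supports)
    return 1 if score >= 0 else -1

def kernel_perceptron_ensemble(X, y, gamma=1.0):
    # One online pass. ensemble[t] holds the support indices of the
    # hypothesis h_t in force *before* example t is revealed, so h_t
    # depends only on examples 0..t-1.
    supports, ensemble = [], []
    for t in range(len(X)):
        ensemble.append(list(supports))
        if predict(supports, X, y, X[t], gamma) != y[t]:
            supports.append(t)  # mistake: add example t to the expansion
    return ensemble

def select_hypothesis(ensemble, X, y, gamma=1.0, delta=0.05):
    # Pick the ensemble member minimizing its empirical error on the
    # examples it has not seen, plus a confidence penalty in the spirit
    # of a large deviation (union) bound; the penalty form is assumed.
    n = len(X)
    best, best_score = ensemble[0], float("inf")
    for t, supports in enumerate(ensemble):
        unseen = range(t, n)  # h_t was frozen before these examples
        err = np.mean([predict(supports, X, y, X[j], gamma) != y[j]
                       for j in unseen])
        penalty = np.sqrt(np.log((n + 1) / delta) / (2 * (n - t)))
        if err + penalty < best_score:
            best_score, best = err + penalty, supports
    return best

# Toy usage on a linearly separable 2-D sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] + X[:, 1] >= 0, 1, -1)
ensemble = kernel_perceptron_ensemble(X, y)
chosen = select_hypothesis(ensemble, X, y)
print("support size of the selected hypothesis:", len(chosen))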
Type: Conference
Year: 2001
Where: NIPS
Authors: Nicolò Cesa-Bianchi, Alex Conconi, Claudio Gentile