We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization...
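As a rough illustration of the penalized approach (the specific penalties studied in the paper are not reproduced here), a minimal Python sketch of choosing, among fitted candidates, the one minimizing empirical loss plus a complexity penalty; the names `models` and `penalty` and the use of 0-1 loss are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def select_model(models, X, y, penalty):
    """Pick the candidate whose penalized empirical loss is smallest.

    models  -- list of fitted candidate predictors (hypothetical interface: .predict)
    penalty -- penalty(k): a (possibly data-dependent) complexity penalty
               for the k-th candidate; illustrative placeholder
    """
    best_idx, best_score = 0, np.inf
    for k, model in enumerate(models):
        # Empirical (training) 0-1 loss of this candidate.
        emp_loss = np.mean(model.predict(X) != y)
        score = emp_loss + penalty(k)          # penalized empirical loss
        if score < best_score:
            best_idx, best_score = k, score
    return models[best_idx]
```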
This paper collects together a miscellany of results originally motivated by the analysis of the generalization performance of the “maximum-margin” algorithm due to Vapnik and...
Robert C. Williamson, Alex J. Smola, Bernhard Schölkopf...
We consider the AdaBoost procedure for boosting weak learners. In AdaBoost, a key step is choosing a new distribution on the training examples based on the old distribution and th...
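For concreteness, a minimal sketch of the standard AdaBoost reweighting step this refers to: compute the weak hypothesis's weighted error, derive its weight, and exponentially reweight the examples. The function name and the use of NumPy are illustrative:

```python
import numpy as np

def adaboost_reweight(D, y, h_pred):
    """One AdaBoost round: update the distribution over training examples.

    D       -- current distribution over the m examples (non-negative, sums to 1)
    y       -- labels in {-1, +1}
    h_pred  -- weak hypothesis predictions in {-1, +1}
    Returns the hypothesis weight alpha and the new distribution.
    """
    eps = np.sum(D[h_pred != y])               # weighted error of the weak learner
    eps = np.clip(eps, 1e-12, 1 - 1e-12)       # guard against degenerate errors
    alpha = 0.5 * np.log((1 - eps) / eps)      # weight assigned to this hypothesis
    D_new = D * np.exp(-alpha * y * h_pred)    # up-weight mistakes, down-weight hits
    return alpha, D_new / D_new.sum()          # renormalize to a distribution
```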
We consider algorithms for combining advice from a set of experts. In each trial, the algorithm receives the predictions of the experts and produces its own prediction. A loss function...
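The snippet cuts off before the particular master algorithm is named; as an illustrative sketch only, assuming an exponentially weighted average forecaster under squared loss (not necessarily the algorithm analyzed in the paper):

```python
import numpy as np

def master_prediction(weights, expert_preds):
    """Master prediction: weighted average of the experts' predictions."""
    return np.dot(weights, expert_preds) / weights.sum()

def update_weights(weights, expert_preds, outcome, eta=0.5):
    """Multiplicative update: each expert is penalized by its own loss.

    eta is an assumed learning rate; squared loss is an illustrative choice.
    """
    losses = (expert_preds - outcome) ** 2
    return weights * np.exp(-eta * losses)
```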
We describe several improvements to Freund and Schapire's AdaBoost boosting algorithm, particularly in a setting in which hypotheses may assign confidences to each of...
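One setting of this kind lets each weak hypothesis output a real-valued confidence in [-1, 1]; below is a hedged sketch of the corresponding exponential reweighting, with the hypothesis weight alpha chosen from the correlation r between confidences and labels. It is illustrative and not necessarily the exact variant described in the paper:

```python
import numpy as np

def confidence_rated_reweight(D, y, h_conf):
    """Reweighting when the weak hypothesis outputs confidences in [-1, 1].

    D      -- current distribution over examples (sums to 1)
    y      -- labels in {-1, +1}
    h_conf -- real-valued confidences in [-1, 1]
    """
    r = np.sum(D * y * h_conf)                  # correlation of confidences with labels
    r = np.clip(r, -1 + 1e-12, 1 - 1e-12)       # keep the log well-defined
    alpha = 0.5 * np.log((1 + r) / (1 - r))     # hypothesis weight from r
    D_new = D * np.exp(-alpha * y * h_conf)     # exponential update, confidence-scaled
    return alpha, D_new / D_new.sum()
```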