Boosting in the Limit: Maximizing the Margin of Learned Ensembles

The "minimum margin" of an ensemble classifier on a given training set is, roughly speaking, the smallest vote it gives to any correct training label. Recent work has shown that the Adaboost algorithm is particularly effective at producing ensembles with large minimum margins, and theory suggests that this may account for its success at reducing generalization error. We note, however,that the problem of finding good margins is closely related to linear programming, and we use this connection to derive and test new "LPboosting" algorithms that achieve better minimum margins than Adaboost. However, these algorithms do not always yield better generalization performance. In fact, more often the opposite is true. We report on a series of controlled experiments which show that no simple version of the minimum-margin story can be complete. We conclude that the crucial question as to why boosting works so well in practice, and how to further improve upon it, remains mostly...
Type: Conference
Year: 1998
Where: AAAI
Authors: Adam J. Grove, Dale Schuurmans