ALT 2008, Springer

Entropy Regularized LPBoost

In this paper we discuss boosting algorithms that maximize the soft margin of the produced linear combination of base hypotheses. LPBoost is the most straightforward boosting algorithm for this purpose: it maximizes the soft margin by solving a linear programming problem. While it performs well on natural data, there are cases where its number of iterations is linear in the number of examples rather than logarithmic. By simply adding a relative entropy regularizer to the linear objective of LPBoost, we arrive at the Entropy Regularized LPBoost algorithm, for which we prove a logarithmic iteration bound. A previous algorithm, called SoftBoost, has the same iteration bound, but its generalization error often decreases slowly in early iterations. Entropy Regularized LPBoost does not suffer from this problem and has a simpler, more natural motivation.
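
The abstract describes the algorithm only at a high level. As a concrete illustration, below is a minimal sketch in Python of the distribution update that Entropy Regularized LPBoost performs at each iteration: over capped distributions d on the N examples, minimize the maximum edge of the hypotheses selected so far plus a relative entropy to the uniform distribution, scaled by 1/eta. The sketch assumes the cvxpy library; the names U, nu, eta, and erlpboost_update are illustrative choices, not taken from the paper's code, and in the paper eta is set as a function of the target accuracy and the capping parameter nu.

```python
# A minimal sketch of the Entropy Regularized LPBoost distribution update,
# assuming cvxpy is available. U, nu, eta, and erlpboost_update are
# illustrative names, not from the paper's code.
import numpy as np
import cvxpy as cp

def erlpboost_update(U, nu, eta):
    """Compute the next distribution d over the N examples.

    U   : (t, N) array of margins u_i^q = y_i * h_q(x_i) for the t
          base hypotheses chosen so far.
    nu  : soft-margin capping parameter (1 <= nu <= N); each example
          weight is capped at 1/nu.
    eta : regularization strength; larger eta brings the update closer
          to the plain LPBoost linear program.
    """
    t, N = U.shape
    d = cp.Variable(N)
    gamma = cp.Variable()  # upper bound on the edge of every hypothesis
    # Relative entropy to the uniform distribution (1/N, ..., 1/N):
    # sum_i d_i log(N d_i) = -sum_i entr(d_i) + log N, with entr(x) = -x log x.
    rel_entropy = -cp.sum(cp.entr(d)) + np.log(N)
    constraints = [
        U @ d <= gamma,     # gamma dominates the edge of all t hypotheses
        cp.sum(d) == 1,     # d lies in the probability simplex
        d >= 0,
        d <= 1.0 / nu,      # soft-margin capping of the example weights
    ]
    problem = cp.Problem(cp.Minimize(gamma + rel_entropy / eta), constraints)
    problem.solve()
    return d.value, gamma.value
```

For example, with a random matrix of ±1 margins:

```python
rng = np.random.default_rng(0)
U = np.sign(rng.standard_normal((3, 20)))  # 3 hypotheses on 20 examples
d, gamma = erlpboost_update(U, nu=4, eta=10.0)
```

As eta grows, the entropy term vanishes and the update approaches the LPBoost linear program; the regularizer keeps the distribution unique and stable across iterations, which underlies the logarithmic iteration bound proved in the paper.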
Added: 14 Mar 2010
Updated: 14 Mar 2010
Type: Conference
Year: 2008
Where: ALT
Authors: Manfred K. Warmuth, Karen A. Glocer, S. V. N. Vishwanathan