Generalized Boosting Algorithms for Convex Optimization

Boosting is a popular way to derive powerful learners from simpler hypothesis classes. Following previous work on general boosting frameworks (Mason et al., 1999; Friedman, 2000), we analyze gradient-based descent algorithms for boosting with respect to any convex objective and introduce a new measure of weak learner performance that generalizes existing notions. We present the first weak-to-strong learning guarantees for existing gradient boosting methods on smooth convex objectives, and show that these methods fail on nonsmooth objectives. To address this, we present new algorithms that extend the boosting approach to arbitrary convex loss functions, together with corresponding weak-to-strong convergence guarantees. Finally, we report experimental results that support our analysis and illustrate the need for the new algorithms.
Alexander Grubb, J. Andrew Bagnell
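
For context, below is a minimal sketch of the functional gradient boosting recipe the abstract builds on (in the spirit of Mason et al., 1999 and Friedman, 2000), not the paper's own algorithm: each round fits a weak learner to the negative functional gradient of a smooth convex loss and takes a small step in that direction. The logistic loss, decision-stump weak learner, and fixed step size are illustrative assumptions.

# Sketch of functional gradient boosting for a smooth convex loss
# (logistic loss on labels y in {-1, +1}); illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost(X, y, n_rounds=50, step=0.1):
    F = np.zeros(len(y))              # current ensemble prediction F(x_i)
    learners = []
    for _ in range(n_rounds):
        # Negative functional gradient of L(F) = sum_i log(1 + exp(-y_i F_i)).
        residual = y / (1.0 + np.exp(y * F))
        # Weak learning step: fit a stump to approximate the negative gradient.
        h = DecisionTreeRegressor(max_depth=1).fit(X, residual)
        F += step * h.predict(X)      # descend in function space
        learners.append(h)
    return learners

def predict(learners, X, step=0.1):
    return np.sign(sum(step * h.predict(X) for h in learners))

For a nonsmooth convex loss such as the hinge loss, the gradient above is undefined at the kinks, which is the failure mode of existing methods that the abstract's new algorithms are designed to handle.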
Added: 19 Aug 2011
Updated: 19 Aug 2011
Type: Journal
Year: 2011
Where: CoRR