Boosting is a popular way to derive powerful learners from simpler hypothesis classes. Following previous work (Mason et al., 1999; Friedman, 2000) on general boosting frameworks,...
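To make the functional-gradient view of Mason et al. (1999) and Friedman (2000) concrete, here is a minimal sketch: each round fits a weak learner to the negative gradient of the loss and takes a small step in function space. The squared loss, depth-1 regression trees, synthetic data, and step size below are illustrative assumptions, not details taken from the abstract above.

```python
# Boosting as gradient descent in function space: fit each weak learner
# to the negative gradient (here, the residual of the squared loss) and
# add a shrunken copy to the ensemble.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

F = np.zeros_like(y)          # current ensemble prediction F(x)
nu, rounds = 0.1, 100         # learning rate and number of boosting rounds
ensemble = []
for _ in range(rounds):
    residual = y - F          # negative gradient of (1/2)(y - F)^2 w.r.t. F
    stump = DecisionTreeRegressor(max_depth=1).fit(X, residual)
    ensemble.append(stump)
    F += nu * stump.predict(X)   # gradient step in function space

print("training MSE:", np.mean((y - F) ** 2))
```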
Scaling discrete AdaBoost to handle real-valued weak hypotheses has often been done within the framework of convex optimization, but little is generally known from the original boost...
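One common convex-optimization reading is that discrete AdaBoost performs coordinate descent on the exponential loss, with the familiar step size alpha_t = (1/2) log((1 - err_t)/err_t) arising as the exact line-search minimizer along the chosen coordinate. Below is a hedged sketch under that reading; the stumps, dataset, and round count are assumptions for illustration.

```python
# Discrete AdaBoost viewed as coordinate descent on the exponential loss:
# the closed-form alpha_t minimizes sum_i exp(-y_i F(x_i)) along the
# direction of the newly fitted weak hypothesis.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

X, y01 = make_classification(n_samples=300, random_state=0)
y = 2 * y01 - 1                      # labels in {-1, +1}
w = np.full(len(y), 1 / len(y))      # example weights
hyps, alphas = [], []
for _ in range(50):
    h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = h.predict(X)
    err = np.clip(w @ (pred != y), 1e-10, 1 - 1e-10)  # weighted error
    alpha = 0.5 * np.log((1 - err) / err)  # optimal step on the exp-loss
    w *= np.exp(-alpha * y * pred)         # reweight: up-weight mistakes
    w /= w.sum()
    hyps.append(h)
    alphas.append(alpha)

F = sum(a * h.predict(X) for a, h in zip(alphas, hyps))
print("training accuracy:", np.mean(np.sign(F) == y))
```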
Surrogate maximization (or minimization) (SM) algorithms are a family of methods that can be regarded as a generalization of expectation-maximization (EM) algorithms. There are...
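A minimal sketch of the surrogate idea: at each iterate, minimize a surrogate that upper-bounds the objective and touches it at the current point, which guarantees the objective decreases monotonically. The quadratic Lipschitz surrogate and the toy objective below are illustrative assumptions; EM is the special case where the surrogate comes from Jensen's inequality applied to the log-likelihood.

```python
# Surrogate minimization with the Lipschitz quadratic surrogate
#   g(x | x_t) = f(x_t) + f'(x_t)(x - x_t) + (L/2)(x - x_t)^2,
# which upper-bounds f and touches it at x_t. Its minimizer is a gradient
# step, so even plain gradient descent is an SM algorithm.
import numpy as np

f = lambda x: np.log(1 + np.exp(-x)) + 0.5 * x**2   # smooth convex objective
fp = lambda x: -1 / (1 + np.exp(x)) + x             # its derivative
L = 1.25                                            # bound on f'' (1/4 + 1)

x = 5.0
for _ in range(50):
    x = x - fp(x) / L          # exact minimizer of the surrogate at x_t
print("approx minimizer:", x, "f(x) =", f(x))
```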
We give a review of various aspects of boosting, clarifying the issues through a few simple results, and relate our work and that of others to the minimax paradigm of statistics. ...
We generalize the primal-dual hybrid gradient (PDHG) algorithm proposed by Zhu and Chan in [M. Zhu and T. F. Chan, An Efficient Primal-Dual Hybrid Gradient Algorithm for Total Var...
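For reference, here is a hedged sketch of the Zhu-Chan PDHG iteration on the total-variation denoising (ROF) model it was proposed for, written as the saddle problem min_u max_{|p|<=1} <Du, p> + (lam/2)||u - f||^2 and alternating a projected dual ascent step with a proximal primal descent step. The 1-D signal, step sizes, and fidelity weight are illustrative assumptions.

```python
# PDHG sketch for 1-D TV denoising: min_u ||Du||_1 + (lam/2)||u - f||^2.
# Each sweep takes a dual ascent step in p (projected onto [-1, 1]) and
# a proximal primal descent step in u, solved in closed form.
import numpy as np

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.3, 0.8], 50)      # piecewise-constant signal
f = clean + 0.1 * rng.normal(size=clean.size)    # noisy observation

D = lambda u: np.diff(u, append=u[-1])           # forward differences
DT = lambda p: -np.diff(p, prepend=0.0)          # adjoint (negative divergence)

lam, sigma, tau = 8.0, 0.5, 0.25                 # fidelity weight, step sizes
u, p = f.copy(), np.zeros_like(f)
for _ in range(300):
    p = np.clip(p + sigma * D(u), -1.0, 1.0)     # dual ascent + projection
    u = (u - tau * DT(p) + tau * lam * f) / (1 + tau * lam)  # primal prox step

print("MSE vs clean signal:", np.mean((u - clean) ** 2))
```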