Soft-max boosting

The standard multi-class classification risk, based on the binary loss, is rarely minimized directly, due to (i) its lack of convexity and (ii) its lack of smoothness (and even of continuity). The classic approach consists in minimizing a convex surrogate instead. In this paper, we propose to replace the usual deterministic decision rule by a stochastic one, which yields a smooth risk (generalizing the expected binary loss and, more generally, the cost-sensitive loss). In practice, this (empirical) risk is minimized by gradient descent in the function space linearly spanned by a base learner (i.e., boosting). We provide a convergence analysis of the resulting algorithm and evaluate it on synthetic and real-world data sets (noiseless and noisy domains), comparing it against convex and non-convex boosters.
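
To make the mechanism concrete, here is a minimal sketch, not the paper's exact algorithm: the predicted class is drawn from a soft-max over per-class scores, so the expected binary loss becomes a smooth function of the scores, and each boosting round fits one regression tree per class to the negative functional gradient of that smooth risk. The base learner (scikit-learn regression trees), the fixed step size, and names such as softmax_boost are illustrative assumptions, not the paper's implementation.

# Illustrative sketch of soft-max boosting as summarized above: the stochastic
# decision rule samples the class from a softmax over the scores F(x) =
# (F_1(x), ..., F_K(x)), so the expected binary loss is smooth in F, and
# boosting performs functional gradient descent with regression trees.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def softmax(scores):
    # Row-wise softmax: class-prediction probabilities of the stochastic rule.
    z = scores - scores.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def smooth_risk(scores, y):
    # Expected binary loss: 1 - P(the sampled prediction equals the true class).
    p = softmax(scores)
    return 1.0 - p[np.arange(len(y)), y].mean()

def softmax_boost(X, y, n_classes, n_rounds=100, lr=0.5, depth=3):
    # Each round fits one regression tree per class to the negative functional
    # gradient of the smooth risk with respect to the class scores.
    n = len(y)
    scores = np.zeros((n, n_classes))
    onehot = np.eye(n_classes)[y]
    ensemble = []
    for _ in range(n_rounds):
        p = softmax(scores)
        p_true = p[np.arange(n), y][:, None]
        # d(1 - p_y)/dF_k = p_y * (p_k - 1{k = y}); descend along the negative.
        neg_grad = p_true * (onehot - p)
        round_trees = []
        for k in range(n_classes):
            tree = DecisionTreeRegressor(max_depth=depth).fit(X, neg_grad[:, k])
            scores[:, k] += lr * tree.predict(X)
            round_trees.append(tree)
        ensemble.append(round_trees)
    return ensemble

def predict(ensemble, X, n_classes, lr=0.5):
    # Accumulate the boosted scores; at test time one can either sample from
    # the softmax (stochastic rule) or take the argmax (deterministic rule).
    scores = np.zeros((len(X), n_classes))
    for round_trees in ensemble:
        for k, tree in enumerate(round_trees):
            scores[:, k] += lr * tree.predict(X)
    return scores.argmax(axis=1)

Usage, assuming X is a NumPy feature matrix and y holds integer labels in {0, ..., n_classes - 1}: ensemble = softmax_boost(X, y, n_classes=3) followed by y_hat = predict(ensemble, X, n_classes=3).
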
Matthieu Geist
Added 14 Apr 2016
Updated 14 Apr 2016
Type Journal
Year 2015
Where ML
Authors Matthieu Geist