Sciweavers


Beyond Logarithmic Bounds in Online Learning

We prove logarithmic regret bounds that depend on the loss L∗_T of the competitor rather than on the number T of time steps. In the general online convex optimization setting, our bounds hold for any smooth and exp-concave loss (such as the square loss or the logistic loss). This bridges the gap between the O(ln T) regret exhibited by exp-concave losses and the O(√L∗_T) regret exhibited by smooth losses. We also show that these bounds are tight for specific losses, and thus cannot be improved in general. For online regression with the square loss, our analysis can be used to derive a sparse randomized variant of the online Newton step, whose expected number of updates scales with the algorithm's loss. For online classification, we prove the first logarithmic mistake bounds that do not rely on prior knowledge of a bound on the competitor's norm.
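To make the online Newton step referenced above concrete, here is a minimal NumPy sketch for online regression with the square loss: the learner maintains a regularized matrix of gradient outer products and takes preconditioned (Newton-style) steps. The `update_prob` rule, the hyperparameters `eps` and `gamma`, and the synthetic data stream are all illustrative assumptions; in particular, the loss-driven skip rule only sketches how an expected number of updates can scale with the algorithm's loss, and is not the paper's exact randomized variant. The standard projection step is also omitted.

```python
# Minimal sketch of an online-Newton-step-style learner for online
# regression with the square loss, in the spirit of the abstract above.
# The `update_prob` rule is a hypothetical illustration of randomized
# sparsification (skip updates when the loss is small, so the expected
# number of updates scales with the algorithm's loss); it is NOT the
# paper's exact rule, and the usual projection step is omitted.
import numpy as np

rng = np.random.default_rng(0)

d, T = 5, 1000
eps, gamma = 1.0, 0.5        # illustrative hyperparameters
w = np.zeros(d)
A = eps * np.eye(d)          # regularized gradient outer-product matrix
updates = 0

for t in range(T):
    x = rng.standard_normal(d)                        # feature vector
    y = x @ np.ones(d) + 0.1 * rng.standard_normal()  # noisy target
    y_hat = w @ x
    loss = (y_hat - y) ** 2                           # square loss
    g = 2.0 * (y_hat - y) * x                         # its gradient in w

    update_prob = min(1.0, loss)  # hypothetical loss-driven skip rule
    if rng.random() < update_prob:
        A += np.outer(g, g)                         # rank-one update
        w -= (1.0 / gamma) * np.linalg.solve(A, g)  # Newton-style step
        updates += 1

print(f"updates performed: {updates} out of {T} rounds")
```

The second-order preconditioning by A is what gives the online Newton step its O(ln T) regret for exp-concave losses; the randomized skipping keeps the variant sparse when the incurred loss is small.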
Added 27 Sep 2012
Updated 27 Sep 2012
Type Journal
Year 2012
Where JMLR
Authors Francesco Orabona, Nicolò Cesa-Bianchi, Claudio Gentile