COLT 2010, Springer

Regret Minimization With Concept Drift

In standard online learning, the goal of the learner is to maintain an average loss that is "not too big" compared to the loss of the best-performing function in a fixed class. Classic results on no-regret learning show that simple algorithms can achieve an average loss arbitrarily close to that of the best function in retrospect, even when input and output pairs are chosen in a fully adversarial manner. However, in many real-world applications, competing with the best fixed function is not good enough. In particular, in applications such as spam prediction and classification of news articles, the best target function may be drifting over time. We introduce a novel model of concept drift in which an adversary is given control of both the distribution over inputs at each time step and the corresponding labels. The goal of the learner is to maintain an average loss close to that of the best slowly changing sequence of functions in retrospect. We provide tight upper and lower bo...
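To make the standard (non-drifting) setting the abstract starts from concrete, the classic no-regret result can be illustrated with the exponentially weighted forecaster (Hedge) over a finite class of experts. This is a generic textbook sketch, not the algorithm of the paper; the learning rate `eta` and the assumption that losses lie in [0, 1] are choices made here for illustration:

```python
import math

def hedge(losses, eta):
    """Exponentially weighted forecaster over K experts.

    losses: T x K matrix, losses[t][k] is expert k's loss in round t,
            assumed to lie in [0, 1].
    eta:    learning rate (assumed positive; tuning it trades off the
            two terms of the standard regret bound).
    Returns the forecaster's cumulative expected loss and the cumulative
    loss of the best fixed expert in hindsight.
    """
    K = len(losses[0])
    weights = [1.0] * K          # uniform prior over experts
    alg_loss = 0.0
    for round_losses in losses:
        total = sum(weights)
        probs = [w / total for w in weights]
        # Expected loss of the randomized forecaster this round.
        alg_loss += sum(p * l for p, l in zip(probs, round_losses))
        # Multiplicative update: down-weight experts that did badly.
        weights = [w * math.exp(-eta * l)
                   for w, l in zip(weights, round_losses)]
    best_fixed = min(sum(col) for col in zip(*losses))
    return alg_loss, best_fixed
```

Against any loss sequence, the gap between the two returned quantities (the regret) grows only sublinearly in the number of rounds, which is the sense in which the average loss gets "arbitrarily close" to that of the best fixed function. The paper's model replaces the fixed comparator `best_fixed` with the best slowly changing sequence of functions.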
Added 10 Feb 2011
Updated 10 Feb 2011
Type Conference
Year 2010
Where COLT
Authors Koby Crammer, Yishay Mansour, Eyal Even-Dar, Jennifer Wortman Vaughan