Linear Classification and Selective Sampling Under Low Noise Conditions

We provide a new analysis of an efficient margin-based algorithm for selective sampling in classification problems. Using the so-called Tsybakov low noise condition to parametrize the instance distribution, we show bounds on the convergence rate to the Bayes risk of both the fully supervised and the selective sampling versions of the basic algorithm. Our analysis reveals that, excluding logarithmic factors, the average risk of the selective sampler converges to the Bayes risk at rate $N^{-(1+\alpha)(2+\alpha)/(2(3+\alpha))}$, where $N$ denotes the number of queried labels and $\alpha > 0$ is the exponent in the low noise condition. For all $\alpha > \sqrt{3} - 1 \approx 0.73$ this convergence rate is asymptotically faster than the rate $N^{-(1+\alpha)/(2+\alpha)}$ achieved by the fully supervised version of the same classifier, which queries all labels, and for $\alpha \to \infty$ the two rates exhibit an exponential gap. Experiments on textual data reveal that simple variants of the proposed selective sampler perform much better than popular and similarly efficient competitors.
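
The threshold $\sqrt{3} - 1$ quoted in the abstract can be checked directly by comparing the two rate exponents; a short verification, using only the quantities defined above (divide both sides by $1+\alpha > 0$ and cross-multiply the positive denominators):

$$\frac{(1+\alpha)(2+\alpha)}{2(3+\alpha)} > \frac{1+\alpha}{2+\alpha}
\;\Longleftrightarrow\; (2+\alpha)^2 > 2(3+\alpha)
\;\Longleftrightarrow\; \alpha^2 + 2\alpha - 2 > 0
\;\Longleftrightarrow\; \alpha > \sqrt{3} - 1 \approx 0.73.$$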
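Since the entry describes the algorithm only at a high level, here is a minimal Python sketch of the generic margin-based selective sampling idea with a regularized least-squares base learner. The `threshold_scale / sqrt(queried + 1)` query rule, the `stream` of `(x, label_oracle)` pairs, and all names are illustrative assumptions, not the exact data-dependent threshold analyzed in the paper.

```python
import numpy as np

def selective_sampling_rls(stream, d, threshold_scale=1.0, reg=1.0):
    """Margin-based selective sampling with an RLS base learner (sketch).

    NOTE: the 1/sqrt(#queries) threshold decay is an illustrative
    assumption, not the data-dependent rule analyzed in the paper.
    """
    A_inv = np.eye(d) / reg  # inverse of the regularized correlation matrix
    b = np.zeros(d)          # running sum of y * x over queried examples
    queried = 0
    for x, label_oracle in stream:
        margin = float((A_inv @ b) @ x)   # signed RLS margin on instance x
        yhat = 1 if margin >= 0 else -1   # always predict, query selectively
        # Query the label only when the margin is small (prediction uncertain).
        if abs(margin) <= threshold_scale / np.sqrt(queried + 1):
            y = label_oracle()            # pay for exactly one label
            queried += 1
            # Sherman-Morrison rank-one update: (A + x x^T)^{-1} from A^{-1}.
            Ax = A_inv @ x
            A_inv -= np.outer(Ax, Ax) / (1.0 + float(x @ Ax))
            b += y * x
        yield yhat, queried

# Toy usage on a noisy linear stream (hypothetical data, for illustration).
rng = np.random.default_rng(0)
w_true = rng.normal(size=20)

def toy_stream(n):
    for _ in range(n):
        x = rng.normal(size=20)
        yield x, (lambda x=x: 1 if float(x @ w_true) + 0.1 * rng.normal() >= 0 else -1)

last_pred, n_queried = list(selective_sampling_rls(toy_stream(1000), d=20))[-1]
print(f"queried {n_queried} labels out of 1000")
```

The point of the sketch is the separation the abstract relies on: the sampler predicts on every instance but pays for a label only when the margin is uncertain, so its risk is measured against the number of queried labels $N$ rather than the stream length.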
Type: Conference
Year: 2008
Where: NIPS
Authors: Giovanni Cavallanti, Nicolò Cesa-Bianchi, Claudio Gentile