ICDM 2010 (IEEE)

Consequences of Variability in Classifier Performance Estimates

The prevailing approach to evaluating classifiers in the machine learning community involves comparing the performance of several algorithms over a series of usually unrelated data sets. Beyond this, however, there are many dimensions along which methodologies vary widely. We show that, depending on the stability and similarity of the algorithms being compared, these sometimes-arbitrary methodological choices can have a significant impact on the conclusions of any study, including the results of statistical tests. In particular, we show that the choice of performance metrics and data sets, the type of cross-validation employed, and the number of cross-validation iterations run each have a significant, and often predictable, effect. Based on these results, we offer a series of recommendations for achieving consistent, reproducible results in classifier performance comparisons.
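The effect described above can be illustrated with a small sketch: repeated k-fold cross-validation on a toy dataset, where each repetition reshuffles the fold assignment and therefore yields a slightly different pooled accuracy estimate. The synthetic two-Gaussian data and the nearest-class-mean classifier are assumptions for demonstration only, not the paper's experimental setup.

```python
# Sketch: variability of repeated k-fold cross-validation estimates.
# Synthetic data and classifier are illustrative assumptions.
import random
import statistics

def make_data(n=200, seed=0):
    """Two one-dimensional Gaussian classes (synthetic)."""
    rng = random.Random(seed)
    data = [(rng.gauss(0.0, 1.0), 0) for _ in range(n // 2)]
    data += [(rng.gauss(1.5, 1.0), 1) for _ in range(n // 2)]
    return data

def kfold_accuracy(data, k, rng):
    """One run of k-fold CV with a nearest-class-mean classifier,
    returning accuracy pooled over all k test folds."""
    data = data[:]
    rng.shuffle(data)                       # fold assignment differs per run
    folds = [data[i::k] for i in range(k)]
    correct = total = 0
    for i in range(k):
        test = folds[i]
        train = [p for j in range(k) if j != i for p in folds[j]]
        means = {}
        for c in (0, 1):
            xs = [x for x, y in train if y == c]
            means[c] = sum(xs) / len(xs)    # class mean from training folds
        for x, y in test:
            pred = min(means, key=lambda c: abs(x - means[c]))
            correct += (pred == y)
            total += 1
    return correct / total

data = make_data()
rng = random.Random(42)
accs = [kfold_accuracy(data, k=10, rng=rng) for _ in range(30)]
print(f"mean accuracy {statistics.mean(accs):.3f}, "
      f"std across repetitions {statistics.stdev(accs):.4f}")
```

The spread of `accs` across repetitions is exactly the kind of estimate variability the paper argues can change the outcome of algorithm comparisons; averaging over more repetitions narrows it.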
Troy Raeder, T. Ryan Hoens, Nitesh V. Chawla
Added: 12 Feb 2011
Updated: 12 Feb 2011
Type: Conference
Year: 2010
Where: ICDM
Authors: Troy Raeder, T. Ryan Hoens, Nitesh V. Chawla