Completely Lazy Learning

Local classifiers are sometimes called lazy learners because they do not train a classifier until presented with a test sample. However, such methods are generally not completely lazy: the neighborhood size k (or other locality parameter) is usually chosen by cross-validation on the training set, which can require significant preprocessing and risks overfitting. We propose a simple alternative to cross-validating the neighborhood size that requires no preprocessing: instead of committing to one neighborhood size, average the discriminants for multiple neighborhoods. We show that this forms an expected estimated posterior that minimizes the expected Bregman loss with respect to the uncertainty about the neighborhood choice. We analyze this approach for six standard and state-of-the-art local classifiers, including discriminative adaptive metric kNN (DANN), a local support vector machine (SVM-KNN), hyperplane distance nearest-neighbor (HKNN), and a new local Bayesian q...
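To make the averaging idea concrete, here is a minimal Python sketch, assuming a plain kNN class-frequency estimate as the local discriminant. The function names (knn_posterior, lazy_averaged_posterior) and the uniform weighting over k are illustrative assumptions, not the paper's exact formulation, which applies the averaging to six different local classifiers.

```python
import numpy as np

def knn_posterior(X_train, y_train, x, k, n_classes):
    """Estimate class posteriors at x from the k nearest training samples.
    Illustrative stand-in for the local discriminants studied in the paper."""
    dists = np.linalg.norm(X_train - x, axis=1)
    neighbors = y_train[np.argsort(dists)[:k]]
    counts = np.bincount(neighbors, minlength=n_classes)
    return counts / k

def lazy_averaged_posterior(X_train, y_train, x, ks, n_classes):
    """Average the estimated posteriors over several neighborhood sizes
    instead of committing to one cross-validated k. A uniform average is
    an assumption here; it corresponds to uniform uncertainty over k."""
    posteriors = [knn_posterior(X_train, y_train, x, k, n_classes) for k in ks]
    return np.mean(posteriors, axis=0)

# Usage: classify by the class with the highest averaged posterior.
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])
p = lazy_averaged_posterior(X_train, y_train, np.array([0.8, 0.9]),
                            ks=[1, 3], n_classes=2)
print(p.argmax())  # predicted class
```

Because no parameter is tuned on the training set, the method stays fully lazy: all work happens at query time, at the cost of evaluating the local classifier once per neighborhood size.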
Eric K. Garcia, Sergey Feldman, Maya R. Gupta, Santosh Srivastava
Added 31 Jan 2011
Updated 31 Jan 2011
Type Journal
Year 2010
Where TKDE
Authors Eric K. Garcia, Sergey Feldman, Maya R. Gupta, Santosh Srivastava