ICPR 2004, IEEE

Nearest Neighbor Ensemble

Recent empirical work has shown that combining predictors can lead to significant reduction in generalization error. The individual predictors (weak learners) can be very simple, such as two terminal-node trees; it is the aggregating scheme that gives them the power of increasing prediction accuracy. Unfortunately, many combining methods do not improve nearest neighbor (NN) classifiers at all. This is because NN methods are very robust with respect to variations of a data set. In contrast, they are sensitive to input features. We exploit the instability of NN classifiers with respect to different choices of features to generate an effective and diverse set of NN classifiers with possibly uncorrelated errors. Interestingly, the approach takes advantage of the high dimensionality of the data. The experimental results show that our technique offers significant performance improvements with respect to competitive methods.
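The abstract's core idea, building a diverse NN ensemble by training each member on a different subset of the input features and combining them by vote, can be sketched as follows. This is a generic random-subspace illustration, not the authors' exact sampling or weighting scheme; the function name, the 1-NN base learner, and the uniform feature sampling are assumptions for the sketch.

```python
import numpy as np

def random_subspace_nn(X_train, y_train, X_test, n_members=15,
                       subspace_frac=0.5, rng=None):
    """Majority-vote ensemble of 1-NN classifiers, each restricted to a
    random subset of the features (hypothetical sketch of the idea)."""
    rng = np.random.default_rng(rng)
    n_features = X_train.shape[1]
    k = max(1, int(subspace_frac * n_features))
    votes = []
    for _ in range(n_members):
        # sample a random feature subspace for this ensemble member
        feats = rng.choice(n_features, size=k, replace=False)
        # 1-NN prediction using Euclidean distance in the subspace only
        d = np.linalg.norm(X_test[:, None, feats] - X_train[None, :, feats],
                           axis=2)
        votes.append(y_train[np.argmin(d, axis=1)])
    votes = np.asarray(votes)          # shape: (n_members, n_test)
    # majority vote across ensemble members for each test point
    return np.array([np.bincount(col).argmax() for col in votes.T])
```

Because each member sees different features, the members make partially uncorrelated errors, which is exactly the instability the abstract exploits; in high-dimensional data there are many distinct subspaces to sample from, so diversity is easy to obtain.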
Bojun Yan, Carlotta Domeniconi
Added 09 Nov 2009
Updated 09 Nov 2009
Type Conference
Year 2004
Where ICPR
Authors Bojun Yan, Carlotta Domeniconi