Randomizing Outputs to Increase Prediction Accuracy

Bagging and boosting reduce error by changing both the inputs and outputs to form perturbed training sets, growing predictors on these perturbed training sets, and combining them. A frequently asked question is whether comparable performance can be obtained by perturbing the outputs alone. Two methods of randomizing outputs are examined: one called output smearing and the other output flipping. Both are shown to consistently do better than bagging.
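The abstract names the two techniques without algorithmic detail, so the sketch below fills in assumptions: a decision-tree base learner, a fixed label-flip rate for "output flipping" (classification), and Gaussian noise scaled to the spread of the targets for "output smearing" (regression). These parameter choices are illustrative, not Breiman's exact settings.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

def flipped_ensemble(X, y, n_estimators=25, flip_rate=0.1, seed=0):
    """Output flipping (classification): grow each tree on a copy of the
    data in which a random fraction of the labels has been changed."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    trees = []
    for _ in range(n_estimators):
        y_flip = y.copy()
        mask = rng.random(len(y)) < flip_rate
        # flipped labels are redrawn uniformly from the class set
        y_flip[mask] = rng.choice(classes, size=int(mask.sum()))
        trees.append(DecisionTreeClassifier(random_state=0).fit(X, y_flip))
    return trees

def smeared_ensemble(X, y, n_estimators=25, noise_scale=0.5, seed=0):
    """Output smearing (regression): grow each tree on outputs perturbed
    by Gaussian noise scaled to the standard deviation of y."""
    rng = np.random.default_rng(seed)
    sigma = noise_scale * np.std(y)
    return [
        DecisionTreeRegressor(random_state=0).fit(
            X, y + rng.normal(scale=sigma, size=len(y))
        )
        for _ in range(n_estimators)
    ]

def vote(trees, X):
    """Combine a classification ensemble by unweighted majority vote."""
    votes = np.stack([t.predict(X) for t in trees]).astype(int)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```

Note that, unlike bagging, every tree sees the full, unresampled input matrix; only the outputs differ across the ensemble. A smeared regression ensemble is combined by averaging the individual trees' predictions.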
Leo Breiman
Added: 19 Dec 2010
Updated: 19 Dec 2010
Type: Journal
Year: 2000
Where: ML
Authors: Leo Breiman