SODA 2000, ACM

Improved bounds on the sample complexity of learning

We present a new general upper bound on the number of examples required to estimate all of the expectations of a set of random variables uniformly well. The quality of the estimates is measured using a variant of the relative error proposed by Haussler and Pollard. We also show that our bound is within a constant factor of the best possible. Our upper bound implies improved bounds on the sample complexity of learning according to Haussler's decision-theoretic model. A preliminary version of this work appeared in the Proceedings of the Eleventh Annual ACM-SIAM Symposium on Discrete Algorithms, 2000. Part of this work was done while this author was at the School of Computing of the National University of Singapore.
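To make the estimation task concrete, the following is a minimal sketch of what "estimating an expectation well under a relative-error measure" can look like. The damped relative error `d_nu(r, s) = |r - s| / (nu + r + s)` used here is one common form of the Haussler-Pollard-style relative error; the function names, the Bernoulli example, and the specific parameter values are illustrative assumptions, not details taken from the paper.

```python
import random

def d_nu(r, s, nu):
    """Damped relative error between nonnegative quantities r and s.
    The additive nu > 0 keeps the ratio bounded when both values are small.
    (Illustrative form of a Haussler-Pollard-style relative error.)"""
    return abs(r - s) / (nu + r + s)

def estimate_mean(p, m, rng):
    """Empirical mean of m i.i.d. Bernoulli(p) samples --
    the simplest instance of estimating an expectation from examples."""
    return sum(1 if rng.random() < p else 0 for _ in range(m)) / m

rng = random.Random(0)
p = 0.3          # true expectation (hypothetical example value)
m = 10_000       # sample size
est = estimate_mean(p, m, rng)
err = d_nu(p, est, nu=0.1)
```

The sample-complexity question the paper addresses is, roughly, how large `m` must be so that estimates like `est` are simultaneously close to the true expectations, in this relative sense, for every random variable in a given class.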
Yi Li, Philip M. Long, Aravind Srinivasan
Type: Conference