Sciweavers

180 search results - page 7 / 36
» Improved bounds on the sample complexity of learning
COLT 2000 (Springer)
PAC Analogues of Perceptron and Winnow via Boosting the Margin
We describe a novel family of PAC model algorithms for learning linear threshold functions. The new algorithms work by boosting a simple weak learner and exhibit complexity bounds...
Rocco A. Servedio
JMLR 2006
On the Complexity of Learning Lexicographic Strategies
Fast and frugal heuristics are well studied models of bounded rationality. Psychological research has proposed the take-the-best heuristic as a successful strategy in decision mak...
Michael Schmitt, Laura Martignon
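The take-the-best heuristic named in this abstract is itself a lexicographic strategy: cues are inspected in decreasing order of validity, and the first cue that discriminates between the two objects decides. A minimal sketch of that decision rule, assuming binary cues stored in dictionaries (the function name and the example data are illustrative, not taken from the paper):

    def take_the_best(cue_validities, cues_a, cues_b):
        # Inspect cues in decreasing order of validity; decide on the first
        # cue whose values differ between the two objects.
        for cue in sorted(cue_validities, key=cue_validities.get, reverse=True):
            if cues_a[cue] != cues_b[cue]:
                return "A" if cues_a[cue] > cues_b[cue] else "B"
        return None  # no cue discriminates; the heuristic would guess at random

    # Hypothetical example: which of two cities is larger, given three binary cues.
    validities = {"has_airport": 0.9, "is_capital": 0.8, "has_university": 0.6}
    print(take_the_best(validities,
                        cues_a={"has_airport": 1, "is_capital": 0, "has_university": 1},
                        cues_b={"has_airport": 1, "is_capital": 1, "has_university": 0}))
    # -> "B": the highest-validity cue ties, so the next cue (is_capital) decides.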
ALT 2010 (Springer)
Recursive Teaching Dimension, Learning Complexity, and Maximum Classes
This paper is concerned with the combinatorial structure of concept classes that can be learned from a small number of examples. We show that the recently introduced notion of recu...
Thorsten Doliwa, Hans-Ulrich Simon, Sandra Zilles
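For readers unfamiliar with the notion, the recursive teaching dimension can be recalled roughly as follows (a standard definition stated from memory, not quoted from this abstract). Writing $\mathrm{TD}(c, \mathcal{C})$ for the size of a smallest teaching set of a concept $c$ with respect to a class $\mathcal{C}$ (the minimum number of labeled examples consistent with $c$ but with no other concept in $\mathcal{C}$), set $\mathcal{C}_0 = \mathcal{C}$ and

    \[
      \mathcal{C}_{i+1} \;=\; \mathcal{C}_i \setminus
        \{\, c \in \mathcal{C}_i : \mathrm{TD}(c, \mathcal{C}_i)
             = \min_{c' \in \mathcal{C}_i} \mathrm{TD}(c', \mathcal{C}_i) \,\},
      \qquad
      \mathrm{RTD}(\mathcal{C}) \;=\; \max_i \; \min_{c \in \mathcal{C}_i} \mathrm{TD}(c, \mathcal{C}_i).
    \]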
ICML 2008
Empirical Bernstein stopping
Sampling is a popular way of scaling up machine learning algorithms to large datasets. The question often is how many samples are needed. Adaptive stopping algorithms monitor the ...
Csaba Szepesvári, Jean-Yves Audibert, Volod...
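The stopping rules in this line of work decide when the running mean of bounded i.i.d. samples is accurate enough, using an empirical Bernstein confidence interval. A minimal sketch of that idea, assuming samples in [0, value_range] and an absolute-precision target (the function names are mine; the paper's EBStop/EBGStop algorithms additionally use relative error, geometric batching, and a proper split of delta across the repeated checks):

    import math
    import random
    import statistics

    def eb_halfwidth(samples, delta, value_range):
        # Empirical Bernstein half-width for the mean of i.i.d. samples in
        # [0, value_range]: sd*sqrt(2*log(3/delta)/t) + 3*R*log(3/delta)/t.
        t = len(samples)
        sd = statistics.pstdev(samples)
        log_term = math.log(3.0 / delta)
        return sd * math.sqrt(2.0 * log_term / t) + 3.0 * value_range * log_term / t

    def eb_stop(sample_fn, eps, delta=0.05, value_range=1.0, max_samples=100_000):
        # Keep sampling until the half-width drops below eps, then return the mean.
        samples = [sample_fn(), sample_fn()]  # at least two samples for a variance
        while len(samples) < max_samples and eb_halfwidth(samples, delta, value_range) > eps:
            samples.append(sample_fn())
        return sum(samples) / len(samples)

    print(eb_stop(lambda: random.random(), eps=0.02))  # estimates E[U(0,1)] = 0.5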
CORR 2010
Robustness and Generalization
We derive generalization bounds for learning algorithms based on their robustness: the property that if a testing sample is "similar" to a training sample, then the test...
Huan Xu, Shie Mannor
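As context for the robustness-based approach (recalled from memory, so treat the exact constants as an assumption rather than a quotation of the paper): an algorithm trained on a sample $s$ of size $n$ is called $(K, \epsilon(s))$-robust if the sample space can be partitioned into $K$ sets such that whenever a training point and a test point fall into the same set, their losses under the learned hypothesis differ by at most $\epsilon(s)$. The resulting generalization bound then has roughly the form

    \[
      \bigl| L(\mathcal{A}_s) - \hat{L}_{\mathrm{emp}}(\mathcal{A}_s) \bigr|
      \;\le\; \epsilon(s) + M \sqrt{\frac{2K \ln 2 + 2 \ln(1/\delta)}{n}},
    \]

holding with probability at least $1-\delta$, where $M$ is an upper bound on the loss.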