Sciweavers

Search query: On regularization algorithms in learning theory

AAAI 2010
G-Optimal Design with Laplacian Regularization
In many real-world applications, labeled data are expensive to obtain, while a large amount of unlabeled data may be available. To reduce the labeling cost, active learning attem...
Chun Chen, Zhengguang Chen, Jiajun Bu, Can Wang, L...
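The entry above describes selecting points to label via G-optimal experimental design with a graph-Laplacian regularizer. A toy sketch of one plausible formulation (not the authors' algorithm): greedily pick points that minimize the worst-case predictive variance x^T C^{-1} x, where C is the design matrix of the chosen points plus a Laplacian regularization term. The Gaussian kernel width, the greedy strategy, and the constants are all assumptions for illustration.

```python
import math

def gaussian_laplacian(X, sigma=1.0):
    """Graph Laplacian L = D - W with Gaussian edge weights (assumed kernel)."""
    n = len(X)
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                d2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                W[i][j] = math.exp(-d2 / (2 * sigma ** 2))
    return [[(sum(W[i]) if i == j else 0.0) - W[i][j] for j in range(n)]
            for i in range(n)]

def inv2(M):
    """Inverse of a 2x2 matrix (features kept 2-d to stay dependency-free)."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def quad(M, x):
    """Quadratic form x^T M x for a 2x2 matrix."""
    return sum(x[i] * M[i][j] * x[j] for i in range(2) for j in range(2))

def g_optimal_greedy(X, k, lam=0.1, mu=1e-3):
    """Greedily select k points minimizing the maximum predictive variance
    (the G-criterion) under a Laplacian-regularized covariance."""
    n = len(X)
    L = gaussian_laplacian(X)
    # Fixed regularizer R = lam * X^T L X + mu * I (2x2 for 2-d features).
    R = [[lam * sum(X[i][p] * L[i][j] * X[j][q]
                    for i in range(n) for j in range(n))
          + (mu if p == q else 0.0)
          for q in range(2)] for p in range(2)]
    chosen = []
    for _ in range(k):
        best, best_val = None, float("inf")
        for c in range(n):
            if c in chosen:
                continue
            S = [X[i] for i in chosen + [c]]
            # Candidate covariance C = S^T S + R.
            C = [[sum(s[p] * s[q] for s in S) + R[p][q] for q in range(2)]
                 for p in range(2)]
            Cinv = inv2(C)
            val = max(quad(Cinv, x) for x in X)  # worst-case variance
            if val < best_val:
                best, best_val = c, val
        chosen.append(best)
    return chosen
```

Usage: `g_optimal_greedy(points, 2)` returns the indices of the two points whose labels most reduce the pool-wide variance bound under this sketch.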

COLT 2006 (Springer)
Discriminative Learning Can Succeed Where Generative Learning Fails
Generative algorithms for learning classifiers use training data to separately estimate a probability model for each class. New items are classified by comparing their probabiliti...
Philip M. Long, Rocco A. Servedio
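The generative-versus-discriminative contrast in the abstract can be made concrete with a standard toy pairing (not the construction from the Long and Servedio paper): a generative classifier fits a Gaussian per class and compares likelihoods, while a discriminative logistic regression fits p(y|x) directly by gradient descent. All data and hyperparameters below are illustrative assumptions.

```python
import math

def fit_generative(xs, ys):
    """Fit one Gaussian per class; classify by comparing class likelihoods."""
    stats = {}
    for label in (0, 1):
        pts = [x for x, y in zip(xs, ys) if y == label]
        mu = sum(pts) / len(pts)
        var = sum((p - mu) ** 2 for p in pts) / len(pts) + 1e-6
        stats[label] = (mu, var)
    def predict(x):
        def loglik(label):
            mu, var = stats[label]
            return -0.5 * math.log(var) - (x - mu) ** 2 / (2 * var)
        return 1 if loglik(1) > loglik(0) else 0
    return predict

def fit_discriminative(xs, ys, steps=2000, lr=0.1):
    """Logistic regression: model p(y|x) directly, no per-class density."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return lambda x: 1 if w * x + b > 0 else 0
```

On well-separated data both succeed; the paper's point is that when the generative model's density assumptions are violated, the discriminative learner can still find a good decision boundary.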

ICML 2005 (IEEE)
Exploiting syntactic, semantic and lexical regularities in language modeling via directed Markov random fields
We present a directed Markov random field (MRF) model that combines n-gram models, probabilistic context free grammars (PCFGs) and probabilistic latent semantic analysis (PLSA) fo...
Shaojun Wang, Shaomin Wang, Russell Greiner, Dale ...
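The paper composes n-gram, PCFG, and PLSA models inside a directed MRF. As a much simpler point of reference (not the paper's composition), heterogeneous language models are often combined by linear interpolation of their word-probability estimates; the toy unigram and bigram tables below are invented for illustration.

```python
def interpolate(models, weights):
    """Combine word-probability estimates from several language models by
    linear interpolation -- a simpler baseline than the paper's
    directed-MRF composition, shown only to illustrate model combination."""
    assert abs(sum(weights) - 1.0) < 1e-9
    def prob(word, context):
        return sum(w * m(word, context) for m, w in zip(models, weights))
    return prob

# Hypothetical toy component models: a unigram and a bigram estimator,
# each falling back to a small floor probability for unseen events.
unigram = {"the": 0.3, "cat": 0.2, "sat": 0.1}
bigram = {("the", "cat"): 0.6, ("cat", "sat"): 0.5}

p_uni = lambda w, ctx: unigram.get(w, 0.01)
p_bi = lambda w, ctx: bigram.get((ctx[-1], w), 0.01) if ctx else 0.01

lm = interpolate([p_uni, p_bi], [0.4, 0.6])
```

For example, `lm("cat", ["the"])` mixes the unigram estimate 0.2 and the bigram estimate 0.6 into 0.4 * 0.2 + 0.6 * 0.6 = 0.44.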

COLT 2006 (Springer)
Can Entropic Regularization Be Replaced by Squared Euclidean Distance Plus Additional Linear Constraints
There are two main families of on-line algorithms depending on whether a relative entropy or a squared Euclidean distance is used as a regularizer. The difference between the two f...
Manfred K. Warmuth
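The two families the abstract refers to are the additive updates that arise from a squared-Euclidean regularizer (gradient descent) and the multiplicative updates that arise from a relative-entropy regularizer (exponentiated gradient, which keeps the weights on the probability simplex). A minimal sketch of the two update rules, with an assumed learning rate:

```python
import math

def gd_step(w, grad, eta=0.1):
    """Squared-Euclidean regularizer -> additive gradient-descent update."""
    return [wi - eta * gi for wi, gi in zip(w, grad)]

def eg_step(w, grad, eta=0.1):
    """Relative-entropy regularizer -> multiplicative exponentiated-gradient
    update; renormalization keeps w on the probability simplex."""
    u = [wi * math.exp(-eta * gi) for wi, gi in zip(w, grad)]
    z = sum(u)
    return [ui / z for ui in u]
```

Starting from the uniform weight vector, a positive gradient on one coordinate shrinks that weight multiplicatively under EG while the others are renormalized upward, whereas GD simply subtracts from it.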

SYNTHESE 2008
How experimental algorithmics can benefit from Mayo's extensions to Neyman-Pearson theory of testing
Although theoretical results for many algorithms across application domains have been presented over the last decades, not all algorithms can be analyzed fully theoretically. Exp...
Thomas Bartz-Beielstein
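This entry concerns statistically sound experimental comparison of algorithms. As a generic Neyman-Pearson-style building block (not Mayo's severity assessment, which refines such tests), a paired permutation test checks whether two algorithms' matched run results could plausibly have equal mean performance; all data below would come from actual experiment runs.

```python
import random

def permutation_test(a, b, n_resamples=10000, seed=0):
    """Paired permutation test: p-value for the null hypothesis that
    algorithms A and B have equal mean performance, given run results
    paired by problem instance or random seed."""
    rng = random.Random(seed)
    diffs = [x - y for x, y in zip(a, b)]
    observed = abs(sum(diffs))
    count = 0
    for _ in range(n_resamples):
        # Under the null, each pair's sign is exchangeable: flip at random.
        s = sum(d if rng.random() < 0.5 else -d for d in diffs)
        if abs(s) >= observed:
            count += 1
    return count / n_resamples
```

A small p-value indicates the observed performance gap is unlikely under the null; with only eight pairs, as in the test below, the smallest attainable p-value is 2/256, which is one reason experimental methodology (sample size, severity of the test) matters.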