Sciweavers

38 search results - page 2 / 8
» Boosting and Maximum Likelihood for Exponential Models
ACL
2006
Approximation Lasso Methods for Language Modeling
Lasso is a regularization method for parameter estimation in linear models. It optimizes the model parameters with respect to a loss function subject to a constraint on model complexity. This p...
Jianfeng Gao, Hisami Suzuki, Bin Yu
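As background for the abstract above, here is a minimal coordinate-descent sketch of plain Lasso via soft-thresholding, on synthetic data. This illustrates the standard method the paper builds on, not the paper's own approximation techniques; all names and the data are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding, the proximal operator of the L1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, alpha, n_sweeps=200):
    """Coordinate descent for min_w 0.5/n * ||y - Xw||^2 + alpha * ||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    col_norm = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(d):
            # Residual with feature j's current contribution added back in.
            r_j = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r_j / n
            w[j] = soft_threshold(rho, alpha) / col_norm[j]
    return w

# Synthetic check: 3 informative features out of 10.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.5, 0.5]
y = X @ w_true + 0.1 * rng.normal(size=200)
w_hat = lasso_cd(X, y, alpha=0.1)  # L1 drives irrelevant coefficients to zero
```

The L1 penalty makes the coordinate-wise optimum a soft-thresholded least-squares update, which is what produces exact zeros in the fitted coefficients.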
IJAR
2010
Parameter estimation and model selection for mixtures of truncated exponentials
Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing ...
Helge Langseth, Thomas D. Nielsen, Rafael Rumí...
JSC
2006
Counting and locating the solutions of polynomial systems of maximum likelihood equations, I
In statistics, mixture models consisting of several component subpopulations are used widely to model data drawn from heterogeneous sources. In this paper, we consider maximum lik...
Max-Louis G. Buot, Donald St. P. Richards
ICML
2007
Exponentiated gradient algorithms for log-linear structured prediction
Conditional log-linear models are a commonly used method for structured prediction. Efficient learning of parameters in these models is therefore an important problem. This paper ...
Amir Globerson, Terry Koo, Xavier Carreras, Michae...
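The exponentiated-gradient family referenced in this abstract uses multiplicative updates that keep the iterate on the probability simplex. A minimal sketch on a toy linear loss follows; this is the generic EG step, not the paper's structured-prediction algorithm, and the step size and cost vector are illustrative.

```python
import numpy as np

def eg_step(w, grad, eta):
    """One exponentiated-gradient update: multiply each weight by
    exp(-eta * gradient), then renormalize so w stays a distribution."""
    w_new = w * np.exp(-eta * grad)
    return w_new / w_new.sum()

# Toy use: minimize the linear loss <c, w> over the 3-simplex.
c = np.array([0.9, 0.5, 0.1])   # per-coordinate costs
w = np.ones(3) / 3              # start at the uniform distribution
for _ in range(100):
    w = eg_step(w, c, eta=0.5)  # mass concentrates on the cheapest coordinate
```

Because the update is multiplicative and renormalized, positivity and the sum-to-one constraint are maintained automatically, which is why EG suits distributions over structures.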
ML
2008
Boosted Bayesian network classifiers
The use of Bayesian networks for classification problems has received significant recent attention. Although computationally efficient, the standard maximum likelihood learning me...
Yushi Jing, Vladimir Pavlovic, James M. Rehg