Sciweavers

Search results for "Minimum Risk Annealing for Training Log-Linear Models" (13 results, page 1 of 3)
ACL 2006
Minimum Risk Annealing for Training Log-Linear Models
When training the parameters for a natural language system, one would prefer to minimize 1-best loss (error) on an evaluation set. Since the error surface for many natural languag...
David A. Smith, Jason Eisner
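As a gloss on the technique named in the title: rather than minimizing 1-best error directly, minimum risk annealing minimizes expected loss (risk) under the model distribution, sharpened by an inverse temperature gamma that is annealed so the objective approaches 1-best error. A minimal sketch under assumed names (feats, losses, the gamma schedule, and the numerical gradient are all illustrative, not the paper's implementation):

```python
import numpy as np

def annealed_risk(weights, feats, losses, gamma):
    """Expected loss under p_gamma(y|x) proportional to exp(gamma * w.f(y)).

    feats:  (n_hyps, n_feats) feature vectors for one n-best list
    losses: (n_hyps,) task loss per hypothesis (e.g. 1 - BLEU)
    gamma:  inverse temperature; as gamma grows, risk approaches 1-best loss
    """
    scores = gamma * feats.dot(weights)
    scores -= scores.max()            # stabilize the softmax
    p = np.exp(scores)
    p /= p.sum()
    return p.dot(losses)              # risk = E_p[loss]

def anneal(weights, feats, losses, gammas=(0.1, 1.0, 10.0, 100.0),
           lr=0.1, steps=50):
    """Annealing loop sketch: minimize risk at each temperature, then sharpen."""
    for gamma in gammas:
        for _ in range(steps):
            eps = 1e-6  # numerical gradient for brevity; an analytic one exists
            grad = np.array([
                (annealed_risk(weights + eps * e, feats, losses, gamma)
                 - annealed_risk(weights - eps * e, feats, losses, gamma))
                / (2 * eps)
                for e in np.eye(len(weights))
            ])
            weights = weights - lr * grad
    return weights
```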
ACL 2008
Beyond Log-Linear Models: Boosted Minimum Error Rate Training for N-best Re-ranking
Current re-ranking algorithms for machine translation rely on log-linear models, which have the potential problem of underfitting the training data. We present BoostedMERT, a nove...
Kevin Duh, Katrin Kirchhoff
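The snippet cuts off, but the idea named in the title is to replace a single log-linear reranker with a boosted ensemble of weak rankers over the n-best list. A hypothetical scoring sketch (WeakRanker, alphas, and rerank are placeholder names, not BoostedMERT's actual interface):

```python
from typing import Callable, List, Sequence
import numpy as np

WeakRanker = Callable[[np.ndarray], float]  # maps a feature vector to a score

def ensemble_score(feats: np.ndarray,
                   rankers: Sequence[WeakRanker],
                   alphas: Sequence[float]) -> float:
    """Score one hypothesis with a boosted ensemble: sum_t alpha_t * h_t(f)."""
    return sum(a * h(feats) for h, a in zip(rankers, alphas))

def rerank(nbest_feats: List[np.ndarray],
           rankers: Sequence[WeakRanker],
           alphas: Sequence[float]) -> int:
    """Return the index of the highest-scoring hypothesis in an n-best list."""
    return int(np.argmax([ensemble_score(f, rankers, alphas)
                          for f in nbest_feats]))
```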
NAACL 2010
Softmax-Margin CRFs: Training Log-Linear Models with Cost Functions
We describe a method of incorporating task-specific cost functions into standard conditional log-likelihood (CLL) training of linear structured prediction models. Recently introduc...
Kevin Gimpel, Noah A. Smith
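The cost-augmented objective behind softmax-margin is compact enough to show: standard CLL with the task cost added inside the log-sum-exp, so the model must beat high-cost outputs by a larger margin. A sketch over an explicit candidate list (for a CRF the sum would run over structures via dynamic programming; the dense candidate enumeration here is illustrative):

```python
import numpy as np

def softmax_margin_loss(scores, costs, gold_idx):
    """scores: model scores w.f(y) for each candidate y
    costs:  task cost(y, y*) per candidate (0 for the gold output)
    Softmax-margin: -score(y*) + log sum_y exp(score(y) + cost(y, y*)).
    With all costs 0 this reduces to conditional log-likelihood.
    """
    aug = scores + costs
    m = aug.max()                            # stabilize the log-sum-exp
    logZ = m + np.log(np.exp(aug - m).sum())
    return logZ - scores[gold_idx]

# Example: three candidates, candidate 0 is gold.
loss = softmax_margin_loss(np.array([2.0, 1.5, 0.3]),
                           np.array([0.0, 1.0, 2.0]), gold_idx=0)
```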
EMNLP 2011
Training a Log-Linear Parser with Loss Functions via Softmax-Margin
Log-linear parsing models are often trained by optimizing likelihood, but we would prefer to optimize for a task-specific metric like F-measure. Softmax-margin is a convex objecti...
Michael Auli, Adam Lopez
EMNLP 2009
First- and Second-Order Expectation Semirings with Applications to Minimum-Risk Training on Translation Forests
Many statistical translation models can be regarded as weighted logical deduction. Under this paradigm, we use weights from the expectation semiring (Eisner, 2002) to compute fir...
Zhifei Li, Jason Eisner
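The first-order expectation semiring (Eisner, 2002) that this work builds on can be sketched directly: weights are pairs (p, r) of a probability and a probability-weighted value, with a product rule in multiplication; summing over all derivations in a forest yields (Z, Z * E[value]), so the expectation is r / p. A minimal version (the toy two-derivation forest is illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExpWeight:
    """First-order expectation semiring element: (p, r) = (prob, prob * value)."""
    p: float
    r: float

    def __add__(self, other):   # semiring plus: componentwise addition
        return ExpWeight(self.p + other.p, self.r + other.r)

    def __mul__(self, other):   # semiring times: product rule for r
        return ExpWeight(self.p * other.p,
                         self.p * other.r + other.p * self.r)

ZERO = ExpWeight(0.0, 0.0)   # additive identity
ONE = ExpWeight(1.0, 0.0)    # multiplicative identity

# Two derivations, each a product of edge weights; the value could be a loss.
d1 = ExpWeight(0.5, 0.5 * 2.0) * ExpWeight(0.8, 0.8 * 1.0)  # p=0.4, value 3.0
d2 = ExpWeight(0.2, 0.2 * 4.0)                              # p=0.2, value 4.0
total = d1 + d2                      # inside weight of the (tiny) forest
expected_value = total.r / total.p   # E[value] = r / Z
```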