Sciweavers

3 search results for "Semi-Supervised Convex Training for Dependency Parsing"
ACL 2008
Semi-Supervised Convex Training for Dependency Parsing
We present a novel semi-supervised training algorithm for learning dependency parsers. By combining a supervised large margin loss with an unsupervised least squares loss, a discr...
Qin Iris Wang, Dale Schuurmans, Dekang Lin
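
As a rough illustration of the objective this abstract describes, the Python sketch below combines a supervised structured hinge (large-margin) loss over labeled parses with an unsupervised least-squares loss over unlabeled items; the data layout, the unsupervised targets, and the weighting are assumptions made for this example, not the paper's actual formulation.

```python
import numpy as np

# Toy sketch: supervised large-margin (hinge) loss plus an unsupervised
# least-squares loss, combined into one objective in the weight vector w.
# Data structures and the unsupervised target are illustrative assumptions.

def combined_objective(w, labeled, unlabeled, unsup_weight=1.0, reg=0.1):
    # labeled: list of (gold_feats, [(alt_feats, cost), ...]) per sentence,
    #          where cost is the structured error of the alternative parse.
    # unlabeled: list of (feats, target) pairs feeding the least-squares term.
    hinge = 0.0
    for gold, alternatives in labeled:
        gold_score = w @ gold
        for alt, cost in alternatives:
            hinge += max(0.0, cost - (gold_score - w @ alt))
    least_squares = sum((w @ f - t) ** 2 for f, t in unlabeled)
    # Each term is convex in w, so the combined objective stays convex.
    return hinge + unsup_weight * least_squares + reg * float(w @ w)

# Example call with random toy vectors:
rng = np.random.default_rng(0)
w = rng.normal(size=5)
labeled = [(rng.normal(size=5), [(rng.normal(size=5), 2.0)])]
unlabeled = [(rng.normal(size=5), 0.5)]
print(combined_objective(w, labeled, unlabeled))
```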
ACL 2006
Minimum Risk Annealing for Training Log-Linear Models
When training the parameters for a natural language system, one would prefer to minimize 1-best loss (error) on an evaluation set. Since the error surface for many natural languag...
David A. Smith, Jason Eisner
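
The sketch below illustrates the kind of annealed expected-loss (minimum-risk) objective the abstract refers to: a log-linear distribution over candidate outputs is smoothed by a temperature that is gradually lowered, sharpening the expectation toward 1-best loss. The candidate representation and the annealing schedule are assumptions for this example, not the paper's exact recipe.

```python
import numpy as np

# Sketch of an expected-loss objective under a temperature-scaled
# log-linear distribution; candidates, features, and the temperature
# schedule are illustrative assumptions.

def expected_loss(w, candidates, temperature):
    # candidates: list of (feature_vector, task_loss) pairs for one input,
    # scored by p(y|x) proportional to exp((w . f) / temperature).
    scores = np.array([w @ f for f, _ in candidates]) / temperature
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    losses = np.array([loss for _, loss in candidates])
    return float(probs @ losses)

# Annealing: start with a smooth, high-temperature objective and lower
# the temperature so the expectation approaches the 1-best loss.
rng = np.random.default_rng(1)
w = rng.normal(size=4)
candidates = [(rng.normal(size=4), loss) for loss in (0.0, 0.3, 1.0)]
for temperature in (8.0, 4.0, 2.0, 1.0, 0.5):
    print(temperature, expected_loss(w, candidates, temperature))
```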
EMNLP 2011
Training a Log-Linear Parser with Loss Functions via Softmax-Margin
Log-linear parsing models are often trained by optimizing likelihood, but we would prefer to optimize for a task-specific metric like F-measure. Softmax-margin is a convex objecti...
Michael Auli, Adam Lopez
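
For context, here is a minimal sketch of a softmax-margin loss as commonly defined: the usual log-linear negative log-likelihood, but with each candidate's score augmented by its task cost, which keeps the objective convex. The candidate list and feature vectors are illustrative assumptions, not the paper's parser.

```python
import numpy as np

# Softmax-margin sketch: cost-augmented log-partition minus the gold score.
# Candidates and costs are toy stand-ins for an enumerated output space.

def softmax_margin_loss(w, gold_feats, candidates):
    # candidates: list of (feature_vector, cost) pairs covering the output
    # space; the gold output should appear with cost 0.0.
    augmented = np.array([w @ f + cost for f, cost in candidates])
    log_partition = np.logaddexp.reduce(augmented)
    return float(log_partition - w @ gold_feats)

# Example: three candidate parses, gold first with zero cost.
rng = np.random.default_rng(2)
w = rng.normal(size=4)
gold = rng.normal(size=4)
candidates = [(gold, 0.0), (rng.normal(size=4), 0.4), (rng.normal(size=4), 1.0)]
print(softmax_margin_loss(w, gold, candidates))
```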