Sciweavers

Search results for "Loss Minimization in Parse Reranking" (8 results, page 1 of 2)

EMNLP 2006
Loss Minimization in Parse Reranking
We propose a general method for reranker construction which targets choosing the candidate with the least expected loss, rather than the most probable candidate. Different approac...
Ivan Titov, James Henderson
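
A minimal sketch of the expected-loss idea described in the abstract above (not the authors' implementation): given n-best candidates with model probabilities and a loss function, select the candidate with the lowest expected loss rather than the highest probability. All names below are illustrative.

    # Minimal sketch of expected-loss (minimum-Bayes-risk) selection from an
    # n-best list; `candidates`, `probs`, and `loss` are illustrative names.
    def expected_loss_rerank(candidates, probs, loss):
        """Return the candidate y minimizing sum_j probs[j] * loss(y, candidates[j])."""
        best, best_risk = None, float("inf")
        for y in candidates:
            risk = sum(p * loss(y, other) for other, p in zip(candidates, probs))
            if risk < best_risk:
                best, best_risk = y, risk
        return best

With a 0/1 loss and normalized probabilities this reduces to picking the most probable candidate; task losses such as 1 - F1 can give a different ranking.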

ICML 2007
Scalable training of L1-regularized log-linear models
The L-BFGS limited-memory quasi-Newton method is the algorithm of choice for optimizing the parameters of large-scale log-linear models with L2 regularization, but it cannot be us...
Galen Andrew, Jianfeng Gao
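
The abstract notes that plain L-BFGS handles L2 but not L1 regularization, since the L1 term is non-differentiable at zero. As a hedged illustration of how an L1 penalty is commonly handled, the sketch below shows a single proximal-gradient (soft-thresholding) update; this is a simpler alternative for exposition, not the quasi-Newton method the paper proposes.

    # Illustrative proximal-gradient (ISTA) update for an L1-regularized
    # objective: gradient step on the smooth loss, then soft-thresholding,
    # which drives small weights exactly to zero. `w` and `grad` are assumed
    # NumPy arrays; `lr` (step size) and `lam` (L1 strength) are scalars.
    import numpy as np

    def l1_prox_step(w, grad, lr, lam):
        z = w - lr * grad                         # step on the smooth part of the loss
        return np.sign(z) * np.maximum(np.abs(z) - lr * lam, 0.0)  # shrink toward zero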

NAACL 2007
Improved Inference for Unlexicalized Parsing
We present several improvements to unlexicalized parsing with hierarchically state-split PCFGs. First, we present a novel coarse-to-fine method in which a grammar’s own hierarc...
Slav Petrov, Dan Klein
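
As a hedged sketch of the coarse-to-fine idea mentioned in the abstract (not the paper's code), the step below keeps only chart items whose posterior under a coarse grammar clears a threshold; a refined grammar would then be applied only to the surviving items. The posterior dictionary and threshold value are assumptions.

    # Illustrative pruning step for coarse-to-fine parsing: items are
    # (start, end, coarse_label) spans with posteriors from the coarse pass.
    def prune_chart(coarse_posteriors, threshold=1e-4):
        return {item for item, p in coarse_posteriors.items() if p >= threshold}

    # toy example: a confident NP span survives, an implausible VP span is cut
    allowed = prune_chart({(0, 2, "NP"): 0.93, (1, 3, "VP"): 2e-7})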

AAAI 2007
Learning and Inference for Hierarchically Split PCFGs
Treebank parsing can be seen as the search for an optimally refined grammar consistent with a coarse training treebank. We describe a method in which a minimal grammar is hierarc...
Slav Petrov, Dan Klein
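
A purely illustrative skeleton of the hierarchical refinement loop suggested by the abstract: each nonterminal is split into two latent subsymbols per round. Parameter re-estimation and any merging of unhelpful splits are omitted here, so this is only the splitting step, not the learning procedure itself.

    # Illustrative symbol-splitting skeleton: every nonterminal X becomes
    # X-0 and X-1 each round; EM re-estimation and merging are omitted.
    def split_symbols(symbols):
        return [f"{s}-{i}" for s in symbols for i in (0, 1)]

    symbols = ["NP", "VP", "PP"]
    for _ in range(3):          # three rounds -> 8 latent subsymbols per original
        symbols = split_symbols(symbols)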

NAACL 2010
Ensemble Models for Dependency Parsing: Cheap and Good?
Previous work on dependency parsing used various kinds of combination models, but a systematic analysis and comparison of these approaches is lacking. In this paper we implemented ...
Mihai Surdeanu, Christopher D. Manning
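
As one concrete example of the kind of combination model surveyed here (an assumed baseline scheme, not a claim about the paper's best system), the sketch below combines dependency parsers by unweighted per-token voting over predicted heads.

    # Illustrative unweighted voting ensemble for dependency parsing: each base
    # parser predicts a head index per token; the ensemble takes the majority.
    from collections import Counter

    def vote_heads(predictions):
        """predictions: one list of head indices per base parser, equal lengths."""
        return [Counter(heads).most_common(1)[0][0] for heads in zip(*predictions)]

    # three hypothetical parsers on a four-token sentence (0 = artificial root)
    print(vote_heads([[2, 0, 2, 2], [2, 0, 2, 1], [3, 0, 2, 2]]))  # -> [2, 0, 2, 2]

Note that per-token voting does not guarantee a well-formed tree, which is one reason different combination strategies are worth comparing systematically.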