Loss Minimization in Parse Reranking

We propose a general method for reranker construction that targets choosing the candidate with the least expected loss, rather than the most probable candidate. Several approaches to approximating the expected loss are considered: estimating it from the probabilistic model used to generate the candidates, estimating it from a discriminative model trained to rerank the candidates, and learning to approximate the expected loss directly. The proposed methods are applied to the parse reranking task with various baseline models, achieving significant improvements over both the probabilistic models and the discriminative rerankers. When a neural network parser is used as the probabilistic model and the Voted Perceptron algorithm with data-defined kernels as the learning algorithm, the loss minimization model achieves a 90.0% labeled constituent F1 score on the standard WSJ parsing task.
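The core idea of the abstract, picking the candidate with the least expected loss instead of the most probable one, can be sketched as minimum-Bayes-risk selection over an n-best list. The candidates, probabilities, and symmetric-difference loss below are illustrative stand-ins, not the paper's actual models or loss estimators.

```python
# Minimum-expected-loss (MBR) selection over an n-best list: a sketch of the
# general technique, not the paper's estimators.

def expected_loss(candidate, candidates, probs, loss):
    """Approximate E_{y ~ p}[loss(candidate, y)] over the candidate list."""
    return sum(p * loss(candidate, other) for other, p in zip(candidates, probs))

def mbr_rerank(candidates, probs, loss):
    """Pick the candidate with the least expected loss, not the most probable."""
    return min(candidates, key=lambda c: expected_loss(c, candidates, probs, loss))

# Toy parses represented as sets of labeled spans; the loss is the size of the
# symmetric difference (a rough stand-in for 1 - F1 over constituents).
span_loss = lambda a, b: len(a ^ b)

cands = [frozenset({1, 2, 3}),   # the single most probable candidate
         frozenset({1, 4, 5}),   # similar to the third candidate
         frozenset({1, 4, 6})]
probs = [0.40, 0.35, 0.25]

map_choice = max(zip(cands, probs), key=lambda cp: cp[1])[0]
mbr_choice = mbr_rerank(cands, probs, span_loss)
# The probability mass shared by the two mutually similar candidates pulls the
# MBR choice away from the single most probable parse.
```

Note how the two lower-probability candidates agree with each other, so their combined mass makes the second candidate the least-risky choice even though the first is the most probable.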
Ivan Titov, James Henderson
Added: 30 Oct 2010
Updated: 30 Oct 2010
Type: Conference
Year: 2006
Where: EMNLP
Authors: Ivan Titov, James Henderson