NAACL 2010

Distributed Training Strategies for the Structured Perceptron

Perceptron training is widely applied in the natural language processing community for learning complex structured models. Like all structured prediction learning frameworks, the structured perceptron can be costly to train, since training complexity is proportional to inference, which is frequently non-linear in example sequence length. In this paper we investigate distributed training strategies for the structured perceptron as a means of reducing training times when computing clusters are available. We examine two strategies and provide convergence bounds for a particular mode of distributed structured perceptron training based on iterative parameter mixing (or averaging). We present experiments on two structured prediction problems.
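The iterative parameter mixing strategy described in the abstract can be illustrated with a minimal sketch: each shard of the training data runs one perceptron epoch starting from the current mixed weights, and the resulting per-shard weight vectors are averaged before the next epoch. The sketch below uses a toy multiclass perceptron in place of full structured inference, and runs shards sequentially where a cluster would run them in parallel; all function names and data are illustrative, not from the paper.

```python
import numpy as np

def perceptron_epoch(w, shard):
    """One perceptron pass over a single data shard, returning updated weights."""
    w = w.copy()
    for x, y in shard:
        # Inference step: pick the highest-scoring class (stand-in for
        # structured decoding, e.g. Viterbi over a sequence).
        y_hat = int(np.argmax(w @ x))
        if y_hat != y:
            # Standard perceptron update toward the gold label.
            w[y] += x
            w[y_hat] -= x
    return w

def iterative_parameter_mixing(shards, n_classes, dim, epochs=5):
    """Distributed perceptron via iterative (uniform) parameter mixing."""
    w = np.zeros((n_classes, dim))
    for _ in range(epochs):
        # On a cluster, each shard's epoch would run in parallel on one machine.
        shard_weights = [perceptron_epoch(w, shard) for shard in shards]
        # Mix: average the per-shard weights, then redistribute for the next epoch.
        w = np.mean(shard_weights, axis=0)
    return w
```

Uniform averaging is the simplest mixing rule; the paper's analysis covers mixing of this iterative form, which (unlike a single mix after full independent training) comes with convergence guarantees.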
Ryan T. McDonald, Keith Hall, Gideon Mann
Added 14 Feb 2011
Updated 14 Feb 2011
Type Conference
Year 2010
Where NAACL
Authors Ryan T. McDonald, Keith Hall, Gideon Mann