Sciweavers

IJCNLP
2005
Springer

Regularisation Techniques for Conditional Random Fields: Parameterised Versus Parameter-Free

Recent work on Conditional Random Fields (CRFs) has demonstrated the need for regularisation when applying these models to real-world NLP data sets. Conventional approaches to regularising CRFs have focused on using a Gaussian prior over the model parameters. In this paper we explore other possibilities for CRF regularisation. We examine alternative choices of prior distribution and we relax the usual simplifying assumptions made with the use of a prior, such as constant hyperparameter values across features. In addition, we contrast the effectiveness of priors with an alternative, parameter-free approach. Specifically, we employ logarithmic opinion pools (LOPs). Our results show that a LOP of CRFs can outperform a standard unregularised CRF and attain a performance level close to that of a regularised CRF, without the need for intensive hyperparameter search.
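The logarithmic opinion pool mentioned in the abstract is a normalised weighted geometric mean of the constituent models' distributions: p_LOP(y|x) ∝ ∏_α p_α(y|x)^{w_α}, with the weights summing to one. A minimal sketch (the function name, example distributions, and equal weights below are illustrative, not taken from the paper):

```python
import numpy as np

def logarithmic_opinion_pool(distributions, weights):
    """Combine expert distributions via a normalised weighted geometric mean:

        p_LOP(y) = (1/Z) * prod_a p_a(y) ** w_a

    `distributions` is a list of probability vectors over the same label set;
    `weights` is a list of per-expert weights summing to 1.
    """
    distributions = np.asarray(distributions, dtype=float)
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0), "pool weights must sum to 1"
    # Work in log space for numerical stability, then renormalise.
    log_pool = weights @ np.log(distributions)
    log_pool -= log_pool.max()
    pooled = np.exp(log_pool)
    return pooled / pooled.sum()

# Two hypothetical per-label distributions from different "expert" models.
p1 = [0.7, 0.2, 0.1]
p2 = [0.5, 0.3, 0.2]
pool = logarithmic_opinion_pool([p1, p2], [0.5, 0.5])
```

Because the pool multiplies the experts' probabilities, any label that one expert assigns near-zero mass is suppressed in the combined distribution, which is the source of the regularising effect the paper exploits.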
Andrew Smith, Miles Osborne
Added 27 Jun 2010
Updated 27 Jun 2010
Type Conference
Year 2005
Where IJCNLP
Authors Andrew Smith, Miles Osborne