EMNLP
2009

Re-Ranking Models Based-on Small Training Data for Spoken Language Understanding

The design of practical language applications by means of statistical approaches requires annotated data, which is one of the most critical constraints. This is particularly true for Spoken Dialog Systems, since considerable domain-specific conceptual annotation is needed to obtain accurate Language Understanding models. Since data annotation is usually costly, methods to reduce the amount of data needed are required. In this paper, we show that better feature representations serve this purpose and that structure kernels provide the needed improved representation. Given the relatively high computational cost of kernel methods, we apply them only to re-rank the list of hypotheses provided by a fast generative model. Experiments with Support Vector Machines and different kernels on two dialog corpora show that our re-ranking models achieve better results than state-of-the-art approaches when little training data is available.
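The two-stage pipeline the abstract describes (a fast generative model proposes an n-best list of interpretation hypotheses; a discriminative, kernel-based model then re-orders it) can be sketched as below. This is a minimal illustration only: the `rerank` helper and the toy scoring function are hypothetical stand-ins, not the paper's SVM with structure kernels.

```python
# Sketch of n-best re-ranking for Spoken Language Understanding.
# A generative model supplies (hypothesis, score) pairs; a separate
# discriminative scorer (here a toy heuristic standing in for an SVM
# with structure kernels) re-orders them.

def rerank(hypotheses, score):
    """Return the n-best list re-ordered by the discriminative score.

    hypotheses: list of (hypothesis, generative_score) pairs.
    score: callable mapping a hypothesis string to a re-ranking score.
    """
    return sorted(hypotheses, key=lambda pair: score(pair[0]), reverse=True)

# Toy n-best list of concept-annotation hypotheses (illustrative only).
nbest = [("null null", 0.9), ("city.name date", 0.7), ("city.name null", 0.8)]

# Toy scorer: favor hypotheses with more filled concept slots.
best = rerank(nbest, lambda h: h.count(".") + h.count("date"))[0][0]
```

In the paper's setting the scorer would be the SVM decision function, so the (expensive) kernel evaluations are confined to the short n-best list rather than the full search space, which is why re-ranking keeps kernel methods affordable.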
Added 17 Feb 2011
Updated 17 Feb 2011
Type Conference
Year 2009
Where EMNLP
Authors Marco Dinarelli, Alessandro Moschitti, Giuseppe Riccardi