LREC 2008

Evaluating Dialogue Act Tagging with Naive and Expert Annotators

In this paper the dialogue act annotations of naive and expert annotators, both annotating the same data, are compared in order to characterise the insights that annotations made by different kinds of annotators may provide for evaluating dialogue act tagsets. It is argued that agreement among naive annotators provides insight into the clarity of the tagset, whereas agreement among expert annotators indicates how reliably the tagset can be applied once errors are ruled out that are due to deficiencies in understanding the concepts of the tagset, to a lack of experience with the annotation tool, or to limited annotation experience more generally. An indication of the differences between the two groups in terms of inter-annotator agreement and tagging accuracy on task-oriented dialogues in different domains, annotated with the DIT++ dialogue act tagset, is presented, and the annotations of both groups are assessed against a gold standard. Additionally, the effect of the ...
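The paper itself contains no code; the following is a minimal illustrative sketch (in Python) of the kind of measures the abstract mentions: pairwise inter-annotator agreement via Cohen's kappa and per-annotator accuracy against a gold standard. The annotators, tag labels, and data below are hypothetical and not taken from the paper.

# Illustrative sketch (not from the paper): pairwise inter-annotator agreement
# (Cohen's kappa) and accuracy against a gold standard for two hypothetical
# annotators labelling the same utterances with dialogue act tags.
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators over the same items."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Chance agreement: sum over tags of the product of marginal proportions.
    expected = sum(freq_a[t] * freq_b[t] for t in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

def accuracy(labels, gold):
    """Proportion of items on which an annotator matches the gold standard."""
    return sum(l == g for l, g in zip(labels, gold)) / len(gold)

# Hypothetical annotations with made-up tag labels.
naive  = ["inform", "question", "inform", "answer", "inform"]
expert = ["inform", "question", "answer", "answer", "inform"]
gold   = ["inform", "question", "answer", "answer", "suggest"]

print("kappa(naive, expert):  ", round(cohen_kappa(naive, expert), 3))
print("accuracy(naive, gold): ", round(accuracy(naive, gold), 3))
print("accuracy(expert, gold):", round(accuracy(expert, gold), 3))

In a setup like this, the comparison the abstract describes would amount to computing agreement within the naive group and within the expert group separately, and assessing each group's tagging accuracy against the gold standard.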
Jeroen Geertzen, Volha Petukhova, Harry Bunt
Type: Conference
Year: 2008
Where: LREC
Authors: Jeroen Geertzen, Volha Petukhova, Harry Bunt