Use of Metadata for Question Answering and Novelty Tasks

CL Research’s question-answering system for TREC 2003 was modified away from reliance on database technology toward its core underlying technology of massive XML tagging for processing both questions and documents. This core technology was then extended to participate in the novelty task. It provides many opportunities for experimenting with various approaches to question answering and novelty determination. For the QA track, we submitted one run; our overall main-task score was 0.075, with scores of 0.070 for factoid questions, 0.000 for list questions, and 0.160 for definition questions. For the passage task, we submitted two runs; our better score was 0.119 on the factoid questions. These scores were all considerably below the medians for these tasks. We have implemented further routines since our official submission, improving our scores to 0.18 and 0.23 for the exact-answer and passage tasks, respectively. For the Novelty track, we submitted four runs for t...
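As a rough illustration of the XML-tagging idea, the toy sketch below wraps annotated phrases of a question in XML elements so that downstream matching can query the markup rather than raw text. The tag names (`question`, `phrase`, `type`) and the annotation scheme are hypothetical; they are not CL Research's actual tag set.

```python
# Toy sketch: represent a question as XML with tagged phrases.
# Tag names and phrase types here are illustrative assumptions,
# not the tag set used in the CL Research system.
import xml.etree.ElementTree as ET

def tag_question(text, phrases):
    """Wrap annotated phrases of a question in XML elements.

    phrases: list of (phrase_text, phrase_type) pairs.
    """
    root = ET.Element("question")
    root.set("text", text)
    for phrase, ptype in phrases:
        el = ET.SubElement(root, "phrase")
        el.set("type", ptype)
        el.text = phrase
    return ET.tostring(root, encoding="unicode")

tagged = tag_question(
    "What year did TREC begin?",
    [("What year", "time-question"), ("TREC", "proper-noun")],
)
print(tagged)
```

Once questions and documents are both in such a form, answer candidates can be found by comparing tags (e.g. a `time-question` phrase against date-tagged document spans) instead of string matching.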
Kenneth C. Litkowski
Type Conference
Year 2003
Where TREC
Authors Kenneth C. Litkowski