
CL Research Experiments in TREC-10 Question Answering

CL Research's question-answering system (DIMAP-QA) for TREC-10 only slightly extends its semantic relation triple (logical form) technology, in which documents are fully parsed and databases are built around discourse entities. Time constraints did not allow us to make various changes planned after TREC-9. The TREC-10 changes made fuller use of the integrated machine-readable lexical resources and extended the question-answering capability to handle list and context questions. Experiments to further exploit the dictionary resources were not fully completed at the time of the TREC-10 submission, affecting planned revisions to other QA components. The official score for the main TREC-10 QA task was 0.120 (down from 0.135 in TREC-9), based on processing 10 of the top 50 documents provided by NIST, against an average of 0.235 across 67 submissions. Post-hoc analysis suggests that a more accurate assessment of DIMAP-QA's answer-identification performance is 0.217. For the list task, the CL...
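The abstract does not describe DIMAP-QA's internals, but the triple-based design it names can be sketched. The following is a minimal, hypothetical Python sketch, assuming a database of (entity, relation, value) triples extracted from fully parsed documents and indexed by discourse entity, with questions answered by matching an (entity, relation) pattern; every name in it (Triple, TripleStore, answer) is illustrative, not taken from the system.

# Illustrative sketch only; all names here are hypothetical, showing the
# general idea of semantic relation triples keyed by discourse entities.
from collections import defaultdict
from typing import NamedTuple

class Triple(NamedTuple):
    entity: str    # discourse entity (e.g., a named entity or noun phrase)
    relation: str  # semantic relation extracted from the document parse
    value: str     # the related term, a candidate answer fragment

class TripleStore:
    def __init__(self) -> None:
        # Index triples by lowercased discourse entity for lookup.
        self._by_entity: dict[str, list[Triple]] = defaultdict(list)

    def add(self, triple: Triple) -> None:
        self._by_entity[triple.entity.lower()].append(triple)

    def answer(self, entity: str, relation: str) -> list[str]:
        # Return candidate answers matching an (entity, relation) pattern
        # derived from a parsed question.
        return [t.value for t in self._by_entity[entity.lower()]
                if t.relation == relation]

store = TripleStore()
# Triples as they might be extracted from a fully parsed document.
store.add(Triple("TREC-10", "organized-by", "NIST"))
store.add(Triple("TREC-10", "year", "2001"))

print(store.answer("TREC-10", "year"))  # ['2001']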
Type: Conference
Year: 2001
Where: TREC
Authors: Kenneth C. Litkowski