This report describes the experiments of the University of Edinburgh and the University of Sydney at the TREC-2004 question answering evaluation exercise. Our system combines two ...
Kisuh Ahn, Johan Bos, Stephen Clark, Tiphaine Dalm...
... automatically identifying clinically relevant aspects of MEDLINE abstracts. These extracted elements serve as the input to an algorithm that scores the relevance of citations with respect ...
The TREC 2007 question answering (QA) track contained two tasks: the main task consisting of series of factoid, list, and “Other” questions organized around a set of targets, ...
Information extraction systems have been studied at length from the viewpoint of users posing definite questions whose expected answer is to be found in a document collection. T...
We evaluate the feasibility of applying currently available research tools to the problem of cross-lingual QA. We establish a task baseline by combining a cross-lingual IR system w...
Lucian Vlad Lita, Monica Rogati, Jaime G. Carbonel...