Learning to Extract Symbolic Knowledge from the World Wide Web

The World Wide Web is a vast source of information accessible to computers, but understandable only to humans. The goal of the research described here is to automatically create a computer-understandable world wide knowledge base whose content mirrors that of the World Wide Web. Such a knowledge base would enable much more effective retrieval of Web information, and promote new uses of the Web to support knowledge-based inference and problem solving. Our approach is to develop a trainable information extraction system that takes two inputs: an ontology defining the classes and relations of interest, and a set of training data consisting of labeled regions of hypertext representing instances of these classes and relations. Given these inputs, the system learns to extract information from other pages and hyperlinks on the Web. This paper describes our general approach, several machine learning algorithms for this task, and promising initial results with a prototype system.
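
The abstract describes a trainable extractor that takes an ontology of classes and a set of labeled hypertext regions, then learns to recognize new instances on the Web. As a rough illustration only (not the authors' system), the sketch below shows one plausible piece of such a pipeline: a Laplace-smoothed naive Bayes bag-of-words classifier that assigns pages to ontology classes. The class names and toy training strings are hypothetical placeholders.

# Minimal sketch (assumption: a naive Bayes bag-of-words page classifier
# stands in for the paper's learned extractors; class names and training
# pages below are hypothetical).
import math
import re
from collections import Counter, defaultdict

def tokenize(page_text):
    """Lowercase bag-of-words tokenization of a page's text."""
    return re.findall(r"[a-z]+", page_text.lower())

class NaiveBayesPageClassifier:
    def __init__(self):
        self.class_word_counts = defaultdict(Counter)  # class -> word counts
        self.class_doc_counts = Counter()              # class -> number of pages
        self.vocabulary = set()

    def train(self, labeled_pages):
        """labeled_pages: iterable of (page_text, ontology_class) pairs."""
        for text, label in labeled_pages:
            words = tokenize(text)
            self.class_word_counts[label].update(words)
            self.class_doc_counts[label] += 1
            self.vocabulary.update(words)

    def classify(self, page_text):
        """Return the ontology class with the highest posterior log-probability."""
        words = tokenize(page_text)
        total_docs = sum(self.class_doc_counts.values())
        vocab_size = len(self.vocabulary)
        best_label, best_score = None, float("-inf")
        for label, doc_count in self.class_doc_counts.items():
            score = math.log(doc_count / total_docs)  # log prior
            word_counts = self.class_word_counts[label]
            total_words = sum(word_counts.values())
            for w in words:
                # Laplace-smoothed per-word likelihood
                score += math.log((word_counts[w] + 1) / (total_words + vocab_size))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

if __name__ == "__main__":
    # Hypothetical labeled regions of hypertext for three ontology classes.
    training_data = [
        ("professor research publications teaching office hours", "faculty"),
        ("advisor thesis coursework teaching assistant", "student"),
        ("syllabus lectures homework exams grading", "course"),
    ]
    clf = NaiveBayesPageClassifier()
    clf.train(training_data)
    print(clf.classify("lectures and homework assignments for this class"))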
Type: Conference
Year: 1998
Where: AAAI
Authors: Mark Craven, Dan DiPasquo, Dayne Freitag, Andrew McCallum, Tom M. Mitchell, Kamal Nigam, Seán Slattery