Sciweavers

183 search results - page 19 / 37
» Language resources extracted from Wikipedia
WWW
2010
ACM
Not so creepy crawler: easy crawler generation with standard XML queries
Web crawlers are increasingly used for focused tasks such as the extraction of data from Wikipedia or the analysis of social networks like last.fm. In these cases, pages are far m...
Franziska von dem Bussche, Klara A. Weiand, Benedi...
CLEF
2008
Springer
Overview of VideoCLEF 2008: Automatic Generation of Topic-Based Feeds for Dual Language Audio-Visual Content
The VideoCLEF track, introduced in 2008, aims to develop and evaluate tasks related to analysis of and access to multilingual multimedia content. In its first year, VideoCLEF pilo...
Martha Larson, Eamonn Newman, Gareth J. F. Jones
WWW
2009
ACM
Crawling English-Japanese person-name transliterations from the web
Automatic compilation of lexicons is a dream of lexicon compilers as well as lexicon users. This paper proposes a system that crawls English-Japanese person-name transliterations f...
Satoshi Sato
SCAM
2008
IEEE
CoordInspector: A Tool for Extracting Coordination Data from Legacy Code
More and more current software systems rely on non-trivial coordination logic for combining autonomous services typically running on different platforms and often owned by diffe...
Nuno F. Rodrigues, Luís Soares Barbosa
IJMMS
2008
Ontology-based information extraction and integration from heterogeneous data sources
In this paper we present the design, implementation and evaluation of SOBA, a system for ontology-based information extraction from heterogeneous data resources, including plain t...
Paul Buitelaar, Philipp Cimiano, Anette Frank, Mat...