Sciweavers

30 search results (page 1 of 6) for "Probabilistic models for focused web crawling"
WIDM 2004 (ACM)
Probabilistic models for focused web crawling
A focused crawler must use information gleaned from previously crawled page sequences to estimate the relevance of a newly seen URL. Therefore, good performance depends on powerfu...
Hongyu Liu, Evangelos E. Milios, Jeannette Janssen
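The relevance estimate this abstract alludes to can be sketched minimally: score each unseen URL from its anchor text and crawl best-first. The keyword likelihoods below are illustrative assumptions, not the paper's learned model.

```python
import heapq

# Hypothetical per-word likelihoods P(word | relevant) -- illustrative
# numbers only, not taken from the paper.
RELEVANT_LIKELIHOOD = {"crawler": 0.9, "focused": 0.8, "web": 0.6, "recipe": 0.05}
PRIOR = 0.1  # default likelihood for words we have no estimate for

def relevance_score(anchor_text: str) -> float:
    """Naive product-of-likelihoods estimate of relevance from anchor text."""
    score = 1.0
    for word in anchor_text.lower().split():
        score *= RELEVANT_LIKELIHOOD.get(word, PRIOR)
    return score

class Frontier:
    """Best-first crawl frontier: most relevant-looking URL pops first."""
    def __init__(self):
        self._heap = []
    def push(self, url: str, anchor_text: str) -> None:
        # heapq is a min-heap, so negate the score for best-first order
        heapq.heappush(self._heap, (-relevance_score(anchor_text), url))
    def pop(self) -> str:
        return heapq.heappop(self._heap)[1]

f = Frontier()
f.push("http://a.example", "focused web crawler tutorial")
f.push("http://b.example", "chocolate cake recipe")
print(f.pop())  # prints http://a.example -- the on-topic link wins
```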
VLDB 2000 (ACM)
Focused Crawling Using Context Graphs
Maintaining currency of search engine indices by exhaustive crawling is rapidly becoming impossible due to the increasing size and dynamic content of the web. Focused crawlers aim...
Michelangelo Diligenti, Frans Coetzee, Steve Lawre...
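The context-graph idea is to estimate how many links away a page is from a target page and to crawl near layers first. A toy sketch of the layering step, with hand-picked keyword profiles standing in for the paper's trained layer classifiers:

```python
# Illustrative layer profiles (assumptions, not the paper's classifiers):
# layer 0 = target pages, layer 1 = pages one link away, and so on.
LAYERS = {
    0: {"crawler", "focused", "frontier"},
    1: {"search", "index", "retrieval"},
    2: {"computer", "science", "research"},
}

def assign_layer(words: set) -> int:
    """Pick the layer whose keyword profile overlaps the page most;
    ties go to the layer closest to the target."""
    overlaps = {layer: len(words & kw) for layer, kw in LAYERS.items()}
    return max(overlaps, key=lambda l: (overlaps[l], -l))

# A focused-crawling page lands in layer 0, so its out-links are
# fetched before those of a generic CS page (layer 2).
print(assign_layer({"a", "focused", "crawler", "paper"}))   # 0
print(assign_layer({"computer", "science", "department"}))  # 2
```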
NIPS 2000
The Missing Link - A Probabilistic Model of Document Content and Hypertext Connectivity
We describe a joint probabilistic model of the content and inter-connectivity of document collections such as sets of web pages or research paper archives. The model is...
David A. Cohn, Thomas Hofmann
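A joint model of this kind can be illustrated with a tiny numeric instance: each document mixes latent topics, and the probability of a hyperlink d → d' is a topic-weighted sum. The numbers below are made up for illustration, not the paper's fitted parameters.

```python
# Toy joint content/link model: P(d -> d') = sum_z P(z | d) * P(d' | z).
P_z_given_d = {            # topic mixture of the citing document
    "d1": {"ml": 0.9, "bio": 0.1},
}
P_dprime_given_z = {       # which documents each topic tends to link to
    "ml":  {"d2": 0.8, "d3": 0.2},
    "bio": {"d2": 0.1, "d3": 0.9},
}

def link_prob(d: str, d_prime: str) -> float:
    """Marginalize the latent topic out of the link distribution."""
    return sum(p_z * P_dprime_given_z[z].get(d_prime, 0.0)
               for z, p_z in P_z_given_d[d].items())

print(round(link_prob("d1", "d2"), 3))  # 0.9*0.8 + 0.1*0.1 = 0.73
print(round(link_prob("d1", "d3"), 3))  # 0.9*0.2 + 0.1*0.9 = 0.27
```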
CIKM 2011 (ACM)
Focusing on novelty: a crawling strategy to build diverse language models
Word prediction performed by language models plays an important role in many tasks, e.g. word sense disambiguation, speech recognition, handwriting recognition, query spelling an...
Luciano Barbosa, Srinivas Bangalore
ICDE 2006 (IEEE)
Query Selection Techniques for Efficient Crawling of Structured Web Sources
The high-quality, structured data from structured Web sources is invaluable for many applications. Hidden Web databases are not directly crawlable by Web search engines and are on...
Ping Wu, Ji-Rong Wen, Huan Liu, Wei-Ying Ma
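Query selection over a hidden-Web source is often framed as coverage maximization; a greedy set-cover sketch conveys the shape of the problem (an assumption-level illustration, not the paper's exact technique):

```python
# Each candidate query retrieves a known set of record IDs (toy data).
queries = {
    "q_db":   {1, 2, 3, 4},
    "q_web":  {3, 4, 5},
    "q_misc": {6},
}

def select_queries(queries: dict, universe: set) -> list:
    """Greedily pick the query that adds the most uncovered records."""
    covered, chosen = set(), []
    while covered != universe:
        q = max(queries, key=lambda q: len(queries[q] - covered))
        if not queries[q] - covered:
            break  # remaining records are unreachable by any query
        chosen.append(q)
        covered |= queries[q]
    return chosen

universe = {1, 2, 3, 4, 5, 6}
print(select_queries(queries, universe))  # ['q_db', 'q_web', 'q_misc']
```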