A collaborative crawler is a group of crawling nodes, in which each crawling node is responsible for a specific portion of the web. We study the problem of collecting geographical...
The presence of duplicate documents on the World Wide Web adversely affects crawling, indexing, and relevance, which are the core building blocks of web search. In this paper, we pres...
Hema Swetha Koppula, Krishna P. Leela, Amit Agarwa...
E-Learning offers the advantage of interactivity: an E-Learning system can adapt the learning materials to suit the learner’s personality and goals, and it can react to the...
Users of the web are increasingly interested in tracking the appearance of new postings rather than locating existing knowledge. Coupled with this is the emergence of the Web 2.0 ...
John Keeney, Dominic Jones, Dominik Roblek, David ...
Search engines largely rely on robots (i.e., crawlers or spiders) to collect information from the Web. Such crawling activities can be regulated from the server side by deploying ...
Yang Sun, Ziming Zhuang, Isaac G. Councill, C. Lee...
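As a minimal illustration of the server-side regulation mentioned in the abstract above, a site can publish a robots.txt file at its root. The directives and paths below are a hypothetical sketch, not taken from the paper:

```
# Hypothetical robots.txt, served at https://example.com/robots.txt
User-agent: *          # rules for all crawlers
Disallow: /private/    # ask crawlers to skip this directory
Crawl-delay: 10        # request a 10-second pause between fetches (non-standard extension)

User-agent: BadBot     # rules for one specific crawler
Disallow: /            # exclude it from the entire site
```

Compliance is voluntary: the file expresses the site operator's policy, and well-behaved crawlers honor it, but nothing enforces it technically.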