In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the...
This paper describes a new paradigm for modeling traffic levels on the World Wide Web (WWW) using a method of entropy maximization. This traffic is subject to the conservation con...
Searching Web resources is a very important topic due to the huge amount of valuable information available on the WWW. Standard search engines can be a great help, but they are ...
The poster describes a fast, simple, yet accurate method to associate large amounts of web resources stored in a search engine database with geographic locations. The method uses ...
We present a new algorithm to measure domain-specific readability. It iteratively computes the readability of domain-specific resources based on the difficulty of domain-specific c...