WWW 2007, ACM

A large-scale study of robots.txt

Search engines rely heavily on Web robots to collect information from the Web. Due to the unregulated, open-access nature of the Web, robot activities are extremely diverse. Such crawling activities can be regulated from the server side by deploying the Robots Exclusion Protocol in a file called robots.txt. Although it is not an enforced standard, ethical robots (and many commercial ones) follow the rules specified in robots.txt. With our focused crawler, we investigate 7,593 websites from the education, government, news, and business domains. Five crawls were conducted in succession to study temporal changes. Through statistical analysis of the data, we present a survey of the usage of robot exclusion rules at Web scale. The results also show that the usage of robots.txt has increased over time.

General Terms: Experimentation, Measurement.
Keywords: crawler, robots exclusion protocol, robots.txt, search engine.
Yang Sun, Ziming Zhuang, C. Lee Giles
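As context for the protocol the abstract describes, here is a minimal sketch of how a compliant crawler might consult robots.txt rules, using Python's standard urllib.robotparser. The rules, user-agent names, and URLs below are hypothetical illustrations, not drawn from the paper or its dataset.

from urllib import robotparser

# Hypothetical robots.txt content, parsed directly from in-memory lines
# (a real crawler would fetch https://example.com/robots.txt instead).
rules = [
    "User-agent: *",
    "Disallow: /private/",
    "",
    "User-agent: BadBot",
    "Disallow: /",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# An ethical robot checks the parsed rules before fetching each URL.
print(rp.can_fetch("*", "https://example.com/docs/page.html"))       # True
print(rp.can_fetch("*", "https://example.com/private/data.html"))    # False
print(rp.can_fetch("BadBot", "https://example.com/docs/page.html"))  # False

Because compliance is voluntary, this check only constrains robots that choose to honor it, which is why the paper distinguishes ethical robots from the rest.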
Added: 22 Nov 2009
Updated: 22 Nov 2009
Type: Conference
Year: 2007
Where: WWW
Authors: Yang Sun, Ziming Zhuang, C. Lee Giles