Background: When publishing large-scale microarray datasets, it is of great value to create supplemental websites where either the full data, or selected subsets corresponding to ...
Christian A. Rees, Janos Demeter, John C. Matese, ...
A website can regulate search engine crawler access to its content using the robots exclusion protocol, specified in its robots.txt file. The rules in the protocol enable the site...
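To make the kind of rules the abstract above refers to concrete, the following is a minimal sketch of how robots exclusion directives are interpreted, using Python's standard-library urllib.robotparser. The directives, the site https://example.com, and the crawler name "ExampleBot" are invented for illustration and do not come from the paper.

    from urllib.robotparser import RobotFileParser

    # Hypothetical robots.txt rules; the paths and the crawler name
    # "ExampleBot" are placeholders for illustration only.
    rules = [
        "User-agent: *",
        "Allow: /private/readme.html",
        "Disallow: /private/",
        "",
        "User-agent: ExampleBot",
        "Disallow: /",
    ]

    rp = RobotFileParser()
    rp.parse(rules)

    # A generic crawler may fetch public pages and the explicitly allowed file,
    # but not the rest of /private/.
    print(rp.can_fetch("SomeCrawler/1.0", "https://example.com/index.html"))          # True
    print(rp.can_fetch("SomeCrawler/1.0", "https://example.com/private/readme.html")) # True
    print(rp.can_fetch("SomeCrawler/1.0", "https://example.com/private/data.html"))   # False

    # ExampleBot is excluded from the whole site.
    print(rp.can_fetch("ExampleBot/1.0", "https://example.com/index.html"))           # False

can_fetch() answers the same allow-or-disallow question a crawler must answer before requesting a URL; a real crawler would typically load the site's live file via set_url() and read() instead of parsing an inline rule list.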
This paper presents a general framework for building classifiers that deal with short and sparse text & Web segments by making the most of hidden topics discovered from larges...
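The general idea described in the abstract above can be sketched roughly as follows: learn hidden topics from a large external corpus, then represent each short, sparse text by its word counts plus its inferred topic proportions before training an ordinary classifier. This is an illustrative scikit-learn sketch, not the authors' exact framework; the corpora, labels, and parameter choices below are placeholders.

    from scipy.sparse import csr_matrix, hstack
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # A large external corpus used only to discover hidden topics
    # (placeholder documents standing in for, e.g., Wikipedia articles).
    large_corpus = [
        "airlines offer cheap flights and hotel deals for holiday travel",
        "travel guides recommend booking flights and hotels early",
        "python raises a type error when a list comprehension mixes types",
        "debugging python code often starts from the error traceback",
    ]

    # Short, sparse labelled snippets to classify (invented examples).
    short_texts = ["cheap flights to paris", "python list comprehension error"]
    labels = ["travel", "programming"]

    vectorizer = CountVectorizer(stop_words="english")
    X_large = vectorizer.fit_transform(large_corpus)

    # Discover hidden topics on the large corpus.
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X_large)

    # Enrich each short text: raw word counts plus inferred topic proportions.
    X_short = vectorizer.transform(short_texts)
    X_enriched = hstack([X_short, csr_matrix(lda.transform(X_short))])

    clf = LogisticRegression(max_iter=1000).fit(X_enriched, labels)

    # Classify a new snippet that shares almost no words with the training snippets.
    new = vectorizer.transform(["hotel and flight deals"])
    print(clf.predict(hstack([new, csr_matrix(lda.transform(new))])))

The topic features are what make this work for sparse input: two snippets can share almost no words yet still receive similar topic proportions learned from the large corpus.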
Parallel browsing describes a behavior where users visit Web pages in multiple concurrent threads. Web browsers explicitly support this by providing tabs. Although parallel browsi...
In this paper, we propose a multimodal Web image retrieval technique based on multi-graph enabled active learning. The main goal is to leverage the heterogeneous data on the Web t...
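A very rough sketch of the flavour of multi-graph active learning described in the preceding abstract: build one similarity graph per modality (for example, visual features and surrounding text), fuse the graphs, propagate relevance labels over the fused graph, and query the user on the most uncertain image. The feature matrices, fusion weights, and propagation parameters below are invented; this is not the paper's algorithm.

    import numpy as np

    def affinity(features, sigma=1.0):
        # Gaussian similarity graph built from one modality's feature vectors.
        sq = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
        W = np.exp(-sq / (2.0 * sigma ** 2))
        np.fill_diagonal(W, 0.0)
        return W

    def normalize(W):
        # Symmetric normalisation D^{-1/2} W D^{-1/2}.
        d = np.sqrt(np.maximum(W.sum(axis=1), 1e-12))
        return W / np.outer(d, d)

    rng = np.random.default_rng(0)
    n = 8
    visual = rng.normal(size=(n, 5))    # placeholder visual descriptors
    textual = rng.normal(size=(n, 3))   # placeholder surrounding-text features

    # Fuse one graph per modality (equal weights chosen arbitrarily here).
    S = 0.5 * normalize(affinity(visual)) + 0.5 * normalize(affinity(textual))

    # Relevance feedback so far: +1 relevant, -1 irrelevant, 0 unlabeled.
    y = np.zeros(n)
    y[0], y[1] = 1.0, -1.0

    # Propagate labels over the fused graph (Zhou et al.-style smoothing).
    alpha = 0.9
    f = np.linalg.solve(np.eye(n) - alpha * S, (1.0 - alpha) * y)

    # Active learning step: query the unlabeled image with the least confident score.
    unlabeled = np.flatnonzero(y == 0)
    query = unlabeled[np.argmin(np.abs(f[unlabeled]))]
    print("propagated relevance scores:", np.round(f, 3))
    print("image to ask the user about next:", query)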