
SIGCSE
2008
ACM

Cluster computing for web-scale data processing

In this paper we present the design of a modern course in cluster computing and large-scale data processing. The defining differences between this design and previously published ones are its focus on processing very large data sets and its use of Hadoop, an open-source Java-based implementation of MapReduce and the Google File System, as the platform for programming exercises. Hadoop proved to be a key element in successfully implementing structured lab activities and independent design projects. Through this course, offered at the University of Washington in 2007, we imparted new skills to our students, improving their ability to design systems capable of solving web-scale problems.

Categories and Subject Descriptors: K.3.2 [Computer and Information Science Education]: Computer science education
General Terms: Design, Experimentation
Keywords: Education, Hadoop, MapReduce, Clusters, Distributed computing
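The abstract describes Hadoop as a Java-based implementation of the MapReduce model used for the course's programming exercises. As a rough illustration of that model, the sketch below runs the canonical word-count example in memory: a map phase emits (word, 1) pairs, and a reduce phase groups by key and sums. The class and method names are hypothetical and this is not Hadoop's actual API, which additionally handles splitting, shuffling, and fault tolerance across a cluster.

```java
import java.util.*;
import java.util.stream.*;

// Minimal in-memory sketch of the MapReduce programming model
// (illustrative only; not Hadoop's real Mapper/Reducer API).
public class WordCount {

    // Map phase: for each input line, emit a (word, 1) pair per token.
    static List<Map.Entry<String, Integer>> map(String line) {
        return Arrays.stream(line.toLowerCase().split("\\W+"))
                .filter(w -> !w.isEmpty())
                .map(w -> Map.entry(w, 1))
                .collect(Collectors.toList());
    }

    // Shuffle + reduce phase: group intermediate pairs by key, sum each group.
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            counts.merge(p.getKey(), p.getValue(), Integer::sum);
        }
        return counts;
    }

    // Driver: map each line independently, then reduce the combined output.
    // In Hadoop, the map calls would run in parallel across cluster nodes.
    public static Map<String, Integer> run(List<String> lines) {
        List<Map.Entry<String, Integer>> intermediate = new ArrayList<>();
        for (String line : lines) {
            intermediate.addAll(map(line));
        }
        return reduce(intermediate);
    }

    public static void main(String[] args) {
        System.out.println(run(List.of("the quick fox", "the fox")));
    }
}
```

Because each `map` call depends only on its own input line, the map phase parallelizes trivially; the grouping in `reduce` is the only step that requires data movement, which is what makes the model a good fit for the web-scale lab exercises the course describes.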
Type Conference
Year 2008
Where SIGCSE
Authors Aaron Kimball, Sierra Michels-Slettvet, Christophe Bisciglia