SC 2009, ACM

Lessons learned from a year's worth of benchmarks of large data clouds

In this paper, we discuss some of the lessons that we have learned working with the Hadoop and Sector/Sphere systems. Both are cloud-based systems designed to support data-intensive computing, and both include a distributed file system closely coupled with a system for processing data in parallel. Hadoop uses MapReduce, while Sphere can execute an arbitrary user-defined function over the data managed by Sector. We compare and contrast these systems and discuss some of the design trade-offs necessary in data-intensive computing. In our experimental studies over the past year, Sector/Sphere has consistently performed about two to four times faster than Hadoop. We discuss some of the reasons that might be responsible for this difference in performance.

Categories and Subject Descriptors: C.4 [Computer System Organization]: Performance of Systems

General Terms: Performance, Experimentation

Keywords: Cloud Computing, Data Intensive Computing, High Performance Computing
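The abstract contrasts two processing models: Hadoop's MapReduce, where computation must be expressed as map and reduce phases, and Sphere, which applies an arbitrary user-defined function (UDF) to each segment of the data managed by Sector. The sketch below, which is illustrative and not from the paper (the function names `map_reduce`, `sphere_process`, and `count_words` are assumptions for this example), reduces both models to plain Python over in-memory data to make the difference concrete:

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    # Hadoop-style model: the mapper emits (key, value) pairs, the
    # framework groups values by key, and the reducer folds each group.
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

def sphere_process(segments, udf):
    # Sphere-style model: an arbitrary user-defined function runs over
    # each data segment independently; results are simply collected.
    return [udf(segment) for segment in segments]

def count_words(segment):
    # A UDF for the Sphere-style model: count words in one segment.
    counts = {}
    for line in segment:
        for word in line.split():
            counts[word] = counts.get(word, 0) + 1
    return counts

lines = ["cloud data cloud", "data intensive"]

# The same word count expressed in both models.
wc_mr = map_reduce(
    lines,
    mapper=lambda line: [(word, 1) for word in line.split()],
    reducer=lambda key, values: sum(values),
)
wc_sphere = sphere_process([lines], count_words)[0]
```

In the MapReduce version the framework dictates the grouping-by-key structure; in the Sphere-style version the UDF is free to do anything with its segment, which is the flexibility the abstract attributes to Sector/Sphere.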
Yunhong Gu, Robert L. Grossman
Added: 19 May 2010
Updated: 19 May 2010
Type: Conference
Year: 2009
Where: SC