
INTENSIVE 2009, IEEE

Accelerating K-Means on the Graphics Processor via CUDA

In this paper an optimized k-means implementation on the graphics processing unit (GPU) is presented. NVIDIA’s Compute Unified Device Architecture (CUDA), available from the G80 GPU family onwards, is used as the programming environment. Emphasis is placed on optimizations directly targeted at this architecture to best exploit the available computational capabilities. Additionally, drawbacks and limitations of previous related work, e.g. maximum instance, dimension, and centroid counts, are addressed. The algorithm is realized in a hybrid manner, parallelizing distance calculations on the GPU while sequentially updating cluster centroids on the CPU based on the results from the GPU calculations. An empirical performance study on synthetic data is given, demonstrating a maximum 14x speedup over a fully SIMD-optimized CPU implementation.
Mario Zechner, Michael Granitzer
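
The hybrid scheme described in the abstract can be sketched roughly as follows. This CUDA code is an illustrative approximation, not the authors' implementation: the names assignKernel, updateCentroids, and kmeansHybrid and the parameters n, d, k are placeholders, and none of the paper's architecture-specific optimizations (shared memory staging, memory coalescing, etc.) are shown. One GPU thread computes the nearest-centroid assignment for one instance; the CPU then recomputes the centroids from those assignments.

// Hybrid k-means sketch: GPU computes per-point assignments,
// CPU performs the sequential centroid update.
// All names and parameters are illustrative, not from the paper.
#include <cuda_runtime.h>
#include <cfloat>
#include <vector>

// One thread per data point: find the index of the closest centroid
// by brute-force squared Euclidean distance over all k centroids.
__global__ void assignKernel(const float* points, const float* centroids,
                             int* assignments, int n, int d, int k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float bestDist = FLT_MAX;
    int bestIdx = 0;
    for (int c = 0; c < k; ++c) {
        float dist = 0.0f;
        for (int j = 0; j < d; ++j) {
            float diff = points[i * d + j] - centroids[c * d + j];
            dist += diff * diff;
        }
        if (dist < bestDist) { bestDist = dist; bestIdx = c; }
    }
    assignments[i] = bestIdx;
}

// CPU side: sequential centroid update from the GPU-computed assignments.
void updateCentroids(const std::vector<float>& points,
                     const std::vector<int>& assignments,
                     std::vector<float>& centroids, int n, int d, int k) {
    std::vector<float> sums(k * d, 0.0f);
    std::vector<int> counts(k, 0);
    for (int i = 0; i < n; ++i) {
        int c = assignments[i];
        counts[c]++;
        for (int j = 0; j < d; ++j) sums[c * d + j] += points[i * d + j];
    }
    for (int c = 0; c < k; ++c)
        if (counts[c] > 0)
            for (int j = 0; j < d; ++j)
                centroids[c * d + j] = sums[c * d + j] / counts[c];
}

// Host driver: alternate GPU assignment and CPU update for a fixed
// number of iterations (convergence checks omitted for brevity).
void kmeansHybrid(const std::vector<float>& points, std::vector<float>& centroids,
                  int n, int d, int k, int iterations) {
    float *dPoints, *dCentroids;
    int *dAssign;
    cudaMalloc(&dPoints, n * d * sizeof(float));
    cudaMalloc(&dCentroids, k * d * sizeof(float));
    cudaMalloc(&dAssign, n * sizeof(int));
    cudaMemcpy(dPoints, points.data(), n * d * sizeof(float), cudaMemcpyHostToDevice);

    std::vector<int> assignments(n);
    int threads = 256, blocks = (n + threads - 1) / threads;
    for (int it = 0; it < iterations; ++it) {
        cudaMemcpy(dCentroids, centroids.data(), k * d * sizeof(float), cudaMemcpyHostToDevice);
        assignKernel<<<blocks, threads>>>(dPoints, dCentroids, dAssign, n, d, k);
        cudaMemcpy(assignments.data(), dAssign, n * sizeof(int), cudaMemcpyDeviceToHost);
        updateCentroids(points, assignments, centroids, n, d, k);
    }
    cudaFree(dPoints); cudaFree(dCentroids); cudaFree(dAssign);
}

Keeping the centroid update on the CPU, as the abstract describes, avoids a parallel reduction on the device; only the assignments and the (small) centroid array cross the PCIe bus each iteration.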
Type Conference
Year 2009
Where INTENSIVE
Authors Mario Zechner, Michael Granitzer