

Speeding up parallel GROMACS on high-latency networks

Abstract: We investigate the parallel scaling of the GROMACS molecular dynamics code on Ethernet Beowulf clusters and which prerequisites are necessary for decent scaling even on such clusters, which offer only limited bandwidth and high latency. GROMACS 3.3 scales well on supercomputers like the IBM p690 (Regatta) and on Linux clusters with a special interconnect like Myrinet or Infiniband. Because of the high single-node performance of GROMACS, however, scaling on the widely used Ethernet-switched clusters typically breaks down when more than two compute nodes are involved, limiting the absolute speedup that can be gained to about 3 relative to a single-CPU run. With the LAM MPI implementation, the main scaling bottleneck is identified as the all-to-all communication that is required every time step. During such an all-to-all communication step, a large number of messages floods the network, and as a result many TCP packets are lost. We show that Ethernet flow control prevents netwo...
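For illustration, the following is a minimal C/MPI sketch of the per-time-step all-to-all exchange that the abstract identifies as the scaling bottleneck. The buffer sizes and the direct use of MPI_Alltoall are assumptions for the sketch, not the actual GROMACS or LAM MPI implementation.

    /* Sketch: an all-to-all collective issued once per MD time step.
       With N ranks this can generate on the order of N*(N-1) point-to-point
       messages, which is what floods an Ethernet switch when many ranks
       transmit simultaneously. Block size is an illustrative assumption. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        const int block = 1024;  /* assumed per-pair block size */
        double *sendbuf = malloc(nprocs * block * sizeof(double));
        double *recvbuf = malloc(nprocs * block * sizeof(double));
        for (int i = 0; i < nprocs * block; i++)
            sendbuf[i] = (double)rank;

        /* Every rank sends one block to every other rank and receives
           one block from each in return. */
        MPI_Alltoall(sendbuf, block, MPI_DOUBLE,
                     recvbuf, block, MPI_DOUBLE, MPI_COMM_WORLD);

        if (rank == 0)
            printf("all-to-all of %d doubles per rank pair completed\n", block);

        free(sendbuf);
        free(recvbuf);
        MPI_Finalize();
        return 0;
    }

On a switched Ethernet network, many such simultaneous transfers can overflow switch buffers and drop TCP packets; the paper's point is that enabling Ethernet flow control mitigates this congestion.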
Type: Journal
Year: 2007
Where: JCC
Authors: Carsten Kutzner, David van der Spoel, Martin Fechner, Erik Lindahl, Udo W. Schmitt, Bert L. de Groot, Helmut Grubmüller