ISCA 2005, IEEE

Direct Cache Access for High Bandwidth Network I/O

Recent I/O technologies such as PCI Express and 10 Gb Ethernet enable unprecedented levels of I/O bandwidth in mainstream platforms. However, in traditional architectures, memory latency alone can prevent processors from keeping pace with 10 Gb inbound network I/O traffic. We propose a platform-wide method called Direct Cache Access (DCA) to deliver inbound I/O data directly into processor caches. We demonstrate that DCA provides a significant reduction in memory latency and memory bandwidth demand for receive-intensive network I/O applications. Analysis of benchmarks such as SPECweb99, TPC-W, and TPC-C shows that the overall benefit depends on the relative volume of I/O to memory traffic as well as the spatial and temporal relationship between processor and I/O memory accesses. A system-level perspective for the efficient implementation of DCA is presented.
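To make the memory-latency claim above concrete, here is a rough back-of-envelope sketch. It is not taken from the paper: the 10 Gb/s line rate comes from the abstract, but the 64-byte cache line, the ~100 ns exposed DRAM miss latency, and the ~15 ns on-die cache hit latency are illustrative assumptions. The sketch compares those costs against the per-cache-line time budget a 10 Gb/s inbound stream leaves a single core.

```c
/*
 * Back-of-envelope model of the memory-latency argument in the abstract.
 * The line rate is from the abstract; the cache-line size and latency
 * figures are illustrative assumptions, not numbers from the paper.
 */
#include <stdio.h>

int main(void)
{
    const double line_rate_bps    = 10e9;  /* 10 Gb Ethernet (abstract)         */
    const double cache_line_b     = 64.0;  /* assumed cache-line size (bytes)   */
    const double dram_latency_ns  = 100.0; /* assumed exposed DRAM miss latency */
    const double cache_latency_ns = 15.0;  /* assumed on-die cache hit latency  */

    /* Inbound payload arriving per second, expressed in cache lines. */
    double lines_per_s = (line_rate_bps / 8.0) / cache_line_b;
    /* Time budget the receive path has for each cache line of data.  */
    double budget_ns = 1e9 / lines_per_s;

    printf("cache lines per second               : %.1f M\n", lines_per_s / 1e6);
    printf("budget per cache line                : %.1f ns\n", budget_ns);
    printf("cost if every line misses to DRAM    : %.1f ns (over budget: %s)\n",
           dram_latency_ns, dram_latency_ns > budget_ns ? "yes" : "no");
    printf("cost if DCA places the line in cache : %.1f ns (over budget: %s)\n",
           cache_latency_ns, cache_latency_ns > budget_ns ? "yes" : "no");
    return 0;
}
```

With these assumed numbers, a serialized demand miss to DRAM costs roughly twice the ~51 ns per-line budget, while a cache hit fits comfortably within it; that gap is the intuition behind delivering inbound I/O data directly into the processor cache.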
Ram Huggahalli, Ravi R. Iyer, Scott Tetrick
Type: Conference
Year: 2005
Where: ISCA