As more and more query processing work can be done in main memory, memory access is becoming a significant cost component of database operations. Recent database research has shown...
Hardware prefetching is a simple and effective technique for hiding cache miss latency and thus improving overall performance. However, it comes with the addition of prefetch buff...
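The abstract concerns hardware prefetching, but the latency-hiding idea it names can be illustrated with a software analogue. The sketch below is only that: an assumed example using the GCC/Clang `__builtin_prefetch` intrinsic, where the array, the function name, and the prefetch distance `PF_DIST` are all illustrative choices, not anything taken from the paper.

```c
/* Hedged software analogue of prefetching: issue a prefetch for data a few
 * iterations ahead so the cache miss overlaps with useful work.
 * PF_DIST and the array layout are arbitrary assumptions. */
#include <stddef.h>

#define PF_DIST 16  /* how many elements ahead to prefetch (tuning knob) */

long sum_with_prefetch(const long *a, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + PF_DIST < n)
            /* request the cache line for a[i + PF_DIST] now, so it is
             * (hopefully) resident by the time the loop reaches it */
            __builtin_prefetch(&a[i + PF_DIST], 0 /* read */, 3 /* keep in cache */);
        sum += a[i];
    }
    return sum;
}
```

Hardware prefetchers do the same thing transparently, holding fetched lines in dedicated prefetch buffers rather than relying on explicit instructions.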
High-performance processors use a large set-associative L1 data cache with multiple ports. As clock speeds and cache sizes increase, such a cache consumes a significant percentage of t...
Dan Nicolaescu, Alexander V. Veidenbaum, Alexandru...
In this paper we consider the distributed simulation of queueing networks of FCFS servers with infinite buffers, and irreducible Markovian routing. We first show that for either t...
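The paper addresses *distributed* simulation of such networks; as a point of reference, the following is a minimal sequential sketch of a single FCFS server with an infinite buffer (an M/M/1 queue), using the standard recursion depart[i] = max(arrive[i], depart[i-1]) + service[i]. The rates, sample size, and output format are assumptions for illustration only, not the paper's algorithm.

```c
/* Minimal sequential sketch of one FCFS server with an infinite buffer:
 * Poisson arrivals at rate lambda, exponential service at rate mu.
 * All parameters are illustrative assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static double expo(double rate)                 /* exponential random variate */
{
    return -log(1.0 - (double)rand() / ((double)RAND_MAX + 1.0)) / rate;
}

int main(void)
{
    const double lambda = 0.8, mu = 1.0;        /* arrival and service rates */
    const int    n      = 100000;               /* customers to simulate */
    double arrive = 0.0, depart = 0.0, total_wait = 0.0;

    for (int i = 0; i < n; i++) {
        arrive += expo(lambda);                 /* next Poisson arrival time */
        double start = arrive > depart ? arrive : depart;  /* FCFS: wait for server */
        depart = start + expo(mu);              /* exponential service time */
        total_wait += start - arrive;           /* time spent waiting in queue */
    }
    printf("mean wait in queue: %.3f (M/M/1 theory: %.3f)\n",
           total_wait / n, lambda / (mu * (mu - lambda)));
    return 0;
}
```

A distributed simulation partitions the servers of the network across processes and must keep their event clocks consistent, which is exactly the coordination problem the paper studies.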
Switch chips are building blocks for computer and communication systems. Switches need internal buffering because of output contention; shared buffering is known to perf...
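To make the contention argument concrete, here is a hedged toy model, not the paper's switch architecture: an N-output switch in which each input offers one cell per slot with some probability, destined for a uniformly random output, and each output transmits at most one cell per slot. It compares losses when B cells are shared by all outputs versus B/N dedicated cells per output; every parameter is an assumption chosen for illustration.

```c
/* Toy slot-based model of output contention in a switch:
 * shared buffer of B cells vs. B/N dedicated cells per output.
 * N, B, SLOTS and LOAD are arbitrary assumptions. */
#include <stdio.h>
#include <stdlib.h>

#define N      8        /* ports */
#define B      32       /* total buffer cells */
#define SLOTS  100000   /* simulated time slots */
#define LOAD   0.9      /* per-input offered load */

int main(void)
{
    int  shared_q[N] = {0}, dedic_q[N] = {0};
    long shared_used = 0, drop_shared = 0, drop_dedic = 0;

    for (int t = 0; t < SLOTS; t++) {
        /* arrivals: each input offers one cell with probability LOAD */
        for (int in = 0; in < N; in++) {
            if ((double)rand() / RAND_MAX < LOAD) {
                int out = rand() % N;            /* uniform output choice */
                if (shared_used < B) { shared_q[out]++; shared_used++; }
                else                   drop_shared++;
                if (dedic_q[out] < B / N) dedic_q[out]++;
                else                      drop_dedic++;
            }
        }
        /* departures: each output sends one queued cell per slot */
        for (int out = 0; out < N; out++) {
            if (shared_q[out] > 0) { shared_q[out]--; shared_used--; }
            if (dedic_q[out]  > 0)   dedic_q[out]--;
        }
    }
    printf("drops with shared buffer:    %ld\n", drop_shared);
    printf("drops with dedicated queues: %ld\n", drop_dedic);
    return 0;
}
```

Under the same total memory, the shared scheme typically loses fewer cells because idle outputs' capacity can absorb bursts aimed at busy outputs, which is the intuition behind shared buffering in switch chips.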