A Case for MLP-Aware Cache Replacement

Performance loss due to long-latency memory accesses can be reduced by servicing multiple memory accesses concurrently. The notion of generating and servicing long-latency cache misses in parallel is called Memory Level Parallelism (MLP). MLP is not uniform across cache misses: some misses occur in isolation while others occur in parallel with other misses. Isolated misses are more costly for performance than parallel misses. However, traditional cache replacement is not aware of the MLP-dependent cost differential between different misses. Cache replacement, if made MLP-aware, can improve performance by reducing the number of performance-critical isolated misses. This paper makes two key contributions. First, it proposes a framework for MLP-aware cache replacement, using a runtime technique to compute the MLP-based cost for each cache miss. It then describes a simple cache replacement mechanism that takes both MLP-based cost and recency into account. Second, it proposes a novel, low-hardware-overhead mechanism, Sampling Based Adaptive Replacement (SBAR), that dynamically chooses between MLP-aware and traditional replacement depending on which policy performs better.
Type Conference
Year 2006
Where ISCA (IEEE)
Authors Moinuddin K. Qureshi, Daniel N. Lynch, Onur Mutlu, Yale N. Patt