Exploring High Bandwidth Pipelined Cache Architecture for Scaled Technology

In this paper, we propose a design technique to pipeline cache memories for high-bandwidth applications. With technology scaling, cache access latencies span multiple clock cycles. The proposed pipelined cache architecture can be accessed every clock cycle, thereby enhancing bandwidth and overall processor performance. The architecture uses banking to reduce bit-line and word-line delay, so that the word-line-to-sense-amplifier delay fits within a single clock cycle. Experimental results show that optimal banking allows the cache to be split into multiple pipeline stages whose delays equal the clock cycle time. The proposed design is fully scalable and can be applied to future technology generations. Power, delay, and area estimates show that, on average, the proposed pipelined cache improves MOPS (millions of operations per unit time per unit area per unit energy) by 40-50% compared to current cache architectures.
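A minimal sketch of the two ideas summarized in the abstract, assuming a simplified first-order delay model rather than the authors' circuit-level one: pick a bank count so the per-bank word-line-to-sense-amplifier delay fits in one clock cycle, and compare designs with the MOPS figure of merit (operations per unit time per unit area per unit energy). The 1/sqrt(banks) delay-scaling rule, the overhead factors, and all numeric constants below are illustrative assumptions, not values from the paper.

```python
# First-order sketch (assumptions, not the paper's model): choose a bank count
# so one bank's word-line-to-sense-amplifier delay fits in a single clock
# cycle, then compare an unpipelined and a pipelined cache with a MOPS-style
# figure of merit.
import math


def banks_for_one_cycle(array_delay_ns: float, clock_ns: float) -> int:
    """Smallest power-of-two bank count whose per-bank array delay fits one cycle.

    Assumption: splitting the data array into b banks shortens the bit-lines and
    word-lines so the array delay scales roughly as 1/sqrt(b).
    """
    banks = 1
    while array_delay_ns / math.sqrt(banks) > clock_ns:
        banks *= 2
    return banks


def mops(ops: float, time_ns: float, area_mm2: float, energy_nj: float) -> float:
    """MOPS figure of merit: millions of operations per unit time,
    per unit area, per unit energy (units chosen only for illustration)."""
    return (ops / 1e6) / (time_ns * area_mm2 * energy_nj)


if __name__ == "__main__":
    CLOCK_NS = 0.5          # hypothetical clock period (2 GHz)
    ARRAY_DELAY_NS = 1.6    # hypothetical unbanked word-line-to-sense-amp delay

    banks = banks_for_one_cycle(ARRAY_DELAY_NS, CLOCK_NS)
    stages = math.ceil(ARRAY_DELAY_NS / CLOCK_NS)  # access pipeline depth
    print(f"banks = {banks}, pipeline stages ~ {stages}")

    # Throughput comparison over N accesses (illustrative, not the paper's 40-50%):
    # the unpipelined cache starts a new access only every `stages` cycles, while
    # the pipelined cache starts one every cycle but is charged an assumed 10%
    # area and energy overhead for bank decoders and pipeline latches.
    N = 1_000_000
    baseline = mops(N, N * stages * CLOCK_NS, area_mm2=1.0, energy_nj=N * 0.10)
    pipelined = mops(N, N * CLOCK_NS, area_mm2=1.1, energy_nj=N * 0.11)
    print(f"baseline = {baseline:.3e}, pipelined = {pipelined:.3e}, "
          f"ratio = {pipelined / baseline:.2f}x")
```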
Type Conference
Year 2003
Where DATE (IEEE)
Authors Amit Agarwal, Kaushik Roy, T. N. Vijaykumar