Sciweavers

111 search results - page 3 / 23
» Using Multiple Memory Access Instructions for Reducing Code ...
ISLPED 2003 (ACM)
Reducing data cache energy consumption via cached load/store queue
High-performance processors use a large set-associative L1 data cache with multiple ports. As clock speeds and size increase, such a cache consumes a significant percentage of t...
Dan Nicolaescu, Alexander V. Veidenbaum, Alexandru...
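The entry above sketches a microarchitectural idea: serve a load from a small queue of recent memory accesses when possible, so the large multi-ported L1 does not have to be activated. The C fragment below is only an illustrative software model of that lookup; the structure names, queue depth, and FIFO replacement are assumptions for this sketch, not the paper's actual hardware design.

```c
/* Hypothetical sketch of a "cached load/store queue": before paying the
 * energy cost of a multi-ported set-associative L1 access, check whether
 * the address still resides in a small queue of recent loads/stores.
 * All names and sizes are illustrative, not taken from the paper. */
#include <stdint.h>
#include <stdbool.h>

#define LSQ_ENTRIES 16          /* assumed queue depth */

typedef struct {
    uint64_t addr;              /* word-aligned address          */
    uint64_t data;              /* value loaded or to be stored  */
    bool     valid;
} lsq_entry_t;

static lsq_entry_t lsq[LSQ_ENTRIES];
static unsigned    lsq_head;    /* next slot to overwrite (FIFO) */

/* Look up an address in the queue; a hit avoids an L1 data-cache access. */
static bool lsq_lookup(uint64_t addr, uint64_t *data_out)
{
    for (unsigned i = 0; i < LSQ_ENTRIES; i++) {
        if (lsq[i].valid && lsq[i].addr == addr) {
            *data_out = lsq[i].data;
            return true;        /* served from the queue: cheap access */
        }
    }
    return false;               /* miss: fall back to the L1 cache */
}

/* Record a completed load or store so later accesses can reuse it. */
static void lsq_insert(uint64_t addr, uint64_t data)
{
    lsq[lsq_head] = (lsq_entry_t){ .addr = addr, .data = data, .valid = true };
    lsq_head = (lsq_head + 1) % LSQ_ENTRIES;
}
```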
CASES 2006 (ACM)
A dynamic code placement technique for scratchpad memory using postpass optimization
In this paper, we propose a fully automatic dynamic scratchpad memory (SPM) management technique for instructions. Our technique loads required code segments into the SPM on deman...
Bernhard Egger, Chihun Kim, Choonki Jang, Yoonsung...
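The technique summarized above keeps only the code that is about to execute in the scratchpad, copying segments in from main memory as they are needed. Below is a minimal sketch of such an on-demand loader; the single-slot scratchpad, segment descriptor, and memcpy-based copy are simplifying assumptions for illustration, not the authors' postpass mechanism, which would typically rely on DMA and compiler-generated stubs.

```c
/* Minimal sketch of on-demand code loading into a scratchpad memory (SPM),
 * in the spirit of an overlay manager. Names and sizes are assumptions. */
#include <stddef.h>
#include <string.h>

#define SPM_SIZE 4096
static unsigned char spm[SPM_SIZE];          /* stand-in for the on-chip SPM */

typedef struct {
    const unsigned char *src;                /* segment image in main memory */
    size_t               size;               /* segment size in bytes        */
} code_segment_t;

static code_segment_t *current;              /* segment occupying the SPM    */

/* Conceptually invoked on every inter-segment call: make sure the target
 * segment is resident in the SPM before jumping into it. */
static void *spm_ensure_loaded(code_segment_t *seg)
{
    if (current != seg) {                    /* miss: previous occupant is evicted */
        memcpy(spm, seg->src, seg->size);    /* a real system would use DMA here   */
        current = seg;
    }
    return spm;                              /* the segment's code now lives here  */
}
```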
CODES 2006 (IEEE)
Application specific forwarding network and instruction encoding for multi-pipe ASIPs
Small area and code size are two critical design issues in most embedded system designs. In this paper, we tackle these issues by customizing forwarding networks and instructio...
Swarnalatha Radhakrishnan, Hui Guo, Sri Parameswar...
WMPI 2004 (ACM)
Scalable cache memory design for large-scale SMT architectures
The cache hierarchy design in existing SMT and superscalar processors is optimized for latency, but not for bandwidth. The size of the L1 data cache has not scaled over the past dec...
Muhamed F. Mudawar
CGO 2005 (IEEE)
Compiler Managed Dynamic Instruction Placement in a Low-Power Code Cache
Modern embedded microprocessors use low-power on-chip memories called scratch-pad memories to store frequently executed instructions and data. Unlike traditional caches, scratch-p...
Rajiv A. Ravindran, Pracheeti D. Nagarkar, Ganesh ...
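The compiler-managed approach above has to decide which instructions deserve the limited scratch-pad space. One common way to frame that decision is a knapsack-style selection driven by profile counts; the greedy density heuristic below is a hedged sketch of that framing, not the placement algorithm from the paper.

```c
/* Sketch of a profile-guided placement decision for a scratch-pad: greedily
 * pick code regions with the best execution-count-per-byte ratio until the
 * scratch-pad capacity is exhausted. Region fields and the greedy heuristic
 * are assumptions for illustration only. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    const char *name;       /* function or trace name           */
    unsigned    size;       /* code size in bytes               */
    unsigned    exec_count; /* profiled execution frequency     */
    int         in_spm;     /* placement decision (output)      */
} region_t;

static int by_density_desc(const void *a, const void *b)
{
    const region_t *x = a, *y = b;
    double dx = (double)x->exec_count / x->size;
    double dy = (double)y->exec_count / y->size;
    return (dy > dx) - (dy < dx);   /* sort by density, highest first */
}

/* Greedy knapsack-style placement into 'capacity' bytes of scratch-pad. */
static void place_regions(region_t *r, size_t n, unsigned capacity)
{
    qsort(r, n, sizeof *r, by_density_desc);
    unsigned used = 0;
    for (size_t i = 0; i < n; i++) {
        if (used + r[i].size <= capacity) {
            r[i].in_spm = 1;
            used += r[i].size;
        }
    }
}

int main(void)
{
    region_t regions[] = {
        { "hot_loop",  256, 90000, 0 },
        { "init",     1024,     3, 0 },
        { "hash",      128, 40000, 0 },
    };
    place_regions(regions, 3, 512);
    for (int i = 0; i < 3; i++)
        printf("%-8s -> %s\n", regions[i].name,
               regions[i].in_spm ? "scratch-pad" : "main memory");
    return 0;
}
```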