Sciweavers

25 search results - page 1 / 5
Search: Streamlining Data Cache Access with Fast Address Calculation

ISCA 1995 (IEEE)
Streamlining Data Cache Access with Fast Address Calculation
For many programs, especially integer codes, untolerated load instruction latencies account for a significant portion of total execution time. In this paper, we present the desig...
Todd M. Austin, Dionisios N. Pnevmatikatos, Gurindar S. Sohi
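
The snippet above is cut off before it describes the mechanism, but the title points at the well-known idea of overlapping address calculation with the cache access: speculate on the set index before the full base+offset addition completes, and verify once the real address is available. The Python sketch below is a minimal model of that general idea under assumed cache parameters (LINE_BITS, INDEX_BITS); it is not the paper's actual circuit.

```python
# Minimal model of speculative ("fast") address calculation for a load.
# Cache geometry and function names are illustrative assumptions.

LINE_BITS = 5    # 32-byte lines: the low 5 address bits are the block offset
INDEX_BITS = 7   # 128 sets: the next 7 bits select the set

def real_set_index(base: int, offset: int) -> int:
    """Set index taken from the fully computed effective address."""
    addr = (base + offset) & 0xFFFFFFFF
    return (addr >> LINE_BITS) & ((1 << INDEX_BITS) - 1)

def fast_set_index(base: int, offset: int) -> int:
    """Speculative set index: add only the index fields of base and offset,
    ignoring any carry generated by the block-offset bits below them."""
    idx_mask = (1 << INDEX_BITS) - 1
    return (((base >> LINE_BITS) & idx_mask) +
            ((offset >> LINE_BITS) & idx_mask)) & idx_mask

def access(base: int, offset: int) -> bool:
    """Start the cache probe with the fast index; once the full add completes,
    verify it. True = speculation held, False = the access must be replayed."""
    return fast_set_index(base, offset) == real_set_index(base, offset)

if __name__ == "__main__":
    print(access(0x1000_0000, 8))    # True: small offset, no carry into the index
    print(access(0x1000_0FFC, 8))    # False: carry from the block offset flips the index
```

Because most loads use small offsets off aligned base registers, the speculative index matches the real one in the common case, which is what makes this kind of scheme attractive.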

MICRO 1997 (IEEE)
Streamlining Inter-Operation Memory Communication via Data Dependence Prediction
We revisit memory hierarchy design, viewing memory as an inter-operation communication agent. This perspective leads to the development of novel methods of performing inter-operati...
Andreas Moshovos, Gurindar S. Sohi
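
The abstract is truncated before it explains the methods. A common form of data dependence prediction associates a load's PC with the PC of the store that previously fed it, so a later instance of the load can pick up the store's value directly instead of waiting on the cache, with a verification step afterwards. The toy Python model below illustrates that association; the class and field names are assumptions, not the paper's terminology.

```python
# Toy model of store->load dependence prediction.
# Structure and names (DependencePredictor, synonym slots) are illustrative.

class DependencePredictor:
    """Learns which store PC feeds which load PC, so a later instance of the
    load can obtain its value speculatively from the store's slot."""

    def __init__(self):
        self.pairs = {}     # load PC -> store PC observed to feed it
        self.synonyms = {}  # store PC -> last value it produced

    def train(self, load_pc, store_pc):
        # Record a dependence detected when a load obtained its value
        # from an in-flight store.
        self.pairs[load_pc] = store_pc

    def on_store(self, store_pc, value):
        # A store deposits its value into its communication slot.
        self.synonyms[store_pc] = value

    def predict_load(self, load_pc):
        # Return a speculative value if a dependence is predicted, else None.
        store_pc = self.pairs.get(load_pc)
        return self.synonyms.get(store_pc) if store_pc is not None else None


if __name__ == "__main__":
    memory = {0x80: 0}                          # backing memory for verification
    dp = DependencePredictor()
    dp.train(load_pc=0x40C, store_pc=0x3F8)     # learned from a prior dynamic instance

    dp.on_store(0x3F8, value=7)                 # the store executes...
    memory[0x80] = 7
    speculative = dp.predict_load(0x40C)        # ...and the load gets its value directly
    assert speculative == memory[0x80]          # verification confirms the prediction
    print("forwarded value:", speculative)
```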

ICCD 2006 (IEEE)
Fast Speculative Address Generation and Way Caching for Reducing L1 Data Cache Energy
L1 data caches in high-performance processors continue to grow in set associativity. Higher associativity can significantly increase the cache energy consumption. Cache access...
Dan Nicolaescu, Babak Salamat, Alexander V. Veiden...
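
The snippet stops before describing the proposal. The title's "way caching" suggests remembering which way of the set-associative L1 a line was last found in, so a repeated access can read a single way instead of all of them, saving dynamic energy. The Python sketch below models that idea with assumed sizes and a dictionary standing in for the hardware way cache; it is not the paper's exact design.

```python
# Toy model of way caching for a set-associative L1: remember the way a line
# was found in so a repeat access probes one way, not all of them.
# Sizes and names are illustrative assumptions.

LINE_BITS = 6          # 64-byte lines
NUM_SETS = 64
NUM_WAYS = 8

cache = [[None] * NUM_WAYS for _ in range(NUM_SETS)]   # cache[set][way] = tag
way_cache = {}                                         # line address -> way
ways_probed = 0                                        # crude proxy for read energy

def split(addr):
    line = addr >> LINE_BITS
    return line, line % NUM_SETS, line // NUM_SETS     # (line, set index, tag)

def lookup(addr):
    """Return True on a hit; probe a single way when the way cache knows it."""
    global ways_probed
    line, s, tag = split(addr)
    w = way_cache.get(line)
    if w is not None and cache[s][w] == tag:
        ways_probed += 1                   # single-way probe
        return True
    ways_probed += NUM_WAYS                # conventional all-ways probe
    for w, t in enumerate(cache[s]):
        if t == tag:
            way_cache[line] = w            # learn the way for next time
            return True
    return False

def fill(addr, way=0):
    line, s, tag = split(addr)
    cache[s][way] = tag
    way_cache[line] = way

if __name__ == "__main__":
    fill(0x1234_0000)
    lookup(0x1234_0000)                    # hits, probing just one way
    print("ways probed:", ways_probed)     # 1 instead of NUM_WAYS
```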

JSA 2010
On reducing load/store latencies of cache accesses
Effective address calculation for load and store instructions must compete with other instructions for the ALU, and hence extra latencies might be incurred on data cache accesse...
Yuan-Shin Hwang, Jia-Jhe Li

TII 2010
Address-Independent Estimation of the Worst-case Memory Performance
Real-time systems are subject to temporal constraints and require a schedulability analysis to ensure that task execution finishes within specified lower and upper bounds...
Basilio B. Fraguela, Diego Andrade, Ramon Doallo