The Memory Abstraction

The memory abstraction: an association of <name, value> pairs. Names are typically byte addresses, and values are often aligned on multiples of their size. Memory is accessed as a sequence of Reads and Writes: a Write binds a value to an address, and a Read of an address returns the value most recently bound to it.


  1. Reducing Translation Time. Machines with TLBs go one step further to reduce the number of cycles per cache access: they overlap the cache access with the TLB access. The high-order bits of the virtual address are used to look up the TLB while the low-order bits are used as the index into the cache.

  2. Overlapped Cache & TLB Access. [Figure: a 1 KB cache with 4-byte blocks is indexed with the 10-bit index and 2-bit offset taken from the 12-bit page offset, in parallel with an associative TLB lookup on the 20-bit virtual page number; both produce a hit/miss signal and the physical address.] IF cache hit AND (cache tag = PA) THEN deliver data to the CPU; ELSE IF [cache miss OR (cache tag ≠ PA)] AND TLB hit THEN access memory with the PA from the TLB; ELSE do the standard VA translation.

  3. Problems With Overlapped TLB Access. Overlapped access only works as long as the address bits used to index into the cache do not change as a result of VA translation. This usually limits things to small caches, large page sizes, or highly set-associative caches if you want a large cache. Example: suppose everything is the same except that the cache is increased from 4 KB to 8 KB. The index now needs one more bit, and that bit falls in the virtual page number: it is changed by VA translation but is needed for cache lookup. Solutions: go to 8 KB pages; go to a 2-way set-associative cache (two 1 KB banks indexed with 10 bits); or have software guarantee VA[13] = PA[13].
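
As a concrete check of this constraint, here is a minimal sketch (the function name and layout are illustrative, not from the slides): the cache can be indexed in parallel with the TLB only if its index and block-offset bits fall entirely within the page offset, i.e., cache size / associativity ≤ page size.

    #include <stdio.h>

    /* Can the cache be indexed with untranslated (page-offset) bits only? */
    int overlap_ok(unsigned cache_bytes, unsigned assoc, unsigned page_bytes)
    {
        /* set-index + block-offset bits = log2(cache_bytes / assoc); they must
           not exceed the page-offset bits = log2(page_bytes)                  */
        return (cache_bytes / assoc) <= page_bytes;
    }

    int main(void)
    {
        printf("%d\n", overlap_ok(4096, 1, 4096));  /* 4 KB direct mapped, 4 KB pages: 1 */
        printf("%d\n", overlap_ok(8192, 1, 4096));  /* 8 KB direct mapped: bit changes, 0 */
        printf("%d\n", overlap_ok(8192, 2, 4096));  /* 8 KB 2-way set associative: 1     */
        return 0;
    }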

  4. SPEC: System Performance Evaluation Cooperative • First Round 1989 – 10 programs yielding a single number ("SPECmarks") • Second Round 1992 – SPECint92 (6 integer programs) and SPECfp92 (14 floating point programs) » Compiler flags unlimited. For example, the March '93 flags for a DEC 4000 Model 610: spice: unix.c:/def=(sysv,has_bcopy,"bcopy(a,b,c)=memcpy(b,a,c)" wave5: /ali=(all,dcom=nat)/ag=a/ur=4/ur=200 nasa7: /norecu/ag=a/ur=4/ur2=200/lc=blas • Third Round 1995 – new set of programs: SPECint95 (8 integer programs) and SPECfp95 (10 floating point) – "benchmarks useful for 3 years" – Single flag setting for all programs: SPECint_base95, SPECfp_base95

  5. SPEC: System Performance Evaluation Cooperative • Fourth Round 2000: SPEC CPU2000 – 12 integer programs – 14 floating point programs – 2 choices on compilation: "aggressive" (SPECint2000, SPECfp2000) and "conservative" (SPECint_base2000, SPECfp_base2000); flags must be the same for all programs, no more than 4 flags, same compiler for conservative, can change for aggressive – multiple data sets, so a compiler can be trained on one set when collecting profile data to drive optimization

  6. How to Summarize Performance • Arithmetic mean (weighted arithmetic mean) tracks execution time: Σ(Ti)/n or Σ(Wi × Ti) • Harmonic mean (weighted harmonic mean) of rates (e.g., MFLOPS) tracks execution time: n/Σ(1/Ri) or n/Σ(Wi/Ri) • Normalized execution time is handy for scaling performance (e.g., X times faster than a SPARCstation 10) • But do not take the arithmetic mean of normalized execution times; use the geometric mean: (Π Tj/Nj)^(1/n)
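
To make the formulas concrete, here is a small sketch in C (the helper names are illustrative): arithmetic mean of times, harmonic mean of rates, and geometric mean of normalized execution times.

    #include <math.h>
    #include <stdio.h>

    /* Arithmetic mean of execution times T[i] */
    double arith_mean(const double *t, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++) s += t[i];
        return s / n;
    }

    /* Harmonic mean of rates R[i] (e.g., MFLOPS) */
    double harm_mean(const double *r, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++) s += 1.0 / r[i];
        return n / s;
    }

    /* Geometric mean of normalized execution times T[j]/N[j] */
    double geom_mean(const double *t, const double *ref, int n) {
        double log_sum = 0.0;
        for (int i = 0; i < n; i++) log_sum += log(t[i] / ref[i]);
        return exp(log_sum / n);
    }

    int main(void) {
        double t[]   = {2.0, 8.0};      /* times on the machine under test */
        double ref[] = {1.0, 2.0};      /* times on the reference machine  */
        double r[]   = {100.0, 400.0};  /* rates, e.g. MFLOPS              */
        printf("arithmetic mean time = %.2f\n", arith_mean(t, 2));      /* 5.00  */
        printf("harmonic mean rate   = %.1f\n", harm_mean(r, 2));       /* 160.0 */
        printf("geometric mean of normalized times = %.2f\n",
               geom_mean(t, ref, 2));                                   /* 2.83  */
        return 0;
    }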

  7. SPEC First Round • One program: 99% of time in a single line of code • A new front-end compiler could improve it dramatically. [Chart: SPEC ratio (0-800) for gcc, doduc, espresso, nasa7, spice, eqntott, li, fpppp, tomcatv, matrix300.]

  8. Performance Evaluation • "For better or worse, benchmarks shape a field" • Good products are created when you have: – Good benchmarks – Good ways to summarize performance • Given that sales are in part a function of performance relative to the competition, vendors invest in improving the product as reported by the performance summary • If the benchmarks or summary are inadequate, then a vendor must choose between improving the product for real programs vs. improving the product to get more sales; sales almost always win! • Execution time is the measure of computer performance!

  9. Summary: Caches • The Principle of Locality: – Programs access a relatively small portion of the address space at any instant of time. » Temporal Locality: locality in time » Spatial Locality: locality in space • Three Major Categories of Cache Misses: – Compulsory misses: sad facts of life. Example: cold start misses. – Capacity misses: increase cache size – Conflict misses: increase cache size and/or associativity • Write Policy: – Write Through: needs a write buffer – Write Back: control can be complex • Today CPU time is a function of (ops, cache misses) vs. just f(ops): what does this mean for compilers, data structures, and algorithms?

  10. Summary #3/4: The Cache Design Space • Several interacting dimensions – cache size – block size – associativity – replacement policy – write-through vs. write-back • The optimal choice is a compromise – depends on access characteristics » workload » use (I-cache, D-cache, TLB) – depends on technology / cost • Simplicity often wins. [Figure: qualitative good/bad curves vs. cache size, associativity, block size, and two generic factors A and B.]

  11. Review #4/4: TLB, Virtual Memory • Caches, TLBs, and virtual memory can all be understood by examining how they deal with 4 questions: 1) Where can a block be placed? 2) How is a block found? 3) What block is replaced on a miss? 4) How are writes handled? • Page tables map virtual addresses to physical addresses • TLBs make virtual memory practical – locality in data => locality in the addresses of data, temporal and spatial • TLB misses are significant in processor performance – funny times, as most systems can't access all of the 2nd-level cache without TLB misses! • Today VM allows many processes to share a single memory without having to swap all processes to disk; today VM protection is more important than the memory hierarchy

  12. Who Cares About the Memory Hierarchy? [Chart, 1980-2000, log performance scale: µProc performance ("Moore's Law") grows ~60%/yr while DRAM ("Less' Law?") grows ~7%/yr, so the processor-memory performance gap grows ~50%/year.] • 1980: no cache in a µproc; 1995: 2-level cache on chip (1989: first Intel µproc with an on-chip cache)

  13. Generations of Microprocessors • Time of a full cache miss in instructions executed: 1st Alpha: 340 ns / 5.0 ns = 68 clks × 2 instr/clk = 136 instructions; 2nd Alpha: 266 ns / 3.3 ns = 80 clks × 4 = 320; 3rd Alpha: 180 ns / 1.7 ns = 108 clks × 6 = 648 • 1/2X latency × 3X clock rate × 3X instr/clock ⇒ ≈5X

  14. Processor-Memory Performance Gap "Tax"
      Processor          % Area (cost)   % Transistors (power)
      Alpha 21164        37%             77%
      StrongArm SA110    61%             94%
      Pentium Pro        64%             88%   (2 dies per package: Proc/I$/D$ + L2$)
  • Caches have no "inherent value"; they only try to close the performance gap

  15. What is a cache? • Small, fast storage used to improve average access time to slow memory. • Exploits spatial and temporal locality • In computer architecture, almost everything is a cache! – Registers: "a cache" on variables – software managed – First-level cache: a cache on the second-level cache – Second-level cache: a cache on memory – Memory: a cache on disk (virtual memory) – TLB: a cache on the page table – Branch prediction: a cache on prediction information? [Figure: hierarchy from Proc/Regs through L1-Cache, L2-Cache, and Memory to Disk/Tape, etc.; lower levels are bigger, upper levels are faster.]

  16. Traditional Four Questions for Memory Hierarchy Designers • Q1: Where can a block be placed in the upper level? (Block placement) – Fully Associative, Set Associative, Direct Mapped • Q2: How is a block found if it is in the upper level? (Block identification) – Tag/Block • Q3: Which block should be replaced on a miss? (Block replacement) – Random, LRU • Q4: What happens on a write? (Write strategy) – Write Back or Write Through (with Write Buffer)

  17. Review: Cache performance
• Miss-oriented approach to memory access:
  CPUtime = IC × (CPI_Execution + (MemAccess/Inst) × MissRate × MissPenalty) × CycleTime
  CPUtime = IC × (CPI_Execution + (MemMisses/Inst) × MissPenalty) × CycleTime
  – CPI_Execution includes ALU and memory instructions
• Separating out the memory component entirely (AMAT = Average Memory Access Time; CPI_AluOps does not include memory instructions):
  CPUtime = IC × ((AluOps/Inst) × CPI_AluOps + (MemAccess/Inst) × AMAT) × CycleTime
  AMAT = HitTime + MissRate × MissPenalty
       = (HitTime_Inst + MissRate_Inst × MissPenalty_Inst)
       + (HitTime_Data + MissRate_Data × MissPenalty_Data)

  18. Impact on Performance • Suppose a processor executes at – Clock Rate = 200 MHz (5 ns per cycle), ideal (no misses) CPI = 1.1 – 50% arith/logic, 30% ld/st, 20% control • Suppose that 10% of memory operations get a 50-cycle miss penalty • Suppose that 1% of instructions get the same miss penalty • CPI = ideal CPI + average stalls per instruction = 1.1 (cycles/ins) + [0.30 (DataMops/ins) × 0.10 (miss/DataMop) × 50 (cycles/miss)] + [1 (InstMop/ins) × 0.01 (miss/InstMop) × 50 (cycles/miss)] = (1.1 + 1.5 + 0.5) cycles/ins = 3.1 • 58% of the time the processor is stalled waiting for memory! • AMAT = (1/1.3) × [1 + 0.01 × 50] + (0.3/1.3) × [1 + 0.1 × 50] = 2.54
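
The slide's arithmetic, written out as a small self-contained sketch (all numbers come from the example above):

    #include <stdio.h>

    int main(void) {
        double ideal_cpi    = 1.1;
        double ld_st_frac   = 0.30;   /* loads/stores per instruction      */
        double data_miss    = 0.10;   /* miss rate for data references     */
        double inst_miss    = 0.01;   /* miss rate for instruction fetches */
        double miss_penalty = 50.0;   /* cycles                            */

        double data_stalls = ld_st_frac * data_miss * miss_penalty;  /* 1.5 */
        double inst_stalls = 1.0 * inst_miss * miss_penalty;         /* 0.5 */
        double cpi = ideal_cpi + data_stalls + inst_stalls;          /* 3.1 */

        /* AMAT weights the 1.3 memory accesses per instruction:
           1.0 instruction fetch + 0.3 data references                     */
        double amat = (1.0 / 1.3) * (1.0 + inst_miss * miss_penalty)
                    + (0.3 / 1.3) * (1.0 + data_miss * miss_penalty); /* 2.54 */

        printf("CPI = %.1f, AMAT = %.2f cycles\n", cpi, amat);
        return 0;
    }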

  19. Unified vs Split Caches • Unified vs. separate I & D caches. [Figure: processor with split I-Cache-1 and D-Cache-1 over a unified Cache-2, vs. processor with a unified Cache-1 over a unified Cache-2.] • Example: – 16 KB I + 16 KB D: inst miss rate = 0.64%, data miss rate = 6.47% – 32 KB unified: aggregate miss rate = 1.99% • Which is better (ignoring L2)? – Assume 33% data ops ⇒ 75% of accesses are instruction fetches (1.0/1.33) – hit time = 1, miss time = 50 – Note that a data hit has 1 extra stall in the unified cache (only one port) AMAT_Harvard = 75% × (1 + 0.64% × 50) + 25% × (1 + 6.47% × 50) = 2.05 AMAT_Unified = 75% × (1 + 1.99% × 50) + 25% × (1 + 1 + 1.99% × 50) = 2.24
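
The same comparison in code form (miss rates from the slide; the extra cycle models a data access stalling behind an instruction fetch in the single-ported unified cache):

    #include <stdio.h>

    int main(void) {
        double inst_frac = 0.75, data_frac = 0.25;   /* 1.0/1.33 and 0.33/1.33 */
        double hit = 1.0, penalty = 50.0;

        double amat_split = inst_frac * (hit + 0.0064 * penalty)
                          + data_frac * (hit + 0.0647 * penalty);         /* 2.05 */

        /* unified cache: data accesses see one extra stall cycle (one port) */
        double amat_unified = inst_frac * (hit + 0.0199 * penalty)
                            + data_frac * (hit + 1.0 + 0.0199 * penalty); /* 2.24 */

        printf("split = %.2f, unified = %.2f\n", amat_split, amat_unified);
        return 0;
    }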

  20. How to Improve Cache Performance? AMAT = HitTime + MissRate × MissPenalty 1. Reduce the miss rate, 2. Reduce the miss penalty, or 3. Reduce the time to hit in the cache.

  21. Where do misses come from? • Classifying Misses: 3 Cs – Compulsory — the first access to a block cannot be in the cache, so the block must be brought into the cache. Also called cold start misses or first reference misses. (Misses in even an infinite cache) – Capacity — if the cache cannot contain all the blocks needed during execution of a program, capacity misses will occur due to blocks being discarded and later retrieved. (Misses in a fully associative cache of size X) – Conflict — if the block-placement strategy is set associative or direct mapped, conflict misses (in addition to compulsory & capacity misses) will occur because a block can be discarded and later retrieved if too many blocks map to its set. Also called collision misses or interference misses. (Misses in an N-way associative cache of size X) • 4th "C": – Coherence – misses caused by cache coherence.

  22. 3Cs Absolute Miss Rate (SPEC92) [Chart: miss rate (0-0.14) vs. cache size (1-128 KB) for 1-way, 2-way, 4-way, and 8-way associativity, with the compulsory, capacity, and conflict components stacked; conflict misses shrink with associativity.]

  23. Cache Size [Chart: the same 3Cs miss-rate data vs. cache size (1-128 KB), 1-way through 8-way.] • Old rule of thumb: 2x size => 25% cut in miss rate • What does it reduce?

  24. Cache Organization? • Assume total cache size not changed: • What happens if: 1) Change Block Size: 2) Change Associativity: 3) Change Compiler: Which of 3Cs is obviously affected?

  25. Larger Block Size (fixed size & assoc) [Chart: miss rate (0-25%) vs. block size (16-256 bytes) for 1K to 256K caches: larger blocks reduce compulsory misses but increase conflict misses, especially in small caches.] What else drives up block size?

  26. Associativity [Chart: 3Cs absolute miss rate vs. cache size (1-128 KB) for 1-way, 2-way, 4-way, and 8-way; higher associativity removes conflict misses on top of the capacity and compulsory components.]

  27. 3Cs Relative Miss Rate [Chart: miss-rate components normalized to 100% vs. cache size (1-128 KB), 1-way through 8-way.] Flaws: for a fixed block size. Good: insight => invention.

  28. Associativity vs Cycle Time • Beware: execution time is the only final measure! • Why is cycle time tied to hit time? • Will clock cycle time increase? – Hill [1988] suggested hit time for 2-way vs. 1-way: external cache +10%, internal +2% – suggested big and dumb caches (effective cycle time of associative caches; Przybylski, ISCA)

  29. Example: Avg. Memory Access Time vs. Miss Rate • Example: assume CCT = 1.10 for 2-way, 1.12 for 4-way, 1.14 for 8-way vs. the CCT of direct mapped
      Cache Size (KB)   1-way   2-way   4-way   8-way
      1                 2.33    2.15    2.07    2.01
      2                 1.98    1.86    1.76    1.68
      4                 1.72    1.67    1.61    1.53
      8                 1.46    1.48    1.47    1.43
      16                1.29    1.32    1.32    1.32
      32                1.20    1.24    1.25    1.27
      64                1.14    1.20    1.21    1.23
      128               1.10    1.17    1.18    1.20
  (Red means A.M.A.T. not improved by more associativity)

  30. Fast Hit Time + Low Conflict => Victim Cache • How to combine the fast hit time of direct mapped yet still avoid conflict misses? • Add a small buffer to hold data discarded from the cache • Jouppi [1990]: a 4-entry victim cache removed 20% to 95% of conflicts for a 4 KB direct-mapped data cache • Used in Alpha, HP machines. [Figure: direct-mapped cache (TAGS/DATA) backed by a 4-entry fully associative victim cache, each entry one cache line of data with its own tag and comparator, in front of the next lower level in the hierarchy.]

  31. Reducing Misses via "Pseudo-Associativity" • How to combine the fast hit time of direct mapped with the lower conflict misses of a 2-way SA cache? • Divide the cache: on a miss, check the other half of the cache to see if the block is there; if so, it is a pseudo-hit (slow hit). [Figure: access time line showing Hit Time < Pseudo Hit Time < Miss Penalty.] • Drawback: the CPU pipeline is hard to design if a hit takes 1 or 2 cycles – Better for caches not tied directly to the processor (L2) – Used in the MIPS R10000 L2 cache; similar in UltraSPARC

  32. Reducing Misses by Hardware Prefetching of Instructions & Data • E.g., instruction prefetching – Alpha 21064 fetches 2 blocks on a miss – the extra block is placed in a "stream buffer" – on a miss, check the stream buffer • Works with data blocks too: – Jouppi [1990]: a single data stream buffer caught 25% of the misses from a 4 KB cache; 4 streams caught 43% – Palacharla & Kessler [1994]: for scientific programs, 8 streams caught 50% to 70% of the misses from two 64 KB, 4-way set-associative caches • Prefetching relies on having extra memory bandwidth that can be used without penalty

  33. Reducing Misses by Software Prefetching Data • Data prefetch – Load data into a register (HP PA-RISC loads) – Cache prefetch: load into the cache (MIPS IV, PowerPC, SPARC v.9) – Special prefetching instructions cannot cause faults; a form of speculative execution • Prefetching comes in two flavors: – Binding prefetch: requests load directly into a register » must be the correct address and register! – Non-binding prefetch: load into the cache » can be incorrect; cannot fault • Issuing prefetch instructions takes time – Is the cost of issuing prefetches < the savings in reduced misses? – Wider superscalar machines make the extra issue bandwidth easier to find
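
As a concrete non-binding example, GCC and Clang expose a prefetch builtin; this sketch prefetches a fixed distance ahead of a streaming sum (the distance of 16 elements is an arbitrary illustrative choice, and real tuning would depend on the miss latency and loop cost):

    /* Non-binding software prefetch using the GCC/Clang builtin.
       The prefetch cannot fault; a bad address is simply ignored. */
    double sum_with_prefetch(const double *a, int n)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++) {
            if (i + 16 < n)
                __builtin_prefetch(&a[i + 16], 0 /* read */, 1 /* low temporal locality */);
            s += a[i];
        }
        return s;
    }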

  34. Reducing Misses by Compiler Optimizations • McFarling [1989] reduced cache misses by 75% on an 8 KB direct-mapped cache with 4-byte blocks, in software • Instructions – Reorder procedures in memory so as to reduce conflict misses – Profiling to look at conflicts (using tools they developed) • Data – Merging Arrays: improve spatial locality by using a single array of compound elements vs. 2 arrays – Loop Interchange: change the nesting of loops to access data in the order it is stored in memory – Loop Fusion: combine 2 independent loops that have the same looping and some variables in common – Blocking: improve temporal locality by accessing "blocks" of data repeatedly vs. going down whole columns or rows

  35. Merging Arrays Example
  /* Before: 2 sequential arrays */
  int val[SIZE];
  int key[SIZE];

  /* After: 1 array of structures */
  struct merge {
    int val;
    int key;
  };
  struct merge merged_array[SIZE];

  Reducing conflicts between val & key; improve spatial locality

  36. Loop Interchange Example
  /* Before */
  for (k = 0; k < 100; k = k+1)
    for (j = 0; j < 100; j = j+1)
      for (i = 0; i < 5000; i = i+1)
        x[i][j] = 2 * x[i][j];

  /* After */
  for (k = 0; k < 100; k = k+1)
    for (i = 0; i < 5000; i = i+1)
      for (j = 0; j < 100; j = j+1)
        x[i][j] = 2 * x[i][j];

  Sequential accesses instead of striding through memory every 100 words; improved spatial locality

  37. Loop Fusion Example
  /* Before */
  for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
      a[i][j] = 1/b[i][j] * c[i][j];
  for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
      d[i][j] = a[i][j] + c[i][j];

  /* After */
  for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
      a[i][j] = 1/b[i][j] * c[i][j];
      d[i][j] = a[i][j] + c[i][j];
    }

  2 misses per access to a & c vs. one miss per access; improve temporal locality

  38. Blocking Example
  /* Before */
  for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
      r = 0;
      for (k = 0; k < N; k = k+1)
        r = r + y[i][k]*z[k][j];
      x[i][j] = r;
    }

  • Two inner loops: – Read all N×N elements of z[] – Read N elements of 1 row of y[] repeatedly – Write N elements of 1 row of x[] • Capacity misses are a function of N & cache size: – 2N³ + N² words accessed (assuming no conflict; otherwise …) • Idea: compute on a B×B submatrix that fits in the cache

  39. Blocking Example
  /* After */
  for (jj = 0; jj < N; jj = jj+B)
    for (kk = 0; kk < N; kk = kk+B)
      for (i = 0; i < N; i = i+1)
        for (j = jj; j < min(jj+B-1,N); j = j+1) {
          r = 0;
          for (k = kk; k < min(kk+B-1,N); k = k+1)
            r = r + y[i][k]*z[k][j];
          x[i][j] = x[i][j] + r;
        }

  • B is called the Blocking Factor • Capacity misses drop from 2N³ + N² to N³/B + 2N² • Conflict misses too?
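
A small sketch evaluating the slide's capacity-miss estimates for a sample N and blocking factor B (these are just the formulas above, assuming each referenced word that does not fit in the cache misses once):

    #include <stdio.h>

    int main(void) {
        double N = 500.0, B = 50.0;                 /* illustrative sizes          */
        double before = 2.0 * N * N * N + N * N;    /* unblocked estimate: 2N^3+N^2 */
        double after  = N * N * N / B + 2.0 * N * N; /* blocked estimate: N^3/B+2N^2 */
        printf("capacity misses: %.0f before, %.0f after (%.0fx fewer)\n",
               before, after, before / after);
        return 0;
    }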

  40. Reducing Conflict Misses by Blocking [Chart: miss rate (0-0.1) vs. blocking factor (0-150) for a direct-mapped cache vs. a fully associative cache.] • Conflict misses in caches that are not fully associative vs. blocking size – Lam et al. [1991]: a blocking factor of 24 had a fifth the misses of 48, despite both fitting in the cache

  41. Summary of Compiler Optimizations to Reduce Cache Misses (by hand) [Chart: performance improvement (1x-3x) from merged arrays, loop interchange, loop fusion, and blocking on vpenta (nasa7), gmty (nasa7), tomcatv, btrix (nasa7), mxm (nasa7), spice, cholesky (nasa7), and compress.]

  42. Summary: Miss Rate Reduction
  CPUtime = IC × (CPI_Execution + (Memory accesses / Instruction) × Miss rate × Miss penalty) × Clock cycle time
• 3 Cs: Compulsory, Capacity, Conflict 0. Larger cache 1. Reduce misses via larger block size 2. Reduce misses via higher associativity 3. Reducing misses via victim cache 4. Reducing misses via pseudo-associativity 5. Reducing misses by HW prefetching instructions, data 6. Reducing misses by SW prefetching data 7. Reducing misses by compiler optimizations • Prefetching comes in two flavors: – Binding prefetch: requests load directly into a register » must be the correct address and register! – Non-binding prefetch: load into the cache » can be incorrect. Frees HW/SW to guess!

  43. Review: Improving Cache Performance 1. Reduce the miss rate, 2. Reduce the miss penalty, or 3. Reduce the time to hit in the cache.

  44. Write Policy: Write-Through vs Write-Back • Write-through: all writes update cache and underlying memory/cache – Can always discard cached data - most up-to-date data is in memory – Cache control bit: only a valid bit • Write-back: all writes simply update cache – Can’t just discard cached data - may have to write it back to memory – Cache control bits: both valid and dirty bits • Other Advantages: – Write-through: » memory (or other processors) always have latest data » Simpler management of cache – Write-back: » much lower bandwidth, since data often overwritten multiple times » Better tolerance to long-latency memory?

  45. Write Policy 2: Write Allocate vs Non-Allocate (What happens on a write miss) • Write allocate: allocate a new cache line in the cache – Usually means that you have to do a "read miss" to fill in the rest of the cache line! – Alternative: per-word valid bits • Write non-allocate (or "write-around"): – Simply send the write data through to the underlying memory/cache – don't allocate a new cache line!
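
A minimal sketch of how the write policy and the write-allocate decision interact on a single cache line (the structure, policy enum, and memory stubs are illustrative, not any particular machine's controller):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* One cache line: write-through needs only 'valid'; write-back also needs
       'dirty' so the block can be flushed before it is discarded. */
    struct line { uint64_t tag; bool valid; bool dirty; };

    enum policy { WRITE_THROUGH, WRITE_BACK };

    /* Stand-ins for the next lower level of the hierarchy (illustrative only). */
    static void mem_write_block(uint64_t tag) { printf("write block %llu\n", (unsigned long long)tag); }
    static void mem_read_block(uint64_t tag)  { printf("read block %llu\n",  (unsigned long long)tag); }

    void handle_write(struct line *l, uint64_t tag, enum policy p, bool write_allocate)
    {
        bool hit = l->valid && l->tag == tag;

        if (!hit) {
            if (!write_allocate) {            /* write-around: bypass the cache */
                mem_write_block(tag);
                return;
            }
            if (p == WRITE_BACK && l->valid && l->dirty)
                mem_write_block(l->tag);      /* evict the dirty victim first   */
            mem_read_block(tag);              /* fill in the rest of the line   */
            l->tag = tag; l->valid = true; l->dirty = false;
        }

        /* ... perform the write into the cache line ... */
        if (p == WRITE_THROUGH)
            mem_write_block(tag);             /* propagate immediately          */
        else
            l->dirty = true;                  /* defer until eviction           */
    }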

  46. 1. Reducing Miss Penalty: Read Priority over Write on Miss [Figure: CPU with in/out paths to the cache, a write buffer between the cache and DRAM (or lower-level memory).]

  47. 1. Reducing Miss Penalty: Read Priority over Write on Miss • Write-through with write buffers => RAW conflicts with main-memory reads on cache misses – If we simply wait for the write buffer to empty, we might increase the read miss penalty (by 50% on the old MIPS 1000) – Check write buffer contents before the read; if there are no conflicts, let the memory access continue • Write-back caches want the buffer to hold displaced blocks – Read miss replacing a dirty block – Normal: write the dirty block to memory, and then do the read – Instead: copy the dirty block to a write buffer, then do the read, and then do the write – The CPU stalls less since it restarts as soon as the read is done

  48. 2. Reduce Miss Penalty: Early Restart and Critical Word First • Don't wait for the full block to be loaded before restarting the CPU – Early restart: as soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution – Critical Word First: request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while filling the rest of the words in the block. Also called wrapped fetch and requested word first • Generally useful only with large blocks • Spatial locality => we tend to want the next sequential word, so it is not clear how much early restart helps

  49. 3. Reduce Miss Penalty: Non-blocking Caches to reduce stalls on misses • A non-blocking cache or lockup-free cache allows the data cache to continue to supply cache hits during a miss – requires F/E bits on registers or out-of-order execution – requires multi-bank memories • "Hit under miss" reduces the effective miss penalty by working during the miss vs. ignoring CPU requests • "Hit under multiple miss" or "miss under miss" may further lower the effective miss penalty by overlapping multiple misses – significantly increases the complexity of the cache controller, as there can be multiple outstanding memory accesses – requires multiple memory banks (otherwise it cannot be supported) – the Pentium Pro allows 4 outstanding memory misses

  50. Value of Hit Under Miss for SPEC [Chart: AMAT with "hit under i misses" (0->1, 1->2, 2->64) relative to the blocking base case for each SPEC92 benchmark: doduc, nasa7, espresso, ear, ora, eqntott, compress, tomcatv, wave5, fpppp, hydro2d, su2cor, spice2g6, xlisp, swm256, alvinn, mdljsp2, mdljdp2.] • FP programs on average: AMAT = 0.68 -> 0.52 -> 0.34 -> 0.26 • Int programs on average: AMAT = 0.24 -> 0.20 -> 0.19 -> 0.19 • 8 KB data cache, direct mapped, 32-byte blocks, 16-cycle miss penalty

  51. 4: Add a second-level cache • L2 Equations: AMAT = Hit Time_L1 + Miss Rate_L1 × Miss Penalty_L1, where Miss Penalty_L1 = Hit Time_L2 + Miss Rate_L2 × Miss Penalty_L2, so AMAT = Hit Time_L1 + Miss Rate_L1 × (Hit Time_L2 + Miss Rate_L2 × Miss Penalty_L2) • Definitions: – Local miss rate: misses in this cache divided by the total number of memory accesses to this cache (Miss Rate_L2) – Global miss rate: misses in this cache divided by the total number of memory accesses generated by the CPU – The global miss rate is what matters
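
The L2 equations in code form (the numbers are illustrative, chosen only to show that the global L2 miss rate is the local L2 miss rate times the L1 miss rate):

    #include <stdio.h>

    int main(void) {
        double hit_l1 = 1.0,  miss_rate_l1 = 0.04;
        double hit_l2 = 10.0, local_miss_l2 = 0.25, penalty_l2 = 100.0;

        double miss_penalty_l1 = hit_l2 + local_miss_l2 * penalty_l2;   /* 35  */
        double amat = hit_l1 + miss_rate_l1 * miss_penalty_l1;          /* 2.4 */

        /* global L2 miss rate = fraction of *all* CPU accesses missing in L2 */
        double global_miss_l2 = miss_rate_l1 * local_miss_l2;           /* 1%  */

        printf("AMAT = %.2f cycles, global L2 miss rate = %.1f%%\n",
               amat, 100.0 * global_miss_l2);
        return 0;
    }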

  52. Comparing Local and Global Miss Rates • 32 KB 1st-level cache; increasing 2nd-level cache size [Charts, log scale: local vs. global L2 miss rate, and cost & A.M.A.T., as functions of L2 cache size.] • The global miss rate is close to the single-level cache rate provided L2 >> L1 • Don't use the local miss rate • L2 is not tied to the CPU clock cycle! • Cost & A.M.A.T. • Generally: fast hit times and fewer misses • Since hits in L2 are relatively few, target miss reduction

  53. Reducing Misses: Which apply to L2 Cache? • Reducing Miss Rate 1. Reduce Misses via Larger Block Size 2. Reduce Conflict Misses via Higher Associativity 3. Reducing Conflict Misses via Victim Cache 4. Reducing Conflict Misses via Pseudo-Associativity 5. Reducing Misses by HW Prefetching Instr, Data 6. Reducing Misses by SW Prefetching Data 7. Reducing Capacity/Conf. Misses by Compiler Optimizations

  54. L2 cache block size & A.M.A.T. [Chart: relative CPU time vs. L2 block size (16, 32, 64, 128, 256, 512 bytes); it falls from 1.95 at 16-byte blocks to a minimum of about 1.27, rising again at 512-byte blocks.] • 32 KB L1, 8-byte path to memory

  55. Reducing Miss Penalty Summary
  CPUtime = IC × (CPI_Execution + (Memory accesses / Instruction) × Miss rate × Miss penalty) × Clock cycle time
• Four techniques – Read priority over write on miss – Early Restart and Critical Word First on miss – Non-blocking Caches (hit under miss, miss under miss) – Second Level Cache • Can be applied recursively to multilevel caches – Danger is that the time to DRAM will grow with multiple levels in between – First attempts at L2 caches can make things worse, since the increased worst case is worse

  56. What is the Impact of What You've Learned About Caches? [Chart, 1980-2000, log scale: CPU vs. DRAM performance.] • 1960-1985: Speed = ƒ(no. operations) • 1990: pipelined execution & fast clock rates, out-of-order execution, superscalar instruction issue • 1998: Speed = ƒ(non-cached memory accesses) • Superscalar, out-of-order machines hide an L1 data cache miss (≈5 clocks) but not an L2 cache miss (≈50 clocks)?

  57. Cache Optimization Summary
      Technique                          MR   MP   HT   Complexity
      miss rate:
        Larger Block Size                +    –         0
        Higher Associativity             +         –    1
        Victim Caches                    +              2
        Pseudo-Associative Caches        +              2
        HW Prefetching of Instr/Data     +              2
        Compiler Controlled Prefetching  +              3
        Compiler Reduce Misses           +              0
      miss penalty:
        Priority to Read Misses               +         1
        Early Restart & Critical Word 1st     +         2
        Non-Blocking Caches                   +         3
        Second Level Caches                   +         2

  58. Main Memory Background • Random Access Memory (vs. Serial Access Memory) • Different flavors at different levels – Physical makeup (CMOS, DRAM) – Low-level architectures (FPM, EDO, BEDO, SDRAM) • Cache uses SRAM: Static Random Access Memory – No refresh (6 transistors/bit vs. 1 transistor) – Size: DRAM/SRAM ≈ 4-8; Cost & cycle time: SRAM/DRAM ≈ 8-16 • Main memory is DRAM: Dynamic Random Access Memory – Dynamic since it needs to be refreshed periodically (8 ms, ~1% of the time) – Addresses divided into 2 halves (memory as a 2D matrix): » RAS or Row Access Strobe » CAS or Column Access Strobe

  59. Static RAM (SRAM) • Six transistors in a cross-connected fashion – provides regular AND inverted outputs – implemented in a CMOS process. [Figure: single-port 6-T SRAM cell.]

  60. SRAM Read Timing (typical) • t_AA (access time for address): how long it takes to get stable output after a change in address. • t_ACS (access time for chip select): how long it takes to get stable output after CS is asserted. • t_OE (output enable time): how long it takes for the three-state output buffers to leave the high-impedance state when OE and CS are both asserted.

  61. SRAM Read Timing (typical) [Timing diagram: ADDR, CS_L, OE_L, and DOUT waveforms; DOUT becomes valid after max(t_AA, t_ACS), with t_OE and t_OZ on OE_L transitions and t_OH hold time after an address change; WE_L held HIGH throughout.]

  62. Dynamic RAM • SRAM cells exhibit high speed but poor density • DRAM: simple transistor/capacitor pairs in high-density form. [Figure: DRAM cell array with word line, storage capacitor C, bit line, and sense amp.]

  63. Basic DRAM Cell • Planar cell – polysilicon-diffusion capacitance, diffused bitlines • Problem: uses a lot of area (< 1 Mb) • You can't just ride the process curve to shrink C (discussed later). [Figure: (a) cross-section and (b) layout of a planar cell: metal word line over a polysilicon gate and plate, diffused bit line, inversion layer induced by plate bias, field oxide and SiO2.]

  64. Advanced DRAM Cells • Stacked cell (Expand UP)

  65. Advanced DRAM Cells • Trench cell (expand DOWN). [Figure: trench-capacitor cross-section: cell plate Si, capacitor insulator, refilling poly, storage node poly, Si substrate, 2nd field oxide.]

  66. DRAM Operations • Write – charge the bitline HIGH or LOW and set the wordline HIGH • Read – the bit line is precharged to a voltage halfway between HIGH and LOW, and then the word line is set HIGH – depending on the charge in the capacitor, the precharged bitline is pulled slightly higher or lower – the sense amp detects the change. [Figure: DRAM cell with word line, capacitor C, bit line, and sense amp.] • Explains why the cap can't shrink – need to sufficiently drive the bitline – increased density => increased parasitic capacitance

  67. DRAM logical organization (4 Mbit) [Figure: 11 address lines A0-A10 feed a row decoder into a 2,048 × 2,048 memory array; sense amps & I/O and a column decoder connect the selected word line / storage cell to the D and Q pins.] • Square root of bits per RAS/CAS

  68. So, Why do I freaking care? • By its nature, DRAM isn't built for speed – response times depend on capacitive circuit properties, which get worse as density increases • The DRAM process isn't easy to integrate into a CMOS process – DRAM is off-chip – connectors, wires, etc. introduce slowness – IRAM efforts are looking at integrating the two • Memory architectures are designed to minimize the impact of DRAM latency – low level: memory chips – high level: memory system designs – You will pay $$$$$$ and then some $$$ for a good memory system.

  69. So, Why do I freaking care? [Chart, 1980-2000, log scale: CPU vs. DRAM performance.] • 1960-1985: Speed = ƒ(no. operations) • 1990: pipelined execution & fast clock rates, out-of-order execution, superscalar instruction issue • 1998: Speed = ƒ(non-cached memory accesses) • What does this mean for – compilers? operating systems? algorithms? data structures?

  70. 4 Key DRAM Timing Parameters • t_RAC: minimum time from the RAS line falling to valid data output – quoted as the speed of a DRAM when you buy it (the number on the purchase sheet) – a typical 4 Mbit DRAM has t_RAC = 60 ns • t_RC: minimum time from the start of one row access to the start of the next – t_RC = 110 ns for a 4 Mbit DRAM with a t_RAC of 60 ns • t_CAC: minimum time from the CAS line falling to valid data output – 15 ns for a 4 Mbit DRAM with a t_RAC of 60 ns • t_PC: minimum time from the start of one column access to the start of the next – 35 ns for a 4 Mbit DRAM with a t_RAC of 60 ns

  71. DRAM Read Timing • Every DRAM access begins with the assertion of RAS_L • 2 ways to read: early or late relative to CAS [Timing diagram: 256K × 8 DRAM read cycle; RAS_L, CAS_L, address lines (row address, then column address), WE_L, OE_L, and data out, with read access time and output-enable delay marked.] – Early read cycle: OE_L asserted before CAS_L – Late read cycle: OE_L asserted after CAS_L

  72. DRAM Performance • A 60 ns (t_RAC) DRAM can – perform a row access only every 110 ns (t_RC) – perform a column access (t_CAC) in 15 ns, but the time between column accesses is at least 35 ns (t_PC) » in practice, external address delays and turning around buses make it 40 to 50 ns • These times do not include the time to drive the addresses off the microprocessor nor the memory controller overhead! • Can it be made faster?
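
A back-of-the-envelope sketch of the bandwidth these timings allow (t_RC and t_PC are from the slides; the one-byte-per-access width is an assumption, e.g. a ×8 part delivering a single byte per column access):

    #include <stdio.h>

    int main(void) {
        double t_rc = 110e-9;            /* row cycle time (slide value)        */
        double t_pc = 35e-9;             /* page-mode column cycle time         */
        double bytes_per_access = 1.0;   /* assumed x8 part, one byte per access */

        double bw_random = bytes_per_access / t_rc;  /* new row on every access  */
        double bw_page   = bytes_per_access / t_pc;  /* stay within the open row */

        printf("random-row bandwidth: %.1f MB/s\n", bw_random / 1e6);  /* ~9.1  */
        printf("page-mode bandwidth:  %.1f MB/s\n", bw_page   / 1e6);  /* ~28.6 */
        return 0;
    }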

  73. Admin • Hand in homework assignment • New assignment is/will be on the class website.

  74. Fast Page Mode DRAM • Page: All bits on the same ROW (Spatial Locality) – Don’t need to wait for wordline to recharge – Toggle CAS with new column address

  75. Extended Data Out (EDO) • Overlap Data output w/ CAS toggle – Later brother: Burst EDO (CAS toggle used to get next addr)

  76. Synchronous DRAM • Has a clock input – data output is in bursts, with each element clocked • Flavors: SDRAM, DDR [Timing diagrams: write and read bursts.] • PC100: Intel spec to meet 100 MHz memory bus designs; introduced with the i440BX chipset

  77. RAMBUS (RDRAM) • Protocol-based RAM with a narrow (16-bit) bus – high clock rate (400 MHz), but long latency – pipelined operation • Multiple arrays, with data transferred on both edges of the clock. [Figure: RAMBUS bank and an RDRAM memory system.]

  78. RDRAM Timing
