

  1. Jigsaw: Scalable Software-Defined Caches. Nathan Beckmann and Daniel Sanchez, MIT CSAIL. PACT'13, Edinburgh, Scotland, Sep 11, 2013

  2. Summary
     - NUCA is giving us more capacity, but further away
     - Applications have widely varying cache behavior
     - Cache organization should adapt to the application
     - Jigsaw uses physical cache resources as building blocks of virtual caches, or shares
     [Figure: MPKI vs. cache size (0 to 16MB) for libquantum, zeusmp, and sphinx3]

  3. Approach
     - Jigsaw uses physical cache resources as building blocks of virtual caches, or shares
     [Figure: tiled multicore with per-bank share allocations, alongside the libquantum, zeusmp, and sphinx3 miss curves]

  4. Agenda
     - Introduction
     - Background
       - Goals
       - Existing Approaches
     - Jigsaw Design
     - Evaluation

  5. Goals
     - Make effective use of cache capacity
     - Place data for low latency
     - Provide capacity isolation for performance
     - Have a simple implementation

  6. Existing Approaches: S-NUCA
     Spread lines evenly across banks
     - High Capacity
     - High Latency
     - No Isolation
     - Simple

  7. Existing Approaches: Partitioning
     Isolate regions of the cache between applications
     - High Capacity
     - High Latency
     - Isolation
     - Simple
     - Jigsaw needs partitioning; it uses Vantage to get strong guarantees with no loss in associativity

  8. Existing Approaches: Private
     Place lines in the local bank
     - Low Capacity
     - Low Latency
     - Isolation
     - Complex: requires an LLC directory

  9. Existing Approaches: D-NUCA
     Placement, migration, and replication heuristics
     - High Capacity
       - But beware of over-replication and restrictive mappings
     - Low Latency
       - Doesn't fully exploit the capacity vs. latency tradeoff
     - No Isolation
     - Complexity Varies
       - Private-baseline schemes require an LLC directory

  10. Existing Approaches: Summary

      |               | S-NUCA | Partitioning | Private | D-NUCA  |
      |---------------|--------|--------------|---------|---------|
      | High Capacity | Yes    | Yes          | No      | Yes     |
      | Low Latency   | No     | No           | Yes     | Yes     |
      | Isolation     | No     | Yes          | Yes     | No      |
      | Simple        | Yes    | Yes          | No      | Depends |

  11. Jigsaw
      - High Capacity: any share can take the full capacity; no replication
      - Low Latency: shares are allocated near the cores that use them
      - Isolation: partitions within each bank
      - Simple: low-overhead hardware, no LLC directory, software-managed

  12. Agenda
      - Introduction
      - Background
      - Jigsaw Design
        - Operation
        - Monitoring
        - Configuration
      - Evaluation

  13. Jigsaw Components
      [Diagram: accesses feed Monitoring, which produces miss curves for Configuration; Configuration sets size and placement for Operation]


  15. Agenda
      - Introduction
      - Background
      - Jigsaw Design
        - Operation
        - Monitoring
        - Configuration
      - Evaluation

  16. Operation: Access
      Data maps to shares, so no LLC coherence is required
      [Diagram: a core's load of 0x5CA1AB1E misses in L1/L2; the TLB and STB classify the access into one of Shares 1..N before the LLC lookup]

  17. Data Classification
      - Jigsaw classifies data based on access pattern
        - Thread, Process, Global, and Kernel (e.g., 6 thread shares, 2 process shares, 1 global share, 1 kernel share)
      - Data is lazily re-classified on a TLB miss
        - Similar to R-NUCA, but...
          - R-NUCA: Classification → Location
          - Jigsaw: Classification → Share (sized and placed dynamically)
        - Negligible overhead

  18. Operation: Share-Bank Translation Buffer (STB)
      - Gives the unique location of the line in the LLC: (Address, Share) → (Bank, Partition)
      - Hashes lines proportionally across each share's entries
      - 400 bytes; low overhead
      - Raises an exception to software on a miss
      [Diagram: address 0x5CA1AB1E from an L1 miss and share id 2706 from the TLB hash into an associative STB of 64 bank/partition entries; the example maps to bank 3, partition 5]
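
The STB lookup on slide 18 can be sketched as follows. This is an illustrative software model, not the hardware design; the names (`stb_lookup`, the dictionary-based `stb`) and the use of Python's `hash` in place of the hardware hash H are assumptions.

```python
def stb_lookup(stb, share_id, address):
    """Map (share, address) to a (bank, partition) pair.

    Each share's config is a list of (bank, partition) entries
    (64 on the slide). Hashing the address into this list spreads
    lines across banks in proportion to how many entries each
    bank/partition pair holds.
    """
    entries = stb.get(share_id)
    if entries is None:
        # Slide: "exception on miss" -- software handles it
        raise LookupError("STB miss")
    index = hash(address) % len(entries)  # H(address) on the slide
    return entries[index]

# Hypothetical config: share 2706 keeps 3 of 4 entries in
# bank 1 / partition 3, so ~75% of its lines land there.
stb = {2706: [(1, 3), (1, 3), (1, 3), (3, 5)]}
bank, part = stb_lookup(stb, 2706, 0x5CA1AB1E)
```

Proportional hashing is what lets software resize a share's footprint in each bank just by changing how many STB entries point there.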

  19. Agenda
      - Introduction
      - Background
      - Jigsaw Design
        - Operation
        - Monitoring
        - Configuration
      - Evaluation

  20. Monitoring
      - Software requires miss curves for each share
      - Add utility monitors (UMONs) per tile to produce miss curves
      - Dynamic sampling models the full LLC at each bank; see paper
      [Diagram: a sampled UMON tag array (Way 0 ... Way N-1) with per-way hit counters (e.g., 717,543; 117,030; 213,021; 32,103) yielding a misses vs. cache size curve]
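
A miss curve falls out of a UMON's per-way hit counters: by the stack property of LRU, an allocation of k ways captures the hits counted in the first k ways. A minimal sketch, reusing the counter values from the slide's diagram; the total access count (1,200,000) is an assumption for illustration.

```python
def miss_curve(way_hits, total_accesses):
    """miss_curve[k] = misses the share would incur with k ways.

    way_hits[i] is the hit counter for stack position i (way 0 = MRU).
    With 0 ways every access misses; each added way removes its hits.
    """
    curve = [total_accesses]          # 0 ways: everything misses
    hits = 0
    for h in way_hits:
        hits += h
        curve.append(total_accesses - hits)
    return curve

curve = miss_curve([717_543, 117_030, 213_021, 32_103], 1_200_000)
# curve -> [1200000, 482457, 365427, 152406, 120303]
```

Scaling the way granularity by the sampling ratio (the "dynamic sampling" bullet) turns this per-tile sample into a curve for the full LLC.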

  21. Configuration
      - Software decides the share configuration
      - Approach: Size → Place
        - Solving independently is simple
        - Sizing is hard, placing is easy

  22. Configuration: Sizing
      - Partitioning problem: divide cache capacity S among P partitions/shares to maximize hits
      - Use miss curves to describe partition behavior
      - NP-complete in general
      - Existing approaches:
        - Hill climbing is fast but gets stuck in local optima
        - UCP Lookahead is good but scales quadratically: O(P x S^2)
      (Utility-Based Cache Partitioning, Qureshi and Patt, MICRO'06)
      Can we scale Lookahead?
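
The quadratic cost of UCP Lookahead comes from rescanning every remaining allocation size for every share each time capacity is granted: O(P x S) work per grant, up to S grants. A minimal sketch of that structure, assuming miss curves are plain lists indexed by allocation size (names are illustrative, not from either paper):

```python
def lookahead(miss_curves, capacity):
    """Grant capacity greedily to the step with the best average utility."""
    alloc = [0] * len(miss_curves)
    left = capacity
    while left > 0:
        best = None  # (utility, share, step)
        for p, curve in enumerate(miss_curves):     # O(P) shares ...
            a = alloc[p]
            for step in range(1, min(left, len(curve) - 1 - a) + 1):
                gain = curve[a] - curve[a + step]   # hits gained
                util = gain / step                  # hits per unit capacity
                if best is None or util > best[0]:
                    best = (util, p, step)          # ... x O(S) sizes each
        if best is None or best[0] <= 0:
            break                                   # no share benefits further
        _, p, step = best
        alloc[p] += step
        left -= step
    return alloc

# Two shares, 3 units of capacity: the saturating curve wins early units
print(lookahead([[10, 2, 1, 1], [10, 9, 8, 7]], 3))
```

Scanning *ahead* past locally flat regions is what lets Lookahead escape the local optima that trap hill climbing, at the price of the inner O(S) scan.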

  23. Configuration: Lookahead
      - UCP Lookahead: scan the miss curves to find the allocation that maximizes average cache utility (hits per byte)
      [Animated figure: misses vs. size curves; slides 24-36 repeat this slide while stepping through candidate allocations]

  37. Configuration: Lookahead
      - UCP Lookahead: scan the miss curves to find the allocation that maximizes average cache utility (hits per byte)
      [Figure: the step reaching maximum cache utility on the misses vs. size curve; slides 38-39 repeat this frame]

  40. Configuration: Lookahead
      - Observation: Lookahead traces the convex hull of the miss curve
      [Figure: the maximum-utility steps follow the lower convex hull of the misses vs. size curve]

  41. Convex Hulls
      - The convex hull of a curve is the set containing all lines between any two points on the curve; here we want its lower boundary, "the curve connecting the points along the bottom"
      [Figure: two misses vs. size curves with their lower convex hulls]

  42. Configuration: Peekahead
      - There are well-known linear-time algorithms to compute convex hulls
      - The Peekahead algorithm is an exact, linear-time implementation of UCP Lookahead
      [Figure: misses vs. size curves and their convex hulls]

  43. Configuration: Peekahead
      - Peekahead computes all convex hulls encountered during allocation in linear time
        - Starting from every possible allocation
        - Up to any remaining cache capacity
      [Figure: misses vs. size curves with hulls recomputed from intermediate allocations]

  44. Configuration: Peekahead
      - Knowing the convex hull, each allocation step is O(log P)
        - Convex hulls have decreasing slope → decreasing average cache utility → only consider the next point on the hull
        - Use a max-heap to compare between partitions ("What's the best step?")
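
The heap-driven allocation step can be sketched as below, assuming each share's lower hull is already computed as a list of (size, misses) points with decreasing slope. Names are illustrative, and a trailing partial step is skipped for brevity; a real implementation would split it.

```python
import heapq

def allocate(hulls, capacity):
    """Greedy allocation over precomputed hulls; O(log P) per step."""
    alloc = {p: 0 for p in hulls}
    pos = {p: 0 for p in hulls}              # current hull vertex per share
    heap = []
    for p, hull in hulls.items():
        if len(hull) >= 2:
            (x0, y0), (x1, y1) = hull[0], hull[1]
            util = (y0 - y1) / (x1 - x0)     # hits per unit capacity
            heapq.heappush(heap, (-util, p)) # negate: heapq is a min-heap
    left = capacity
    while left > 0 and heap:
        _, p = heapq.heappop(heap)           # best next step across shares
        hull, i = hulls[p], pos[p]
        step = hull[i + 1][0] - hull[i][0]   # jump to the next hull vertex
        if step > left:
            break  # (a full implementation takes a partial step here)
        alloc[p] += step
        left -= step
        pos[p] = i + 1
        if i + 2 < len(hull):                # re-key only the winning share
            (x0, y0), (x1, y1) = hull[i + 1], hull[i + 2]
            heapq.heappush(heap, (-(y0 - y1) / (x1 - x0), p))
    return alloc
```

Because hull slopes only decrease, each share's best future step is always its next hull segment, so the heap never needs more than one key per share.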
