
Efficient Hardware-assisted Logging with Asynchronous and Direct Update for Persistent Memory
Jungi Jeong, Chang Hyun Park, Jaehyuk Huh, and Seungryoul Maeng
International Symposium on Microarchitecture (MICRO) 2018
Storage-Class Memory


Atomic Durability through Logging
• Transaction: all stores in a transaction become durable all together, or not at all
• Ex) Atomic durability in software via write-ahead logging
  • Durability with cache-flush
  • Atomicity and ordering with write-ahead logging
  • Persist-ordering with store-fence (sfence)

Original transaction (insert node B between A and C):
    store B->next = C
    store B->prev = A
    store A->next = B
    store C->prev = B
    cache-flush

With write-ahead logging (a C sketch of this sequence follows):
    Log Write:    store log[0] = A->next
                  store log[1] = C->prev
                  cache-flush
                  sfence
    Data Update:  store B->next = C
                  store B->prev = A
                  store A->next = B
                  store C->prev = B
                  cache-flush
                  sfence
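To make the sequence above concrete, here is a minimal C sketch of the software write-ahead-logging path, assuming the x86 CLWB/SFENCE instructions are available through the _mm_clwb and _mm_sfence intrinsics; the nvm_log area and the node layout are illustrative, not taken from the slides.

```c
#include <immintrin.h>   /* _mm_clwb (needs CLWB support), _mm_sfence */

typedef struct node { struct node *prev, *next; } node_t;

/* Illustrative log area; in a real system this would live in NVM. */
static struct { void **addr; void *old_val; } nvm_log[2];

/* Insert b between a and c with atomic durability. */
void insert_atomic(node_t *a, node_t *b, node_t *c)
{
    /* 1) Log write: record the locations and old values the transaction
     *    will overwrite, and make the log durable first. */
    nvm_log[0].addr = (void **)&a->next;  nvm_log[0].old_val = a->next;
    nvm_log[1].addr = (void **)&c->prev;  nvm_log[1].old_val = c->prev;
    _mm_clwb(&nvm_log[0]);
    _mm_clwb(&nvm_log[1]);
    _mm_sfence();                /* persist-ordering: log before data */

    /* 2) Data update: perform the actual stores, then persist them. */
    b->next = c;
    b->prev = a;
    a->next = b;
    c->prev = b;
    _mm_clwb(b);                 /* b->prev and b->next share one line here */
    _mm_clwb(&a->next);
    _mm_clwb(&c->prev);
    _mm_sfence();                /* all data durable: transaction commits */
}
```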

HW-assisted Logging
• Simple programming model (a sketch of this interface follows below):
    Transaction_begin()
        store B->next = C
        ...
        store C->prev = B
    Transaction_end()
• HW is responsible for 1) log-write and 2) data-update
  • [Figure: the Log Ctrl. in the processor, below the caches, issues 1) Log-Write to the NVM log and 2) Data-Update to the NVM data region]
• Advantages over software logging: fine-grained ordering & fewer CPU cycles
  • [Figure: (a) ordering with SW serializes Log A, Store A, Fence, Log B, Store B; (b) ordering with HW orders each Log/Store pair independently, without a global fence]
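A hedged C sketch of this programming model: the programmer only brackets the stores, and logging, persist ordering, and the data update are delegated to the hardware log controller. tx_begin() and tx_end() stand in for whatever new instructions or hooks the hardware exposes; they are assumptions for illustration, not the paper's exact interface.

```c
typedef struct node { struct node *prev, *next; } node_t;

extern void tx_begin(void);   /* hypothetical: start a durable transaction    */
extern void tx_end(void);     /* hypothetical: commit; HW persists log & data */

void insert_hw(node_t *a, node_t *b, node_t *c)
{
    tx_begin();
    b->next = c;              /* plain stores: no clwb/sfence in the hot path */
    b->prev = a;
    a->next = b;
    c->prev = b;
    tx_end();
}
```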

Past Proposal: Undo-based HW-Logging
• Store the old value in the logs (log entry: Addr, Old Value)
• Update data in NVM before commit → synchronous data-update
  • 1) Store the old value in the NVM log
  • 2) Update the data in NVM
  • [Figure: Processor (Caches + Log Ctrl.) writing to the NVM log and the NVM data region]
• Drawback: long critical path due to synchronous data-update (see the commit-path sketch below)
• A. Joshi et al., HPCA 2017; S. Shin et al., ISCA 2017
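A hedged sketch of what an undo-based scheme does per transactional store, written as software for readability; in ATOM and Proteus this work is performed by the hardware log controller. persist() stands in for a hypothetical cache-line write-back plus fence to NVM.

```c
#include <stddef.h>
#include <stdint.h>

typedef struct { uint64_t *addr; uint64_t old_val; } undo_entry_t;

extern void persist(const void *p, size_t len);  /* hypothetical: clwb + sfence */

/* One transactional store under undo logging. */
void undo_store(undo_entry_t *log, size_t *n, uint64_t *addr, uint64_t new_val)
{
    log[*n].addr    = addr;
    log[*n].old_val = *addr;             /* capture the old value            */
    persist(&log[*n], sizeof log[*n]);   /* log entry durable first          */
    (*n)++;

    *addr = new_val;                     /* update the data in place ...     */
    persist(addr, sizeof *addr);         /* ... and persist it BEFORE commit:
                                            synchronous update, so every
                                            store sits on the critical path  */
}
```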

Past Proposal: Redo-based HW-Logging
• Store the new value in the logs (log entry: Addr, New Value)
• Update data in NVM after commit → asynchronous data-update
• However, the data is updated by reading log entries back from NVM → indirect data-update
  • 1) Store the new value in the NVM log
  • 2) Update the data in NVM (after commit, by replaying the NVM log)
  • [Figure: Processor (Caches + Log Ctrl.), NVM log, NVM data region]
• Drawback: wastes extra NVM bandwidth for reading the logs from NVM (see the sketch below)
• K. Doshi et al., HPCA 2016
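A hedged sketch of the redo-based flow in the same style; persist() is the same hypothetical write-back-plus-fence primitive as in the previous sketch. The point to notice is that the data update happens after commit by reading the log back from NVM, which consumes extra NVM read bandwidth.

```c
#include <stddef.h>
#include <stdint.h>

typedef struct { uint64_t *addr; uint64_t new_val; } redo_entry_t;

extern void persist(const void *p, size_t len);  /* hypothetical: clwb + sfence */

/* One transactional store under redo logging. */
void redo_store(redo_entry_t *log, size_t *n, uint64_t *addr, uint64_t new_val)
{
    log[*n] = (redo_entry_t){ addr, new_val };   /* log the NEW value         */
    persist(&log[*n], sizeof log[*n]);
    (*n)++;
    *addr = new_val;                             /* cached copy only; not yet
                                                    forced to NVM data        */
}

/* After commit: apply updates by reading the log back from NVM
 * (indirect data-update -> extra NVM read traffic). */
void redo_apply(const redo_entry_t *log, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        *log[i].addr = log[i].new_val;
        persist(log[i].addr, sizeof *log[i].addr);
    }
}
```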

Past Proposal: Undo-Redo HW-Logging
• Store both the old and the new value in the logs (log entry: Addr, Old Value, New Value) → larger log size
• Update data in NVM after commit
  • 1) Store both the old and the new value in the NVM log (staged through a volatile Log Buffer in the processor)
  • 2) Update the data in NVM
  • [Figure: Processor (Caches + Log Buffer), NVM log, NVM data region]
• Drawback: requires more NVM writes for storing the logs in NVM
• M. Ogleari et al., HPCA 2018
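For comparison with the previous two sketches, an undo+redo log entry carries both values; a toy word-granularity layout (field names illustrative) makes the "larger log size" point concrete.

```c
#include <stdint.h>

typedef struct {
    uint64_t *addr;
    uint64_t  old_val;   /* lets an uncommitted transaction be rolled back    */
    uint64_t  new_val;   /* lets committed data be updated asynchronously and
                            directly, without re-reading the NVM log          */
} undo_redo_entry_t;

/* In this toy model: 24 bytes per logged word versus 16 bytes for undo-only
 * or redo-only entries, i.e. more NVM log-write traffic per transaction. */
```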

Past Proposals: Summary

    Scheme                 Log-Write   Data-Update              Drawback
    ATOM    [HPCA 2017]    Undo        Synchronous, Direct      Long critical path
    Proteus [ISCA 2017]    Undo        Synchronous, Direct      Long critical path
    WrAP    [HPCA 2016]    Redo        Asynchronous, Indirect   Wastes NVM bandwidth
    FWB     [HPCA 2018]    Undo-Redo   Asynchronous, Direct     More log writes

[Charts: Cycles per Transaction (CPT), lower is better, comparing the schemes on (left) large & sequential workloads, which are undo-friendly, and (right) small & random workloads, which are redo-friendly]

Trade-offs exist!

Design Goal & Challenges
• Goal: redo logging with asynchronous & direct update to NVM
• Challenge #1: tracking the write-sets of previous transactions
  • Without data updates, the logs keep growing
• Challenge #2: handling early eviction
  • Eviction of uncommitted changes from the volatile CPU caches
• [Figure: transactions Tx1, Tx2, Tx3 run in the processor caches; their log entries (Tx1, Tx2, ...) accumulate in the NVM log while the NVM data region stays stale, even after Transaction #1 has committed]
(An illustrative sketch of both challenges follows.)
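To build intuition for the two challenges, here is a toy software model of per-transaction write-set tracking; it is an assumption-laden sketch, not the mechanism the paper proposes. writeback_line() and truncate_log() are hypothetical hooks.

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_WRITE_SET 64

typedef struct {
    uint64_t tx_id;
    int      committed;                   /* commit record durable in NVM     */
    size_t   n;
    void    *lines[MAX_WRITE_SET];        /* cache lines dirtied by this tx   */
} write_set_t;

extern void writeback_line(void *line);   /* hypothetical: flush line to NVM  */
extern void truncate_log(uint64_t tx_id); /* hypothetical: reclaim log space  */

/* Asynchronous, DIRECT data update for one committed transaction: write its
 * dirty lines from the cache straight to NVM data (no log read-back), then
 * free its log region. Until this runs, the transaction's log keeps occupying
 * NVM log space -- Challenge #1. */
void apply_committed(write_set_t *tx)
{
    if (!tx->committed)                   /* uncommitted lines must not reach
                                             NVM data; if the cache evicts one
                                             early, it needs special handling
                                             -- Challenge #2                  */
        return;
    for (size_t i = 0; i < tx->n; i++)
        writeback_line(tx->lines[i]);
    truncate_log(tx->tx_id);
}
```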
