

  1. Pattern-based Write Scheduling and Read Balance-oriented Wear-leveling for Solid State Drives. Jun Li 1, Xiaofei Xu 1, Xiaoning Peng 2, and Jianwei Liao 1,2. 1 College of Computer and Information Science, Southwest University; 2 College of Computer Science and Engineering, Huaihua University

  2. Outline • Introduction and motivation • Design • Evaluation • Conclusion

  3. Outline • Introduction and motivation • Design • Evaluation • Conclusion

  4. Introduction • SSDs are widely used in smartphones and PCs • Bit density has evolved from SLC to 3D TLC and QLC • As density grows, endurance and performance degrade • Purpose: improve performance and endurance

  5. Introduction (Figures: SSD architecture overview; internal parallelism)

  6. Motivation • Data can be split into frequently and infrequently accessed, and frequent requests may appear as patterns (Figures: access distribution of logical sector addresses; read distribution of logical sector addresses; percentage of frequent addresses in patterns) • Pattern: a group of frequently co-accessed addresses

  7. Outline • Introduction and motivation • Design • Evaluation • Conclusion

  8. Our work: basic idea • Schedule write requests in the same pattern to the same block, cutting down garbage collection overhead • Migrate hot read data to different parallelism units during WL, boosting read performance

  9. Pattern-based Write Scheduling: background • Garbage collection overhead (Figure: a block without hot/cold splitting holds a mix of valid and invalid pages; a block with hot/cold splitting holds only invalid pages) • Without splitting: high GC overhead, since valid pages must be moved before the erase • With splitting: low GC overhead, since the block can be erased directly
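A minimal sketch (hypothetical cost model, not from the paper) of why hot/cold splitting lowers GC cost: reclaiming a block requires copying its still-valid pages before the erase, so a block whose pages were invalidated together can be erased directly.

    # Hypothetical GC cost model: reclaiming a block costs one erase
    # plus one copy per still-valid page that must be relocated first.
    ERASE_COST, COPY_COST = 1.0, 0.5  # assumed relative costs

    def gc_cost(block):
        """block is a list of page states: True = valid, False = invalid."""
        return ERASE_COST + COPY_COST * sum(block)

    mixed_block = [True, False, True, False, True, False, False, True]
    hot_block = [False] * 8  # hot pages invalidated together

    print(gc_cost(mixed_block))  # 3.0: erase plus 4 page copies
    print(gc_cost(hot_block))    # 1.0: direct erase, no copies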

  10. Pattern-based Write Scheduling: workflow • Pattern Mining – Mine write patterns from a sample of the requests in each time window – Uses FP-growth, a mature frequent-itemset mining algorithm • Pattern Matching – Match the requests in the I/O queue against the mined patterns – Introduces a matching matrix (a sketch follows below) • Scheduling – Schedule the requests in the same pattern to the same block – We argue that these requests are more likely to be invalidated together than when only hot data is grouped
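A minimal sketch (hypothetical names; patterns are assumed to have been mined already, e.g. by FP-growth) of the matching-matrix idea: every queued write address is tested against every pattern, and requests that fall into the same pattern are dispatched to the same block.

    # Hypothetical sketch: a mined pattern is a set of frequently
    # co-accessed logical addresses, e.g. produced by FP-growth.
    patterns = [frozenset({"A", "B", "D", "F"}), frozenset({"I", "O", "K", "Y"})]

    def build_matching_matrix(io_queue, patterns):
        """matrix[i][j] = 1 if request i's address belongs to pattern j."""
        return [[int(req in p) for p in patterns] for req in io_queue]

    def schedule(io_queue, patterns):
        """Send requests of the same pattern to the same (hot) block."""
        matrix = build_matching_matrix(io_queue, patterns)
        hot_blocks = {j: [] for j in range(len(patterns))}
        cold_block = []  # unmatched requests
        for req, row in zip(io_queue, matrix):
            hits = [j for j, hit in enumerate(row) if hit]
            (hot_blocks[hits[0]] if hits else cold_block).append(req)
        return hot_blocks, cold_block

    hot, cold = schedule(["A", "B", "C", "D", "F"], patterns)
    print(hot)   # {0: ['A', 'B', 'D', 'F'], 1: []}
    print(cold)  # ['C']

Requests grouped this way tend to be overwritten in the same future window, so the whole hot block becomes invalid at once and can be erased without page copies.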

  11. Pattern-based Write Scheduling: illustration (Figure: a request stream over time, e.g. A B C D E F A' G B' D' F' H ..., is matched against mined patterns such as {A, B, D, F} and {I, O, K, Y}; matched requests are scheduled to the same hot block, which is later invalidated as a whole, while unmatched requests go to cold blocks)

  12. Read Balance-oriented Wear-leveling: background • Why do we need wear leveling? – P/E (Program/Erase) cycles wear out flash cells, and the basic erase unit is the block – If blocks wear unevenly and some reach their erase limit, the available capacity shrinks • Static Wear Leveling – A BET (Block Erase Table) is used to select the WL target block (Figure: data migrated between a hot block and a cold block)
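A minimal sketch (hypothetical structure and threshold) of BET-driven static wear leveling: the Block Erase Table records per-block erase counts, and once the wear gap exceeds a threshold, static (cold) data is migrated into the most-worn block so that future writes land on the least-worn one.

    # Hypothetical sketch of BET-driven static wear leveling.
    WL_THRESHOLD = 100  # assumed maximum tolerated erase-count gap

    def select_wl_pair(bet):
        """bet: the Block Erase Table, mapping block id -> erase count."""
        hot = max(bet, key=bet.get)   # most-erased block
        cold = min(bet, key=bet.get)  # least-erased block (static data)
        return (hot, cold) if bet[hot] - bet[cold] > WL_THRESHOLD else None

    bet = {"blk0": 520, "blk1": 140, "blk2": 610, "blk3": 95}
    pair = select_wl_pair(bet)
    if pair:
        hot, cold = pair
        # Migrate static data into the worn block; subsequent hot
        # writes are then absorbed by the lightly worn block.
        print(f"migrate static data from {cold} into {hot}")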

  13. Read Balance-oriented Wear-leveling: workflow • A PRT (Page Read Table) defines the block type (hot/cold) – one bit per page: 1 means the page was read, 0 means it was not • Three steps (flowchart: identify target block → is it hot/cold? → regroup pages; a sketch follows below) – Identify wear-leveling target blocks (hot/cold) – Regroup hot and cold pages into two separate blocks – Migrate these blocks to different chip units
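A minimal sketch (hypothetical bit layout and names) of the PRT-driven regrouping: the Page Read Table keeps one bit per page (1 = read in the current window), a target block's pages are split into read-hot and read-cold groups, and the two groups are written to blocks on different chips so hot reads are spread across parallelism units.

    # Hypothetical sketch of the PRT-driven three-step WL workflow.
    def regroup_by_prt(pages, prt_bits):
        """Split a target block's pages by their PRT bit (1 = read)."""
        hot = [p for p, bit in zip(pages, prt_bits) if bit == 1]
        cold = [p for p, bit in zip(pages, prt_bits) if bit == 0]
        return hot, cold

    def migrate(hot_pages, cold_pages, chips):
        """Place hot and cold read pages on different chips, so future
        reads of hot data exploit inter-chip parallelism."""
        return {chips[0]: hot_pages, chips[1]: cold_pages}

    pages = ["p0", "p1", "p2", "p3"]
    prt = [1, 0, 1, 0]  # p0 and p2 were read in this window
    hot, cold = regroup_by_prt(pages, prt)
    print(migrate(hot, cold, ["chip0", "chip1"]))
    # {'chip0': ['p0', 'p2'], 'chip1': ['p1', 'p3']}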

  14. Read Balance-oriented Wear-leveling: illustration (Figure: within channel i, across chips 0 .. n: ① identify target blocks, distinguishing heavily erased from less erased blocks; ② re-group pages in a block, separating hot read pages from cold read pages and other data; ③ move data across chips)

  15. Outline • Introduction and motivation • Design • Evaluation • Conclusion

  16. Implementation • Simulator: SSDSim [1] • One workload from MSRC [2] and three workloads from daily collection • Geometry: 2 channels, 4 chips per channel, and 4 planes per chip (see the sketch below) • Four compared schemes – Baseline: plain flash without scheduling or SWL – SWL [3]: static wear leveling – PGIS [4] + SWL: PGIS with native SWL – Pattern: the proposed scheme
[1] Y. Hu, H. Jiang, D. Feng, et al. Exploring and exploiting the multi-level parallelism inside SSDs for improved performance and endurance. IEEE Transactions on Computers, 62(6):1141–1155, 2013.
[2] http://iotta.snia.org/traces/388
[3] Y. Chang, J. Hsieh, and T. Kuo. Improving flash wear leveling by proactively moving static data. IEEE Transactions on Computers, 59(1):53–65, 2010.
[4] J. Guo, Y. Hu, B. Mao, and S. Wu. Parallelism and garbage collection aware I/O scheduler with improved SSD performance. In Proceedings of the 2017 IEEE International Parallel and Distributed Processing Symposium (IPDPS 2017), pp. 1184–1193, 2017.
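A minimal sketch (illustrative parameter names, not SSDSim's actual configuration format) of the simulated geometry used in the evaluation:

    # Hypothetical config sketch mirroring the evaluated geometry;
    # names are illustrative, not SSDSim's actual config keys.
    ssd_config = {
        "channels": 2,
        "chips_per_channel": 4,
        "planes_per_chip": 4,
    }
    total_planes = (ssd_config["channels"]
                    * ssd_config["chips_per_channel"]
                    * ssd_config["planes_per_chip"])
    print(total_planes)  # 32 parallel planes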

  17. Experiments • Evaluate GC overhead and WL overhead (Figures: GC time and WL time, in seconds) • Pattern has the lowest GC time and WL time of all schemes except the baseline

  18. Experiments • Evaluate read performance (Figure: normalized read latency) • Compared to the other schemes, Pattern improves read response time by 12.8%

  19. Experiments • Evaluate endurance (Figure: distribution of block erases) • Pattern incurs the fewest block erases of all schemes except the baseline • PGIS+SWL and Pattern have almost the same standard deviation of block erases, but Pattern delivers better read performance than PGIS+SWL

  20. Experiments • Overhead (Figures: memory overhead of PRT and BET, in KB; mapping overhead, in seconds) • Pattern incurs slightly more mapping time than the other schemes, by less than 138 ms

  21. Outline • Introduction and motivation • Design • Evaluation • Conclusion

  22. Conclusion • We study how to achieve low garbage collection overhead and better read performance through wear-leveling • We propose pattern-based write scheduling and read balance-oriented wear-leveling • Results show that the proposed approach reduces garbage collection overhead by 11.3% and improves read performance by 12.8%, on average

  23. Thank you for your attention! Any questions?

  24. Appendix 1: Matching Matrix
