Physical Separation in Modern Storage Systems




1. Physical Separation in Modern Storage Systems
Lanyue Lu
Committee: Andrea C. Arpaci-Dusseau, Remzi H. Arpaci-Dusseau, Shan Lu, Michael Swift, Xinyu Zhang
University of Wisconsin - Madison
December 1, 2015

2. Local Storage Systems Are Important
[Figure: local storage systems (ext4, NTFS, SQLite) underlie higher-level systems such as GFS, HDFS, VMware, Docker, Riak, and MongoDB]

3. Data Layout of Storage Systems
Data layout is fundamental
➡ how to organize data on disks and in memory
➡ impacts both reliability and performance
Locality is the key
➡ store relevant data together
➡ locality is pursued in various storage systems: file systems, key-value stores, databases
➡ better performance (caching and prefetching)
➡ high space utilization
➡ optimized for hard drives

4. Problems of Data Locality
New environments
➡ fast storage hardware (e.g., SSDs)
➡ servers with many cores and large memory
➡ sharing infrastructure is the reality: virtualization, containers, data centers
Unexpected entanglement
➡ shared failures (e.g., VMs, containers)
➡ bundled performance (e.g., apps)
➡ lack of flexibility to manage data differently

5. New Technique: Physical Separation
Redesign data layout
➡ rethink existing data layouts
➡ key: separate data structures
➡ apply in both file systems and key-value stores
Many new benefits
➡ IceFS: disentangle structures and transactions
  ➡ isolated failures, faster recovery
  ➡ customized performance
➡ WiscKey: key-value separation
  ➡ minimize I/O amplification
  ➡ leverage devices’ internal parallelism

6. Research Contributions
(1) A study of Linux file system evolution
➡ the first comprehensive file-system study
➡ published in FAST ’13 (best paper award)
(2) Physical disentanglement in IceFS
➡ localized failure, localized recovery
➡ specialized journaling performance
➡ published in OSDI ’14
(3) Key-value separation in WiscKey
➡ an SSD-conscious LSM-tree
➡ over 100x performance improvement
➡ submitted to FAST ’16

7. Outline
Introduction
Disentanglement in IceFS
➡ File System Disentanglement
➡ The Ice File System
➡ Evaluation
Key-Value Separation in WiscKey
➡ Key-Value Separation Idea
➡ Challenges and Optimization
➡ Evaluation
Conclusion

8. Isolation Is Important
Reliability
➡ independent failures and recovery
Performance
➡ isolated performance
Isolation in various scenarios
➡ computing: virtual machines, Linux containers
➡ security: BSD jail, sandbox
➡ cloud: multi-tenant systems

9. File Systems Lack Isolation
Local file systems are core building blocks
➡ manage user data
➡ long-standing and stable
➡ foundation for distributed file systems
Existing abstractions provide only logical isolation
➡ file, directory, namespace
➡ just an illusion
Physical entanglement in local file systems prevents isolation
➡ entangled data structures and transactions

10. Metadata Entanglement
Shared metadata for multiple files
➡ e.g., multiple files share one inode block
➡ many shared structures: bitmap, directory block
[Figure: foo.txt and bar.c have inodes in the same 4 KB inode block; an I/O failure or metadata corruption in that block affects both files]
Problem: faults in shared structures lead to shared failures and recovery
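To make the sharing concrete, here is a minimal sketch of how inode packing entangles files: with the common 4 KB block and a 256-byte on-disk inode (both assumed here, not read from a real file system), 16 inodes land in each inode-table block, so two unrelated files can easily map to the same block. The inode numbers below are hypothetical.

```c
/* Minimal sketch: which inode-table block holds a given inode.
 * BLOCK_SIZE and INODE_SIZE are assumed defaults, not values read
 * from a real file system. */
#include <stdio.h>

#define BLOCK_SIZE 4096UL   /* assumed file-system block size */
#define INODE_SIZE 256UL    /* assumed on-disk inode size     */

/* Index of the inode-table block that stores inode `ino`. */
static unsigned long inode_block(unsigned long ino)
{
    unsigned long inodes_per_block = BLOCK_SIZE / INODE_SIZE;  /* 16 */
    return (ino - 1) / inodes_per_block;  /* inode numbers start at 1 */
}

int main(void)
{
    unsigned long foo_ino = 17;   /* hypothetical inode of foo.txt */
    unsigned long bar_ino = 20;   /* hypothetical inode of bar.c   */

    printf("foo.txt -> inode-table block %lu\n", inode_block(foo_ino));
    printf("bar.c   -> inode-table block %lu\n", inode_block(bar_ino));
    /* Both print block 1: one bad 4 KB block entangles both files. */
    return 0;
}
```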

11. Transaction Entanglement
A shared transaction for all updates
[Figure: the data of foo.txt and bar.c are both dirty in memory; fsync(bar.c) commits one shared transaction that also flushes foo.txt's data to disk]
Problem: shared transactions lead to entangled performance
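A small user-level sketch of the entanglement the figure describes: one process dirties a lot of data in foo.txt, another writes a few bytes to bar.c and calls fsync. With Ext3's single compound transaction, the small fsync can end up waiting for the unrelated dirty data. File names and sizes are illustrative, and whether the delay is visible depends on the file system and journal mode, so treat this as an illustration rather than a benchmark.

```c
/* Sketch: timing fsync() on a tiny file while another file has lots
 * of dirty data.  Under a shared compound transaction, the fsync may
 * also flush the unrelated data.  Error handling is omitted. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    /* 1. Dirty ~256 MB in foo.txt without syncing it. */
    int foo = open("foo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    char *buf = calloc(1, 1 << 20);                 /* 1 MB buffer */
    for (int i = 0; i < 256; i++)
        write(foo, buf, 1 << 20);

    /* 2. Write a few bytes to bar.c and fsync only bar.c. */
    int bar = open("bar.c", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    write(bar, "int x;\n", 7);

    double t0 = now_sec();
    fsync(bar);                     /* should be cheap by itself...  */
    printf("fsync(bar.c): %.3f s\n", now_sec() - t0);
    /* ...but a shared transaction may also commit foo.txt's data.   */

    close(foo);
    close(bar);
    free(buf);
    return 0;
}
```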

12. Our Solution: IceFS
Propose a data container abstraction: the cube
Disentangle data structures and transactions
Provide reliability and performance isolation
Benefits for local file systems
➡ isolated failures for data containers
➡ up to 8x faster localized recovery
➡ up to 50x higher performance
Benefits for high-level services
➡ virtualized systems: reduce downtime by over 5x
➡ HDFS: improve recovery efficiency by over 7x

13. Data Container Abstraction: Cube
An isolated directory in a file system
➡ physically disentangled on disk and in memory
[Figure: a directory tree with cube1 rooted at /a/b (containing b1, b2) and cube2 rooted at /d (containing d1); on disk, the files outside any cube (/, a, c, c1), cube1 (b, b1, b2), and cube2 (d, d1) occupy separate regions]

14. Principles of Disentanglement
No shared physical resources
➡ no shared metadata (e.g., block groups)
➡ no shared disk blocks or buffers
No dependency
➡ partition linked lists or trees
➡ avoid directory hierarchy dependency
No entangled updates
➡ use separate transactions
➡ enable customized journaling modes

15. Outline
Introduction
Disentanglement in IceFS
➡ File System Disentanglement
➡ The Ice File System
➡ Evaluation
Key-Value Separation in WiscKey
➡ Key-Value Separation Idea
➡ Challenges and Optimization
➡ Evaluation
Conclusion

16. IceFS Overview
A data-container-based file system
➡ isolated reliability and performance for containers
Disentanglement techniques
➡ physical resource isolation
➡ directory indirection
➡ transaction splitting
A prototype based on Ext3
➡ local file system: Ext3/JBD
➡ kernel: VFS
➡ user-level tool: e2fsprogs

17. Ext3 Disk Layout
A disk is divided into block groups
➡ physical partition for disk locality
[Figure: the disk holds a super block (SB) followed by many block groups; each block group contains metadata (group descriptors, bitmaps, inodes) and data blocks]
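For reference, a simplified sketch of the per-group metadata: the fields loosely follow the kernel's struct ext3_group_desc, with the on-disk little-endian types replaced by plain integers and the padding/reserved fields omitted.

```c
/* Simplified per-block-group metadata; field names loosely follow the
 * kernel's struct ext3_group_desc, with on-disk little-endian types
 * replaced by plain integers and reserved fields omitted. */
#include <stdint.h>

struct group_desc {
    uint32_t block_bitmap;       /* block number of the block bitmap */
    uint32_t inode_bitmap;       /* block number of the inode bitmap */
    uint32_t inode_table;        /* first block of the inode table   */
    uint16_t free_blocks_count;  /* free blocks in this group        */
    uint16_t free_inodes_count;  /* free inodes in this group        */
    uint16_t used_dirs_count;    /* directories allocated here       */
};
```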

18. IceFS Disk Layout
Each cube has isolated metadata
➡ a sub-super block (Si) and isolated block groups
[Figure: the disk holds the super block (SB), the per-cube sub-super blocks (S0, S1, ...), and block groups; each cube's metadata lives in its own block groups]

19. Directory Indirection
1. load cube pathnames from the sub-super blocks
2. match pathname prefixes
[Figure: reading file "/a/b/b2" matches the cube1 dentry "/a/b/" and jumps directly to cube1's top directory; "/d/" maps to the cube2 dentry; cube1 holds b1, b2 and cube2 holds d1]
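A user-level sketch of the prefix-match idea (the table and function are illustrative, not IceFS's in-kernel structures): cube top-directory pathnames loaded from the sub-super blocks are matched against an incoming path, and a hit jumps straight to that cube's top directory.

```c
/* Sketch: map a pathname to a cube by prefix match. */
#include <stdio.h>
#include <string.h>

struct cube_entry {
    const char *prefix;   /* cube top-directory pathname  */
    int         cube_id;  /* which cube owns this subtree */
};

/* Hypothetical table built from the sub-super blocks at mount time. */
static const struct cube_entry cubes[] = {
    { "/a/b/", 1 },
    { "/d/",   2 },
};

static int lookup_cube(const char *path)
{
    for (size_t i = 0; i < sizeof(cubes) / sizeof(cubes[0]); i++)
        if (strncmp(path, cubes[i].prefix, strlen(cubes[i].prefix)) == 0)
            return cubes[i].cube_id;   /* jump to this cube's top dir */
    return 0;                          /* 0: not in any cube          */
}

int main(void)
{
    printf("/a/b/b2 -> cube %d\n", lookup_cube("/a/b/b2"));  /* cube 1 */
    printf("/c/c1   -> cube %d\n", lookup_cube("/c/c1"));    /* cube 0 */
    return 0;
}
```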

20. Ext3/4 Transaction
[Figure: file1, file2, and file3 are dirty in memory; fsync(file1) commits a single transaction (tx) to the on-disk journal that carries all three files' data]

21. IceFS Transaction Splitting
[Figure: file1, file2, and file3 are dirty in memory; fsync(file1), fsync(file2), and fsync(file3) each commit their own transaction (tx) to the journal, independently of the others]
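A toy sketch of transaction splitting, with illustrative structures rather than JBD code: each cube keeps its own running transaction, so committing one cube's updates never drags along another cube's dirty blocks.

```c
/* Toy sketch of transaction splitting: one running transaction per
 * cube, so committing one cube never waits on another. */
#include <stdio.h>

#define MAX_CUBES 4

struct transaction {
    int dirty_blocks;   /* blocks buffered in this cube's running tx */
};

static struct transaction running_tx[MAX_CUBES];

/* Buffer an update into the owning cube's running transaction. */
static void tx_add(int cube_id, int nblocks)
{
    running_tx[cube_id].dirty_blocks += nblocks;
}

/* fsync on a file in cube `cube_id` commits only that cube's tx. */
static void tx_commit(int cube_id)
{
    printf("commit cube %d: %d blocks (other cubes untouched)\n",
           cube_id, running_tx[cube_id].dirty_blocks);
    running_tx[cube_id].dirty_blocks = 0;
}

int main(void)
{
    tx_add(1, 65536);   /* file1: lots of dirty data in cube 1  */
    tx_add(2, 2);       /* file2: a few dirty blocks in cube 2  */
    tx_commit(2);       /* fsync(file2) does not wait on cube 1 */
    return 0;
}
```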

22. Benefits of Disentanglement
Localized reactions to failures
➡ per-cube read-only or crash reactions
➡ encourages more runtime checking
Localized recovery
➡ only check faulty cubes
➡ offline and online
Specialized journaling
➡ concurrent and independent transactions
➡ diverse journal modes (e.g., no journal, no fsync)

23. Outline
Introduction
Disentanglement in IceFS
➡ File System Disentanglement
➡ The Ice File System
➡ Evaluation
Key-Value Separation in WiscKey
➡ Key-Value Separation Idea
➡ Challenges and Optimization
➡ Evaluation
Conclusion

24. Evaluation
Does IceFS isolate failures?
➡ inject around 200 faults
➡ per-cube failure (read-only or crash) in IceFS

25. Evaluation
Does IceFS isolate failures?
➡ inject around 200 faults
➡ per-cube failure (read-only or crash) in IceFS
Does IceFS have faster recovery?

26. Recovery In Ext3
[Chart: fsck time for Ext3 with 20 directories grows with file-system capacity: 231 s at 200 GB, 476 s at 400 GB, 723 s at 600 GB, and 1007 s at 800 GB]

27. Fast Recovery In IceFS
[Chart: fsck time, Ext3 (20 directories) vs. IceFS (20 cubes, checking a single faulty cube): 231 s vs. 35 s at 200 GB, 476 s vs. 64 s at 400 GB, 723 s vs. 91 s at 600 GB, and 1007 s vs. 122 s at 800 GB]
Partial recovery for a cube is up to 8x faster

28. Evaluation
Does IceFS isolate failures?
➡ inject around 200 faults
➡ per-cube failure (read-only or crash) in IceFS
Does IceFS have faster recovery?
➡ independent recovery for a cube

29. Evaluation
Does IceFS isolate failures?
➡ inject around 200 faults
➡ per-cube failure (read-only or crash) in IceFS
Does IceFS have faster recovery?
➡ independent recovery for a cube
Does IceFS have better performance?

30. Workloads
SQLite
➡ a database application
➡ sequentially writes large key-value pairs
➡ asynchronous
Varmail
➡ an email server workload
➡ randomly writes small blocks
➡ fsync after each write
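A rough sketch of the two access patterns (sizes, counts, and file names are illustrative, not the actual benchmark configurations): the SQLite-like workload issues large sequential writes without fsync, while the Varmail-like workload issues small random writes and fsyncs after each one.

```c
/* Rough sketch of the two access patterns; sizes, counts, and file
 * names are illustrative, not the benchmark configurations. */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* SQLite-like: large sequential writes, no fsync (asynchronous). */
static void sqlite_like(void)
{
    int fd = open("db.img", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    char *buf = malloc(1 << 20);
    memset(buf, 'a', 1 << 20);
    for (int i = 0; i < 64; i++)
        write(fd, buf, 1 << 20);
    close(fd);
    free(buf);
}

/* Varmail-like: small random writes, fsync after each one. */
static void varmail_like(void)
{
    int fd = open("mailbox", O_CREAT | O_RDWR, 0644);
    char block[4096];
    memset(block, 'm', sizeof(block));
    for (int i = 0; i < 1000; i++) {
        off_t off = (off_t)(rand() % 4096) * 4096;
        pwrite(fd, block, sizeof(block), off);
        fsync(fd);
    }
    close(fd);
}

int main(void)
{
    sqlite_like();
    varmail_like();
    return 0;
}
```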

31. Ext3 Journaling
[Chart: throughput (MB/s) of SQLite and Varmail, each running alone on Ext3; setup: Ext3 with 2 directories, one per workload]

32. Ext3 Journaling
[Chart: throughput (MB/s) of SQLite and Varmail, running alone vs. running together on Ext3 (2 directories); both workloads slow down sharply when they share Ext3's journal]
Shared transactions hurt performance (over 10x)

33. Isolated Journaling In IceFS
[Chart: throughput (MB/s) of SQLite and Varmail: alone in Ext3, together in Ext3 (2 directories), and together in IceFS (2 cubes); running together in IceFS recovers most of the standalone throughput]
Parallel transactions in IceFS provide isolated performance (over 5x)
