A Generic Framework for Testing Parallel File Systems

  1. A Generic Framework for Testing Parallel File Systems
     Jinrui Cao†, Simeng Wang†, Dong Dai‡, Mai Zheng†, and Yong Chen‡
     † Computer Science Department, New Mexico State University
     ‡ Computer Science Department, Texas Tech University
     Presented by Simeng Wang at PDSW-DISCS (SC16), November 14, 2016

  2. Motivation
     - January 2016 @ HPCC: a power outage led to unmeasurable data loss

  3. Motivation
     - Existing methods for testing storage systems are not good enough for large-scale parallel file systems (PFS)
       - Model checking [e.g., EXPLODE @ OSDI'06]
         - difficult to build a controllable model for a PFS
         - state explosion problem
       - Formal methods [e.g., FSCQ @ SOSP'15]
         - challenging to write correct specifications for a PFS
       - Automatic testing [e.g., Torturing Databases, Crash Consistency @ OSDI'14]
         - closely tied to the local storage stack: intrusive for a PFS
         - only works for single-node systems

  4. Our Contributions
     - A generic framework for testing the failure handling of parallel file systems
       - Minimal interference & high portability
         - decouples the PFS from the testing framework through a remote storage protocol (iSCSI)
       - Systematically generates failure events with high fidelity
         - fine-grained, controllable failure emulation
         - emulates realistic failure modes
     - An initial prototype for the Lustre file system
       - uncovers internal I/O behaviors of Lustre under different workloads and failure conditions

  5. Outline
     - Introduction
     - Design
       - Virtual Device Manager
       - Failure State Emulator
       - Data-Intensive Workloads
       - Post-Failure Checker
     - Preliminary Experiments
     - Conclusion and Future Work

  6. Outline (same as slide 5)

  7. Overview
     [Architecture diagram: the Lustre nodes (MGS, MDS, and multiple OSSs with their MGT/MDT/OST targets) are backed by virtual devices provided by the testing framework, which consists of the Data-Intensive Workload, the Post-Failure Checker, the Failure State Emulator, the Virtual Device Manager, and the underlying device files.]
     Legend: MGS: Management Server; MGT: Management Target; MDS: Metadata Server; MDT: Metadata Target; OSS: Object Storage Server; OST: Object Storage Target.

  8. Overview (same architecture diagram as slide 7)

  9. Virtual Device Manager
     - Creates and maintains device files for the storage devices
     - Device files are mounted to the Lustre nodes as virtual devices via iSCSI
     - I/O operations are translated into disk I/O commands
     - Commands are logged into a command history log (a log-record sketch follows below)
       - includes node IDs, command details, and the actual data transferred
       - used by the Failure State Emulator
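
To make the command logging concrete, here is a minimal sketch of what an append-only command-history log could look like. The record fields (node ID, opcode, LBA, length, data checksum) follow the bullets above, but the class and field names are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a command-history log, assuming a record layout of
# (node ID, SCSI opcode, LBA, length, payload checksum). Not the paper's code.
import json, time, zlib

class CommandHistoryLog:
    def __init__(self, path):
        self.path = path

    def append(self, node_id, opcode, lba, length, data=b""):
        record = {
            "ts": time.time(),          # arrival time of the command
            "node": node_id,            # which Lustre node issued it (e.g., "MDS")
            "opcode": opcode,           # e.g., "WRITE_10", "READ_10"
            "lba": lba,                 # starting logical block address
            "len": length,              # number of blocks transferred
            "crc": zlib.crc32(data),    # checksum of the transferred data
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(record) + "\n")

# Example: log an 8-block write issued by the MDS to its virtual MDT device.
log = CommandHistoryLog("/tmp/mdt_cmd_history.log")
log.append("MDS", "WRITE_10", lba=2048, length=8, data=b"\0" * 4096)
```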

  10. Overview (same architecture diagram as slide 7)

  11. Failure State Emulator
     - Generates failure events in a systematic and controllable way
       - manipulates I/O commands and emulates the failure state of each individual device
       - emulates four realistic failure modes based on previous studies [e.g., FAST'13, OSDI'14, TOCS'16, FAST'16]; see the sketch after this list
         1. Whole Device Failure: the device becomes invisible to the host
         2. Clean Termination of Writes: emulates the simplest power outage
         3. Reordering of Writes: commits writes in an order different from the issuing order
         4. Corruption of the Device Block: changes the content of writes
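
As an illustration of the four failure modes (not the paper's emulator), the sketch below applies each mode to a list of logged write commands before they would be replayed onto a device image. The command dictionaries are assumed to carry a "data" payload, as in the hypothetical log sketch above.

```python
# Sketch of the four failure modes applied to a sequence of logged writes.
import random

def whole_device_failure(cmds):
    return []                         # device disappears: no writes reach the media

def clean_termination(cmds, cut):
    return cmds[:cut]                 # power cut: everything after `cut` is lost

def reorder_writes(cmds, seed=0):
    shuffled = cmds[:]                # commit order differs from the issuing order
    random.Random(seed).shuffle(shuffled)
    return shuffled

def corrupt_block(cmds, victim):
    cmds = [dict(c) for c in cmds]    # copy, then flip every bit of one payload
    cmds[victim]["data"] = bytes(b ^ 0xFF for b in cmds[victim]["data"])
    return cmds
```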

  12. Overview (same architecture diagram as slide 7)

  13. Co-design of Workloads and Checkers
     - Data-intensive workloads
       - stress Lustre and generate I/O operations to age the system and bring it to a state that may be difficult to recover
       - may use existing data-intensive workloads
       - may include self-identification/verification information (see the sketch below)
     - Post-failure checkers
       - examine the post-failure behavior and check whether the system can recover without data loss
       - may use existing checkers (e.g., LFSCK for Lustre)
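
One possible way to realize the self-identification/verification idea, sketched under an assumed record layout (4 KiB records, each embedding its own file offset and a payload checksum); this is an illustration, not the paper's workload. The matching checker can then detect lost, torn, or misplaced records after a failure.

```python
# Hypothetical self-verifying workload and post-failure check.
import os, struct, zlib

RECORD = 4096
HEADER = struct.Struct("<QQ")         # (record offset, crc32 of payload)

def write_records(path, count):
    payload = os.urandom(RECORD - HEADER.size)
    with open(path, "wb") as f:
        for i in range(count):
            off = i * RECORD
            f.write(HEADER.pack(off, zlib.crc32(payload)) + payload)

def check_records(path):
    bad = []
    with open(path, "rb") as f:
        off = 0
        while chunk := f.read(RECORD):
            stored_off, crc = HEADER.unpack_from(chunk)
            if stored_off != off or zlib.crc32(chunk[HEADER.size:]) != crc:
                bad.append(off)       # lost, torn, or misplaced record
            off += RECORD
    return bad                        # empty list => no silent data loss detected
```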

  14. Outline (same as slide 5)

  15. Preliminary Experiment
     - Experiment setup
       - cluster of seven VMs, installed with CentOS 7
       - Lustre file system (version 2.8) on five VMs: one MGS/MGT node, one MDS/MDT node, and three OSS/OST nodes
       - sixth VM hosts the Virtual Device Manager and the Failure State Emulator
         - the Virtual Device Manager is built on top of the Linux SCSI target framework (an export sketch follows below)
       - seventh VM is used as the client for launching the workloads and LFSCK, i.e., the Data-Intensive Workload and the Post-Failure Checker
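
For reference, exporting a device file as an iSCSI LUN with the Linux SCSI target framework (tgt) might look like the sketch below. The target IQN, backing-file path, and the use of tgtadm are assumptions about the setup, not details taken from the paper.

```python
# Rough sketch: export a backing file as an iSCSI LUN via tgtadm (assumed setup).
import subprocess

def export_virtual_device(tid, iqn, backing_file):
    def tgt(*args):
        subprocess.run(["tgtadm", "--lld", "iscsi", *args], check=True)
    tgt("--op", "new", "--mode", "target", "--tid", str(tid), "-T", iqn)
    tgt("--op", "new", "--mode", "logicalunit", "--tid", str(tid), "--lun", "1",
        "-b", backing_file)
    tgt("--op", "bind", "--mode", "target", "--tid", str(tid), "-I", "ALL")

# e.g., export the MDT's device file; the Lustre MDS then logs in with iscsiadm.
export_virtual_device(1, "iqn.2016-11.example:vdev-mdt0", "/srv/vdev/mdt0.img")
```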

  16. Preliminary Experiment
     - Normal workloads run on Lustre:

       Workload        Description
       Montage/m101    astronomical image mosaic engine
       cp              copy a file into Lustre
       tar             decompress a file on Lustre
       rm              delete a file from Lustre

     - Post-failure operations run on Lustre:

       Operation       Description
       lfs setstripe   set striping pattern
       dd-nosync       create & extend a Lustre file
       dd-sync         create & extend a Lustre file (synchronous writes)
       LFSCK           check & repair Lustre

  17. Preliminary Results
     - Internal pattern of writes without failure
       - amount of data (MB) written to the different Lustre nodes under each workload
       - Montage/m101 is split into twelve steps (s1 to s12) to show the fine-grained write pattern

       Lustre Nodes   cp    tar   rm   | Montage/m101 steps s1–s12
       MGS/MGT        0     0     0    | 0   0    0   0    0   0    0   0    0   0    0   0
       MDS/MDT        0.1   5     0.2  | 6   0.4  6   0.5  6   0.6  6   0.7  6   1    6   1
       OSS/OST#1      0     14    0    | 14  28   14  66   14  66   18  66   18  94   56  94
       OSS/OST#2      15    14    15   | 14  43   14  81   14  81   19  81   19  109  19  110
       OSS/OST#3      0     16    0    | 16  24   16  24   17  24   21  24   21  49   58  49

  18. Preliminary Results
     - Internal pattern of writes without failure
       - [Figure: accumulated number of bytes (KB) written to the different nodes during the workloads.]

  19. Preliminary Results
     - Post-failure behavior
       - emulate a whole-device failure on the MDS/MDT node
       - run operations on Lustre after the emulated device failure (a sketch of the sync vs. buffered distinction follows below)
         - dd-nosync: use dd to create and extend a Lustre file
         - dd-sync: enforce synchronous writes on the dd command
         - the last column shows whether the operation reported an error

       Operation       Description                      Report Error?
       lfs setstripe   set striping pattern             No
       dd-nosync       create & extend a Lustre file    No
       dd-sync         create & extend a Lustre file    Yes
       LFSCK           check & repair Lustre            No
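
The difference between dd-nosync and dd-sync can be illustrated with the following sketch (assumed mount point, not the paper's code): a buffered write may return success from the client's cache even though the backing device has failed, whereas an O_SYNC write forces the data toward the device and surfaces the I/O error at write time.

```python
# Illustration of buffered vs. synchronous writes after a device failure.
import os

PATH = "/mnt/lustre/testfile"    # hypothetical Lustre mount point

def buffered_write(data):
    with open(PATH, "wb") as f:  # like dd-nosync: may "succeed" in the cache
        f.write(data)

def sync_write(data):
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
    try:
        os.write(fd, data)       # like dd-sync: raises OSError (e.g., EIO) if
    finally:                     # the write cannot reach the failed device
        os.close(fd)
```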

  20. Outline (same as slide 5)

  21. Conclusion and Future Work
     - Proposed and prototyped a generic framework for testing the failure handling of large-scale parallel file systems
     - Uncovered internal I/O behaviors of Lustre under normal and failure conditions
     - Future work:
       - more effective post-failure checking operations
       - more file systems (e.g., PVFS, Ceph)
       - explore novel mechanisms to enhance the resilience of large-scale parallel file systems

