Online Stream Detection Assisted, Log Buffer-Based Multiple-Associative Sector Translation FTL Design (SLMAST) Zhengyu Yang, Manu Awasthi Aug 26 2015
Content
1. Backgrounds
2. Related Works
3. Algorithm Design
1. Backgrounds
- SSDs handle sequential writes well, but random writes and re-writes poorly.
- The OS mixes streams from multiple apps and VM tenants, which decreases sequentiality.
- The SSD has no knowledge of stream information.
- We propose an FTL design, SLMAST: (1) detect sequential streams based on locality with low overhead; (2) use multiple-associative blocks in the log buffer to reduce the cost of full merges.
1. Backgrounds
The OS issues LPAs; the SSD's FTL translates them to PPAs.
SSD limitations:
1. Erase-before-write
2. Lifetime
3. Recovery
Three jobs for the FTL:
1. Mapping table
2. Wear Leveling
3. Garbage Collection
SSD Erase-before-write limitation
[Figure: five data blocks (DataBlk0-DataBlk4) holding pages 0-21; the incoming request W(16 17 18 20) re-writes pages that are already written in place.]
SSD Erase-before-write limitation
[Figure: handling the re-write W(16 17 18 20).]
Step 1: Get a free block from the FreeBlkList.
Step 2: Assemble the new block (copy the unchanged pages, write the new ones) and update the mapping table.
Step 3: Recycle (erase) the old data block.
Timings: EraseBlk = 2 ms, ReadPage = 25 us, WritePage = 200 us.
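The three steps above can be sketched as a cost model. This is a hypothetical helper (`rewrite_cost_us` is not from the slides), using the per-operation timings listed above:

```python
# Hypothetical sketch of the cost of the read-assemble-erase cycle a
# block-mapped SSD pays to re-write pages in place.
ERASE_US, READ_US, WRITE_US = 2000, 25, 200  # EraseBlk=2ms, ReadPage=25us, WritePage=200us

def rewrite_cost_us(pages_per_block, pages_rewritten):
    """Cost of re-writing `pages_rewritten` pages inside one data block.

    Step 1: get a free block (no flash cost modeled here).
    Step 2: assemble -- read the surviving pages, write the full new block.
    Step 3: erase and recycle the old block.
    """
    survivors = pages_per_block - pages_rewritten   # pages copied unchanged
    reads = survivors * READ_US                     # read the survivors
    writes = pages_per_block * WRITE_US             # write the whole new block
    return reads + writes + ERASE_US

# Re-writing 4 pages (e.g. W(16 17 18 20)) of a 12-page block:
print(rewrite_cost_us(12, 4))  # -> 4600
```

Even a 4-page re-write costs a full block write plus a 2 ms erase, which is why the FTL designs below buffer re-writes in log blocks instead.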
SSD Erase-before-write issue
[Figure: the re-write W(16 17 18 20) marks pages 16-19 invalid (<I>) in DataBlk1 and appends the new copies elsewhere.]
Very early solution: page mapping. However, page mapping has issues:
(1) Contiguous pages get scattered.
(2) Mapping-table overhead (page-level: 48 MB for 8 GB; block-level: 1.5 MB for 8 GB).
(3) Too many invalid pages occupy the disk.
Block mapping addresses (1) and (2); a log structure addresses (3).
1. Backgrounds
An Example of Log Structure
[Figure: data blocks under block mapping, plus log blocks.]
If a write collides with a data block: write to a log block. Once the log block is full: merge.
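The collision-then-merge behavior above can be sketched in a few lines. This is a minimal illustration under assumed names (`BlockMappedFTL`, 4 pages per block); real FTLs track physical block numbers and valid bits:

```python
# Minimal block-mapping + log-block sketch. A re-write of an already
# written page goes to the log block; a full log block triggers a merge
# back into the data block.
PAGES_PER_BLOCK = 4

class BlockMappedFTL:
    def __init__(self):
        self.data = {}    # block number -> {offset: value}
        self.log = {}     # block number -> list of (offset, value)
        self.merges = 0

    def write(self, lpa, value):
        blk, off = divmod(lpa, PAGES_PER_BLOCK)
        pages = self.data.setdefault(blk, {})
        if off in pages:                        # collision: page already written
            log = self.log.setdefault(blk, [])
            log.append((off, value))
            if len(log) == PAGES_PER_BLOCK:     # log block full: merge
                pages.update(dict(log))
                self.log[blk] = []
                self.merges += 1
        else:
            pages[off] = value

ftl = BlockMappedFTL()
for lpa in (0, 1, 2, 3, 0, 1, 2, 3):            # write block 0, then re-write it
    ftl.write(lpa, lpa)
print(ftl.merges)  # -> 1
```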
2. Related Works
2.1 BAST
2.2 FAST
2.3 KAST
2.1 BAST
[Figure: block-mapping table held in SRAM; data blocks and block-associative log blocks on the SSD.]
2.1 BAST – Full Merge: 2 Erases + N Copies
[Figure: full merge of LogBlk0 into DataBlk1.]
Step 1: Get a free block from the FreeBlkList.
Step 2: Assemble the valid pages from the data block and the log block into the free block; update the mapping table.
Step 3: Recycle the old data block and log block (2 erases); assign a new free log block.
2.1 BAST – Switch Merge: 1 Erase
[Figure: LogBlk2 was written fully in order (pages 36-47), so it can replace DataBlk3 directly.]
Step 1: Update the mapping table so the log block becomes the data block.
Step 2: Assign a new free log block.
Step 3: Recycle (erase) the old data block.
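The choice between the two merge paths can be sketched as follows. This is an illustrative helper (`merge_kind` is an assumed name, not BAST's API): a switch merge is possible only when the log block holds every page of its data block, in order.

```python
# Hypothetical sketch of BAST's merge decision for one block-associative
# log block. `log_offsets` lists the in-block offsets written to the log
# block, in write order.
def merge_kind(log_offsets, pages_per_block):
    """Switch merge applies only when the log block was filled completely
    and in order; otherwise valid pages must be collected into a fresh
    block (full merge: 2 erases + N copies)."""
    if log_offsets == list(range(pages_per_block)):
        return "switch"   # 1 erase: promote log block, erase old data block
    return "full"         # 2 erases + N copies

print(merge_kind([0, 1, 2, 3], 4))      # -> switch
print(merge_kind([0, 1, 2, 0], 4))      # re-write of offset 0 -> full
```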
2.1 BAST - Issues
Besides the random-write issue, BAST also has a block-thrashing issue:
[Figure: four data blocks (PBN0-PBN3, pages 0-15) sharing only two log blocks (LBN0, LBN2).]
- Spatial locality: W(0,4,8,12,4,8,12,0,0,4,12,8) spreads writes across four data blocks, but only two log blocks are available, so log blocks are repeatedly merged and reassigned.
- Temporal locality: W(0,1,2,3,3,2,0,1,0,0,1,2) keeps re-writing the pages of a single block, repeatedly filling its log block.
2.2 FAST
[Figure: mapping table held in SRAM; data and log blocks on flash.]
2.2 FAST - Fully-Associative Log Blocks
[Figure: one sequential log block (SLB0) plus four random log blocks (RLB0-RLB3), each able to hold pages from any data block.]
- 1 sequential log block + N random log blocks.
- The sequential log block captures sequential pages and increases the chance of switch merges.
2.2 FAST - Partial Merge: 1 Erase + (N_p − |M|) Copies
[Figure: the remaining pages are copied from the data block into the sequential log block, which then replaces the data block.]
When the sequential log block holds the first |M| pages of a data block in order, copy the other N_p − |M| pages from the data block into it and switch: 1 erase plus N_p − |M| copies (N_p = pages per block).
2.2 FAST - Full Merge: (L_M + 1) Erases + (L_M × N_p) Copies
[Figure: a victim random log block whose valid pages belong to L_M different data blocks, so every associated data block must be merged.]
L_M = number of data blocks associated with the victim log block; N_p = pages per block.
2.2 FAST
Contributions:
1. FullMerge -> Partial/Switch Merge

Merge   | Copy Times  | Erase Times
Full    | L_M × N_p   | L_M + 1
Switch  | 0           | 1
Partial | N_p − |M|   | 1

(L_M = associated data blocks; N_p = pages per block; |M| = pages already in place.)
2. Captures sequential streams.
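The cost table above can be turned into a small calculator. A hedged sketch, assuming each copy is a page read plus a page write at the example timings used earlier (the function and timing constants are illustrative, not from the paper):

```python
# Merge costs per the FAST cost table: copies and erases per merge kind.
ERASE_US, READ_US, WRITE_US = 2000, 25, 200  # example flash timings

def merge_cost_us(kind, L_M=0, N_p=0, M=0):
    """L_M: associated data blocks; N_p: pages per block;
    M: pages already in place (partial merge)."""
    copies, erases = {
        "switch":  (0, 1),
        "partial": (N_p - M, 1),
        "full":    (L_M * N_p, L_M + 1),
    }[kind]
    return copies * (READ_US + WRITE_US) + erases * ERASE_US

print(merge_cost_us("switch"))                    # -> 2000
print(merge_cost_us("partial", N_p=12, M=9))      # -> 2675
print(merge_cost_us("full", L_M=3, N_p=12))       # -> 16100
```

The gap between a switch merge (one erase) and a full merge touching L_M blocks is exactly what motivates capturing sequential streams.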
2.2 FAST - Issues
1. Does not consider multiple sequential streams. Thrashing issues still exist: if multiple sequential streams arrive simultaneously, they get interleaved together.
[Figure: StreamA (0,1,2,3,4), StreamB (12,13,14,15,16), and StreamC (24,25,26,27,28) interleave in the single sequential log block SLB0.]
2.2 FAST - Issues
2. Only treats the block header as special:
- It does not really detect whether a stream is sequential.
- A write to a block header is only a necessary condition for a switch merge.
- One block can have more than 256 pages these days.
- If a sequential stream begins anywhere other than a block boundary, FAST cannot capture it.
[Figure: SLB0 whose first pages are unknown (?), so a sequential run starting mid-block is missed.]
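The header-only limitation can be made concrete. A sketch under assumed names (`fast_header_detect`, `locality_detect` are illustrative, not FAST's code): the same run of consecutive LPAs is rejected by a header-gated check but accepted by an offset-agnostic one.

```python
# Why header-gated sequential detection misses streams that start mid-block.
PAGES_PER_BLOCK = 256

def is_consecutive(lpas):
    return len(lpas) > 1 and all(b == a + 1 for a, b in zip(lpas, lpas[1:]))

def fast_header_detect(lpas):
    """FAST-style: a run only counts if it begins at a block header."""
    if not lpas or lpas[0] % PAGES_PER_BLOCK != 0:
        return False
    return is_consecutive(lpas)

def locality_detect(lpas):
    """Offset-agnostic: consecutive LPAs anywhere qualify."""
    return is_consecutive(lpas)

stream = [130, 131, 132, 133]   # perfectly sequential, but starts mid-block
print(fast_header_detect(stream), locality_detect(stream))  # -> False True
```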
2.3 KAST
2.3 KAST
- Guarantees a maximum number of associated blocks at full merge
- Multiple SEQ log blocks
- Dynamically partitions between SEQ and RND log blocks
3. Algorithm Design
3. Algorithm Design
Stream Detector
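A low-overhead locality-based detector of the kind the design calls for can be sketched like this. Everything here is an assumption for illustration (the class name, the thresholds, and the eviction policy are not specified by the slides): track the tail LPA of each recent run and extend a run when the incoming LPA continues it.

```python
# Hedged sketch of a low-overhead sequential stream detector: bounded
# state (MAX_STREAMS entries), O(1) updates per write.
SEQ_THRESHOLD = 4   # run length before a stream counts as sequential (assumed)
MAX_STREAMS = 8     # bounded state keeps SRAM overhead low (assumed)

class StreamDetector:
    def __init__(self):
        self.tails = {}   # tail LPA -> current run length

    def update(self, lpa):
        """Record an incoming LPA; return True if it extends a run long
        enough to be treated as a sequential stream."""
        run = self.tails.pop(lpa - 1, 0) + 1     # continues an existing run?
        if len(self.tails) >= MAX_STREAMS:       # evict the oldest entry
            self.tails.pop(next(iter(self.tails)))
        self.tails[lpa] = run
        return run >= SEQ_THRESHOLD

det = StreamDetector()
print([det.update(lpa) for lpa in (10, 11, 12, 13, 14)])
# -> [False, False, False, True, True]
```

Interleaved streams stay separate because each is keyed by its own tail LPA, which is exactly the case FAST's single sequential log block mishandles.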
write(LPA, data):
  if a collision occurs:
    writeToLogBlk(LPA, LBA, offset, data)
  else:
    write data at offset in the dataBlk of PBA
  updateStreamRecord(LPA)
3. Algorithm Design
writeToLogBlk(LPA):
1. Page is a header
  1.1. Page's block is found in SZ: switchMerge or fullMerge with the new page
  1.2. Page's block is not found in SZ: switchMerge or partialMerge a victim block, and add the new page into a new free block
2. Page is not a header
  2.1. Page's owner is found in SZ: add the page to SZ
  2.2. Page's owner is found in KZ: add the page to KZ
  2.3. Page's owner is in neither SZ nor KZ
    2.3.1. Page is detected as part of a sequential stream: add to KZ
    2.3.2. Otherwise: add to RZ
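The non-header routing (steps 2.1-2.3) can be sketched as follows. This is a minimal illustration with assumed names (`route`, and the `is_seq` flag standing in for the stream detector's verdict); the header path (steps 1.x) is elided because it triggers merges:

```python
# Sketch of the zone routing in writeToLogBlk. SZ = sequential zone,
# KZ = known-stream candidate zone, RZ = random zone.
from collections import defaultdict

PAGES_PER_BLOCK = 4

def route(lpa, is_seq, sz, kz, rz):
    """Return the zone a non-header page lands in, per steps 2.1-2.3."""
    blk = lpa // PAGES_PER_BLOCK
    if blk in sz:                       # 2.1: owner already sequential
        sz[blk].append(lpa); return "SZ"
    if blk in kz:                       # 2.2: owner already a candidate
        kz[blk].append(lpa); return "KZ"
    if is_seq:                          # 2.3.1: detector says sequential
        kz[blk].append(lpa); return "KZ"
    rz.append(lpa); return "RZ"         # 2.3.2: random

sz, kz, rz = defaultdict(list), defaultdict(list), []
print(route(9, False, sz, kz, rz))   # -> RZ  (random page)
print(route(10, True, sz, kz, rz))   # -> KZ  (detected sequential)
print(route(11, False, sz, kz, rz))  # -> KZ  (same block already in KZ)
```

Grouping a detected stream's pages in KZ keeps them in one log block, which preserves the cheap switch/partial merge paths.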