PebblesDB: Building Key-Value Stores using Fragmented Log Structured Merge Trees
Pandian Raju1, Rohan Kadekodi1, Vijay Chidambaram1,2, Ittai Abraham2
1The University of Texas at Austin 2VMware Research
What is a key-value store?
A key-value store maps keys to values. For example, key 123 maps to the value {"name": "John Doe", "age": 25}, and key 124 maps to {"name": "Ross Gel", "age": 28}.
Key-value stores are part of the storage systems of many companies.
Log-structured merge trees (LSMs) are widely used in key-value stores, but the LSM suffers from high write amplification.
What is write amplification? It is the ratio of total write I/O issued to storage to the amount of user data written. Example: a client writes 10 GB of user data to the KV-store; if the total write I/O is 200 GB, the write amplification is 20.
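A minimal sketch of the metric (the helper name is ours, not from the talk):

```python
def write_amplification(user_data_gb: float, total_write_io_gb: float) -> float:
    """Write amplification = total bytes written to storage / bytes of user data."""
    return total_write_io_gb / user_data_gb

# Example from the slide: the client writes 10 GB, the device sees 200 GB of writes.
print(write_amplification(10, 200))  # 20.0
```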
Measured write amplification when inserting 45 GB of user data: RocksDB issues 1868 GB of write I/O (42x), LevelDB 1222 GB (27x), and PebblesDB 756 GB (17x).
High write amplification wears out flash quickly. The Intel SSD DC P4600 is rated to last ~5 years assuming ~5 TB of writes per day; at RocksDB's 42x write amplification, writing just ~500 GB of user data per day means the SSD lasts only about 1.25 years.
Data source: https://www.intel.com/content/www/us/en/products/memory-storage/solid-state-drives/data-center-ssds/dc-p4600-series/dc-p4600-1-6tb-2-5inch-3d1.html
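A back-of-the-envelope sketch of that lifetime estimate, assuming (as on the slide) a rated endurance of ~5 TB of writes per day for ~5 years:

```python
# Rated endurance assumed from the slide: ~5 TB/day for ~5 years.
RATED_TB_PER_DAY = 5
RATED_YEARS = 5
TOTAL_ENDURANCE_TB = RATED_TB_PER_DAY * 365 * RATED_YEARS

def lifetime_years(user_data_gb_per_day: float, write_amp: float) -> float:
    """Years until the rated endurance is exhausted, given write amplification."""
    device_writes_tb_per_day = user_data_gb_per_day * write_amp / 1000
    return TOTAL_ENDURANCE_TB / device_writes_tb_per_day / 365

print(round(lifetime_years(500, 42), 2))  # ~1.2 years at 42x write amplification
```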
PebblesDB is a high-performance, write-optimized key-value store built using a new data structure, the Fragmented Log-Structured Merge tree (FLSM). It achieves 3-6.7x higher write throughput and 2.4-3x lower write amplification than RocksDB, and it delivers the highest write throughput and the least write amplification when used as a backend store for MongoDB.
Background: in an LSM, data is stored both in memory and on storage.
Writes (key, value) go directly to the in-memory component (the memtable).
In-memory data is periodically written to storage as files, using sequential I/O.
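A toy sketch of the memtable-and-flush idea (class and method names are illustrative, not PebblesDB's API):

```python
import json

class MemTable:
    """Toy in-memory buffer: absorbs writes, periodically flushed as a sorted file."""
    def __init__(self, flush_threshold=4):
        self.data = {}
        self.flush_threshold = flush_threshold

    def put(self, key, value):
        self.data[key] = value                # writes land in memory first
        return len(self.data) >= self.flush_threshold

    def flush(self, path):
        # Sort once and write sequentially: random writes become one
        # sequential file write on storage.
        with open(path, "w") as f:
            for k in sorted(self.data):
                f.write(json.dumps({"key": k, "value": self.data[k]}) + "\n")
        self.data.clear()
```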
Files on storage are logically arranged into levels (level 0, level 1, ..., level n).
Compaction pushes data from lower levels to higher-numbered levels.
Within a level, files are sorted and have non-overlapping key ranges (e.g., 1..12, 15..19, 25..75, 79..99), so the file that may contain a key can be located with binary search.
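A minimal sketch of how non-overlapping ranges let a level be searched with binary search (the file representation here is hypothetical):

```python
import bisect

# Each file in a level is summarized by its (smallest_key, largest_key) range;
# within a level (except level 0) the ranges are sorted and non-overlapping.
level1 = [(1, 12), (15, 19), (25, 75), (79, 99)]

def candidate_file(level, key):
    """Binary-search the one file whose key range may contain `key` (or None)."""
    starts = [lo for lo, _ in level]
    i = bisect.bisect_right(starts, key) - 1
    if i >= 0 and level[i][0] <= key <= level[i][1]:
        return level[i]
    return None

print(candidate_file(level1, 17))  # (15, 19)
print(candidate_file(level1, 22))  # None -- the key falls in a gap
```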
Level 0 is the exception: its files are individually sorted but can have overlapping key ranges (e.g., 2..57 and 23..78), and there is a limit on the number of level 0 files.
Compaction example: the maximum number of files in level 0 is configured to be 2. Level 0 holds files 2..37, 23..48, and 58..68, while level 1 holds 1..12, 15..25, 39..62, and 77..95. A re-write counter tracks how many times level 1 data has been written; it currently stands at 1.
Level 0 now has 3 files (> 2), which triggers a compaction.
Remember that files are immutable, so compaction must produce new sorted, non-overlapping files.
First, the set of overlapping files between levels 0 and 1 is identified.
Compacting level 0 with level 1: the overlapping files are merged and re-split into new level 1 files 1..23, 24..46, and 47..68. The level 1 re-write counter goes from 1 to 2.
Level 0 is now empty, and level 1 holds 1..23, 24..46, 47..68, and 77..95 (re-write counter: 2).
After more write operations, new files 10..33, 17..53, and 1..121 are flushed to level 0.
Level 0 is compacted with level 1 again, producing new level 1 files 1..30, 31..60, 62..90, and 92..121; the re-write counter goes from 2 to 3.
At this point, existing data has been re-written to the same level (level 1) three times. This repeated re-writing within a level is the root cause of the LSM's high write amplification.
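A toy sketch of why classic LSM compaction re-writes data: every overlapping level 1 file is merged with level 0 and written out again (files are modeled as sorted key lists; an illustration, not LevelDB's code):

```python
import heapq

def compact(level0_files, level1_files, max_keys_per_file=4):
    """Merge all overlapping files and re-split them into new level 1 files.
    Every existing level 1 key is written again, which is where the
    write amplification of a classic LSM comes from."""
    merged = list(heapq.merge(*level0_files, *level1_files))
    return [merged[i:i + max_keys_per_file]
            for i in range(0, len(merged), max_keys_per_file)]

l0 = [[10, 22, 33], [17, 40, 53]]
l1 = [[1, 9, 23], [24, 30, 46], [47, 60, 68]]
print(compact(l0, l1))
# [[1, 9, 10, 17], [22, 23, 24, 30], [33, 40, 46, 47], [53, 60, 68]]
```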
One way to avoid the re-writing is a level in which all files have overlapping key ranges (e.g., 1..89, 6..91, 5..65, 9..99, 1..102, 1..271, 8..95), but then every file in the level must be searched on a read.
FLSM takes a middle ground: guards (here 13, 35, and 70) divide a level into groups of files, such as 1..12; 13..34 and 18..31; 40..47, 42..65, and 45..56; 72..87. Files within the same group can have overlapping key ranges, but the groups themselves are disjoint.
FLSM (Fragmented Log-Structured Merge tree) is a novel modification of the LSM data structure: it uses guards to maintain partially sorted levels and writes data only once per level in most cases.
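A minimal sketch of guard-based placement, assuming a sentinel partition for keys smaller than the first guard (the helper name is ours):

```python
import bisect

# Guards split a level into partitions; a key belongs to the last guard <= key.
# Keys smaller than the first guard go into a "sentinel" partition.
guards_level2 = [15, 40, 70, 95]

def guard_for(key, guards):
    i = bisect.bisect_right(guards, key) - 1
    return guards[i] if i >= 0 else "sentinel"

print(guard_for(23, guards_level2))  # 15
print(guard_for(97, guards_level2))  # 95
print(guard_for(3,  guards_level2))  # sentinel (before the first guard)
```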
Example FLSM: level 0 holds files 2..37 and 23..48. Level 1 has guards 15 and 70; its files are 1..12 (before guard 15), 15..59 (under guard 15), and 77..87 and 82..95 (under guard 70). Level 2 has guards 15, 40, 70, and 95; its files are 2..8, then 15..23 and 16..32 (under guard 15), 45..65 (under guard 40), 70..90 (under guard 70), and 96..99 (under guard 95). Note how files are logically grouped within guards.
Guards get more fine-grained deeper into the tree: level 1 has guards {15, 70}, while level 2 has guards {15, 40, 70, 95}.
FLSM write and compaction example: the maximum number of files in level 0 is again configured to be 2, and a third file 30..68 has just been flushed to level 0.
Compacting level 0: the level 0 files (2..37, 23..48, 30..68) are merged into 2..68 and fragmented along level 1's guards into 2..14 and 15..68.
The fragments are simply appended to the next level: 2..14 lands before guard 15 and 15..68 lands under guard 15, next to the existing file 15..59. No existing level 1 data is re-written.
Later, guard 15 in level 1 is picked for compaction (it holds the files 15..59 and 15..68).
Its files are combined, sorted, and fragmented along level 2's guards, producing fragments 15..39 (for guard 15) and 40..68 (for guard 40).
These fragments are again just appended to the next level: 15..39 joins guard 15 of level 2, and 40..68 joins guard 40. Once more, no existing data in level 2 is re-written.
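A toy sketch of FLSM compaction of one guard, modeling files as sorted key lists (illustrative only, not PebblesDB's code):

```python
import bisect
import heapq
from collections import defaultdict

def compact_guard(files_in_guard, next_level_guards):
    """Merge all files under one guard, then fragment the result by the next
    level's guards. Fragments are appended to their target guards; existing
    files in the next level are NOT rewritten."""
    merged = list(heapq.merge(*files_in_guard))
    fragments = defaultdict(list)
    for key in merged:
        i = bisect.bisect_right(next_level_guards, key) - 1
        target = next_level_guards[i] if i >= 0 else "sentinel"
        fragments[target].append(key)
    return dict(fragments)

# Guard 15 in level 1 holds two overlapping files; level 2 guards are finer.
files = [[15, 23, 59], [16, 32, 68]]
print(compact_guard(files, [15, 40, 70, 95]))
# {15: [15, 16, 23, 32], 40: [59, 68]}
```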
How does FLSM reduce write amplification? It does not re-write data to the same level in most cases. How does FLSM maintain read performance? It maintains partially sorted levels, which efficiently reduce the search space.
[Figure: guards partition the keyspace (1 to 1e+9), with finer partitions at deeper levels.]
Writes in FLSM: Put(1, "abc") goes to the in-memory memtable, exactly as in an LSM.
Reads in FLSM: consider Get(23) on the example FLSM above.
The key is searched level by level, starting from memory.
In level 0, all files need to be searched, since they may overlap.
In level 1, only the file under guard 15 (15..59) is searched.
In level 2, both files under guard 15 (15..23 and 16..32) are searched.
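A minimal sketch of this read path (the data layout and names are illustrative):

```python
import bisect

def get(key, memtable, levels):
    """levels: list of (guards, {guard: [files newest-first]}); files are dicts."""
    if key in memtable:                       # 1. check in-memory data first
        return memtable[key]
    for guards, files_by_guard in levels:     # 2. level 0, then level 1, ...
        if guards is None:                    # level 0: every file may hold the key
            candidates = files_by_guard[None]
        else:
            i = bisect.bisect_right(guards, key) - 1
            g = guards[i] if i >= 0 else "sentinel"
            candidates = files_by_guard.get(g, [])
        for f in candidates:                  # files under a guard may overlap
            if key in f:
                return f[key]
    return None

memtable = {}
level0 = (None, {None: [{2: "a", 37: "b"}, {23: "c", 48: "d"}]})
level1 = ([15, 70], {15: [{23: "old"}]})
print(get(23, memtable, [level0, level1]))  # "c" -- found in level 0 first
```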
If the rate of insertion is higher than the rate of compaction, write throughput depends on the rate of compaction.
FLSM compacts faster because it performs less I/O, and hence delivers higher write throughput.
The cost: FLSM keeps multiple files per guard in a level, so read I/O can be increased by up to 5x (assuming no cache hits). This is the trade-off between write I/O and read performance.
PebblesDB builds on FLSM and recovers read and seek performance using sstable-level bloom filters, seek parallelism, and seek-based compaction.
A bloom filter answers "is key 25 present?" with either "definitely not" or "possibly yes". PebblesDB maintains an in-memory bloom filter for every sstable, e.g., one each for the level 1 files 1..12, 15..39, 77..97, and 82..95.
With per-sstable bloom filters, PebblesDB reads the same number of files as any LSM-based store, while retaining FLSM's lower write I/O and better compaction.
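A toy bloom filter illustrating the "definitely not / possibly yes" check; a production filter would size the bit array and hash count for a target false-positive rate:

```python
import hashlib

class TinyBloom:
    """Toy Bloom filter: no false negatives, occasional false positives."""
    def __init__(self, nbits=1024, nhashes=3):
        self.bits = bytearray(nbits)
        self.nhashes = nhashes

    def _positions(self, key):
        for i in range(self.nhashes):
            h = hashlib.sha1(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % len(self.bits)

    def add(self, key):
        for p in self._positions(key):
            self.bits[p] = 1

    def maybe_contains(self, key):
        return all(self.bits[p] for p in self._positions(key))

# One filter per sstable: consult it before reading the file from disk.
sstable_keys = [77, 80, 91, 97]
bloom = TinyBloom()
for k in sstable_keys:
    bloom.add(k)
print(bloom.maybe_contains(97))  # True  -> read the file
print(bloom.maybe_contains(25))  # False (almost surely) -> skip the file
```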
Evaluation: micro-benchmarks, low memory, small dataset, crash recovery, CPU and memory usage, aged file system, real-world workloads (YCSB), and NoSQL applications.
[Figure: YCSB throughput shown as ratios with respect to HyperLevelDB, plus total write I/O; annotated absolute values: 35.08, 25.8, 33.98, 22.41, 57.87, 34.06, 5.8, and 32.09 Kops/s, and 952.93 GB of total I/O.]
YCSB workloads: Load A - 100% writes; Run A - 50% reads, 50% writes; Run B - 95% reads, 5% writes; Run C - 100% reads; Run D - 95% reads (latest), 5% writes; Load E - 100% writes; Run E - 95% range queries, 5% writes; Run F - 50% reads, 50% read-modify-writes.
[Figure: YCSB throughput shown as ratios with respect to WiredTiger for the same workloads; annotated absolute values: 20.73, 9.95, 15.52, 19.69, 23.53, 20.68, 0.65, and 9.78 Kops/s, and 426.33 GB of total I/O.]
PebblesDB combines the low write I/O of WiredTiger with the high performance of RocksDB.
Conclusion: key-value stores are widely built on log-structured merge trees, and there have been several attempts to optimize them. PebblesDB combines algorithmic innovation (the FLSM data structure) with careful systems building.
https://github.com/utsaslab/pebblesdb
Backup slides.
Seek(target) positions an iterator at the smallest key which is >= target.
Example levels: level 0 - 1, 2, 100, 1000; level 1 - 1, 5, 10, 2000; level 2 - 5, 300, 500. Get(1) can stop at the first level that contains the key, but Seek(200) must consult every level to find the smallest key >= 200 (here 300, from level 2).
Seek(23) on the example FLSM: unlike a Get, all levels and the memtable need to be searched to find the smallest key >= 23.
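A minimal sketch of why seeks are costlier than gets: every run contributes an iterator that must be positioned and merged (runs here are plain sorted lists; in FLSM each file under the relevant guard is such a run):

```python
import bisect
import heapq

def seek(target, sorted_runs):
    """Position an iterator at the first key >= target in every run
    (memtable + the files that may contain the key), then merge them."""
    iters = []
    for run in sorted_runs:
        i = bisect.bisect_left(run, target)
        iters.append(iter(run[i:]))
    return heapq.merge(*iters)

runs = [[2, 8, 15, 23], [16, 32, 45, 65], [70, 90, 96, 99]]
it = seek(23, runs)
print([next(it) for _ in range(4)])  # [23, 32, 45, 65]
```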
Bloom filters in action: for Get(97) on level 1 (guards 15 and 70; files 1..12, 15..39, 77..97, 82..95), only the files under guard 70 are candidates. The in-memory bloom filter for 77..97 answers "possibly yes" (True) while the filter for 82..95 answers "definitely not" (False), so only one file is read from disk. PebblesDB reads at most one file per guard with high probability.
Parallel seeks: for Seek(85) on level 1, the files under guard 70 (77..97 and 82..95) are searched by separate threads in parallel. Seek-based compaction: during a seek-heavy workload, PebblesDB triggers extra compaction; this increases write I/O as a trade-off to improve seek performance.
These trade-offs all depend on one parameter, maxFilesPerGuard (default 2 in PebblesDB).
[Figure: write I/O ratio with respect to PebblesDB when inserting 10M, 100M, and 500M keys (value size 128 bytes); annotated absolute values: 7.2 GB, 100.7 GB, and 756 GB.]
[Figures: additional micro-benchmark results, plotted as throughput ratios with respect to HyperLevelDB for sequential writes, random writes, reads, range queries, deletes, and mixed workloads under various configurations, with absolute throughputs annotated on each chart.]
[Figure: YCSB throughput ratios with respect to HyperLevelDB in another configuration; annotated absolute values: 22.08, 21.85, 31.17, 32.75, 38.02, 7.62, 0.37, and 19.11 Kops/s, and 1349.5 GB of total I/O.]