Webinar Series: Triangulate your Storage Architecture with SvSAN Caching
Luke Pruen – Technical Services Director
What can you expect from this webinar? To answer a simple question: how can I create the perfect, balanced storage architecture to meet my capacity, footprint and performance needs, now and in the future?
Introducing StorMagic
What do we do? StorMagic SvSAN eliminates the need for physical SANs by exposing the storage of an industry standard server as a virtual SAN, thereby dramatically reducing CAPEX and OPEX.
How does SvSAN achieve this? StorMagic's virtual SAN converts the internal disk, flash and memory of industry standard servers into robust, cost-effective and flexible shared storage.
Where is this most applicable? SvSAN is deployed as hyperconverged infrastructure for multi-site enterprises and SMEs, and as a server-based storage array alternative to a traditional physical SAN.
Introducing StorMagic
• Global Customer Adoption: within 72 countries, organisations depend on StorMagic for server and storage infrastructure
• Across Many Verticals: retail, health, government, industrial, education, finance, pharma and many more
• 1 to Thousands of Sites: large and small deployments, from enterprises with 1000s of sites to SMEs with a single site
• Global Partner Network: wherever you are, StorMagic has resellers, integrators and server partners to meet your needs
What is a Storage Array? Wikipedia Definition: A disk array is a hardware element that contains a large group of hard disk drives (HDDs). It may contain several disk drive trays and has an architecture which improves speed and increases data protection. The system is run via a storage controller, which coordinates activity within the unit.
Storage Array Components
• SAN Presentation: iSCSI or Fibre Channel
• SAN Switch: Fibre Channel or Ethernet (iSCSI)
• Physical Storage Controller: storage controller CPU, memory and dedicated storage hardware
• Enterprise Drives: 10K or 15K SAS, SSD
Utilise an industry standard server as a Virtual SAN: a Virtual Storage Appliance runs on the hypervisor and takes the role of the storage controller for the server's internal drives (e.g. SSDs).
StorMagic SvSAN: Overview
“SvSAN turns the internal disk, SSD and memory of industry standard servers into highly available shared storage.”
Use Cases: Hyperconverged or Server-Based Storage Array
• Hyperconverged Infrastructure (HCI)
  - Shared storage and compute platform
  - Custom-build and avoid appliance over-provisioning
  - Perfect for remote, branch and private cloud services
• Server-Based Storage Array (Server SAN)
  - Custom-build active/active mirrored shared storage
  - A fraction of the cost of an off-the-shelf SAN
  - Flexibility to scale capacity and performance as needed
Triangulate your storage requirements: 3 Axes
Triangulate your requirements: Capacity
Objective: Capacity
• Capacity - High priority
• Footprint - Low priority
• Performance - Low priority
Solution: High capacity drives
• SATA 7.2k rpm: 65 – 100 IOPS
• SAS 10k/15k rpm: 140 – 210 IOPS
• Low cost per GB
Compromise: Performance & Footprint
• Requires more drives (spindles) to achieve performance; see the worked example below
• Larger server footprint
• Lower cost per GB, higher cost per IOPS
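To see why spindle count drives the footprint compromise, here is a back-of-the-envelope sketch in Python. The per-drive IOPS figures are the midpoints of the ranges on this slide; the 2,000 IOPS target is a hypothetical workload, not a measured requirement.

```python
# Back-of-the-envelope spindle maths using the per-drive IOPS figures above.
# All numbers are illustrative assumptions, not vendor specifications.

SATA_IOPS_PER_DRIVE = 80   # midpoint of the 65-100 IOPS range for 7.2k SATA
SAS_IOPS_PER_DRIVE = 175   # midpoint of the 140-210 IOPS range for 10k/15k SAS

def spindles_needed(target_iops: int, iops_per_drive: int) -> int:
    """Minimum drive count to reach a target IOPS from spindles alone."""
    return -(-target_iops // iops_per_drive)  # ceiling division

target = 2_000  # hypothetical workload requirement
print(f"7.2k SATA:   {spindles_needed(target, SATA_IOPS_PER_DRIVE)} drives")  # -> 25
print(f"10k/15k SAS: {spindles_needed(target, SAS_IOPS_PER_DRIVE)} drives")   # -> 12
```

At these figures, hitting the same IOPS target takes roughly twice as many SATA spindles as SAS, which is exactly the footprint and cost-per-IOPS trade-off the slide warns about.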
Triangulate your requirements: Footprint
Objective: Footprint
• Capacity - Low priority
• Footprint - High priority
• Performance - Low priority
Solution: Smaller servers
• Use smaller (lower-U) servers
• 2.5” drives
Compromise: Capacity & Performance
• Limits the number of drives
• Limits performance when using spinning disks
Triangulate your requirements: Performance
Objective: Performance
• Capacity - Low priority
• Footprint - Low priority
• Performance - High priority
Solution: SSD/Flash
• SSD/Flash: 8.6k to 10 million IOPS
• High cost per GB compared to magnetic disk
• Add more servers to the cluster
Compromise: Capacity & Resource
• High cost per GB
• Lower capacity servers
• Deduplication available, but heavy on server resources
Triangulate your requirements: How to balance all 3?
Priority: All high
• Capacity - Medium/High priority
• Footprint - Medium/High priority
• Performance - Medium/High priority
Priority: Two high
• Capacity - Low priority
• Footprint - High priority
• Performance - High priority
Priority: Mixed
• Capacity - Medium/Low priority
• Footprint - High priority
• Performance - Medium priority
Triangulate your requirements: No compromise
Objective: All three
• Capacity - High priority
• Footprint - High priority
• Performance - High priority
Solution: Caching
• Leverage high capacity SAS or SATA drives for capacity
• Leverage SSD/Flash & memory for performance
Compromise: None
Caching: The importance of caching
Virtualized environments suffer from the ‘I/O blender’ effect
• Multiple virtual machines share a set of disks
• The result is predominantly random I/O
• Magnetic drives provide poor random performance
• SSD & Flash storage are ideal for such workloads, but expensive
Working sets of data
• Driven by workloads, which are ever changing
• Refers to the data most frequently accessed
• Always relative to a time period (see the sketch below)
• Working set sizes evolve as workloads change
Caching
• Combats the I/O blender effect without the expense of all-Flash or all-SSD storage
• Working sets of data can be identified and elevated to cache
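One way to make "working set over a time period" concrete is a tracker that remembers when each block was last touched. This is a minimal Python sketch; the 300-second window and integer block IDs are illustrative assumptions, not how SvSAN's tracker is implemented.

```python
# Minimal sketch of tracking a working set over a sliding time window.
# The 300-second window and integer block IDs are illustrative assumptions.
import time

class WorkingSetTracker:
    def __init__(self, window_seconds: float = 300.0):
        self.window = window_seconds
        self.last_access: dict[int, float] = {}  # block id -> last access time

    def record_access(self, block: int) -> None:
        self.last_access[block] = time.monotonic()

    def working_set(self) -> set[int]:
        """Blocks touched within the window: candidates for promotion to cache."""
        cutoff = time.monotonic() - self.window
        return {b for b, t in self.last_access.items() if t >= cutoff}
```

Shrinking the window yields a smaller, hotter working set; growing it captures more data at the cost of cache space, which is why the slide ties working sets to a time period.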
Optimising Storage: Write Caching
Stage 1
• All new data is written to SSD
• Provides low latency and high IOPS
• Data is marked as “dirty” as it has not yet been committed to the backing store
Stage 2
• The write operation is acknowledged immediately to the server/application
Stage 3
• The “dirty” data is reordered and grouped based on disk locality
• The data is destaged and written out to the backing store as sequentially as possible
Stage 4
• The cache is notified when data has been successfully written to the backing store
• The data in cache is marked as “clean” and remains in cache until the space is required
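The four stages map naturally onto a classic write-back cache. Below is a minimal Python sketch; the class and method names are illustrative, and this is not SvSAN's implementation.

```python
# Minimal write-back cache sketch mirroring the four stages above.
# Class and method names are illustrative; this is not SvSAN's implementation.

class WriteBackCache:
    def __init__(self, backing_store):
        self.backing_store = backing_store
        self.cache: dict[int, bytes] = {}  # block address -> data held on SSD
        self.dirty: set[int] = set()       # blocks not yet committed downstream

    def write(self, address: int, data: bytes) -> None:
        # Stage 1: land the write in fast media and mark it dirty.
        self.cache[address] = data
        self.dirty.add(address)
        # Stage 2: returning here acknowledges the write immediately,
        # before the backing store has seen it.

    def destage(self) -> None:
        # Stage 3: order dirty blocks by address so the backing store sees
        # writes that are as sequential as possible.
        for address in sorted(self.dirty):
            self.backing_store.write(address, self.cache[address])
        # Stage 4: data is committed; mark it clean but keep it cached so
        # subsequent reads still hit the SSD.
        self.dirty.clear()

    def read(self, address: int) -> bytes:
        # Serve from cache when possible, fall back to the backing store.
        if address in self.cache:
            return self.cache[address]
        return self.backing_store.read(address)
```

The key design point is that acknowledgement (Stage 2) is decoupled from commitment (Stage 4): the application sees SSD latency while the spinning disks receive large, mostly sequential destage writes.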
Caching: Predictive read caching
Intelligent read caching algorithm
• All read I/Os are monitored and analyzed
• Most frequently used data: “hot” data
• Cache tiers are populated based on access frequency (see the sketch below)
Tiering
• RAM: most frequently accessed data
• SSD/Flash: next most frequently accessed data
• HDD: infrequently accessed data, i.e. “cold” data
Tracker Module
• Identifies transient data to prevent cache pollution
• Promotes and demotes data through the tiers
• Intelligently adjusts the tracker window to the workload
Sizing
• Assign cache sizes to meet requirements
• Grow caches as working sets change
• Use any combination of memory, SSD/Flash and disk
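Frequency-based tier placement can be sketched in a few lines of Python. The tier sizes below are illustrative assumptions, and the ranking shown here is a simplification of any real tracker module.

```python
# Minimal sketch of frequency-based tier placement (RAM over SSD over HDD).
# The tier sizes are illustrative assumptions.
from collections import Counter

class TieredReadCache:
    def __init__(self, ram_slots: int = 100, ssd_slots: int = 1000):
        self.hits = Counter()      # per-block access frequency
        self.ram_slots = ram_slots
        self.ssd_slots = ssd_slots

    def record_read(self, block: int) -> None:
        self.hits[block] += 1

    def placement(self) -> dict[int, str]:
        """Hottest blocks land in RAM, the next-hottest on SSD, the rest on HDD."""
        tiers: dict[int, str] = {}
        for rank, (block, _count) in enumerate(self.hits.most_common()):
            if rank < self.ram_slots:
                tiers[block] = "RAM"
            elif rank < self.ram_slots + self.ssd_slots:
                tiers[block] = "SSD"
            else:
                tiers[block] = "HDD"
        return tiers
```

Re-ranking as counts change is what promotes newly hot data and demotes cooling data through the tiers; pruning counts over time (the tracker window) is what keeps one-off transient reads from polluting the cache.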
Caching: Read Ahead & Pinning
Read-ahead mode
• Identifies sequential, interleaved I/O requests; I/O blender effect aware!
• Detects sequential read streams to allow read ahead (see the sketch below)
• Pre-fetches data into memory
Data pinning mode
• Pin specific data/workloads in memory
• Delivers the most efficient read performance
• Databases, VDI, frequently repeated operations
• Manage multiple pin groups
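Detecting sequential streams inside blended, interleaved I/O usually means tracking several candidate streams at once. A minimal Python sketch follows; the stream limit and trigger threshold are illustrative assumptions.

```python
# Minimal sketch of sequential-stream detection for read-ahead.
# Tracking many streams at once is what lets the detector cope with the
# 'I/O blender'; the stream limit and trigger threshold are assumptions.

class ReadAheadDetector:
    def __init__(self, max_streams: int = 32, trigger: int = 3):
        self.streams: dict[int, int] = {}  # next expected block -> run length
        self.max_streams = max_streams
        self.trigger = trigger             # run length that starts prefetching

    def on_read(self, block: int) -> bool:
        """Return True when this read should trigger a read-ahead prefetch."""
        run = self.streams.pop(block, 0) + 1
        if len(self.streams) < self.max_streams:
            self.streams[block + 1] = run  # expect the stream to continue here
        return run >= self.trigger
```

For example, reads at blocks 10, 11, 12 build a run of three and trigger a prefetch, even if reads from other VMs arrive in between, because each stream is tracked independently.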
Caching: Example Platform
Hardware
• Cisco UCS C220 M3
• 1 x Intel Xeon CPU E5-2699 v4 @ 2.20GHz (22 cores, 44 threads)
• 128GB RAM per host
Networking
• 2 x 1Gb NICs: VM network
• 2 x 10Gb direct connect: SvSAN iSCSI/mirror traffic
Storage
• 1 x RAID 5 of 3 x 1.2TB 10K SAS disks
• 1 x 200GB Samsung SSD
Caching: Customer Workload
Workload
• 12 Virtual Machines
• 78 applications
• Backup service

                     Read     Write
Read/Write %         77%      23%
Sequential %         49%      39%
Average Per Day      991 GB   294 GB
Average Block Size   58 KB    54 KB
Average IOPS         212      138

[Charts: block size distribution (hit count by block size, 1 KB to 4 MB); throughput and IOPS by time of day (UTC); locality of access (number of accesses, logarithmic scale, across the ~2.6 TB address space)]