
Demystifying Benchmarks: How to Use Them to Better Evaluate Databases (PowerPoint PPT Presentation)



  1. Flexible transactional scale for the connected world.
     Demystifying Benchmarks: How to Use Them to Better Evaluate Databases
     Peter Friedenbach, Performance Architect | Clustrix

  2. Demystifying Benchmarks: Outline
     o A Brief History of Database Benchmarks
     o A Brief Introduction to Open Source Database Benchmark Tools
     o How to Evaluate Database Performance
       – Best Practices and Lessons Learned

  3. First, a little history … In the beginning, there were the “TPS Wars.”

  4. Debit/Credit: The 1st Attempt
     o Published in 1985 by “Anon. et al.” (Jim Gray)
     o First publication of a database performance benchmark
     o Novelties of DebitCredit:
       – ACID property transactions
       – Price/Performance
       – Response Time Constraints
       – Database Scaling Rules

  5. The Call for an Industry Standard
     o Transaction Processing Performance Council (TPC)
       – Founded in 1988, the TPC was chartered to establish industry standards out of the madness.
     o TPC Timeline:
       1988 : TPC Established
       1989 : TPC-A Approved (Formalizes Debit/Credit)
       1990 : TPC-B Approved (Database-only version of TPC-A)
       1992 : TPC-C Approved (Replaces TPC-A OLTP workload)
       1995 : TPC-D Approved (1st Decision Support Workload); TPC-A & TPC-B retired
       1999 : TPC-H Approved (Replaces TPC-D Workload)
       2000 : TPC-C v5 Approved (Major revision to TPC-C)
       2006 : TPC-E Approved (Next Generation OLTP workload)
       2009 : First TPC Technology Conference on Performance Evaluation & Benchmarking (held in conjunction with VLDB)
       2012 : TPC-VMS Approved (1st Virtualized Database benchmark)
       2014 : TPCx-HS Approved (1st Hadoop-based benchmark)
       2015 : TPC-DS Approved (Next Generation Decision Support Benchmark)
       2016 : TPCx-V Approved (Virtual DB Server benchmark)
       2016 : TPCx-BB Published (Hadoop Big Data benchmark)

  6. Transaction Processing Performance Council
     o The Good:
       – Established the rules of the game
       – For the first time, competitive performance claims could be compared
       – Audited results
       – Standard workloads focused the industry and drove performance improvements
     o The Bad:
       – Expensive to play
       – “Benchmarketing” and gamesmanship
       – Dominated by vendors
         • Hardware vendors
         • Database vendors
       – Slow to evolve to a changing marketplace

  7. The Impact of the TPC

  8. What happened to the TPC?

  9. Open Source Database Benchmarking Tools
     o Sysbench
       – Open source toolkit, moderated by Alexey Kopytov at Percona
       – “sysbench is a modular, cross-platform and multi-threaded benchmark tool for evaluating OS parameters that are important for a system running a database under intensive load.”
       – Implements multiple workloads designed to test the CPU, disk, memory, and database capabilities of a system
       – Database workload allows for a mixture of reads (singleton selects and range queries) and writes (inserts, updates, and deletes)
       – Sysbench is popular in the MySQL marketplace
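
To make the sysbench workloads above concrete, here is a minimal sketch of a Python harness driving the CPU, file I/O, and OLTP tests. It assumes sysbench 1.0 or newer (with the bundled oltp_read_write script) is on the PATH; the host name, credentials, thread counts, and table sizes are placeholders to adjust for your own environment.

```python
# Minimal sketch: drive sysbench system and database workloads from Python.
# Assumes sysbench 1.0+ on PATH and a MySQL "sbtest" database reachable with
# the placeholder credentials below (adjust for your environment).
import subprocess

def run(cmd):
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)

# CPU test: prime-number calculation across threads.
run(["sysbench", "cpu", "--threads=16", "--time=60", "run"])

# File I/O test: prepare test files, run a random read/write mix, clean up.
run(["sysbench", "fileio", "--file-total-size=4G", "prepare"])
run(["sysbench", "fileio", "--file-total-size=4G", "--file-test-mode=rndrw", "--time=120", "run"])
run(["sysbench", "fileio", "--file-total-size=4G", "cleanup"])

# OLTP test: load tables, then run a mixed read/write workload.
mysql = ["--mysql-host=db.example.com", "--mysql-user=sbtest",
         "--mysql-password=secret", "--mysql-db=sbtest"]
oltp = ["sysbench", "oltp_read_write", "--tables=10", "--table-size=1000000"] + mysql
run(oltp + ["prepare"])
run(oltp + ["--threads=32", "--time=300", "--report-interval=10", "run"])
run(oltp + ["cleanup"])
```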

  10. Open Source Database Benchmarking Tools
     o YCSB (Yahoo! Cloud Serving Benchmark)
       – Open source toolkit, moderated by Brian Cooper at Google
       – “YCSB is a framework and common set of workloads for evaluating the performance of different ‘key-value’ and ‘cloud’ serving stores.”
       – Multi-threaded driver exercising get and put operations against an object store
       – YCSB is popular with the NoSQL marketplace
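
As a hedged illustration of the YCSB load/run cycle, the sketch below shells out to the ycsb launcher for an initial load followed by three of the standard core workloads. The installation path, the mongodb binding, and the connection property are assumptions; substitute the binding and properties for whichever store you are testing.

```python
# Minimal sketch: drive a YCSB load phase and several run phases from Python.
# Assumes an unpacked YCSB distribution at the placeholder path below with the
# MongoDB binding, and a MongoDB instance at the placeholder URL.
import subprocess

YCSB_HOME = "/home/user/ycsb"
binding = "mongodb"
props = ["-p", "mongodb.url=mongodb://db.example.com:27017/ycsb",
         "-p", "recordcount=1000000", "-p", "operationcount=1000000"]

# Load phase: insert the initial records.
subprocess.run(["bin/ycsb", "load", binding, "-P", "workloads/workloada",
                "-threads", "16"] + props, cwd=YCSB_HOME, check=True)

# Run phases: workloada is a 50/50 read/update mix, workloadb is read-heavy,
# workloadc is read-only.
for workload in ("workloads/workloada", "workloads/workloadb", "workloads/workloadc"):
    subprocess.run(["bin/ycsb", "run", binding, "-P", workload,
                    "-threads", "16"] + props, cwd=YCSB_HOME, check=True)
```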

  11. Open Source Database Benchmarking Tools
     o Others – “TPC-like” workloads live on
       • DBT2 (MySQL Benchmark Tool Kit)
       • DebitCredit (TPC-A / TPC-B like)
       • OrderEntry (TPC-C like)
     o OLTPBench
       • https://github.com/oltpbenchmark/oltpbench
     o Others?

  12. How to Evaluate Database Performance
     o #1 – Know your objective.
       – What are you trying to measure/test?
         • OLTP performance: capacity and throughput
         • Data analytics: query performance
         • Do you need full ACID property transactions?
         • … and other questions

  13. How to Evaluate Database Performance
     o #2 – Choose your approach.
       – Option 1: Rely on published results
         • Words of advice: trust but verify!
         • Be wary of competitive benchmark claims
         • Without the TPC, there is no standard for comparison
       – Option 2: Leverage open source benchmarks
         • Leverage Sysbench and/or YCSB mixed workloads
         • Create your own custom mix as appropriate
       – Option 3: Model your own workload (“proof of concept”)
         • Particularly useful if you have existing data and existing query profiles

  14. How to Evaluate Database Performance
     o #3 – A data point is not a curve. (Common mistake.)
       – Performance is a tradeoff of throughput versus latency. Design your tests with a variable in mind.
       – [Chart: performance curves plotting latency (response time) against throughput]
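
To turn a single data point into a curve, one approach is to sweep client concurrency and record both throughput and tail latency at each step. The sketch below is a minimal, self-contained illustration: one_transaction is a stand-in for a real database call (alternatively, run a separate sysbench/YCSB pass per concurrency level and collect the reported numbers).

```python
# Minimal sketch: build a throughput-vs-latency curve instead of a single data point.
# Sweeps client concurrency against a placeholder workload function; replace
# one_transaction() with real calls against the system under test.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def one_transaction():
    """Placeholder for a real transaction against the system under test."""
    time.sleep(0.005)  # stand-in for query latency

def measure(concurrency, duration_s=10):
    latencies = []
    deadline = time.time() + duration_s

    def worker():
        while time.time() < deadline:
            start = time.perf_counter()
            one_transaction()
            latencies.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for _ in range(concurrency):
            pool.submit(worker)
    throughput = len(latencies) / duration_s
    p95 = statistics.quantiles(latencies, n=20)[-1]  # 95th percentile latency
    return throughput, p95

for c in (1, 2, 4, 8, 16, 32, 64):
    tput, p95 = measure(c)
    print(f"concurrency={c:3d}  throughput={tput:8.1f} tps  p95 latency={p95 * 1000:6.1f} ms")
```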

  15. How to Evaluate Database Performance
     o #4 – Understand where there’s a bottleneck. (Common mistake.)
       – Where systems can bottleneck:
         • Hardware (CPU, disk, network)
         • Database (internal locks/latches, buffer managers, transaction managers, …)
         • Application (data locks)
         • Driver systems
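
A hedged sketch of the hardware side of bottleneck hunting: sample CPU, disk, and network counters on the database host while the benchmark runs and see which resource saturates first. It assumes the psutil package is installed; database-internal bottlenecks (locks, latches, buffer managers) still require the database's own instrumentation, and driver-side saturation needs the same sampling on the client machines.

```python
# Minimal sketch: sample basic hardware counters while a benchmark runs, to see
# whether CPU, disk, or network is saturating. Requires the psutil package.
import psutil

def sample(interval_s=5, samples=12):
    prev_disk = psutil.disk_io_counters()
    prev_net = psutil.net_io_counters()
    for _ in range(samples):
        cpu = psutil.cpu_percent(interval=interval_s)  # blocks for interval_s
        disk = psutil.disk_io_counters()
        net = psutil.net_io_counters()
        read_mb = (disk.read_bytes - prev_disk.read_bytes) / 1e6 / interval_s
        write_mb = (disk.write_bytes - prev_disk.write_bytes) / 1e6 / interval_s
        net_mb = (net.bytes_sent + net.bytes_recv
                  - prev_net.bytes_sent - prev_net.bytes_recv) / 1e6 / interval_s
        print(f"cpu={cpu:5.1f}%  disk read={read_mb:7.1f} MB/s  "
              f"write={write_mb:7.1f} MB/s  net={net_mb:7.1f} MB/s")
        prev_disk, prev_net = disk, net

sample()
```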

  16. How to Evaluate Database Performance
     o #5 – Limit the number of variables. In any test, there are:
       – Three fundamental system variables
         • Hardware, operating system, and database system
       – Driver mode
         • On box versus external client
       – Database design variables
         • Connectivity (ODBC, JDBC, session pools, proprietary techniques, …)
         • Execution model (session-less, prepare/exec, stored procedures, …)
         • # of tables, # of columns, types of columns
       – Multiple test variables
         • Database scale size
         • Concurrent sessions/streams
         • Query complexity
     Real performance work is an exercise of control.
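
One way to keep those variables under control is to enumerate the test matrix explicitly and change only one variable from a fixed baseline per run. The sketch below is illustrative only: the variable names, baseline values, and run_test stub are assumptions standing in for a real benchmark launcher.

```python
# Minimal sketch: enumerate a controlled test matrix so only one variable changes
# between neighboring runs. Variable names and values are illustrative only.
from itertools import product

baseline = {"scale": 100, "sessions": 32, "query_mix": "90:10"}
sweeps = {
    "scale": [10, 100, 1000],                   # database scale size
    "sessions": [8, 32, 128],                   # concurrent sessions/streams
    "query_mix": ["100:0", "90:10", "80:20"],   # read:write ratio
}

def run_test(config):
    """Placeholder: launch the benchmark with this configuration and record results."""
    print("running", config)

# Hold everything at the baseline and sweep one variable at a time,
# rather than running the full cross product of every combination.
for variable, values in sweeps.items():
    for value in values:
        run_test(dict(baseline, **{variable: value}))

# For reference, the full factorial matrix would be much larger:
full_matrix = list(product(*sweeps.values()))
print(len(full_matrix), "combinations in the full cross product")
```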

  17. How to Evaluate Database Performance
     o #6 – Scalability testing: the quest for “linear scalability”
       – A workload will scale only if sized appropriately.
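
A minimal sketch of "sized appropriately", under the assumption that the working set should grow with concurrency so that scaling runs measure capacity rather than contention on a tiny dataset. The proportionality rule (rows per thread) and the sysbench parameters below are illustrative, not a formal scaling rule of the kind the TPC specifications define.

```python
# Minimal sketch: keep the working set proportional to concurrency when testing
# for "linear scalability". The sizing rule here is an assumption for
# illustration; real scaling rules come from the workload's own specification.
import subprocess

ROWS_PER_THREAD = 250_000  # illustrative sizing rule
mysql = ["--mysql-host=db.example.com", "--mysql-user=sbtest",
         "--mysql-password=secret", "--mysql-db=sbtest"]

for threads in (8, 16, 32, 64):
    table_size = threads * ROWS_PER_THREAD
    base = ["sysbench", "oltp_read_write", "--tables=16",
            f"--table-size={table_size}"] + mysql
    subprocess.run(base + ["prepare"], check=True)
    subprocess.run(base + [f"--threads={threads}", "--time=300", "run"], check=True)
    subprocess.run(base + ["cleanup"], check=True)
```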

  18. How to Evaluate Database Performance
     o #7 – The myth of “representable workloads.”
       – Benchmarks are not “representable workloads”; the complexity of the benchmark does not determine the goodness of the benchmark.
       – Good benchmark performance is “necessary but not sufficient” for good application performance.

  19. How I Use the Tools Available
     o To assess “system” health, I use:
       – CPU processor: sysbench cpu
       – Disk subsystem: sysbench fileio
       – Network subsystem: iperf
     o To assess “database” health, I use:
       – Sysbench for ACID transactions: point selects, point updates, and simple mixes (90:10 or 80:20)
       – YCSB for non-ACID transactions: workloadc (read-only), workloada and workloadb (read/write mixes)
     o To assess “database” transaction capabilities, I use:
       – DebitCredit and OrderEntry (“TPC-like database-only workloads”)
     o To model application-specific problems, I will sometimes leverage any or all of the above.
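
For the network subsystem check, here is a small sketch of driving iperf3 from Python and pulling the aggregate throughput out of its JSON report. It assumes iperf3 is installed on both ends and that an iperf3 server ("iperf3 -s") is already running on the database host; the host name is a placeholder, and the field names follow iperf3's -J JSON output.

```python
# Minimal sketch: check the network between the driver and database hosts with
# iperf3 before blaming the database for poor throughput. Assumes "iperf3 -s"
# is running on the (placeholder) database host named below.
import json
import subprocess

result = subprocess.run(
    ["iperf3", "-c", "db.example.com", "-t", "30", "-P", "4", "-J"],
    capture_output=True, text=True, check=True)

report = json.loads(result.stdout)
bits_per_second = report["end"]["sum_received"]["bits_per_second"]
print(f"aggregate throughput: {bits_per_second / 1e9:.2f} Gbit/s")
```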
