Energy and Performance – Can a Wimpy-Node Cluster Challenge a Brawny Server?


  1. Energy and Performance – Can a Wimpy-Node Cluster Challenge a Brawny Server? Daniel Schall, Theo Härder, University of Kaiserslautern, Germany

  2. Motivation • "Analyzing the Energy Efficiency of a Database Server", Tsirogiannis, Harizopoulos, and Shah, SIGMOD 2010 • "Distributed Computing at Multi-dimensional Scale", Alfred Z. Spector, keynote at MIDDLEWARE 2008

  3. Motivation • "Average CPU utilization of more than 5,000 servers" – L. A. Barroso and U. Hölzle: The Case for Energy-Proportional Computing • state-of-the-art servers waste lots of energy

  4. Energy Efficiency • EE = relative load / relative energy consumption • best efficiency at 100 % load, i.e. efficiency = 1.0 • efficiency quickly drops:
     load:        90 %   70 %   50 %   30 %   10 %
     efficiency:  90 %   73 %   55 %   35 %   14 %
     • with the same energy budget, the server performs only 14 % of the work at 10 % load • and takes 10 times as long
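
Not on the original slide: spelling out the definition makes the 14 % figure reproducible. The relative power at 10 % load (≈ 0.71) is back-computed from that figure, not a measured value.

```latex
% EE = relative load / relative energy consumption (definition from the slide)
EE(u) = \frac{u}{P(u)/P_{\text{peak}}}, \qquad
EE(0.10) \approx \frac{0.10}{0.71} \approx 0.14
```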

  5. Energy Proportionality • energy consumption proportional to system utilization
     load:                               90 %   70 %   50 %   30 %   10 %
     efficiency (energy-proportional):  100 %  100 %  100 %  100 %  100 %
     efficiency (actual server):         90 %   73 %   55 %   35 %   14 %
     [Figure: power consumption vs. system utilization, contrasting measured power@utilization with ideal energy-proportional behavior]
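
Implied by the definition above, though not written out on the slide: if power scales linearly with utilization, efficiency stays at 100 % for every load level.

```latex
% ideal energy-proportional behavior
P(u) = u \cdot P_{\text{peak}} \;\Rightarrow\;
EE(u) = \frac{u}{P(u)/P_{\text{peak}}} = 1 \quad \text{for } 0 < u \le 1
```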

  6. The WattDB Approach
     1 beefy server: • single server • static configuration
     n wimpy servers: • cluster of nodes • dynamically adjust to the current workload • turn nodes on/off according to demand • migrate data to balance the workload
     [Figure: power consumption vs. system utilization, the cluster approximating energy-proportional behavior]
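
A minimal sketch of the "turn nodes on/off according to demand" idea; this is not the actual WattDB controller, and the capacity, headroom, and throughput numbers are illustrative assumptions.

```python
import math

# Illustrative sketch only -- not the actual WattDB controller logic.
# Pick how many wimpy nodes should be online so that aggregate capacity
# covers the current demand plus some headroom.

NODE_CAPACITY_TPS = 100.0   # throughput one wimpy node can sustain (assumed)
HEADROOM = 0.2              # spare capacity to absorb short load spikes (assumed)
TOTAL_NODES = 10            # cluster size from the evaluation setup

def nodes_needed(load_tps: float) -> int:
    """Number of nodes to power on for the given aggregate load."""
    target = math.ceil(load_tps * (1.0 + HEADROOM) / NODE_CAPACITY_TPS)
    return min(TOTAL_NODES, max(1, target))

# A rising load powers nodes on; a falling load lets nodes be drained
# (partitions migrated away) and switched off again.
for load in (50, 340, 760, 120):
    print(load, "tps ->", nodes_needed(load), "nodes online")
```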

  7. The WattDB Approach – database workloads vs. elasticity
     database workloads: • transactional consistency • shared data • large data volume • read/write workloads
     elasticity: • automatic scale-out and scale-in • fit storage and processing to the workload • minimize impact on performance • save energy
     ⇒ migrate data while keeping it available ⇒ distribute queries on a variable number of nodes ⇒ balance performance and energy consumption

  8. The WattDB Approach – Implementation: combine elastic storage and elastic processing into an elastic DBMS
     elastic storage: • migrate ownership with the data • enable adaptation to both IO and CPU demands
     elastic processing: • shared disk "with a twist" • physiological repartitioning for fast adaptation

  9. The WattDB Approach • physiological partitioning • self-contained mini-partitions (32 MB) • each with a primary-key index • top meta-index over key ranges • modified MVCC for updates
     [Figure: meta-index over key ranges pointing to 32 MB mini-partitions, each with its own index (IX)]
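
A minimal sketch, under assumptions not stated on the slide, of how a top meta-index over key ranges could route a key to its self-contained 32 MB mini-partition; the class and method names are hypothetical, not WattDB's.

```python
import bisect
from dataclasses import dataclass, field

# Illustrative sketch of the key-range meta-index; not the WattDB implementation.

@dataclass
class MiniPartition:
    low_key: int     # inclusive lower bound of the partition's key range
    node_id: int     # node currently owning the partition (ownership migrates with the data)
    pk_index: dict = field(default_factory=dict)  # stands in for the local primary-key index

class MetaIndex:
    """Top-level index over key ranges; maps a key to its mini-partition."""
    def __init__(self, partitions: list[MiniPartition]):
        self.partitions = sorted(partitions, key=lambda p: p.low_key)
        self.bounds = [p.low_key for p in self.partitions]

    def lookup(self, key: int) -> MiniPartition:
        pos = bisect.bisect_right(self.bounds, key) - 1
        return self.partitions[pos]

# Usage: route a query for key 4711 to the owning node, then probe its local index.
meta = MetaIndex([MiniPartition(0, node_id=1), MiniPartition(10_000, node_id=2)])
part = meta.lookup(4711)       # -> partition starting at key 0, owned by node 1
row = part.pk_index.get(4711)  # local primary-key lookup (None here: the toy partition is empty)
```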

  10. The WattDB Approach • extrapolate time series from monitoring data • generate a forecast over the next 60 minutes ⇒ from reactive to proactive
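
The slide does not say which forecasting model is used; as an assumption, a simple linear extrapolation of recent per-minute load samples illustrates forecasting the next 60 minutes so that nodes can be powered on before the load arrives.

```python
# Illustrative sketch: the actual WattDB forecasting model is not specified on the slide.
# Fit a straight line to recent per-minute load samples and extrapolate 60 minutes ahead.

def forecast_load(samples: list[float], horizon: int = 60) -> list[float]:
    """Linear extrapolation of a per-minute load time series."""
    n = len(samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples)) / \
            sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return [max(0.0, intercept + slope * (n + t)) for t in range(horizon)]

# A rising load trace yields a rising forecast, so the cluster can scale out proactively.
print(forecast_load([100, 110, 130, 160, 200])[:3])
```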

  11. The WattDB Approach – so far: • promising results compared to static clusters • better energy efficiency • without sacrificing too much performance
     Question of the day: • compare the prototype with a state-of-the-art server • performance and energy efficiency • realistic workloads

  12. Cluster vs. BigServer
     Cluster: • 10x Intel Atom D510 CPU @ 1.66 GHz • 20/40 cores • 20 MB of L2 cache • 20 GB DRAM
     BigServer: • 2x Intel Xeon X5670 @ 2.93 GHz • 12/24 cores • 24 MB of L2 cache • 24 GB RAM

  13. Performance Figures • similar performance on paper, i.e. in theory

  14. Power Consumption – BigServer exhibits • higher energy consumption • no energy proportionality

  15. Experimental Setup • OLAP and OLTP experiments on both systems
     OLAP: • TPC-H dataset with SF 300 • LINEITEM and ORDERS are partitioned • TPC-H-like queries
     OLTP: • TPC-C dataset with SF 1,000 • TPC-C-like queries
     • the number of parallel DB clients determines the workload • think times to avoid a performance-only benchmark for the dynamic workload
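
The slide does not describe the client driver; a minimal sketch, assuming a closed-loop client with randomized think times (the function names and the think-time distribution are assumptions, not from the paper):

```python
import random
import time

# Illustrative sketch of a closed-loop benchmark client with think times;
# the actual driver used in the evaluation is not described on the slide.

def run_client(execute_query, duration_s: float, mean_think_s: float = 1.0) -> int:
    """Issue queries for duration_s seconds, pausing a random think time between them.

    execute_query: callable running one TPC-C/TPC-H-like query against the DBMS (assumed)
    Returns the number of completed queries.
    """
    completed = 0
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        execute_query()
        completed += 1
        time.sleep(random.expovariate(1.0 / mean_think_s))  # think time keeps load below saturation
    return completed

# Example: a dummy query that sleeps 5 ms stands in for a real transaction.
if __name__ == "__main__":
    n = run_client(lambda: time.sleep(0.005), duration_s=2.0, mean_think_s=0.1)
    print("completed queries:", n)
```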

  16. Performance-centric Experiments (workload scaled by the number of DB clients)
     OLAP performance: • server generally faster • server better suited for peak workloads
     OLTP performance: • server clearly wins • especially under heavy utilization

  17. Energy-centric Experiments • OLTP, target response time: 200 msec
     [Figures: runtime and energy consumption]
     • performance and power consumption of the server are lower • cluster exhibits high friction losses

  18. Energy-centric Experiments • OLAP, target response time: 20 sec
     [Figures: runtime and energy consumption]
     • the server's idle time is penalized • the cluster is more energy efficient

  19. Dynamic Workload • workload trace derived from DB performance monitoring traces • workload changes every 5 minutes • 1:15 hours total runtime • simulates real-world database utilization

  20. Dynamic Workload • OLTP
     [Figures: runtime, power consumption, and energy consumption]

  21. Dynamic Workload • OLTP • cluster is more energy efficient ⇒ ½ the energy consumption • server is a lot faster ⇒ 1.4 times the throughput • forecasting trades performance for energy efficiency

  22. Dynamic Workload • OLAP
     [Figures: runtime, power consumption, and energy consumption]

  23. Dynamic Workload • OLAP: the forecasting cluster • challenges the server's performance ⇒ 78 % of the throughput • at less energy consumption ⇒ 40 % of the energy

  24. Dynamic Workload • Energy Delay Product: EDP = energy consumption × runtime, measured in joule-seconds (J·s = W·s²) • forecasting pays off (better performance, slightly higher energy consumption) • predictability/forecasting crucial for OLTP • is the energy efficiency worth the performance drop?
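
As a reading aid (the definition is on the slide, the rearrangement is not): energy and runtime enter the EDP symmetrically, so halving the energy while doubling the runtime leaves the EDP unchanged, which is why the metric is used to weigh the cluster's energy savings against its performance drop.

```latex
\mathrm{EDP} = E \cdot t \quad [\mathrm{J\,s} = \mathrm{W\,s^2}], \qquad
\frac{E}{2} \cdot 2t = E \cdot t
```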

  25. Conclusion • no use for performance-centric workloads • energy savings at low to midrange workloads • especially for OLAP • OLTP harder to make energy proportional • predictable workloads needed • forecasting crucial • proactively prepare for upcoming workloads • OLAP better suited than OLTP • streaming • less close to data
     http://wattdb.de
     Thank You!
