

  1. Incentivizing Self-Capping to Increase Cloud Utilization. Mohammad Shahrad, Cristian Klein, Liang Zheng, Mung Chiang, Erik Elmroth, David Wentzlaff. September 25, 2017

  2. Installation cost of a datacenter: ~$100M [1]. Google traces [2]: 40% CPU utilization, 53% memory utilization. Low utilization affects energy efficiency and provider competitiveness.
  [1] J. Koomey. A Simple Model for Determining True Total Cost of Ownership for Data Centers. Uptime Institute White Paper, Version 2, 2007.
  [2] C. Reiss, A. Tumanov, G. R. Ganger, R. H. Katz, M. A. Kozuch. Towards understanding heterogeneous clouds at scale: Google trace analysis. Technical Report ISTC-CC-TR-12-101, Carnegie Mellon University, Pittsburgh, PA, USA, Apr. 2012.

  3. Workload Matters. Jan. to Mar. 2013, 20,000-server clusters [1].
  [Utilization plots contrasting a cluster running large, continuous batch workloads with one running a mix of workloads including online services.]
  [1] Barroso, L. A., Clidaras, J., & Hölzle, U. (2013). The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines. Synthesis Lectures on Computer Architecture, 8(3), 1-154.

  4. Dealing with Low Utilization
  (1) More efficient resource provisioning:
  • Better resource sharing/reclamation (e.g. Borg)
  • Antagonist co-location
  • Resource overbooking
  (2) Improved deployment models:
  • Resource bidding (e.g. spot instances)
  • Burstable instances
  • Long-term SLOs
  • Availability knob

  5. Managing Uncertainty is Fundamentally Challenging. Demand fluctuations, spare capacity, QoS / SLOs. Cloud services have become more and more elastic. Can some of the burden be offloaded to tenants?

  6. Motivating Tenants to Have Fewer Fluctuations
  • Provide a mechanism to control capacity demand fluctuations: graceful degradation
  • Economic incentives to change behavior

  7. Graceful Degradation (GD) Methodology
  • Dynamic Adaptive Streaming over HTTP (DASH)
  • Brownout self-adaptation: maintain response time by deactivating non-essential content, e.g. online recommendations (they make revenue but are compute-heavy), as sketched below
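A minimal sketch of a brownout-style dimmer controller in Python, to make the self-adaptation idea concrete; the response-time target, the gain, and all names are illustrative assumptions, not the exact controller used in the talk:

    # Proportional "dimmer" controller: when responses are slow, serve the
    # optional, compute-heavy content (e.g. recommendations) to fewer requests;
    # when there is headroom, serve it to more.
    class BrownoutController:
        def __init__(self, target_rt_ms=500.0, gain=0.001, dimmer=1.0):
            self.target_rt_ms = target_rt_ms  # response-time setpoint (assumed value)
            self.gain = gain                  # proportional gain (assumed value)
            self.dimmer = dimmer              # probability of serving optional content

        def update(self, measured_rt_ms):
            # Fast responses (positive error) grow the dimmer; slow ones shrink it.
            error = self.target_rt_ms - measured_rt_ms
            self.dimmer = min(1.0, max(0.0, self.dimmer + self.gain * error))
            return self.dimmer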

  8. How to flatten capacity demand? Graceful degradation: cutting the peaks. Pricing model: filling the valleys.

  9. Shaping the Capacity Demand
  [Plot: aggregate CPU utilization (THz) over the first week of Aug. 2013, annotated with the delivery limit C_max, on-demand capacity C_d (charged based on usage, price p_d), reserved capacity C_b (always charged, price p_b), C_min, and the point where GD activates.]
  • GD helps shape the peaks.
  • A globally dynamic price pair with p_b < p_d helps shape the valleys.
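A minimal Python sketch of the two-part charge implied by this slide, assuming a simple single-period billing formula (the function name and argument handling are illustrative, not the paper's exact cost model):

    # Reserved capacity C_b is always charged at price p_b; usage between C_b and
    # the delivery limit C_max is charged at price p_d, based on what is actually used.
    def tenant_charge(usage, c_b, c_max, p_b, p_d):
        reserved = p_b * c_b                                   # always charged
        on_demand = p_d * max(0.0, min(usage, c_max) - c_b)    # charged on actual use
        return reserved + on_demand

    # e.g. tenant_charge(usage=2.4, c_b=1.0, c_max=3.0, p_b=0.5, p_d=0.8)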

  10. System Overview
  [Diagram: clients send queries to the service provider's GD-compliant application; a capacity controller requests capacity from the infrastructure provider based on demand and the dynamic price; on the infrastructure side, the hypervisor enforces the granted capacities and a price controller sets the dynamic price.]

  11. Tenants' Profit Maximization. Given a price pair, tenants can select the best capacity pair: the optimal reserved capacity and the optimal capacity limit follow from the demand PDF*, the revenue function, and the prices. *PDF: Probability Density Function
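A minimal brute-force sketch of that tenant-side selection in Python, assuming a discretized demand PDF and a vectorized revenue function; the paper derives the optimum analytically, so this grid search only illustrates the quantity being maximized:

    import numpy as np

    # Pick the (reserved capacity C_b, capacity limit C_max) pair that maximizes
    # expected profit: expected revenue on the served capacity, minus on-demand
    # charges at p_d, minus the always-charged reserved capacity at p_b.
    # Demand above C_max is assumed to be gracefully degraded (served at the limit).
    def best_capacity_pair(capacities, demand_pdf, revenue, p_b, p_d):
        probs = np.asarray(demand_pdf, dtype=float)
        probs = probs / probs.sum()
        best_pair, best_profit = None, -np.inf
        for c_b in capacities:
            for c_max in capacities[capacities >= c_b]:
                served = np.minimum(capacities, c_max)       # GD caps served capacity
                on_demand = np.maximum(served - c_b, 0.0)    # portion billed at p_d
                profit = np.sum(probs * (revenue(served) - p_d * on_demand)) - p_b * c_b
                if profit > best_profit:
                    best_pair, best_profit = (float(c_b), float(c_max)), profit
        return best_pair, best_profit

    # Example with an assumed bell-shaped demand PDF and a concave revenue function:
    caps = np.linspace(0.5, 4.0, 36)
    pdf = np.exp(-0.5 * ((caps - 2.0) / 0.6) ** 2)
    (c_b, c_max), profit = best_capacity_pair(caps, pdf, lambda c: 3.0 * np.sqrt(c), p_b=0.6, p_d=1.0)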

  12. Infrastructure Provider: Controlling Utilization with Price. We prove that, for all tenants, reserved capacity ~ (…) and capacity limit ~ 1/(…). This empowers a robust feedback mechanism:
  [Diagram: feedback loop between infrastructure utilization, the dynamic price set by the infrastructure provider, and subscription renegotiation by tenants.]
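A minimal sketch of such a feedback loop, assuming a multiplicative price update at each subscription renegotiation and assuming that a higher dynamic price leads tenants to request tighter capacity limits (consistent with the inverse relation above); the target, gain, and bounds are illustrative values, not the paper's controller:

    # At each renegotiation, measure infrastructure utilization and nudge the
    # dynamic price toward the utilization target.
    class PriceController:
        def __init__(self, target_util=0.7, gain=0.5, price=1.0, p_min=0.1, p_max=10.0):
            self.target_util = target_util   # desired infrastructure utilization
            self.gain = gain                 # update gain (assumed value)
            self.price = price               # current dynamic price
            self.p_min, self.p_max = p_min, p_max

        def renegotiate(self, measured_util):
            # Utilization below target -> raise the price (tenants tighten their
            # capacity limits); above target -> lower it.
            error = self.target_util - measured_util
            new_price = self.price * (1.0 + self.gain * error)
            self.price = min(self.p_max, max(self.p_min, new_price))
            return self.price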

  13. Evaluation
  • Simulations on real-world traces: Bitbrains and Materna traces (business-critical applications, enterprise customers)
  • Implement and test a prototype
  [Plots: cutting peaks and filling valleys, GD-compliant vs. GD-noncompliant.]

  14. [Figure-only slide.]

  15. Even the Simplest Demand PDF Prediction Shows Major Gains. Our simple prediction: use the PDF* of the previous period. Effective utilization rises from 41% to 73%; profit gains of ~16%.
  [Plots: effective utilization (u_e) and net profit ($) vs. SLA period (days), for the simple prediction and for an oracle with perfect prediction, comparing GD-compliant and GD-noncompliant tenants.]
  *PDF: Probability Density Function
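A minimal sketch of that "simplest" predictor, assuming the previous SLA period's usage samples are simply binned into an empirical histogram (the bin count is an arbitrary choice):

    import numpy as np

    # Reuse the empirical demand PDF observed during the previous SLA period as
    # the forecast for the next period.
    def predict_demand_pdf(previous_period_usage, bins=50):
        hist, edges = np.histogram(previous_period_usage, bins=bins, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        return centers, hist   # feeds the tenant's capacity-pair optimization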

  16. Improving Effective Utilization
  Effective utilization (u_e): the amount of its requested capacity limit a tenant has actually used.
  Infrastructure utilization: the average of tenants' effective utilizations.
  [Plot: effective utilization vs. a price ratio normalized by p_d, for GD-noncompliant tenants and GD-compliant tenants with different sensitivities to GD (k = 0.9 k_0, k_0 = 0.7, 1.1 k_0); annotations mark where capacity becomes more expensive or cheaper compared to the tenant's revenue.]
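A minimal sketch of these two metrics, assuming "amount of the requested capacity limit used" means the ratio of average used capacity to the capacity limit:

    # Effective utilization of one tenant, and the infrastructure-wide average.
    def effective_utilization(avg_used_capacity, capacity_limit):
        return avg_used_capacity / capacity_limit

    def infrastructure_utilization(tenants):
        # tenants: list of (avg_used_capacity, capacity_limit) pairs
        return sum(effective_utilization(u, c) for u, c in tenants) / len(tenants)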

  17. Multi-Tenant Scenario
  [Figure: relationship between an increased on-demand price and the amount of degradation across tenants.]

  18. Prototype Evaluation
  • Used Xen hypervisor (CPU scaling capabilities)
  • GD-enabled RUBiS (eBay-like benchmark)
  • Scaled down the traces in two dimensions: time-wise and magnitude-wise (see the sketch below)
  [Plot: prototype run showing renegotiations.]
  https://github.com/cristiklein/gdinc-experiment
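A minimal sketch of that two-dimensional scaling; the scale factors below are purely illustrative, not the ones used in the prototype experiments:

    # Compress the timeline (so a multi-day trace fits an hours-long experiment)
    # and rescale demand magnitudes (so they fit the testbed's capacity).
    def scale_trace(timestamps, demands, time_factor=1.0 / 24, magnitude_factor=0.1):
        scaled_t = [t * time_factor for t in timestamps]
        scaled_d = [d * magnitude_factor for d in demands]
        return scaled_t, scaled_d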

  19. Takeaways
  • Demand uncertainty is a fundamental challenge to increasing cloud utilization
  • One way to deal with it is to incentivize tenants to fluctuate less
  • The graceful degradation resilience methodology can be applied for this
  • A well-defined pricing model allows tenants to maximize profit using GD
  • Infrastructure providers can control utilization without full knowledge of tenants

  20. Incentivizing Self-Capping to Increase Cloud Utilization
  [Closing slide repeating the capacity-shaping plot and the system overview diagram.]
  Mohammad Shahrad, mshahrad@princeton.edu
