
  1. Comprehensive Elastic Resource Management to Ensure Predictable Performance for Scientific Applications on Public IaaS Clouds. In Kee Kim, Jacob Steele, Yanjun Qi, Marty Humphrey, CS@University of Virginia

  2. Motivation • Goals – Meet Job Deadline – Low Cost

  3. Current Approach • Auto-Scaling • Scale Up – Job Deadline Satisfaction (High Demand) • Scale Down – Cost Efficiency (Low Demand) • [1] Schedule-based Scaling – Static approach; prone to Under-Provisioning and Over-Provisioning • [2] Rule-based Scaling – Dynamic but reactive; suffers from Scale-Up Delay and Scale-Down Delay

  4. Research Goal and Approach • In order to 1) meet user-defined job deadlines and 2) minimize execution cost for scientific applications that have highly variable job execution times, we design a Comprehensive Resource Management System by utilizing – Local Linear Regression-based Job Execution Time Prediction – Cost/Performance-Ratio-based Resource Evaluation – Availability-Aware Job Scheduling and VM Scaling

  5. Outline • Motivation • Three Approaches of LCA • Experiment • Conclusion

  6. LLR: Job Execution Time Prediction • Initial Intuition – Job execution time has a linear relationship with IaaS/Application parameters • Data Collection (26 samples on 4 types of VMs) and Correlation Analysis – Non-Data Intensive Operation: Size of Data 0.0973 (negligible), Type of VM 0.7089 (strong) – Data Intensive Operation: Size of Data 0.6129 (moderate), Type of VM 0.3223 (weak) – A simple linear model cannot produce reliable predictions • Local Linear Regression [Figures: (a) Global Linear Regression on m1.large (using all samples) vs. (b) Local Linear Regression on m1.large (using three samples), showing job execution time (sec.) and prediction error]
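The deck itself gives no predictor code; below is a minimal sketch of a local linear predictor of this kind, in which the function name, the single size feature, and the choice of k = 3 neighbors are illustrative assumptions rather than the paper's implementation. It fits an ordinary least-squares line only on the historical samples nearest to the new job, which is what lets it track the non-linear behavior a single global line misses.

```python
import numpy as np

def predict_exec_time_llr(X_train, y_train, x_query, k=3):
    """Local linear regression sketch for job execution time prediction.

    X_train : (n, d) array of per-job features (e.g. input data size)
    y_train : (n,) array of observed execution times in seconds
    x_query : (d,) feature vector of the new job
    k       : number of nearest historical samples used for the local fit
    """
    # Pick the k historical samples closest to the query job.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(dists)[:k]

    # Fit an ordinary least-squares line on just those neighbors.
    X_local = np.column_stack([np.ones(k), X_train[idx]])
    coef, *_ = np.linalg.lstsq(X_local, y_train[idx], rcond=None)

    # Evaluate the local model at the query point.
    return float(np.concatenate(([1.0], x_query)) @ coef)

# Hypothetical history of (input size in GB -> runtime in seconds) on one VM type.
X = np.array([[1.0], [2.0], [4.0], [8.0], [16.0]])
y = np.array([120.0, 180.0, 300.0, 560.0, 1100.0])
print(predict_exec_time_llr(X, y, np.array([5.0]), k=3))
```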

  7. Cost-Performance Ratio-based Resource Evaluation
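A minimal sketch of how a cost/performance ranking over VM types might be computed from the LLR predictions and the hourly prices listed later in the deck; the concrete ratio used here (estimated dollar cost to finish the job on each type) and the example runtimes are assumptions for illustration, not the slide's own formula.

```python
from dataclasses import dataclass

@dataclass
class VMType:
    name: str
    hourly_price: float          # USD per hour
    predicted_exec_time: float   # LLR-predicted runtime of the job on this type (sec.)

def rank_by_cost_performance(vm_types):
    """Order VM types best cost/performance first.

    "Cost/performance" is read here as the estimated dollar cost of completing
    the job on each type (hourly price * predicted hours); lower is better.
    """
    return sorted(vm_types,
                  key=lambda vm: vm.hourly_price * vm.predicted_exec_time / 3600.0)

# EC2 types and prices from the experiment setup; the predicted runtimes are made up.
candidates = [
    VMType("m1.small",  0.091, 3600.0),
    VMType("m1.medium", 0.182, 2000.0),
    VMType("m1.large",  0.364, 1100.0),
    VMType("m1.xlarge", 0.728,  700.0),
]
print([vm.name for vm in rank_by_cost_performance(candidates)])
```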

  8. Availability-Aware Job Scheduling • AAJS first assigns as many jobs as possible to currently running VMs based on CP evaluation results. – Maximize machine utilization of currently running VM instances. – Minimize overhead from starting new VMs • Job Assignment Criteria 1) The VM with the higher order (rank) in Cost/Performance ratio. 2) The VM that offers the earliest job completion time (Queue Wait Time + New Job Exec Time) if multiple options are available.
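A minimal sketch of the assignment loop implied by those criteria; the data structures, helper signatures, and deadline handling are assumptions, and the VM scaling side is reduced to collecting the jobs that need a new instance.

```python
def schedule_jobs(jobs, running_vms, predict_time, cp_rank):
    """Availability-aware assignment sketch.

    jobs         : list of (job_id, deadline) with deadlines in seconds from now
    running_vms  : dict vm_id -> seconds until that VM's queue drains
    predict_time : callable(job_id, vm_id) -> predicted execution time (sec.)
    cp_rank      : vm_ids ordered best cost/performance ratio first
    """
    assignments, new_vm_requests = {}, []
    for job_id, deadline in jobs:
        best = None
        # Prefer running VMs in cost/performance order; among feasible ones,
        # keep the earliest completion time (queue wait + new job exec time).
        for vm_id in cp_rank:
            if vm_id not in running_vms:
                continue
            finish = running_vms[vm_id] + predict_time(job_id, vm_id)
            if finish <= deadline and (best is None or finish < best[1]):
                best = (vm_id, finish)
        if best is not None:
            vm_id, finish = best
            running_vms[vm_id] = finish       # VM stays busy until this job finishes
            assignments[job_id] = vm_id
        else:
            new_vm_requests.append(job_id)    # no running VM meets the deadline: scale up
    return assignments, new_vm_requests
```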

  9. Experiment Setup • Baselines – SCS – MH [SC 2011] – SCS + LLR [NEW] • Workload Generation – # of Jobs: 100 Watershed Delineation Jobs – Job Deadline: Mean 30 min., Std. Dev. 9.7 min. – Job Duration: Mean 15 min., Std. Dev. 12.5 min. • Implementation & Deployment – LCA and the 2 baselines on AWS • VM Types for Experiments – m1.small: 1 CPU / 1.7GB Mem, $0.091/Hr. – m1.medium: 1 CPU / 3.7GB Mem, $0.182/Hr. – m1.large: 2 CPU / 7.5GB Mem, $0.364/Hr. – m1.xlarge: 4 CPU / 15GB Mem, $0.728/Hr. • Workload Patterns: (a) Steady, (b) Bursty, (c) Incremental, (d) Random

  10. Job Exec. Time Predictor Performance • Avg. Predict. Acc.: LLR 78.77%, LR 67.62%, kNN 65.38%, Mean 60.99% • MAPE: LLR 0.2773, LR 0.3901, kNN 0.5012, Mean 0.8254 (LLR: Local Linear Regression, LR: Linear Regression, MAPE: Mean Absolute Percentage Error)
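For reference, the MAPE figures above follow the standard definition of the metric, which a short helper makes explicit (this is the textbook formula, not code from the paper):

```python
import numpy as np

def mape(actual, predicted):
    """Mean Absolute Percentage Error: mean of |actual - predicted| / |actual|."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs(actual - predicted) / np.abs(actual)))
```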

  11. Job Deadline Satisfaction Rate • LCA: average Job Deadline Satisfaction Rate of 83.25% – 9% better than SCS+LLR – 33% better than SCS

  12. Overall Running Cost • LCA: average Overall Running Cost of $8.9 – $2.5 cheaper than SCS+LLR – $5.2 more expensive than SCS (but the performance of SCS is not comparable)

  13. Conclusion • LCA is a novel elastic resource management system for scientific applications on public IaaS clouds, based on three approaches: [1] Local Linear Regression-based Job Execution Time Prediction [2] Cost-Performance Ratio-based Resource Evaluation [3] Availability-Aware Job Scheduling and VM Scaling • LCA outperforms the baselines (SCS, SCS with LLR) across four different workload patterns (Steady, Bursty, Incremental, Random). – Predictor Performance: 11%-18% better accuracy – Job Deadline Satisfaction Rate: 9%-33% better rate – Overall Running Cost: $2.45 (22%) better cost efficiency

  14. Thank you & Questions?

  15. Back-up Slides

  16. LCA System Design • [Architecture diagram] Components: User (submits Job + Deadline), Prediction Module (LLR Predictor), Resource Evaluation (Cost-Performance Evaluation, VM Ranking & Selection), Availability-Aware Job Scheduling and VM Scaling, VM Manager, VMs on IaaS, Job History Repository • Labeled flows: Prediction Results, Evaluation Request, Samples, Optimized VM Ranking & Selection Results, Exe Info / Update, VM Req / Job Assign, +/- VMs / Job Assignment

  17. VM Utilization [Chart legend: Idle, Job Running, Startup] • LCA: average VM Utilization of 69.17% – 25% higher than SCS + LLR – 11% higher than SCS

  18. VM Instance Types • Table: Specification of General Purpose Microsoft Windows Instances on Amazon EC2 in US East Region (the price is based on March 2014) – m1.small: 1 ECU [1], 1 CPU core, 1.7GB memory, $0.091/Hr. – m1.medium: 2 ECU, 1 CPU core, 3.7GB memory, $0.182/Hr. – m1.large: 4 ECU, 2 CPU cores, 7.5GB memory, $0.364/Hr. – m1.xlarge: 8 ECU, 4 CPU cores, 15GB memory, $0.728/Hr. – [1] A single ECU (EC2 Compute Unit) provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor ← Back to Slide – Experiment Setup
