Building and Optimizing Learning-Augmented Computer Systems



  1. Building and Optimizing Learning-Augmented Computer Systems. Hongzi Mao. October 24, 2019. • Learning Scheduling Algorithms for Data Processing Clusters. Hongzi Mao, Malte Schwarzkopf, Shaileshh Bojja Venkatakrishnan, Zili Meng, Mohammad Alizadeh. ACM SIGCOMM, 2019.

  2. Motivation. Scheduling is a fundamental task in computer systems: • Cluster management (e.g., Kubernetes, Mesos, Borg) • Data analytics frameworks (e.g., Spark, Hadoop) • Machine learning (e.g., TensorFlow). Efficient schedulers matter for large datacenters: even small improvements can save millions of dollars at scale.

  3. Designing Optimal Schedulers is Intractable. Must consider many factors for optimal performance: • Job dependency structure • Modeling complexity • Placement constraints • Data locality • … (e.g., Graphene [OSDI '16], Carbyne [OSDI '16], Tetris [SIGCOMM '14], Jockey [EuroSys '12], TetriSched [EuroSys '16], device placement [NIPS '17], Delayed Scheduling [EuroSys '10]). In practical deployments this complexity is often ignored in favor of simple heuristics; there is no "one-size-fits-all" solution, since the best algorithm depends on the specific workload and system, and sophisticated systems require complex configurations and tuning.

  4. Can machine learning help tame the complexity of efficient schedulers for data processing jobs?

  5. Decima: A Learned Cluster Scheduler. • Learns workload-specific scheduling algorithms for jobs with dependencies (represented as DAGs). [Figure: job DAGs with data dependencies between "stages" (groups of identical tasks that can run in parallel); a scheduler assigns stages to executors 1…m.]

  6. Decima: A Learned Cluster Scheduler. • Learns workload-specific scheduling algorithms for jobs with dependencies (represented as DAGs). [Figure: job DAGs submitted to a scheduler that assigns work to servers 1…m.]

  7. Demo. Scheduling policy: FIFO. [Animation: number of servers working on each job over time.] Average Job Completion Time: 225 sec

  8. Scheduling policy: Shortest-Job-First Average Job Completion Time: 135 sec

  9. Scheduling policy: Fair Average Job Completion Time: 120 sec

  10. Shortest-Job-First vs. Fair. Average Job Completion Time: 135 sec vs. 120 sec

  11. Scheduling policy: Decima Average Job Completion Time: 98 sec

  12. Decima vs. Fair. Average Job Completion Time: 98 sec vs. 120 sec
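
To make the average-JCT objective concrete, here is a minimal Python sketch with made-up job durations and a single serial server (so the numbers are unrelated to the multi-server demo above), showing how scheduling order alone changes average job completion time:

```python
# Toy average job completion time (JCT) calculation. Durations are hypothetical;
# all jobs arrive at time 0 and run one at a time, unlike the cluster demo above.

def average_jct(durations):
    """Average completion time when jobs run back-to-back in the given order."""
    now, total = 0.0, 0.0
    for d in durations:
        now += d            # this job finishes at time `now`
        total += now
    return total / len(durations)

jobs = [10.0, 2.0, 6.0, 1.0]                  # hypothetical job durations (sec)
print("FIFO:", average_jct(jobs))             # run in arrival order -> 14.75 sec
print("SJF: ", average_jct(sorted(jobs)))     # shortest job first  -> 8.0 sec
```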

  13. 166 sec Decima it=0 • 20 Spark jobs (TPC-H queries), 50 servers

  14. 160 sec Decima it=3000

  15. 148 sec Decima it=6000

  16. 145 sec Decima it=9000

  17. 142 sec Decima it=12000

  18. 126 sec Decima it=15000

  19. 111 sec Decima it=18000

  20. 108 sec Decima it=21000

  21. 107 sec Decima it=24000

  22. 93 sec Decima it=27000

  23. 89 sec Decima it=30000

  24. Design

  25. Design Overview. [Figure: RL loop. The scheduling agent observes the state of the jobs and cluster (job DAGs 1…n, executors 1…m), encodes it with a graph neural network, and its policy network selects schedulable nodes as actions; the environment returns a reward reflecting the objective.]

  26. Contributions. [Figure: same design overview as the previous slide.] 1. First RL-based scheduler for complex data processing jobs. 2. Scalable graph neural network to express scheduling policies. 3. New learning methods that enable training with online job arrivals.

  27. Encode Scheduling Decisions as Actions. [Figure: job DAGs 1…n on one side, a set of identical free executors (servers 1…m) on the other.]

  28. Option 1: Assign All Executors in One Action. [Figure: a single action assigns servers 1…m across the job DAGs.] Problem: huge action space.

  29. Option 2: Assign One Executor Per Action. [Figure: each action assigns a single server.] Problem: long action sequences.

  30. Decima: Assign Groups of Executors per Action. [Figure: e.g., "use 1 server" for one node, "use 1 server" for another, "use 3 servers" for a third.] Action = (node, parallelism limit).
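
A minimal sketch of this action encoding (the class and function names are illustrative, not Decima's actual data structures): each decision picks one runnable DAG node and caps how many executors may work on it.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """One scheduling decision: which stage to run next, and how wide to run it."""
    node_id: int            # index of a runnable node (stage) in some job DAG
    parallelism_limit: int  # max number of executors this stage may use

def apply_action(free_executors, action):
    """Grant at most `parallelism_limit` of the free executors to the chosen node."""
    granted = free_executors[:action.parallelism_limit]
    remaining = free_executors[action.parallelism_limit:]
    return granted, remaining

# Example: give 3 of the 4 free executors to node 7 of some job DAG.
granted, remaining = apply_action(["exec-1", "exec-2", "exec-3", "exec-4"],
                                  Action(node_id=7, parallelism_limit=3))
```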

  31. Process Job Information. Node features (computed for an arbitrary number of jobs, Job DAG 1 … Job DAG n): • # of tasks • avg. task duration • # of servers currently assigned to the node • are free servers local to this job?
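
A minimal sketch of packing those per-node attributes into a fixed-length feature vector (the dictionary fields and their ordering are assumptions made for illustration):

```python
import numpy as np

def node_features(node, free_executors):
    """Build the per-node feature vector listed on the slide above."""
    free_is_local = any(ex["job_id"] == node["job_id"] for ex in free_executors)
    return np.array([
        node["num_tasks"],                 # number of tasks in this stage
        node["avg_task_duration"],         # average task duration (sec)
        node["num_assigned_executors"],    # executors currently working on this node
        1.0 if free_is_local else 0.0,     # are free executors local to this job?
    ], dtype=np.float32)
```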

  32. Graph Neural Network. [Figure: a job DAG in which the network outputs a score on each node (e.g., 8, 6, 3, 2).]

  33. Graph Neural Network. Per-node embedding: $e_v = f\big(x_v, \{e_u\}_{u \in \xi(v)}; \theta\big)$, where $\xi(v)$ denotes the children of node $v$. The same aggregation is applied to all nodes in each DAG. [Figure: two message-passing steps (step 1, step 2) on Job DAG 1 and Job DAG n.]
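
A minimal numpy sketch of that bottom-up aggregation, with a concrete sum-then-transform combiner standing in for the learned functions; the weight shapes and ReLU non-linearities are assumptions, not Decima's exact architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def embed_dag(features, children, W_f, W_g):
    """Bottom-up node embeddings: e_v = g([x_v ; sum_{u in children(v)} f(e_u)]).

    features: {node: raw feature vector x_v}, each of length D
    children: {node: list of child nodes}
    W_f:      (H, H) transform applied to each child embedding before summing
    W_g:      (H, D + H) transform combining a node's features with the aggregate
    """
    H = W_f.shape[0]
    embeddings = {}

    def embed(v):
        if v not in embeddings:
            msg = sum((relu(W_f @ embed(u)) for u in children.get(v, [])),
                      np.zeros(H))
            embeddings[v] = relu(W_g @ np.concatenate([features[v], msg]))
        return embeddings[v]

    for v in features:
        embed(v)
    return embeddings

# Tiny DAG: node 0 has children 1 and 2; D = 4 raw features, H = 8 embedding dims.
rng = np.random.default_rng(0)
feats = {v: rng.normal(size=4) for v in (0, 1, 2)}
emb = embed_dag(feats, {0: [1, 2]},
                W_f=rng.normal(size=(8, 8)), W_g=rng.normal(size=(8, 12)))
```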

  34. Graph Neural Network. [Figure: computing a DAG's critical path requires a max at each aggregation step.] The same aggregation is applied everywhere in the DAGs.

  35. Graph Neural Network. Supervised learning is used to verify the representation. [Figure: supervised-learning training curve, testing accuracy (40%-100%) vs. number of iterations (0-350), comparing a single non-linear aggregation against Decima's two-level aggregation.]

  36. Training. [Figure: the Decima agent generates experience data on a training cluster and is updated with reinforcement learning.]

  37. Handle Online Job Arrival. The RL agent has to experience continuous job arrivals during training, but simply feeding it long sequences of jobs is inefficient. [Figure: number of backlogged jobs over time under the initial random policy.]

  38. Handle Online Job Arrival. The RL agent has to experience continuous job arrivals during training, but simply feeding it long sequences of jobs is inefficient. [Figure: under the initial random policy, the backlog grows and training time is wasted.]

  39. Handle Online Job Arrival. The RL agent has to experience continuous job arrivals during training, but simply feeding it long sequences of jobs is inefficient. [Figure: reset episodes early during initial training.]

  40. Handle Online Job Arrival. Curriculum learning: increase the reset time as training proceeds, since a stronger policy keeps the queue stable. [Figure: number of backlogged jobs over time with progressively longer episodes.] A sketch of such a schedule follows.
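
A minimal sketch of an episode-length curriculum (the linear schedule, its constants, and the `env.rollout`/`agent.update` interfaces are all illustrative assumptions, not Decima's actual training code):

```python
def episode_length(iteration, start=500.0, growth=0.5, max_len=50_000.0):
    """Simulated seconds before an episode is reset; grows as training proceeds."""
    return min(start + growth * iteration, max_len)

def train(env, agent, num_iterations):
    for it in range(num_iterations):
        horizon = episode_length(it)
        # `env.rollout` and `agent.update` are hypothetical stand-ins for the
        # simulator rollout and the policy-gradient update.
        trajectory = env.rollout(agent, max_time=horizon)   # early reset at `horizon`
        agent.update(trajectory)
```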

  41. Variance from Job Sequences. The RL agent needs to be robust to variation in job arrival patterns, but this huge variance can throw off the training process.

  42. Review: Policy Gradient RL Methods. Update rule: $\theta \leftarrow \theta + \alpha \sum_t \nabla_\theta \log \pi_\theta(s_t, a_t) \left( \sum_{t' \ge t} r_{t'} - b(s_t) \right)$, where $\sum_{t' \ge t} r_{t'}$ is the "return" from step $t$ and the "baseline" $b(s_t)$ is the expected return from state $s_t$. Intuition: increase the probability of actions with better-than-expected returns.
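
A minimal numpy sketch of that update for a tabular softmax policy (the tabular parameterization and the value-table baseline are illustrative assumptions; Decima's policy is a neural network):

```python
import numpy as np

def reinforce_update(theta, trajectory, baseline, alpha=1e-2):
    """theta <- theta + alpha * sum_t grad log pi(a_t | s_t) * (return_t - b(s_t)).

    theta:      (num_states, num_actions) logits of a tabular softmax policy
    trajectory: list of (state, action, reward) tuples from one episode
    baseline:   (num_states,) estimated expected return from each state
    """
    rewards = [r for _, _, r in trajectory]
    grad = np.zeros_like(theta)
    for t, (s, a, _) in enumerate(trajectory):
        ret = sum(rewards[t:])              # return from step t
        adv = ret - baseline[s]             # better than expected?
        probs = np.exp(theta[s] - theta[s].max())
        probs /= probs.sum()
        grad_log = -probs                   # d log pi(a|s) / d theta[s, :]
        grad_log[a] += 1.0
        grad[s] += grad_log * adv
    return theta + alpha * grad
```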

  43. Variance from Job Sequences. [Figure: two possible future workloads (#1 and #2) after action $a_t$, shown as job size over time.] The entire job sequence must be considered to score actions. Score for action $a_t$ = (return after $a_t$) − (average return) $= \sum_{t' \ge t} r_{t'} - b(s_t)$.

  44. Input-Dependent Baseline. Standard score for action $a_t$: $\sum_{t' \ge t} r_{t'} - b(s_t)$. Input-dependent score for action $a_t$: $\sum_{t' \ge t} r_{t'} - b(s_t, z_t, z_{t+1}, \ldots)$, where the baseline is the average return over trajectories from state $s_t$ with job sequence $z_t, z_{t+1}, \ldots$. Theorem: input-dependent baselines reduce variance without adding bias, i.e. $\mathbb{E}\left[ \nabla_\theta \log \pi_\theta(s_t, a_t)\, b(s_t, z_t, z_{t+1}, \ldots) \right] = 0$. • Variance reduction for reinforcement learning in input-driven environments. Hongzi Mao, Shaileshh Bojja Venkatakrishnan, Malte Schwarzkopf, Mohammad Alizadeh. ICLR 2019.
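
A minimal sketch of estimating an input-dependent baseline by rolling out several trajectories that all replay the same job-arrival sequence and averaging their returns-to-go (the `env.rollout` interface and the fixed-length-episode assumption are illustrative, not the paper's actual implementation):

```python
import numpy as np

def returns_to_go(trajectory):
    """sum_{t' >= t} r_{t'} at every step t of one trajectory."""
    rewards = np.array([r for _, _, r in trajectory], dtype=float)
    return np.cumsum(rewards[::-1])[::-1]

def input_dependent_baseline(env, agent, job_sequence, horizon, num_rollouts=8):
    """Per-step baseline b(s_t, z_t, z_{t+1}, ...): the average return-to-go over
    rollouts that all see the SAME job arrival sequence `job_sequence`.
    Holding the input process fixed removes the variance it would otherwise add.
    Assumes each rollout runs exactly `horizon` steps; `env.rollout` is hypothetical.
    """
    per_step = np.zeros(horizon)
    for _ in range(num_rollouts):
        traj = env.rollout(agent, job_sequence=job_sequence, max_steps=horizon)
        per_step += returns_to_go(traj)
    return per_step / num_rollouts
```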

  45. Input-Dependent Baseline. Broadly applicable to other systems with an external input process: adaptive video streaming, load balancing, caching, robotics with disturbances, … [Figure: training curves with the standard baseline vs. the input-dependent baseline.] • Variance reduction for reinforcement learning in input-driven environments. Hongzi Mao, Shaileshh Bojja Venkatakrishnan, Malte Schwarzkopf, Mohammad Alizadeh. ICLR 2019.

  46. Evaluation

  47. Decima vs. Baselines: Batched Arrivals. • 20 TPC-H queries sampled at random; input sizes: 2, 5, 10, 20, 50, 100 GB. • Decima trained on a simulator; tested on a real Spark cluster. Decima improves average job completion time by 21% to 3.1x over baseline schemes.

  48. Decima with Continuous Job Arrivals. 1,000 jobs arrive as a Poisson process with an average inter-arrival time of 25 sec. Decima achieves 28% lower average JCT than the best heuristic, and 2x better JCT in overload.
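
A minimal sketch of generating that workload's arrival process (exponential inter-arrival gaps with a 25-second mean; the random seed and job contents are placeholders):

```python
import numpy as np

def poisson_arrival_times(num_jobs=1000, mean_interarrival=25.0, seed=0):
    """Arrival timestamps (sec) of a Poisson process: i.i.d. exponential gaps."""
    rng = np.random.default_rng(seed)
    gaps = rng.exponential(scale=mean_interarrival, size=num_jobs)
    return np.cumsum(gaps)

arrivals = poisson_arrival_times()   # ~1000 jobs over roughly 25,000 simulated seconds
```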

  49. Understanding Decima. [Figure: scheduling behavior compared between a tuned weighted fair policy and Decima.]

  50. Flexibility: Multi-Resource Scheduling. Industrial trace (Alibaba): 20,000 jobs from a production cluster. Multi-resource requirements: CPU cores + memory units.
