A Resource Allocation Controller for Cloud-based Adaptive Video Streaming


  1. 1/16 A Resource Allocation Controller for Cloud-based Adaptive Video Streaming Luca De Cicco , Saverio Mascolo, Dario Calamita Politecnico di Bari, Dipartimento di Ingegneria Elettrica e dell’Informazione MCN 2013 - Budapest, Hungary 13 June 2013

  2. Motivation 2/16 Two ongoing trends (Cisco VNI): Video is booming: video applications today account for more than half of global traffic. Mobile is growing: mobile data traffic will be half of global traffic in 2017. [Chart: global traffic by category (Video, P2P, Data, Web, Video conferencing), in EB, 2010–2015.]

  5. Introduction 3/16 The challenge. Main Goal: design a cloud-based platform for massive distribution of adaptive videos. Issues: (1) bandwidth is unpredictable in the best-effort Internet; (2) mobile devices have limited CPU and display resolution; (3) user demand is highly time-varying. Design Goals: Issues 1 and 2 ⇒ implement video adaptivity; Issue 3 ⇒ resource allocation to dynamically turn servers on/off.

  6. The control plane 4/16 The proposed Control Plane. [Diagram: clients, a Cloud API, and a Central Unit hosting the Load Balancer and the Resource Allocation Controller; M(t) servers, each running per-flow SSACs and a monitor that reports per-server measurements (l̃(i), n(i), B(i)) back to the Central Unit, which commands N(t) server switch-ons.] Architecture: one Central Unit and M(t) servers. Controllers: Stream Switching Adaptation Controller (SSAC, per-flow), Load Balancer (centralized), Resource Allocation Controller (centralized).

  7. The control plane 5/16 Stream Switching Adaptation Controller. Stream-switching approach: the video is available at different resolutions and bitrates, and a controller selects the video level to be streamed. Quality Adaptation Controller (QAC) - ACM MMSys 2011. [Diagram: server-side sender with the encoded video levels and a Stream Switching Controller selecting the level l(t); the flow traverses the Internet (received rate r(t), competing traffic) to the player's buffer q(t) and decoder.] The adaptation logic is executed at the server (in the Cloud); the video flow behaves as any greedy TCP flow; fairness is inherited from TCP congestion control.

  8. Resource Allocation Controller 6/16 Inelastic videos. [Plot: with total uplink capacity C_T and a fixed video bitrate l, playback interruptions occur once the number of flows n exceeds C_T / l.] Fact: if the video is not adaptive, the delivery network must always be overprovisioned to prevent playback interruptions.

  9. Resource Allocation Controller 7/16 Elastic videos. [Plot: with levels l ∈ {l_0, ..., l_M} and total uplink capacity C_T, interruptions occur only beyond C_T / l_0 flows; between C_T / l_M and C_T / l_0 flows, quality degrades instead.] We can work at 100% uplink channel utilization. But: users will no longer receive the maximum video level. Action: increase the number of servers to increase the uplink capacity.
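The capacity arithmetic on this slide can be sketched in a few lines; the level set and uplink capacity below are assumed example values, not figures from the paper:

```python
# Illustrative capacity arithmetic for elastic (multi-bitrate) video.
C_T = 100.0                     # total uplink capacity (Mbps), assumed value
levels = [0.5, 1.0, 2.0, 4.0]   # available bitrates l_0 .. l_M (Mbps), assumed

# At 100% uplink utilization, the number of sustainable flows ranges from
# C_T / l_M (every flow at the top level) to C_T / l_0 (every flow at the
# lowest level, beyond which interruptions occur).
max_flows_top = C_T / levels[-1]   # C_T / l_M: 25 flows at full quality
max_flows_any = C_T / levels[0]    # C_T / l_0: 200 flows before interruptions
```

Between those two flow counts the system keeps playing without interruptions, but some users are pushed below the maximum level, which is exactly the condition the RAC acts on.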

  11. Resource Allocation Controller 8/16 Why do flows not get the maximum video level? Where's the bottleneck? (1) At the server: we can act on these flows by turning ON machines. (2) At the client: we cannot act on these flows (treated as a disturbance). The goal of the RAC is to steer to zero the number of uplink-limited flows n_UL(t). We need to estimate n_UL(t): # limited flows = # uplink-limited flows + # client-limited flows, i.e. n_L(t) = n_UL(t) + n_CL(t). The CU measures n_L(t) easily; a variable-threshold mechanism estimates n_CL(t) (details in the paper).

  12. Resource Allocation Controller 9/16 The Resource Allocation Controller. Switch-on controller: steers n̂_UL(t) to zero (control-loop set point). Switch-off controller: turns off servers when the goal of the switch-on controller is reached. [Block diagram: set point 0, PD controller G_c(z), switch-on delay z^{-r}, integrator plant 1/(1 − z^{-1}) from N to M, and Smith-predictor compensation (1 − z^{-r}) Ĝ(z) in the feedback path.] Switch-on controller: a PD controller G_c(z) = K_p + K_d (1 − z^{-1}); the Smith predictor compensates the effect of the switch-on delay; the model used in the SP is an integrator (transfer function from N to M). Switch-off controller: when N_on = 0, it turns off a number of machines equal to B_A / B.
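The switch-on loop above can be sketched as follows. The PD law and the integrator model (from commanded switch-ons N to active machines M) are from the slide; the delay r, the flows-per-server constant C used to scale the control signal, and the way the predictor credit is applied are illustrative assumptions, so this is a structural sketch rather than the paper's implementation:

```python
class SwitchOnController:
    """Sketch: discrete PD control of n_UL with a Smith predictor.

    kp, kd are the gains quoted later in the deck; r (switch-on delay in
    samples) and flows_per_server are assumed example values.
    """

    def __init__(self, kp=-0.7, kd=-0.3, r=5, flows_per_server=25.0):
        self.kp, self.kd, self.c = kp, kd, flows_per_server
        self.m_model = 0.0           # integrator model state: predicted M
        self.history = [0.0] * r     # delayed copy of the model output
        self.prev_err = 0.0

    def step(self, n_ul_hat: float) -> int:
        m_delayed = self.history.pop(0)
        # Smith predictor: machines already commanded but whose effect is not
        # yet measurable (still inside the delay window) are credited against
        # the measured n_UL, so the controller does not re-command them.
        predicted = n_ul_hat - (self.m_model - m_delayed) * self.c
        err = 0.0 - predicted                                 # set point: 0
        u = self.kp * err + self.kd * (err - self.prev_err)   # PD law
        self.prev_err = err
        n_on = max(0, round(u / self.c))  # flows -> whole servers, >= 0
        self.m_model += n_on              # integrator: M accumulates N
        self.history.append(self.m_model)
        return n_on

ctrl = SwitchOnController()
print(ctrl.step(100))  # commands 4 servers (100 flows / 25 per server)
print(ctrl.step(100))  # 0: the predictor knows 4 are already on their way
```

The second call illustrates the point of the Smith predictor: without it, the unchanged measurement during the switch-on delay would trigger the overshoots shown in the results slides.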

  13. Simulator 10/16 Simulations. Simulator based on CDNSim; it implements the control modules and a module monitoring CPU costs. Metrics: fraction of flows obtaining the maximum level, α(t) = 1 − n_L(t)/n(t); total server costs C_c(t). Considered controllers: the proposed PD controller with K_p = −0.7, K_d = −0.3; the proposed controller without the Smith predictor; a feed-forward controller, N(t_k) = n(t_k)/C − M(t_k) (the difference between the number of servers that should be ON to provide maximum quality and the number of active servers).
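The feed-forward baseline above is a one-liner. A minimal sketch, where the ceiling and the clamp at zero are assumptions to get an integer, non-negative actuation (the slide only gives the law N(t_k) = n(t_k)/C − M(t_k)):

```python
import math

def feed_forward_servers(n_flows: int, flows_per_server: float,
                         m_active: int) -> int:
    """Feed-forward law: servers needed for max quality minus servers on.

    flows_per_server is C, the number of flows one server can serve at the
    maximum video level.
    """
    return max(0, math.ceil(n_flows / flows_per_server) - m_active)

print(feed_forward_servers(1000, 25, 30))  # 10 more servers needed
```

Because it reacts only to the instantaneous request count, this baseline overprovisions whenever some flows are client-limited, which is what the results slides show.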

  14. Simulator 11/16 Scenarios. (1) Client downlink is not the bottleneck ⇒ n_CL(t) = 0. (2) 16% of users have a downlink channel not allowing the maximum video level (n_CL(t) ≠ 0). Request arrivals: Poisson with variable intensity r(t). [Plot: r(t) in requests/s, varying between 0 and 40 over 1200 s.]
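A workload of this shape can be generated as below. The rates mirror those annotated on the results plots (r = 20, 30, 5, 30, 0 requests/s), but the segment boundaries are assumptions, so treat this as an illustrative trace generator, not the paper's exact scenario:

```python
import math
import random

# Piecewise-constant intensity r(t): (segment end time in s, rate in req/s).
SEGMENTS = [(200, 20.0), (400, 30.0), (600, 5.0), (800, 30.0), (1200, 0.0)]

def rate(t: float) -> float:
    for t_end, r in SEGMENTS:
        if t < t_end:
            return r
    return 0.0

def poisson(lam: float, rng: random.Random) -> int:
    """Knuth's multiplication method; fine for these modest rates."""
    if lam <= 0:
        return 0
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(42)
# One Poisson arrival count per simulated second.
requests = [poisson(rate(t), rng) for t in range(1200)]
```

Feeding such a trace to the controllers reproduces the step-like load changes that expose the overshoot/undershoot behaviour discussed next.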

  15. Results 12/16 Results: client-limited flows (n̂_CL = 0). [Plots: M(t) and α(t) over time for FF, RAC without SP, and RAC, with request-rate steps r = 20, 30, 5, 30, 0.] The number of active servers over time is smooth with RAC; the other controllers exhibit overshoots in M(t) when r increases: machines are turned on, but their effect on n_UL is measured only after the switch-on delay. Overshoots waste resources; undershoots hurt QoE (fewer videos receive the maximum video level). RAC is worse than FF in terms of α only during transients, when r increases.

  16. Results 13/16 Results: client-limited flows (n̂_CL ≠ 0). 16% of flows have a 1 Mbps connection ⇒ expected maximum α = 0.84. [Plots: M(t) and α(t) over time with r = 20, 30, 5, 30, 0; measured α_FF = 0.78, α_RAC = 0.73.] The feed-forward controller incurs large overprovisioning; RAC without SP performs better but shows overshoots when the request rate increases. RAC outperforms the other controllers in terms of costs (it saves 10%) and pays only a slight performance degradation (4%).

  17. Results 14/16 Cost savings (n_CL ≠ 0). [Plot: percentage cost savings of RAC with respect to FF and RAC without SP over time.]

  18. Results 15/16 Let's see RAC in motion. Heat map: a warmer color at (x, y) means many flows are receiving level x from server y; ideal: dark blue (0) everywhere except a bright, evenly colored bar at level 9. Levels pdf: fraction of flows obtaining level x; ideal: zero for x < 9, one for x = 9.

  19. Conclusions 16/16 Conclusions. We have proposed a Resource Allocation Controller for cloud-based adaptive video streaming. Feedback control theory is employed to compute the number of servers to turn on/off. The RAC strives to minimize delivery-network costs while delivering the maximum video quality; it saves up to 30% of CPU costs while paying a small quality degradation during transients. Future work: making the system distributed.

  20. Thanks! Questions?

  21. BACKUP SLIDES

  24. Backup 15/16 Estimating n_UL(t). Estimating the number of uplink-limited flows: a threshold L is used to estimate n_L(t), the number of limited flows; a threshold L(t) is used to estimate n̂_CL(t), the number of client-limited flows; then n̂_UL(t) = n_L(t) − n̂_CL(t). [Plots: cumulative number of concurrent flows per video level l_0 ... l_M, with the thresholds L and L(t) and the resulting n_L and n̂_CL highlighted.] Ideally, n_UL(t) = 0 with the minimum number of servers.
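A hypothetical sketch of the estimation step above: build the per-level distribution of concurrent flows, count the limited flows (those below the top level l_M), and classify those at or below a threshold level L(t) as client-limited. The real variable-threshold mechanism that adapts L(t) is in the paper and is not reproduced here; the function below simply takes the threshold as an input:

```python
def estimate_flows(flows_per_level: list[int], threshold_level: int):
    """flows_per_level[i] = concurrent flows currently at level l_i;
    the last entry is the top level l_M. threshold_level is the index
    of the (externally supplied) threshold L(t)."""
    n_L = sum(flows_per_level[:-1])            # flows not at the top level
    n_CL_hat = sum(flows_per_level[:threshold_level + 1])  # at/below L(t)
    n_UL_hat = max(0, n_L - n_CL_hat)          # n_UL_hat = n_L - n_CL_hat
    return n_L, n_CL_hat, n_UL_hat

print(estimate_flows([2, 3, 5, 0, 10], 1))  # (10, 5, 5)
```

The clamp at zero is an added safeguard, since an estimate of n_CL can transiently exceed the measured n_L.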
