

  1. Congestion Control in Distributed Media Streaming Lin Ma and Wei Tsang Ooi National University of Singapore

  2. What is “Distributed Media Streaming”? (aka multi-source streaming)

  3. Multiple senders collaboratively stream media content to a receiver. [Diagram: Sender 1, Sender 2, and Sender 3 streaming to a single Receiver.]

  4. The receiver coordinates the senders using a pull-based protocol, requesting different segments from different senders: "Sender 1, please send me data packets x, y, z. Sender 2, ..."
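
As an illustration of this pull-based coordination, here is a minimal Python sketch (the message layout, field names, and round-robin policy below are assumptions made for illustration, not the protocol used in the paper):

    from dataclasses import dataclass, field

    @dataclass
    class PullRequest:
        sender_id: int                                    # which sender serves this request
        packet_ids: list = field(default_factory=list)    # packets requested from that sender

    def schedule_pulls(packet_ids, num_senders):
        """Spread the wanted packets across senders round-robin (illustrative only)."""
        requests = [PullRequest(s) for s in range(num_senders)]
        for i, pkt in enumerate(packet_ids):
            requests[i % num_senders].packet_ids.append(pkt)
        return requests

    # schedule_pulls(range(9), 3) asks each of the 3 senders for 3 distinct packets.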

  5. • Exploits path diversity and server diversity to increase resilience to congestion and sender failure • With a media coding scheme such as MDC (multiple description coding), the receiver can still play back continuously (at a lower quality) if a sender fails.

  6. Congestion Control in Distributed Media Streaming

  7. Per-flow Congestion Control ?

  8. Using multiple flows is unfair to other single-flow applications.

  9. • A similar problem is observed with parallel TCP flows: 1. TCP-P (Soonghyun Cho et al.) 2. TCP-ROME (Roger Karrer et al.) 3. Multi-priority TCP (Ronald Tse et al.)

  10. Task-Level TCP Friendliness • The total bandwidth of the flows belonging to the same task on a link should be no larger than that of a single TCP flow on the same link (experiencing similar network conditions): $\sum_{f_i \in L} B_i \leq B_{TCP}$
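
For concreteness (a worked instance of the condition above; the numbers are illustrative): if a task has three flows crossing one link L, then

\[
  \sum_{f_i \in L} B_i = B_1 + B_2 + B_3 \leq B_{TCP},
\]

so a symmetric allocation gives each flow at most $B_{TCP}/3$, i.e. each flow runs at $k = 1/3$ of a TCP flow's throughput.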

  11. Task-Level TCP Friendliness [Diagram: the task's flows sharing a bottleneck link.]

  12. The Challenges • Different media flows may experience different congested links • How to determine the “fair” throughput of a media flow?

  13. DMSCC: Congestion Control Algorithm

  14. Suppose (i) we know the topology, and (ii) the topology is a tree. [Diagram: tree topology rooted at the Receiver.]

  15. 1. Find out where the congested link(s) are. [Diagram: the congested link highlighted in the tree, seen from the Receiver.]

  16. 2. Control the rate of the flows on congested links. [Diagram: in this example, each flow on the congested link should consume half the bandwidth of a TCP flow.]

  17. Identifying Congested Links: given end-to-end measurements on a set of flows, determine which flows share bottleneck link(s). Controlling Throughput: given a set of flows on a bottleneck link, how to control the throughput of the flows so that they satisfy $\sum_{f_i \in L} B_i \leq B_{TCP}$.


  19. Identifying Congested Links • A non-trivial problem even for one shared bottleneck • Rubenstein (ToN '02), Kim (SIGCOMM '04) • Even harder for multiple bottlenecks • We use Rubenstein's method as a building block.

  20. Rubenstein’s Method • SHARE(f, g): do two flows f and g share the same bottleneck? • Observe the packet delays of flows f and g. • Answer yes if the cross-correlation of f and g is larger than the auto-correlation of f.
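
A minimal sketch of the SHARE comparison on collected delay samples (assuming the one-way delay samples of the two flows have already been paired up by sending time; this illustrates only the final comparison, not Rubenstein's full probing and estimation procedure):

    import numpy as np

    def corr(x, y):
        """Pearson correlation of two equal-length delay sample vectors."""
        return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

    def share(delays_f, delays_g):
        """Heuristic SHARE(f, g): do flows f and g appear to share a bottleneck?"""
        # Cross-correlation: delay of each packet of f vs. the closest-in-time packet of g.
        cross = corr(delays_f, delays_g)
        # Auto-correlation: delay of each packet of f vs. f's own next packet.
        auto = corr(delays_f[:-1], delays_f[1:])
        return cross > auto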

  21. Congestion Location (one bottleneck) • Suppose a packet from flow f is lost • Find all other flows g such that SHARE(f, g) = true • Find all links common to these flows • Return the link furthest away from the receiver
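
A sketch of this localization step, assuming each flow's path is known as an ordered list of links from the sender down to the receiver (the tree topology assumed on slide 14); the function and argument names are illustrative:

    def locate_bottleneck(lossy_flow, flows, paths, share):
        """Return the presumed congested link after a loss on lossy_flow.

        paths[f] lists the links on flow f's path, ordered sender -> receiver;
        share(f, g) is the SHARE test sketched above.
        """
        # 1. Flows that share a bottleneck with the flow that lost a packet.
        suspects = [lossy_flow] + [g for g in flows
                                   if g != lossy_flow and share(lossy_flow, g)]
        # 2. Links common to all suspect flows.
        common = set(paths[suspects[0]])
        for f in suspects[1:]:
            common &= set(paths[f])
        # 3. Return the common link furthest from the receiver, i.e. the one
        #    that appears earliest on the lossy flow's sender -> receiver path.
        for link in paths[lossy_flow]:
            if link in common:
                return link
        return None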

  22. Congestion Location A packet from this flow is lost.

  23. Congestion Location These two flows share a bottleneck

  24. Congestion Location Common links for both flows

  25. Congestion Location Shared bottleneck

  26. Congestion Location (multiple bottlenecks) • Keep a history of h previous bottleneck detections. • All links in this set are presumed to be congested.
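
One simple way to keep this bookkeeping (a sketch assuming a plain fixed-length history of the last h detections; the paper may use a different expiry policy):

    from collections import deque

    class BottleneckHistory:
        """Remember the last h detected bottleneck links."""

        def __init__(self, h):
            self.history = deque(maxlen=h)   # oldest detections fall off automatically

        def record(self, link):
            """Call each time locate_bottleneck() identifies a congested link."""
            self.history.append(link)

        def congested_links(self):
            """All links in the recent history are presumed congested."""
            return set(self.history)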

  27. Identifying Congested Links: given end-to-end measurements on a set of flows, determine which flows share bottleneck link(s). Controlling Throughput: given a set of flows on a bottleneck link, how to control the throughput of the flows so that they satisfy $\sum_{f_i \in L} B_i \leq B_{TCP}$.

  28. Recall that we are running a pull-based protocol: "Sender 1, please send me data packets x, y, z. Sender 2, ..."

  29. To control the throughput, the receiver maintains a "congestion window" for each sender and never pulls more than the window allows. [Diagram: e.g. the window of sender 1 is 5, the window of sender 2 is 6, ...]

  30. The window is adjusted according to AIMD: increased when packet transmissions succeed, decreased when a packet is lost. [Diagram as on the previous slide.]

  31. How to adjust the window? • If we follow TCP's algorithm, we will achieve throughput similar to a single TCP flow. • To achieve k (k < 1) times the throughput of a TCP flow, we need to be less aggressive in increasing our window.
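
A minimal sketch of such a receiver-side window with a tunable additive-increase factor α (the class and method names are illustrative; α = 1 behaves like standard AIMD, while a smaller α makes the flow less aggressive):

    class SenderWindow:
        """Receiver-side congestion window for one sender (AIMD, increase factor alpha)."""

        def __init__(self, alpha=1.0, initial=2.0):
            self.alpha = alpha    # additive increase per RTT, in packets
            self.cwnd = initial   # current window size, in packets

        def on_ack(self):
            # Additive increase: roughly alpha packets per RTT
            # (alpha / cwnd per acknowledged packet).
            self.cwnd += self.alpha / self.cwnd

        def on_loss(self):
            # Multiplicative decrease: halve the window, as TCP does.
            self.cwnd = max(1.0, self.cwnd / 2)

        def budget(self, outstanding):
            """How many more packets may be pulled from this sender right now."""
            return max(0, int(self.cwnd) - outstanding)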

  32. [Sawtooth diagram: congestion window (pkt) over time, oscillating between W/2 and W.] The window increases by α every RTT; a packet loss occurs every 1/p packets.

  33. [The same sawtooth diagram, annotated with the cycle length of W/(2α) RTTs between consecutive losses.] The window increases by α every RTT; a packet loss occurs every 1/p packets.

  34. Considering the area under the sawtooth curve (the number of packets sent per loss cycle), we get $\frac{3W^2}{8\alpha} = \frac{1}{p}$, hence $W = \sqrt{\frac{8\alpha}{3p}}$. To get k times the throughput of a TCP flow, the increase factor α should be $k^2$.
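
Filling in the step between these two statements (a standard back-of-the-envelope throughput argument, stated here for completeness): the average throughput of the sawtooth is proportional to the average window, 3W/4 packets per RTT, so

\[
  T(\alpha) \;\propto\; \frac{3}{4} W
  \;=\; \frac{3}{4}\sqrt{\frac{8\alpha}{3p}}
  \;=\; \sqrt{\frac{3\alpha}{2p}},
  \qquad\text{hence}\qquad
  \frac{T(\alpha)}{T(1)} = \sqrt{\alpha}.
\]

Setting $\sqrt{\alpha} = k$ gives $\alpha = k^2$, as stated on the slide.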

  35. Identifying Congested Links: given end-to-end measurements on a set of flows, determine which flows share bottleneck link(s). Controlling Throughput: given a set of flows on a bottleneck link, how to control the throughput of the flows so that they satisfy $\sum_{f_i \in L} B_i \leq B_{TCP}$.

  36. DMSCC: Congestion Control Algorithm

  37. DMSCC Algorithm. On packet loss: • Find the set of bottleneck links • For each bottleneck link l, let n be the number of flows on l and set the α of each flow on l to min(α, 1/n²). If no packet loss occurs for some time t: • Reset all α to 1.
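
Putting the pieces together, the receiver's control loop might look like the sketch below; it reuses the illustrative helpers from earlier (locate_bottleneck, BottleneckHistory, SenderWindow), all of which are assumptions of this write-up rather than code from the paper:

    import time

    def on_packet_loss(lossy_flow, flows, paths, share, history, windows, flows_on_link):
        """DMSCC reaction to a packet loss (slide 37)."""
        link = locate_bottleneck(lossy_flow, flows, paths, share)
        if link is not None:
            history.record(link)
        # Every link in the recent detection history is presumed congested.
        for l in history.congested_links():
            members = flows_on_link(l)
            n = len(members)
            for f in members:
                # Each flow on a congested link is limited to 1/n of a TCP flow's
                # throughput, so its increase factor becomes k^2 = 1/n^2.
                windows[f].alpha = min(windows[f].alpha, 1.0 / (n * n))

    def on_quiet_period(windows, last_loss_time, t):
        """If no loss has been observed for t seconds, restore full aggressiveness."""
        if time.time() - last_loss_time > t:
            for w in windows.values():
                w.alpha = 1.0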

  38. Simulation and Results

  39. Network Topology [Diagram: the simulation topology with links L0, L1, L2, and L3.]

  40. Background Traffic [Diagrams: background cross-traffic placed on the links L0–L3 during time intervals 0–50, 50–100, and 100–150.]

  41. Background Traffic [Diagrams: background cross-traffic placed on the links L0–L3 during time intervals 150–200, 200–250, and 250–350.]

  42–50. [Plots, revealed step by step across slides 42–50: throughput on each of the links L0, L1, L2, and L3 over simulation time 1–351, comparing the DMS flows (DMS) against a competing TCP flow (TCP).]

  51. Summary • Distributed media streaming needs task-level congestion control. • Two sub-problems: identify congested links and control sending rates.

  52. If links A and B are congested at the same time, shared congestion at B might not be detected. [Diagram: links A and B on the tree.]

  53. Throughput control is not as accurate when packet losses are bursty. [Sawtooth diagram of the congestion window, as on slide 32.]

  54. A pull-based protocol might not be the right thing to do. "Sender 1, please send me data packets x, y, z. Sender 2, ..."

  55. The End
