Streaming Video and TCP-Friendly Congestion Control


  1. Streaming Video and TCP-Friendly Congestion Control
  Sugih Jamin, Department of EECS, University of Michigan (jamin@eecs.umich.edu)
  Joint work with Zhiheng Wang (UofM) and Sujata Banerjee (HP Labs)

  2. Video Application on the Internet
  Adaptive playback streaming:
  • The sender sends data unit i at time t_i; the receiver receives it at time t_i + Δ, where Δ = propagation delay + queueing delay
  • To smooth out the variable queueing delay, the receiver buffers some amount of data (units i to i+k, k > 0) before playing back unit i
  • By the time the receiver is ready to play back unit i+k, it has hopefully arrived; otherwise, the receiver increases its buffering (hence "adaptive")
  A minimal sketch of this buffering policy follows.
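To make the policy concrete, here is a minimal Python sketch of adaptive playback buffering. The class name, the initial threshold, and the adaptation rule (grow the buffering target by one unit per underrun) are illustrative assumptions, not details from the talk.

```python
import collections

class AdaptivePlayer:
    """Minimal sketch of adaptive playback buffering (assumed policy)."""

    def __init__(self, initial_k=5):
        self.k = initial_k                 # units to buffer before (re)starting playback
        self.buffer = collections.deque()  # received but not yet played data units
        self.playing = False

    def on_receive(self, unit):
        """Unit i arrives at time t_i + delta (propagation + queueing delay)."""
        self.buffer.append(unit)
        if not self.playing and len(self.buffer) >= self.k:
            self.playing = True            # enough data buffered: start playback

    def on_playback_tick(self):
        """Called once per playback interval; returns the unit to render, or None."""
        if self.playing and self.buffer:
            return self.buffer.popleft()
        if self.playing:
            # Underrun: unit i+k did not arrive in time. Pause playback and
            # increase the buffering target -- hence "adaptive".
            self.playing = False
            self.k += 1
        return None
```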

  3. Video Streaming
  Two ways to send data:
  • bulk transfer: complete the transfer before playback begins
  • streaming: transfer while playback is in progress
  Why streaming?
  • shorter playback start time
  • smaller receiver buffer requirement
  • lower interaction delay

  4. Expectations vs. Reality
  Streaming media service requirements:
  • resource intensive
  • smooth (low-variance) throughput
  Internet service characteristics:
  • shared resources
  • variable bandwidth
  • unpredictable network latency
  • lossy channel

  5. Streaming Video over the Internet
  Effects of transient changes in available bandwidth:
  • the buffer empties during playback
  • playback pauses for rebuffering
  • a larger buffer increases start-up time (a problem for live, interactive sessions)
  Applicable to other streaming data: scientific visualization, dynamic objects in massively multiplayer games, web page downloads.

  6. Case Study: Windows Media Player
  Application characteristics:
  • WM Server sends traffic at a constant bit rate
  • The WMP client pauses video playback until sufficient packets have been buffered (rebuffering)
  • The WMP client asks for retransmission to recover a lost packet; if the packet cannot be recovered, the whole frame is considered lost
  • WM Server reduces its sending rate when it detects lower available bandwidth
  A sketch of the client-side loss handling follows.
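Here is a hedged Python sketch of the loss handling the slide describes. The WMP protocol details are not given in the talk; `request_retransmit`, the retry limit, and the data structures are assumptions for illustration only.

```python
def handle_frame(frame_packets, received, request_retransmit, max_attempts=3):
    """Return the frame's packets in order, or None if the frame is lost.

    `received` maps sequence number -> payload for packets already here;
    `request_retransmit(seq)` asks the server to resend and returns the
    payload or None (an assumed interface, for illustration only).
    """
    for seq in frame_packets:
        if seq in received:
            continue
        for _ in range(max_attempts):
            payload = request_retransmit(seq)
            if payload is not None:
                received[seq] = payload
                break
        else:
            # Unrecoverable packet: the whole frame is considered lost.
            return None
    return [received[seq] for seq in frame_packets]
```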

  7. Streaming Video Quality
  [Figure: three timing diagrams of packets P1, P2, P3 traversing the network (one-way delay = RTT/2).
  Case 1: no queueing delay, sufficient bandwidth; good video quality.
  Case 2: large queueing delay, insufficient bandwidth; packet loss degrades video quality.
  Case 3: some queueing delay, bandwidth changes over time; video quality varies.]

  8. Measuring Streaming Video Quality
  Metrics:
  • server transmission rate (service rate)
  • client rebuffering probability
  • client rebuffering duration
  • client frame loss
  A sketch of computing the client-side rebuffering metrics follows.
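As a hedged sketch, the snippet below computes the two rebuffering metrics from per-session logs. The talk does not give exact definitions; here rebuffering probability is taken to be the fraction of sessions with at least one rebuffering event, an assumption for illustration.

```python
def rebuffer_metrics(sessions):
    """`sessions` is a list of lists of rebuffering durations (seconds),
    one inner list per playback session (empty if no rebuffering).
    Returns (rebuffering probability, mean rebuffering duration)."""
    n = len(sessions)
    stalled = [s for s in sessions if s]                 # sessions that rebuffered
    prob = len(stalled) / n if n else 0.0
    durations = [d for s in stalled for d in s]          # all rebuffering events
    mean_dur = sum(durations) / len(durations) if durations else 0.0
    return prob, mean_dur

# Example: 4 sessions, two of which rebuffered.
print(rebuffer_metrics([[], [2.0, 1.5], [], [3.0]]))     # (0.5, 2.1666...)
```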

  9. Improving User-Perceived Quality
  • Users are less annoyed by lower but consistent quality than by continual rebuffering
  • Changes in available bandwidth cause changes in rebuffering probability and duration
  • Streaming video needs a low loss rate and smooth available bandwidth to reduce user annoyance
  • Need: a smooth congestion control mechanism

  10. TCP-Friendliness
  • TCP is the standard transport protocol
  • TCP performs congestion control by probing linearly (additively) for available bandwidth and decreasing its rate multiplicatively when congestion is detected (packet loss)
  • "A congestion control protocol is TCP-friendly if, in steady state, its bandwidth utilization is no more than required by TCP under similar circumstances" [Floyd et al., 2000]
  • TCP-friendliness ensures that a proposed protocol remains compatible with TCP
  A minimal sketch of TCP's AIMD rule follows.
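For concreteness, here is a minimal Python sketch of the additive-increase, multiplicative-decrease (AIMD) rule. The constants are the classic textbook values, not parameters quoted in the talk.

```python
def aimd_update(cwnd, loss_detected, alpha=1.0, beta=0.5, min_cwnd=1.0):
    """Update the congestion window (in packets) once per RTT."""
    if loss_detected:
        return max(min_cwnd, cwnd * beta)  # multiplicative decrease on loss
    return cwnd + alpha                    # linear probe for more bandwidth

# Example: the familiar sawtooth.
cwnd = 10.0
for rtt, loss in enumerate([False, False, True, False, False]):
    cwnd = aimd_update(cwnd, loss)
    print(f"RTT {rtt}: cwnd = {cwnd}")
```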

  11. TCP-Friendly Rate Control (TFRC)
  Goals:
  • to provide streaming media with steady throughput
  • to be TCP-friendly
  Instead of reacting to individual losses, TFRC tries to satisfy the TCP throughput function over time:

  T = \frac{s}{R\sqrt{2p/3} + t_{RTO}\,\left(3\sqrt{3p/8}\right)\,p\,(1 + 32p^2)}

  T: TCP throughput; s: packet size; p: loss event rate; R: path RTT; t_RTO: retransmission timeout
  The same equation as a Python function follows.
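The snippet below evaluates the throughput equation directly, following the slide's definitions (s in bytes, p as loss event rate, R and t_RTO in seconds, result in bytes per second). The example inputs reuse the Internet experiment on slide 17; the packet size and the t_RTO = 4R choice are assumptions for illustration.

```python
from math import sqrt

def tfrc_rate(s, p, rtt, t_rto):
    """Steady-state TCP-friendly sending rate from the TCP throughput equation."""
    if p <= 0:
        raise ValueError("loss event rate must be positive")
    denom = rtt * sqrt(2 * p / 3) + t_rto * (3 * sqrt(3 * p / 8)) * p * (1 + 32 * p * p)
    return s / denom

# Example: RTT = 67 ms and loss event rate = 0.24% (from slide 17),
# with assumed 1460-byte packets and t_RTO = 4 * RTT.
rate = tfrc_rate(s=1460, p=0.0024, rtt=0.067, t_rto=4 * 0.067)
print(f"{rate / 1000:.1f} KB/s")
```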

  12. Terminology
  [Figure: diagram relating the terms defined on the next slide (application data rate, sending rate, self-clocked rate, calculated fair share, allowed rate, throughput, bandwidth capacity) across the application, OS, and network layers at sender and receiver.]

  13. Terminology (cont'd)
  • Data rate: the rate at which an application generates data
  • Sending rate: the rate at which a connection sends data
  • Self-clocked rate: the upper bound on the sending rate calculated by TFRC
  • Fair share: TCP's throughput during bulk data transfer
  • Fair share load: the ratio of the sending rate to the fair share
  • Throughput: the incoming traffic rate measured at the receiver

  14. Does TFRC Provide Smoother Throughput?
  Experiment setup:
  [Figure: dumbbell topology. Sources S(0)...S(M-1) connect through routers R1 and R2 to destinations D(0)...D(M-1); the bottleneck link between R1 and R2 is 1.5 Mbps with 50 ms delay.]
  • Data source: CBR traffic
  • Background traffic:
    o long- and short-lived TCP flows with an infinite amount of data
    o flash crowd: a large number of short TCP bursts
    o long-range-dependent traffic: a number of Pareto-distributed ON/OFF flows (a sketch of this traffic model follows)
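As a hedged sketch of the Pareto ON/OFF model named above: each flow alternates between ON periods (sending at a fixed rate) and OFF periods (silent), with both durations drawn from a heavy-tailed Pareto distribution. The shape and mean values below are assumptions; the talk does not give them.

```python
import random

def pareto(shape, scale):
    """Pareto-distributed duration with minimum `scale` (heavy-tailed)."""
    return scale * random.paretovariate(shape)

def on_off_schedule(total_time, shape=1.2, mean_on=0.5, mean_off=0.5):
    """Yield (start, duration) ON periods for one flow over [0, total_time]."""
    # For Pareto with shape a > 1, mean = scale * a / (a - 1),
    # so pick scale to hit the desired mean duration.
    scale_on = mean_on * (shape - 1) / shape
    scale_off = mean_off * (shape - 1) / shape
    t = 0.0
    while t < total_time:
        on = pareto(shape, scale_on)
        yield (t, min(on, total_time - t))
        t += on + pareto(shape, scale_off)

for start, dur in on_off_schedule(10.0):
    print(f"ON at {start:.2f}s for {dur:.2f}s")
```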

  15. Not that Smooth
  [Plot: TCP's self-clocked rate, TFRC's self-clocked rate, and TFRC's sending rate (KBps) over 100 seconds; TFRC's rates fluctuate substantially.]
  • Data rate: 50 KBps
  • Background traffic: 1 long-lived TCP flow

  16. Worse with Bursty Background Traffic
  [Plot: congestion rate and sending rate (KBps) over 600 seconds.]
  • Data rate: 20 KBps
  • Background traffic: 1 long-lived TCP flow + 5 ON/OFF flows

  17. Internet Experiments
  A sample path between Michigan and California
  [Plot: self-clocked rate and sending rate (KBps) versus round ID, rounds 300-500.]
  • Data rate: 40 KBps
  • RTT: 67 ms
  • Loss event rate: 0.24%

  18. MARC's Design Motivation
  TFRC congestion control is memoryless, whereas:
  • streaming media is "well-behaved": when there is no congestion, streaming applications cannot always fully utilize their fair share
  • yet during congestion, TFRC applies the same rate-reduction principle to streaming media traffic as to bulk data transfer traffic
  Media-Aware Rate Control (MARC) proposition: "well-behaved" streaming applications should be allowed to reduce their sending rate more slowly during congestion.

  19. Media-Aware Rate Control (MARC)
  • Define a token value C to keep track of a connection's fair-share utilization:

  C = \beta C' + (T - W_{send})\, I

  C: token; β: decay factor; C′: previous token value; T: previous calculated self-clocked rate; W_send: previous sending rate; I: feedback interval
  • We use β = 0.9
  The update as a Python function follows.
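Here is the token update from the slide as a short Python function. Taking rates in bytes/sec and the feedback interval I in seconds (so the token C is in bytes) is a unit assumption for illustration.

```python
BETA = 0.9  # decay factor used in the talk's experiments

def update_token(c_prev, t_prev, w_send_prev, interval, beta=BETA):
    """C = beta * C' + (T - W_send) * I

    The token grows while the application sends below its calculated
    self-clocked rate T (it is "well-behaved") and decays otherwise.
    """
    return beta * c_prev + (t_prev - w_send_prev) * interval

# Example: an application sending 50 KB/s under a 140 KB/s self-clocked
# rate accumulates credit over successive feedback intervals.
c = 0.0
for _ in range(5):
    c = update_token(c, t_prev=140_000, w_send_prev=50_000, interval=0.08)
print(f"token = {c / 1000:.1f} KB")
```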

  20. Media-Aware Rate Control (MARC)
  [Figure: the MARC rate-adjustment rule; not recovered in this transcript.]
  Our experiments use δ = 0.1.

  21. MARC is Effective
  [Plots: self-clocked rate and sending rate (KBps) over 100 seconds, side by side for TFRC (left) and MARC (right); MARC's rates are smoother.]
  • Data rate: 50 KBps
  • Background traffic: 1 long-lived TCP flow

  22. MARC is TCP-Friendly
  [Plot: sending rate (KBps) versus data rate (KBps) for MARC-RED, TFRC-RED, MARC-DropTail, and TFRC-DropTail, against the fair-share line.]
  • Data rate: 10 CBR sources
  • Background traffic: 10 long-lived TCP flows

  23. Reaction Time to Persistent Congestion
  • Congestion begins at the 50th second; RTT: 80 ms
  • Fair share before congestion: 140 KBps
  [Plots: self-clocked rate (KBps) for MARC and TFRC from 46 to 54 seconds, at four data rates: (a) 20 KBps, x = 51.81; (b) 40 KBps, x = 52.44; (c) 60 KBps, x = 51.33; (d) 80 KBps, x = 50.90.]
  Without the token, MARC behaves exactly like TFRC.

  24. Token Dynamics
  [Plot: sending rate and self-clocked rate (KBps) and token value (KByte) over 90 seconds, with the flash-crowd interval marked.]
  • 1 long-lived TCP flow and 1 MARC flow
  • Data rate: 100 KBps
  • Flash crowd (800 short-lived TCP flows) starts at the 50th second and lasts 5 seconds

  25. MARC Improves User-Perceived Quality
  [Plot: probability density function of the number of rebuffering events (0-5) for TCP, TFRC, and MARC.]
  • Data rate: 44 KBps
  • Background traffic: 1 long-lived TCP flow + 1 ON/OFF flow

  26. Future Work
  • Layered video adaptation with MARC
  • Analyzing MARC
  • Streaming media over end-host multicast
    o multiple receivers: congestion control on end-host multicast
    o multiple sources: Integrated Flow Control
