Limitation of Stop-and-Wait (2)
• Example: R = 1 Mbps, D = 50 ms, 10 kbit packets
• RTT (Round Trip Time) = 2D = 100 ms
• How many packets/sec?
• What if R = 10 Mbps?
CSE 461, University of Washington
Sliding Window
• Generalization of stop-and-wait
• Allows W packets to be outstanding
• Can send W packets per RTT (= 2D)
• Pipelining improves performance
• Need W = 2BD to fill network path
Sliding Window (2)
• What W will use the network capacity?
• Assume 10 kbit packets
• Ex: R = 1 Mbps, D = 50 ms
• Ex: What if R = 10 Mbps?
Sliding Window (3)
• Ex: R = 1 Mbps, D = 50 ms
  • 2BD = 10^6 b/sec × 100×10^-3 sec = 100 kbit
  • W = 2BD / packet size = 10 packets of 1250 bytes
• Ex: What if R = 10 Mbps?
  • 2BD = 1000 kbit
  • W = 2BD / packet size = 100 packets of 1250 bytes
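The arithmetic above can be checked with a short sketch (the function name is hypothetical; it just evaluates W = 2BD / packet size from the slides):

```python
def window_for_path(rate_bps, one_way_delay_s, packet_bits):
    # Bandwidth-delay product over one RTT (2D), in bits
    bdp_bits = rate_bps * 2 * one_way_delay_s
    # W = 2BD / packet size: packets needed to keep the pipe full
    return bdp_bits / packet_bits

# R = 1 Mbps, D = 50 ms, 10 kbit packets: 2BD = 100 kbit, so W = 10
w_slow = window_for_path(1_000_000, 0.050, 10_000)

# R = 10 Mbps: 2BD = 1000 kbit, so W = 100
w_fast = window_for_path(10_000_000, 0.050, 10_000)
```

This reproduces the two worked examples: a tenfold increase in rate requires a tenfold larger window to fill the same path.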
Sliding Window Protocol
• Many variations, depending on how buffers, acknowledgements, and retransmissions are handled
• Go-Back-N
  • Simplest version, can be inefficient
• Selective Repeat
  • More complex, better performance
Sliding Window – Sender
• Sender buffers up to W segments until they are acknowledged
• LFS = LAST FRAME SENT, LAR = LAST ACK RECEIVED
• Sends while LFS – LAR ≤ W
[Diagram: W=5 window over the sequence numbers; segments up to LAR are acked, LAR+1..LFS are unacked, one slot is available, the rest are unavailable]
Sliding Window – Sender (2)
• Transport accepts another segment of data from the Application ...
• Transport sends it (as LFS – LAR ≤ 5)
[Diagram: the window is now full; all five in-flight segments are sent, none available]
Sliding Window – Sender (3)
• Next higher ACK arrives from peer…
• Window advances, buffer is freed
• LFS – LAR = 4 (can send one more)
[Diagram: the window slides right by one; one slot is available again]
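The sender state machine on these three slides can be sketched as follows (the class and method names are hypothetical; the LAR/LFS variables and cumulative-ACK behavior are from the slides):

```python
class SlidingWindowSender:
    def __init__(self, w=5):
        self.w = w
        self.lar = -1     # LAR: LAST ACK RECEIVED
        self.lfs = -1     # LFS: LAST FRAME SENT
        self.buffer = {}  # unacked segments kept for possible retransmission

    def can_send(self):
        # At most W segments may be outstanding (unacknowledged)
        return self.lfs - self.lar < self.w

    def send(self, data):
        assert self.can_send()
        self.lfs += 1
        self.buffer[self.lfs] = data  # buffered until acknowledged

    def on_ack(self, ack):
        # Cumulative ACK: frees buffers up to `ack`, window slides forward
        while self.lar < ack:
            self.lar += 1
            self.buffer.pop(self.lar, None)
```

After five sends the window is full; one ACK frees a buffer and lets the next segment in, which is exactly the window-sliding step shown above.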
Sliding Window – Go-Back-N
• Receiver keeps only a single packet buffer for the next segment
• State variable, LAS = LAST ACK SENT
• On receive:
  • If seq. number is LAS+1, accept and pass it to app, update LAS, send ACK
  • Otherwise discard (as out of order)
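The Go-Back-N receive rule above fits in a few lines (hypothetical class name; the LAS logic follows the slide):

```python
class GoBackNReceiver:
    def __init__(self):
        self.las = -1        # LAS: LAST ACK SENT
        self.delivered = []  # data passed to the app, in order

    def on_receive(self, seq, data):
        if seq == self.las + 1:
            # Next expected segment: accept, pass to app, update LAS
            self.delivered.append(data)
            self.las = seq
        # Otherwise discard (out of order); no buffering in Go-Back-N
        return self.las      # the ACK always reports LAS
```

Note the discarded out-of-order segment must be retransmitted later, which is the inefficiency the slide mentions.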
Sliding Window – Selective Repeat
• Receiver passes data to app in order, and buffers out-of-order segments to reduce retransmissions
• ACK conveys highest in-order segment, plus hints about out-of-order segments
• TCP uses a selective repeat design; we’ll see the details later
Sliding Window – Selective Repeat (2)
• Buffers W segments, keeps state variable LAS = LAST ACK SENT
• On receive:
  • Buffer segments [LAS+1, LAS+W]
  • Pass in-order segments from LAS+1 to the app, and update LAS
  • Send ACK for LAS regardless
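The Selective Repeat receive rule can be sketched the same way (hypothetical class name; buffering window and LAS update follow the slide):

```python
class SelectiveRepeatReceiver:
    def __init__(self, w=5):
        self.w = w
        self.las = -1        # LAS: LAST ACK SENT
        self.buffer = {}     # out-of-order segments in [LAS+1, LAS+W]
        self.delivered = []  # data passed to the app, in order

    def on_receive(self, seq, data):
        # Buffer any segment that falls inside the acceptable window
        if self.las < seq <= self.las + self.w:
            self.buffer[seq] = data
        # Pass any in-order run starting at LAS+1 to the app, updating LAS
        while self.las + 1 in self.buffer:
            self.las += 1
            self.delivered.append(self.buffer.pop(self.las))
        return self.las      # send ACK for LAS regardless
```

Unlike Go-Back-N, an out-of-order arrival is held rather than discarded, so it need not be retransmitted once the gap fills.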
Sliding Window – Retransmissions
• Go-Back-N uses a single timer to detect losses
  • On timeout, resends buffered packets starting at LAR+1
• Selective Repeat uses a timer per unacked segment to detect losses
  • On timeout for a segment, resend it
  • Hope is to resend fewer segments
Sequence Numbers
• Need more than 0/1 for Stop-and-Wait …
• But how many?
• For Selective Repeat, need W numbers for packets, plus W for acks of earlier packets
  • 2W seq. numbers
  • Fewer for Go-Back-N (W+1)
• Typically implement seq. number with an N-bit counter that wraps around at 2^N – 1
  • E.g., N=8: …, 253, 254, 255, 0, 1, 2, 3, …
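The wrap-around counter from the last bullet is just arithmetic modulo 2^N:

```python
N = 8
MOD = 2 ** N  # an N-bit counter wraps around after 2^N - 1

def next_seq(seq):
    # Advance the sequence number, wrapping 255 -> 0 for N = 8
    return (seq + 1) % MOD

seqs, s = [], 253
for _ in range(6):
    seqs.append(s)
    s = next_seq(s)
# Produces the slide's example: 253, 254, 255, 0, 1, 2
```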
Sequence Time Plot
[Plot: sequence number vs. time; transmissions at the sender and ACKs at the receiver, offset by the one-way delay (= RTT/2)]
Sequence Time Plot (2)
[Plot: sequence number vs. time for a Go-Back-N scenario]
Sequence Time Plot (3)
[Plot: a loss triggers a timeout, followed by retransmissions; sequence number vs. time]
ACK Clocking
Sliding Window ACK Clock
• Each in-order ACK advances the sliding window and lets a new segment enter the network
• ACKs “clock” data segments
[Diagram: a stream of numbered data segments flowing one way, with ACKs returning the other]
Benefit of ACK Clocking
• Consider what happens when a sender injects a burst of segments into the network
[Diagram: fast link → queue → slow (bottleneck) link → fast link]
Benefit of ACK Clocking (2)
• Segments are buffered and spread out on the slow link
[Diagram: segments “spread out” as they cross the slow (bottleneck) link]
Benefit of ACK Clocking (3)
• ACKs maintain the spread back to the original sender
[Diagram: ACKs returning across the slow link keep the same spacing]
Benefit of ACK Clocking (4)
• Sender clocks new segments with the spread
• Now sending at the bottleneck link rate without queuing!
[Diagram: segments stay spread; the queue no longer builds]
Benefit of ACK Clocking (5)
• Helps run with low levels of loss and delay!
• The network smooths out the burst of data segments
• ACK clock transfers this smooth timing back to the sender
• Subsequent data segments are not sent in bursts, so they do not queue up in the network
TCP Uses ACK Clocking
• TCP uses a sliding window because of the value of ACK clocking
• Sliding window controls how many segments are inside the network
• TCP only sends small bursts of segments to let the network keep the traffic smooth
Problem
• Sliding window uses pipelining to keep the network busy
• What if the receiver is overloaded?
[Cartoon: a big server streams video to a small mobile device]
Sliding Window – Receiver
• Consider a receiver with W buffers
• LAS = LAST ACK SENT; app pulls in-order data from the buffer with recv() calls
[Diagram: W=5 window; sequence numbers at or below LAS are finished, the next W are acceptable, anything higher is too high]
Sliding Window – Receiver (2)
• Suppose the next two segments arrive but the app does not call recv()
[Diagram: two new segments land in the acceptable region of the window]
Sliding Window – Receiver (3)
• Suppose the next two segments arrive but the app does not call recv()
• LAS rises, but we can’t slide the window!
[Diagram: the two segments are acked but still buffered; the window cannot advance]
Sliding Window – Receiver (4)
• Further segments arrive (in order) and fill the buffer
• Must drop segments until the app calls recv()!
[Diagram: nothing acceptable; the window is full of acked-but-undelivered segments]
Sliding Window – Receiver (5)
• App recv() takes two segments
• Window slides (phew)
[Diagram: two slots open up as the app consumes data]
Flow Control
• Avoid loss at the receiver by telling the sender the available buffer space
• WIN = #Acceptable, not W (counted from LAS)
[Diagram: the advertised window covers only the acceptable region]
Flow Control (2)
• Sender uses the lower of the sliding window and flow control window (WIN) as the effective window size
[Diagram: effective window shrunk to W=3 by the receiver’s advertised WIN]
Flow Control (3)
• TCP-style example
  • SEQ/ACK sliding window
  • Flow control with WIN
  • SEQ + length < ACK + WIN
  • 4 KB buffer at receiver
  • Circular buffer of bytes
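The two rules on these flow-control slides can be sketched as predicates (hypothetical function names; exact boundary handling of the SEQ + length < ACK + WIN check varies by convention, so this mirrors the slide's strict inequality):

```python
def may_send(seq, length, ack, win):
    # Slide's admissibility check: new data must lie within the
    # receiver's advertised window (SEQ + length < ACK + WIN)
    return seq + length < ack + win

def effective_window(w, win):
    # Sender uses the lower of its sliding window W and the
    # flow-control window WIN advertised by the receiver
    return min(w, win)
```

With a 4 KB receiver buffer and nothing outstanding, a 1 KB segment is admissible but an 8 KB one is not, and a WIN of 3 shrinks a W=5 sender to an effective window of 3.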
Topic
• How to set the timeout for sending a retransmission
• Adapting to the network path
[Diagram: a segment disappears into the network – lost?]
Retransmissions
• With a sliding window, losses are detected with a timeout
• Set the timer when a segment is sent
• Cancel the timer when its ack is received
• If the timer fires, retransmit the data as lost
Timeout Problem
• Timeout should be “just right”
  • Too long wastes network capacity
  • Too short leads to spurious resends
  • But what is “just right”?
• Easy to set on a LAN (Link)
  • Short, fixed, predictable RTT
• Hard on the Internet (Transport)
  • Wide range, variable RTT
Example of RTTs
[Plot: round trip time (ms, 0–1000) vs. time (0–200 s) for a BCN–SEA–BCN path]
Example of RTTs (2)
[Same BCN–SEA–BCN plot, annotated:]
• Variation due to queuing at routers, changes in network paths, etc.
• Propagation (+ transmission) delay ≈ 2D
Example of RTTs (3)
[Same plot, annotated: a fixed timer is either too high or too low]
• Need to adapt to the network conditions
Adaptive Timeout
• Smoothed estimates of the RTT (1) and variance in RTT (2)
• Update estimates with a moving average:
  1. SRTT_(N+1) = 0.9·SRTT_N + 0.1·RTT_(N+1)
  2. Svar_(N+1) = 0.9·Svar_N + 0.1·|RTT_(N+1) – SRTT_(N+1)|
• Set timeout to a multiple of the estimates
  • To estimate the upper RTT in practice
  • TCP Timeout_N = SRTT_N + 4·Svar_N
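The update rules above translate directly into code (hypothetical function name; the 0.9/0.1 weights and the SRTT + 4·Svar timeout are from the slide):

```python
def update_rtt_estimates(srtt, svar, rtt_sample, alpha=0.9):
    # Exponentially weighted moving averages of RTT and its deviation
    srtt = alpha * srtt + (1 - alpha) * rtt_sample
    svar = alpha * svar + (1 - alpha) * abs(rtt_sample - srtt)
    # TCP-style timeout: smoothed RTT plus four deviations of headroom
    return srtt, svar, srtt + 4 * svar
```

For example, with SRTT = 100 ms, Svar = 10 ms and a new 200 ms sample: SRTT becomes 110 ms, Svar becomes 18 ms, and the timeout is 182 ms, so a single slow sample nudges the timeout up rather than swinging it wildly.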
Example of Adaptive Timeout
[Plot: RTT (ms) vs. seconds, with the smoothed SRTT tracking the raw RTT and Svar shown below it]
Example of Adaptive Timeout (2)
[Same plot with the timeout line (SRTT + 4·Svar) overlaid; one early timeout where the RTT spikes above it]
Adaptive Timeout (2)
• Simple to compute, does a good job of tracking the actual RTT
• Little “headroom” to lower
• Yet very few early timeouts
• Turns out to be important for good performance and robustness
Congestion
TCP to date:
• We can set up a connection (connection establishment)
• Tear down a connection (connection release)
• Keep the sending and receiving buffers from overflowing (flow control)
What’s missing?
Network Congestion
• A “traffic jam” in the network
• Later we will learn how to control it
[Cartoon: packets stuck in the network – what’s the hold up?]
Congestion Collapse in the 1980s
• Early TCP used a fixed-size window (e.g., 8 packets)
  • Initially fine for reliability
• But something happened as the ARPANET grew
  • Links stayed busy, but transfer rates fell by orders of magnitude!
Nature of Congestion
• Routers/switches have internal buffering
[Diagram: input buffers feed a switching fabric, which feeds output buffers]
Nature of Congestion (2)
• Simplified view of per-port output queues
• Typically FIFO (First In, First Out); discard when full
[Diagram: routers with a FIFO queue of packets at each output port]
Nature of Congestion (3)
• Queues help by absorbing bursts when input rate > output rate
• But if input rate > output rate persistently, the queue will overflow
  • This is congestion
• Congestion is a function of the traffic patterns – it can occur even if every link has the same capacity
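The drop-tail FIFO queue described on these two slides can be sketched like this (hypothetical class name; FIFO order and discard-when-full behavior are from the slides):

```python
from collections import deque

class DropTailQueue:
    def __init__(self, capacity):
        self.q = deque()
        self.capacity = capacity  # buffer slots at the output port
        self.drops = 0

    def enqueue(self, pkt):
        if len(self.q) >= self.capacity:
            self.drops += 1       # queue overflow: a congestion loss
            return False
        self.q.append(pkt)
        return True

    def dequeue(self):
        # Packets leave in FIFO (First In, First Out) order
        return self.q.popleft() if self.q else None
```

A short burst fits in the buffer, but any arrival beyond capacity is simply discarded, which is the overflow the slide calls congestion.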
Effects of Congestion
• What happens to performance as we increase load?
Effects of Congestion (2)
• What happens to performance as we increase load?
Effects of Congestion (3)
• As offered load rises, congestion occurs as queues begin to fill:
  • Delay and loss rise sharply with more load
  • Throughput falls below load (due to loss)
  • Goodput may fall below throughput (due to spurious retransmissions)
• None of the above is good!
• Want to run the network just before the onset of congestion
Van Jacobson (b. 1950)
• Widely credited with saving the Internet from congestion collapse in the late ’80s
  • Introduced congestion control principles
  • Practical solutions (TCP Tahoe/Reno)
• Much other pioneering work:
  • Tools like traceroute, tcpdump, pathchar
  • IP header compression, multicast tools
TCP Tahoe/Reno
• TCP extensions and features we will study:
  • AIMD
  • Fair Queuing
  • ACK clocking
  • Adaptive timeout (mean and variance)
  • Slow-start
  • Fast Retransmission
  • Fast Recovery
TCP Timeline
[Timeline, 1970–1990: pre-history through the start of congestion control]
• Origins of “TCP” (Cerf & Kahn, ’74)
• 3-way handshake (Tomlinson, ’75)
• TCP and IP (RFC 791/793, ’81)
• TCP/IP “flag day” (BSD Unix 4.2, ’83)
• Congestion collapse observed, ’86
• TCP Tahoe (Jacobson, ’88)
• TCP Reno (Jacobson, ’90)
TCP Timeline (2)
[Timeline, 1990–2010: classic congestion control through diversification]
• TCP Reno (Jacobson, ’90)
• TCP Vegas (Brakmo, ’93) – delay based
• ECN (Floyd, ’94) – router support
• TCP New Reno (Hoe, ’95)
• TCP with SACK (Floyd, ’96)
• FAST TCP (Low et al., ’04) – delay based
• TCP BIC (Linux, ’04)
• TCP CUBIC (Linux, ’06)
• Compound TCP (Windows, ’07) – delay based
• TCP LEDBAT (IETF, ’08) – background transfers