Chapter 3: Transport Layer
  1. Chapter 3: Transport Layer

  2. Chapter 3 outline
     3.1 Transport-layer services
     3.2 Multiplexing and demultiplexing
     3.3 Connectionless transport: UDP
     3.4 Principles of reliable data transfer
     3.5 Connection-oriented transport: TCP
         - segment structure
         - reliable data transfer
         - flow control
         - connection management
     3.6 Principles of congestion control
     3.7 TCP congestion control

  3. Principles of Congestion Control
     Congestion, informally: “too many sources sending too much data too fast for the network to handle”
     - different from flow control!
     - manifestations:
       - lost packets (buffer overflow at routers)
       - long delays (queueing in router buffers)
     - a top-10 problem!

  4. Causes/costs of congestion: scenario 1
     - two senders, two receivers
     - one router, infinite buffers
     - no retransmission
     [figure: Hosts A and B send original data at rate λin into an unlimited shared output-link buffer; λout is the delivered rate]
     - large delays when congested
     - maximum achievable throughput

  5. Causes/costs of congestion: scenario 2
     - one router, finite buffers
     - sender retransmits timed-out packets
     - application-layer input = application-layer output: λin = λout
     - transport-layer input includes retransmissions: λ'in ≥ λin
     [figure: Hosts A and B; λin: original data, λ'in: original data plus retransmitted data, into finite shared output-link buffers]

  6. Congestion scenario 2a: ideal case
     - sender sends only when router buffers are available
     [figure: Hosts A and B; a copy of each packet enters the finite shared output-link buffers only when there is free buffer space; λin: original data, λ'in: original plus retransmitted data, λout: delivered rate]

  7. Congestion scenario 2a: ideal case
     - sender sends only when router buffers are available
     [figure: λout rises linearly with λin up to R/2; with no loss, λ'in = λin and all offered load is delivered]

  8. Congestion scenario 2b: known loss
     - packets may get dropped at router due to full buffers
     - sender resends a packet only if it is known to be lost (admittedly idealized)
     [figure: Hosts A and B; copies of lost packets re-enter the finite shared output-link buffers; λin: original data, λ'in: original plus retransmitted data]

  9. Congestion scenario 2b: known loss
     - packets may get dropped at router due to full buffers; sometimes not lost
     - sender resends only if a packet is known to be lost (admittedly idealized)
     - when sending at R/2, some packets are retransmissions, but asymptotic goodput is still R/2 (why?)
     [figure: λout vs. λin, approaching R/2 as λin approaches R/2; free buffer space at the router]

  10. Congestion scenario 2c: duplicates
      - packets may get dropped at router due to full buffers
      - sender times out prematurely, sending two copies, both of which are delivered
      [figure: a copy of the packet follows the original into the finite shared output-link buffers even though buffer space is free]

  11. Congestion scenario 2c: duplicates
      - packets may get dropped at router due to full buffers
      - sender times out prematurely, sending two copies, both of which are delivered
      [figure: same scenario, showing the premature timeout that triggers the duplicate copy]

  12. Congestion scenario 2c: duplicates
      - when sending at R/2, some delivered packets are retransmissions, including duplicates that are delivered!
      [figure: λout vs. λin flattens below R/2, toward R/4 in this example]
      “costs” of congestion:
      - more work (retransmissions) for a given “goodput”
      - unneeded retransmissions: link carries multiple copies of a packet, decreasing goodput

  13. Causes/costs of congestion: scenario 3
      - four senders
      - multihop paths
      - timeout/retransmit
      Q: what happens as λin and λ'in increase?
      [figure: Hosts A and B; λin: original data, λ'in: original plus retransmitted data, through finite shared output-link buffers]

  14. Causes/costs of congestion: scenario 3
      [figure: λout collapses toward zero as λ'in grows]
      another “cost” of congestion:
      - when a packet is dropped, any upstream transmission capacity used for that packet was wasted!

  15. Approaches towards congestion control
      Two broad approaches towards congestion control:
      - end-end congestion control:
        - no explicit feedback from network
        - congestion inferred from end-system observed loss, delay
        - approach taken by TCP
      - network-assisted congestion control:
        - routers provide feedback to end systems
        - single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
        - explicit rate at which sender should send

  16. Chapter 3 outline
      3.1 Transport-layer services
      3.2 Multiplexing and demultiplexing
      3.3 Connectionless transport: UDP
      3.4 Principles of reliable data transfer
      3.5 Connection-oriented transport: TCP
          - segment structure
          - reliable data transfer
          - flow control
          - connection management
      3.6 Principles of congestion control
      3.7 TCP congestion control

  17. TCP congestion control: additive increase, multiplicative decrease
      - approach: increase transmission rate (window size), probing for usable bandwidth, until loss occurs
      - additive increase: increase cwnd by 1 MSS every RTT until loss detected
      - multiplicative decrease: cut cwnd in half after loss
      cwnd: congestion window size
      [figure: congestion window over time, a sawtooth between roughly 8, 16, and 24 Kbytes: probing for bandwidth]
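The sawtooth on this slide can be sketched in a few lines. This is a toy model, not a real TCP implementation: it counts cwnd in MSS units and assumes a hypothetical loss point of 24 MSS, so each loss halves the window and additive increase climbs back up.

```python
MSS = 1          # cwnd counted in MSS units
CAPACITY = 24    # assumed (hypothetical) window size at which loss occurs

def aimd(rounds):
    """Return the cwnd value at the start of each RTT round."""
    cwnd, trace = 12, []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd >= CAPACITY:      # loss detected this round
            cwnd = cwnd // 2      # multiplicative decrease: halve cwnd
        else:
            cwnd += MSS           # additive increase: +1 MSS per RTT
    return trace

print(aimd(30))   # sawtooth: climbs 12..24, halves back to 12, repeats
```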

  18. TCP Congestion Control: details
      sender limits transmission:
      - LastByteSent − LastByteAcked ≤ cwnd
      - roughly: rate = cwnd / RTT bytes/sec
      - cwnd is dynamic, a function of perceived network congestion
      How does sender perceive congestion?
      - loss event = timeout or 3 duplicate ACKs
      - TCP sender reduces rate (cwnd) after a loss event
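The rate = cwnd/RTT rule above is easy to check numerically. The figures below are illustrative (a 10-segment window of 1460-byte segments and a 100 ms RTT are assumptions, not from the slide).

```python
# Sending rate implied by the congestion window: roughly cwnd / RTT.
cwnd = 14_600          # bytes allowed in flight (10 x 1460 B, assumed)
rtt = 0.100            # round-trip time in seconds (assumed)

rate = cwnd / rtt      # bytes per second
print(f"{rate:.0f} B/s ≈ {rate * 8 / 1e6:.2f} Mbit/s")
```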

  19. TCP Slow Start
      - when connection begins, increase rate exponentially until first loss event:
        - initially cwnd = 1 MSS
        - double cwnd every RTT
        - done by incrementing cwnd for every ACK received
      - summary: initial rate is slow but ramps up exponentially fast
      [figure: Hosts A and B exchange one segment, then two segments, then four segments in successive RTTs]
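The per-ACK increment explains the doubling: with cwnd segments in flight, one RTT brings back cwnd ACKs, each adding 1 MSS. A minimal sketch of that mechanism (cwnd in MSS units, losses ignored):

```python
def slow_start(rtts, mss=1):
    """cwnd at the start of each RTT, growing +1 MSS per ACK received."""
    cwnd = mss                  # initially cwnd = 1 MSS
    sizes = [cwnd]
    for _ in range(rtts):
        acks = cwnd // mss      # one ACK per segment delivered this RTT
        cwnd += acks * mss      # +1 MSS per ACK, so cwnd doubles
        sizes.append(cwnd)
    return sizes

print(slow_start(4))   # [1, 2, 4, 8, 16]
```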

  20. Refinement
      Q: when should the exponential increase switch to linear?
      A: when cwnd gets to 1/2 of its value before timeout.
      Implementation:
      - variable ssthresh
      - on loss event, ssthresh is set to 1/2 of cwnd just before the loss event

  21. Refinement: inferring loss
      - after 3 dup ACKs:
        - cwnd is cut in half
        - window then grows linearly
      - but after timeout event:
        - cwnd instead set to 1 MSS
        - window then grows exponentially to a threshold, then grows linearly
      Philosophy:
      - 3 dup ACKs indicate the network is capable of delivering some segments
      - timeout indicates a “more alarming” congestion scenario
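The two reactions can be written side by side. This is a simplified Reno-style sketch (fast-recovery window inflation omitted; the 2·MSS floor on ssthresh is a common convention, not stated on the slide):

```python
def on_loss(cwnd, ssthresh, event, mss=1):
    """Update (cwnd, ssthresh) for the two loss signals, Reno-style."""
    ssthresh = max(cwnd // 2, 2 * mss)   # remember half the window
    if event == "timeout":
        cwnd = mss                       # restart at 1 MSS: slow start again
    elif event == "3dupacks":
        cwnd = ssthresh                  # halve, then grow linearly
    return cwnd, ssthresh

print(on_loss(16, 8, "3dupacks"))  # (8, 8)
print(on_loss(16, 8, "timeout"))   # (1, 8)
```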

  22. Summary: TCP Congestion Control
      [figure: finite state machine with states slow start, congestion avoidance, and fast recovery]
      - initialization: cwnd = 1 MSS, ssthresh = 64 KB, dupACKcount = 0; enter slow start
      - slow start, new ACK: cwnd = cwnd + MSS, dupACKcount = 0, transmit new segment(s) as allowed; when cwnd > ssthresh, enter congestion avoidance
      - congestion avoidance, new ACK: cwnd = cwnd + MSS·(MSS/cwnd), dupACKcount = 0, transmit new segment(s) as allowed
      - duplicate ACK (slow start or congestion avoidance): dupACKcount++
      - dupACKcount == 3: ssthresh = cwnd/2, cwnd = ssthresh + 3, retransmit missing segment; enter fast recovery
      - fast recovery, duplicate ACK: cwnd = cwnd + MSS, transmit new segment(s) as allowed
      - fast recovery, new ACK: cwnd = ssthresh, dupACKcount = 0; enter congestion avoidance
      - timeout (any state): ssthresh = cwnd/2, cwnd = 1 MSS, dupACKcount = 0, retransmit missing segment; enter slow start
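The FSM summarized above can be driven event by event. This is a minimal sketch under simplifying assumptions: cwnd and ssthresh are kept in MSS units as floats, and segment transmission/retransmission actions are omitted so only the state and window arithmetic remain.

```python
class RenoSketch:
    """Event-driven sketch of the slow start / congestion avoidance /
    fast recovery state machine (window bookkeeping only)."""

    def __init__(self):
        self.cwnd, self.ssthresh = 1.0, 64.0   # cwnd = 1 MSS, ssthresh = 64
        self.state, self.dup = "slow_start", 0

    def event(self, ev):
        if ev == "timeout":                    # from any state
            self.ssthresh = self.cwnd / 2
            self.cwnd, self.dup = 1.0, 0
            self.state = "slow_start"
        elif ev == "dupack":
            self.dup += 1
            if self.state == "fast_recovery":
                self.cwnd += 1                 # window inflation per dup ACK
            elif self.dup == 3:
                self.ssthresh = self.cwnd / 2
                self.cwnd = self.ssthresh + 3  # ssthresh + 3 MSS
                self.state = "fast_recovery"
        else:                                  # "new_ack"
            if self.state == "fast_recovery":
                self.cwnd = self.ssthresh      # deflate window
                self.state = "congestion_avoidance"
            elif self.state == "slow_start":
                self.cwnd += 1                 # +1 MSS per ACK
                if self.cwnd > self.ssthresh:
                    self.state = "congestion_avoidance"
            else:
                self.cwnd += 1 / self.cwnd     # +MSS·(MSS/cwnd) per ACK
            self.dup = 0
```

For example, nine new ACKs grow cwnd to 10 in slow start; three dup ACKs then set ssthresh = 5 and cwnd = 8 in fast recovery, and the next new ACK deflates cwnd back to ssthresh in congestion avoidance.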

  23. TCP throughput
      - what’s the average throughput of TCP as a function of window size and RTT?
        - ignore slow start
      - let W be the window size when loss occurs
        - when window is W, throughput is W/RTT
        - just after loss, window drops to W/2, throughput to W/(2·RTT)
        - average throughput: 0.75 W/RTT
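The 0.75 factor is just the midpoint of the sawtooth: throughput oscillates between W/(2·RTT) and W/RTT, averaging to 0.75·W/RTT. A one-line check (W and RTT values below are illustrative):

```python
def avg_throughput(W, rtt):
    """Average TCP throughput over one sawtooth cycle, slow start ignored:
    window swings between W/2 and W, so the mean rate is 0.75 * W / rtt."""
    return 0.75 * W / rtt

print(avg_throughput(W=100_000, rtt=0.25))   # 300000.0 (bytes/sec)
```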

  24. TCP Futures: TCP over “long, fat pipes”
      - example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
      - requires window size W = 83,333 in-flight segments
      - throughput in terms of loss rate L:
        throughput = (1.22 · MSS) / (RTT · √L)
      - ➜ L = 2·10⁻¹⁰. Wow, a very small loss rate!
      - new versions of TCP needed for high speed
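Both numbers on this slide follow from the parameters given: the window from W = rate·RTT / segment size, and the loss rate by solving the throughput formula for L.

```python
from math import sqrt  # noqa: F401  (sqrt form of the formula, solved for L)

mss_bits = 1500 * 8            # 1500-byte segments
rtt = 0.1                      # 100 ms RTT
target = 10e9                  # 10 Gbps target throughput

W = target * rtt / mss_bits    # segments that must be in flight
# throughput = 1.22 * MSS / (RTT * sqrt(L))  =>  L = (1.22*MSS/(RTT*rate))^2
L = (1.22 * mss_bits / (rtt * target)) ** 2

print(round(W))   # ≈ 83333 in-flight segments
print(L)          # ≈ 2e-10 loss rate
```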

  25. TCP Fairness
      fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K
      [figure: TCP connections 1 and 2 share a bottleneck router link of capacity R]

  26. Why is TCP fair?
      two competing sessions:
      - additive increase gives a slope of 1 as throughput increases
      - multiplicative decrease decreases throughput proportionally
      [figure: connection 1 throughput vs. connection 2 throughput; the trajectory alternates between congestion avoidance (additive increase) and loss (window halved), converging toward the equal-bandwidth-share line]
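The convergence argument can be simulated directly. A toy model under the slide's assumptions: both flows add 1 unit per step (slope 1) until their sum hits the bottleneck capacity R, then both halve. Additive increase keeps the gap between the flows constant, while each halving cuts it in half, so the flows converge to equal shares.

```python
def aimd_pair(x1, x2, R, cycles):
    """Two synchronized AIMD flows sharing a bottleneck of capacity R."""
    for _ in range(cycles):
        while x1 + x2 < R:    # additive increase: both climb with slope 1
            x1 += 1
            x2 += 1
        x1 /= 2               # shared loss event: multiplicative decrease
        x2 /= 2
    return x1, x2

print(aimd_pair(1, 70, R=100, cycles=20))   # rates end up nearly equal
```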

  27. Fairness (more)
      Fairness and UDP:
      - multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
      - instead use UDP: pump audio/video at a constant rate, tolerate packet loss
      Fairness and parallel TCP connections:
      - nothing prevents an app from opening parallel connections between 2 hosts; web browsers do this
      - example: link of rate R supporting 9 connections
        - new app asks for 1 TCP, gets rate R/10
        - new app asks for 11 TCPs, gets R/2!
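The parallel-connection example is plain arithmetic: if bandwidth is split equally per connection rather than per application, an app's share is (its connections) / (total connections). Reproducing the slide's numbers:

```python
def share(link_rate, existing, new_conns):
    """App's aggregate rate when a link is split equally per connection."""
    total = existing + new_conns
    return new_conns * link_rate / total

R = 1.0
print(share(R, existing=9, new_conns=1))    # 0.1  -> R/10
print(share(R, existing=9, new_conns=11))   # 0.55 -> roughly R/2 (!)
```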
