Chair of Network Architectures and Services, Department of Informatics, Technical University of Munich

Evaluation of TCP BBR v2 Congestion Control Using Network Emulation

Intermediate talk for the Master's Thesis by Juliane Aulbach
Advised by Benedikt Jaeger and Dominik Scholz
Monday, 25th May 2020
Congestion Control

[Figure: Kleinrock's optimal operating point [3]. RTT and delivery rate plotted against the amount of data inflight: the RTT stays at RTprop and the delivery rate reaches BtlBw once one BDP is inflight (Kleinrock's optimal operating point), while loss-based congestion control operates at an inflight amount of BDP + bottleneck buffer size.]

J. Aulbach — TCP BBR v2
TCP BBR: Bottleneck Bandwidth & RTT
Congestion-based Congestion Control
• BBR: Bottleneck Bandwidth and Round-trip propagation time
• 2016: Google presents BBR [1]
• 2017: BBR becomes part of the Linux kernel (version 4.9 or higher)
• 2018: Google announces BBR v2

How does BBR work?
• The maximum bandwidth of a path is determined by a single bottleneck
• Estimates the bandwidth-delay product (BDP) to create a model of the network
• BDP = Bottleneck Bandwidth (BtlBw) · Propagation delay (RTprop)
• BtlBw = windowed_max(delivered data / elapsed time)
• RTprop = windowed_min(time acknowledged − time sent)
• Continuously updates the model according to changes in the measurements
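The two windowed estimators above can be sketched as a single filter that keeps only samples from a sliding time window. This is a simplified illustration, not the kernel implementation; the window length and sample values are made up.

```python
from collections import deque

class WindowedFilter:
    """Max (or min) of all samples seen within the last `window` seconds.
    Simplified sketch of the filters BBR uses for BtlBw and RTprop."""

    def __init__(self, window, use_max=True):
        self.window = window      # window length in seconds
        self.use_max = use_max
        self.samples = deque()    # (timestamp, value) pairs

    def update(self, now, value):
        # Drop samples that have aged out of the window.
        while self.samples and now - self.samples[0][0] > self.window:
            self.samples.popleft()
        self.samples.append((now, value))
        best = max if self.use_max else min
        return best(v for _, v in self.samples)

# BtlBw: windowed max of delivery-rate samples (Mbit/s)
btlbw = WindowedFilter(window=10.0, use_max=True)
# RTprop: windowed min of RTT samples (ms)
rtprop = WindowedFilter(window=10.0, use_max=False)

bw = btlbw.update(now=1.0, value=9.6)
bw = btlbw.update(now=2.0, value=10.0)   # bw is now 10.0 Mbit/s
rtt = rtprop.update(now=1.0, value=52.0)
rtt = rtprop.update(now=2.0, value=50.0)  # rtt is now 50.0 ms
bdp = bw * 1e6 / 8 * (rtt / 1e3)          # BDP in bytes
```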
TCP BBR: BBR Phases

[Figure: BBR phase state machine (Start → Startup → Drain → ProbeBW ⇄ ProbeRTT) next to a plot of the sending rate (Mbit/s) over 30 s with the phases Startup, Drain, ProbeBW, and ProbeRTT annotated.]
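The phases shown differ mainly in the pacing gain applied to the BtlBw estimate; a minimal sketch using the gain values from the BBR paper (the state handling itself is illustrative, not the kernel state machine):

```python
# Pacing gains per BBR v1 phase (values from the BBR paper).
STARTUP_GAIN = 2.885           # ~2/ln(2): doubles the delivery rate per RTT
DRAIN_GAIN = 1 / STARTUP_GAIN  # drains the queue built up during Startup
PROBE_BW_GAINS = [1.25, 0.75, 1, 1, 1, 1, 1, 1]  # 8-phase probing cycle

def pacing_rate(phase, cycle_index, btlbw):
    """Pacing rate = pacing gain x estimated bottleneck bandwidth."""
    if phase == "STARTUP":
        return STARTUP_GAIN * btlbw
    if phase == "DRAIN":
        return DRAIN_GAIN * btlbw
    if phase == "PROBE_BW":
        return PROBE_BW_GAINS[cycle_index % 8] * btlbw
    # PROBE_RTT: the cwnd is cut to a few packets, so the rate
    # follows from the window rather than from a gain.
    return btlbw
```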
TCP BBR: Bottleneck Bandwidth & RTT
Issues with the initial version of BBR
The following list was presented by Neal Cardwell at IETF Meeting 104 [2]:
• Low throughput for Reno/CUBIC flows sharing a bottleneck with a bulk BBR flow → adapt bandwidth probing
• Loss-agnostic → use packet loss as an explicit signal, with an explicit target loss-rate ceiling
• ECN-agnostic → use DCTCP-style ECN, if available, to help keep queues short
• Problems with ACK aggregation → estimate the recent degree of aggregation to match CUBIC throughput
• Throughput variations during ProbeRTT → less restrictive constraints on the sending rate when entering ProbeRTT
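The explicit loss-rate ceiling mentioned above can be sketched as a per-probe-round check; the 2% threshold matches the value given in the IETF slides [2], while the surrounding logic is a simplification for illustration:

```python
# Sketch of BBR v2's explicit loss-rate ceiling during bandwidth probing.
LOSS_THRESH = 0.02  # target loss-rate ceiling (2%, per the IETF 104 slides)

def probe_should_back_off(packets_lost, packets_delivered):
    """Stop probing for more bandwidth once the loss rate in the
    current probing round exceeds the target ceiling."""
    total = packets_lost + packets_delivered
    if total == 0:
        return False
    return packets_lost / total > LOSS_THRESH
```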
Parameters & Measurements: Mininet Setup

[Figure: Mininet setup with n sending hosts (Sender 1 … Sender n) and n receiving hosts (Receiver 1 … Receiver n) connected via Switch 1, Switch 2, and Switch 3, with the bottleneck link between the switches [3].]
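For such a dumbbell topology, the bottleneck queue is typically configured in multiples of the BDP. A minimal sketch of that parameter computation; the 10 Mbit/s bandwidth and 40 ms RTT are assumed example values, not figures from the talk:

```python
def bdp_bytes(bandwidth_mbit, rtt_ms):
    """Bandwidth-delay product of the bottleneck link in bytes."""
    return bandwidth_mbit * 1e6 / 8 * (rtt_ms / 1e3)

def queue_packets(bandwidth_mbit, rtt_ms, bdp_multiple, mtu=1500):
    """Bottleneck queue length in packets for a buffer of
    bdp_multiple x BDP, at least one packet."""
    bdp = bdp_bytes(bandwidth_mbit, rtt_ms)
    return max(1, round(bdp_multiple * bdp / mtu))

# e.g. a 1-BDP buffer at 10 Mbit/s and 40 ms RTT:
# bdp_bytes(10, 40) -> 50000.0 bytes, i.e. 33 full-size packets
```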
Results: RTT Unfairness

[Figure: Sending rate (Mbit/s) over 50 s for two competing flows with 10 ms and 50 ms RTT, together with the fair-share line, shown for BBR (top) and BBR v2 (bottom).]
Results: Buffer Size

[Figure: Average bandwidth share [%] against the buffer size in multiples of the BDP (log scale, 0.1 to 10) for three pairings: CUBIC vs. BBR, CUBIC vs. BBR v2, and BBR vs. BBR v2.]
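The per-flow bandwidth share plotted here can be derived from measured throughputs; a minimal sketch with made-up numbers:

```python
def bandwidth_share(throughputs):
    """Each flow's average share of the total achieved bandwidth,
    in percent, from per-flow throughput averages."""
    total = sum(throughputs)
    return [100.0 * t / total for t in throughputs]

# e.g. CUBIC averaging 2 Mbit/s against BBR averaging 8 Mbit/s:
# bandwidth_share([2.0, 8.0]) -> [20.0, 80.0]
```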
Conclusion

Challenges
• Mininet is not working properly on current Linux kernels
• There is no official draft or paper for BBR v2

Status (✓ done, ◦ in progress)
• Find out how BBR and BBR v2 work ◦
• Deploy Linux with BBR v2 ✓
• Reproduce results from [3] as ground truth ✓
• Repeat all tests with BBR v2 ✓
• Compare test results from BBR and BBR v2 ✓
• Evaluate improvements and deteriorations ◦
• Extend measurements ◦
Backup Slides: BBR Phases

[Figure: Sending rate (Mbit/s) over 30 s with the phases Startup, Drain, ProbeBW, and ProbeRTT annotated, for BBR (top) and BBR v2 (bottom), next to the phase state machine Start → Startup → Drain → ProbeBW ⇄ ProbeRTT.]
Backup Slides: Fairness, Round-trip Time

[Figure: Fairness index F_tp (0.5 to 1) and average bandwidth share [%] against the RTT (log scale, 10 ms to 1000 ms) for the pairings CUBIC vs. BBR, CUBIC vs. BBR v2, and BBR vs. BBR v2.]
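The fairness index F_tp in these plots ranges from 0.5 to 1 for two flows, which matches Jain's fairness index, the standard throughput-fairness metric; a minimal sketch (assuming, not confirmed by the slides, that Jain's index is the metric used):

```python
def jains_index(throughputs):
    """Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2).
    1.0 means a perfectly fair split; 1/n means one flow takes all."""
    n = len(throughputs)
    s = sum(throughputs)
    sq = sum(x * x for x in throughputs)
    return s * s / (n * sq)

# jains_index([5.0, 5.0]) -> 1.0  (fair split between two flows)
# jains_index([9.0, 3.0]) -> 0.8  (one flow dominates)
```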
Backup Slides: Performance

[Figure: Left, "Fully use bandwidth, despite high loss": throughput of CUBIC and BBR against the loss rate (%, log scale, 10⁻³ to 10²) at 10 Mbit/s bottleneck bandwidth and 40 ms RTT. Right: latency (ms) of CUBIC and BBR against the buffer size (kB, up to 10,000) at 100 Mbit/s bottleneck bandwidth and 100 ms RTT.]
Bibliography
[1] N. Cardwell and Y. Cheng. Making Linux TCP Fast: 04_Making_Linux_TCP_Fast_netdev_1.2_final, 2016.
[2] N. Cardwell, Y. Cheng, S. H. Yeganeh, P. Jha, M. Mathis, and V. Jacobson. BBR v2 - A Model-based Congestion Control: slides-104-iccrg-an-update-on-bbr-00, 2019.
[3] B. Jaeger, D. Scholz, D. Raumer, F. Geyer, and G. Carle. Reproducible Measurements of TCP BBR Congestion Control. Computer Communications, 144:31-43, 2019.