Performance and Fairness Evaluation of IW10 and Other Fast Startup Schemes

Michael Scharf <michael.scharf@googlemail.com>
This work was performed at the Institute of Communication Networks and Computer Engineering (IKR) at the University of Stuttgart.

March 2011
Disclaimer

Individual contribution
• Report of old work from the years 2008 and 2009
  – The original intention was to show that a network-supported scheme like Quick-Start is indeed required
  – IW10 was considered as an alternative (called Initial-Start)
  – Quite surprisingly, IW10 outperformed all other variants
• First, preliminary results: M. Scharf. Quick-Start, Jump-Start, and other fast startup approaches: Implementation issues and performance. Presentation at 73rd IETF Meeting, ICCRG, Nov. 2008
• Full reference for this work: M. Scharf. Fast Startup Internet Congestion Control for Broadband Interactive Applications. PhD thesis, University of Stuttgart, submitted Nov. 2009
Fast startup congestion control
Scope of the study

Flow startup approaches:
• End-to-end congestion control (implicit feedback)
  – Standard Slow-Start: Reno, CUBIC, ...
  – Enhanced Slow-Start: Limited Slow-Start, Paced Start, Swift-Start, Hybrid Slow-Start, ...
  – Optimistic fast startup
    - Bandwidth estimation: SST, ...
    - No burstiness control (larger window): Jump-Start, Initial-Start, Mega-Start, ...
    - Using rate pacing
• Network-supported congestion control (explicit feedback)
  – Network assistance (sporadic feedback): Quick-Start
  – Network control (frequent feedback): eXplicit Control Protocol (XCP), Rate Control Protocol (RCP), ...

Schemes considered in this study:
• TCP's standard Slow-Start with CUBIC (SS)
• Initial congestion window of 10 MSS, called Initial-Start (IS)
• Jump-Start of M. Allman et al., slightly modified to reduce aggressiveness (JS)
• Quick-Start TCP extension according to RFC 4782 (QS)
• Rate Control Protocol (RCP)
• ... and others
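As a back-of-the-envelope illustration (my own sketch, not part of the original study), the effect of a larger initial window can be seen by tabulating how the congestion window evolves per RTT under plain slow-start: the whole doubling curve starts higher, so Initial-Start reaches any given rate several round trips earlier.

```python
# Minimal sketch (assumption: classic lossless slow-start, cwnd doubles per RTT).
def slow_start_cwnd(iw: int, rtts: int) -> list[int]:
    """Congestion window (in MSS) at the start of each of `rtts` round trips."""
    out, cwnd = [], iw
    for _ in range(rtts):
        out.append(cwnd)
        cwnd *= 2          # exponential growth, no losses assumed
    return out

print(slow_start_cwnd(3, 5))    # -> [3, 6, 12, 24, 48]
print(slow_start_cwnd(10, 5))   # -> [10, 20, 40, 80, 160]
```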
Fast startup congestion control
Evaluation methodology

• Simulations
  – Simulation with Linux code using the NSC framework
  – Own Linux patches for all TCP extensions, and an own tool for RCP
• Considered scenarios
  – Subset of the TCP evaluation suite
  – Dumbbell topology with 450 end systems and 9 different RTTs
  – Bottleneck typically 10 Mbit/s, 50 packets buffer, drop tail
  – Replay of measured Internet traces in a-b-t format, as recommended in the TCP evaluation suite
• Implementations verified by testbed measurements

[Figure: dumbbell topology: groups of end systems attached via 1 Gbit/s access links with group-specific delay to a central 10 Mbit/s bottleneck with limited buffer size; RTTs of the 9 connection groups: 4, 28, 54, 74, 98, 124, 150, 174, and 200 ms]
Selected performance results
Possible speedup of the different variants

Setup:
• Simulation with Linux 2.6.18
• Dumbbell topology with 10 Mbit/s bottleneck and 9 different RTTs
• 450 clients and 450 servers
• Default TCP configuration, except for larger buffer sizes (8 MiB)
• Replayed traces in a-b-t format
• Mean downlink load 35%
• Metric: epoch duration

[Figure: an (a,b,t) trace with three transactions: Request 1 (329 B) / Response 1 (403 B) with epoch duration 0.12 s; Request 2 (403 B) / Response 2 (25,821 B) with epoch duration 3.12 s; Request 3 (356 B) / Response 3 (1,196 B)]

Performance metric: response time of a-b-t transfers ("epoch duration")
• A larger initial window speeds up mid-sized transfers
• The overall benefit is rather small: many short transfers, many small RTTs
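For illustration (the function name and timestamps below are my own assumptions, not output of the replay tool), the epoch duration metric is simply the time from sending a request until its response has been fully received:

```python
# Illustrative sketch: computing epoch durations from logged timestamps of an
# a-b-t replay. Each epoch is (t_request_sent, t_response_done).
def epoch_durations(epochs):
    """Epoch duration = time from request transmission to full response receipt."""
    return [done - sent for sent, done in epochs]

# E.g. the 0.12 s and 3.12 s epochs shown on the slide:
trace = [(0.00, 0.12), (0.12, 3.24)]
print([round(d, 2) for d in epoch_durations(trace)])   # -> [0.12, 3.12]
```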
Selected performance results
Insight into the workload

(Same setup as before: Linux 2.6.18, dumbbell topology with 10 Mbit/s bottleneck and 9 different RTTs, 450 clients and 450 servers, replayed a-b-t traces, mean downlink load 35%)

• Most TCP connections in the workload traces are rather short
• Only transfers larger than 10 KB can benefit
• The average improvement is less than 1 s even for larger transfers
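The 10 KB threshold can be made plausible with a small sketch (my own simplification, assuming a 1460-byte MSS and lossless slow-start): a response that fits into the initial window completes in a single round trip regardless of the window size, so only responses larger than roughly IW × MSS can save any rounds.

```python
# Rough illustration (assumptions: MSS = 1460 B, lossless slow-start, doubling per RTT).
MSS = 1460

def rounds_to_send(size_bytes: int, iw: int) -> int:
    """Round trips needed to send `size_bytes` starting from initial window `iw`."""
    segs = -(-size_bytes // MSS)        # ceil(size / MSS)
    cwnd, sent, rounds = iw, 0, 0
    while sent < segs:
        sent += cwnd
        cwnd *= 2
        rounds += 1
    return rounds

# The 25,821 B response from the trace example, and a tiny and a large transfer:
for size in (1_200, 25_821, 200_000):
    print(size, rounds_to_send(size, 3), rounds_to_send(size, 10))
```

With a 100 ms RTT this amounts to one or two round trips saved, i.e. on the order of 0.1-0.2 s per mid-sized transfer, consistent with the "less than 1 s" observation above.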
Selected performance results
Trade-off between speedup and packet loss

(Same setup as before, but variable load up to ca. 40%, due to a tool limitation of ca. 1000 stacks)

• IW10 increases the loss probability by 0.5%
• The other considered schemes are not faster, but have a larger loss rate
• Result: IW10 outperforms the other schemes
Selected performance results
Sensitivity to bottleneck buffer size

(Same setup as before: 10 Mbit/s bottleneck, 9 different RTTs, replayed a-b-t traces, mean downlink load 35%)

• Obviously, small buffers (< 50 packets) are a problem
• Fast startups only moderately increase the packet loss rate if reasonably sized buffers (50-100 packets, or AQM) are present
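The buffer-size effect can be illustrated with a toy drop-tail model (my own simplification, not the simulation from the study): a fast-startup flow injects its initial window back to back, and any packets exceeding the free buffer space at the bottleneck are dropped.

```python
# Toy drop-tail sketch (assumption: the burst arrives back to back, faster than
# the bottleneck drains, so the queue cannot empty during the burst).
def burst_drops(burst_pkts: int, buffer_pkts: int) -> int:
    """Packets lost when a back-to-back burst hits a drop-tail queue."""
    queue, drops = 0, 0
    for _ in range(burst_pkts):
        if queue < buffer_pkts:
            queue += 1       # packet enqueued
        else:
            drops += 1       # buffer full: drop tail
    return drops

print(burst_drops(10, 5))    # tiny buffer: half of an IW10 burst is lost
print(burst_drops(10, 50))   # a 50-packet buffer absorbs the whole burst
```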
Selected performance results
Fairness to unmodified stacks

Setup:
• Simulation with Linux 2.6.18
• Dumbbell topology with 10 Mbit/s bottleneck and 9 different RTTs
• 450 clients and 450 servers (50% CUBIC, 50% fast startup)
• Default TCP configuration, except for larger buffer sizes (8 MiB)
• Synthetic workload model for HTTP/1.0, response sizes from a truncated Pareto distribution with mean 14 KB, shape parameter 1.1, truncation at 10 MB

• Scenario: 50% of the stacks use fast startup, 50% are unchanged (CUBIC)
• IW10 is rather fair and hardly impacts other flows
• Result: IW10 outperforms the other schemes
Conclusion

Results
• Moderate benefit of fast startups for larger transfers
• IW10 works rather well and is quite fair
• More sophisticated schemes tend to be worse
• Network support such as Quick-Start can overcome some limitations, but it has problems of its own

Recommendations for further work
• Study the use of rate pacing more extensively, even if the results suggest that it may not be needed for 10 MSS
• Rethink error recovery algorithms after a fast startup, since there are many degrees of freedom there, too
Selected references

Evaluation results of IW10 (amongst others)
• M. Scharf. Comparison of end-to-end and network-supported fast startup congestion control schemes. Computer Networks, 2011.
• M. Scharf. Fast Startup Internet Congestion Control for Broadband Interactive Applications. PhD thesis, University of Stuttgart, submitted Nov. 2009.
• M. Scharf. Performance evaluation of fast startup congestion control schemes. Proc. IFIP Networking 2009, LNCS 5550, Springer-Verlag, pp. 716-727, 2009.
• M. Scharf. Quick-Start, Jump-Start, and other fast startup approaches: Implementation issues and performance. Presentation at 73rd IETF Meeting, ICCRG, Nov. 2008.

Studies of network-supported fast startup congestion control schemes
• S. Hauger, M. Scharf, J. Kögel, and C. Suriyajan. Evaluation of router implementations of explicit congestion control schemes. Journal of Communication, vol. 5, no. 3, pp. 197-204, 2010.
• M. Scharf, M. Eissele, C. Mueller, and T. Ertl. Speeding up the 3D Web: A case for fast startup congestion control. Proc. PFLDNeT, 2009.
• M. Proebster, M. Scharf, and S. Hauger. Performance comparison of router assisted congestion control protocols: XCP vs. RCP. Proc. 2nd International Workshop on the Evaluation of Quality of Service through Simulation in the Future Internet, 2009.
• M. Scharf and H. Strotbek. Performance evaluation of Quick-Start TCP with a Linux kernel implementation. Proc. IFIP Networking 2008, LNCS 4982, Springer-Verlag, pp. 703-714, 2008.
• S. Hauger, M. Scharf, J. Kögel, and C. Suriyajan. Quick-Start and XCP on a network processor: Implementation issues and performance evaluation. Proc. IEEE HPSR 2008, 2008.
• M. Scharf, S. Hauger, and J. Kögel. Quick-Start TCP: From theory to practice. Proc. PFLDnet, 2008.
• M. Scharf. Performance analysis of the Quick-Start TCP extension. Proc. IEEE Broadnets, 2007.