TCP Pacing in Data Center Networks

TCP Pacing in Data Center Networks, by Monia Ghobadi and Yashar Ganjali (PowerPoint PPT presentation)

TCP Pacing in Data Center Networks
Monia Ghobadi, Yashar Ganjali
Department of Computer Science, University of Toronto
{monia, yganjali}@cs.toronto.edu

TCP, Oh TCP! TCP congestion control
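The mechanism the rest of the deck evaluates is TCP pacing: instead of letting ack clocking release the whole congestion window as a back-to-back burst, the sender spaces segments evenly across one RTT. A minimal sketch of that idea (the asyncio sender loop, the MSS constant, and the function name are illustrative assumptions, not the authors' implementation):

import asyncio

MSS = 1460  # bytes per segment (illustrative)

async def paced_send(writer: asyncio.StreamWriter, window: bytes, rtt_s: float) -> None:
    """Send one congestion window's worth of data spread evenly over an RTT
    instead of back-to-back (a sketch of per-flow packet pacing)."""
    segments = [window[i:i + MSS] for i in range(0, len(window), MSS)]
    gap = rtt_s / max(len(segments), 1)  # inter-packet gap = RTT / (cwnd / MSS)
    for seg in segments:
        writer.write(seg)
        await writer.drain()
        await asyncio.sleep(gap)  # pace instead of bursting

A non-paced sender would simply issue the whole window at once and rely on ack clocking to space the following windows.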


  1. Base-Case Experiment: one flow vs. two flows, 64 KB of buffering; utilization, drops, and flow completion time. [Figures: bottleneck link utilization (Mbps) over time, paced vs. non-paced, in the no-congestion and congestion regions, with a 38% difference annotated under congestion; CDFs of flow completion time (sec), paced vs. non-paced, annotated with 1 RTT and 2 RTT offsets.]
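The FCT panels above are empirical CDFs; a small sketch of how such a comparison can be summarized from per-flow completion times (the function name and the placeholder samples are hypothetical, not measured data):

import numpy as np

def fct_summary(fct_seconds):
    """Return median, 99th percentile, and the empirical CDF (x, y) of a
    list of flow completion times, for paced vs. non-paced comparisons."""
    fcts = np.sort(np.asarray(fct_seconds, dtype=float))
    cdf_y = np.arange(1, len(fcts) + 1) / len(fcts)
    return {"median": float(np.median(fcts)),
            "p99": float(np.percentile(fcts, 99)),
            "cdf": (fcts, cdf_y)}

# Usage (placeholder values only):
paced = fct_summary([0.04, 0.05, 0.05, 0.06])
non_paced = fct_summary([0.06, 0.08, 0.12, 0.20])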

  2-7. Multiple flows: link utilization, drops, and latency. Buffer size 1.7% of BDP, varying number of flows. [Figures: bottleneck link utilization (Mbps), bottleneck link drop rate (%), average FCT (sec), and 99th-percentile FCT (sec) vs. number of flows (0-100), paced vs. non-paced, with N* marking the point of inflection (PoI).] Once the number of concurrent connections grows beyond a certain point, the benefits of pacing diminish.
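N*, the point of inflection marked in these plots, is where the paced and non-paced curves converge. One hedged way to read it off measured data (an empirical proxy with an illustrative tolerance, not the paper's definition of N*):

def estimate_nstar(num_flows, util_paced_mbps, util_nonpaced_mbps, tol_mbps=10.0):
    """Return the smallest flow count at which non-paced utilization comes
    within `tol_mbps` of paced utilization, i.e. where pacing's benefit
    has effectively vanished (a sketch, not the paper's N* derivation)."""
    for n, paced, nonpaced in zip(num_flows, util_paced_mbps, util_nonpaced_mbps):
        if paced - nonpaced <= tol_mbps:
            return n
    return None  # curves never converge over the measured range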

  8-10. Multiple flows: link utilization, drops, and latency. Buffer size 3.4% of BDP, varying number of flows. [Figures: the same four panels vs. number of flows (0-100), paced vs. non-paced, with N* marked.] The conflicting prior results sit on opposite sides of N*: Aggarwal et al. ("Don't pace!") used 50 flows with a BDP of 1,250 packets and a 312-packet buffer, where N* = 8 flows, while Kulik et al. ("Pace!") used a single flow with a BDP of 91 packets and a 10-packet buffer, where N* = 9 flows.

  11-16. N* vs. buffer size. [Figures: bottleneck link utilization (Mbps), bottleneck link drop rate (%), average FCT (sec), and 99th-percentile FCT (sec) vs. buffer size (50-250 KB), paced vs. non-paced.]

  17-19. Clustering Effect: the probability of packets from a flow being followed by packets from other flows. Non-paced: packets of each flow are clustered together. Paced: packets of different flows are multiplexed.
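The clustering effect can be quantified directly from a packet trace at the bottleneck. A sketch of that measurement (the trace format and the function name are assumptions):

from collections import defaultdict

def interleaving_probability(flow_ids):
    """For a bottleneck trace given as an ordered list of per-packet flow
    IDs, estimate for each flow the probability that one of its packets is
    immediately followed by a packet of a *different* flow.  Low values
    mean the flow's packets are clustered back-to-back; high values mean
    flows are well multiplexed."""
    total = defaultdict(int)     # packets of flow f that have a successor
    switched = defaultdict(int)  # ...whose successor belongs to another flow
    for cur, nxt in zip(flow_ids, flow_ids[1:]):
        total[cur] += 1
        if nxt != cur:
            switched[cur] += 1
    return {f: switched[f] / total[f] for f in total}

Under this metric, non-paced flows show low interleaving probability (clustering), while paced flows show high probability (multiplexing).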

  20-23. Drop Synchronization: number of flows affected by each drop event, counted with a NetFPGA router. [Figures: CDFs of the number of flows affected per drop event, paced vs. non-paced, for N = 48, 96, and 384 flows.]
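Per-event synchronization can be derived from a timestamped drop log like the one such a router can export. A sketch of grouping drops into events and counting affected flows (the one-millisecond event window and the log format are assumptions, not the NetFPGA's actual counting logic):

def flows_per_drop_event(drop_log, event_gap_s=0.001):
    """Group packet drops into drop events (drops separated by less than
    `event_gap_s`) and return, per event, how many distinct flows lost at
    least one packet.  `drop_log` is an iterable of (timestamp_s, flow_id)."""
    events = []
    current_flows = set()
    last_t = None
    for t, flow in sorted(drop_log):
        if last_t is not None and t - last_t > event_gap_s:
            events.append(len(current_flows))
            current_flows = set()
        current_flows.add(flow)
        last_t = t
    if current_flows:
        events.append(len(current_flows))
    return events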

  24-29. Future Trends for Pacing: per-egress pacing. [Figures: bottleneck link utilization (Mbps), bottleneck link drop rate (%), average RCT (sec), and 99th-percentile RCT (sec) vs. number of flows (6-192) for non-paced, per-flow paced, and per-host + per-flow paced senders, with N* marking the point of inflection (PoI).]
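Per-egress (per-host) pacing adds a second, host-wide smoothing stage on top of per-flow pacing, so the aggregate leaving the NIC is also smooth. A sketch of one possible shape for it, using a shared token bucket (the class name, rates, and burst size are illustrative assumptions, not the deck's design):

import time

class HostPacer:
    """Host-level token bucket shared by all flows on one egress; each
    flow still runs its own per-flow pacer before calling wait_to_send()."""

    def __init__(self, rate_bps: float, burst_bytes: int = 3000):
        self.rate_Bps = rate_bps / 8.0     # bucket refill rate, bytes/sec
        self.burst = burst_bytes           # bucket depth, bytes
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def wait_to_send(self, pkt_bytes: int) -> None:
        """Block until the host-wide bucket can cover this packet."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate_Bps)
            self.last = now
            if self.tokens >= pkt_bytes:
                self.tokens -= pkt_bytes
                return
            time.sleep((pkt_bytes - self.tokens) / self.rate_Bps)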

  30. Conclusions and Future Work
  ๏ Re-examine TCP pacing's effectiveness: demonstrate when TCP pacing brings benefits in such environments.
  ๏ Future work: inter-flow burstiness, burst-pacing vs. packet-pacing, per-egress pacing.

  31. Renewed Interest

  32. Traffic Burstiness Survey
  ๏ ‘Bursty’ is a word with no agreed meaning. How do you define bursty traffic?
  ๏ If you are involved with a data center, is your data center traffic bursty?
  ๏ If yes, do you think it would be useful to suppress the burstiness in your traffic?
  ๏ If no, are you already suppressing the burstiness? How? Do you anticipate the traffic becoming burstier in the future?
  monia@cs.toronto.edu


  34. Base-Case Experiment: one RPC vs. two RPCs, 64 KB of buffering; latency.

  35. Multiple flows: link utilization, drops, and latency. Buffer size 6% of BDP, varying number of flows.

  36-39. Base-Case Experiment: one RPC vs. two RPCs, 64 KB of buffering; latency and queue occupancy.

  40-43. Functional test.

  44. RPC vs. Streaming: traffic that is paced only by ack clocking is bursty. [Topology: 10GE into a 1GE bottleneck into 10GE; RTT = 10 ms.]
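This topology illustrates why ack clocking alone is not enough: a window released back-to-back at 10 Gb/s drains at only 1 Gb/s, so roughly 90% of it queues at the bottleneck. A quick worked calculation (the 64 KB window is only an example):

def burst_queue_buildup(window_bytes: float, in_bps: float, out_bps: float) -> float:
    """Bytes left queued at a bottleneck when `window_bytes` arrive
    back-to-back at `in_bps` while draining at `out_bps`."""
    burst_duration_s = window_bytes * 8 / in_bps      # how long the burst lasts
    drained_bytes = out_bps / 8 * burst_duration_s    # what the link drains meanwhile
    return max(window_bytes - drained_bytes, 0.0)

# Example: a 64 KB window from a 10 Gb/s sender into a 1 Gb/s bottleneck
# leaves about 59 KB (90% of the window) queued; a paced sender spreads
# the same window over the 10 ms RTT and builds almost no queue.
print(burst_queue_buildup(64 * 1024, 10e9, 1e9))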

  45. Zooming in on the paced flow.

  46. Multiple flows: link utilization, drops, and latency. Buffer size 6.8% of BDP, varying number of flows.
