An experimental study of the learnability of congestion control



  1. An experimental study of the learnability of congestion control. Anirudh Sivaraman, Keith Winstein, Pratiksha Thaker, Hari Balakrishnan (MIT CSAIL). http://web.mit.edu/remy/learnability. August 31, 2014.

  2. This talk
     ◮ How easy is it to learn a network protocol to achieve a desired goal, despite a mismatched set of assumptions?
     ◮ cf. Learning: “Knowledge acquisition without explicit programming” (Valiant 1984)

  3. Preview of key results
     ◮ Can tolerate mismatched link-rate assumptions
     ◮ Need precision about the number of senders
     ◮ TCP compatibility is a double-edged sword
     ◮ Can tolerate mismatch in the number of bottlenecks

  4. Experimental method
     [Diagram: simulated networks, each parameterized by a link rate (Mbps) and a delay (ms)]

  5. Experimental method
     ◮ Training networks, each parameterized by a link rate (Mbps) and a delay (ms)
     ◮ Objective functions: log(throughput/delay), and average flow completion time
     ◮ The learner, Remy (SIGCOMM 2013), produces a congestion-control algorithm: a RemyCC
     ◮ The RemyCC is then tested within ns-2 on a set of testing networks (Mbps, ms)
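
A minimal sketch (not the authors' code) of the per-flow objective named on this slide, log(throughput/delay), averaged over flows; the function names and the averaging step are illustrative assumptions:

    import math

    def flow_objective(throughput_mbps: float, delay_ms: float) -> float:
        """log(throughput/delay) for a single flow; higher is better."""
        return math.log(throughput_mbps / delay_ms)

    def network_objective(flows: list[tuple[float, float]]) -> float:
        """Average the per-flow objective over (throughput, delay) pairs."""
        return sum(flow_objective(t, d) for t, d in flows) / len(flows)

    # Example: two flows sharing a 10 Mbps link with ~100 ms of queueing delay.
    print(network_objective([(5.0, 100.0), (5.0, 100.0)]))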

  6. Remy compared with an ideal protocol
     [Figure: throughput (Mbps) vs. queueing delay (ms) for Ideal, RemyCC, Cubic-over-sfqCoDel, and Cubic]
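
For concreteness, a small sketch of what the "Ideal" benchmark is assumed to mean on this plot: each sender gets an equal share of the link with no standing queue. The function below is an illustration, not the paper's exact definition:

    def ideal_operating_point(link_rate_mbps: float, num_senders: int):
        """(per-sender throughput in Mbps, queueing delay in ms) for the assumed ideal benchmark."""
        return link_rate_mbps / num_senders, 0.0  # fair share, empty queue

    print(ideal_operating_point(32.0, 2))  # -> (16.0, 0.0)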

  7. Learning network protocols despite mismatched assumptions
     ◮ Is there a tradeoff between operating range and generality in link rates?
     ◮ Is there a tradeoff between performance and operating range in link rates?

  8. Performance and link-rate operating range
     [Figure: normalized objective function vs. link rate (1-1000 Mbps) for RemyCCs trained on 2x, 10x, 100x, and 1000x link-rate ranges, compared with Ideal, Cubic, and Cubic-over-sfqCoDel]
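
A hedged sketch of how the training link-rate ranges on this figure (2x, 10x, 100x, 1000x) might be sampled; the log-uniform draw and the 32 Mbps center are assumptions for illustration, not the authors' exact setup:

    import math, random

    def sample_link_rate(center_mbps: float, range_factor: float) -> float:
        """Draw a link rate log-uniformly from [center/sqrt(r), center*sqrt(r)]."""
        lo = math.log(center_mbps / math.sqrt(range_factor))
        hi = math.log(center_mbps * math.sqrt(range_factor))
        return math.exp(random.uniform(lo, hi))

    for r in (2, 10, 100, 1000):
        rates = [sample_link_rate(32.0, r) for _ in range(5)]
        print(f"{r}x range:", [round(x, 1) for x in rates])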

  9. Performance and link-rate operating range
     ◮ Very clear generality vs. operating range tradeoff
     ◮ Only weak evidence of a performance vs. operating range tradeoff
     ◮ Possible to design a forwards-compatible protocol handling a wide range of link rates

  10. Learning network protocols despite mismatched assumptions
      Can we learn a protocol that performs well both when there are few senders and when there are many senders?

  11. Imperfections in the number of senders
      [Figure: normalized objective function vs. number of senders (0-100) for RemyCCs trained on sender-count ranges 1-2, 1-10, 1-50, and 1-100, compared with Ideal, Cubic, and Cubic-over-sfqCoDel]
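
A hedged sketch of how the sender-count training ranges on this figure (1-2, 1-10, 1-50, 1-100) might be instantiated; the uniform draw and the evaluation grid are illustrative assumptions:

    import random

    TRAINING_RANGES = {"1-2": (1, 2), "1-10": (1, 10), "1-50": (1, 50), "1-100": (1, 100)}

    def sample_training_senders(variant: str) -> int:
        """Pick a sender count for one training scenario of the given variant."""
        lo, hi = TRAINING_RANGES[variant]
        return random.randint(lo, hi)

    evaluation_grid = [1, 5, 10, 20, 50, 100]  # sender counts used at test time
    print(sample_training_senders("1-10"), evaluation_grid)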

  12. Imperfections in the number of senders
      Tradeoff between performance with few senders and performance with many senders

  13. Learning network protocols despite mismatched assumptions
      What are the costs and benefits of learning a new protocol that shares fairly with a legacy sender?

  14. Imperfect assumptions about the nature of other senders
      ◮ TCP-Aware RemyCC contends with:
        ◮ TCP-Aware RemyCC half the time
        ◮ TCP NewReno half the time
      ◮ TCP-Naive RemyCC contends with:
        ◮ TCP-Naive RemyCC all the time
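
A hedged sketch of the two traffic mixes described on this slide; the coin flip and the labels are illustrative, and the real Remy training procedure is more involved:

    import random

    def cross_traffic(variant: str) -> str:
        """Return the type of competing sender for one scenario."""
        if variant == "tcp-aware":
            # TCP-Aware RemyCC: NewReno half the time, another TCP-Aware RemyCC otherwise.
            return "NewReno" if random.random() < 0.5 else "RemyCC (TCP-Aware)"
        # TCP-Naive RemyCC: always another TCP-Naive RemyCC.
        return "RemyCC (TCP-Naive)"

    print([cross_traffic("tcp-aware") for _ in range(4)])
    print([cross_traffic("tcp-naive") for _ in range(2)])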

  15. RemyCC competing against itself
      [Figure: throughput (Mbps) vs. queueing delay (ms); labeled points include RemyCC [TCP-naive] and NewReno, with an annotation marking the cost of TCP-awareness and a "Better" arrow toward higher throughput and lower delay]
