  1. On Flow Concurrency in the Internet and its Implications for Capacity Sharing
     Brian Trammell and Dominik Schatzmann
     Communications Systems Group, ETH Zürich

  2. What's a measurement guy doing at a capacity-sharing workshop?
     - Concern about flow-state requirements in audit
       - Capacity-sharing verification at network interconnection points is necessary for algorithms requiring trust between policing and audit
       - expensive → large links, higher concurrency
     - Current capacity-sharing approaches don't need this property
       - independent edge-policing architecture
       - and as long as they don't, they remain scalable → small links, lower concurrency
     - But it's always nice to have data to back up these assumptions
     - However...
       - increasing use of flow-state-keeping devices in the network
       - proliferation of protocols requiring middleboxes (CGN scares me.)
       - Is flow state a new resource requiring fair allocation?

     10 Dec 2012, Flow Concurrency - CSWS - Nice

  3. Flow concurrency distribution characteristics

  4. Total Flow Concurrency

     network              median   95th    peak
     all (/11)            148k     322k    436k
     university (5x /18)  3.2k     4.3k    22.9k

     - Flow concurrency is highly dependent on network type
     - In general, less variable at higher levels of aggregation
     - Rule of thumb: ~20k peak per /16, adjusted for host type / popularity

  5. Flow Concurrency per Active Host

     Host type        median   95th    peak
     clients (13.7k)  5.8      11.8    53.8
     servers (13.4k)  10.3     13.4    23.0
     CDN (1.25k)      16.5     43.3    49.8
     all (2.4M)       5.2      7.7     9.7

     - Flow concurrency per active host is much more stable per host type
       - (with some noise: 53.8 peak → outbound scan activity; 5th-percentile client flow concurrency is 3.8 → web client behavior)
     - Client concurrency is a function of behavior
     - Server concurrency is a function of popularity
     - Large-scale rule of thumb: 10 peak per active host
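     Per-host peak figures like those above can be computed from flow records with a simple sweep over flow start/end events. A minimal sketch, assuming flows arrive as (host, start, end) tuples; the record layout and sample data are illustrative, not the talk's dataset:

     ```python
     from collections import defaultdict

     def peak_concurrency_per_host(flows):
         """Peak number of simultaneously active flows per host.

         `flows` is an iterable of (host, start, end) tuples, with
         times in seconds (illustrative layout, not the talk's data).
         """
         events = defaultdict(list)
         for host, start, end in flows:
             events[host].append((start, +1))   # flow opens
             events[host].append((end, -1))     # flow closes
         peaks = {}
         for host, evs in events.items():
             # Sort closes before opens at the same instant, so a flow
             # ending exactly when another begins does not overlap it.
             evs.sort(key=lambda e: (e[0], e[1]))
             active = peak = 0
             for _, delta in evs:
                 active += delta
                 peak = max(peak, active)
             peaks[host] = peak
         return peaks

     flows = [("10.0.0.1", 0.0, 2.0), ("10.0.0.1", 1.0, 3.0),
              ("10.0.0.1", 2.0, 4.0), ("10.0.0.2", 0.0, 1.0)]
     print(peak_concurrency_per_host(flows))  # {'10.0.0.1': 2, '10.0.0.2': 1}
     ```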

  6. Dominance of short flows

     [Figure: cumulative distribution of flow count vs. flow duration (0-60 s)]

     - Flow concurrency is an issue because most flows are short.
     - Median flow is ~250 ms long
     - 23% single-packet flows
     - Very short flows account for 8% of packets and 5% of bytes...
       ...but 52% of flows.
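     The flow/packet/byte shares claimed above can be derived from the same flow records. A sketch, assuming (duration, packets, bytes) tuples; the threshold and sample values are illustrative:

     ```python
     def short_flow_shares(flows, threshold=0.25):
         """Fractions of total flows, packets, and bytes contributed
         by flows shorter than `threshold` seconds. Record layout and
         sample data are illustrative.
         """
         n = p = b = 0
         sn = sp = sb = 0
         for dur, pkts, byts in flows:
             n += 1
             p += pkts
             b += byts
             if dur < threshold:
                 sn += 1
                 sp += pkts
                 sb += byts
         return sn / n, sp / p, sb / b

     # Two very short flows dominate the flow count but not packets/bytes.
     flows = [(0.1, 1, 60), (0.2, 1, 60), (5.0, 100, 150000), (30.0, 400, 500000)]
     fn, fp, fb = short_flow_shares(flows)
     print(fn, fp, fb)
     ```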

  7. Use aggressive timeouts!

     [Figure: required flow state vs. idle timeout, 5-120 s]

     - Dominance of short flows indicates short timeouts can significantly decrease required flow state
     - Idle timeouts longer than 15 s merely add to state requirements
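     The timeout effect can be illustrated by counting how many flow-table entries are held simultaneously when each entry lives until `end + timeout`. A toy model under simplified assumptions, not the talk's methodology:

     ```python
     def peak_state(flows, idle_timeout):
         """Peak number of simultaneous flow-table entries when each
         entry is held for `idle_timeout` seconds past the flow's
         last packet. `flows` is a list of (start, end) pairs.
         """
         events = []
         for start, end in flows:
             events.append((start, +1))                 # entry created
             events.append((end + idle_timeout, -1))    # entry expires
         events.sort(key=lambda e: (e[0], e[1]))
         active = peak = 0
         for _, delta in events:
             active += delta
             peak = max(peak, active)
         return peak

     # One 250 ms flow per second: short flows, as in the measured data.
     flows = [(t, t + 0.25) for t in range(0, 100)]
     # A longer idle timeout inflates state held for already-dead flows.
     print(peak_state(flows, 5), peak_state(flows, 60))  # 6 61
     ```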

  8. Development of flow concurrency over time

     [Figure: flow concurrency per year, 2003-2011]

     - Flow concurrency increases with traffic volume.
     - Large flows contribute far more to traffic volume than to flow count:
       - Correlation: 0.668

  9. Guidance for edge-policing in capacity sharing

     - ~10 peak concurrent flows per active host
     - ~12 peak concurrent flows per active client (excl. scanning)
       - → 200 kB per /24, assuming 64 B/flow
     - Server concurrency depends on popularity
       - Even with 100x concurrency, 20 MB per /24
     - → there does not appear to be a problem here
       - (but make sure you don't need to police the interconnect)
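     The 200 kB and 20 MB figures follow from straightforward arithmetic: addresses in the prefix × peak flows per host × bytes of state per flow. A sketch treating every address as an active host (an upper bound; the function name is illustrative):

     ```python
     def state_per_prefix(prefix_len, peak_flows_per_host, bytes_per_flow=64):
         """Worst-case flow-state memory for one IPv4 prefix, assuming
         every address in the prefix is an active host.
         """
         hosts = 2 ** (32 - prefix_len)
         return hosts * peak_flows_per_host * bytes_per_flow

     # The slide's numbers: 12 peak flows per client, 64 B of state per flow.
     print(state_per_prefix(24, 12))        # 196608 B, i.e. ~200 kB per /24
     print(state_per_prefix(24, 12 * 100))  # ~20 MB per /24 at 100x concurrency
     ```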

  10. Toward flow-state fairness

  11. Decreasing the brittleness of in-network state

     - Two solutions to increasing flow concurrency for flow-state-keeping devices:
       - Graceful degradation (audit and policing, measurement, etc.)
       - Massive overprovisioning
     - Use of devices that must be overprovisioned is increasing in the network
       - Anything with NAT in the name.
     - Can capacity-sharing approaches be of use here?

  12. Potential flow-state control schemes

     - Temporal offload: delay SYN before forwarding
       - Most peaks are transient → delay can help ride them out
       - Too much delay leads to retransmit / timeout
       - (Much less delay than this can impact perceived latency)
     - Lower-concurrency transport
       - e.g. SPDY: reduce concurrency by opening fewer flows
     - Discouragement of flow-state overload
       - Declare flow length in advance, and incentivize longer flows
       - Machinery here looks parallel to ConEx
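     Temporal offload can be sketched as a small admission queue: when the flow table is full, hold incoming SYNs briefly instead of dropping them, and forward them as entries expire. A toy model; the class and method names are assumptions, not from the talk:

     ```python
     import collections

     class SynDelayer:
         """Toy model of temporal offload for a flow-state device."""

         def __init__(self, capacity):
             self.capacity = capacity              # flow-table size
             self.active = 0                       # entries in use
             self.pending = collections.deque()    # delayed SYNs (FIFO)

         def syn(self, flow_id):
             """Handle an incoming SYN; True if forwarded now, False if delayed."""
             if self.active < self.capacity:
                 self.active += 1
                 return True
             self.pending.append(flow_id)  # ride out the transient peak
             return False

         def flow_closed(self):
             """An entry expired; admit one delayed SYN, if any."""
             self.active -= 1
             if self.pending:
                 self.pending.popleft()
                 self.active += 1

     d = SynDelayer(capacity=2)
     print(d.syn("a"), d.syn("b"), d.syn("c"))  # True True False
     d.flow_closed()
     print(len(d.pending))  # 0: the delayed SYN was admitted
     ```

     Real deployments would also need a cap on how long a SYN may wait, since holding it past the client's retransmission timer defeats the purpose.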

  13. Acknowledgements

     - Thanks to SWITCH for the network flow data examined
     - mPlane: http://www.ict-mplane.eu/
