Inferring Queue State by Measuring Delay in a WiFi Network
David Malone, Douglas J Leith, Ian Dangerfield
11 May 2009
WiFi, Delay, Queue Length and Congestion Control
• RTT as a proxy for queue length.
• Not too crazy in wired networks.
• For WiFi? Per-packet service is random (a toy model is sketched below).
  [Timeline diagram: transmissions (~500 us), backoff counting down (20 us slots), "someone else transmits, stop counting!", data and then ACK, collision followed by timeout.]
• With fixed traffic, what is the impact of random service?
• What is the impact of variable traffic (not even sharing the buffer)?
• What will Vegas do in practice?
• Want to understand these for future design work.
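To make the "random service" point concrete, here is a minimal sketch of a toy per-packet WiFi service-time model. It is illustrative only, not the talk's measurement setup: the 20 us slot and ~500 us transmission come from the slide, while the contention window, busy probability, collision probability and timeout are assumed values chosen just for illustration.

```python
# Illustrative only: a toy model of per-packet WiFi service time, using the
# slide's rough numbers (20 us backoff slots, ~500 us per transmission) plus
# assumed values for backoff window, collision probability and timeout.
import random

SLOT_US = 20        # backoff countdown slot (from the slide)
TX_US = 500         # data + ACK exchange (rough figure from the slide)
CW_MIN = 32         # assumed contention window
P_BUSY = 0.1        # assumed chance another station transmits in a slot
P_COLLIDE = 0.05    # assumed chance our transmission collides
TIMEOUT_US = 300    # assumed ACK timeout after a collision

def service_time_us():
    """Time to get one packet through the MAC (toy model)."""
    total = 0.0
    cw = CW_MIN
    while True:
        backoff = random.randrange(cw)
        for _ in range(backoff):
            total += SLOT_US
            if random.random() < P_BUSY:
                total += TX_US          # freeze: someone else transmits
        total += TX_US                  # our data and then the ACK
        if random.random() >= P_COLLIDE:
            return total                # success
        total += TIMEOUT_US             # collision followed by timeout
        cw = min(2 * cw, 1024)          # exponential backoff

if __name__ == "__main__":
    samples = [service_time_us() for _ in range(10000)]
    print("mean per-packet service time: %.0f us" % (sum(samples) / len(samples)))
```

The point of the sketch is simply that the time to serve one packet is a random variable with several distinct contributions, so RTT is a much noisier proxy for queue length than in a wired network.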
Sample Previous Work
• V. Jacobson. pathchar - a tool to infer characteristics of Internet paths. MSRI, April 1997.
• N. Sundaram, W. S. Conner, and A. Rangarajan. Estimation of bandwidth in bridged home networks. WiNMee, 2007.
• M. Franceschinis, M. Mellia, M. Meo, and M. Munafo. Measuring TCP over WiFi: A real case. WiNMee, April 2005.
• G. McCullagh. Exploring delay-based TCP congestion control. Masters Thesis, 2008.
Fixed Traffic: How bad is the problem?
[Figure: observed drain time (us) vs queue length (packets).]
What do the stats look like?
[Figure: drain time distribution, with mean drain time (us) and number of observations vs packets in queue.]
Note: the variance is getting bigger.
How does it grow?
[Figure: drain time standard deviation (us) vs queue length (packets), measured alongside a sqrt(n) estimate.]
For fixed traffic, service times look uncorrelated (a quick Monte Carlo check of the sqrt(n) scaling is sketched below).
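A quick way to see why the standard deviation should grow like sqrt(n): if per-packet service times are roughly i.i.d., the drain time of n packets is a sum of n independent draws, so its standard deviation scales with sqrt(n). A minimal Monte Carlo sketch of that argument follows; the exponential service-time distribution and its mean are assumptions for illustration, not the measured WiFi distribution.

```python
# Minimal sketch: std dev of a sum of n i.i.d. service times grows like sqrt(n).
# The exponential service-time distribution (mean 3 ms) is an assumption for
# illustration, not the measured WiFi service-time distribution.
import math
import random

MEAN_SERVICE_US = 3000
TRIALS = 2000

def drain_time_us(n):
    """Drain time of an n-packet queue as a sum of n i.i.d. service times."""
    return sum(random.expovariate(1.0 / MEAN_SERVICE_US) for _ in range(n))

for n in (1, 4, 9, 16):
    samples = [drain_time_us(n) for _ in range(TRIALS)]
    mean = sum(samples) / TRIALS
    std = math.sqrt(sum((x - mean) ** 2 for x in samples) / TRIALS)
    # std/sqrt(n) should stay roughly constant if the sqrt(n) scaling holds.
    print("n=%2d  std=%7.0f us  std/sqrt(n)=%7.0f us" % (n, std, std / math.sqrt(n)))
```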
Queue Length Prediction
• Suppose traffic is fixed.
• We have collected all the statistics.
• Given an RTT, can we guess how full the queue is?
• Easier: more or less than half full? (A threshold-based guess is sketched below.)
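As an illustration of the thresholding idea: pick a delay threshold and guess "more than half full" whenever the observed drain time exceeds it. In this sketch the service-time distribution, buffer size and queue-occupancy mix are all assumptions chosen for illustration; the results on the next slide use measured drain times instead.

```python
# Minimal sketch of the thresholding idea: guess "queue more than half full"
# whenever the observed drain time exceeds a fixed delay threshold.
# Service times and queue occupancies are drawn from assumed distributions.
import random

MEAN_SERVICE_US = 3000
BUFFER_PKTS = 20

def drain_time_us(queue_len):
    return sum(random.expovariate(1.0 / MEAN_SERVICE_US) for _ in range(queue_len))

def wrong_fraction(threshold_us, samples=20000):
    wrong = 0
    for _ in range(samples):
        q = random.randint(1, BUFFER_PKTS)            # assumed occupancy mix
        guess_over_half = drain_time_us(q) > threshold_us
        truly_over_half = q > BUFFER_PKTS // 2
        wrong += (guess_over_half != truly_over_half)
    return wrong / samples

for threshold in (20000, 30000, 40000, 50000):
    print("threshold %6d us -> wrong %.1f%%" % (threshold, 100 * wrong_fraction(threshold)))
```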
Results of thresholding
[Figure: fraction of packets for which the delay threshold is wrong, and wrong by 50% or more, vs threshold (us).]
Using History
• Wrong 10% of the time, which is not good enough for congestion control.
• That only uses one sample; what happens if we use history?
• Obvious thing to do: filter.
Filters
• 7/8 filter: srtt ← (7/8)·srtt + (1/8)·rtt
• Exponential time: srtt ← e^(−ΔT/Tc)·srtt + (1 − e^(−ΔT/Tc))·rtt
• Windowed mean: srtt ← mean of the rtt samples over the last RTT
• Windowed min: srtt ← min of the rtt samples over the last RTT
(Illustrative implementations are sketched below.)
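The following is a minimal sketch of the four filters, assuming one rtt sample arrives per update() call together with its timestamp. The time constant Tc and the window length (roughly one RTT in the slide's definition) are free parameters here, not values used in the talk's experiments.

```python
# Sketches of the four filters on the slide. One rtt sample per update();
# the time constant and window length are assumed parameters.
import math
from collections import deque

class SevenEighthsFilter:
    """srtt <- (7/8)*srtt + (1/8)*rtt"""
    def __init__(self):
        self.srtt = None
    def update(self, rtt, now=None):
        self.srtt = rtt if self.srtt is None else 0.875 * self.srtt + 0.125 * rtt
        return self.srtt

class ExpTimeFilter:
    """srtt <- exp(-dT/Tc)*srtt + (1 - exp(-dT/Tc))*rtt"""
    def __init__(self, tc):
        self.tc = tc
        self.srtt = None
        self.last_t = None
    def update(self, rtt, now):
        if self.srtt is None:
            self.srtt = rtt
        else:
            w = math.exp(-(now - self.last_t) / self.tc)
            self.srtt = w * self.srtt + (1.0 - w) * rtt
        self.last_t = now
        return self.srtt

class WindowedFilter:
    """srtt <- reduce(rtt samples seen in the last window); reduce = mean or min."""
    def __init__(self, window, reduce_fn):
        self.window = window            # window length, roughly one RTT
        self.reduce_fn = reduce_fn
        self.samples = deque()          # (time, rtt) pairs
    def update(self, rtt, now):
        self.samples.append((now, rtt))
        while self.samples[0][0] < now - self.window:
            self.samples.popleft()
        return self.reduce_fn([r for _, r in self.samples])

# Example instances (window of 0.2 s is an assumed stand-in for "last RTT"):
windowed_min = WindowedFilter(window=0.2, reduce_fn=min)
windowed_mean = WindowedFilter(window=0.2, reduce_fn=lambda v: sum(v) / len(v))
```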
How much better do we do?
[Figure: fraction of packets for which the threshold is wrong, and wrong by 50% or more, vs threshold (us), for raw samples and the exponential-time, windowed-mean, 7/8 and windowed-min filters.]
Variable Network Conditions
• Other traffic can change the service rate.
• It need not even share the same buffer.
• Nonlinear because of collisions.
• What happens when we add/remove competing stations?
Removing stations (4 → 1)
[Figure: drain time (us) and queue length (packets) vs time (s) around the change.]
Adding stations (4 → 8, ACK Prio)
[Figure: drain time (us) and queue length (packets) vs time (s) around the change.]
Even the base RTT changes.
Vegas in Practice
TargetCwnd = cwnd × baseRTT / minRTT
Make decisions based on TargetCwnd − cwnd (see the sketch below).
• Will Vegas make the right decisions based on the current RTT?
• Will Vegas get the correct base RTT?
• Vary delay with dummynet.
• Vary bandwidth by adding competing stations.
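A minimal sketch of the Vegas-style decision on the slide: compute TargetCwnd from baseRTT and the minimum RTT seen over the last RTT, then adjust cwnd according to how far it sits from the target. The alpha/beta thresholds and the one-packet step are typical Vegas-style choices assumed here for illustration, not taken from the Linux source.

```python
# Sketch of the Vegas-style decision on the slide. alpha/beta and the
# one-packet step are assumed, typical Vegas-style values.
ALPHA = 2   # packets
BETA = 4    # packets

def vegas_update(cwnd, base_rtt, min_rtt):
    """Return the new cwnd given baseRTT and the min RTT over the last RTT."""
    target_cwnd = cwnd * base_rtt / min_rtt
    diff = cwnd - target_cwnd          # packets held in queues beyond the pipe
    if diff < ALPHA:
        return cwnd + 1                # little queueing delay: grow
    if diff > BETA:
        return cwnd - 1                # too much queueing delay: shrink
    return cwnd                        # in the target band: hold

# Example: baseRTT 5 ms, minRTT over the last RTT 8 ms, cwnd 20 packets.
print(vegas_update(20, 0.005, 0.008))   # queueing delay is large, so cwnd shrinks
```

The random drain times shown earlier feed directly into minRTT here, which is why the next two slides ask whether the resulting decisions are sensible in practice.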
Vegas with 5ms RTT
[Figure: cwnd (packets) and throughput (pps) vs time (s).]
Lower bound like 1 − α/cwnd.
Vegas with 200ms RTT
[Figure: cwnd (packets) and throughput (pps) vs time (s).]
Sees loss and goes into Reno mode.
Conclusion
• With fixed traffic, delay is quite variable.
• Variability grows with buffer occupancy like √n.
• Obvious filters make things worse.
• Need to deal with changes in traffic conditions.
• Linux Vegas does OK.
• The switch to Reno helps.
• Vegas is insensitive at smaller buffer sizes.
• Variability at larger buffer sizes is still a problem.