CS 557: More Distance Vector Routing




  1. CS 557: More Distance Vector Routing
   • A Path Finding Algorithm for Loop-Free Routing, J.J. Garcia-Luna-Aceves and S. Murthy, 1997
   • The Synchronization of Periodic Routing Messages, Sally Floyd and Van Jacobson, 1993
   • Spring 2013

  2. Distance Vector Routing
   • Objective:
     – Compute the shortest path to every destination in the network
   • Approach:
     – Each node maintains a table listing: (Destination, Distance, NextHop)
     – Periodically advertise (Destination, Distance) to neighbors
     – Note: the resulting table requires O(N) memory for N destinations
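The table and update rule above can be sketched in a few lines (a minimal illustration; `INFINITY = 16` follows RIP's convention, the function names are made up):

```python
# Each node keeps dest -> (distance, next_hop): O(N) memory for N destinations.
INFINITY = 16  # RIP's "unreachable" value

def advertisement(table):
    """The periodic (Destination, Distance) list sent to each neighbor."""
    return [(dest, dist) for dest, (dist, _hop) in table.items()]

def process_update(table, neighbor, link_cost, update):
    """Bellman-Ford relaxation of a received (dest, dist) list."""
    for dest, dist in update:
        new_dist = min(link_cost + dist, INFINITY)
        old_dist, next_hop = table.get(dest, (INFINITY, None))
        # Always accept news from the current next hop; otherwise
        # only switch when the advertised path is strictly shorter.
        if next_hop == neighbor or new_dist < old_dist:
            table[dest] = (new_dist, neighbor)

# A hears that B can reach X at distance 3, over a cost-1 link.
table = {}
process_update(table, "B", 1, [("X", 3)])
print(table["X"])  # (4, 'B')
```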

  3. Distance Vector Soft-State
   • Send updates every 30 seconds
     – Even if nothing has changed
   • Time out stale distances
     – If 90 seconds elapse with no update from NextHop(I,D), then Dist(I,D) = 16 and NextHop(I,D) = none
   • Triggered updates
     – If a route changes, send an update immediately
     – But also ensure 5-second spacing between triggered updates
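The timeout and spacing rules can be sketched as follows (the constants mirror the slide; the function names and data layout are illustrative, not RIP's):

```python
INFINITY = 16
UPDATE_PERIOD = 30    # periodic advertisement interval (seconds)
ROUTE_TIMEOUT = 90    # expire a route after this much silence
TRIGGER_SPACING = 5   # minimum gap between triggered updates

def expire_stale_routes(table, last_heard, now):
    """If NextHop(I,D) has been silent for over 90 s, poison the route."""
    for dest, (dist, next_hop) in list(table.items()):
        if next_hop is not None and now - last_heard.get(next_hop, now) > ROUTE_TIMEOUT:
            table[dest] = (INFINITY, None)

def may_send_triggered(now, last_triggered):
    """Triggered updates must be spaced at least 5 seconds apart."""
    return now - last_triggered >= TRIGGER_SPACING

# B was last heard at t=0; by t=100 the route through it has expired.
table = {"X": (4, "B")}
expire_stale_routes(table, {"B": 0.0}, now=100.0)
print(table["X"])  # (16, None)
```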

  4. Distance Vector Fault Detection
   • Split horizon with poison reverse
     – If NextHop(I,D) = J, then the update sent to J lists [D, Dist(I,D) = 16]
   • Apply a triangle check before accepting an update
     – Distances should obey the triangle rule
   • If an update fails the triangle rule:
     – Send probes to check the distance
     – Apply rules to limit probing
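Split horizon with poison reverse amounts to building a per-neighbor advertisement; a minimal sketch (names illustrative):

```python
INFINITY = 16

def advertisement_for(table, neighbor):
    """Split horizon with poison reverse: routes whose next hop is
    `neighbor` are advertised back to it with distance 16."""
    update = []
    for dest, (dist, next_hop) in table.items():
        update.append((dest, INFINITY if next_hop == neighbor else dist))
    return update

# C reaches X via D and Y via B, so its update *to D* poisons X only.
table = {"X": (2, "D"), "Y": (5, "B")}
print(advertisement_for(table, "D"))  # [('X', 16), ('Y', 5)]
```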

  5. Distance Vector Net Result
   [Figure: nodes A, B, C, D and destination X; link cost(A,D) = 7. Initial state: Dist(D,X)=1, Next(D,X)=X; Dist(C,X)=2, Next(C,X)=D; Dist(B,X)=3, Next(B,X)=C; Dist(A,X)=4, Next(A,X)=B]
   1. Link (D,X) fails => Dist(D,X) = 16
   2. A's stale update Dist(A,X) = 4 arrives at D => Dist(D,X) = 11, Next(D,X) = A
   3. D sends an update to C: Dist(D,X) = 11 => Dist(C,X) = 12
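The three steps of this count-to-infinity scenario can be traced numerically (costs taken from the figure; this is just the arithmetic, not a full router):

```python
INFINITY = 16

# Pre-failure distances to X, as in the figure.
dist = {"D": 1, "C": 2, "B": 3, "A": 4}

# 1. Link (D, X) fails.
dist["D"] = INFINITY

# 2. A's stale advertisement Dist(A,X)=4 reaches D over the cost-7 A-D link,
#    so D adopts a path that (unknowingly) loops back through itself.
dist["D"] = min(dist["D"], 7 + dist["A"])   # 11

# 3. D advertises 11 to C, which uses D as its next hop over a cost-1 link.
dist["C"] = 1 + dist["D"]                   # 12

print(dist["D"], dist["C"])  # 11 12
```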

  6. Path Finding Algorithm (1/2)
   • Objective:
     – Compute the shortest path to every destination
     – Maintain O(N) table size, O(N) update size, and the same message complexity
   • Approach:
     – Each node maintains a table listing: Destination, Distance, NextHop, LastHop
       • Note: changes the table size from 3N to 4N
     – Periodically advertise distances and last hops: (Destination, Distance, LastHop) to neighbors
       • Note: changes each update entry from 2 fields to 3
     – Same message-sending rules, so no change in message complexity

  7. Path Finding Algorithm (2/2)
   • Upon receiving update [Dest, Dist(J,D), LastHop(J,D)] from node J, node I applies the rule:
     – If NextHop(I,D) = J, then Dist(I,D) = Dist(I,J) + Dist(J,D)
     – Else if Dist(I,J) + Dist(J,D) < Dist(I,D), then Dist(I,D) = Dist(I,J) + Dist(J,D) and NextHop(I,D) = J
     – Use LastHop to find potential loops
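The update rule above translates directly into code (a sketch; the LastHop loop check is only noted, not implemented):

```python
def apply_update(table, dist_i_j, j, dest, dist_j_d, last_j_d):
    """Node I's rule for an update [dest, Dist(J,D), LastHop(J,D)] from J.
    table maps dest -> (Dist, NextHop, LastHop)."""
    dist_i_d, next_hop, _last = table.get(dest, (float("inf"), None, None))
    if next_hop == j:
        # J is already the next hop: adopt its distance unconditionally.
        table[dest] = (dist_i_j + dist_j_d, j, last_j_d)
    elif dist_i_j + dist_j_d < dist_i_d:
        table[dest] = (dist_i_j + dist_j_d, j, last_j_d)
    # A real implementation would also inspect last_j_d here to detect
    # loops before accepting the new route.

# I hears from neighbor B (link cost 1): "X is at distance 3, last hop D".
table = {}
apply_update(table, 1, "B", "X", 3, "D")
print(table["X"])  # (4, 'B', 'D')
```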

  8. Path Finding
   [Figure: same topology; tables now include LastHop: Dist(D,X)=1, Next(D,X)=X, Last(D,X)=D; Dist(C,X)=2, Next(C,X)=D, Last(C,X)=D; Dist(B,X)=3, Next(B,X)=C, Last(B,X)=D; Dist(A,X)=4, Next(A,X)=B, Last(A,X)=D]
   • Find the path from A to X
     – Start: Last(A,X) = D, so Path = (A, ???, D, X)
     – What is Last(A,D)? Path = (A, ???, Last(A,D), D, X)
     – Now recurse until the whole path has been found!
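The recursion above can be written as a short loop over A's LastHop column (a sketch; the example LastHop entries are filled in for the figure's topology A-B-C-D-X):

```python
def full_path(last, src, dest):
    """Reconstruct the path src -> dest from src's LastHop column alone.
    last[d] = Last(src, d): the node just before d on src's shortest path.
    Relies on the subpath property of shortest paths."""
    path = [dest]
    while path[-1] != src:
        path.append(last[path[-1]])
    return list(reversed(path))

# LastHop column at A, assuming shortest path A-B-C-D-X.
last_at_A = {"X": "D", "D": "C", "C": "B", "B": "A"}
print(full_path(last_at_A, "A", "X"))  # ['A', 'B', 'C', 'D', 'X']
```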

  9. Path Finding Summary
   • Keeps the same complexity as original DV
     – Table size is 4N rather than 3N
     – Message sizes are 3N rather than 2N
   • But also lets any node infer the complete path
     – Relies on the subpath property of shortest paths
   • An interesting addition, and roughly the state of the art in distance vector routing
     – Most deployed DV protocols are still variations of basic RIP
     – Kept simple: no RIP-TP, no path finding, etc.

  10. Update Synchronization [FJ93b]
   • Objective:
     – Examine synchronization of weakly coupled systems
   • Approach:
     – Use a Markov chain model to explore synchronization and explain network behavior
   • Contributions:
     – Random jitter to break synchronization is now a fundamental part of protocols
     – Explains why synchronization occurs and how to choose the random jitter amount

  11. Ping Round-Trip Times
   [Figure: ping round-trip times over time] Appears to have periodic losses… Why?

  12. Packet Losses in a RIP Network
   [Figure: losses in a RIP network] Appears to match the RIP update frequency

  13. Synchronization (?)
   • Periodic messages are common in many protocols
     – A RIP router sends updates every 30 seconds
   • The system begins in an unsynchronized state
     – RIP routers start at different times
     – One might expect updates to remain unsynchronized…
   • Example:
     – Router R1 starts at time 1 and sends updates at times 1, 31, 61, etc.
     – Router R2 starts at time 3 and sends updates at times 3, 33, 63, etc.
     – Router R3 starts at time 5 and sends updates at times 5, 35, 65, etc.

  14. Synchronization in Reality
   • Periodic messages are common in many protocols
     – A RIP router sends updates every 30 seconds
   • The system begins in an unsynchronized state
     – RIP routers start at different times
   • But the initial state does not matter
     – Some systems start in any state (synchronized or not) and become unsynchronized over time
     – Other systems start in any state (synchronized or not) and become synchronized
   • The transition from unsynchronized to synchronized (or vice versa) is abrupt

  15. Periodic Message Model
   1. Router R prepares and sends its own outgoing message
   2. If R receives an update, R also processes the incoming message
   3. Router R sets a new periodic message interval in the range [Tp − Tr, Tp + Tr]
      • In RIP, Tp = 30 seconds and Tr is the random jitter amount
   4. An incoming message can trigger an update
      • In other words, go directly to step 1 without waiting for the timer to expire
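The model's timer rule, plus a crude version of the coupling in steps 2 and 4, can be sketched as a simulation. This is a made-up simplification for illustration, not the paper's simulator: the coupling here is modeled as "a router firing within Tc of an earlier firing is delayed by processing that update."

```python
import random

Tp, Tr, Tc = 30.0, 0.5, 0.2  # period, jitter bound, processing time (illustrative)

def next_interval(rng):
    """Step 3: draw the next periodic interval from [Tp - Tr, Tp + Tr]."""
    return Tp + rng.uniform(-Tr, Tr)

def one_round(fire_times, rng):
    """Fire every router's timer once, in time order. A router firing
    within Tc of the previous firing must first process that incoming
    update, delaying its own message -- this weak coupling is what
    lets clusters of synchronized routers form."""
    out, prev = [], None
    for t in sorted(fire_times):
        if prev is not None and t - prev < Tc:
            t = prev + Tc            # delayed by processing the earlier update
        prev = t
        out.append(t + next_interval(rng))
    return out

rng = random.Random(42)
fire = [rng.uniform(0.0, Tp) for _ in range(10)]
for _ in range(100):
    fire = one_round(fire, rng)
```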

  16. Creating and Breaking Clusters

  17. Cluster Size Over Time
   [Figure] Note: all routers end up in the same cluster (synchronized)

  18. Markov Chain Model
   • Model the system using a Markov chain
   • Each state is the size of the largest cluster
     – Assume the largest cluster either grows or shrinks by 1
     – Some simplifications relative to the real system

  19. Cluster Behavior
   • Given a cluster of size i, consider when the timers for nodes in the cluster expire
     – Each node has a base interval of Tp
     – Each node waits i·Tc to process the cluster's updates
     – Each node adds a random jitter Tr
       • Assume the random amount is uniform over the interval [−Tr, Tr]
   • The expected time until the first timer expires is:
     Tp + i·Tc − Tr(i − 1)/(i + 1)
   • In other words: the periodic interval, plus the time to process the other cluster updates, plus the expected offset of the earliest random jitter (negative for i > 1)
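The −Tr(i − 1)/(i + 1) term is simply the expected minimum of i independent uniform jitters on [−Tr, Tr]; a quick Monte Carlo check (illustrative code, not from the paper):

```python
import random

def mean_min_jitter(i, Tr, trials, rng):
    """Monte Carlo estimate of E[min of i draws from Uniform(-Tr, Tr)]."""
    return sum(min(rng.uniform(-Tr, Tr) for _ in range(i))
               for _ in range(trials)) / trials

i, Tr = 5, 1.0
estimate = mean_min_jitter(i, Tr, 200_000, random.Random(0))
closed_form = -Tr * (i - 1) / (i + 1)  # the slide's -Tr(i-1)/(i+1) term
print(round(estimate, 2), round(closed_form, 2))  # both near -0.67
```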

  20. Cluster Breakup
   • Typical cluster break-up scenario
     – One node breaks off from the cluster
     – Results in one cluster of size 1 and one of size i − 1
   • Let node A be the node whose timer expires first
     – To break from the cluster, A must compute and send its message before the second node in the cluster sends its message
     – In other words, node A sends its message and resets its timer before receiving any other message from the cluster nodes
   • Mathematically:
     – Let L = the spacing between the first and second messages
     – Let Tc = the compute-and-send time for the first node
     – We want L > Tc, which gives prob = (1 − Tc/(2Tr))^i

  21. Cluster Addition
   • Conservative cluster-addition scenario
     – One node joins the cluster
   • What it means to join the cluster…
     – The cluster's timer offset moves to within Tc of the timer offset of some other node (a cluster of size 1)
     – In other words, node A sends its message, and updates from the cluster arrive before it is done
   • Mathematically:
     prob = 1 − e^(−((N − i + 1)/Tp) · ((i − 1)Tc − Tr(i − 1)/(i + 1)))

  22. Understanding the System
   • Given a Markov chain and probabilities for state transitions, we can determine expected behavior
     – State N = the system is completely synchronized
     – State 1 = the system is completely unsynchronized
   • Function f(i) = rounds until the chain first enters state i

  23. Understanding the System (2)
   • Function g(i) = rounds until the chain first enters state i, starting from state N
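Quantities like f(i) and g(i) are first-hitting times of the cluster-size chain, and given transition probabilities they can be computed numerically. A minimal sketch (the transition function below is an unbiased placeholder, not the paper's probabilities):

```python
def hitting_rounds(p_up, n, target, iters=20_000):
    """Expected rounds for a birth-death chain on states 1..n to first reach
    `target`, from every start state, via fixed-point iteration on
    E[i] = 1 + p*E[i+1] + (1-p)*E[i-1], reflecting at the boundaries."""
    E = {i: 0.0 for i in range(1, n + 1)}
    for _ in range(iters):
        E = {i: 0.0 if i == target else
                1.0 + p_up(i) * E[min(i + 1, n)]
                    + (1 - p_up(i)) * E[max(i - 1, 1)]
             for i in range(1, n + 1)}
    return E

# Placeholder chain: the largest cluster is equally likely to grow or shrink.
E = hitting_rounds(lambda i: 0.5, 5, target=5)
print(E[1])  # 20.0: expected rounds from fully unsynchronized to fully synchronized
```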

  24. The Impact of Randomness (Tr)

  25. Conclusions
   • There is a natural tendency for the system to synchronize
   • A random jitter amount (Tr) breaks clusters and unsynchronizes the system
     – Selecting Tr > Tp/2 should eliminate synchronization
     – Selecting Tr > 10·Tc should quickly break up any synchronization
   • A random jitter on every periodic timer is now a fundamental part of modern protocols
