CSCE 423/823: Design and Analysis of Algorithms
Lecture 06: All-Pairs Shortest Paths (Chapter 25)
Shortest Paths and Matrix Multiplication; Floyd-Warshall Algorithm
Stephen Scott (adapted from Vinodchandran N. Variyam)
Spring 2010
Introduction

Similar to SSSP, but find shortest paths for all pairs of vertices: given a weighted, directed graph G = (V, E) with weight function w : E → R, find δ(u, v) for all (u, v) ∈ V × V.

One solution: run an algorithm for SSSP |V| times, treating each vertex in V as a source.
- If there are no negative-weight edges, use Dijkstra's algorithm, for a time complexity of O(|V|^3 + |V||E|) = O(|V|^3) with an array implementation, or O(|V||E| log |V|) if a heap is used.
- If there are negative-weight edges, use Bellman-Ford and get an O(|V|^2 |E|)-time algorithm, which is O(|V|^4) if the graph is dense.

Can we do better?
- Matrix multiplication-style algorithm: Θ(|V|^3 log |V|)
- Floyd-Warshall algorithm: Θ(|V|^3)

Both algorithms handle negative-weight edges.
Adjacency Matrix Representation

We will use the adjacency matrix representation and assume the vertices are numbered: V = {1, 2, ..., n}.

Input to our algorithms will be an n × n matrix W, where

  w_ij = 0                      if i = j
       = weight of edge (i, j)  if (i, j) ∈ E
       = ∞                      if i ≠ j and (i, j) ∉ E

For now, assume negative-weight cycles are absent.

In addition to the distance matrices L and D produced by the algorithms, we can also build a predecessor matrix Π, where π_ij = predecessor of j on a shortest path from i to j, or NIL if i = j or no path exists. This is well defined due to the optimal substructure property.
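As an illustration of this representation (not part of the lecture), here is a minimal Python sketch that builds the weight matrix W from an edge list; the function name and the use of float('inf') for ∞ are my own choices.

```python
from math import inf

def build_weight_matrix(n, edges):
    """Build the n x n matrix W for vertices 1..n from (i, j, weight) triples.

    w[i][j] = 0 if i == j, the edge weight if (i, j) is an edge, and
    infinity otherwise (stored 0-indexed internally).
    """
    W = [[0 if i == j else inf for j in range(n)] for i in range(n)]
    for i, j, weight in edges:
        W[i - 1][j - 1] = weight   # convert 1-based vertex labels to 0-based indices
    return W

# Hypothetical example: a 3-vertex graph with edges 1->2 (3), 2->3 (4), 1->3 (8)
W = build_weight_matrix(3, [(1, 2, 3), (2, 3, 4), (1, 3, 8)])
```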
Printing Shortest Paths

Algorithm 1: Print-All-Pairs-Shortest-Path(Π, i, j)

  if i == j then
      print i
  else if π_ij == NIL then
      print "no path from" i "to" j "exists"
  else
      Print-All-Pairs-Shortest-Path(Π, i, π_ij)
      print j
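A minimal executable version of this routine, assuming Π is stored as a 0-indexed list of lists with None for NIL (these representation choices are mine, not the lecture's):

```python
def print_all_pairs_shortest_path(pred, i, j):
    """Print the vertices on a shortest path from i to j, using the
    predecessor matrix pred (pred[i][j] is j's predecessor, or None)."""
    if i == j:
        print(i)
    elif pred[i][j] is None:
        print(f"no path from {i} to {j} exists")
    else:
        # Recurse to the predecessor of j, then print j itself.
        print_all_pairs_shortest_path(pred, i, pred[i][j])
        print(j)
```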
Shortest Paths and Matrix Multiplication

We will maintain a series of matrices L^(m) = (ℓ_ij^(m)), where ℓ_ij^(m) = the minimum weight of any path from i to j that uses at most m edges.

Special case: ℓ_ij^(0) = 0 if i = j, and ∞ otherwise.

Example (from the figure): ℓ_13^(0) = ∞, ℓ_13^(1) = 8, ℓ_13^(2) = 7.
Recursive Solution

We can exploit the optimal substructure property to get a recursive definition of ℓ_ij^(m). To follow a shortest path from i to j using at most m edges, either:
1. Take a shortest path from i to j using ≤ m-1 edges and stay put, or
2. Take a shortest path from i to some k using ≤ m-1 edges and traverse edge (k, j):

  ℓ_ij^(m) = min( ℓ_ij^(m-1),  min_{1 ≤ k ≤ n} { ℓ_ik^(m-1) + w_kj } )

Since w_jj = 0 for all j, this simplifies to

  ℓ_ij^(m) = min_{1 ≤ k ≤ n} { ℓ_ik^(m-1) + w_kj }

If there are no negative-weight cycles, then since all shortest paths have ≤ n-1 edges,

  δ(i, j) = ℓ_ij^(n-1) = ℓ_ij^(n) = ℓ_ij^(n+1) = ···
Bottom-Up Computation of L Matrices

Start with the weight matrix W and compute the series of matrices L^(1), L^(2), ..., L^(n-1).

The core of the algorithm is a routine to compute L^(m+1) given L^(m) and W. Start with L^(1) = W, and iteratively compute new L matrices until we get L^(n-1).

Why is L^(1) == W?

Can we detect negative-weight cycles with this algorithm? How?
Extend-Shortest-Paths

Algorithm 2: Extend-Shortest-Paths(L, W)

  n = number of rows of L          // L is L^(m)
  create new n × n matrix L'       // this will be L^(m+1)
  for i = 1 to n do
      for j = 1 to n do
          ℓ'_ij = ∞
          for k = 1 to n do
              ℓ'_ij = min(ℓ'_ij, ℓ_ik + w_kj)
  return L'
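A direct Python transcription of Algorithm 2, assuming matrices are lists of lists and ∞ is represented by float('inf') (these representation choices are mine):

```python
from math import inf

def extend_shortest_paths(L, W):
    """One pass of the recurrence: given L = L^(m) and the weight matrix W,
    return L' = L^(m+1), the min-weight paths using at most m+1 edges."""
    n = len(L)
    L_new = [[inf] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                # Best path i -> k with <= m edges, plus edge (k, j).
                L_new[i][j] = min(L_new[i][j], L[i][k] + W[k][j])
    return L_new
```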
Slow-All-Pairs-Shortest-Paths

Algorithm 3: Slow-All-Pairs-Shortest-Paths(W)

  n = number of rows of W
  L^(1) = W
  for m = 2 to n - 1 do
      L^(m) = Extend-Shortest-Paths(L^(m-1), W)
  return L^(n-1)
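A corresponding sketch of Algorithm 3, reusing the extend_shortest_paths function above (again, representation choices are my own):

```python
def slow_all_pairs_shortest_paths(W):
    """Compute L^(n-1) by applying Extend-Shortest-Paths n-2 times."""
    n = len(W)
    L = W                              # L^(1) = W
    for _ in range(2, n):              # m = 2, ..., n-1
        L = extend_shortest_paths(L, W)
    return L
```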
Example

[Figure: example graph and the computed L^(m) matrices]
Improving Running Time

What is the time complexity of Slow-All-Pairs-Shortest-Paths? Can we do better?

Note that if, in Extend-Shortest-Paths, we change + to multiplication and min to +, we get matrix multiplication of L and W.

If we let ⊙ represent this "multiplication" operator, then Slow-All-Pairs-Shortest-Paths computes

  L^(2) = L^(1) ⊙ W = W^2
  L^(3) = L^(2) ⊙ W = W^3
  ...
  L^(n-1) = L^(n-2) ⊙ W = W^(n-1)

Thus, we get L^(n-1) by iteratively "multiplying" W via Extend-Shortest-Paths.
Improving Running Time (2)

But we don't need every L^(m); we only want L^(n-1).

E.g., to compute 7^64 we could multiply 7 by itself over and over (63 multiplications), or we could just square it 6 times.

In our application, once we have L^((n-1)/2), we can immediately get L^(n-1) from one call to Extend-Shortest-Paths(L^((n-1)/2), L^((n-1)/2)). Of course, we can similarly get L^((n-1)/2) by "squaring" L^((n-1)/4), and so on.

Starting from the beginning, we initialize L^(1) = W, then compute L^(2) = L^(1) ⊙ L^(1), L^(4) = L^(2) ⊙ L^(2), L^(8) = L^(4) ⊙ L^(4), and so on.

What happens if n - 1 is not a power of 2 and we "overshoot" it? How many steps of repeated squaring do we need to make? What is the time complexity of this new algorithm?
Faster-All-Pairs-Shortest-Paths

Algorithm 4: Faster-All-Pairs-Shortest-Paths(W)

  n = number of rows of W
  L^(1) = W
  m = 1
  while m < n - 1 do
      L^(2m) = Extend-Shortest-Paths(L^(m), L^(m))
      m = 2m
  return L^(m)
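A runnable sketch of Algorithm 4 in the same conventions as the earlier snippets (names and matrix representation are my own); overshooting n-1 is harmless because, with no negative-weight cycles, L^(m) = L^(n-1) for all m ≥ n-1:

```python
def faster_all_pairs_shortest_paths(W):
    """Repeated squaring: compute L^(1), L^(2), L^(4), ... until m >= n-1,
    using only about log(n) calls to extend_shortest_paths."""
    n = len(W)
    L = W                                # L^(1) = W
    m = 1
    while m < n - 1:
        L = extend_shortest_paths(L, L)  # "square": L^(2m) from L^(m)
        m *= 2
    return L
```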
Floyd-Warshall Algorithm

Shaves the logarithmic factor off of the previous algorithm.

As with the previous algorithm, start by assuming that there are no negative-weight cycles; negative-weight cycles can be detected the same way as before.

Considers a different way to decompose shortest paths, based on the notion of an intermediate vertex: if a simple path is p = <v_1, v_2, v_3, ..., v_{ℓ-1}, v_ℓ>, then its set of intermediate vertices is {v_2, v_3, ..., v_{ℓ-1}}.
Structure of Shortest Path

Again, let V = {1, ..., n}, and fix i, j ∈ V.

For some 1 ≤ k ≤ n, consider the set of vertices V_k = {1, ..., k}.

Now consider all paths from i to j whose intermediate vertices come from V_k, and let p be a minimum-weight path among them. Is k on p?
1. If not, then all intermediate vertices of p are in V_{k-1}, and a shortest path from i to j based on V_{k-1} is also a shortest path from i to j based on V_k.
2. If so, then we can decompose p into i --p1--> k --p2--> j, where p1 and p2 are each shortest paths based on V_{k-1}.
Structure of Shortest Path (2)

[Figure: decomposition of path p from i to j through intermediate vertex k into subpaths p1 and p2, each with intermediate vertices in V_{k-1}]
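Anticipating where this decomposition leads, here is a minimal Floyd-Warshall sketch in the same conventions as the earlier snippets (0-indexed lists of lists, float('inf') for ∞); it implements the recurrence that case analysis yields, d_ij^(k) = min(d_ij^(k-1), d_ik^(k-1) + d_kj^(k-1)).

```python
def floyd_warshall(W):
    """Compute all-pairs shortest-path weights in Theta(n^3) time.

    After the k-th iteration, D[i][j] holds the weight of a shortest path
    from i to j whose intermediate vertices all lie in {0, ..., k}."""
    n = len(W)
    D = [row[:] for row in W]          # D^(0) = W
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Either avoid vertex k, or go i -> k -> j through it.
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D
```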