Chapter 24: Single-Source Shortest Paths


  1. Chapter 24: Single-Source Shortest Paths. Context: [G = (V, E), w : E → R] is a weighted, directed graph.

     Definitions:

     1. A path from vertex u to vertex v, denoted p : u ⇝ v, is a vertex sequence p = (u = x_0, x_1, ..., x_k = v), with (x_i, x_{i+1}) ∈ E for 0 ≤ i ≤ k − 1. The weight of path p is w(p) = Σ_{j=1}^{k} w(x_{j−1}, x_j).

     2. The shortest-path weight from u to v is

        δ(u, v) = min{ w(p) : p is a path from u to v }, if such paths exist; ∞, otherwise.

     The Single-Source-Shortest-Path (SSSP) algorithm accepts a source vertex s ∈ V and calculates v.d = δ(s, v) for all v ∈ V.

     Observations:

     1. SSSP also solves the Single-Destination-Shortest-Path (SDSP) problem by running SSSP on G^T.

     2. SSSP solves the Single-Pair-Shortest-Path (SPSP) problem for vertices u and v by running SSSP on the entire graph, using u as the source. All known algorithms for SPSP have the same worst-case cost as SSSP.

     3. SSSP solves the All-Pairs-Shortest-Path (APSP) problem by running SSSP once with each vertex as the source. Chapter 25 covers more efficient algorithms.

     4. Negative weights. If negative-weight cycles may be present,

        δ(s, v) = −∞, if v is reachable from s via a path touching a negative-weight cycle;
        δ(s, v) = ∞, if v is not reachable from s;
        δ(s, v) is finite, if v is reachable from s and no path s ⇝ v touches a negative-weight cycle.
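The path-weight and δ definitions above can be sketched in Python. The graph, vertex names, and weights here are invented for illustration only; they do not come from the text.

```python
# Minimal sketch of the path-weight definition, on an assumed example graph.
INF = float("inf")

# Directed, weighted graph: w[(u, v)] is the weight of edge (u, v).
w = {("s", "a"): 3, ("a", "b"): -1, ("s", "b"): 5, ("b", "c"): 2}

def path_weight(p):
    """Weight of path p = (x_0, ..., x_k): sum of w(x_{j-1}, x_j) for j = 1..k."""
    return sum(w[(p[j - 1], p[j])] for j in range(1, len(p)))

# Two s-to-b paths; delta(s, b) is the minimum over all such paths.
print(path_weight(("s", "a", "b")))  # 3 + (-1) = 2
print(path_weight(("s", "b")))       # 5
```

Here δ(s, b) = 2, achieved by the detour through a; note that a negative edge weight can make the longer vertex sequence the lighter path.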

  2. Lemma 24.1 (Optimal Substructure): Let [G = (V, E), w : E → R] be a directed, weighted graph. Let p = (v_0, v_1, ..., v_k) be a shortest path v_0 ⇝ v_k. Then p_{ij} = (v_i, v_{i+1}, ..., v_j) is a shortest path from v_i to v_j, for all 0 ≤ i < j ≤ k.

     Proof: If not, excise the segment v_i ⇝ v_j and replace it with the shorter alternative. This operation produces a path p′ : v_0 ⇝ v_k that is shorter than p, which is a contradiction. Note that negative weights do not invalidate this argument.

  3. Generic Algorithm

     Initialize(G = (V, E), s) {
         for (v ∈ V) {
             v.d = ∞;
             v.π = null;
         }
         s.d = 0;
     }

     Relax((u, v), w) {
         if v.d > u.d + w(u, v) {
             v.d = u.d + w(u, v);
             v.π = u;
         }
     }

     Different algorithms result from different strategies for deploying Relax operations. We will examine two such strategies: (a) the Bellman-Ford algorithm, and (b) Dijkstra's algorithm.

     Conjecture: The edges (v.π, v) construct a G_π-tree, the predecessor graph, as in the breadth-first and depth-first algorithms. Specifically,

         G_π = (V_π, E_π)
         V_π = { v ∈ V : v.π ≠ null } ∪ { s }
         E_π = { (v.π, v) : v ∈ V_π \ { s } },

     and the v.π attributes can be used to determine a shortest path from any reachable v back to s. In the current context, shortest now means minimal weight sum on the connecting links.
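A direct Python transcription of Initialize and Relax, assuming the graph is given as a vertex list V and a weight dict w; the dicts d and pi stand in for the v.d and v.π attributes. The tiny example graph is assumed, not from the text.

```python
INF = float("inf")

def initialize(V, s):
    """Initialize(G, s): every estimate starts at infinity, the source at 0."""
    d = {v: INF for v in V}
    pi = {v: None for v in V}
    d[s] = 0
    return d, pi

def relax(u, v, w, d, pi):
    """Relax((u, v), w): tighten d[v] if the path through u is better."""
    if d[v] > d[u] + w[(u, v)]:
        d[v] = d[u] + w[(u, v)]
        pi[v] = u

# One relaxation of (s, a) on an assumed one-edge graph:
d, pi = initialize(["s", "a"], "s")
relax("s", "a", {("s", "a"): 3}, d, pi)
print(d["a"], pi["a"])  # 3 s
```

Every SSSP algorithm in this chapter is some schedule of these relax calls; only the order and number of calls differ.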

  4. Properties:

     Lemma 24.10 (Triangle Inequality): (u, v) ∈ E implies δ(s, v) ≤ δ(s, u) + w(u, v).

     Proof: If u is not reachable from s, then δ(s, u) = ∞ by definition and the inequality cannot fail. If u is reachable from s, then the path s ⇝ u → v is a competitor in the competition over all such paths that establishes δ(s, v). If we establish s ⇝ u via the shortest possible path, the competitor achieves weight δ(s, u) + w(u, v), which must be at least the minimum that gives δ(s, v).

  5. Lemma 24.11 (Upper-bound property): (a) v.d ≥ δ(s, v) at all times; (b) if v.d = δ(s, v) at some point in the algorithm, then v.d can undergo no further change.

     Proof: We note that v.d can change only when an edge of the form (u, v) is relaxed. We proceed via induction on the number of relax operations. Before the first such relaxation, the initialization routine sets v.d = ∞ for all v ≠ s. Consequently, v.d ≥ δ(s, v) for all v ≠ s. As for s, the initialization routine sets s.d = 0, whereas δ(s, s) = 0 or −∞, depending on whether or not s lies on a negative-weight cycle. In either case s.d ≥ δ(s, s).

     Relax((u, v), w) {
         if v.d > u.d + w(u, v) {
             v.d = u.d + w(u, v);
             v.π = u;
         }
     }

     Now, when an edge (u, v) is relaxed, only v.d is potentially changed. Hence, we assume via induction that u.d ≥ δ(s, u). Then, if the relaxation changes v.d, we have

         v.d = u.d + w(u, v) ≥ δ(s, u) + w(u, v) ≥ δ(s, v),

     the last inequality by the triangle inequality. We conclude that v.d ≥ δ(s, v) for all v ∈ V throughout the operation of the algorithm, which completes the proof of part (a).

     For part (b), suppose v.d = δ(s, v) occurs at some point. We note that the Relax((u, v), w) code either leaves v.d the same or lowers it to a strictly smaller value. If, on entry, Relax((u, v), w) encounters v.d = δ(s, v), then it must leave v.d unchanged, since otherwise it would produce v.d < δ(s, v), contradicting (a).

  6. Lemma 24.12 (No-path property): If no path s ⇝ v exists, then v.d = ∞ at all times.

     Proof: As a trivial path exists from s to s, the property holds vacuously when v = s. If v ≠ s and no path s ⇝ v exists, then δ(s, v) = ∞ by definition. Moreover, the initialization routine sets v.d = ∞. As the upper-bound property assures that v.d ≥ δ(s, v) at all times, we conclude that v.d = ∞ persists throughout the algorithm.

     Lemma 24.14 (Convergence property): If p : s ⇝ u → v is a shortest path from s to v, and u.d = δ(s, u) at any time prior to relaxing edge (u, v), then v.d = δ(s, v) at all times after that relaxation.

     Proof: Given the hypothesis, we have u.d = δ(s, u) prior to relaxing edge (u, v). In that relaxation, one of two actions is taken. One possibility occurs when v.d ≤ u.d + w(u, v) as the relaxation starts, in which case v.d is left unchanged. The other possibility is that the relaxation sets v.d = u.d + w(u, v). In either case, after the relaxation we have

         v.d ≤ u.d + w(u, v) = δ(s, u) + w(u, v) = δ(s, v),

     where the last equality follows because s ⇝ u → v is a shortest path, and therefore its weight must be δ(s, v). Hence v.d ≤ δ(s, v). Since the upper-bound property forces v.d ≥ δ(s, v) at all times, we must have v.d = δ(s, v). Part (b) of the upper-bound property then insists that v.d = δ(s, v) at all subsequent times.

  7. Lemma 24.15 (Path-relaxation property): If p = (s = v_0, v_1, ..., v_k) is a shortest path from s to v_k, and the edges (v_0, v_1), (v_1, v_2), ..., (v_{k−1}, v_k) are relaxed in this order, then v_k.d = δ(s, v_k) at all times after this relaxation sequence. The property holds regardless of any other relaxations that are interleaved with the ordered sequence.

     Proof: We achieve the final result by showing that v_i.d = δ(s, v_i) after the i-th relaxation in the sequence. We note that the existence of a shortest path p : s ⇝ v_k implies that no negative-weight cycle is touched on any path from s to v_k. Hence δ(s, s) = 0 = s.d after initialization, and, via the upper-bound property, at all times thereafter. This gives the desired result for i = 0. Proceeding by induction, we assume that v_{i−1}.d = δ(s, v_{i−1}) when we perform a subsequent relaxation of edge (v_{i−1}, v_i). That relaxation must occur after unpacking v_i from the adjacency list of v_{i−1}. Then, since v_{i−1}.d = δ(s, v_{i−1}) at that time, the convergence property forces v_i.d = δ(s, v_i) after the relaxation. Moreover, relaxations interleaved between the point where v_{i−1}.d = δ(s, v_{i−1}) and the specific relaxation that forces v_i.d = δ(s, v_i) have no bearing on this argument.

     Lemma 24.17 (Predecessor-subgraph property): In the absence of negative-weight cycles, once v.d = δ(s, v) is established for all v ∈ V, the G_π graph is a shortest-path tree rooted at s. Specifically, G_π = (V_π, E_π) satisfies

     1. V_π = { v ∈ V : v is reachable from s }.
     2. G_π = (V_π, E_π) is a tree rooted at s.
     3. For all v ∈ V_π, the unique simple path p : s ⇝ v in G_π is a shortest path from s to v.

     Proof: deferred to chapter's end.
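Once the predecessor subgraph is a shortest-path tree, walking the v.π pointers back to s recovers a shortest path. A sketch, using an assumed predecessor dict pi from some finished SSSP run (the vertex names are illustrative only):

```python
def path_from_predecessors(pi, s, v):
    """Walk v.pi pointers back to s; return the s-to-v path, or None if v is unreachable."""
    path = []
    while v is not None:
        path.append(v)
        if v == s:
            return path[::-1]  # reverse: vertices were collected from v back to s
        v = pi[v]
    return None  # hit a null predecessor before reaching s

# Assumed predecessor map: s -> a -> b is a shortest-path chain; c is unreachable.
pi = {"s": None, "a": "s", "b": "a", "c": None}
print(path_from_predecessors(pi, "s", "b"))  # ['s', 'a', 'b']
print(path_from_predecessors(pi, "s", "c"))  # None
```

The walk terminates because G_π is a tree rooted at s: each step moves strictly closer to the root.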

  8. Bellman-Ford(G = (V, E), w : E → R, s ∈ V) {
         Initialize(G, s);
         for (i = 1 to |V| − 1)
             for ((u, v) ∈ E)
                 Relax((u, v), w);
         for ((u, v) ∈ E)
             if v.d > u.d + w(u, v)
                 return false;
         return true;
     }

     Observations:

     1. A false return implies a negative-weight cycle reachable from s: to be proved.
     2. Running time is Θ(V E + E) = Θ(V E): obvious.
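The pseudocode above can be sketched as a runnable Python function: |V| − 1 passes of relaxation over every edge, then one checking pass for a still-relaxable edge. The graph representation (vertex list plus weight dict) and the example weights are assumptions for illustration.

```python
INF = float("inf")

def bellman_ford(V, w, s):
    """Return (ok, d, pi); ok is False iff a negative-weight cycle is detected."""
    d = {v: INF for v in V}   # Initialize(G, s)
    pi = {v: None for v in V}
    d[s] = 0
    for _ in range(len(V) - 1):          # |V| - 1 passes
        for (u, v), wt in w.items():     # relax every edge
            if d[v] > d[u] + wt:
                d[v] = d[u] + wt
                pi[v] = u
    for (u, v), wt in w.items():         # a still-relaxable edge => negative cycle
        if d[v] > d[u] + wt:
            return False, d, pi
    return True, d, pi

V = ["s", "a", "b"]
w = {("s", "a"): 3, ("a", "b"): -1, ("s", "b"): 5}
ok, d, pi = bellman_ford(V, w, "s")
print(ok, d["b"])  # True 2
```

The doubly nested loop makes the Θ(V E) cost visible: each of the |V| − 1 passes touches all |E| edges, and the final check adds only Θ(E).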

  9. Bellman-Ford(G = (V, E), w : E → R, s ∈ V) {
         Initialize(G, s);
         for (i = 1 to |V| − 1)
             for ((u, v) ∈ E)
                 Relax((u, v), w);
         for ((u, v) ∈ E)
             if v.d > u.d + w(u, v)
                 return false;
         return true;
     }

     Lemma 24.2: Let [G = (V, E), w : E → R] be a weighted, directed graph with no negative-weight cycles reachable from s. After termination of the first for-loop in Bellman-Ford, we have v.d = δ(s, v) for all vertices v reachable from s.

     Proof: Suppose v is reachable via a shortest path p = (s = v_0, v_1, ..., v_k = v). As every edge is relaxed in each iteration i = 1, 2, ..., |V| − 1, the edges of path p appear, in order, within the overall relaxation sequence:

         Initialize ... Relax((v_0, v_1), w) ... Relax((v_1, v_2), w) ... Relax((v_2, v_3), w) ... ... Relax((v_{k−1}, v_k), w)

     By the path-relaxation property, v_i.d = δ(s, v_i) for i = 0, 1, 2, ..., k at the conclusion. And k ≤ |V| − 1, since any shortest path has |V| − 1 edges or fewer.
