
Single Source Shortest Paths (SSSP)



1. Single Source Shortest Paths (SSSP)
Directed graph, edge weights.
Shortest path from s to v: a path p = v_0, v_1, ..., v_k of minimum weight w(p), where v_0 = s and v_k = v.
[Figure: example directed graph with non-negative edge weights and source s]
SSSP problem: compute a shortest path from the source s to every vertex v.
d(v) = distance (i.e., shortest-path weight) from s to v.

2. Single Source Shortest Paths (SSSP)
Directed graph, edge weights.
Shortest path from s to v: a path p = v_0, v_1, ..., v_k of minimum weight w(p), where v_0 = s and v_k = v.
[Figure: the same example graph, annotated with the computed distances d(v) = 3, 4, 8, and 10 for the non-source vertices]
SSSP problem: compute a shortest path from the source s to every vertex v.
d(v) = distance (i.e., shortest-path weight) from s to v.

3. Single Source Shortest Paths (SSSP)
Parent of a vertex: π(v) = the vertex just before v on the shortest path from s.
Shortest-paths tree: formed by the edges (π(v), v); the source has no parent, π(s) = null.
[Figure: the example graph with the shortest-paths tree edges highlighted and the parent π(v) listed for each vertex]

4. Single Source Shortest Paths (SSSP)
Temporary distances: d(v) = an upper bound on the weight of the shortest path from s to v.
Initialize: π(v) ← null, d(v) ← ∞ for all v ≠ s; π(s) ← null, d(s) ← 0.
Edge relaxation:
  relax(u, v):
    if d(v) > d(u) + w(u, v) then {
      d(v) ← d(u) + w(u, v)
      π(v) ← u
    }
[Figure: relaxing an edge (u, v) with d(u) = 5 and w(u, v) = 2: a tentative distance d(v) = 8 is improved to 7, while d(v) = 6 stays unchanged]
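A minimal Python sketch of this relaxation step, assuming tentative distances, parents, and edge weights are kept in plain dictionaries (the names d, parent, and w mirror the slide's notation, but the data layout is an assumption for illustration):

    import math

    def relax(u, v, d, parent, w):
        # d: tentative distances, parent: predecessor map, w: edge weights keyed by (u, v).
        # If going through u improves the tentative distance of v, update d and parent.
        if d.get(v, math.inf) > d.get(u, math.inf) + w[(u, v)]:
            d[v] = d[u] + w[(u, v)]
            parent[v] = u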

5. Single Source Shortest Paths (SSSP)
Dijkstra's Algorithm
• Used when edge weights are non-negative.
• It maintains a set S ⊆ V of vertices for which a shortest path has already been computed, i.e., the value d(v) is the exact weight of the shortest path to v.
• Each iteration selects a vertex u ∈ V \ S with minimum distance d(u), sets S ← S ∪ {u}, and relaxes all edges (u, v) leaving u.
• To find the vertex u with minimum d(u): use a priority queue Q with the values d(v) as keys.

6. Single Source Shortest Paths (SSSP)
Dijkstra's Algorithm
Initialization:
  π(v) ← null, d(v) ← ∞ for all v ≠ s
  π(s) ← null, d(s) ← 0
  set S ← ∅
  insert all vertices v into the priority queue Q with key d(v)
Main loop:
  while Q is not empty {
    u ← Q.delMin()
    S ← S ∪ {u}
    for all edges (u, v) { relax(u, v) }
  }

7. Single Source Shortest Paths (SSSP)
Dijkstra's Algorithm (same initialization and main loop as above).
Running time, depending on how the priority queue Q is implemented (n vertices, m edges):
  array: O(n²)
  binary heap: O(m log n)
  Fibonacci heap: O(m + n log n)
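A compact Python sketch of Dijkstra's algorithm using the standard-library heapq module as the priority queue. Since heapq has no decrease-key operation, this variant re-inserts vertices and skips stale entries (a common "lazy deletion" alternative to the pseudocode above); the dict-of-adjacency-lists graph representation is an assumption for illustration.

    import heapq

    def dijkstra(graph, s):
        # graph: dict mapping u -> list of (v, weight) pairs, weights non-negative
        d = {s: 0}          # tentative distances
        parent = {s: None}  # shortest-paths tree
        pq = [(0, s)]       # priority queue keyed by d(v)
        done = set()        # the set S of finished vertices
        while pq:
            dist_u, u = heapq.heappop(pq)            # u <- Q.delMin()
            if u in done:
                continue                              # stale entry, skip it
            done.add(u)                               # S <- S ∪ {u}
            for v, weight in graph.get(u, []):
                if d.get(v, float("inf")) > dist_u + weight:   # relax(u, v)
                    d[v] = dist_u + weight
                    parent[v] = u
                    heapq.heappush(pq, (d[v], v))
        return d, parent

For example, dijkstra({'s': [('t', 3), ('y', 5)], 't': [('y', 1)], 'y': []}, 's') returns the distances {'s': 0, 't': 3, 'y': 4}. With a binary heap this runs in O(m log n), matching the table above.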

8. Single Source Shortest Paths (SSSP) in Map-Reduce
➢ Not easy to parallelize Dijkstra's algorithm.
➢ Use an iterative approach instead.
• The distance d(v) from s to v is updated using the distances of all u with (u, v) ∈ E:
  d(v) ← min { d(u) + w(u, v) | (u, v) ∈ E }
• Need to communicate both distances and adjacency lists.
[Figure: a vertex v with incoming edges from its in-neighbors, each contributing d(u) + w(u, v)]

9. Single Source Shortest Paths (SSSP) in Map-Reduce
Mapper: emits distances and graph structure; for each out-neighbor x of v it emits the tentative distance d(v) + w(v, x).
Reducer: updates distances and emits the graph structure:
  d(v) ← min { d(u) + w(u, v) | (u, v) ∈ E }
[Figure: a vertex v emitting d(v) + w(v, x) to each out-neighbor x, and receiving d(u) + w(u, v) from each in-neighbor u]
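A plain-Python sketch of one round of this Map-Reduce computation (no Hadoop; the map and reduce phases are simulated in-process to show what gets emitted). Node records are assumed to be (node, (distance, adjacency list)) pairs; the function and field names are illustrative.

    import math
    from collections import defaultdict

    def sssp_map(node, dist, adj):
        # Emit the graph structure so the reducer can pass it on,
        # plus a tentative distance for every out-neighbor.
        yield node, ("adj", adj)
        yield node, ("dist", dist)
        for neighbor, weight in adj:
            yield neighbor, ("dist", dist + weight)

    def sssp_reduce(node, values):
        # Keep the adjacency list and the minimum of all tentative distances.
        adj, best = [], math.inf
        for kind, payload in values:
            if kind == "adj":
                adj = payload
            else:
                best = min(best, payload)
        return node, (best, adj)

    def sssp_round(records):
        # records: dict node -> (dist, adj); returns the records after one round.
        grouped = defaultdict(list)
        for node, (dist, adj) in records.items():
            for key, value in sssp_map(node, dist, adj):
                grouped[key].append(value)
        return dict(sssp_reduce(node, vals) for node, vals in grouped.items())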

10. Single Source Shortest Paths (SSSP) in Map-Reduce
➢ Not easy to parallelize Dijkstra's algorithm.
➢ Use an iterative approach instead.
• The distance d(v) from s to v is updated using the distances of all u with (u, v) ∈ E.
• Need to communicate both distances and adjacency lists.
• Repeat rounds until all distances are fixed.
• Number of rounds = n − 1 in the worst case.
• If all weights are equal, then we compute the Breadth-First Search (BFS) tree; number of rounds = graph diameter.
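A sketch of the outer driver loop that repeats rounds until no distance changes (at most n − 1 rounds, as noted above), reusing the hypothetical sssp_round from the previous sketch:

    import math

    def sssp(adjacency, source):
        # adjacency: dict node -> list of (neighbor, weight); every node is a key.
        records = {v: (0 if v == source else math.inf, adj)
                   for v, adj in adjacency.items()}
        for _ in range(len(adjacency) - 1):      # at most n - 1 rounds
            updated = sssp_round(records)
            if updated == records:               # all distances are fixed
                break
            records = updated
        return {v: dist for v, (dist, _) in records.items()}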

11. BFS in Map-Reduce

12. Single Source Shortest Paths (SSSP) in Map-Reduce
Remarks on the Map-Reduce SSSP algorithm:
• Essentially a brute-force algorithm.
• Performs many unnecessary computations.
• No global data structure.

13. PageRank in Map-Reduce
Recall the formula for the PageRank R(v) of a webpage v:
  R(v) = d · Σ_{u ∈ B_v} R(u) / N_u + (1 − d) · E_v
  B_v = set of pages that point to v
  F_v = set of pages that v points to
  |F_v| = N_v = number of links from v
  E_v = a probability distribution over web pages
  E_v and d are user-designed parameters.
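A direct (non-Map-Reduce) Python rendering of this update formula, as a reference point before parallelizing it; in_links, out_degree, E, and d are illustrative names for B_v, N_u, E_v, and the damping factor:

    def pagerank_update(R, in_links, out_degree, E, d):
        # R: dict page -> current PageRank value
        # in_links[v]: pages u that point to v (the set B_v)
        # out_degree[u]: number of links from u (N_u)
        # E[v]: user-chosen probability for page v; d: damping factor
        return {v: d * sum(R[u] / out_degree[u] for u in in_links[v])
                   + (1 - d) * E[v]
                for v in R}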

14. PageRank in Map-Reduce
Iterative computation:
• Start with seed values R_0(v) for each page v.
• In each iteration, each page v receives credit from the pages in B_v, distributes credit to the pages in F_v, and computes R_{i+1}(v).
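A plain-Python sketch of one Map-Reduce PageRank iteration, in the same style as the SSSP sketch above: the mapper distributes a page's credit over its out-links and re-emits the link structure, and the reducer sums the incoming credit. The names and the simple (1 − d) · E[v] handling (no treatment of dangling pages) are illustrative assumptions.

    from collections import defaultdict

    def pagerank_map(page, rank, out_links):
        yield page, ("links", out_links)             # pass the graph structure along
        for target in out_links:                     # distribute credit to pages in F_v
            yield target, ("credit", rank / len(out_links))

    def pagerank_reduce(page, values, E, d):
        out_links, credit = [], 0.0
        for kind, payload in values:
            if kind == "links":
                out_links = payload
            else:
                credit += payload                    # credit received from pages in B_v
        return page, (d * credit + (1 - d) * E[page], out_links)

    def pagerank_round(records, E, d=0.85):
        # records: dict page -> (rank, out_links); returns records after one iteration.
        grouped = defaultdict(list)
        for page, (rank, out_links) in records.items():
            for key, value in pagerank_map(page, rank, out_links):
                grouped[key].append(value)
        return dict(pagerank_reduce(p, vals, E, d) for p, vals in grouped.items())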

15. PageRank in Map-Reduce

16. Algorithms and Complexity in MapReduce (and related models)
• Sorting, Searching, and Simulation in the MapReduce Framework. M. T. Goodrich, N. Sitchinava, and Q. Zhang. ISAAC 2011.
• Fast Greedy Algorithms in MapReduce and Streaming. R. Kumar, B. Moseley, S. Vassilvitskii, and A. Vattani. SPAA 2013.
• On the Computational Complexity of MapReduce. B. Fish, J. Kun, A. D. Lelkes, L. Reyzin, and G. Turan. DISC 2015.

17. BSP model
L. G. Valiant, A Bridging Model for Parallel Computation, Communications of the ACM, 1990.
A computational model of parallel computation; BSP is a parallel programming model based on Synchronizer Automata. The model consists of:
• A set of processor-memory pairs.
• A communications network that delivers messages in a point-to-point manner.
• A mechanism for efficient barrier synchronization of all or a subset of the processes.
• No special combining, replicating, or broadcasting facilities.

18. BSP model
• Vertical structure
  – Supersteps: local computation, process communication, barrier synchronization.
• Horizontal structure
  – Concurrency among a fixed number of virtual processors.
  – Processes do not have a particular order.
  – Locality plays no role in the placement of processes on processors.
Implementation: BSPlib
[Figure: a superstep across the virtual processors: local computation, then global communication, then barrier synchronization]

19. MapReduce simulation of a BSP program
Simulation on MapReduce:
1. Create a tuple for each memory cell and processor.
2. Map each message to the destination processor label.
3. Reduce by performing one step of a processor, outputting the messages for the next round.
Theorem [Goodrich et al.]: Given a BSP algorithm A that runs in R supersteps with a total memory size N using P ≤ N processors, we can simulate A using O(R) rounds and message complexity O(R·N) in the memory-bound MapReduce framework with reducer memory size bounded by N/P.

20. MapReduce simulation of a BSP program
Same simulation and theorem as on the previous slide.
A corollary of the above: using the optimal BSP sorting algorithm of [Goodrich, 99], we can sort N values in the MapReduce framework in O(log_M N) rounds with O(N log_M N) message complexity, where M is the reducer memory bound.
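A toy Python sketch of how one BSP superstep can be phrased as a map (route each message to its destination processor) followed by a reduce (run one step per processor), following the three steps above. The step function and record layout are illustrative assumptions, not the construction from the paper.

    from collections import defaultdict

    def simulate_superstep(states, messages, step):
        # states: dict proc_id -> local memory; messages: list of (dest, payload).
        # step(local_state, inbox) -> (new_state, outgoing messages as (dest, payload)).
        inboxes = defaultdict(list)
        for dest, payload in messages:           # map: route each message to its destination
            inboxes[dest].append(payload)
        new_states, new_messages = {}, []
        for proc, state in states.items():       # reduce: perform one step of each processor
            new_state, outgoing = step(state, inboxes[proc])
            new_states[proc] = new_state
            new_messages.extend(outgoing)
        return new_states, new_messages          # input for the next round (after the barrier)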

21. Algorithms and Complexity in MapReduce (and related models)
Theorem [Fish et al.]: Any problem requiring sublogarithmic space, o(log n), can be solved in MapReduce in two rounds.
The proof is constructive: given a problem that classically takes less than logarithmic space, there is an automatic algorithm to implement it in MapReduce.
