  1. CSC373 Week 4: Dynamic Programming (contd), Network Flow (start) (Nisarg Shah, 373F19)

  2. Recap
  • Dynamic Programming Basics
    ➢ Optimal substructure property
    ➢ Bellman equation
    ➢ Top-down (memoization) vs bottom-up implementations
  • Dynamic Programming Examples
    ➢ Weighted interval scheduling
    ➢ Knapsack problem
    ➢ Single-source shortest paths
    ➢ Chain matrix product
    ➢ Edit distance (aka sequence alignment)

  3. This Lecture
  • Some more DP
    ➢ Traveling salesman problem (TSP)
  • Start of network flow
    ➢ Problem statement
    ➢ Ford-Fulkerson algorithm
    ➢ Running time
    ➢ Correctness

  4. Traveling Salesman
  • Input
    ➢ Directed graph 𝐺 = (𝑉, 𝐸)
    ➢ Distance 𝑑_{𝑖,𝑗} is the distance from node 𝑖 to node 𝑗
  • Output
    ➢ Minimum distance which needs to be traveled to start from some node 𝑣, visit every other node exactly once, and come back to 𝑣
      o That is, the minimum cost of a Hamiltonian cycle

  5. Traveling Salesman
  • Approach
    ➢ Let's start at node 𝑣_1 = 1
      o It's a cycle, so the starting point does not matter
    ➢ Want to visit the other nodes in some order, say 𝑣_2, …, 𝑣_𝑛
    ➢ Total distance is 𝑑_{1,𝑣_2} + 𝑑_{𝑣_2,𝑣_3} + ⋯ + 𝑑_{𝑣_{𝑛−1},𝑣_𝑛} + 𝑑_{𝑣_𝑛,1}
      o Want to minimize this distance
  • Naïve solution
    ➢ Check all possible orderings: (𝑛 − 1)! = Θ(𝑛^𝑛 ⁄ 𝑒^𝑛) (Stirling's approximation)
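
For concreteness, a minimal brute-force sketch of the naïve solution in Python; the 0-indexed distance matrix `dist` (node 0 playing the role of node 1) is an assumption for illustration:

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Try all (n-1)! orderings of the non-start nodes, keeping node 0
    fixed as the start and end of the cycle."""
    n = len(dist)
    best = float("inf")
    for order in permutations(range(1, n)):
        tour = (0,) + order + (0,)
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        best = min(best, cost)
    return best
```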

  6. Traveling Salesman
  • DP Approach
    ➢ Consider 𝑣_𝑛 (the last node before returning to 𝑣_1 = 1)
      o If 𝑣_𝑛 = 𝑐:
        • Find the optimal order of visiting the nodes in {2, …, 𝑛} ∖ {𝑐} and then ending at 𝑐
        • Need to keep track of the subset of nodes visited and the end node
    ➢ OPT[𝑆, 𝑐] = minimum total travel distance when starting at 1, visiting each node in 𝑆 exactly once, and ending at 𝑐 ∈ 𝑆
    ➢ The original answer is min_{𝑐 ∈ 𝑆} OPT[𝑆, 𝑐] + 𝑑_{𝑐,1}, where 𝑆 = {2, …, 𝑛}

  7. Traveling Salesman
  • DP Approach
    ➢ To compute OPT[𝑆, 𝑐], we condition on the vertex which is visited right before 𝑐
  • Bellman equation
    ➢ OPT[𝑆, 𝑐] = min_{𝑚 ∈ 𝑆 ∖ {𝑐}} OPT[𝑆 ∖ {𝑐}, 𝑚] + 𝑑_{𝑚,𝑐}
    ➢ Final solution = min_{𝑐 ∈ {2, …, 𝑛}} OPT[{2, …, 𝑛}, 𝑐] + 𝑑_{𝑐,1}
  • Time: 𝑂(𝑛 ⋅ 2^𝑛) calls, 𝑂(𝑛) time per call ⇒ 𝑂(𝑛^2 ⋅ 2^𝑛)
    ➢ Much better than the naïve solution, which takes roughly 𝑛^𝑛 ⁄ 𝑒^𝑛 time
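
A bottom-up Python sketch of this recurrence (often called the Held-Karp algorithm); the bitmask encoding of subsets and the 0-indexed distance matrix `dist` are assumptions for illustration, with node 0 playing the role of node 1:

```python
def tsp_held_karp(dist):
    """OPT[S][c] = min cost of starting at node 0, visiting every node in S
    exactly once, and ending at c in S; S is a bitmask over nodes 1..n-1."""
    n = len(dist)
    full = 1 << (n - 1)              # all subsets of {1, ..., n-1}
    INF = float("inf")
    OPT = [[INF] * n for _ in range(full)]
    for c in range(1, n):            # base case: S = {c}, path 0 -> c
        OPT[1 << (c - 1)][c] = dist[0][c]
    for S in range(full):
        for c in range(1, n):
            if OPT[S][c] == INF or not (S >> (c - 1)) & 1:
                continue
            for m in range(1, n):    # extend the path ending at c by node m
                if (S >> (m - 1)) & 1:
                    continue
                T = S | (1 << (m - 1))
                OPT[T][m] = min(OPT[T][m], OPT[S][c] + dist[c][m])
    # close the cycle by returning to node 0
    return min(OPT[full - 1][c] + dist[c][0] for c in range(1, n))
```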

  8. Traveling Salesman
  • Bellman equation
    ➢ OPT[𝑆, 𝑐] = min_{𝑚 ∈ 𝑆 ∖ {𝑐}} OPT[𝑆 ∖ {𝑐}, 𝑚] + 𝑑_{𝑚,𝑐}
    ➢ Final solution = min_{𝑐 ∈ {2, …, 𝑛}} OPT[{2, …, 𝑛}, 𝑐] + 𝑑_{𝑐,1}
  • Space complexity: 𝑂(𝑛 ⋅ 2^𝑛)
    ➢ But computing the optimal solutions with |𝑆| = 𝑘 only requires storing the optimal solutions with |𝑆| = 𝑘 − 1
  • Question:
    ➢ Using this observation, how much can we reduce the space complexity?
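
A hedged sketch of how the observation can be used: process subsets in increasing order of |𝑆| and keep only the previous layer in memory (how much space this saves overall is exactly the question above). The dict-of-frozensets representation is an illustration, not the slides' implementation:

```python
from itertools import combinations

def tsp_layered(dist):
    """Same recurrence as above, filled layer by layer over |S|,
    so only the layer for subsets of size k-1 is kept in memory."""
    n = len(dist)
    nodes = range(1, n)
    # layer |S| = 1: paths 0 -> c
    prev = {(frozenset([c]), c): dist[0][c] for c in nodes}
    for k in range(2, n):
        curr = {}
        for S in combinations(nodes, k):
            Sset = frozenset(S)
            for c in S:
                rest = Sset - {c}
                curr[(Sset, c)] = min(prev[(rest, m)] + dist[m][c]
                                      for m in rest)
        prev = curr
    full = frozenset(nodes)
    return min(prev[(full, c)] + dist[c][0] for c in full)
```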

  9. DP Concluding Remarks
  • Key steps in designing a DP algorithm
    ➢ "Generalize" the problem first
      o E.g. instead of computing the edit distance between strings 𝑋 = 𝑥_1, …, 𝑥_𝑚 and 𝑌 = 𝑦_1, …, 𝑦_𝑛, we compute 𝐸[𝑖, 𝑗] = edit distance between the 𝑖-prefix of 𝑋 and the 𝑗-prefix of 𝑌 for all (𝑖, 𝑗)
      o The right generalization is often obtained by looking at the structure of the "subproblem" which must be solved optimally to get an optimal solution to the overall problem
    ➢ Remember the difference between DP and divide-and-conquer
    ➢ Sometimes you can save quite a bit of space by only storing solutions to those subproblems that you need in the future
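
As a refresher on the generalization mentioned above, a hedged sketch of the edit-distance DP (unit costs for insert/delete/mismatch are assumed), which also illustrates the space-saving remark by keeping only two rows of the table:

```python
def edit_distance(X, Y):
    """E[i][j] = edit distance between the i-prefix of X and the j-prefix
    of Y; only the previous and current rows are kept in memory."""
    m, n = len(X), len(Y)
    prev = list(range(n + 1))          # row i = 0: insert all of Y's j-prefix
    for i in range(1, m + 1):
        curr = [i] + [0] * n           # E[i][0] = i (delete the i-prefix of X)
        for j in range(1, n + 1):
            cost = 0 if X[i - 1] == Y[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # delete x_i
                          curr[j - 1] + 1,     # insert y_j
                          prev[j - 1] + cost)  # match / mismatch
        prev = curr
    return prev[n]
```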

  10. Network Flow

  11. Network Flow
  • Input
    ➢ A directed graph 𝐺 = (𝑉, 𝐸)
    ➢ Edge capacities 𝑐 : 𝐸 → ℝ_{≥0}
    ➢ Source node 𝑠, target node 𝑡
  • Output
    ➢ Maximum "flow" from 𝑠 to 𝑡

  12. Network Flow
  • Assumptions
    ➢ For simplicity, assume that…
    ➢ No edges enter 𝑠
    ➢ No edges leave 𝑡
    ➢ Edge capacity 𝑐(𝑒) is a non-negative integer
      o Later, we'll see what happens when 𝑐(𝑒) can be a rational number

  13. Network Flow
  • Flow
    ➢ An 𝑠-𝑡 flow is a function 𝑓 : 𝐸 → ℝ_{≥0}
    ➢ Intuitively, 𝑓(𝑒) is the "amount of material" carried on edge 𝑒

  14. Network Flow
  • Constraints on flow 𝑓
    1. Respecting capacities: ∀𝑒 ∈ 𝐸 : 0 ≤ 𝑓(𝑒) ≤ 𝑐(𝑒)
    2. Flow conservation: ∀𝑣 ∈ 𝑉 ∖ {𝑠, 𝑡} : Σ_{𝑒 entering 𝑣} 𝑓(𝑒) = Σ_{𝑒 leaving 𝑣} 𝑓(𝑒)
       (flow in = flow out at every node other than 𝑠 and 𝑡)
  • Note: flow out at 𝑠 = flow in at 𝑡

  15. Network Flow
  • 𝑓_in(𝑣) = Σ_{𝑒 entering 𝑣} 𝑓(𝑒)
  • 𝑓_out(𝑣) = Σ_{𝑒 leaving 𝑣} 𝑓(𝑒)
  • Value of flow 𝑓 is 𝑣(𝑓) = 𝑓_out(𝑠) = 𝑓_in(𝑡)
  • Restating the problem:
    ➢ Given a directed graph 𝐺 = (𝑉, 𝐸) with edge capacities 𝑐 : 𝐸 → ℝ_{≥0}, find a flow 𝑓* with the maximum value.
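
To make the two constraints and the value definition concrete, a hedged Python sketch; the representation of capacities and flow as dicts keyed by edge tuples is an assumption for illustration:

```python
def is_valid_flow(capacity, flow, s, t):
    """Check both flow constraints. capacity, flow: dicts keyed by edge (u, v)."""
    # 1. Respecting capacities: 0 <= f(e) <= c(e) for every edge e
    if any(not (0 <= flow[e] <= capacity[e]) for e in capacity):
        return False
    # 2. Flow conservation at every node other than s and t
    nodes = {x for e in capacity for x in e}
    for v in nodes - {s, t}:
        f_in = sum(flow[(a, b)] for (a, b) in capacity if b == v)
        f_out = sum(flow[(a, b)] for (a, b) in capacity if a == v)
        if f_in != f_out:
            return False
    return True

def flow_value(capacity, flow, s):
    """Value of the flow: flow out of the source s (no edges enter s by assumption)."""
    return sum(flow[(a, b)] for (a, b) in capacity if a == s)
```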

  16. First Attempt
  • A natural greedy approach
    1. Start from zero flow (𝑓(𝑒) = 0 for each 𝑒).
    2. While there exists an 𝑠-𝑡 path 𝑃 in 𝐺 such that 𝑓(𝑒) < 𝑐(𝑒) for each 𝑒 ∈ 𝑃:
       a. Find one such path 𝑃
       b. Increase the flow on each edge 𝑒 ∈ 𝑃 by min_{𝑒 ∈ 𝑃} (𝑐(𝑒) − 𝑓(𝑒))
  • Let's run it on an example!
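
A hedged Python sketch of this first attempt, using BFS to find an 𝑠-𝑡 path whose edges all still have spare capacity in the original graph; the dict-based capacity representation is an assumption:

```python
from collections import deque

def greedy_max_flow(capacity, s, t):
    """First attempt: repeatedly find an s-t path on which every edge has
    f(e) < c(e), then push its bottleneck. capacity: dict (u, v) -> c(e)."""
    flow = {e: 0 for e in capacity}
    adj = {}
    for (u, v) in capacity:
        adj.setdefault(u, []).append(v)

    def find_path():
        # BFS over edges of the ORIGINAL graph that still have spare capacity
        parent = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj.get(u, []):
                if v not in parent and flow[(u, v)] < capacity[(u, v)]:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return None
        path, node = [], t
        while parent[node] is not None:
            path.append((parent[node], node))
            node = parent[node]
        return path[::-1]

    while True:
        path = find_path()
        if path is None:
            break
        bottleneck = min(capacity[e] - flow[e] for e in path)
        for e in path:
            flow[e] += bottleneck
    return flow, sum(flow[e] for e in flow if e[0] == s)
```

Because the flow on an edge is only ever increased, this procedure can get stuck at a sub-optimal flow, which is exactly the failure mode discussed on slide 24.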

  17.–23. First Attempt: step-by-step run of the greedy approach on an example network (figures only; no additional text on these slides)

  24. First Attempt
  • Q: Why does the simple greedy approach fail?
  • A: Because once it increases the flow on an edge, it is not allowed to decrease it.
  • Need a way to "reverse" bad decisions

  25. Reversing Bad Decisions
  • Suppose we start by sending 20 units of flow along this path (left figure)
  • But the optimal configuration requires 10 fewer units of flow on 𝑢 → 𝑣 (right figure)
  o Figures: two copies of a small network on nodes 𝑠, 𝑢, 𝑣, 𝑡 with flow/capacity labels on each edge (omitted here)

  26. Reversing Bad Decisions
  • We can essentially send a "reverse" flow of 10 units along 𝑣 → 𝑢 (left figure)
  • So now we get this optimal flow (right figure)
  o Figures: the same network with updated flow/capacity labels (omitted here)

  27. Residual Graph
  • Suppose the current flow is 𝑓
  • Define the residual graph 𝐺_𝑓 of flow 𝑓
    ➢ 𝐺_𝑓 has the same vertices as 𝐺
    ➢ For each edge 𝑒 = (𝑢, 𝑣) in 𝐺, 𝐺_𝑓 has at most two edges
      o Forward edge 𝑒 = (𝑢, 𝑣) with capacity 𝑐(𝑒) − 𝑓(𝑒)
        • We can send this much additional flow on 𝑒
      o Reverse edge 𝑒_rev = (𝑣, 𝑢) with capacity 𝑓(𝑒)
        • The maximum "reverse" flow we can send is the maximum amount by which we can reduce the flow on 𝑒, which is 𝑓(𝑒)
      o We only add edges of capacity > 0
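
Continuing the same dict-based representation assumed earlier, a hedged sketch of constructing the residual capacities:

```python
def residual_graph(capacity, flow):
    """For each edge (u, v): add a forward edge with capacity c(e) - f(e) if
    positive, and a reverse edge (v, u) with capacity f(e) if positive."""
    residual = {}
    for (u, v), cap in capacity.items():
        if cap - flow[(u, v)] > 0:
            residual[(u, v)] = residual.get((u, v), 0) + (cap - flow[(u, v)])
        if flow[(u, v)] > 0:
            residual[(v, u)] = residual.get((v, u), 0) + flow[(u, v)]
    return residual
```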

  28. Residual Graph
  • Example!
  o Figures: a flow 𝑓 (left) and its residual graph 𝐺_𝑓 (right) on nodes 𝑠, 𝑢, 𝑣, 𝑡; edge labels omitted here

  29. Augmenting Paths
  • Let 𝑃 be an 𝑠-𝑡 path in the residual graph 𝐺_𝑓
  • Let bottleneck(𝑃, 𝑓) be the smallest capacity across all edges in 𝑃
  • "Augment" flow 𝑓 by "sending" bottleneck(𝑃, 𝑓) units of flow along 𝑃
    ➢ What does it mean to send 𝑥 units of flow along 𝑃?
    ➢ For each forward edge 𝑒 ∈ 𝑃, increase the flow on 𝑒 by 𝑥
    ➢ For each reverse edge 𝑒_rev ∈ 𝑃, decrease the flow on 𝑒 by 𝑥
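
A hedged sketch of the augmentation step; the encoding of a residual path as (u, v, is_forward) triples is an assumption for illustration:

```python
def augment(flow, path, bottleneck):
    """Send `bottleneck` units of flow along a residual path.
    path: list of (u, v, is_forward) triples, where (u, v) is a residual edge."""
    for (u, v, is_forward) in path:
        if is_forward:
            flow[(u, v)] += bottleneck   # forward edge: increase f(e)
        else:
            flow[(v, u)] -= bottleneck   # reverse edge: decrease f(e) on original edge (v, u)
    return flow
```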

  30. Residual Graph
  • Example!
  o Figures: the same flow 𝑓 and residual graph 𝐺_𝑓 as before, with an 𝑠-𝑡 path 𝑃 highlighted in 𝐺_𝑓 ; send flow = bottleneck = 10 along 𝑃

  31. Residual Graph
  • Example!
  o Figures: the new flow 𝑓 after augmenting, and the new residual graph 𝐺_𝑓 ; there is no 𝑠-𝑡 path in 𝐺_𝑓 because there is no outgoing edge from 𝑠

  32. Augmenting Paths
  • Let's argue that the new flow is a valid flow
  • Capacity constraints (easy):
    ➢ If we increase the flow on 𝑒, we can do so by at most the capacity of the forward edge 𝑒 in 𝐺_𝑓, which is 𝑐(𝑒) − 𝑓(𝑒)
      o So the new flow can be at most 𝑓(𝑒) + (𝑐(𝑒) − 𝑓(𝑒)) = 𝑐(𝑒)
    ➢ If we decrease the flow on 𝑒, we can do so by at most the capacity of the reverse edge 𝑒_rev in 𝐺_𝑓, which is 𝑓(𝑒)
      o So the new flow is at least 𝑓(𝑒) − 𝑓(𝑒) = 0
