Approximation and Randomized Algorithms

CSE 431/531: Analysis of Algorithms
Approximation and Randomized Algorithms
Lecturer: Shi Li, Department of Computer Science and Engineering, University at Buffalo


2-Approximation Algorithm for TSP

TSP1(G, w)
1: MST ← the minimum spanning tree of G w.r.t. weights w, returned by either Kruskal's algorithm or Prim's algorithm
2: output the tour formed by making two copies of each edge in MST

(The slide's figure shows an example graph on vertices a through k.)

2-Approximation Algorithm for TSP

Lemma: Algorithm TSP1 is a 2-approximation algorithm for TSP.

Proof. Let mst = cost of the minimum spanning tree and tsp = cost of the optimum travelling salesman tour. Then mst ≤ tsp, since removing one edge from the optimum travelling salesman tour results in a spanning tree. Let sol = cost of the tour given by algorithm TSP1. Then sol = 2 · mst ≤ 2 · tsp. □
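The two steps of TSP1 can be sketched in Python. This is an illustrative sketch, not code from the lecture: it assumes metric (symmetric, triangle-inequality) weights given as a matrix `w`, builds the MST with Prim's algorithm, and outputs the DFS preorder of the tree, which is the doubled-MST walk with repeated vertices shortcut.

```python
import heapq

def tsp1(n, w):
    """2-approximation for metric TSP on vertices 0..n-1.
    w[u][v] is the symmetric edge weight."""
    # Prim's algorithm for the MST
    adj = [[] for _ in range(n)]
    in_tree = [False] * n
    in_tree[0] = True
    pq = [(w[0][v], 0, v) for v in range(1, n)]
    heapq.heapify(pq)
    while pq:
        _, u, v = heapq.heappop(pq)
        if in_tree[v]:
            continue                      # stale entry, v already in tree
        in_tree[v] = True
        adj[u].append(v); adj[v].append(u)
        for x in range(n):
            if not in_tree[x]:
                heapq.heappush(pq, (w[v][x], v, x))
    # DFS preorder = doubled-MST Euler walk with shortcuts
    tour, seen = [], [False] * n
    stack = [0]
    while stack:
        u = stack.pop()
        if seen[u]:
            continue
        seen[u] = True
        tour.append(u)
        stack.extend(reversed(adj[u]))
    return tour

def tour_cost(tour, w):
    return sum(w[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))
```

On the unit square (vertices at the four corners), the MST is a 3-edge path of cost 3 and the shortcut tour has cost 4, within the guaranteed factor 2 of mst.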

1.5-Approximation for TSP

Def. Given G = (V, E) and a set U ⊆ V with an even number of vertices, a matching M over U in G is a set of |U|/2 paths in G such that every vertex in U is an endpoint of some path.

Def. The cost of the matching M, denoted cost(M), is the total cost of all edges in the |U|/2 paths (counting multiplicities).

Theorem: Given G = (V, E) and a set U ⊆ V with an even number of vertices, the minimum-cost matching over U in G can be found in polynomial time.

1.5-Approximation for TSP

Lemma: Let T be a spanning tree of G = (V, E) and let U be the set of odd-degree vertices in T (|U| must be even; why?). Let M be a matching over U. Then T ⊎ M gives a travelling salesman tour.

Proof. Every vertex in T ⊎ M has even degree, and T ⊎ M is connected (since it contains the spanning tree). Thus T ⊎ M is an Eulerian graph, and we can find a tour that visits every edge in T ⊎ M exactly once. □
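The Eulerian-tour step in the proof can be made concrete with Hierholzer's algorithm. This is a generic sketch (not from the slides): it takes a connected, even-degree multigraph such as T ⊎ M as adjacency lists, with a neighbor repeated once per parallel edge, and returns a closed walk using every edge exactly once.

```python
def euler_tour(adj):
    """Hierholzer's algorithm. adj maps vertex -> list of neighbors of a
    connected multigraph in which every vertex has even degree.
    Returns a closed walk that uses every edge exactly once."""
    g = {u: list(vs) for u, vs in adj.items()}  # mutable copy
    start = next(iter(g))
    stack, tour = [start], []
    while stack:
        u = stack[-1]
        if g[u]:
            v = g[u].pop()
            g[v].remove(u)   # remove one copy of the reverse edge
            stack.append(v)
        else:
            tour.append(stack.pop())
    return tour
```

For example, doubling the edges of the path 0-1-2 (the TSP1 construction) gives an Eulerian multigraph, and the returned walk traverses all 4 edge copies and returns to its start.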

1.5-Approximation for TSP

Lemma: Let U be a set with an even number of vertices in G. Then the cost of the cheapest matching over U in G is at most (1/2) tsp.

Proof. Take the optimum travelling salesman tour. The points of U break the tour into |U| segments; coloring the segments alternately gives a red matching and a blue matching over U, with

  cost(blue matching) + cost(red matching) = tsp.

Thus cost(blue matching) ≤ (1/2) tsp or cost(red matching) ≤ (1/2) tsp, and in either case cost(cheapest matching) ≤ (1/2) tsp. □
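The red/blue decomposition in the proof is easy to check numerically. This illustrative sketch (not from the lecture) walks a closed tour, splits it at the vertices of U into segments, and sums the costs of the two alternating segment classes; their total is the tour cost, so the cheaper one costs at most half of it.

```python
def cheapest_alternating_matching(tour, w, U):
    """tour: closed tour as a vertex list; w: w[u][v] edge weights;
    U: even-size vertex subset. Returns (cheaper matching cost, tour cost)."""
    n = len(tour)
    pos = [i for i, v in enumerate(tour) if v in U]  # U positions in tour order
    seg_costs = []
    for a, b in zip(pos, pos[1:] + [pos[0] + n]):
        # cost of the tour segment between consecutive U-vertices
        seg_costs.append(sum(w[tour[i % n]][tour[(i + 1) % n]]
                             for i in range(a, b)))
    red = sum(seg_costs[0::2])    # alternate segments form the red matching
    blue = sum(seg_costs[1::2])   # and the blue matching
    return min(red, blue), red + blue   # red + blue = total tour cost
```

On a unit 4-cycle with U = {0, 2}, both matchings cost 2 and the tour costs 4, matching the (1/2) tsp bound exactly.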

Outline

1. Approximation Algorithms
2. Approximation Algorithms for the Traveling Salesman Problem
3. 2-Approximation Algorithm for Vertex Cover
4. 7/8-Approximation Algorithm for Max 3-SAT
5. Randomized Quicksort
   - Recap of Quicksort
   - Randomized Quicksort Algorithm
6. 2-Approximation Algorithm for (Weighted) Vertex Cover via Linear Programming
   - Linear Programming
   - 2-Approximation for Weighted Vertex Cover

Vertex Cover Problem

Def. Given a graph G = (V, E), a vertex cover of G is a subset S ⊆ V such that for every edge (u, v) ∈ E, u ∈ S or v ∈ S.

Vertex-Cover Problem
Input: G = (V, E)
Output: a vertex cover S with minimum |S|

First Try: Greedy Algorithm

Greedy Algorithm for Vertex-Cover
1: E′ ← E, S ← ∅
2: while E′ ≠ ∅
3:   let v be the vertex of maximum degree in (V, E′)
4:   S ← S ∪ {v}
5:   remove all edges incident to v from E′
6: output S

Theorem: The greedy algorithm is an O(lg n)-approximation for vertex-cover.

We are not going to prove the theorem. Instead, we show that the O(lg n) approximation ratio is tight for the algorithm.
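A direct transcription of the pseudocode into Python (an illustrative sketch; the vertex set V is left implicit in the edge list):

```python
def greedy_vertex_cover(E):
    """Greedy algorithm for Vertex-Cover: repeatedly add a vertex of
    maximum degree in the remaining graph and delete its incident edges."""
    E_rem = {frozenset(e) for e in E}   # E'
    S = set()
    while E_rem:                        # while E' is nonempty
        deg = {}
        for e in E_rem:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        v = max(deg, key=deg.get)       # vertex of maximum degree in (V, E')
        S.add(v)
        E_rem = {e for e in E_rem if v not in e}
    return S
```

On a star graph the greedy choice is ideal: the center covers every edge in one step.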

Bad Example for Greedy Algorithm

L: n′ vertices (|L| = n′)
R_2: ⌊n′/2⌋ vertices, each connected to 2 vertices in L
R_3: ⌊n′/3⌋ vertices, each connected to 3 vertices in L
R_4: ⌊n′/4⌋ vertices, each connected to 4 vertices in L
···
R_{n′}: 1 vertex, connected to n′ vertices in L
R = R_2 ∪ R_3 ∪ ··· ∪ R_{n′}

Bad Example for Greedy Algorithm

The optimum solution is L, where |L| = n′.
The greedy algorithm picks the vertices of R_{n′}, R_{n′−1}, ···, R_2, in this order.
Thus, the greedy algorithm outputs R, and

  |R| = Σ_{i=2}^{n′} ⌊n′/i⌋ ≥ Σ_{i=1}^{n′} (n′/i) − n′ − (n′ − 1) = n′ H(n′) − (2n′ − 1) = Ω(n′ lg n′),

where H(n′) = 1 + 1/2 + 1/3 + ··· + 1/n′ = Θ(lg n′) is the n′-th number in the harmonic sequence.

Bad Example for Greedy Algorithm

Let n = |L ∪ R| = Θ(n′ lg n′). Then lg n = Θ(lg n′), and

  |R| / |L| = Ω(n′ lg n′) / n′ = Ω(lg n′) = Ω(lg n).

Thus, the greedy algorithm does no better than Θ(lg n)-approximation on this family of instances.
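The lower-bound family can be built and run directly. In this sketch the vertex labels ('L', j) and ('R', i, k) are hypothetical naming conventions, and a self-contained copy of the greedy routine is included so the block runs standalone; on this instance every R_i vertex strictly dominates the remaining L degrees, so greedy ends up taking all of R even though L alone (n′ vertices) is a cover.

```python
def greedy_vc(E):
    # self-contained copy of the max-degree greedy from the lecture
    E = set(map(frozenset, E))
    S = set()
    while E:
        deg = {}
        for e in E:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        u = max(deg, key=deg.get)
        S.add(u)
        E = {e for e in E if u not in e}
    return S

def bad_instance(np_):
    """For each i = 2..np_, class R_i has floor(np_/i) vertices; the k-th
    one is joined to the i fresh L-vertices k*i .. (k+1)*i - 1."""
    E = []
    for i in range(2, np_ + 1):
        for k in range(np_ // i):
            for j in range(k * i, (k + 1) * i):
                E.append((('L', j), ('R', i, k)))
    return E

np_ = 30
S = greedy_vc(bad_instance(np_))
# greedy outputs all of R, of size sum_i floor(np_/i) ~ np_ * H(np_),
# while the optimum cover L has only np_ vertices
```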

The greedy algorithm is a very natural algorithm; it might be the first algorithm one comes up with. However, its approximation ratio is not so good. We now give a somewhat "counter-intuitive" algorithm for which we can prove a 2-approximation ratio.

2-Approximation Algorithm for Vertex Cover

1: E′ ← E, S ← ∅
2: while E′ ≠ ∅
3:   let (u, v) be any edge in E′
4:   S ← S ∪ {u, v}
5:   remove all edges incident to u and v from E′
6: output S

The counter-intuitive part: adding both u and v to S seems wasteful.

Intuition for the 2-approximation ratio: the optimum solution must cover the edge (u, v) using either u or v. If we select both, we are always ahead of the optimum solution. The approximation factor we lose is at most 2.

2-Approximation Algorithm for Vertex Cover

1: E′ ← E, S ← ∅
2: while E′ ≠ ∅
3:   let (u, v) be any edge in E′
4:   S ← S ∪ {u, v}
5:   remove all edges incident to u and v from E′
6: output S

Let E* be the set of edges (u, v) considered in Statement 3.
Observation: E* is a matching and |S| = 2|E*|.
To cover all edges in E*, the optimum solution needs at least |E*| vertices.

Theorem: The algorithm is a 2-approximation algorithm for vertex-cover.
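The matching-based algorithm in Python (a sketch; scanning the edges in any fixed order and skipping already-covered ones is equivalent to the while-loop, since a skipped edge is exactly one no longer in E′):

```python
def vertex_cover_2approx(E):
    """Pick any edge (u, v) not yet covered and add both endpoints.
    The chosen edges form a matching E*, so |S| = 2|E*| <= 2 * OPT."""
    S = set()
    for u, v in E:
        if u not in S and v not in S:   # (u, v) is still in E'
            S.update((u, v))
    return S
```

On a triangle the algorithm picks one edge and returns 2 vertices, while the optimum is 1, realizing the factor of 2 exactly.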


Max 3-SAT

Input: n boolean variables x_1, x_2, ···, x_n; m clauses, each a disjunction of 3 literals from 3 distinct variables
Output: an assignment satisfying as many clauses as possible

Example. Clauses: x_2 ∨ ¬x_3 ∨ ¬x_4, x_2 ∨ x_3 ∨ ¬x_4, ¬x_1 ∨ x_2 ∨ x_4, x_1 ∨ ¬x_2 ∨ x_3, ¬x_1 ∨ ¬x_2 ∨ ¬x_4. We can satisfy all 5 clauses: x = (1, 1, 1, 0, 1).

Randomized Algorithm for Max 3-SAT

Simple idea: randomly set each variable x_u = 1 with probability 1/2, independently of the other variables.

Lemma: Let m be the number of clauses. Then, in expectation, (7/8)m clauses will be satisfied.

Proof.
For each clause C_j, let Z_j = 1 if C_j is satisfied and 0 otherwise.
Z = Σ_{j=1}^m Z_j is the total number of satisfied clauses.
E[Z_j] = 7/8: out of the 8 possible assignments to the 3 variables in C_j, 7 of them make C_j satisfied.
E[Z] = E[Σ_{j=1}^m Z_j] = Σ_{j=1}^m E[Z_j] = Σ_{j=1}^m 7/8 = (7/8)m, by linearity of expectation. □
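The random-assignment idea and the 7/8 expectation are easy to check empirically. The sketch below uses an illustrative encoding of my own (a clause is a 3-tuple of nonzero ints, with v > 0 meaning x_v and v < 0 meaning ¬x_v) and samples assignments for the slide's 5-clause example, whose expected number of satisfied clauses is (7/8) · 5 = 4.375.

```python
import random

def random_assignment(n):
    # set each x_u = 1 with probability 1/2, independently
    return [random.random() < 0.5 for _ in range(n)]

def num_satisfied(clauses, x):
    def lit(v):  # value of the literal v under assignment x (1-indexed vars)
        return x[abs(v) - 1] if v > 0 else not x[abs(v) - 1]
    return sum(any(lit(v) for v in c) for c in clauses)

# the slide's example over x1..x4
clauses = [(2, -3, -4), (2, 3, -4), (-1, 2, 4), (1, -2, 3), (-1, -2, -4)]
random.seed(0)
trials = 20000
avg = sum(num_satisfied(clauses, random_assignment(4))
          for _ in range(trials)) / trials
# avg should be close to (7/8) * 5 = 4.375
```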

Randomized Algorithm for Max 3-SAT

Lemma: Let m be the number of clauses. Then, in expectation, (7/8)m clauses will be satisfied.

Since the optimum solution can satisfy at most m clauses, the lemma gives a randomized 7/8-approximation for Max 3-SAT.

Theorem ([Håstad 97]): Unless P = NP, there is no ρ-approximation algorithm for Max 3-SAT for any ρ > 7/8.
