CS 473: Algorithms, Fall 2016
Lecture 24: Heuristics, Approximation Algorithms
Nov 18, 2016
Chandra & Ruta (UIUC)
Part I: Heuristics
Coping with Intractability

Question: Many useful/important problems are NP-Hard or worse. How does one cope with them?

Some general things that people do:
1. Consider special cases of the problem which may be tractable.
2. Run inefficient algorithms (for example, exponential-time algorithms for NP-Hard problems) augmented with (very) clever heuristics:
   - stop the algorithm when time/resources run out
   - use massive computational power
3. Exploit properties of instances that arise in practice, which may be much easier. Give up on hard instances, which is OK.
4. Settle for sub-optimal (aka approximate) solutions, especially for optimization problems.
NP and EXP

EXP: all problems that have an exponential-time algorithm.

Proposition: NP ⊆ EXP.

Proof. Let X ∈ NP with certifier C. To prove X ∈ EXP, here is an algorithm for X. Given input s:
1. For every t with |t| ≤ p(|s|), run C(s, t); answer "yes" if any one of these calls returns "yes", otherwise say "no".

Every problem in NP has a brute-force "try all possibilities" algorithm that runs in exponential time.
Examples

1. SAT: try all possible truth assignments to the variables.
2. Independent set: try all possible subsets of vertices.
3. Vertex cover: try all possible subsets of vertices.
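For SAT, the brute-force algorithm can be sketched as follows. This is an illustrative sketch, not part of the lecture; the clause encoding (a clause is a list of nonzero integers, where literal k means variable x_k and -k means its negation) is an assumption chosen for simplicity.

```python
from itertools import product

def brute_force_sat(clauses, n):
    """Try all 2^n truth assignments for variables x_1, ..., x_n.
    A clause is a list of nonzero ints: k means x_k, -k means not x_k."""
    for assignment in product([False, True], repeat=n):
        def lit_true(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        # The formula is satisfied iff every clause has some true literal.
        if all(any(lit_true(l) for l in clause) for clause in clauses):
            return True
    return False
```

For n variables this takes Ω(2^n) time in the worst case, which is exactly the exponential-time bound in the proposition above.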
Improving brute-force via intelligent backtracking

1. Backtrack search: enumeration with bells and whistles to "heuristically" cut down the search space.
2. Works quite well in practice for several problems, especially for small enough problem sizes.
Backtrack Search Algorithm for SAT

Input: CNF formula ϕ on n variables x1, ..., xn and m clauses.
Output: Is ϕ satisfiable or not?

1. Pick a variable xi.
2. Let ϕ′ be the CNF formula obtained by setting xi = 0 and simplifying.
3. Run a simple (heuristic) check on ϕ′: it returns "yes", "no" or "not sure".
   - If "not sure", recursively solve ϕ′.
   - If ϕ′ is satisfiable, return "yes".
4. Let ϕ′′ be the CNF formula obtained by setting xi = 1 and simplifying.
5. Run the simple check on ϕ′′: it returns "yes", "no" or "not sure".
   - If "not sure", recursively solve ϕ′′.
   - If ϕ′′ is satisfiable, return "yes".
6. Return "no".

A certain part of the search space is pruned.
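The steps above can be sketched in code as follows. This is a deliberately naive illustration (not the lecture's own implementation): the quick check is only the obvious empty-clause/no-clauses test, and the variable choice is the first variable seen rather than a real heuristic.

```python
def simplify(clauses, var, value):
    """Set variable `var` (1-based) to `value`: drop satisfied clauses,
    delete falsified literals from the remaining ones."""
    satisfied_lit = var if value else -var
    out = []
    for clause in clauses:
        if satisfied_lit in clause:
            continue                         # clause already satisfied; drop it
        out.append([l for l in clause if abs(l) != var])
    return out

def quick_check(clauses):
    """The 'obvious test': empty clause -> no, no clauses -> yes."""
    if not clauses:
        return "yes"
    if any(len(c) == 0 for c in clauses):
        return "no"
    return "not sure"

def backtrack_sat(clauses):
    status = quick_check(clauses)
    if status != "not sure":
        return status == "yes"
    var = abs(clauses[0][0])                 # naive choice; a real solver is smarter
    for value in (False, True):              # try xi = 0, then xi = 1
        if backtrack_sat(simplify(clauses, var, value)):
            return True
    return False
```

Whenever the quick check answers "no", the entire subtree below that partial assignment is pruned without being enumerated.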
Example

Formula: (w ∨ x ∨ y ∨ z), (w ∨ ¬x), (x ∨ ¬y), (y ∨ ¬z), (z ∨ ¬w), (¬w ∨ ¬z)

[Figure: backtrack search tree branching on w, z, x, y; every branch ends in an empty clause.]

Figure: Backtrack search. The formula is not satisfiable. Figure taken from the Dasgupta et al. book.
Backtrack Search Algorithm for SAT

How do we pick the order of variables? Heuristically! Examples:
1. Pick the variable that occurs in the most clauses first.
2. Pick the variable that appears in the most size-2 clauses first.
3. ...

What are quick tests for satisfiability? Depends on known special cases and heuristics. Examples:
1. Obvious test: return "no" if there is an empty clause, "yes" if no clauses are left, and otherwise "not sure".
2. Run the obvious test and, in addition, if all clauses are of size 2, run the polynomial-time 2-SAT algorithm.
3. ...
Branch-and-Bound

Backtracking for optimization problems: intelligent backtracking can be used also for optimization problems. Consider a minimization problem. Notation: for an instance I, opt(I) is the optimum value on I; P0 is the initial instance of the given problem.

1. Keep track of the best solution value B found so far. Initialize B to a crude upper bound on opt(P0).
2. Let P be a subproblem at some stage of exploration.
3. If P is a complete solution, update B.
4. Else use a lower-bounding heuristic to quickly/efficiently find a lower bound b on opt(P).
   - If b ≥ B, prune P.
   - Else explore P further by breaking it into subproblems and recursing on them.
5. Output the best solution found.
Example: Vertex Cover

Given G = (V, E), find a minimum-sized vertex cover in G.

1. Initialize B = n − 1.
2. Pick a vertex u. Branch on u: either choose u or discard it.
3. Let b1 be a lower bound on G1 = G − u.
4. If 1 + b1 < B, recursively explore G1.
5. Let b2 be a lower bound on G2 = G − u − N(u), where N(u) is the set of neighbors of u.
6. If |N(u)| + b2 < B, recursively explore G2.
7. Output B.

How do we compute a lower bound? One possibility: solve an LP relaxation.
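One way to instantiate this scheme is sketched below. Instead of the LP relaxation, the lower bound used here is the size of a greedy maximal matching (also valid, since each matched edge needs its own cover vertex); the function names are made up for illustration, and the code returns only the optimum size, not the cover itself.

```python
def matching_lower_bound(edges):
    # A greedy maximal matching lower-bounds the vertex cover size:
    # each matched edge must contribute at least one distinct cover vertex.
    used, size = set(), 0
    for u, v in edges:
        if u not in used and v not in used:
            used.update((u, v))
            size += 1
    return size

def without(edges, gone):
    return [(a, b) for a, b in edges if a not in gone and b not in gone]

def bb_vertex_cover(edges, taken=0, best=None):
    """Size of a minimum vertex cover, by branch-and-bound."""
    if best is None:
        n = len({v for e in edges for v in e})
        best = max(n - 1, 0)            # crude upper bound B = n - 1 (any n-1 vertices cover G)
    if not edges:                        # complete solution: update B
        return min(best, taken)
    if taken + matching_lower_bound(edges) >= best:
        return best                      # prune: this branch cannot beat B
    u = edges[0][0]                      # branch on an endpoint of some uncovered edge
    # Branch 1: choose u for the cover (explore G - u).
    best = bb_vertex_cover(without(edges, {u}), taken + 1, best)
    # Branch 2: discard u; then all of N(u) must join the cover (explore G - u - N(u)).
    nbrs = {b for a, b in edges if a == u} | {a for a, b in edges if b == u}
    best = bb_vertex_cover(without(edges, nbrs | {u}), taken + len(nbrs), best)
    return best
```

The pruning test `taken + lower_bound >= best` is exactly the slide's comparisons 1 + b1 < B and |N(u)| + b2 < B, written from the subproblem's point of view.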
Local Search

Local search: a simple and broadly applicable heuristic method.

1. Start with some arbitrary solution s.
2. Let N(s) be the solutions in the "neighborhood" of s, obtained from s via "local" moves/changes.
3. If there is a solution s′ ∈ N(s) that is better than s, move to s′ and continue the search with s′.
4. Else, stop the search and output s.
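The loop above can be written generically; the following minimal sketch (an illustration, with made-up parameter names) takes the neighborhood and the comparison as parameters and returns a local optimum.

```python
def local_search(s, neighbors, better):
    """Generic local search: move to any improving neighbor until none exists.
    `neighbors(s)` yields candidate solutions; `better(t, s)` says t beats s."""
    while True:
        improving = (t for t in neighbors(s) if better(t, s))
        nxt = next(improving, None)
        if nxt is None:
            return s          # no improving neighbor: s is a local optimum
        s = nxt
```

For example, minimizing f(x) = (x − 3)² over the integers with neighborhood {x − 1, x + 1} converges to x = 3 from any start.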
Local Search

Main ingredients in local search:
1. Initial solution.
2. Definition of the neighborhood of a solution.
3. Efficient algorithm to find a good solution in the neighborhood.
Example: TSP

TSP: Given a complete graph G = (V, E) with cij denoting the cost of edge (i, j), compute a Hamiltonian cycle/tour of minimum edge cost.

2-change local search:
1. Start with an arbitrary tour s0.
2. For a solution s, define s′ to be a neighbor if s′ can be obtained from s by replacing two edges in s with two other edges.
3. A solution s has at most O(n²) neighbors, and one can try all of them to find an improvement.
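A minimal sketch of the 2-change heuristic (also known as 2-opt), assuming a symmetric cost matrix: replacing tour edges (a, b) and (c, d) by (a, c) and (b, d) amounts to reversing the segment of the tour between them.

```python
def tour_cost(tour, cost):
    n = len(tour)
    return sum(cost[tour[i]][tour[(i + 1) % n]] for i in range(n))

def two_change_local_search(tour, cost):
    """2-change local search: while some pair of tour edges (a,b), (c,d)
    can be replaced by (a,c), (b,d) at lower cost, do so (by reversing
    the segment between b and c). Returns a locally optimal tour."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue                  # same two edges; not a real move
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                if cost[a][c] + cost[b][d] < cost[a][b] + cost[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

Each pass checks the O(n²) neighbors; every accepted move strictly decreases the tour cost, so the search terminates, though (as the next slides illustrate) possibly at a bad local optimum.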
TSP: 2-change example

[Figure: a 2-change move removes two crossing edges of the tour and reconnects the two resulting paths with two new edges.]

[Figure: a bad local optimum for the 2-change heuristic, i.e. a tour no single 2-change move can improve.]
TSP: 3-change example

3-change local search: swap 3 edges out.

[Figure: a 3-change move replaces three tour edges with three new edges.]

The neighborhood of s has now increased to a size of Ω(n³). One can define a k-change heuristic where k edges are swapped out. Increasing k enlarges the neighborhood but makes each local improvement step less efficient.