HEURISTIC OPTIMIZATION

SLS Methods: An Overview
(adapted from slides for SLS:FA, Chapter 2)

Outline
1. Constructive Heuristics (Revisited)
2. Iterative Improvement (Revisited)
3. ‘Simple’ SLS Methods
4. Hybrid SLS Methods
5. Population-based SLS Methods
Constructive Heuristics (Revisited)

Constructive heuristics:
- search space = partial candidate solutions
- search step = extension with one or more solution components

Constructive Heuristic (CH):
    s := ∅
    while s is not a complete solution do
        choose a solution component c
        s := s + c
(a Python sketch of this template follows at the end of this slide)

Greedy construction heuristics:
- rate the quality of solution components by a heuristic function
- choose at each step a best-rated solution component
- ties are typically broken randomly, rarely by a second heuristic function
- for some polynomially solvable problems, "exact" greedy heuristics exist,
  e.g. Kruskal's algorithm for spanning trees
- static vs. adaptive greedy information in constructive heuristics:
  - static: greedy values are independent of the partial solution
  - adaptive: greedy values depend on the partial solution
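To make the CH template concrete, here is a minimal Python sketch of a greedy constructive heuristic with random tie-breaking. The hooks `components`, `is_complete`, and `score` are hypothetical problem-specific functions, not part of the slides; passing the partial solution to `score` is what allows adaptive (rather than static) greedy values.

    # Minimal sketch of a greedy constructive heuristic; the hooks
    # `components`, `is_complete`, and `score` are hypothetical
    # problem-specific functions.
    import random

    def greedy_construct(components, is_complete, score):
        solution = set()
        while not is_complete(solution):
            candidates = [c for c in components if c not in solution]
            # Rate each remaining component; passing `solution` allows
            # adaptive greedy values that depend on the partial solution.
            best = min(score(c, solution) for c in candidates)
            # Break ties among best-rated components uniformly at random.
            solution.add(random.choice(
                [c for c in candidates if score(c, solution) == best]))
        return solution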
Example: set covering problem (SCP)

Given:
- a set A = {a_1, ..., a_m} of items
- a family F = {A_1, ..., A_n} of subsets A_i ⊆ A that covers A
- a weight function w : F → R+ that assigns a cost value to each set of F

Goal: find a cover C* of all items of A with minimal total weight, i.e.,
    C* ∈ argmin_{C' ∈ Covers(A, F)} w(C'),
where the weight w(C') of a cover C' is defined as Σ_{A' ∈ C'} w(A').

Example:
- A = {a, b, c, d, e, f, g}
- F = {A1 = {a, b, d, g}, A2 = {a, b, c}, A3 = {e, f, g},
       A4 = {f, g}, A5 = {d, e}, A6 = {c, d}}
- w(A1) = 6, w(A2) = 3, w(A3) = 5, w(A4) = 4, w(A5) = 5, w(A6) = 4
- heuristics: see lecture (one candidate rule is sketched below)

The SCP instance:

          a   b   c   d   e   f   g  |  w
    A1    ★   ★       ★           ★  |  6
    A2    ★   ★   ★                  |  3
    A3                    ★   ★   ★  |  5
    A4                        ★   ★  |  4
    A5                ★   ★          |  5
    A6            ★   ★              |  4
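As an illustration, here is a hedged Python sketch of one standard greedy rule for the SCP: repeatedly pick the set with the lowest weight per newly covered item. The slides defer the actual heuristics to the lecture, so this particular rule is an assumption. On the instance above it returns the cover {A2, A3, A6} with total weight 12.

    # Greedy SCP sketch: adaptive greedy value = weight per newly
    # covered item; the rule itself is an assumption (see lecture).
    def greedy_scp(universe, sets, weights):
        uncovered = set(universe)
        cover = []
        while uncovered:
            # Pick the set minimising cost per newly covered item.
            best = min(
                (s for s in sets if sets[s] & uncovered),
                key=lambda s: weights[s] / len(sets[s] & uncovered),
            )
            cover.append(best)
            uncovered -= sets[best]
        return cover

    sets = {
        "A1": {"a", "b", "d", "g"}, "A2": {"a", "b", "c"},
        "A3": {"e", "f", "g"},      "A4": {"f", "g"},
        "A5": {"d", "e"},           "A6": {"c", "d"},
    }
    weights = {"A1": 6, "A2": 3, "A3": 5, "A4": 4, "A5": 5, "A6": 4}
    print(greedy_scp("abcdefg", sets, weights))   # ['A2', 'A3', 'A6']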
Constructive heuristics for TSP

- 'simple' SLS algorithms that quickly construct reasonably good tours
- often used to provide an initial search position for more advanced
  SLS algorithms
- various types of constructive search algorithms exist:
  - iteratively extend a connected partial tour
  - iteratively build tour fragments and patch them together into a
    complete tour
  - algorithms based on minimum spanning trees

Nearest neighbour (NN) construction heuristic:
- start with a single vertex (chosen uniformly at random)
- in each step, follow a minimum-weight edge to a yet unvisited vertex
- complete the Hamiltonian cycle by adding the edge from the last
  vertex back to the initial vertex
- results on the length of NN tours: for TSP instances with triangle
  inequality, an NN tour is at most 1/2 · (⌈log₂(n)⌉ + 1) times as
  long as an optimal tour
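A minimal Python sketch of the NN heuristic, assuming the instance is given as a symmetric distance matrix `d` (a list of lists):

    # Nearest neighbour construction; `d` is a symmetric distance matrix.
    import random

    def nearest_neighbour_tour(d):
        n = len(d)
        start = random.randrange(n)          # random initial vertex
        tour, unvisited = [start], set(range(n)) - {start}
        while unvisited:
            last = tour[-1]
            # Follow a minimum-weight edge to a yet unvisited vertex.
            nxt = min(unvisited, key=lambda v: d[last][v])
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour                          # closed implicitly: last -> start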
Two examples of nearest neighbour tours for TSPLIB instances:

[Figure: NN tours for TSPLIB instances pcb1173 (left) and fl1577 (right)]

- for metric and TSPLIB instances, nearest neighbour tours are
  typically 20–35% above optimal
- typically, NN tours are locally close to optimal but contain a few
  long edges

Insertion heuristics:
- insertion heuristics iteratively extend a partial tour p by
  inserting a heuristically chosen vertex at a position where the tour
  length increases minimally
- various heuristics for the choice of the next vertex to insert:
  - nearest insertion
  - cheapest insertion
  - farthest insertion
  - random insertion
- nearest and cheapest insertion guarantee an approximation ratio of
  two for TSP instances with triangle inequality
- in practice, farthest and random insertion perform better: typically
  13 to 15% above optimal for metric and TSPLIB instances
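As an illustration of the insertion scheme, here is a Python sketch of farthest insertion, again assuming a distance matrix `d`; the other variants differ only in how the next vertex to insert is selected.

    # Farthest insertion: pick the unvisited vertex farthest from the
    # tour, then insert it where the tour length increases minimally.
    def farthest_insertion_tour(d):
        n = len(d)
        tour = [0, 1]                        # start from an arbitrary edge
        unvisited = set(range(2, n))
        while unvisited:
            # Selection rule: vertex with maximal distance to the tour.
            v = max(unvisited, key=lambda u: min(d[u][t] for t in tour))
            # Insertion rule: position with minimal length increase.
            best_i = min(
                range(len(tour)),
                key=lambda i: d[tour[i]][v]
                              + d[v][tour[(i + 1) % len(tour)]]
                              - d[tour[i]][tour[(i + 1) % len(tour)]],
            )
            tour.insert(best_i + 1, v)
            unvisited.remove(v)
        return tour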
Greedy, Quick-Borůvka and Savings heuristics

- greedy heuristic:
  - first sort the edges of the graph according to increasing weight
  - scan the list and add each feasible edge to the partial solution;
    an edge is feasible if it does not give a vertex degree three and
    does not close a cycle of length less than n (a sketch follows
    below)
  - greedy tours are at most (1 + log n)/2 times as long as an optimal
    tour for TSP instances with triangle inequality

- Quick-Borůvka:
  - inspired by the minimum spanning tree algorithm of Borůvka, 1926
  - first, sort the vertices in an arbitrary order
  - for each vertex in this order, insert a feasible minimum-weight
    edge incident to it
  - two such scans are done to generate a tour

- savings heuristic:
  - based on the savings heuristic for the vehicle routing problem
  - choose a base vertex u_b and n − 1 cyclic paths (u_b, u_i, u_b)
  - at each step, remove one edge incident to u_b from each of two
    cyclic paths p_1 and p_2 and join them into a new cyclic path p_12
  - the edges removed are chosen so as to maximise the cost reduction
    (the savings)
  - savings tours are at most (1 + log n)/2 times as long as an
    optimal tour for TSP instances with triangle inequality

- empirical results:
  - savings produces better tours than greedy or Quick-Borůvka
  - on RUE instances approx. 12% above optimal (savings), 14% (greedy)
    and 16% (Quick-Borůvka)
  - computation times are modest, ranging from 22 seconds
    (Quick-Borůvka) to around 100 seconds (greedy, savings) for RUE
    instances with one million cities on a 500 MHz Alpha CPU
    (see Johnson and McGeoch, 2002)
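A sketch of the greedy edge heuristic in Python, assuming a distance matrix `d`. A union-find structure rejects edges that would close a cycle prematurely, and degree counts reject edges that would give a vertex degree three; the final edge closing the Hamiltonian path into a tour is added explicitly after the scan.

    # Greedy edge heuristic: take edges in order of increasing weight,
    # skipping any that create degree three or a premature cycle.
    def greedy_tour_edges(d):
        n = len(d)
        edges = sorted((d[i][j], i, j)
                       for i in range(n) for j in range(i + 1, n))
        parent = list(range(n))

        def find(x):                          # union-find, path halving
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        degree = [0] * n
        chosen = []
        for w, i, j in edges:
            if degree[i] == 2 or degree[j] == 2:
                continue                      # would create degree three
            if find(i) == find(j):
                continue                      # would close a short cycle
            parent[find(i)] = find(j)
            degree[i] += 1
            degree[j] += 1
            chosen.append((i, j))
            if len(chosen) == n - 1:          # Hamiltonian path complete
                break
        # Close the tour by joining the two endpoints of the path.
        u, v = (x for x in range(n) if degree[x] == 1)
        chosen.append((u, v))
        return chosen                         # the n edges of the tour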
Construction heuristics based on minimum spanning trees

- minimum spanning tree (MST) heuristic (see the sketch at the end of
  this slide):
  - compute a minimum spanning tree t
  - double each edge in t, obtaining a graph G'
  - compute an Eulerian tour p in G'
  - convert p into a Hamiltonian cycle by short-cutting subpaths of p
  - for TSP instances with triangle inequality, the resulting tour is
    at most twice as long as an optimal tour

- Christofides heuristic:
  - similar to the algorithm above, but computes a minimum-weight
    perfect matching of the odd-degree vertices of the MST
  - this converts the MST into an Eulerian graph, i.e., a graph that
    admits an Eulerian tour
  - for TSP instances with triangle inequality, the resulting tour is
    at most 1.5 times as long as an optimal tour
  - very good performance w.r.t. solution quality if heuristics are
    used for converting the Eulerian tour into a Hamiltonian cycle

Iterative Improvement (Revisited)

Iterative Improvement (II):
    determine initial candidate solution s
    while s is not a local optimum do
        choose a neighbour s' of s such that g(s') < g(s)
        s := s'
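Returning to the MST heuristic above, here is a minimal Python sketch of the double-tree 2-approximation, assuming a distance matrix `d` that satisfies the triangle inequality. Short-cutting the Eulerian tour of the doubled MST is equivalent to a preorder walk of the tree, which is what the sketch computes after building the MST with Prim's algorithm.

    # Double-tree heuristic: MST (Prim) + preorder walk, which equals
    # short-cutting the Eulerian tour of the doubled MST.
    def double_tree_tour(d):
        n = len(d)
        # Prim's algorithm: grow an MST from vertex 0.
        in_tree = [False] * n
        in_tree[0] = True
        parent = [0] * n
        cost = d[0][:]                        # cheapest link into the tree
        children = {v: [] for v in range(n)}
        for _ in range(n - 1):
            v = min((u for u in range(n) if not in_tree[u]),
                    key=cost.__getitem__)
            in_tree[v] = True
            children[parent[v]].append(v)
            for u in range(n):
                if not in_tree[u] and d[v][u] < cost[u]:
                    cost[u], parent[u] = d[v][u], v
        # Preorder walk of the MST (short-cut Eulerian tour).
        tour, stack = [], [0]
        while stack:
            v = stack.pop()
            tour.append(v)
            stack.extend(reversed(children[v]))
        return tour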
In II, various mechanisms (pivoting rules) can be used for choosing an
improving neighbour in each step:

- Best Improvement (aka gradient descent, greedy hill-climbing):
  choose a maximally improving neighbour, i.e., select uniformly at
  random from I*(s) := {s' ∈ N(s) | g(s') = g*},
  where g* := min{g(s') | s' ∈ N(s)}.
  Note: requires evaluation of all neighbours in each step.

- First Improvement:
  evaluate neighbours in a fixed order; choose the first improving
  step encountered.
  Note: can be much more efficient than Best Improvement; the order of
  evaluation can have a significant impact on performance.

procedure iterative best-improvement
    repeat
        improvement := false
        for i := 1 to n do
            for j := 1 to n do
                CheckMove(i, j)
                if move is new best improvement then
                    (k, l) := MemorizeMove(i, j)
                    improvement := true
            endfor
        endfor
        if improvement then ApplyBestMove(k, l)
    until (improvement = false)
end iterative best-improvement
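To make the pseudocode concrete, here is a hedged Python sketch of best-improvement iterative improvement for the TSP under the 2-exchange neighbourhood; the distance matrix `d` and the in-place tour representation are assumptions of this sketch, not part of the slides.

    # Best improvement with 2-exchange moves: scan all neighbours,
    # memorise the best improving move, apply it, repeat.
    def best_improvement_2opt(tour, d):
        n = len(tour)
        improvement = True
        while improvement:
            improvement, best_delta, best_move = False, 0, None
            # Evaluate all (non-degenerate) 2-exchange moves.
            for i in range(n - 1):
                for j in range(i + 2, n if i > 0 else n - 1):
                    a, b = tour[i], tour[i + 1]
                    c, e = tour[j], tour[(j + 1) % n]
                    delta = d[a][c] + d[b][e] - d[a][b] - d[c][e]
                    if delta < best_delta:    # memorise new best move
                        best_delta, best_move = delta, (i, j)
                        improvement = True
            if improvement:
                i, j = best_move              # apply best move found
                tour[i + 1 : j + 1] = reversed(tour[i + 1 : j + 1])
        return tour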
procedure iterative first-improvement
    repeat
        improvement := false
        for i := 1 to n do
            for j := 1 to n do
                CheckMove(i, j)
                if move improves then
                    ApplyMove(i, j)
                    improvement := true
            endfor
        endfor
    until (improvement = false)
end iterative first-improvement

Example: random-order first improvement for the TSP (1)

- given: TSP instance G with vertices v1, v2, ..., vn
- search space: Hamiltonian cycles in G;
  use standard 2-exchange neighbourhood
- initialisation:
  search position := fixed canonical tour (v1, v2, ..., vn, v1)
  P := random permutation of {1, 2, ..., n}
- search steps: determined using first improvement w.r.t.
  g(p) = weight of tour p, evaluating neighbours in the order given by
  P (this order does not change throughout the search)
- termination: when no improving search step is possible
  (local minimum)
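A hedged Python sketch of the random-order first-improvement procedure just described; how the entries of the permutation P are paired into 2-exchange moves is one plausible reading of the slide, not the only one.

    # Random-order first improvement for the TSP with 2-exchange moves;
    # the evaluation order P is fixed at initialisation.
    import random

    def random_order_first_improvement(d):
        n = len(d)
        tour = list(range(n))                 # canonical tour v1, ..., vn
        P = random.sample(range(n), n)        # fixed random order
        improvement = True
        while improvement:
            improvement = False
            for i in P:
                for j in P:
                    if not (i + 2 <= j and (i > 0 or j < n - 1)):
                        continue              # skip degenerate moves
                    a, b = tour[i], tour[i + 1]
                    c, e = tour[j], tour[(j + 1) % n]
                    if d[a][c] + d[b][e] < d[a][b] + d[c][e]:
                        # First improving move found: apply immediately.
                        tour[i + 1 : j + 1] = reversed(tour[i + 1 : j + 1])
                        improvement = True
        return tour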
Example: random-order first improvement for the TSP (2)

Empirical performance evaluation:
- perform 1000 runs of the algorithm on benchmark instance pcb3038
- record the relative solution quality (= percentage deviation from
  the known optimum) of the final tour obtained in each run
- plot the cumulative distribution function of the relative solution
  quality over all runs

Example: random-order first improvement for the TSP (3)

Result: substantial variability in solution quality between runs.

[Figure: empirical CDF of cumulative frequency vs. relative solution
quality [%], ranging from about 7% to 10.5% above optimal]
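A sketch of this evaluation protocol in Python, reusing the `random_order_first_improvement` sketch above. The pcb3038 instance data and its optimum are not included here, so `d` and `optimum` are parameters the caller must supply.

    # Empirical protocol: many runs, record relative solution quality,
    # plot the empirical CDF of the qualities.
    import matplotlib.pyplot as plt

    def tour_length(tour, d):
        n = len(tour)
        return sum(d[tour[i]][tour[(i + 1) % n]] for i in range(n))

    def plot_quality_cdf(d, optimum, runs=1000):
        qualities = sorted(
            100 * (tour_length(random_order_first_improvement(d), d)
                   - optimum) / optimum
            for _ in range(runs))
        cdf = [(k + 1) / runs for k in range(runs)]
        plt.plot(qualities, cdf)
        plt.xlabel("relative solution quality [%]")
        plt.ylabel("cumulative frequency")
        plt.show()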