DM811 — Heuristics and Local Search Algorithms for Combinatorial Optimization
Lecture 12: Stochastic Local Search Methods (Metaheuristics)
Marco Chiarandini

Outline

1. Stochastic Local Search Methods (Metaheuristics)
   - Randomized Iterative Improvement
   - Attribute Based Hill Climber
   - Dynamic Local Search
   - Iterated Local Search
   - Tabu Search

'Simple' SLS Methods

Goal: Effectively escape from local minima of a given evaluation function.

General approach: For a fixed neighborhood, use a step function that permits worsening search steps.

Specific methods:
◮ Randomized Iterative Improvement
◮ (Simulated Annealing)
◮ Attribute Based Hill Climber
◮ Dynamic Local Search
◮ Iterated Local Search
◮ Tabu Search
Randomized Iterative Improvement

Key idea: In each search step, with a fixed probability perform an uninformed random walk step instead of an iterative improvement step.

Randomized Iterative Improvement (RII):
  determine initial candidate solution s
  while termination condition is not satisfied do
    with probability wp:
      choose a neighbor s′ of s uniformly at random
    otherwise:
      choose a neighbor s′ of s such that g(s′) < g(s) or,
      if no such s′ exists, choose s′ such that g(s′) is minimal
    s := s′

Example: Randomized Iterative Improvement for GCP

procedure RIIGCP(F, wp, maxSteps)
  input: a graph G and integer k, probability wp, integer maxSteps
  output: a proper coloring ϕ for G or ∅
  choose a coloring ϕ of G uniformly at random; steps := 0;
  while not (ϕ is proper) and (steps < maxSteps) do
    with probability wp do
      select v in V and c in Γ uniformly at random;
    otherwise
      select v in V^c and c in Γ uniformly at random from those that
      maximally decrease the number of edge violations;
    change color of v in ϕ;
    steps := steps + 1;
  end
  if ϕ is proper for G then return ϕ
  else return ∅
end RIIGCP

Note:
◮ There is no need to terminate the search when a local minimum is encountered. Instead: impose a limit on the number of search steps or CPU time, counted from the beginning of the search or from the last improvement.
◮ The probabilistic mechanism permits arbitrarily long sequences of random walk steps. Therefore: when run sufficiently long, RII is guaranteed to find an (optimal) solution to any problem instance with arbitrarily high probability.
◮ A variant of RII has successfully been applied to SAT (the GWSAT algorithm); GWSAT [Selman et al., 1994] was at some point state-of-the-art for SAT.
◮ Generally, RII is often outperformed by more complex LS methods.
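As a concrete sketch, the RII loop for graph k-coloring might look as follows in Python. The data layout (graph as an adjacency dict), the helper names, and the tie-breaking in the greedy step are illustrative assumptions, not part of the slides:

```python
import random

def rii_gcp(graph, k, wp, max_steps, rng=None):
    """Randomized Iterative Improvement sketch for graph k-coloring.

    graph: dict mapping vertex -> set of adjacent vertices (assumed layout).
    Returns a proper coloring (dict vertex -> color) or None.
    """
    rng = rng or random.Random(0)
    vertices = list(graph)
    colors = range(k)
    # Start from a coloring chosen uniformly at random.
    phi = {v: rng.randrange(k) for v in vertices}

    def conflicts(v, c):
        # Number of edge violations vertex v would have with color c.
        return sum(1 for u in graph[v] if phi[u] == c)

    for _ in range(max_steps):
        conflicted = [v for v in vertices if conflicts(v, phi[v]) > 0]
        if not conflicted:
            return phi  # proper coloring found
        if rng.random() < wp:
            # Uninformed random walk step: any vertex, any color.
            v, c = rng.choice(vertices), rng.choice(list(colors))
        else:
            # Improvement step: conflicted vertex, color that
            # maximally decreases the number of edge violations.
            v = rng.choice(conflicted)
            c = min(colors, key=lambda col: conflicts(v, col))
        phi[v] = c
    return None
```

With wp = 0 this degenerates to plain iterative improvement; with wp = 1 it is a pure random walk.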
Combining RII with Min-Conflicts: Example on GCP

Key idea: combine Randomized Iterative Improvement with the Min-Conflicts heuristic and a Novelty-style tie-breaking rule.

[Decision diagram: with probability wp, select a vertex v and a colour c uniformly at random; with probability 1−wp, select v in V^c at random and consider the colours with best improvement. If there are many such colours, select one of those that are not most recent at random. If there is only one such colour and it is not the most recent, select it; if it is the most recent, select the best colour with probability 1−p and the second best colour with probability p.]

Attribute Based Hill Climber

◮ Attributes are solution elements that change in a move.
◮ Each attribute a has an associated value φ(a):
  ◮ the value of the best solution visited that contains a,
  ◮ infinity otherwise.
◮ At each step, a solution in N is acceptable iff it contains an attribute that has never been seen in a solution of such high quality before:

  N′(s) = { s′ ∈ N(s) : ∃ a ∈ s′ s.t. f(s′) < φ(a) }

  where

  φ(a) = ∞ if V ∩ S_a = ∅, and φ(a) = min{ f(s) : s ∈ V ∩ S_a } otherwise,

  with V the set of visited solutions and S_a the set of solutions that contain a.

Examples:
◮ TSP: A_TSP = { (i, j) | (i, j) ∈ E }
◮ QAP: A_QAP = { (i, j) | 1 ≤ i ≤ n, 1 ≤ j ≤ n }, representing (ϕ(i), j)
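A minimal sketch of the Attribute Based Hill Climber on the TSP, with tour edges as attributes and a 2-exchange neighborhood; the data layout and helper names are assumptions for illustration:

```python
import itertools

def abhc_tsp(dist, max_iters=200):
    """Attribute Based Hill Climber sketch for the TSP.

    dist: symmetric distance matrix; attributes are undirected tour
    edges. phi(a) starts at infinity and records the best tour length
    seen so far among visited tours that contain edge a.
    """
    n = len(dist)

    def length(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    def edges(t):
        return {frozenset((t[i], t[(i + 1) % n])) for i in range(n)}

    phi = {}  # attribute -> best visited value; missing key means infinity

    def record(t, f):
        for a in edges(t):
            phi[a] = min(phi.get(a, float('inf')), f)

    tour = list(range(n))
    best_tour, best_len = tour, length(tour)
    record(tour, best_len)
    for _ in range(max_iters):
        move, move_f = None, None
        # 2-exchange neighborhood: reverse the segment tour[i..j]
        for i, j in itertools.combinations(range(n), 2):
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            f = length(cand)
            # acceptable iff some attribute beats its recorded best value
            if any(f < phi.get(a, float('inf')) for a in edges(cand)):
                if move_f is None or f < move_f:
                    move, move_f = cand, f
        if move is None:
            break  # no acceptable neighbor left
        tour = move
        record(tour, move_f)
        if move_f < best_len:
            best_tour, best_len = tour, move_f
    return best_tour, best_len
```

Note that acceptable moves may be worsening (any candidate containing a never-seen edge is acceptable), which is what lets the method climb out of local minima; the best tour found is therefore tracked separately.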
Dynamic Local Search

◮ Key idea: Modify the evaluation function whenever a local optimum is encountered.
◮ Associate weights (penalties) with solution components; these determine the impact of components on the evaluation function value.
◮ Perform Iterative Improvement; when in a local minimum, increase the penalties of some solution components until improving steps become available.

Dynamic Local Search (DLS):
  determine initial candidate solution s
  initialize penalties
  while termination criterion is not satisfied do
    compute modified evaluation function g′ from g based on penalties
    perform subsidiary local search on s using evaluation function g′
    update penalties based on s

Dynamic Local Search (continued)

◮ Modified evaluation function:

  g′(π, s) := g(π, s) + ∑_{i ∈ SC(π′, s)} penalty(i),

  where SC(π′, s) is the set of solution components of problem instance π′ used in candidate solution s.
◮ Penalty initialization: for all i: penalty(i) := 0.
◮ Penalty update in local minimum s: typically involves a penalty increase of some or all solution components of s; often also occasional penalty decrease or penalty smoothing.
◮ Subsidiary local search: often Iterative Improvement.

Example: Guided Local Search (GLS) for the TSP [Voudouris and Tsang 1995; 1999]

◮ Given: TSP instance G
◮ Search space: Hamiltonian cycles in G with n vertices
◮ Neighborhood: 2-edge-exchange
◮ Solution components: edges of G; g_e(G, p) := w(e)
◮ Penalty initialization: set all edge penalties to zero.
◮ Subsidiary local search: Iterative First Improvement.

Potential problem: Solution components required for (optimal) solutions may also be present in many local minima.

Possible solutions:
A: Occasional decreases/smoothing of penalties.
B: Only increase penalties of solution components that are least likely to occur in (optimal) solutions.

Implementation of B:
Only increase penalties of solution components i with maximal utility [Voudouris and Tsang, 1995]:

  util(s′, i) := g_i(π, s′) / (1 + penalty(i)),

where g_i(π, s′) is the solution quality contribution of i in s′.

◮ Penalty update (GLS for the TSP): increment the penalties of all edges with maximal utility by

  λ := 0.3 · w(s_2-opt) / n,

where s_2-opt is a 2-optimal tour.
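Putting the GLS pieces together, a sketch for the TSP might look as follows; first-improvement 2-opt serves as subsidiary local search, and the data layout and helper names are illustrative assumptions:

```python
def gls_tsp(dist, lam=None, max_rounds=30):
    """Guided Local Search sketch for the TSP (penalties on edges).

    The augmented cost of edge e is w(e) + lam * penalty(e); in a local
    minimum, the penalties of the edges with maximal utility
    util(e) = w(e) / (1 + penalty(e)) are incremented.
    """
    n = len(dist)
    penalty = {}  # frozenset edge -> penalty; missing key means 0

    def length(t):
        return sum(dist[t[k]][t[(k + 1) % n]] for k in range(n))

    def aug_length(t, lam):
        return sum(dist[t[k]][t[(k + 1) % n]]
                   + lam * penalty.get(frozenset((t[k], t[(k + 1) % n])), 0)
                   for k in range(n))

    def two_opt(t, cost):
        # Iterative first improvement on the given cost function.
        improved = True
        while improved:
            improved = False
            for i in range(1, n - 1):
                for j in range(i + 1, n):
                    cand = t[:i] + t[i:j + 1][::-1] + t[j + 1:]
                    if cost(cand) < cost(t) - 1e-12:
                        t, improved = cand, True
        return t

    tour = two_opt(list(range(n)), length)
    best, best_len = tour, length(tour)
    if lam is None:
        lam = 0.3 * best_len / n  # lambda := 0.3 * w(s_2-opt) / n
    for _ in range(max_rounds):
        # Increment penalties of the current tour's maximal-utility edges.
        utils = {}
        for k in range(n):
            e = frozenset((tour[k], tour[(k + 1) % n]))
            utils[e] = dist[tour[k]][tour[(k + 1) % n]] / (1 + penalty.get(e, 0))
        m = max(utils.values())
        for e, u in utils.items():
            if u == m:
                penalty[e] = penalty.get(e, 0) + 1
        # Subsidiary local search on the augmented evaluation function.
        tour = two_opt(tour, lambda t: aug_length(t, lam))
        if length(tour) < best_len:
            best, best_len = tour, length(tour)
    return best, best_len
```

Penalties only steer the search (via the augmented cost); the best tour is always evaluated and reported with the original edge weights.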
Hybrid Methods

Combinations of 'simple' methods often yield substantial performance improvements.

Simple examples:
◮ Commonly used restart mechanisms can be seen as hybridisations with Uninformed Random Picking.
◮ Iterative Improvement + Uninformed Random Walk = Randomized Iterative Improvement.

Iterated Local Search

Key idea: Use two types of LS steps:
◮ subsidiary local search steps for reaching local optima as efficiently as possible (intensification);
◮ perturbation steps for effectively escaping from local optima (diversification).

Also: Use an acceptance criterion to control diversification vs intensification behavior.

Iterated Local Search (ILS):
  determine initial candidate solution s
  perform subsidiary local search on s
  while termination criterion is not satisfied do
    r := s
    perform perturbation on s
    perform subsidiary local search on s
    based on acceptance criterion, keep s or revert to s := r

Note:
◮ Subsidiary local search results in a local minimum.
◮ ILS trajectories can be seen as walks in the space of local minima of the given evaluation function.
◮ Perturbation phase and acceptance criterion may use aspects of search history (i.e., limited memory).
◮ In a high-performance ILS algorithm, subsidiary local search, perturbation mechanism and acceptance criterion need to complement each other well.

Subsidiary local search:
◮ More effective subsidiary local search procedures lead to better ILS performance.
  Example: 2-opt vs 3-opt vs LK for the TSP.
◮ Often, subsidiary local search = iterative improvement, but more sophisticated LS methods can be used (e.g., Tabu Search).
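The ILS pseudocode above can be written as a generic skeleton; the parameter names and the toy objective used below are assumptions for illustration:

```python
import random

def iterated_local_search(init, local_search, perturb, better,
                          max_iters=100, rng=None):
    """Generic ILS skeleton: subsidiary local search + perturbation
    + acceptance criterion.

    local_search(s) -> a local optimum; perturb(s, rng) -> a perturbed
    candidate; better(a, b) -> True iff a should be accepted over b.
    """
    rng = rng or random.Random(0)
    s = local_search(init)
    for _ in range(max_iters):
        r = s                                # remember the incumbent
        s2 = local_search(perturb(s, rng))   # perturb, then re-optimize
        s = s2 if better(s2, r) else r       # acceptance criterion
    return s
```

For example, minimizing f(x) = (x − 10)² over the integers with a ±1 hill climber as subsidiary local search and a small random jump as perturbation returns 10.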