Learning a Reactive Restart Strategy to Improve Stochastic Search
Serdar Kadıoğlu, Meinolf Sellmann, Markus Wagner
Learning and Intelligent Optimization, LION 2017

Serdar Kadıoğlu
Manager of Research and Development | Oracle
Visiting Assistant Professor | Brown University
Houston, we have a problem… Restarts to the rescue!

How to fix a computer?
1. Reinstall the OS
2. Run a virus scan
3. Restart the computer
Background
Restarted Search
Ø Has become an integral part of combinatorial search
Ø Stochastic search algorithms and randomized heuristics
Ø Complete methods: avoid heavy-tailed runtime distributions (Gomes et al. JAR'00)
Ø Incomplete methods: diversification technique
Agenda for Today
1. Restart Strategies
2. Reactive Restarts
3. Numerical Results
Part – I
Brief Background
Background
Restart Strategies
Ø Designing an appropriate restart strategy is complex
Ø Two common approaches:
  1. Restart with a certain probability p
  2. Employ a fixed schedule of restarts f
Background
Restart Strategies – Feasibility
Ø Theoretical work on fixed-schedule restart strategies (Luby et al.'93)
Ø Practical studies with SAT and CP solvers
Ø Geometrically growing restart limits (Wu et al. CP'07)
Ø (Audemard et al. CP'12) argued that fixed schedules are sub-optimal for SAT
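The fixed schedules mentioned above are easy to state in code. A minimal Python sketch (not from the paper) of the Luby sequence, with the geometric alternative noted in a comment:

```python
def luby(i):
    """Return the i-th term (1-indexed) of the Luby et al. ('93) restart
    sequence: 1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8, ...
    The restart limit for run i is luby(i) times some base unit."""
    k = 1
    while (1 << k) - 1 < i:          # smallest k with 2^k - 1 >= i
        k += 1
    if (1 << k) - 1 == i:            # i closes a full block: term is 2^(k-1)
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)  # otherwise the sequence repeats its prefix

# A geometric schedule (in the spirit of Wu et al. CP'07) instead grows the
# limit by a constant factor, e.g. limit_i = base * growth ** i.
print([luby(i) for i in range(1, 16)])
```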
Background
Restart Strategies – Optimization
Ø Classical optimization algorithms are often deterministic
Ø As such, they do not really benefit from restarts
Ø Modern optimization algorithms have randomized components
Ø Memory constraints & parallel computation introduce new characteristics
Ø (Ruiz et al.'16): different mathematical programming formulations provide different starting points for the solver
Limited Runtime Budget
Restart Strategies
Ø Assume we are given a time budget t to run an algorithm
Ø Two natural options:
  1. Single–run strategy: use all of the time t for a single run of the algorithm
  2. Multi–run strategy: make k runs, each with runtime t/k
Ø (Fischetti et al.'14) generalizes this strategy into Bet–And–Run for MIPs
Bet–And–Run
Sampling Phase + Long Run
Phase – I: perform k runs for some (short) time limit t1 with t1 ≤ t/k
Phase – II: in the remaining time t2 = t − k·t1, continue only the best run from the first phase until timeout

Ø Single–run strategy: special case for k = 0
Ø Multi–run strategy: special case for t1 = t/k
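The two phases above can be sketched as follows. This is a toy illustration, not the paper's or Fischetti et al.'s implementation: `run_step`, `init`, and the random-search demo objective are all invented for the example, the budget is counted in steps rather than wall-clock time, and the sketch assumes k ≥ 1 (the k = 0 single-run special case is not handled):

```python
import random

def bet_and_run(run_step, init, t, k, t1, rng):
    """Bet-And-Run sketch with total budget t (in steps), minimization.
    Phase I: k short runs of t1 steps each (t1 <= t/k).
    Phase II: continue only the best run for the remaining t - k*t1 steps.
    `init(rng)` returns a fresh (state, value); `run_step(state, rng)`
    advances one run by one step and returns (state, value)."""
    assert k >= 1 and k * t1 <= t
    candidates = []
    for _ in range(k):                      # Phase I: sampling runs
        state, value = init(rng)
        for _ in range(t1):
            state, value = run_step(state, rng)
        candidates.append((value, state))
    value, state = min(candidates)          # bet on the best sampled run
    for _ in range(t - k * t1):             # Phase II: long run until timeout
        state, value = run_step(state, rng)
    return value

# Toy demo: hill-climbing random search minimizing x^2.
def init(rng):
    x = rng.uniform(-10, 10)
    return x, x * x

def run_step(x, rng):
    y = x + rng.uniform(-1, 1)
    return (y, y * y) if y * y < x * x else (x, x * x)

rng = random.Random(42)
best = bet_and_run(run_step, init, t=1000, k=5, t1=50, rng=rng)
```

Note how the two special cases fall out of the budget split: with t1 = t/k the Phase II loop runs zero steps (pure multi-run), while k = 0 would leave the whole budget to one long run.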
Bet–And–Run
Sampling Phase + Long Run
Ø (Fischetti et al. OR'14) introduced diversity in the starting conditions of MIP solvers
  – Experimentally good results with k = 5
Ø (Friedrich et al. AAAI'17) studied TSP and MVC
  – Experimentally good results with Restarts^40_1% (40 short runs, each with 1% of the budget)
Ø (Lissovoi et al. GECCO'17) theoretical results for a family of pseudo-Boolean optimization (PBO) functions
  – Non-trivial k and t1 are necessary to find the global optimum efficiently

Issue: need to set k and t1 appropriately
Issue: general inflexibility
This Paper: Reactive Restarts
Hyper-Parameterized Restart Strategy
Ø General methodology for any black-box optimization solver
Ø Embedding into an adaptive stochastic restart framework
Ø Track the evolution of the objective value of the solutions found
Ø Monitor key performance metrics
Part – II
Reactive Restarts [Features – Scoring – Framework]
Reactive Restarts
Hyper-Parameterized Restart Strategy
Ø Combine the two core ideas:
  1. Consider a batch of runs with the option to continue some of them
  2. Automatically learn which run to continue, or whether to start a new run, based on the observed performance characteristics of past runs
Ø Relevant approaches:
  – Reactive tabu search (Battiti and Tecchiolli, IJOC'94)
  – SATenstein (KhudaBukhsh et al. IJCAI'09)
  – Reactive dialectic search (Ansotegui et al. AAAI'17)
Reactive Restarts
Hyper-Parameterized Restart Strategy
Ø Based on observed performance metrics
Ø Adaptively compute scores that determine the likelihood of three decisions:
  a) Continue the current run beyond the fail limit?
  b) Continue the best run so far?
  c) Discard the run and start a new one?
Ø Use an automatic parameter tuner to learn how to adapt these probabilities
Ø Tuners for parameter training (Bezerra et al.'16, Stützle et al.'16, Ansotegui et al.'17)
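The decision step can be pictured as follows. This is only an illustrative sketch: the paper's actual scoring functions and their hyper-parameters are learned by the tuner, and the softmax mapping, the `temperature` parameter, and all names below are assumptions made for the example:

```python
import math
import random

# The three decisions from the slide, to be scored from observed
# performance features (e.g., recent objective improvement per run).
ACTIONS = ("continue_current", "continue_best", "start_new")

def decision_probabilities(scores, temperature=1.0):
    """Map raw decision scores to a probability distribution (softmax).
    A higher score makes the corresponding decision more likely."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def pick_action(scores, rng, temperature=1.0):
    """Sample one of the three restart decisions according to its score."""
    probs = decision_probabilities(scores, temperature)
    return rng.choices(ACTIONS, weights=probs, k=1)[0]

# Example: the current run is improving fast, the best run is stale,
# and there is little incentive to restart from scratch.
rng = random.Random(0)
probs = decision_probabilities([2.0, 0.5, 0.1])
action = pick_action([2.0, 0.5, 0.1], rng)
```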
Reactive Restarts
Features
[Diagram: the solver's performance features feed the three decisions: continue the current run beyond the fail limit, continue the best run so far, or discard the run and start a new one]