
Surrogate models for Single and Multi-Objective Stochastic Optimization - PowerPoint PPT Presentation



1. Surrogate models for Single and Multi-Objective Stochastic Optimization: Integrating Support Vector Machines and Covariance-Matrix Adaptation-ES
   Ilya Loshchilov, Marc Schoenauer, Michèle Sebag
   TAO: CNRS − INRIA − Univ. Paris-Sud
   May 23rd, 2011

2. Motivations
   Find Argmin {F : X → ℝ}
   Context: ill-posed optimization problems
   - Function F (fitness function) on X ⊂ ℝ^d, continuous
   - Gradient not available or not useful
   - F available as an oracle (black box)
   Goal: build {x_1, x_2, ...} → Argmin(F)
   Black-box approaches
   + Applicable
   + Robust: comparison-based approaches are invariant
   − High computational cost: number of function evaluations

3. Surrogate optimization: Principle
   - Gather a training set E = {(x_i, F(x_i))}
   - Build a surrogate model F̂ from E (learn)
   - Use the surrogate model F̂ for some time:
     - Optimization: use F̂ instead of the true F in a standard algorithm
     - Filtering: select promising x_i based on F̂ in a population-based algorithm
   - Compute F(x_i) for some x_i
   - Update F̂
   - Iterate
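A minimal Python sketch of this learn/filter/update loop (not from the slides; the callables true_f, sample_candidates, and fit_surrogate, and the choice of spending true evaluations only on the n_true_evals most promising candidates, are illustrative assumptions):

    import numpy as np

    def surrogate_loop(true_f, sample_candidates, fit_surrogate,
                       n_iters=20, batch=10, n_true_evals=2):
        """Generic surrogate-assisted loop: gather E, learn F_hat, filter
        candidates with F_hat, spend true evaluations only on the best ones."""
        X = np.asarray(sample_candidates(batch))               # initial training set E
        y = np.array([true_f(x) for x in X])
        for _ in range(n_iters):
            f_hat = fit_surrogate(X, y)                        # build surrogate F_hat from E
            cand = np.asarray(sample_candidates(batch))        # candidates from the optimizer
            scores = np.array([f_hat(x) for x in cand])        # cheap surrogate ranking
            best = cand[np.argsort(scores)[:n_true_evals]]     # filtering: keep promising x_i
            y_new = np.array([true_f(x) for x in best])        # expensive true evaluations
            X, y = np.vstack([X, best]), np.concatenate([y, y_new])  # update E, then relearn
        return X[np.argmin(y)], y.min()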

4. Surrogate optimization, cont'd
   Issues
   Learning
   - Hypothesis space (polynomials, neural nets, Gaussian processes, ...)
   - Selection of the training set (prune, update, ...)
   - What is the learning target?
   Interaction of the Learning & Optimization modules
   - Schedule (when to relearn)
   - (∗) How to use F̂ to support the optimization search
   - (∗∗) How to use search results to support learning F̂
   This talk
   - (∗) Using Covariance-Matrix Estimation within Support Vector Machines
   - (∗∗) Using SVM for multi-objective optimization

5. Content
   1. Covariance Matrix Adaptation-Evolution Strategy
      - Evolution Strategies
      - CMA-ES
      - The state-of-the-art of (Stochastic) Optimization
   2. Support Vector Machines
      - Statistical Machine Learning
      - Linear classifiers
      - The kernel trick
   3. Comparison-Based Surrogate Model for CMA-ES
      - Previous Work
      - Mixing Rank-SVM and Local Information
      - Experiments
   4. Dominance-based Surrogate Model for Multi-Objective Optimization
      - Background
      - Dominance-based Surrogate
      - Experimental Validation

6. Stochastic Search
   A black-box search template to minimize f : ℝ^n → ℝ
   - Initialize distribution parameters θ, set sample size λ ∈ ℕ
   - While not terminate:
     1. Sample distribution P(x | θ) → x_1, ..., x_λ ∈ ℝ^n
     2. Evaluate x_1, ..., x_λ on f
     3. Update parameters θ ← F_θ(θ, x_1, ..., x_λ, f(x_1), ..., f(x_λ))
   Covers: Deterministic algorithms; Evolutionary Algorithms, PSO, DE (P implicitly defined by the variation operators); Estimation of Distribution Algorithms
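A minimal Python sketch of this template (assumption: P(x | θ) is an isotropic Gaussian with θ = (mean, σ), and the θ-update simply re-centers on the best sample; real algorithms such as CMA-ES use far more elaborate updates):

    import numpy as np

    def stochastic_search(f, n, lam=20, n_iters=100, seed=0):
        """Black-box search template: sample from P(x|theta), evaluate, update theta.
        Here theta = (mean, sigma); the update is a deliberately naive placeholder."""
        rng = np.random.default_rng(seed)
        mean, sigma = np.zeros(n), 1.0                        # initialize theta
        for _ in range(n_iters):
            X = mean + sigma * rng.standard_normal((lam, n))  # 1. sample x_1..x_lambda
            fX = np.array([f(x) for x in X])                  # 2. evaluate on f
            mean = X[np.argmin(fX)]                           # 3. update theta (toy rule)
        return mean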

7. The (µ, λ)-Evolution Strategy
   Gaussian Mutations
   x_i ∼ m + σ N_i(0, C) for i = 1, ..., λ,   where x_i, m ∈ ℝ^n, σ ∈ ℝ_+, and C ∈ ℝ^{n×n}
   as perturbations of m, where
   - the mean vector m ∈ ℝ^n represents the favorite solution
   - the so-called step-size σ ∈ ℝ_+ controls the step length
   - the covariance matrix C ∈ ℝ^{n×n} determines the shape of the distribution ellipsoid
   How to update m, σ, and C?
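A sketch of the sampling step, assuming C is positive definite so a Cholesky factor exists (the function name and interface are illustrative):

    import numpy as np

    def sample_offspring(m, sigma, C, lam, rng):
        """Draw lambda offspring x_i ~ m + sigma * N_i(0, C)."""
        n = m.size
        A = np.linalg.cholesky(C)            # C = A A^T (C assumed positive definite)
        Z = rng.standard_normal((lam, n))    # z_i ~ N(0, I)
        Y = Z @ A.T                          # y_i = A z_i ~ N(0, C)
        return m + sigma * Y, Y              # x_i = m + sigma * y_i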

8. History
   The one-fifth rule (Rechenberg, 73)
   - One single parameter σ for the whole population
   - Measure the empirical success rate
   - Increase σ if the success rate is too large, decrease σ if it is too small
   - Often wrong in non-smooth landscapes
   Self-adaptive mutations (Schwefel, 81)
   - Each individual carries its own mutation parameters: from 1 to (n² − n)/2
   - Log-normal mutation of the mutation parameters
   - (Normal) mutation of the individual
   - Adaptation is slow in the full covariance case
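An illustrative sketch of the one-fifth rule; the adaptation factor 1.5 and the exact 0.2 threshold are common textbook choices, not taken from the slides:

    def one_fifth_rule(sigma, success_rate, factor=1.5, target=0.2):
        """Rechenberg's 1/5 success rule: increase sigma when the empirical
        success rate exceeds ~1/5, decrease it otherwise.
        (factor and target are illustrative; implementations vary.)"""
        if success_rate > target:
            return sigma * factor
        elif success_rate < target:
            return sigma / factor
        return sigma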

9. Cumulative Step-Size Adaptation (CSA)
   x_i = m + σ y_i,   m ← m + σ y_w
   Measure the length of the evolution path
   - the pathway of the mean vector m in the generation sequence
   - short path (steps cancel out) → decrease σ
   - long path (steps add up) → increase σ
   Loosely speaking, steps are
   - perpendicular under random selection (in expectation)
   - perpendicular in the desired situation (to be most efficient)
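A simplified sketch of the CSA update, assuming C = I so the path can be accumulated directly from y_w; the constants c_sigma, d_sigma, and mu_eff below are placeholders (CMA-ES derives them from n and the recombination weights):

    import numpy as np

    def csa_update(p_sigma, sigma, y_w, n, c_sigma=0.3, d_sigma=1.0, mu_eff=1.0):
        """Simplified cumulative step-size adaptation (assumes C = I).
        Accumulate the evolution path p_sigma; if it is longer than a random
        path would be, increase sigma; if shorter, decrease sigma."""
        p_sigma = (1 - c_sigma) * p_sigma \
                  + np.sqrt(c_sigma * (2 - c_sigma) * mu_eff) * y_w
        chi_n = np.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n ** 2))   # approx. E||N(0, I)||
        sigma *= np.exp((c_sigma / d_sigma) * (np.linalg.norm(p_sigma) / chi_n - 1))
        return p_sigma, sigma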

10. Covariance Matrix Adaptation: Rank-One Update
   m ← m + σ y_w,   y_w = Σ_{i=1}^{µ} w_i y_{i:λ},   y_i ∼ N_i(0, C)
   - initial distribution: C = I
   - y_w: movement of the population mean m (disregarding σ)
   - mixture of the distribution C and the step y_w
   - new distribution: C ← 0.8 × C + 0.2 × y_w y_w^T
   Ruling principle: the adaptation increases the probability of the successful step y_w appearing again
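A sketch of one rank-one update step matching the slide's 0.8/0.2 mixture (the function name and argument layout are illustrative; the full CMA-ES update also adapts σ and uses a cumulative path for C):

    import numpy as np

    def cma_rank_one_step(m, sigma, C, Y_sorted, weights, c1=0.2):
        """One rank-one update: recombine the mu best steps into y_w, move the
        mean, and mix the old C with the successful direction y_w y_w^T
        (C <- 0.8*C + 0.2*y_w y_w^T for c1 = 0.2, as on the slide)."""
        mu = len(weights)
        y_w = weights @ Y_sorted[:mu]                  # y_w = sum_i w_i * y_{i:lambda}
        m = m + sigma * y_w                            # m <- m + sigma * y_w
        C = (1 - c1) * C + c1 * np.outer(y_w, y_w)     # rank-one mixture
        return m, C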

