Benchmarking the SMS-EMOA with Self-adaptation on the bbob-biobj Test Suite

Simon Wessing
Chair of Algorithm Engineering, Computer Science Department
Technische Universität Dortmund

16 July 2017
Introduction

◮ Evolutionary multiobjective optimization
◮ Continuous decision variables
◮ The (1+1)-SMS-EMOA is algorithmically equivalent to the single-objective (1+1)-EA
  ⇒ Theory about optimal step size from single-objective optimization applies
◮ Situation for (µ+1) and (µ+λ) selection is unknown
  ◮ How to define step-size optimality?
  ◮ How to adapt the step size, if not with the very sophisticated MO-CMA-ES?
Development of Control Mechanism

◮ Idea: use self-adaptation from single-objective optimization (sketched in code below)
◮ Mutation of genome: y = x + σ · N(0, I)
◮ Mutation of step size: σ = σ̃ · exp(τ · N(0, 1))
◮ Learning parameter τ ∝ 1/√n
◮ Not state of the art any more
  ◮ Behavior is emergent
  ◮ Theoretical analysis is difficult
  ◮ Application to multiobjective optimization is scarce
⇒ Experiment to find good parameter configurations
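To make the mechanism concrete, here is a minimal Python sketch of self-adaptive mutation as described above. It assumes the common convention that the step size is perturbed log-normally first and the mutated value is then used for the genome mutation; the function name and the example values are illustrative, not the benchmarked implementation.

```python
import numpy as np

rng = np.random.default_rng()

def self_adaptive_mutation(x, sigma, tau):
    """Mutate the step size log-normally, then mutate the genome with it."""
    sigma_new = sigma * np.exp(tau * rng.standard_normal())   # sigma = sigma~ * exp(tau * N(0, 1))
    y = x + sigma_new * rng.standard_normal(x.shape)          # y = x + sigma * N(0, I)
    return y, sigma_new

# Example: n = 10 decision variables, learning parameter tau = c / sqrt(n) with c = 1
n, c = 10, 1.0
x, sigma = rng.random(n), 0.025
y, sigma_new = self_adaptive_mutation(x, sigma, c / np.sqrt(n))
```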
Experimental Setup

Factor                    Type        Symbol   Levels
Number of variables       observable  n        {2, 3, 5, 10, 20}
Learning param. constant  control     c        {2^-2, 2^-1, 2^0, 2^1, 2^2, 2^3}
Population size           control     µ        {10, 50}
Number of offspring       control     λ        {1, µ, 5µ}
Recombination             control     -        {discrete, intermediate, arithmetic, none}

◮ Full factorial design (enumerated in the sketch below)
◮ 15 unimodal problems of bbob-biobj 2016 (only first instance)
◮ Budget: 10^4 · n function evaluations
◮ Assessment: rank-transformed HV values of whole EA runs
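As a small illustration (not part of the original material), the full factorial design over these factor levels could be enumerated as follows; the variable names are arbitrary.

```python
from itertools import product

# Factor levels taken from the table above
n_levels      = [2, 3, 5, 10, 20]
c_levels      = [2**k for k in range(-2, 4)]            # 2^-2 ... 2^3
mu_levels     = [10, 50]
lambda_rules  = [lambda mu: 1, lambda mu: mu, lambda mu: 5 * mu]
recombination = ["discrete", "intermediate", "arithmetic", "none"]

design = [(n, c, mu, lam(mu), rec)
          for n, c, mu, lam, rec in product(n_levels, c_levels, mu_levels,
                                            lambda_rules, recombination)]
print(len(design))   # 5 * 6 * 2 * 3 * 4 = 720 configurations
```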
Other Factors Held Constant

◮ Initial mutation strength σ_init = 0.025
◮ Repair method for bound violations: Lamarckian reflection (search space [−100, 100]^n, scaled to the unit hypercube); see the sketch below
◮ Selection: iteratively removes the worst individual until µ individuals remain (backward elimination)
⇒ Might have to reconsider in the future
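The slides do not show the repair operator itself; the following Python sketch shows one way Lamarckian reflection into the unit hypercube could look. The triangle-wave formulation is an assumption; "Lamarckian" here means the repaired vector replaces the genome.

```python
import numpy as np

def reflect_into_unit_box(x):
    """Reflect out-of-bounds coordinates back into [0, 1].

    Lamarckian repair: the returned vector replaces the genome,
    so subsequent variation operates on the repaired solution."""
    y = np.mod(x, 2.0)                    # period-2 wrap handles violations of any size
    return np.where(y > 1.0, 2.0 - y, y)

# Example: 1.3 -> 0.7, -0.2 -> 0.2, 2.5 -> 0.5
print(reflect_into_unit_box(np.array([1.3, -0.2, 2.5])))
```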
Pseudocode

Input: population size µ, initial population P_0, number of offspring λ

t ← 0
while stopping criterion not fulfilled do
    O_t ← createOffspring(P_t)                    // create λ offspring
    evaluate(O_t)                                 // calculate objective values
    Q_t ← P_t ∪ O_t
    r ← createReferencePoint(Q_t)
    while |Q_t| > µ do
        {F_1, ..., F_w} ← nondominatedSort(Q_t)   // sort into fronts
        x* ← argmin_{x ∈ F_w} Δ_s(x, F_w, r)      // x* with smallest contribution
        Q_t ← Q_t \ {x*}                          // remove worst individual
    end while
    P_{t+1} ← Q_t
    t ← t + 1
end while
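For the biobjective case, the inner selection loop can be sketched in Python as below. The helper names, the O(n^2) nondominated sorting, and the 2-D hypervolume-contribution formula are illustrative assumptions, not the implementation used for the benchmark; the reference point r is left to the caller.

```python
import numpy as np

def nondominated_sort(F):
    """Split objective vectors (minimization) into fronts of row indices."""
    remaining = list(range(len(F)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

def hv_contributions_2d(front, ref):
    """Exclusive hypervolume contributions of a 2-D nondominated front."""
    order = np.argsort(front[:, 0])            # sort by first objective
    f = front[order]
    contrib = np.empty(len(f))
    for k in range(len(f)):
        right = f[k + 1, 0] if k + 1 < len(f) else ref[0]
        upper = f[k - 1, 1] if k > 0 else ref[1]
        contrib[k] = (right - f[k, 0]) * (upper - f[k, 1])
    out = np.empty(len(f))
    out[order] = contrib                       # undo the sorting permutation
    return out

def reduce_to_mu(X, F, mu, ref):
    """Discard the smallest hypervolume contributor of the worst front
    until only mu individuals remain (cf. the inner while loop above)."""
    while len(X) > mu:
        worst_front = nondominated_sort(F)[-1]
        contrib = hv_contributions_2d(F[worst_front], ref)
        drop = worst_front[int(np.argmin(contrib))]
        keep = [i for i in range(len(X)) if i != drop]
        X, F = X[keep], F[keep]
    return X, F
```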
Main Effect: Learning Parameter τ = c/√n

[Boxplots of average ranks (0-140) for c ∈ {2^-2, 2^-1, 2^0, 2^1, 2^2, 2^3}]

◮ c = 2^-2 is always the worst choice
⇒ Exclude c = 2^-2 from further analysis
Mutation Strength vs. Generation

[Four panels of average step size σ̄ (10^-5 to 10^0, log scale) over generations (10^0 to 10^3, log scale): (a) τ = 2^-2/√n, (b) τ = 2^0/√n, (c) τ = 2^2/√n, (d) τ = 2^3/√n]
Main Effect: Selection Variants

[Boxplots of average ranks (20-100) for the selection variants (10+1), (10+10), (10+50), (50+1), (50+50), (50+250)]
Main and Interaction Effects: Recombination & Selection

Average ranks:

             arithmetic  discrete  intermediate   none
(10 + 1)          46.97     85.43         82.53  78.95
(10 + 10)         51.29     72.55         83.48  68.34
(10 + 50)         47.69     62.90         82.25  42.50
(50 + 1)          61.93     63.21         84.93  40.95
(50 + 50)         58.23     55.88         84.06  30.43
(50 + 250)        53.77     51.34         78.82  27.14
Interaction Effect: Learning Parameter vs. Recombination

Average ranks:

             arithmetic  discrete  intermediate   none
2^-1/√n           49.96     66.60         79.90  40.82
2^0/√n            57.01     53.97         83.87  44.49
2^1/√n            55.65     65.43         82.33  52.42
2^2/√n            48.70     66.57         80.38  50.98
2^3/√n            55.25     73.53         86.90  51.54
Comparison with (50 + 250) SBX on bbob-biobj 2016

[ECDF plots: proportion of function+target pairs vs. log10(# f-evals / dimension) for ES and SBX on bbob-biobj f1-f55 in 2-D, 5-D, 10-D, and 20-D (5 instances each)]
Comparison with (50 + 250) SBX on bbob-biobj 2016

[ECDF plots for f11 (sep. Ellipsoid/sep. Ellipsoid, 5-D) and f18 (sep. Ellipsoid/Schwefel, 3-D)]

◮ SBX is better/competitive on separable problems
Discussion

◮ Self-adaptation of the step size works in both directions (increasing and decreasing)
◮ Best configuration for a budget of 10^4 · n:
  ◮ No recombination
  ◮ τ = 2^0/√n
  ◮ (50 + 250)-selection
◮ Surprisingly similar to the single-objective case
◮ Only arithmetic and no recombination seem to be worth investigating further
Application to bbob-biobj 2017

Modifications relative to the previous experiments:

◮ Initialization in [0.475, 0.525]^n (normalized), corresponding to [−5, 5]^n in the original problem space
◮ Budget of 10^5 · n function evaluations
◮ Comparison to the (µ + 1)-SMS-EMOA from bbob-biobj 2016
  ◮ DE variation
  ◮ SBX/PM variation
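A quick check of the scaling (assuming the affine map of the search space [−100, 100]^n to the unit hypercube mentioned earlier): x_norm = (x + 100) / 200, so x = −5 maps to 95/200 = 0.475 and x = +5 maps to 105/200 = 0.525, which matches the stated initialization region.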
Some Results, 5-D

[ECDF plots (58 targets in 1..-1.0e-4, 10 instances) comparing SMS-ES, SMS-DE, SMS-PM, and best 2016 in 5-D on the function groups separable-separable (f1, f2, f11), separable-moderate (f3, f4, f12, f13), multimodal-multimodal (f46, f47, f50), and multimodal-weakstructure (f48, f49, f51, f52)]