DYNAMIC RESAMPLING FOR GUIDED EVOLUTIONARY MULTI-OBJECTIVE OPTIMIZATION OF STOCHASTIC SYSTEMS
Florian Siegmund, Amos H.C. Ng (University of Skövde, Sweden)
Kalyanmoy Deb (Indian Institute of Technology Kanpur, India)
Outline • Background: Guided Search • Algorithm-independent resampling techniques • Distance-based Resampling • Numerical Experiments • Conclusions / Future Work
Background: Guided Multi-objective Search • Precondition: limited simulation budget • A high budget is required to explore the Pareto-front • High-dimensional objective spaces • Costly evaluation of stochastic simulation models • Therefore: focus on interesting areas of the objective space • R-NSGA-II: guidance by a reference point R
Background: Reference point-based NSGA-II • Evolutionary Optimization Algorithm • Deb et al. (2006)
NSGA-II selection step [figure slides]
R-NSGA-II selection step [figure slide]
Diversity Control • R-NSGA-II on deterministic benchmark problem ZDT1
R-NSGA-II: interactive use [figure slides]
Stochastic Simulation Resampling • Noisy problem: the simulation model produces varying output for the same input if evaluated multiple times • True output values are unknown • Noise degrades optimization performance • Handling the noisy problem: use the sample mean and sample standard deviation
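The slides do not give an implementation; the following is a minimal Python sketch of how a noisy simulation output can be aggregated into a sample mean and standard error, assuming a hypothetical `simulate(solution)` function that returns one noisy objective vector per call.

```python
import numpy as np

def evaluate_with_resampling(simulate, solution, n_samples):
    """Evaluate a noisy simulation n_samples times and aggregate.

    `simulate(solution)` is an assumed stochastic objective function
    returning a vector of objective values; n_samples must be >= 2
    for the standard error estimate.
    """
    samples = np.array([simulate(solution) for _ in range(n_samples)])
    mean = samples.mean(axis=0)                                  # sample mean per objective
    std_err = samples.std(axis=0, ddof=1) / np.sqrt(n_samples)   # standard error of the mean
    return mean, std_err
```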
Resampling • Common approach: Static Resampling (the same number of samples for every solution) • Limited budget • Trade-off: exploration vs. exploitation • A sampling allocation strategy is needed
Resampling strategies • Selection Sampling • Accuracy Sampling • Selection Sampling for single-objective problems: OCBA • R-NSGA-II uses a scalar fitness criterion (distance to R), but only as a secondary criterion • Applying Selection Sampling directly is therefore complex
Resampling techniques • Selection Sampling for EMO: complex • Approximations can achieve a similar effect • Strategy: good knowledge of the solutions close to R will support the algorithm
Criteria for resampling • General: time, dominance relation, variance, constraint violation • Reference point-related: distance to the reference point, progress
Basic resampling techniques • Time-based resampling • Dominance-based • Pareto-rank-based • Standard Error Dynamic Resampling
Time-based Resampling • Transformation function • NumSamples ∈ {Min, ..., Max} • Sampling need ∈ [0, 1] • Mapping: Need → NumSamples
Time-based resampling • Linear allocation
Time-based resampling • Delayed allocation
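As a rough illustration of the two allocation schemes above, the sketch below maps elapsed optimization time to a sampling need in [0, 1] and then to a sample count in {Min, ..., Max}; the delayed scheme is modelled here with a power function, which is an assumption rather than the formula used in the paper.

```python
def time_based_need(elapsed_evals, budget, delayed=False, power=3):
    """Sampling need in [0, 1] as a function of elapsed optimization time."""
    progress = min(elapsed_evals / budget, 1.0)
    # Linear allocation: need grows proportionally with elapsed time.
    # Delayed allocation: need stays low early and rises towards the end
    # (modelled with an assumed power function).
    return progress ** power if delayed else progress

def need_to_samples(need, min_samples=1, max_samples=10):
    """Map a sampling need in [0, 1] to a sample count in {min, ..., max}."""
    return min_samples + round(need * (max_samples - min_samples))
```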
Standard Error Dynamic Resampling (SEDR) • Based on variance information • Single-objective version by Di Pietro (2004) • Multi-objective extension: • Adds k samples at a time to a solution i • Checks whether the maximum standard error se_i over the objectives has fallen below a threshold
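A minimal sketch of the multi-objective SEDR loop described above, assuming minimization and the same hypothetical `simulate` function as before; the batch size k, threshold, and budget values are illustrative defaults, not the settings from the paper.

```python
import numpy as np

def sedr_evaluate(simulate, solution, k=2, se_threshold=0.05,
                  min_samples=2, max_samples=30):
    """Standard Error Dynamic Resampling (sketch).

    Draws samples in batches of k until the largest standard error over
    all objectives falls below se_threshold or max_samples is reached.
    """
    samples = [simulate(solution) for _ in range(min_samples)]
    while len(samples) < max_samples:
        arr = np.asarray(samples)
        se = arr.std(axis=0, ddof=1) / np.sqrt(len(samples))
        if se.max() <= se_threshold:
            break
        samples.extend(simulate(solution) for _ in range(k))
    return np.asarray(samples).mean(axis=0), len(samples)
```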
Distance-based Resampling • Infeasible case • Feasible case
Infeasible case • R is not attainable by any solution • R is "below" the Pareto-front • Minimum distance to R
Infeasible case • R is not attainable by any solution • The maximum number of samples would never be assigned • Adapt the transformation function
Infeasible case • Use this allocation until a solution is found that dominates R
Feasible case • R is "over" the Pareto-front • Solutions can be found that dominate R
Feasible case • Problem: solutions that are better than R have a larger distance to R • Define the Virtual Reference Point VR as the solution that is 1. non-dominated and 2. closest to R
Feasible case [figure slide]
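The sketch below illustrates the distance-based allocation idea for both cases, assuming minimization: if no solution dominates R (infeasible case) distances are measured to R itself, otherwise (feasible case) to the Virtual Reference Point VR. The normalization constant `d_max` and the linear mapping from distance to sampling need are assumptions; the adaptation of the transformation function mentioned for the infeasible case is omitted.

```python
import numpy as np

def nondominated_mask(objs):
    """Boolean mask of non-dominated points (minimization assumed)."""
    objs = np.asarray(objs, dtype=float)
    mask = np.ones(len(objs), dtype=bool)
    for i, p in enumerate(objs):
        dominated_by = np.all(objs <= p, axis=1) & np.any(objs < p, axis=1)
        mask[i] = not dominated_by.any()
    return mask

def distance_based_need(objs, ref_point, d_max=1.0):
    """Sampling need in [0, 1] per solution, based on distance to R or VR."""
    objs = np.asarray(objs, dtype=float)
    ref = np.asarray(ref_point, dtype=float)
    dominates_r = np.all(objs <= ref, axis=1) & np.any(objs < ref, axis=1)
    if dominates_r.any():
        # Feasible case: measure distances to the Virtual Reference Point VR,
        # the non-dominated solution closest to R.
        nd = objs[nondominated_mask(objs)]
        target = nd[np.argmin(np.linalg.norm(nd - ref, axis=1))]
    else:
        # Infeasible case: R is below the Pareto-front, measure distances to R.
        target = ref
    dist = np.linalg.norm(objs - target, axis=1)
    return np.clip(1.0 - dist / d_max, 0.0, 1.0)  # closer => higher sampling need
```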
Numerical experiments • Benchmark functions • Production line scheduling • Performance measurement: • Measure the average distance to R • of the α% closest solutions in the population • after every generation
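A sketch of this performance measure, using Euclidean distances in objective space; the fraction α (here given in [0, 1]) is a parameter of the measure, and 0.25 is only an illustrative default.

```python
import numpy as np

def avg_distance_to_ref(population_objs, ref_point, alpha=0.25):
    """Average distance to R of the alpha fraction of solutions closest to R."""
    dists = np.linalg.norm(np.asarray(population_objs, dtype=float)
                           - np.asarray(ref_point, dtype=float), axis=1)
    k = max(1, int(round(alpha * len(dists))))
    return np.sort(dists)[:k].mean()
```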
Numerical experiments • Benchmark functions • ZDT1 • Additive noise
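For reference, a small sketch of the noisy test problem: the standard ZDT1 formulation with additive Gaussian noise on both objectives (σ = 0.15 in the experiments below). The decision vector is assumed to be the usual multi-dimensional ZDT1 vector in [0, 1].

```python
import numpy as np

def zdt1_noisy(x, sigma=0.15, rng=None):
    """ZDT1 benchmark with additive Gaussian noise on both objectives."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.asarray(x, dtype=float)          # x in [0, 1]^n, n >= 2
    f1 = x[0]
    g = 1.0 + 9.0 * x[1:].mean()
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return np.array([f1, f2]) + rng.normal(0.0, sigma, size=2)
```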
R-NSGA-II, Static Resampling • Benchmark function ZDT1, additive noise σ = 0.15 • R-NSGA-II, 5000 evaluations, R = (0.5, 0)
Distance-based Resampling • Benchmark function ZDT1, additive noise σ = 0.15 • R-NSGA-II, 5000 evaluations, R = (0.5, 0)
R-NSGA-II, Time-based Resampling • Benchmark function ZDT1, additive noise σ = 0.15 • R-NSGA-II, R = (0.5, 0)
Standard Error Dynamic Resampling • Benchmark function ZDT1, additive noise σ = 0.15 • R-NSGA-II, R = (0.5, 0), SEDR
Numerical experiments • Production line model • Minimize Work in Progress (WiP) • Maximize Throughput (TH) • High-noise problem, CV = 1.5 • 1 to 30 samples per solution
Results • R = (WiP, TH) = (8, 0.8) [result plots]
Conclusions • Distance-based Resampling is effective for R-NSGA-II • It performs better than algorithm-independent strategies
Future work • Evaluate on different noisy industrial SBO problems • Computing cluster with 100 workstations • Cooperation with Volvo • Obtain real output values for performance measurement • Accuracy Sampling • Selection Sampling • The distance to R is a scalar value • Apply an existing ranking-and-selection method, such as OCBA • Simplify the algorithm to only one fitness criterion
Questions? • Thank you!