  1. DYNAMIC RESAMPLING FOR GUIDED EVOLUTIONARY MULTI-OBJECTIVE OPTIMIZATION OF STOCHASTIC SYSTEMS • Florian Siegmund, Amos H.C. Ng (University of Skövde, Sweden) • Kalyanmoy Deb (Indian Institute of Technology Kanpur, India)

  2. Outline • Background: Guided Search • Algorithm-independent resampling techniques • Distance-based Resampling • Numerical Experiments • Conclusions / Future Work

  3. Background: Guided Multi-objective Search • Precondition: limited simulation budget • A high budget is required to explore the Pareto-front • High-dimensional objective spaces • Costly evaluation of stochastic models → Focus on interesting areas of the objective space • R-NSGA-II: reference point-based search

  4. Background: Reference point-based NSGA-II • Evolutionary Optimization Algorithm • Deb et al. (2006)

  5. NSGA-II: Selection step

  6. NSGA-II

  7. NSGA-II

  8. R-NSGA-II: Selection step
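A minimal sketch of the ranking this selection step is built on, assuming the description in Deb et al. (2006): solutions are sorted first by non-dominated front and then, within each front, by Euclidean distance to the reference point R. The ε-based diversity clustering shown on the Diversity Control slide is omitted, and all function names are illustrative.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return bool(np.all(a <= b) and np.any(a < b))

def nondominated_sort(F):
    """Assign each row of the objective matrix F its front index."""
    fronts = np.zeros(len(F), dtype=int)
    remaining, rank = set(range(len(F))), 0
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(F[j], F[i]) for j in remaining if j != i)}
        for i in front:
            fronts[i] = rank
        remaining -= front
        rank += 1
    return fronts

def r_nsga2_order(F, R):
    """Selection order: primary key = front, secondary key = distance to R."""
    dist = np.linalg.norm(F - np.asarray(R), axis=1)
    return np.lexsort((dist, nondominated_sort(F)))
```

Survivor selection then fills the next population from the front of this ordering.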

  9. Diversity Control • R-NSGA-II on deterministic benchmark problem ZDT1

  10. R-NSGA-II-Interactive

  11. R-NSGA-II-Interactive

  12. R-NSGA-II-Interactive

  13. Stochastic Simulation → Resampling • Noisy problem: the simulation model has varying output for the same input if evaluated multiple times • True output values are unknown → Performance degradation • Handling the noisy problem: use the sample mean and sample standard deviation (sketch below)
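As a minimal illustration of the last bullet (not the authors' implementation), a solution can be evaluated n times and summarized by its sample statistics; evaluate and n are placeholders:

```python
import numpy as np

def resampled_objectives(evaluate, x, n):
    """Evaluate the stochastic model n times at x; return the sample mean
    and sample standard deviation of each objective."""
    samples = np.array([evaluate(x) for _ in range(n)])
    return samples.mean(axis=0), samples.std(axis=0, ddof=1)
```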

  14. Resampling • Common approach: Static Resampling • Limited budget • Trade-off: exploration vs. exploitation → A sampling allocation strategy is needed

  15. Resampling strategies • Selection Sampling • Accuracy Sampling • Selection Sampling for single-objective problems: OCBA • R-NSGA-II uses a scalar fitness criterion, but only as a secondary criterion → complex

  16. Resampling techniques • Selection Sampling for EMO is complex → Use approximations with a similar effect • Strategy: good knowledge of the solutions close to R will support the algorithm

  17. Criteria for resampling • General: Time, Dominance relation, Variance, Constraint violation • Reference point-related: Distance to the reference point, Progress

  18. Basic resampling techniques • Time-based resampling • Dominance-based • Pareto-rank-based • Standard Error Dynamic Resampling

  19. Time-based Resampling • Transformation function • NumSamples ∈ {Min, …, Max} • Sampling need ∈ [0, 1] • Mapping: Need → NumSamples (see the sketch after the allocation variants below)

  20. Time-based Resampling • Linear allocation

  21. Time-based Resampling • Delayed allocation
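A sketch of the time-based transformation, assuming "need" is the elapsed fraction of the evaluation budget; the exponent a and all names are illustrative, with a = 1 giving the linear and a > 1 the delayed allocation:

```python
def time_based_need(evals_used, evals_total, a=1.0):
    """Sampling need in [0, 1] from elapsed budget.
    a = 1: linear allocation; a > 1: delayed allocation
    (few samples early, many samples late)."""
    return (evals_used / evals_total) ** a

def samples_from_need(need, n_min, n_max):
    """Map a sampling need in [0, 1] to NumSamples in {n_min, ..., n_max}."""
    return n_min + round(need * (n_max - n_min))
```

For example, with a 5000-evaluation budget and 1 to 10 samples per solution, samples_from_need(time_based_need(2500, 5000, a=3), 1, 10) allocates 2 samples at the halfway point, while the linear variant (a=1) would already allocate 6.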

  22. Standard Error Dynamic Resampling (SEDR) • Based on variance information • Single-objective version by Di Pietro (2004) • Multi-objective version: adds k samples at a time and checks whether the maximum standard error over all objectives i is below a threshold: max_i se_i ≤ s_th (sketch below)
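A sketch of that loop, under the assumption that the threshold is checked against the standard error of the mean; k, se_threshold, and n_max are illustrative parameters:

```python
import numpy as np

def sedr(evaluate, x, k=2, se_threshold=0.05, n_max=30):
    """Sample x in batches of k until the largest standard error of the
    objective means drops below se_threshold (or n_max is reached)."""
    samples = [evaluate(x) for _ in range(max(k, 2))]
    while len(samples) < n_max:
        se = np.std(samples, axis=0, ddof=1) / np.sqrt(len(samples))
        if np.max(se) <= se_threshold:
            break
        samples.extend(evaluate(x) for _ in range(k))
    return np.mean(samples, axis=0)
```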

  23. Distance-based Resampling • Infeasible case • Feasible case

  24. Infeasible case • R is not attainable by any solution • R is "below" the Pareto-front → The minimum distance to R remains positive

  25. Infeasible case • R is not attainable by any solution → The maximum number of samples is never assigned → Adapt the transformation function

  26. Infeasible case • Use the adapted transformation function until a solution is found that dominates R (sketch below)
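One plausible reading of this adaptation, sketched below: sampling need decreases with distance to R, and rescaling by the smallest distance observed so far (d_min) lets the maximum budget be assigned even when R is infeasible. The normalization constant d_norm and all names are assumptions, not the paper's exact formula:

```python
import numpy as np

def distance_based_need(f_mean, R, d_norm, d_min=0.0):
    """Sampling need in [0, 1] from the distance of a solution's mean
    objective vector to the reference point R.

    d_norm: assumed normalization constant (e.g. objective-space diameter).
    d_min:  smallest distance to R seen so far; subtracting it adapts the
            transformation for the infeasible case, where no solution can
            reach R. Reset d_min = 0 once a solution dominating R is found.
    """
    d = np.linalg.norm(np.asarray(f_mean) - np.asarray(R))
    need = 1.0 - (d - d_min) / max(d_norm - d_min, 1e-12)
    return float(np.clip(need, 0.0, 1.0))
```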

  27. Feasible case • R is "over" the Pareto-front • Solutions can be found that dominate R

  28. Feasible case • Problem: better solutions have a higher distance to R • Define the Virtual Reference Point VR as the solution that is 1. non-dominated and 2. closest to R (sketch below)
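A minimal sketch of the VR construction defined above; F is the population's objective matrix, and the dominance test assumes minimization:

```python
import numpy as np

def _dominates(a, b):
    """True if a Pareto-dominates b (minimization)."""
    return bool(np.all(a <= b) and np.any(a < b))

def virtual_reference_point(F, R):
    """VR = the solution that is (1) non-dominated and (2) closest to R."""
    F = np.asarray(F)
    nd = [i for i in range(len(F))
          if not any(_dominates(F[j], F[i]) for j in range(len(F)) if j != i)]
    d = [np.linalg.norm(F[i] - np.asarray(R)) for i in nd]
    return F[nd[int(np.argmin(d))]]
```

Presumably, distances are then measured against VR rather than R, so solutions dominating R are no longer penalized for moving past it.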

  29. Feasible case

  30. Numerical experiments • Benchmark functions • Production line scheduling • Performance measurement: average distance to R, taken over the α% closest solutions of the population, after every generation (sketch below)
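The performance measure, sketched under the assumption of Euclidean distance; α is left as a parameter since the slide only says "α%":

```python
import numpy as np

def avg_distance_to_R(F, R, alpha):
    """Average distance to R over the alpha-fraction (0 < alpha <= 1) of
    the population closest to R; computed after every generation."""
    d = np.sort(np.linalg.norm(np.asarray(F) - np.asarray(R), axis=1))
    k = max(1, round(alpha * len(d)))
    return float(d[:k].mean())
```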

  31. Numerical experiments • Benchmark function: ZDT1 • Additive noise (sketch below)
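For reference, the standard ZDT1 formulation with additive noise; that the noise is Gaussian with the σ = 0.15 used on the following slides is an assumption about the experimental setup:

```python
import numpy as np

_rng = np.random.default_rng()

def zdt1_noisy(x, sigma=0.15):
    """ZDT1 benchmark (minimize f1, f2; x in [0, 1]^n) with additive
    Gaussian noise on both objectives."""
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * x[1:].sum() / (len(x) - 1)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return np.array([f1, f2]) + _rng.normal(0.0, sigma, size=2)
```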

  32. R-NSGA-II, Static Resampling • Benchmark function ZDT1, additive noise σ = 0.15 • R-NSGA-II, 5000 evaluations, R = (0.5, 0)

  33. Distance-based Resampling • Benchmark function ZDT1, additive noise σ = 0.15 • R-NSGA-II, 5000 evaluations, R = (0.5, 0)

  34. R-NSGA-II, Time-based Resampling • Benchmark function ZDT1, additive noise σ = 0.15 • R-NSGA-II, R = (0.5, 0)

  35. Standard Error Dynamic Resampling • Benchmark function ZDT1, additive noise σ = 0.15 • R-NSGA-II, R = (0.5, 0), SEDR

  36. Numerical experiments • Production line model • Minimize work in progress (WiP) • Maximize throughput (TH) • High-noise problem, CV = 1.5 • 1 to 30 samples per solution

  37. Results • R = (WiP, TH) = (8, 0.8)

  38. Results • R = (WiP, TH) = (8, 0.8)

  39. Conclusions • Distance-based resampling is effective for R-NSGA-II • It performs better than algorithm-independent strategies

  40. Future work • Evaluate on different noisy industrial SBO problems • Computing cluster with 100 workstations • Cooperation with Volvo • Obtain real output values for performance measurement • Accuracy Sampling • Selection Sampling: the distance to R is a scalar value → apply an existing ranking-and-selection method such as OCBA → simplifies the algorithm to a single fitness criterion

  41. Questions? • Thank you!
