Enhancing Experimental Design and Understanding with Deep Learning/AI



  1. Enhancing Experimental Design and Understanding with Deep Learning/AI. Vic Castillo, Ph.D., Computational Engineering, Lawrence Livermore National Laboratory. March 28, 2018. LLNL-PRES-748201. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344. Lawrence Livermore National Security, LLC

  2. Enhancing Experimental Design • Complex systems such as manufacturing processes, energy systems, and fusion reactors have a large design space. • Intelligent sampling / experimental design can help. • Simulation, like experiments, can be expensive. • DNNs can make good, fast-running surrogate models. Main idea: leverage ML and GPUs to help the designer navigate a complex design space at a rapid cadence by giving quicker feedback. This leads to more agile development and enhances creativity. LLNL-PRES-748201

  3. Example Application: Glass production LLNL-PRES-748201

  4. Generating the fast-running flow visualizer • A deep convolutional autoencoder is used to reduce the simulation state space to a reduced latent space • A fully connected neural network correlates control and design parameters to the latent space • Projection back to the full state space is done by the decoder LLNL-PRES-748201
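
A minimal TensorFlow/Keras sketch of this pipeline is below. The field resolution (64×64), latent size (32), and parameter count (6) are illustrative assumptions rather than values from the slide; the structure, a convolutional encoder/decoder plus a fully connected network from design parameters to the latent space, follows the bullets above.

```python
# Sketch of the fast-running flow visualizer (TensorFlow/Keras). Field resolution,
# latent size, and parameter count are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

LATENT = 32

# Convolutional autoencoder: compress the simulation state to a latent vector.
encoder = tf.keras.Sequential([
    layers.Input((64, 64, 1)),
    layers.Conv2D(16, 3, strides=2, activation="relu", padding="same"),
    layers.Conv2D(32, 3, strides=2, activation="relu", padding="same"),
    layers.Flatten(),
    layers.Dense(LATENT),
], name="encoder")

decoder = tf.keras.Sequential([
    layers.Input((LATENT,)),
    layers.Dense(16 * 16 * 32, activation="relu"),
    layers.Reshape((16, 16, 32)),
    layers.Conv2DTranspose(16, 3, strides=2, activation="relu", padding="same"),
    layers.Conv2DTranspose(1, 3, strides=2, activation="linear", padding="same"),
], name="decoder")

# Fully connected network: map control/design parameters to the latent space.
param_to_latent = tf.keras.Sequential([
    layers.Input((6,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(LATENT),
], name="param_to_latent")

# Fast surrogate: parameters -> latent code -> reconstructed flow field.
params_in = layers.Input((6,))
surrogate = Model(params_in, decoder(param_to_latent(params_in)))
```

Chaining the parameter network into the decoder is what makes the surrogate fast: a single forward pass stands in for a full simulation.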

  5. Parameter Space Sampling This is a review of a framework for developing and evaluating speculative sampling methods. Classical statistical sampling is used as a baseline and incorporated into the framework. A simple coupled-oscillator model with a six-dimensional design space serves as the prototype. A recent development from Google DeepMind is discussed, and example runs are used for discussion. LLNL-PRES-748201

  6. Simple Example: Coupled Oscillator. Design parameters: • Two masses {m1, m2} • Connected with a damping spring {k12, c12} • Tethered to the origin with damping springs {k1, c1, k2, c2} • Subject to a time-dependent body force {F1, F2, t1, t2, t3}. Coupled oscillator state space: {x1, x2, ẋ1, ẋ2}(t). A simple, explainable system with a reasonably complex design space: {m1, m2, k1, c1, k2, c2, k12, c12, F1, F2, t1, t2, t3}. LLNL-PRES-748201
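
A minimal NumPy/SciPy sketch of this prototype is below. The equations of motion are the standard linear mass-spring-damper form implied by the bullets; the body-force time profile (a simple on/off pulse keyed to t1, t2, t3) and the numeric values in `params` are assumptions, since the slide only names the parameters.

```python
# Coupled-oscillator prototype (NumPy/SciPy sketch). The body-force profile is an
# assumption; the slide only lists the parameters {F1, F2, t1, t2, t3}.
import numpy as np
from scipy.integrate import solve_ivp

def body_force(t, F1, F2, t1, t2, t3):
    # Assumed profile: F1 active on [t1, t2), F2 active on [t2, t3).
    f1 = F1 if t1 <= t < t2 else 0.0
    f2 = F2 if t2 <= t < t3 else 0.0
    return f1, f2

def rhs(t, state, m1, m2, k1, c1, k2, c2, k12, c12, F1, F2, t1, t2, t3):
    x1, x2, v1, v2 = state
    f1, f2 = body_force(t, F1, F2, t1, t2, t3)
    # Each mass is tethered to the origin and coupled to the other mass.
    a1 = (-k1 * x1 - c1 * v1 - k12 * (x1 - x2) - c12 * (v1 - v2) + f1) / m1
    a2 = (-k2 * x2 - c2 * v2 - k12 * (x2 - x1) - c12 * (v2 - v1) + f2) / m2
    return [v1, v2, a1, a2]

# Illustrative parameter values (not from the slide).
params = dict(m1=1.0, m2=1.0, k1=1.0, c1=0.1, k2=1.0, c2=0.1,
              k12=0.5, c12=0.05, F1=1.0, F2=0.5, t1=0.0, t2=2.0, t3=4.0)
sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0, 0.0, 0.0],
                args=tuple(params.values()), max_step=0.01)
```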

  7. Parameter Space Let's consider 6 parameters to vary in our design space: {c1, c12, c2, k1, k12, k2}. In a "coded" space, each parameter can be varied from -1 to +1. A scale can be assigned to describe regions. Scaled regions sit at the centers and extremes: pi ~ {-1, 0, +1}. In the figure, pi ∈ 0 ± 0.05, ∀ i. LLNL-PRES-748201

  8. Mapping parameter space For convenience, we map test regions that span the 6D hypercube onto a 2D grid (3³ × 3³ = 27 × 27). Each region has a scale. Regions span the space but may not completely cover it. LLNL-PRES-748201
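
One way such a mapping can be written is sketched below; the specific row/column convention (first three coded levels index the row in base 3, last three the column) is an assumption, not necessarily the one used in the figures.

```python
# Sketch of mapping the 3^6 = 729 region centers onto a 27 x 27 grid.
# The exact row/column convention is an assumption.
import itertools

LEVELS = (-1, 0, +1)

def region_to_grid(levels):
    """levels: 6-tuple of coded levels in {-1, 0, +1} -> (row, col) on a 27x27 grid."""
    digits = [l + 1 for l in levels]                 # shift {-1, 0, 1} -> {0, 1, 2}
    row = digits[0] * 9 + digits[1] * 3 + digits[2]  # base-3 index of first three levels
    col = digits[3] * 9 + digits[4] * 3 + digits[5]  # base-3 index of last three levels
    return row, col

# Enumerate every region of the 6D hypercube and its grid cell.
grid = {region_to_grid(r): r for r in itertools.product(LEVELS, repeat=6)}
assert len(grid) == 729
```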

  9. Box-Behnken (1960) • Classic statistical sampling provides a baseline • Guarantees a smooth quadratic fit in high-dimensional space • Designs are rotatable • 6D -> 720 permutations. Samples by number of factors D: 3 -> 12 + center, 4 -> 24 + center, 5 -> 40 + center, 6 -> 48 + center, 7 -> 56 + center, 8 -> 112 + center, 9 -> 96 + center, 10 -> 160 + center, 11 -> 176 + center, 12 -> 192 + center, 16 -> 384 + center. LLNL-PRES-748201
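
For illustration, the pair-wise construction below reproduces the classic Box-Behnken run counts for 3 to 5 factors (12, 24, 40 plus center points). The tabulated designs for 6 or more factors are built from balanced incomplete blocks instead, so this sketch is not a substitute for the designs listed above; dedicated design-of-experiments packages provide the tabulated designs directly.

```python
# Illustrative pair-wise Box-Behnken construction (NumPy). Matches the classic
# run counts for 3-5 factors; larger designs use balanced incomplete blocks.
from itertools import combinations, product
import numpy as np

def box_behnken_pairs(k, n_center=1):
    runs = []
    for i, j in combinations(range(k), 2):       # every pair of factors
        for a, b in product((-1, 1), repeat=2):  # 2^2 factorial on that pair
            row = [0] * k
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0] * k] * n_center                 # center point(s)
    return np.array(runs, dtype=float)

design = box_behnken_pairs(3)   # 12 edge-midpoint runs + 1 center
print(design.shape)             # (13, 3)
```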

  10. Learning the Transition Function A neural network is used to learn the transition function: design parameters {c1, c12, c2, k1, k12, k2} + body force F(F1, F2, t) + state {x1, x2, ẋ1, ẋ2}(t) → new state {x1, x2, ẋ1, ẋ2}(t + Δt). Learned dynamics. LLNL-PRES-748201
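
A minimal Keras sketch of such a transition network is below; the layer widths, activations, and the use of an L1 (mean-absolute-error) training loss are assumptions, chosen to match the error metric on the following slides.

```python
# Sketch of the learned transition function (TensorFlow/Keras). Inputs: 6 design
# parameters, the instantaneous body force on each mass, and the 4-component state;
# output: the state one time step later. Layer widths are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

STATE_DIM, PARAM_DIM, FORCE_DIM = 4, 6, 2

inputs = layers.Input((PARAM_DIM + FORCE_DIM + STATE_DIM,))
h = layers.Dense(128, activation="tanh")(inputs)
h = layers.Dense(128, activation="tanh")(h)
next_state = layers.Dense(STATE_DIM)(h)           # {x1, x2, x1_dot, x2_dot}(t + dt)

transition = tf.keras.Model(inputs, next_state)
transition.compile(optimizer="adam", loss="mae")  # L1-style loss matches the error metric
# transition.fit(concatenated [params, forces, state_t] batches, state_t_plus_dt, ...)
```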

  11. Prediction Error The system was trained from region {0,0,0,0,0,0} with a scale of 0.10. Prediction is done in region {-1,0,0,0,0,0} with a scale of 0.01. The predictor has not seen this dynamic, but tries! Error is calculated as the integrated L1 norm: Error = ∫_{t0}^{tf} |Pred(t) − Calc(t)| dt. Extrapolated dynamics. LLNL-PRES-748201
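
Evaluated numerically from sampled trajectories, that metric might look like the following sketch (trapezoidal integration is an assumption; the slide only specifies the integral form).

```python
# Integrated L1 prediction error, evaluated numerically from sampled trajectories.
import numpy as np

def integrated_l1_error(t, predicted, calculated):
    """Error = integral over [t0, tf] of |Pred(t) - Calc(t)| dt, per state component.

    t: (n_steps,) time samples; predicted/calculated: (n_steps, 4) arrays of
    {x1, x2, x1_dot, x2_dot}.
    """
    return np.trapz(np.abs(predicted - calculated), t, axis=0)
```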

  12. Mapping Prediction Error Error for each region is mapped to the grid for each state component. Patterns can reveal parameter interactions. Error is calculated as the integrated L1 norm: Error = ∫_{t0}^{tf} |Pred(t) − Calc(t)| dt. LLNL-PRES-748201

  13. Population-based Training • New method from Google DeepMind (28 November 2017) • Used to search for optimal hyperparameters for DNNs • Can be used as a sampling method • Leverages explore vs. exploit. LLNL-PRES-748201
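
In the spirit of that method, a toy exploit/explore step over a population of workers might look like the sketch below. This is purely illustrative and not DeepMind's implementation; the quartile cutoff and the ×0.8/×1.2 perturbation factors are arbitrary choices.

```python
# Toy population-based-training step (illustrative only). Each low-scoring worker
# exploits (copies a better worker) and explores (perturbs the copied values).
import random, copy

def pbt_step(population):
    """population: list of dicts with 'score' and numeric 'hyperparams' values."""
    ranked = sorted(population, key=lambda w: w["score"], reverse=True)
    cutoff = max(1, len(ranked) // 4)
    for worker in ranked[-cutoff:]:                  # bottom quartile
        donor = random.choice(ranked[:cutoff])       # exploit: copy a top worker
        worker["hyperparams"] = copy.deepcopy(donor["hyperparams"])
        for key in worker["hyperparams"]:            # explore: perturb each value
            worker["hyperparams"][key] *= random.choice((0.8, 1.2))
    return population
```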

  14. Experiment: Agent-Based Sampling • Six agents explore the center region {0,0,0,0,0,0} with scale = 0.10 • State transitions are stored in a database • Random transitions are used to train Prediction 0 • Local error in all regions is calculated • Agents move to a random Box-Behnken region (scale = 0.10) and explore • Initial local error is calculated with the current prediction • Agents with low error move to help others • Simulations continue (see the sketch after slide 16). LLNL-PRES-748201

  15. Experiment: Agent-Based Sampling (continued) LLNL-PRES-748201

  16. Experiment: Agent-Based Sampling (continued) LLNL-PRES-748201
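
In outline, the sampling loop on slides 14–16 might be coded as follows. The helpers `run_simulation`, `train_predictor`, and `local_error` are trivial random stand-ins for the oscillator solver, the transition-network training, and the integrated-L1 error; the candidate-region set and the relocation rule for low-error agents are likewise assumptions, since the slides do not spell them out.

```python
# Sketch of the agent-based sampling loop (structure only; helpers are stand-ins).
import itertools, random

LEVELS = (-1, 0, +1)
ALL_REGIONS = list(itertools.product(LEVELS, repeat=6))         # 729 regions
CANDIDATES = [r for r in ALL_REGIONS if any(r)]                 # stand-in for the
                                                                # Box-Behnken regions

def run_simulation(region, scale):    # stand-in for the oscillator solver
    return [(region, scale, random.random()) for _ in range(100)]

def train_predictor(transitions):     # stand-in for transition-network training
    return {"n_train": len(transitions)}

def local_error(predictor, region):   # stand-in for the integrated-L1 error
    return random.random()

def agent_sampling(n_agents=6, rounds=8):
    db = []
    for _ in range(n_agents):                                   # round 0: center region
        db += run_simulation(region=(0,) * 6, scale=0.10)
    predictor = train_predictor(random.sample(db, min(len(db), 500)))

    positions = [random.choice(CANDIDATES) for _ in range(n_agents)]
    for _ in range(rounds):
        for region in positions:                                # agents explore
            db += run_simulation(region=region, scale=0.10)
        predictor = train_predictor(random.sample(db, min(len(db), 500)))
        err = {r: local_error(predictor, r) for r in ALL_REGIONS}
        # Assumed relocation rule: agents in low-error regions move to the worst region.
        worst = max(err, key=err.get)
        median = sorted(err.values())[len(err) // 2]
        positions = [worst if err[r] < median else r for r in positions]
    return predictor, err
```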

  17. Speculative Sampling LLNL-PRES-748201

  18. Simple Sampling vs. Speculative Sampling: Round 0, sampling the center region LLNL-PRES-748201

  19. Simple Sampling vs. Speculative Sampling: Round 1, sampling 7/729 regions LLNL-PRES-748201

  20. Simple Sampling vs. Speculative Sampling: Round 2, sampling 13/729 regions LLNL-PRES-748201

  21. Simple Sampling vs. Speculative Sampling: Round 3, sampling 19/729 regions LLNL-PRES-748201

  22. Simple Sampling vs. Speculative Sampling: Round 4, sampling 25/729 regions LLNL-PRES-748201

  23. Simple Sampling vs. Speculative Sampling: Round 5, sampling 31/729 regions LLNL-PRES-748201

  24. Simple Sampling vs. Speculative Sampling: Round 6, sampling 37/729 regions LLNL-PRES-748201

  25. Simple Sampling vs. Speculative Sampling: Round 7, sampling 43/729 regions LLNL-PRES-748201

  26. Simple Sampling vs. Speculative Sampling: Round 8, sampling 49/729 regions LLNL-PRES-748201

  27. TensorFlow implementation • TensorFlow can be used for the solver • Allows hardware control: solver/simulator -> CPU, learning -> GPU, inference -> TPU/neuromorphic • Allows algorithm instrumentation. LLNL-PRES-748201
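
A minimal TensorFlow 2.x sketch of that CPU/GPU split is below. It assumes a visible GPU (or soft device placement); the tiny model and random batch stand in for the transition network and the stored state transitions.

```python
# Sketch of explicit device placement in TensorFlow 2.x: solver work on the CPU,
# one training step on the GPU. Model and data are placeholders.
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(12,))])
optimizer = tf.keras.optimizers.Adam()

with tf.device("/CPU:0"):
    # Solver/simulator side: generate (or load) state-transition training pairs.
    inputs_batch = tf.random.uniform((64, 12))
    targets_batch = tf.random.uniform((64, 4))

with tf.device("/GPU:0"):
    # Learning side: one L1-loss gradient step of the surrogate.
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.abs(model(inputs_batch) - targets_batch))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```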

  28. Discussion • Simple system for prototyping – coupled oscillator • Small parameter space – {c1, c12, c2, k1, k12, k2} • Mapping parameter space to a 2D grid • Classical sampling baseline – Box-Behnken • Learning the transition function • Mapping prediction error • Population-based search • Experiments • TensorFlow implementation LLNL-PRES-748201
