Predictability of Coarse-Grained Models of Atomistic Systems in the Presence of Uncertainty. J. Tinsley Oden, Institute for Computational Engineering and Sciences, The University of Texas at Austin. Belytschko Lecture, Theoretical and Applied


  1. Cox's Theorem. Every natural extension of Aristotelian logic with uncertainties is Bayesian. Precisely: there exists a continuous, strictly increasing, real-valued, non-negative function p, the plausibility of a proposition conditioned on information X, such that for every pair of propositions A and B and consistent X (i.e., there is no A such that (A|X) and (Ā|X) are both true):
1. p(A|X) = 0 iff A is false given the information in X
2. p(A|X) = 1 iff A is true given the information in X
3. 0 ≤ p(A|X) ≤ 1
4. p(A ∧ B|X) = p(A|X) p(B|AX)   (Bayes' rule)
5. p(Ā|X) = 1 − p(A|X) if X is consistent
Richard Threlkeld Cox, Am. J. Physics, 1946; Edwin T. Jaynes, Probability Theory: The Logic of Science, 2003; Kevin van Horn, J. Approx. Reasoning, 2003. J.T. Oden, Belytschko Lecture, October 2014

  4. Post-Cox Developments
• Halpern, Joseph Y., counterexample to Cox's Theorem, then a correction in an "Addendum to Cox's Theorem" (1999), then refuted by van Horn (2003)
• Arnborg, Stefan and Sjödin, Gunnar (1999, 2000)
• Van Horn, Kevin S., "Constructing a Logic of Plausible Inference: A Guide to Cox's Theorem," J. Approx. Reasoning (2003)
• Jaynes, Edwin T., Probability Theory: The Logic of Science (2003)
• Dupre, Maurice J. and Tipler, Frank J., "A Trivial Proof of Cox's Theorem" (2009)
• McGrayne, Sharon B., The Theory That Would Not Die (2012)
• Freedman, David (1999, 2006)
• Kleijn, B. J. K. and van der Vaart, A. W., The Bernstein-von Mises Theorem Under Misspecification (2012)
• Owhadi, Houman, Scovel, Clint and Sullivan, Tim, "Bayesian Brittleness: Why no Bayesian Model is Good Enough" (2013)

  5. The Logic of Science: Bayesian Inference. Thomas Bayes (1763): "An Essay Towards Solving a Problem in the Doctrine of Chances," PRS∗.
Bayes' Rule: P(A|B) = P(B|A) P(A) / P(B)
Logical probability ⊃ frequency-based theory.
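Bayes' rule on the slide can be exercised with a small numeric sketch (a hypothetical two-proposition example; the prior and conditional values below are illustrative, not from the lecture):

```python
def bayes_posterior(prior_a, lik_b_given_a, lik_b_given_not_a):
    """P(A|B) = P(B|A) P(A) / P(B), with the evidence P(B)
    expanded by total probability over A and not-A."""
    evidence = lik_b_given_a * prior_a + lik_b_given_not_a * (1.0 - prior_a)
    return lik_b_given_a * prior_a / evidence

# Hypothetical numbers: P(A) = 0.3, P(B|A) = 0.8, P(B|~A) = 0.1
posterior = bayes_posterior(0.3, 0.8, 0.1)
```

Observing B raises the plausibility of A from 0.3 to about 0.77 here, because B is eight times more probable under A than under its negation.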

  6. Bayesian Model Calibration, Validation, and Prediction
P(Θ) = a parametric model class = { A(θ, S, u(θ, S)) = 0 }, where A is the mathematical model, θ the parameters, S the scenario, and u(θ, S) the solution.
Bayes' rule: π(θ|y) = π(y|θ) π(θ) / π(y)   (posterior = likelihood × prior / evidence)
Reality vs. model: Ω_i + ε_i = y_i and Ω_i − d_i(u(θ, S)) = η_i, with Ω_i = reality.
Likelihood: p(ε_i + "η_i") = p(y_i − d_i(θ)) = π(y_i|θ).

  7. 2. The Tyranny of Scales: Predictivity of Multiscale Models. All-Atom (AA) Model ⇒ Coarse-Grained (CG) Model ⇒ Macro (Continuum) Model. The confluence of all challenges in Predictive Science: Exactly what is the model? Is it "valid"? What is the level of uncertainty in the prediction?

  8. Nanomanufacturing a) Semiconductor Component b) Multiblock Component National Medal of Technology, 2008 Japan Prize, 2013 C. Grant Willson, UT Austin c) Manufacturing detail J.T. Oden Belytschko Lecture October 2014 13 / 93

  9. Motivation for Coarse Graining. A 30 nm edge ≈ 600 atoms ⇒ 216,000,000 atoms in a cube ⇒ 216,000,000 × 3 degrees of freedom. Coarse-graining to 20 particles per edge ⇒ 8,000 particles in a cube ⇒ 24,000 degrees of freedom.
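The degree-of-freedom bookkeeping on this slide is simple arithmetic, sketched here to make the reduction explicit:

```python
# Degrees of freedom for a 30 nm cube, following the slide's counts.
atoms_per_edge = 600
n_atoms = atoms_per_edge ** 3        # 216,000,000 atoms in the cube
aa_dof = 3 * n_atoms                 # three Cartesian coordinates per atom

beads_per_edge = 20                  # coarse-grained particles per edge
n_beads = beads_per_edge ** 3        # 8,000 CG particles
cg_dof = 3 * n_beads                 # 24,000 degrees of freedom

reduction = aa_dof / cg_dof          # factor of 27,000 fewer unknowns
```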

  10. Coarse Graining as a Reduced Order Method
• M.L. Huggins, Journal of Chemical Physics, 1941
• P.J. Flory, Journal of Chemical Physics, 1942
• S. Izvekov, M. Parrinello, C.J. Burnham, and G.A. Voth, Journal of Chemical Physics, 2004
• S. Izvekov and G.A. Voth, Journal of Physical Chemistry B, 2005; Journal of Chemical Physics, 2005, 2006
• W.G. Noid, J.-W. Chu, P. Liu, G.S. Ayton, V. Krishna, S. Izvekov, G.A. Voth, A. Das, and H.C. Andersen, Journal of Chemical Physics, 2008
• J.W. Mullinax and W.G. Noid, Journal of Chemical Physics, 2009
• S. Izvekov, P.W. Chung, B.M. Rice, Journal of Chemical Physics, 2010
• E. Brini, V. Marcon, and N.F.A. van der Vegt, Physical Chemistry Chemical Physics, 2011
• A. Chaimovich and M.S. Shell, Journal of Chemical Physics, 2011
• E. Brini and N.F.A. van der Vegt, Journal of Chemical Physics, 2012
• S.P. Carmichael and M.S. Shell, Journal of Physical Chemistry B, 2012
• Y. Li, B.C. Abberton, M. Kröger, W.K. Liu, Polymers, 2013
• W.G. Noid, Journal of Chemical Physics, 2013

  11. Various CG Methods
• Force-matching methods
• Multiscale coarse-graining
• Iterative Boltzmann inversion
• Reverse Monte Carlo
• Conditional reversible work
• Minimum relative entropy
…
While often advocated, few of these methods take into account uncertainties in data, parameters, model inadequacy, …

  12. Parametric Model Classes. M = {M₁, M₂, …, M_k}, each model class M_i with its own coarse-graining map G_i.

  13. CG Model
∂U(G(ω); θ)/∂R_i − F_i = 0, i = 1, 2, …, n (+B.C.'s)
U(G(ω); θ) = Σ_{i=1}^{N_co} (k_i/2)(R − R_{0i})² + Σ_{i=1}^{N_θ} (κ_i/2)(θ_i − θ_{0i})²
  + Σ_{i=1}^{N_ω} (κ_{ti}/2)(1 + cos(iω − γ))
  + Σ_{i=1}^{N} Σ_{j=i+1}^{N} { 4ε_ij [ (σ_ij/R_ij)¹² − (σ_ij/R_ij)⁶ ] + q_i q_j/(4πε₀ R_ij) }
θ = CG model parameters = { k_i, κ_i, κ_{ti}, ε_ij, γ, σ_ij }
Macroscale Model
Div ∂W(µ; w)/∂F − f = 0, x ∈ Ω ⊂ ℝ³ (+B.C.'s)
W(µ; w) = α(I₁(C) − 3) + β(I₂(C) − 3) − κ ln J(C), with C = FᵀF, F = I + ∇w
µ = macromodel parameters = (α, β, κ)
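The bonded and non-bonded terms of the CG potential above are standard functional forms; a minimal sketch of the two building blocks (harmonic and Lennard-Jones-plus-Coulomb pair terms, with illustrative units left to the caller):

```python
import math

def harmonic(x, x0, k):
    """Bond or angle term: (k/2)(x - x0)^2."""
    return 0.5 * k * (x - x0) ** 2

def lj_coulomb(r, eps, sigma, qi, qj, eps0=8.8541878128e-12):
    """Non-bonded pair term from the slide:
    4*eps[(sigma/r)^12 - (sigma/r)^6] + qi*qj / (4*pi*eps0*r)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6) + qi * qj / (4.0 * math.pi * eps0 * r)
```

With zero charges, the pair term vanishes at r = σ and reaches its minimum value −ε at r = 2^(1/6) σ, the usual Lennard-Jones landmarks.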

  14. Parametric Model Classes. Each model class M_k is assembled from potential terms (bond, angle, dihedral, non-bonded interaction): M_i = {P_{i1}(θ_{i1}), P_{i2}(θ_{i2}), …, P_{im}(θ_{im})}, i = 1, 2, …, k. For simplicity in notation: M = {P₁(θ₁), P₂(θ₂), …, P_m(θ_m)}.

  15. What are the Models? AA model: atomic positions rⁿ = {r₁, r₂, …, r_n} with masses m_α. CG model: particle positions R^N = {R₁, R₂, …, R_N}, related by the coarse-graining map "G(rⁿ) = G(ω) = R^N": G_{Aα} r_α = R_A, G_{αA} R_A = r_α, 1 ≤ α ≤ n, 1 ≤ A ≤ N.
Observables (phase averages):
⟨q⟩_AA = ∫_{Γ_AA} ρ_AA(rⁿ) q(rⁿ) drⁿ = lim_{τ→∞} τ⁻¹ ∫₀^τ q(rⁿ(t)) dt
Q(θ) = ∫_{Γ_CG} ρ_CG(R^N, θ) q(R^N) dR^N = lim_{τ→∞} τ⁻¹ ∫₀^τ q(R^N(t), θ) dt

  16. What are the Models?
AA: m_{αβ} r̈_{βi} + ∂u_AA(rⁿ)/∂r_{αi} − f_{αi} = 0, 1 ≤ α, β ≤ n, 1 ≤ i ≤ 3
CG: M_{AB} R̈_{Bi} + ∂U(R^N, θ)/∂R_{Ai} − F_{Ai} = 0, 1 ≤ A, B ≤ N, 1 ≤ i ≤ 3
Adjoint: m_{αβ} z̈_{βi} + H_{αiβj}(rⁿ) z_{βj} − ∂q(rⁿ)/∂r_{αi} = 0, with H_{αiβj}(rⁿ) = ∂²u_AA(rⁿ)/∂r_{αi} ∂r_{βj}

  17. What are the Models? Residual:
R(R^N(θ), zⁿ) = lim_{τ→∞} τ⁻¹ ∫₀^τ [ z_{αi} G_{αA} M_{AB} R̈_{Bi} + z_{αi} G_{αB} ∂U(R^N, θ)/∂R_{Bi} − z_{αi} G_{αB} F_{Bi} ] dt
Theorem. (Under suitable smoothness conditions) the error in the observables due to the CG approximation is, ∀ θ ∈ Θ,
ε(θ) = ⟨q⟩ − Q(θ) = R(R^N(θ), zⁿ) + ∆ ≈ R(R^N(θ), zⁿ),
where ∆ is a remainder of higher order in "‖rⁿ − R^N‖".

  18. Information Entropy. Suppose
⟨Q(rⁿ)⟩ = ∫_Γ ρ(rⁿ) q(rⁿ) drⁿ
⟨Q(R^N(θ))⟩ = ∫_Γ ρ(rⁿ) q(G(rⁿ(θ))) drⁿ
Then the observable error ⟨Q(rⁿ)⟩ − ⟨Q(R^N(θ))⟩ = R(R^N(θ), zⁿ) can be replaced by the relative entropy between the AA and mapped distributions:
D_KL(ρ(rⁿ) ‖ ρ(G(rⁿ(θ)))) = ∫ ρ(ω) log [ ρ(ω) / ρ(G(ω)) ] dω

  20. 3. Bayesian Model Calibration, Validation, and Prediction “The essence of ‘honesty’ or ‘objectivity’ demands that we take into account all of the evidence we have, not just some arbitrarily chosen subset of it.” − E.T. Jaynes, 2003 J.T. Oden Belytschko Lecture October 2014 24 / 93

  21. Climbing the Prediction Pyramid (prior π(θ), observations, scenarios, QoI):
Calibration (S_c, y_c): π(θ|y_c) = π(y_c|θ) π(θ) / π(y_c)
Validation (S_v, y_v): π(θ|y_v, y_c) = π(y_v|θ, y_c) π(θ|y_c) / π(y_v|y_c)
Prediction (S_p, QoI): π(Q) = π(Q|θ, S_v, S_c)

  22. Basic Ideas:
• Use statistical inverse methods based on Bayes' rule to calibrate parameters: π(θ|y_c) = π(y_c|θ) π(θ) / π(y_c). (What is the likelihood function? What are the priors? How does one compute the posterior?)
• Design validation experiments to challenge model assumptions and inform the model of QoIs: π(θ|y_v, y_c) = π(y_v|θ, y_c) π(θ|y_c) / π(y_v|y_c). (Is the validation experiment well chosen? Has it resulted in an information gain over the calibration?)
• Is the model "valid" (not invalid) for the validation QoI (observable) given the data and predictions π(Q_vk|y_vk), π(Q|y_c)? (What is the criterion for "validity" of a model?)
• Solve the forward problem for the QoI (not observable) using validated parameters: π(Q) = ∫ π(Q|θ, y_vk, y_{vk−1}, …, y_c) dθ. (How do we solve the forward problem?)
• Compute the quantity of uncertainty in π(Q). (How do we "quantify" uncertainty?)

  28. The Prior. We seek a logical measure H(p) of the amount of uncertainty in a probability distribution p = {p₁, p₂, …, p_n}, p_i = p(x_i), satisfying four desiderata:
1. H(p) ∈ ℝ
2. H ∈ C⁰(ℝ)
3. "common sense": H(1/n, 1/n, …) grows as n → ∞
4. Consistency
Shannon's Theorem. The only function satisfying the four logical desiderata is the information entropy
H(p) = −Σ_{i=1}^n p_i log p_i   (or −∫ p log(p/m) dx).
Moreover, the actual probability p maximizes H(p) subject to constraints imposed by available information.
Relative Entropy. Given two pdfs p and q, the relative entropy is given by the Kullback-Leibler divergence:
D_KL(p ‖ q) = ∫ p(x) log [ p(x)/q(x) ] dx = −H(p) + H(p, q)
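The two quantities on this slide are one-liners in practice; a sketch of discrete Shannon entropy and KL divergence (natural log, with the usual 0·log 0 = 0 convention):

```python
import math

def entropy(p):
    """Shannon information entropy H(p) = -sum_i p_i log p_i."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

def kl_divergence(p, q):
    """Kullback-Leibler divergence D_KL(p || q) = sum_i p_i log(p_i / q_i)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)
```

Two sanity checks mirror the slide's "common sense" desideratum and the non-negativity of relative entropy: the uniform distribution maximizes H, and D_KL is zero only when the two distributions coincide.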

  29. The Prior. Maximize H(p) subject to prior-information constraints:
• Known mean ⟨x⟩:
L(p, λ) = H(p) − λ₀(Σ_{i=1}^n p_i − 1) − λ₁(Σ_{i=1}^n p_i x_i − ⟨x⟩)
⇒ π(θ) = (1/⟨x⟩) exp{ −x/⟨x⟩ }
• Known mean ⟨x⟩ and variance σ²_x:
L(p, λ) = H(p) − λ₀(Σ_{i=1}^n p_i − 1) − λ₁(Σ_{i=1}^n p_i x_i − ⟨x⟩) − λ₂(Σ_{i=1}^n p_i (x_i − ⟨x⟩)² − σ²_x)
⇒ π(θ) = (1/√(2π) σ_x) exp{ −(x − ⟨x⟩)²/(2σ²_x) }
E. T. Jaynes (1988)
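The second maximum-entropy result can be checked numerically: among distributions with the same mean and variance, the Gaussian should have the largest entropy. A sketch comparing a unit-variance Gaussian against a unit-variance Laplace density on a grid (Riemann-sum approximation of the differential entropy; the grid and comparison density are illustrative choices, not from the lecture):

```python
import math

def differential_entropy(pdf, xs, dx):
    """Riemann-sum approximation of -integral p(x) log p(x) dx on a uniform grid."""
    return -sum(pdf(x) * math.log(pdf(x)) * dx for x in xs if pdf(x) > 1e-300)

dx = 0.01
xs = [i * dx for i in range(-1000, 1001)]          # grid on [-10, 10]

gauss = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)  # mean 0, var 1
b = 1 / math.sqrt(2)                                # Laplace scale giving var = 2b^2 = 1
laplace = lambda x: math.exp(-abs(x) / b) / (2 * b)

h_gauss = differential_entropy(gauss, xs, dx)       # analytic value 0.5 ln(2*pi*e)
h_laplace = differential_entropy(laplace, xs, dx)   # analytic value 1 + ln(2b)
```

As expected, h_gauss exceeds h_laplace even though both densities share the same first two moments, which is exactly why the constrained maximization on the slide lands on the Gaussian prior.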

  30. Determining Calibration Priors: Bonds
Bond equilibrium distance R₀: ⟨R₀⟩ = 2.5219, σ²_{R₀} = 4.1097 × 10⁻³
Spring constant k_R: ⟨k_R⟩ = k_B T / 2σ²_{R₀} = 72.5264
Equilibrium angle θ₀: ⟨θ₀⟩ = 105.5117, σ²_{θ₀} = 192.8262
Spring constant k_θ: ⟨k_θ⟩ = k_B T / 2σ²_{θ₀} = 1.5458 × 10⁻³

  31. The Likelihood Function. R.A. Fisher, 1922: "The likelihood that any parameter should have any assigned value is proportional to the probability that if this were true the totality of all observations should be that observed."
Consider n i.i.d. random observables y₁, y₂, …, y_n. For each sample, π(y_i|θ) = p(y_i − d_i(θ)). For many samples,
π(y₁, y₂, …, y_n|θ) = Π_{i=1}^n π(y_i|θ)
Then the log-likelihood is
L_n(θ) = Σ_{i=1}^n log π(y_i|θ)
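With the common additive-Gaussian-noise assumption for p(y_i − d_i(θ)) (an illustrative choice; the lecture leaves the noise model open), the log-likelihood above becomes a short function:

```python
import math

def log_likelihood(ys, d_theta, sigma):
    """L_n(theta) = sum_i log pi(y_i | theta), assuming additive Gaussian noise:
    pi(y_i | theta) = N(y_i - d_i(theta); 0, sigma^2)."""
    const = -0.5 * math.log(2 * math.pi * sigma ** 2)
    return sum(const - (y - d) ** 2 / (2 * sigma ** 2)
               for y, d in zip(ys, d_theta))
```

The likelihood is maximized when the model outputs d_i(θ) match the observations exactly, and each mismatch is penalized quadratically on the scale of σ.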

  32. The Model Evidence and Model Plausibilities: Which model is "best"?
M = set of parametric model classes = {P₁, P₂, …, P_m}; each P_j has its own likelihood and parameters θ_j.
Bayes' rule in expanded form:
π(θ_j|y, P_j, M) = π(y|θ_j, P_j, M) π(θ_j|P_j, M) / π(y|P_j, M), 1 ≤ j ≤ m
model evidence: π(y|P_j, M) = ∫ π(y|θ_j, P_j, M) π(θ_j|P_j, M) dθ_j
Now apply Bayes' rule to the evidence:
ρ_j = π(P_j|y, M) = π(y|P_j, M) π(P_j|M) / π(y|M) = the posterior model plausibility
Σ_{j=1}^m ρ_j = (1/π(y|M)) Σ_{j=1}^m π(y|P_j, M) π(P_j|M) = 1

  33. 4. The Prediction Process: Traveling up the Prediction Pyramid. Molecular unit (calibration scenarios S_c) → polymer chains and crosslinks = RPCs (validation scenarios S_v) → prediction scenario S_p, with Q = total energy per unit volume.

  34. SFIL Coarse Graining. Constituents of the etch barrier: Monomer 1, Monomer 2, Crosslinker, Initiator (chemical structures shown). Farrell and Oden 2013.

  37. SFIL Calibration Scenarios: S_c = {S_c1, S_c2, S_c3}

  38. SFIL Coarse Graining. AA system: 827 atoms, 503 parameters. CG system: 61 particles.

  39. SFIL Validation Scenario: S_v. Series of scenarios increasing in size: S_{v,1}, S_{v,2}, S_{v,3}. For each scenario compute the QoI:
Q = ∫_{Γ_AA} ρ(rⁿ) u_AA(rⁿ) drⁿ;  Q_{v,k}(θ) = ∫_{Γ_CG,k} ρ(R^N) U_CG(R^N; θ) dR^N
with ρ(rⁿ) ∝ exp{ −β u(rⁿ) }

  40. SFIL Validation Scenario: S_v. Compare the computed QoI to AA data: if
D_KL( π(u_AA|S_v) ‖ π(Q_v) ) < γ_tol
the model is considered validated.
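In practice both distributions in the validation criterion are available only through samples (AA trajectory data and CG predictions), so the KL divergence is typically estimated from histograms. A sketch of that test (the binning scheme and the small floor that guards empty bins are illustrative choices, not from the lecture):

```python
import math

def kl_from_samples(samples_p, samples_q, bins=20):
    """Histogram-based estimate of D_KL(p || q) from two sample sets.
    A tiny floor on the bin counts keeps empty q-bins from producing infinities."""
    lo = min(min(samples_p), min(samples_q))
    hi = max(max(samples_p), max(samples_q))

    def hist(samples):
        counts = [0.0] * bins
        for x in samples:
            i = min(int((x - lo) / (hi - lo) * bins), bins - 1)
            counts[i] += 1.0
        n = sum(counts)
        return [(c + 1e-12) / (n + bins * 1e-12) for c in counts]

    p, q = hist(samples_p), hist(samples_q)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def is_validated(d_kl, gamma_tol):
    """The model passes the validation test when D_KL < gamma_tol."""
    return d_kl < gamma_tol
```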

  41. SFIL Model Classes. Fifteen candidate model classes P₁–P₁₅, each combining a subset of bond, angle, dihedral, and non-bonded terms, with parameter counts: P₁ 12, P₂ 18, P₃ 30, P₄ 32, P₅ 44, P₆ 50, P₇ 62, P₈ 96, P₉ 108, P₁₀ 114, P₁₁ 126, P₁₂ 128, P₁₃ 140, P₁₄ 146, P₁₅ 158 (all four term types).

  42. Sensitivity Analysis
• PIRT (Phenomena Identification and Ranking Table)
• Importance measures (Hora and Iman, 1995)
• Correlation ratios (McKay, 1995)
• Sensitivity analysis (Saltelli, Chan, Scott, 2000; Saltelli et al., 2008)
• Variance-based: S_{i₁,…,i_k} = V_{i₁,…,i_k}(Y) / V(Y)
• Entropy-based: KL_i(p₁‖p₀) = ∫ p₁(y(θ₁, θ₂, …, θ̄_i, …, θ_m)) log [ p₁(y(θ₁, θ₂, …, θ̄_i, …, θ_m)) / p₀(y(θ₁, θ₂, …, θ_i, …, θ_m)) ] dy = D_KL, with θ̄_i = ⟨θ_i⟩
• Scatter plots, etc.
Saltelli, A., et al. (2001); Auder, B. and Iooss, B. (2009); Chen, W. et al. (2005)

  43. Sensitivity Analysis: Variance-Based Method
Y(θ) = model output (e.g., Y(θ) = ⟨U(·; θ)⟩_CG)
V(Y) = output variance = E(Y²) − E²(Y)
V(Y) = V_{θ∼i}[E_{X_i}(Y|θ_{∼i})] + E_{θ∼i}[V_{X_i}(Y|θ_{∼i})]
S_{Ti} = 1 − V_{θ∼i}[E_{θ_i}(Y|θ_{∼i})] / V(Y)   (total effect sensitivity index)
Example: polyethylene chain with 24 beads. Saltelli, A., et al. (2001)
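The total-effect index S_Ti can be estimated by Monte Carlo with paired sample matrices; the sketch below uses Jansen's estimator (one common variant of the variance-based methods cited on the slide, not necessarily the exact estimator used in the lecture). The toy model is hypothetical: its third parameter never enters, so its total-effect index should vanish.

```python
import random

def total_effect_indices(f, dim, n=4096, seed=0):
    """Jansen's Monte Carlo estimator of the total-effect indices S_Ti.
    Draws two independent sample matrices A, B on [0,1)^dim; for each input i,
    AB_i is A with column i replaced by B's column i."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(a) for a in A]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    st = []
    for i in range(dim):
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        st.append(sum((ya - yb) ** 2 for ya, yb in zip(fA, fABi)) / (2 * n * var))
    return st

# Hypothetical additive model: theta_3 does not enter, so S_T3 = 0.
st = total_effect_indices(lambda t: 4.0 * t[0] + 1.0 * t[1], 3)
```

This mirrors the dihedral finding on the following slides: a parameter the output ignores shows a total-effect index at (or numerically near) zero.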

  44. Sensitivity Analysis: Monte Carlo Scatterplots J.T. Oden Belytschko Lecture October 2014 42 / 93

  45. Sensitivity Analysis: Comparison. The sensitivity indices and scatterplots show that the dihedral parameters are unimportant, but how unimportant are they?

  46. Occam's Razor
Principle of Occam's Razor: among competing theories that lead to the same prediction, the one that relies on the fewest assumptions is the best. When choosing among a set of models: the simplest valid model is the best choice.
• simple ⇒ number of parameters
• valid ⇒ passes the Bayesian validation test
How do we choose a model that adheres to this principle?

  48. The Occam-Plausibility Algorithm
START: identify a set of possible models M̄ = {P̄₁, …, P̄_m}.
SENSITIVITY ANALYSIS: eliminate models with parameters to which the model output is insensitive.
OCCAM STEP: choose the model(s) in the lowest Occam category, M* = {P*₁, …, P*_m}.
CALIBRATION STEP: calibrate all models in M*.
PLAUSIBILITY STEP: compute plausibilities and identify the most plausible model P*_j.
VALIDATION STEP: submit P*_j to the validation test. If P*_j is valid, use the validated parameters to predict the QoI. If not: when P*_j does not have the most parameters in M̄, choose the models in the next Occam category (ITERATIVE OCCAM STEP) and repeat; otherwise identify a new set of possible models.
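The control flow of the flowchart can be sketched as a short loop. The three callables below stand in for the calibration, plausibility, and validation machinery developed on the other slides; they are placeholders, not part of the lecture's implementation:

```python
def occam_plausibility_algorithm(categories, calibrate, plausibility, validate):
    """Skeleton of the OPAL loop. `categories` lists the Occam categories in
    order of increasing parameter count, each a list of candidate models.
    Returns the first validated most-plausible model, or None when every
    category is exhausted (i.e., a new model set must be identified)."""
    for category in categories:                        # OCCAM / ITERATIVE OCCAM STEP
        calibrated = [calibrate(m) for m in category]  # CALIBRATION STEP
        best = max(calibrated, key=plausibility)       # PLAUSIBILITY STEP
        if validate(best):                             # VALIDATION STEP
            return best                                # predict QoI with validated params
    return None                                        # revisit the set of possible models
```

A toy run: with two categories where only model "c" passes validation, the loop rejects the simpler category's winner and returns "c" from the next category.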

  49. 5. Exploratory Example: Polyethylene. Consider as an example polyethylene, C₂₄H₅₀.


  51. Example: Occam-Plausibility Algorithm Consider as an example polyethylene Define the coarse-grained map: 2 carbons per bead J.T. Oden Belytschko Lecture October 2014 48 / 93

  52. Example: Occam-Plausibility Algorithm (cont). Representation of the CG potential using the OPLS form. Fifteen candidate models P₁–P₁₅ combine bond, angle, dihedral, and non-bonded terms, with parameter counts: P₁–P₃ (2 each), P₄–P₆ and P₈ (4 each), P₇ and P₉–P₁₁ (6 each), P₁₂–P₁₄ (8 each), P₁₅ (10, all four term types).


  54. Example: Occam-Plausibility Algorithm (cont) Y = � U ( · ; θ ) � = potential energy J.T. Oden Belytschko Lecture October 2014 51 / 93


  56. Example: Occam-Plausibility Algorithm (cont). The reduced candidate set: eleven models P̄₁–P̄₁₁ built from bond, angle, dihedral, LJ 12-6, and LJ 9-6 terms, with P̄₁–P̄₄ at 2 parameters each, P̄₅–P̄₉ at 4, and P̄₁₀–P̄₁₁ at 6.

  57. Example: Occam-Plausibility Algorithm (cont). Occam categories by parameter count: Category 1 = P̄₁–P̄₄ (2 parameters each), Category 2 = P̄₅–P̄₉ (4 each), Category 3 = P̄₁₀–P̄₁₁ (6 each).




  61. Example: Occam-Plausibility Algorithm (cont)
Calibration: π(θ*_j|y, P*_j, M*) = π(y|θ*_j, P*_j, M*) π(θ*_j|P*_j, M*) / π(y|P*_j, M*). Here, y = potential energy of C₆H₁₄.
Plausibility: ρ*_j = π(P*_j|y, M*) = π(y|P*_j, M*) π(P*_j|M*) / π(y|M*)
Result for Category 1 (P*₁–P*₄, 2 parameters each): plausibilities ρ*₁ = 1, ρ*₂ = ρ*₃ = ρ*₄ = 0.


  63. Example: Occam-Plausibility Algorithm (cont). As a validation scenario, we consider C₁₈H₃₈ at T = 300 K in a canonical ensemble.
Validation: π(θ*₁|y_v, y_c) = π(y_v|θ*₁, y_c) π(θ*₁|y_c) / π(y_v). Here, y_v is the potential energy.
How well does this updated model reproduce the desired observable? Let π(Q|θ*) = π(U(·; θ*)):
• π(Q) = π(u_AA), with tolerance γ_tol,1 = 0.05 σ²_AA
• Q = ⟨u_AA⟩, compared to E[π(Q|θ*)] = E[⟨U(·; θ*)⟩], with tolerance γ_tol,2 = 0.2 Q̄

  64. Example: Occam-Plausibility Algorithm (cont)
Comparing the distributions: D_KL( π(Q_AA) ‖ π(Q_CG) ) = 0.2435 σ²_AA > γ_tol,1 = 0.05 σ²_AA.
Comparing the ensemble averages: | Q_AA − E_{π^v_post}[π(Q_v|θ)] | = 0.67173 Q̄ > γ_tol,2 = 0.2 Q̄_AA.
The model is invalid.



  67. Example: Occam-Plausibility Algorithm (cont) Candidate coarse-grained models (the original table marks with checkmarks which of the Bonds, Angles, Dihedrals, LJ 12-6, and LJ 9-6 terms each model includes):
  Occam Category 1: P̄₁, P̄₂, P̄₃, P̄₄ (one interaction term each, 2 parameters)
  Occam Category 2: P̄₅, P̄₆, P̄₇, P̄₈, P̄₉ (two interaction terms each, 4 parameters)
  Occam Category 3: P̄₁₀, P̄₁₁ (three interaction terms each, 6 parameters)

  68. The Occam-Plausibility Algorithm (flowchart repeated; see slide 65)

  69. Example: Occam-Plausibility Algorithm (cont)
  Calibration: π(θ∗_j | y, P∗_j, M∗) = π(y | θ∗_j, P∗_j, M∗) π(θ∗_j | P∗_j, M∗) / π(y | P∗_j, M∗), where y is the potential energy of C6H14.
  Plausibility: ρ∗_j = π(P∗_j | y, M∗) = π(y | P∗_j, M∗) π(P∗_j | M∗) / π(y | M∗).
  Resulting plausibilities (each candidate has 4 parameters): P∗₁: 3.7891 × 10⁻⁷; P∗₂: 0.3420; P∗₃: 0.6580.
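Since the plausibility of each model is its evidence π(y | P∗_j, M∗) weighted by the model prior and normalized by π(y | M∗), the plausibilities can be computed in a few lines once the evidences are available. A minimal sketch; the function name, the uniform-prior default, and the example evidence values are assumptions for illustration:

```python
def plausibilities(evidences, priors=None):
    # rho_j = pi(y | P_j, M) * pi(P_j | M) / pi(y | M),
    # where pi(y | M) = sum_k pi(y | P_k, M) * pi(P_k | M).
    n = len(evidences)
    if priors is None:
        priors = [1.0 / n] * n          # uniform model prior by default
    joint = [e * p for e, p in zip(evidences, priors)]
    total = sum(joint)                  # the overall evidence pi(y | M)
    return [j / total for j in joint]

# Hypothetical evidence values for three candidate models:
print(plausibilities([1.0, 2.0, 1.0]))
```

By construction the returned values sum to one, so they can be compared directly across the candidate set, as in the table above.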

  70. The Occam-Plausibility Algorithm (flowchart repeated; see slide 65)

  71. Example: Occam-Plausibility Algorithm (cont) As a validation scenario, we again consider C18H38 at T = 300 K in a canonical ensemble. Validation update: π(θ∗₃ | y_v, y_c) = π(y_v | θ∗₃, y_c) π(θ∗₃ | y_c) / π(y_v), where y_v is the potential energy. How well does this updated model reproduce the desired observable?
  • If π(Q) = π(u_AA), let π(Q | θ∗) = π(U(·; θ∗)), with tolerance γ_tol,1 = 0.05 σ²_AA.
  • If Q = ⟨u_AA⟩, let E[π(Q | θ∗)] = E[⟨U(·; θ∗)⟩], with tolerance γ_tol,2 = 0.2 Q_AA.

  72. Example: Occam-Plausibility Algorithm (cont) Comparing the distributions: D_KL( π(Q_AA) ‖ π(Q_CG) ) = 0.2084 σ²_AA > γ_tol,1 = 0.05 σ²_AA. Comparing the ensemble averages: | Q_AA − E_{π_post^v}[π(Q_v | θ)] | = 0.5731 Q_AA > γ_tol,2 = 0.2 Q_AA. The model is invalid.

  73. The Occam-Plausibility Algorithm (flowchart repeated; see slide 65)

  74. The Occam-Plausibility Algorithm (flowchart repeated; see slide 65)

  75. Example: Occam-Plausibility Algorithm (cont) (model/category table repeated from slide 67)

  76. The Occam-Plausibility Algorithm (flowchart repeated; see slide 65)

  77. Example: Occam-Plausibility Algorithm (cont)
  Calibration: π(θ∗_j | y, P∗_j, M∗) = π(y | θ∗_j, P∗_j, M∗) π(θ∗_j | P∗_j, M∗) / π(y | P∗_j, M∗), where y is the potential energy of C6H14.
  Plausibility: ρ∗_j = π(P∗_j | y, M∗) = π(y | P∗_j, M∗) π(P∗_j | M∗) / π(y | M∗).
  Resulting plausibilities (each candidate has 6 parameters): P̄₁₀: 0.5; P̄₁₁: 0.5.

  78. The Occam-Plausibility Algorithm (flowchart repeated; see slide 65)

  79. Example: Occam-Plausibility Algorithm (cont) Comparing the distributions: D_KL( π(Q_AA) ‖ π(Q_CG) ) = 0.0452 σ²_AA < γ_tol,1 = 0.05 σ²_AA. Comparing the ensemble averages: | Q_AA − E_{π_post^v}[π(Q_v | θ)] | = 0.1721 Q_AA < γ_tol,2 = 0.2 Q_AA. The model is NOT invalid.
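The two tolerance checks of the validation step can be combined into a single predicate: a model is declared "not invalid" only when both the distribution and ensemble-average discrepancies fall below their tolerances. A small sketch; the function and argument names are illustrative, and the test numbers mirror the slide values with σ²_AA and Q_AA normalized to 1:

```python
def not_invalid(d_kl, mean_gap, var_aa, q_aa, gamma_tol1=0.05, gamma_tol2=0.2):
    # Distributions:     D_KL < gamma_tol,1 * sigma_AA^2
    # Ensemble averages: |Q_AA - E[Q_v]| < gamma_tol,2 * Q_AA
    # Both criteria must pass.
    return d_kl < gamma_tol1 * var_aa and mean_gap < gamma_tol2 * q_aa

# Category 3 model (slide values, normalized): passes both checks.
print(not_invalid(0.0452, 0.1721, var_aa=1.0, q_aa=1.0))   # True
# Earlier Category 2 model: fails both checks.
print(not_invalid(0.2084, 0.5731, var_aa=1.0, q_aa=1.0))   # False
```

The double negative ("not invalid") is deliberate: passing the test does not prove the model valid, it only fails to invalidate it for this scenario.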

  80. Example: Occam-Plausibility Algorithm (cont) How do the observables change as we move through the Iterative Occam Step?

  81. The Occam-Plausibility Algorithm (flowchart repeated; see slide 65)

  82. Work in Progress: PMMA [structure diagram of the PMMA monomer] One molecule has: 15 atoms, 72 parameters.

  83. Work in Progress: PMMA [structure diagram of the PMMA monomer, repeated]

  84. CG Calibration Scenario (G → A) Candidate coarse-grained models (the original table marks with checkmarks which of the Bonds, Angles, LJ 9-6, and LJ 12-6 terms each model includes; "rigid" denotes rigid bonds):
  P₁: 9 parameters; P₂: rigid, 9; P₃: 13; P₄: 9; P₅: rigid, 9; P₆: 13; P₇: 5; P₈: rigid, 5; P₉: 9.

  85. The polymerization process (KMC) 10x10x10 nm

  86. Continuum Models Calibration Scenario
  P₁: Saint Venant-Kirchhoff, 2 parameters
  P₂: Neo-Hookean, 2 parameters
  P₃: Mooney-Rivlin, 3 parameters
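The three candidate continuum models differ in their strain-energy densities. The sketch below uses standard textbook forms under stated assumptions: the parameter names `lam`, `mu`, `c1`, `c2` are illustrative, and the Neo-Hookean and Mooney-Rivlin expressions are the incompressible variants, so their parameter counts differ slightly from the table above.

```python
import numpy as np

def strain_energies(F, lam, mu, c1, c2):
    """Strain-energy densities W(F) for the three candidate hyperelastic models."""
    C = F.T @ F                              # right Cauchy-Green deformation tensor
    E = 0.5 * (C - np.eye(3))                # Green-Lagrange strain
    I1 = np.trace(C)                         # first invariant of C
    I2 = 0.5 * (I1 ** 2 - np.trace(C @ C))   # second invariant of C
    return {
        # Saint Venant-Kirchhoff: quadratic in E, Lame parameters lam, mu.
        "SVK": 0.5 * lam * np.trace(E) ** 2 + mu * np.trace(E @ E),
        # Incompressible Neo-Hookean: linear in I1.
        "NH": c1 * (I1 - 3.0),
        # Incompressible Mooney-Rivlin: linear in I1 and I2.
        "MR": c1 * (I1 - 3.0) + c2 * (I2 - 3.0),
    }

# In the undeformed state (F = I) every model stores zero energy:
w = strain_energies(np.eye(3), lam=1.0, mu=1.0, c1=1.0, c2=1.0)
print(w)
```

Calibrating any of these models then means inferring its material parameters from data, exactly as the molecular parameters were inferred in the earlier steps.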

  87. Continuum Models Calibration Scenario Initial Configuration; Equilibrated Configuration
