Uncertainty quantification in computer experiments with polynomial chaos


  1. Uncertainty quantification in computer experiments with polynomial chaos. J. KO¹ with J. GARNIER², D. LUCOR³ & A. DIPANKAR⁴. 1. jordan.ko@mac.com 2. Laboratoire de Probabilités et Modèles Aléatoires, Université de Paris VII, France 3. L'Institut Jean Le Rond d'Alembert, Université de Paris VI, France 4. Max Planck Institute for Meteorology, Hamburg, Germany. Workshop on uncertainty quantification, risk and decision-making, Centre for the analysis of time series, LSE, May 23, 2012

  2. Uncertainty quantification (UQ) in computer experiments ◮ Context: deterministic, complex numerical simulators are used to model real dynamic systems, and they can be computationally expensive to run ◮ We are interested in studying the effect of epistemic (lack of knowledge) and aleatoric (inherent to the system) uncertainties on the model outputs ◮ Sources include initial conditions, boundary conditions & model parameters ◮ Example: drug clearance in circulation as an exponential decay response, dθ/dt = −Cθ, with C a random variable that represents the population response (a minimal Monte Carlo sketch follows this slide) ◮ Conventional approaches such as Monte Carlo (MC) are not practical for studying these expensive simulators ◮ Goal: use polynomial chaos (PC) to construct a metamodel that mimics the complex model's behaviour and supports UQ, sensitivity analysis (SA), quantile estimation, optimization, calibration, etc.
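To make the setting concrete, here is a minimal sketch of brute-force Monte Carlo on the decay example, assuming C ∼ N(1, 1) as on the later slides; the "simulator" here is an analytic solution, but each sample would be a full solver run for an expensive code, which is exactly what makes plain MC impractical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(c, t=1.0, theta0=1.0):
    # Exact solution of d(theta)/dt = -c * theta; a real simulator would be a costly solver call.
    return theta0 * np.exp(-c * t)

# Assumed input uncertainty: C ~ N(1, 1), matching the decay example used later in the deck.
n_samples = 10_000
c_samples = rng.normal(loc=1.0, scale=1.0, size=n_samples)
theta_samples = simulator(c_samples)

print("MC mean at t=1 :", theta_samples.mean())
print("MC stdev at t=1:", theta_samples.std(ddof=1))
```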

  3. Probabilistic framework The UQ of a computer experiment follows these iterative steps: 1. representation of input uncertainties - random variable or process 2. uncertainty propagation - MC, GP or gPC 3. quantification of solution uncertainty - mean, variance, pdf or sensitivity. De Rocquigny (2006)

  4. Stochastic input representation: stochastic process Any second-order random process κ(x, ω) with a continuous and bounded covariance kernel C(x_1, x_2) = E(κ(x_1, ω) ⊗ κ(x_2, ω)) can be represented as an infinite sum of random variables; the kernel is real, symmetric and positive-definite. [Figure: covariance versus lag for exponential, Gaussian, sine, triangular and linear kernels, for correlation lengths Cl = 0.5 and Cl = 1] ◮ The Karhunen-Loève (KL) expansion represents the random process with an orthogonal set of deterministic functions with random coefficients, κ(x, ω) = μ_κ(x) + Σ_{n=1}^{N} √λ_n ψ_n(x) ξ_n(ω) ◮ For a continuous kernel, the convergence of the KL expansion is uniform as N → ∞. Karhunen (1948) & Loève (1977) ◮ ψ_n(x) and λ_n are solved from the Fredholm integral equation of the second kind with kernel C(x_1, x_2) (a discretized sketch follows below)
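A minimal sketch of the discretized KL construction, assuming an exponential kernel C(x_1, x_2) = exp(−|x_1 − x_2|/Cl) on [−1, 1] with Cl = 0.5 (the kernel, grid, and truncation level are illustrative choices, not taken from the slides); the Fredholm eigenproblem is approximated by a simple quadrature (Nyström-style) eigendecomposition.

```python
import numpy as np

# Illustrative setup: exponential covariance kernel on [-1, 1] with correlation length Cl = 0.5.
n_grid, Cl, N_kl = 200, 0.5, 10
x = np.linspace(-1.0, 1.0, n_grid)
dx = x[1] - x[0]
C = np.exp(-np.abs(x[:, None] - x[None, :]) / Cl)

# Discretized Fredholm equation of the 2nd kind: (C @ psi) * dx = lambda * psi.
eigvals, eigvecs = np.linalg.eigh(C * dx)
idx = np.argsort(eigvals)[::-1]          # sort eigenpairs by decreasing eigenvalue
lam, psi = eigvals[idx[:N_kl]], eigvecs[:, idx[:N_kl]]
psi /= np.sqrt(dx)                       # normalize so that sum(psi_n^2) * dx = 1

# Truncated KL realization: kappa(x, w) = mu(x) + sum_n sqrt(lam_n) psi_n(x) xi_n(w).
rng = np.random.default_rng(1)
xi = rng.standard_normal(N_kl)
mu = np.zeros(n_grid)                    # zero-mean process for this sketch
kappa = mu + psi @ (np.sqrt(lam) * xi)
print("captured variance fraction:", lam.sum() / np.trace(C * dx))
```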

  5. Stochastic input representation: random variables ◮ Represent the random variable κ(ω) with orthogonal functions of the stochastic variable and deterministic coefficients, κ(ω) = Σ_{m=0}^{∞} κ_m φ_m(ξ(ω)) ◮ Wiener chaos: representation of a Gaussian random variable using Hermite polynomials, with L² convergence as M → ∞. Wiener (1938), Ghanem & Spanos (1991) and Cameron & Martin (1947) ◮ generalized Polynomial Chaos (gPC): generalized representation for non-Gaussian random variables with polynomials from the Wiener-Askey scheme. Xiu & Karniadakis (2002) ◮ If κ(ω) follows a normal distribution, it can be represented exactly as κ(ω) = μ_κ + σ_κ ξ, where ξ is the linear (first-order Hermite) term (a numerical sketch follows below)
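As a sketch of such a representation (the lognormal example is an illustrative choice, not from the slides), the non-Gaussian variable κ(ω) = exp(ξ(ω)) with ξ ∼ N(0, 1) is projected onto probabilists' Hermite polynomials by Gauss-Hermite quadrature; the coefficients κ_m = ⟨κ, φ_m⟩ / ⟨φ_m²⟩ recover the mean and variance as the truncation M grows.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

# Gauss-Hermite (probabilists') quadrature: integrates against exp(-x^2 / 2).
nodes, weights = He.hermegauss(40)
weights = weights / np.sqrt(2.0 * np.pi)   # normalize to the standard normal measure

# Illustrative non-Gaussian input: kappa = exp(xi), xi ~ N(0, 1).
kappa = np.exp(nodes)

M = 6
coeffs = np.empty(M + 1)
for m in range(M + 1):
    phi_m = He.hermeval(nodes, [0.0] * m + [1.0])              # He_m(xi)
    coeffs[m] = np.sum(weights * kappa * phi_m) / factorial(m)  # <kappa, phi_m> / <phi_m^2>

mean_pc = coeffs[0]
var_pc = sum(coeffs[m] ** 2 * factorial(m) for m in range(1, M + 1))
print("gPC mean:", mean_pc, " exact:", np.exp(0.5))
print("gPC var :", var_pc,  " exact:", (np.e - 1.0) * np.e)
```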

  6. Selection of orthogonal basis ◮ In the propagation step, we need to evaluate the inner product with respect to the probability measure ρ(ξ) dξ, ⟨φ_i(ξ), φ_j(ξ)⟩ = ∫_Γ φ_i(ξ) φ_j(ξ) ρ(ξ) dξ ◮ The correspondence between the pdf of ξ, ρ(ξ), and the weighting function of the classical orthogonal polynomials, w(ξ), determines the polynomial basis (an orthogonality check is sketched below):

   Distribution   Random variable, ξ    Wiener-Askey PC, φ(ξ)   Support, Γ
   Continuous     Gaussian              Hermite-chaos           (−∞, ∞)
                  gamma                 Laguerre-chaos          [0, ∞)
                  beta                  Jacobi-chaos            [a, b]
                  uniform               Legendre-chaos          [a, b]
   Discrete       Poisson               Charlier-chaos          {0, 1, 2, . . .}
                  binomial              Krawtchouk-chaos        {0, 1, . . . , N}
                  negative binomial     Meixner-chaos           {0, 1, 2, . . .}
                  hypergeometric        Hahn-chaos              {0, 1, . . . , N}
   Periodic       uniform               Fourier-chaos∗          [−π, π)
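As a sketch of the pdf/weight correspondence, the following check (using NumPy's Legendre utilities) verifies that the Legendre polynomials are orthogonal under the uniform density ρ(ξ) = 1/2 on Γ = [−1, 1], with ⟨P_i, P_j⟩ = δ_ij / (2i + 1).

```python
import numpy as np
from numpy.polynomial import legendre as L

# Gauss-Legendre quadrature on [-1, 1]; the uniform pdf there is rho(xi) = 1/2.
nodes, weights = L.leggauss(30)

P = 5
gram = np.empty((P + 1, P + 1))
for i in range(P + 1):
    Pi = L.legval(nodes, [0.0] * i + [1.0])          # P_i(xi)
    for j in range(P + 1):
        Pj = L.legval(nodes, [0.0] * j + [1.0])      # P_j(xi)
        gram[i, j] = np.sum(weights * Pi * Pj) * 0.5  # <P_i, P_j> w.r.t. rho = 1/2

# Off-diagonal entries vanish; diagonal entries equal 1 / (2i + 1).
print(np.round(gram, 6))
```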

  7. Multivariate basis The multivariate basis is the tensor product of 1D polynomials, φ_m(ξ) = φ_{α_{m,1}}(ξ_1) ⊗ φ_{α_{m,2}}(ξ_2) ⊗ · · · ⊗ φ_{α_{m,N}}(ξ_N), for m = 0, ..., M. The truncation depends on the input dimension, N, and the output nonlinearity (maximum polynomial order P). Example for N = 2 with Legendre polynomials (a multi-index construction is sketched below):

   m   Notation              Legendre polynomial
   0   P_0(ξ_1) P_0(ξ_2)     1
   1   P_1(ξ_1) P_0(ξ_2)     ξ_1
   2   P_0(ξ_1) P_1(ξ_2)     ξ_2
   3   P_2(ξ_1) P_0(ξ_2)     3/2 ξ_1² − 1/2
   4   P_1(ξ_1) P_1(ξ_2)     ξ_1 ξ_2
   5   P_0(ξ_1) P_2(ξ_2)     3/2 ξ_2² − 1/2
   6   P_3(ξ_1) P_0(ξ_2)     5/2 ξ_1³ − 3/2 ξ_1
   7   P_2(ξ_1) P_1(ξ_2)     (3/2 ξ_1² − 1/2) ξ_2
   8   P_1(ξ_1) P_2(ξ_2)     ξ_1 (3/2 ξ_2² − 1/2)
   9   P_0(ξ_1) P_3(ξ_2)     5/2 ξ_2³ − 3/2 ξ_2

[Figure: response surfaces of the 2D Legendre basis functions m = 0 to 9 over (ξ_1, ξ_2) ∈ [−1, 1]²]
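A minimal sketch, assuming a total-degree truncation (the usual choice, though the slide does not name the rule explicitly), that enumerates the multi-indices for N = 2, P = 3 and evaluates the tensor-product Legendre basis; the ordering within each degree may differ from the table above.

```python
import itertools
import numpy as np
from numpy.polynomial import legendre as L

def total_degree_multi_indices(N, P):
    """All multi-indices alpha with |alpha| <= P, graded by total degree."""
    return [alpha for p in range(P + 1)
            for alpha in itertools.product(range(P + 1), repeat=N)
            if sum(alpha) == p]

def eval_basis(alpha, xi):
    """Tensor-product Legendre basis: prod_n P_{alpha_n}(xi_n)."""
    out = np.ones_like(xi[..., 0])
    for n, a in enumerate(alpha):
        out *= L.legval(xi[..., n], [0.0] * a + [1.0])
    return out

indices = total_degree_multi_indices(N=2, P=3)
print(len(indices), "basis terms:", indices)   # 10 terms for N = 2, P = 3

xi = np.array([0.3, -0.5])                     # a sample point in [-1, 1]^2
phi = np.array([eval_basis(a, xi[None, :]) for a in indices]).ravel()
print("phi for alpha = (1, 1), i.e. xi_1 * xi_2:", phi[4], xi[0] * xi[1])
```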

  8. Stochastic Galerkin method: intrusive approach PC represents the stochastic solution u(x, ξ) with the same orthogonal basis as the input, i.e. u(x, ξ) = Σ_{m=0}^{M} u_m(x) φ_m(ξ). Substitute the expansion into the system of equations L(x, ξ; u) = f(x, ξ) and take the Galerkin projection, ⟨L(x, ξ; Σ_m u_m(x) φ_m(ξ)), φ_k(ξ)⟩ = ⟨f(x, ξ), φ_k(ξ)⟩, for k = 0, ..., M. ◮ The u_m(x) are solved from the resulting system of (M + 1) coupled equations. ◮ The system is deterministic and can be solved using a standard discretization technique. ◮ Extensive modification of the simulator is needed.

  9. Stochastic Galerkin method: intrusive approach Example: first-order linear ODE, Θ̇(t, ξ) = −C(ξ) Θ(t, ξ), with the rate of decay a normal r.v., i.e. C(ξ) = Σ_{i=0}^{M_C} C_i φ_i(ξ). The gPC expansions of C(ξ) and Θ(t, ξ) are substituted into the ODE to give Σ_{k=0}^{M_θ} Θ̇_k(t) φ_k(ξ) = −Σ_{i=0}^{M_C} Σ_{j=0}^{M_θ} C_i Θ_j(t) φ_i(ξ) φ_j(ξ). The Galerkin projection of the expanded ODE onto the orthogonal polynomials gives Θ̇_k(t) = −Σ_{i=0}^{M_C} Σ_{j=0}^{M_θ} (⟨φ_i φ_j φ_k⟩ / ⟨φ_k²⟩) C_i Θ_j(t), for k = 0, ..., M_θ. This coupled deterministic system of equations is solved with the initial condition Θ(t = 0) = Σ_m Θ_m(t = 0) φ_m(ξ). With increasing t, the modal coefficients are propagated from the lower Θ_m to the higher Θ_m, i.e. uncertainty propagates as an increasingly non-linear response in the random space (a minimal implementation is sketched below).
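A minimal sketch of the intrusive Galerkin system for the decay ODE, assuming probabilists' Hermite chaos with C(ξ) = 1 + ξ, i.e. C ∼ N(1, 1) as on the following slides; the triple products ⟨φ_i φ_j φ_k⟩ are computed by Gauss-Hermite quadrature, and the coupled modal ODEs are advanced with a simple explicit Euler step.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

M = 5                                    # gPC order for Theta
C = np.zeros(M + 1)
C[0], C[1] = 1.0, 1.0                    # C(xi) = 1 + xi, i.e. C ~ N(1, 1)

# Triple products <phi_i phi_j phi_k> under the standard normal measure (Gauss-Hermite quadrature).
nodes, weights = He.hermegauss(3 * M + 1)
weights = weights / np.sqrt(2.0 * np.pi)
phi = np.array([He.hermeval(nodes, [0.0] * m + [1.0]) for m in range(M + 1)])
triple = np.einsum('q,iq,jq,kq->ijk', weights, phi, phi, phi)
norm_sq = np.array([float(factorial(k)) for k in range(M + 1)])   # <phi_k^2> = k!

# Coupled modal system: dTheta_k/dt = -(1/<phi_k^2>) * sum_{i,j} <phi_i phi_j phi_k> C_i Theta_j
A = -np.einsum('i,ijk->kj', C, triple) / norm_sq[:, None]

theta = np.zeros(M + 1)
theta[0] = 1.0                           # deterministic initial condition Theta(t=0) = 1
dt, n_steps = 1.0e-3, 1000               # integrate to t = 1 with explicit Euler
for _ in range(n_steps):
    theta = theta + dt * (A @ theta)

print("mean at t=1    :", theta[0])      # exact mean of exp(-(1 + xi)) is exp(-0.5) ~ 0.607
print("variance at t=1:", np.sum(theta[1:] ** 2 * norm_sq[1:]))
```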

  10. Surface response of the linear ODE Θ̇(t, ξ) = −C(ξ) Θ(t, ξ) ◮ The Θ(t, ξ) response is exponential in t, with Θ(t = 0) = 1 ◮ The coefficient of decay is treated as a random variable, C(ξ) ∼ N(1, 1) ◮ We represent the univariate stochastic output Θ(t; ξ) as a linear combination of Hermite polynomials, Θ(t; ξ) = Σ_m Θ_m(t) φ_m(ξ) ◮ Uncertainty propagation is visualized as the evolution of the solution response surface in the random space ξ (a comparison is sketched below) [Figure: Θ(t; ξ) response at t = 1 for C ∼ N(1, 1); the exact response over ξ ∈ [−5, 1] compared with gPC approximations of order P = 2 to 5]
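A sketch of this comparison, using a non-intrusive projection of the known exact solution Θ(1; ξ) = exp(−(1 + ξ)) rather than the Galerkin solve (for this linear problem the surfaces are very similar); the truncated Hermite series is evaluated against the exact curve on the ξ range shown in the slide's figure.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

t = 1.0
exact = lambda xi: np.exp(-(1.0 + xi) * t)     # exact solution Theta(t; xi) with C(xi) = 1 + xi

# Non-intrusive projection: Theta_m(t) = <Theta, phi_m> / <phi_m^2> via Gauss-Hermite quadrature.
nodes, weights = He.hermegauss(60)
weights = weights / np.sqrt(2.0 * np.pi)

xi_grid = np.linspace(-5.0, 1.0, 200)          # the range shown on the slide's figure
for P in (2, 3, 4, 5):
    coeffs = np.array([np.sum(weights * exact(nodes) * He.hermeval(nodes, [0.0] * m + [1.0]))
                       / factorial(m) for m in range(P + 1)])
    approx = He.hermeval(xi_grid, coeffs)      # evaluate the truncated Hermite series
    err = np.max(np.abs(approx - exact(xi_grid)))
    print(f"P = {P}: max |error| on [-5, 1] = {err:.3g}")
```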

  11. The choice of polynomial chaos truncation ◮ As the response in ξ becomes more non-linear with t, higher orders P in φ_m(ξ) are needed in the gPC expansion ◮ Estimation of higher-order statistics also requires higher P ◮ Premature truncation leads to large errors in the response surface and in the solution statistics (a variance-error check is sketched below) [Figure: left, mean Θ(t) with its standard-deviation envelope for C ∼ N(1, 1) and P = 2 to 5; right, error in the solution variance versus t for P = 2 to 5]
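Continuing the same projection sketch, and using the exact lognormal reference Var[Θ(t; ξ)] = e^{−2t}(e^{2t²} − e^{t²}) for C(ξ) = 1 + ξ, the error in the gPC variance Σ_{m=1}^{P} Θ_m² m! shrinks as P increases, illustrating the effect of premature truncation.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

t = 1.0
exact = lambda xi: np.exp(-(1.0 + xi) * t)
var_exact = np.exp(-2.0 * t) * (np.exp(2.0 * t ** 2) - np.exp(t ** 2))   # lognormal variance

nodes, weights = He.hermegauss(60)
weights = weights / np.sqrt(2.0 * np.pi)

for P in (2, 3, 4, 5):
    coeffs = np.array([np.sum(weights * exact(nodes) * He.hermeval(nodes, [0.0] * m + [1.0]))
                       / factorial(m) for m in range(P + 1)])
    var_pc = np.sum(coeffs[1:] ** 2 * np.array([factorial(m) for m in range(1, P + 1)]))
    print(f"P = {P}: |variance error| = {abs(var_pc - var_exact):.3g}")
```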

  12. Evolution of the PC coefficients ◮ Increasing t propagates the initial uncertainty from the lower-order coefficients to the higher-order coefficients [Figure: modal coefficients Θ_m(t), m = 0 to 5, and their magnitudes |Θ_m(t)| on a log scale, versus t for C ∼ N(1, 1)] ◮ The task now is to determine the coefficients of expansion, Θ_m(t), in the representation ◮ This simple system of equations is easily solved with the intrusive approach ◮ Complex numerical solvers can benefit from a non-intrusive approach
