Coarse-grained models for PDEs with random coefficients

C. Grigo and P.-S. Koutsourelakis
Continuum Mechanics Group, Department of Mechanical Engineering, Technical University of Munich

SIAM CSE, Atlanta, March 1, 2017
Stochastic PDE with random coefficients

Stochastic PDE:
$$K(x, \lambda(x, \xi))\, u(x, \lambda(x, \xi)) = f(x), \quad + \text{B.C.}$$

Figure: The random process $\lambda(x, \xi)$ leads to random solutions $u(x, \xi)$.
Outline

1. The Full-Order Model
2. A generative Bayesian surrogate model
   - Model training
3. Sample problem: 2D stationary heat equation
   - Model specifications
   - Feature functions
4. Results
5. Summary
The Full-Order Model (FOM)

Discretize
$$K(x, \lambda(x, \xi))\, u(x, \lambda(x, \xi)) = f(x), \quad + \text{B.C.}$$
into a set of algebraic equations
$$r_f(U_f, \lambda_f(\xi)) = 0.$$

- Usually large (millions of equations)
- Expensive, repeated evaluations for UQ (and for various deterministic tasks, e.g. optimization/control, inverse problems)
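To make the discretized residual concrete, here is a minimal sketch (not the authors' implementation) of what $r_f(U_f, \lambda_f) = A(\lambda_f) U_f - f = 0$ could look like for a 2D diffusion problem: a five-point finite-difference stencil on a regular node grid with harmonic averaging of nodal conductivities. The grid, boundary conditions, and averaging rule are assumptions chosen for illustration only.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def assemble_fom(lam):
    """Assemble A(lam) and f for -div(lam grad u) = 0 on an n x n node grid,
    with u = 1 on the left edge, u = 0 on the right edge, and zero flux on
    top/bottom (approximated by simply omitting missing neighbours).
    lam holds nodal conductivities (illustrative discretization)."""
    n = lam.shape[0]
    idx = lambda i, j: i * n + j                 # row-major node numbering
    A = sp.lil_matrix((n * n, n * n))
    f = np.zeros(n * n)
    for i in range(n):
        for j in range(n):
            k = idx(i, j)
            if j == 0:                           # Dirichlet: u = 1 (left)
                A[k, k], f[k] = 1.0, 1.0
                continue
            if j == n - 1:                       # Dirichlet: u = 0 (right)
                A[k, k], f[k] = 1.0, 0.0
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    # harmonic mean of the two nodal conductivities
                    lam_face = 2.0 / (1.0 / lam[i, j] + 1.0 / lam[ii, jj])
                    A[k, k] += lam_face
                    A[k, idx(ii, jj)] -= lam_face
    return A.tocsr(), f

def solve_fom(lam):
    """U_f solves r_f(U_f, lam_f) = A(lam_f) U_f - f = 0."""
    A, f = assemble_fom(lam)
    return spla.spsolve(A, f)

# usage: U_f = solve_fom(np.ones((32, 32)))
```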
Surrogate models

Idea: Replace the FOM map $U_f = U_f(\lambda_f)$ by a cheaper, though less accurate, input-output map $U_f = f(\lambda_f; \theta)$ trained on data $\mathcal{D} = \{ \lambda_f^{(i)}, U_f^{(i)} \}_{i=1}^N$ (see the sketch below).

Problem: The uncertainties $\lambda_f$ are high-dimensional; learning a direct functional mapping (e.g. PCE [Ghanem, Spanos 1991], GPs [Rasmussen 2006], neural nets [Bishop 1995]) will fail.

Solution: Coarse-grained model: use a model based on a coarser discretization of the PDE, $U_c = U_c(\lambda_c)$.

Question: What is the relation between $U_f$ and the coarse output $U_c$, and between the fine/coarse inputs $\lambda_f$, $\lambda_c$?
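One possible way to generate the training set $\mathcal{D}$, assuming a FOM solver and a sampler of the random input field are available (both passed in as hypothetical callables):

```python
def generate_training_data(N, draw_lambda, solve_fom):
    """Training set D = {(lambda_f^(i), U_f^(i))}_{i=1..N}: draw input fields
    from their distribution and run the expensive fine-scale solver once each."""
    data = []
    for _ in range(N):
        lam_f = draw_lambda()        # one sample of the random coefficient field
        U_f = solve_fom(lam_f)       # expensive FOM solve
        data.append((lam_f, U_f))
    return data
```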
Coarse-graining of SPDEs

Retain as much information about $U_f$ as possible during coarse-graining, i.e. the information bottleneck [Tishby, Pereira, Bialek 1999]:
$$\max_{\theta}\; I(\lambda_c, U_f; \theta) \quad \text{s.t.} \quad I(\lambda_f, \lambda_c; \theta) \le I_0$$
Concept: coarse-grain the random field $\lambda$, ...

Probabilistic mapping $\lambda_f \to \lambda_c$: $\;p_c(\lambda_c \mid \lambda_f, \theta_c)$

Goal: prediction of $U_f$, not reconstruction of $\lambda_f$!
... solve the ROM and reconstruct $U_f$ from $U_c$

$\lambda_c \to U_c$: solve $r_c(U_c, \lambda_c) = 0$

Decode via the coarse-to-fine map $U_c \to U_f$: $\;p_{cf}(U_f \mid U_c, \theta_{cf})$
Graphical Bayesian model

Figure: Bayesian network defining $\bar{p}(U_f \mid \lambda_f, \theta_c, \theta_{cf})$.

$$\bar{p}(U_f \mid \lambda_f, \theta_c, \theta_{cf})
= \int p_{cf}(U_f \mid U_c, \theta_{cf})\, p(U_c \mid \lambda_c)\, p_c(\lambda_c \mid \lambda_f, \theta_c)\, dU_c\, d\lambda_c
= \int p_{cf}(U_f \mid U_c(\lambda_c), \theta_{cf})\, p_c(\lambda_c \mid \lambda_f, \theta_c)\, d\lambda_c.$$
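Once the encoder $p_c$ can be sampled and the ROM solve $U_c(\lambda_c)$ is deterministic, the marginalization above reduces to a Monte Carlo average. Below is a minimal sketch of the resulting predictive mean, assuming the Gaussian decoder with mean $W U_c$ introduced later in the model specifications; sample_lambda_c and solve_rom are hypothetical callables, not the authors' code.

```python
import numpy as np

def predictive_mean(lam_f, theta_c, sample_lambda_c, solve_rom, W, n_mc=100):
    """Monte Carlo estimate of E[U_f | lam_f] under the surrogate:
    draw lam_c from p_c(lam_c | lam_f, theta_c), solve the coarse model,
    and average the decoder mean W U_c over the draws."""
    mean = 0.0
    for _ in range(n_mc):
        lam_c = sample_lambda_c(lam_f, theta_c)   # draw from the encoder p_c
        U_c = solve_rom(lam_c)                    # deterministic coarse solve
        mean = mean + (W @ U_c) / n_mc            # Gaussian decoder: mean W U_c
    return mean
```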
Model training

Maximum likelihood:
$$(\theta_c^*, \theta_{cf}^*) = \arg\max_{\theta_c, \theta_{cf}} \sum_{i=1}^N \log \bar{p}(U_f^{(i)} \mid \lambda_f^{(i)}, \theta_c, \theta_{cf})$$

Maximum a posteriori:
$$(\theta_c^*, \theta_{cf}^*) = \arg\max_{\theta_c, \theta_{cf}} \left[ \sum_{i=1}^N \log \bar{p}(U_f^{(i)} \mid \lambda_f^{(i)}, \theta_c, \theta_{cf}) + \log p(\theta_c, \theta_{cf}) \right]$$

Data: $\lambda_f^{(i)} \sim p(\lambda_f)$, $\;U_f^{(i)} = U_f(\lambda_f^{(i)})$.
Expectation-Maximization

$$\bar{p}(U_f^{(i)} \mid \lambda_f^{(i)}, \theta_c, \theta_{cf}) = \int p_{cf}(U_f^{(i)} \mid U_c(\lambda_c^{(i)}), \theta_{cf})\, p_c(\lambda_c^{(i)} \mid \lambda_f^{(i)}, \theta_c)\, d\lambda_c^{(i)}$$

→ The likelihood contains N integrals over the N latent variables $\lambda_c$.
→ Use the Expectation-Maximization algorithm [Dempster, Laird, Rubin 1977]: find the lower bound
$$\log \bar{p}(U_f^{(i)} \mid \lambda_f^{(i)}, \theta_c, \theta_{cf})
\ge \int q^{(i)}(\lambda_c^{(i)}) \log \frac{p_{cf}(U_f^{(i)} \mid U_c(\lambda_c^{(i)}), \theta_{cf})\, p_c(\lambda_c^{(i)} \mid \lambda_f^{(i)}, \theta_c)}{q^{(i)}(\lambda_c^{(i)})}\, d\lambda_c^{(i)}
= \mathcal{F}^{(i)}(\theta; q_t^{(i)}(\lambda_c^{(i)})),$$
where $\theta = [\theta_c, \theta_{cf}]$.
Expectation-Maximization algorithm

Maximize iteratively:

E-step: Find the optimal $q_t^{(i)}(\lambda_c^{(i)})$ given the current estimate $\theta_t$ of the optimal $\theta$, and compute the required expectations (MCMC, VI, EP).

M-step: Maximize the lower bound $\mathcal{F}_t(\theta) = \sum_i \mathcal{F}_t^{(i)}(\theta; q_t^{(i)}(\lambda_c^{(i)}))$ w.r.t. $\theta$ to obtain $\theta_{t+1}$.

Figure: Expectation-Maximization algorithm illustration
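A schematic Monte Carlo EM loop, under the simplifying assumption that the E-step draws samples of $\lambda_c^{(i)}$ from (an approximation of) the posterior $p(\lambda_c \mid U_f^{(i)}, \lambda_f^{(i)}, \theta_t)$ and the M-step ascends the Monte Carlo estimate of the lower bound by gradient steps. The callables sample_posterior and grad_complete_loglik are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

def mc_em(data, theta0, sample_posterior, grad_complete_loglik,
          n_iter=50, n_samples=100, n_grad_steps=10, lr=1e-2):
    """Monte Carlo EM skeleton.
    E-step: per training pair, draw latent coarse fields lam_c given theta_t.
    M-step: gradient ascent on the Monte Carlo lower bound
            F_t(theta) = sum_i E_q[ log p_cf(U_f | U_c(lam_c), theta_cf)
                                    + log p_c(lam_c | lam_f, theta_c) ]
            (the entropy of q does not depend on theta)."""
    theta = np.array(theta0, dtype=float)
    for t in range(n_iter):
        # E-step: latent samples per data point, theta held fixed at theta_t
        latents = [sample_posterior(U_f, lam_f, theta, n_samples)
                   for lam_f, U_f in data]
        # M-step: a few gradient steps on the lower bound
        for _ in range(n_grad_steps):
            g = np.zeros_like(theta)
            for (lam_f, U_f), lam_c_samples in zip(data, latents):
                for lam_c in lam_c_samples:
                    g += grad_complete_loglik(U_f, lam_f, lam_c, theta) / n_samples
            theta += lr * g
    return theta
```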
Sample problem: 2D heat equation with random coefficients

$$\nabla_x \cdot \big( -\lambda(x, \xi(x))\, \nabla_x T(x, \xi(x)) \big) = 0, \quad + \text{B.C.}$$
where $\xi(x) \sim \mathcal{GP}(0, \mathrm{cov}(x_i, x_j))$ with
$$\mathrm{cov}(x_i, x_j) = \exp\!\left( \frac{-|x_i - x_j|^2}{l^2} \right),$$
and
$$\lambda(x, \xi(x)) = \begin{cases} \lambda_{hi}, & \text{if } \xi(x) > c, \\ \lambda_{lo}, & \text{otherwise.} \end{cases}$$
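A sketch of drawing one realization of this two-phase conductivity field: sample $\xi$ from the GP with squared-exponential covariance on a grid of cell centres and threshold it. One assumption here: the cutoff is tied to a target high-phase volume fraction $\phi_{hi}$ via an empirical quantile, whereas the slides use a fixed threshold $c$; all parameter values are illustrative.

```python
import numpy as np

def sample_conductivity(n=32, l=0.098, lam_hi=1.0, lam_lo=0.01,
                        phi_hi=0.35, rng=None):
    """One realization of the thresholded-GP conductivity field on an
    n x n grid over the unit square (illustrative parameters)."""
    rng = np.random.default_rng() if rng is None else rng
    x = (np.arange(n) + 0.5) / n                              # cell centres
    X, Y = np.meshgrid(x, x, indexing="ij")
    pts = np.column_stack([X.ravel(), Y.ravel()])
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)   # squared distances
    K = np.exp(-d2 / l**2) + 1e-8 * np.eye(n * n)             # jittered covariance
    xi = np.linalg.cholesky(K) @ rng.standard_normal(n * n)   # GP sample
    # cutoff chosen as the (1 - phi_hi) empirical quantile of xi (assumption)
    c = np.quantile(xi, 1.0 - phi_hi)
    return np.where(xi > c, lam_hi, lam_lo).reshape(n, n)
```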
FOM data samples

Figure: Data samples for $\phi_{hi} = 0.35$, $l = 0.098$, $c = 100$
Model specifications

$\lambda_f \to \lambda_c = e^{z_c}$: with coarse elements indexed by $k$,
$$z_{c,k} = \sum_{j=1}^{N_{\text{features}}} \theta_{c,j}\, \varphi_j(\lambda_{f,k}) + \sigma_k Z_k, \quad Z_k \sim \mathcal{N}(0, 1),$$

$U_c \to U_f$:
$$p_{cf}(U_f \mid U_c(z_c), \theta_{cf}) = \mathcal{N}(U_f \mid W U_c(z_c), S),$$
with feature functions $\varphi_j$, coarse-to-fine projection $W$, and diagonal covariance $S = \mathrm{diag}(s)$.
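A sketch of this encoder/decoder pair; the three feature functions used here (arithmetic, harmonic, and geometric means of the fine-scale conductivities inside coarse element $k$) are placeholders for illustration, not the authors' choice of $\varphi_j$.

```python
import numpy as np

# Illustrative feature functions phi_j acting on the fine-scale conductivities
# that fall inside coarse element k (placeholder choices)
FEATURES = [
    lambda lam: np.mean(lam),                     # arithmetic mean
    lambda lam: 1.0 / np.mean(1.0 / lam),         # harmonic mean
    lambda lam: np.exp(np.mean(np.log(lam))),     # geometric mean
]

def encode(lam_f_per_element, theta_c, sigma, rng=None):
    """lam_c = exp(z_c) with z_{c,k} = sum_j theta_{c,j} phi_j(lam_{f,k}) + sigma_k Z_k."""
    rng = np.random.default_rng() if rng is None else rng
    phi = np.array([[f(lam_k) for f in FEATURES] for lam_k in lam_f_per_element])
    z_c = phi @ theta_c + sigma * rng.standard_normal(len(lam_f_per_element))
    return np.exp(z_c)

def decode_logpdf(U_f, U_c, W, s):
    """log p_cf(U_f | U_c, theta_cf) = log N(U_f | W U_c, diag(s))."""
    r = U_f - W @ U_c
    return -0.5 * (np.sum(r**2 / s) + np.sum(np.log(2.0 * np.pi * s)))
```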