MLQMC Methods for Elliptic PDEs Driven by White Noise


  1. MLQMC Methods for Elliptic PDEs Driven by White Noise
     M. Croci (Oxford), M. B. Giles (Oxford), M. E. Rognes (Simula), P. E. Farrell (Oxford)
     SIAM CSE and ICIAM 2019
     University of Oxford, Mathematical Institute
     EPSRC Centre for Doctoral Training in Industrially Focused Mathematical Modelling

  2. Overview
     - Introduction and Background
     - Multilevel Monte Carlo
     - Multilevel Quasi Monte Carlo
     - Conclusions

  3. Introduction
     The motivation of our research is the solution of partial differential equations with random coefficients via the multilevel Quasi-Monte Carlo method. Today we focus on the ubiquitous model problem,

         -∇·(e^{u(x,ω)} ∇p(x,ω)) = 1,   x ∈ G ⊂ R^d, ω ∈ Ω,
         p(x,ω) = 0,                     x ∈ ∂G, ω ∈ Ω.

     Here u(x,ω) ~ N(0, C(x,y)) is a (Matérn) Gaussian random field and we are interested in computing E[P], where P(ω) = ||p||²_{L²(G)}(ω).

     Simple approach: the standard Monte Carlo (MC) method,

         E[P] ≈ (1/N) Σ_{n=1}^{N} P(ω_n).

     Convergence rate O(N^{-1/2}). O(ε^{-q}) cost per sample ⇒ O(ε^{-2-q}) complexity for tolerance ε.
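     The standard MC estimator above can be sketched in a few lines. This is a minimal illustration, not the slides' PDE: `sample_P` is a hypothetical cheap stand-in for one PDE solve, chosen so that E[P] is known in closed form.

     ```python
     import numpy as np

     rng = np.random.default_rng(0)

     def sample_P(rng):
         # Hypothetical stand-in for one PDE solve: P here is a cheap
         # lognormal functional with known mean E[P] = e^{1/2}, not the
         # actual L2 norm of p from the model problem.
         u = rng.standard_normal()
         return np.exp(-u)

     N = 200_000
     estimate = np.mean([sample_P(rng) for _ in range(N)])
     # The root-mean-square error decays like O(N^{-1/2}): halving the
     # error costs 4x the samples, hence the O(eps^{-2-q}) complexity.
     ```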

  4. From standard Monte Carlo to Quasi Monte Carlo
     Approximate E[P] with an s-dimensional integral over [0,1]^s,

         E[P] ≈ ∫_{[0,1]^s} Y(x) dx ≈ (1/N) Σ_{n=1}^{N} Y(x_n).

     [Figure: pseudo-random points (left) vs. a low-discrepancy point sequence (Sobol', right).]

     QMC convergence rate up to O(N^{-1}) ⇒ up to O(ε^{-1-q}) complexity.

  5. Randomised Quasi Monte Carlo

         E[P] ≈ ∫_{[0,1]^s} Y(x) dx ≈ (1/M) Σ_{m=1}^{M} (1/N) Σ_{n=1}^{N} Y(x̂_{n,m}),

     where {x̂_{n,m}}_{n=1}^{N} is the m-th randomisation of a low-discrepancy point set {x_n}_{n=1}^{N}.

     [Figure: the first N = 256 Sobol' points before (left) and after (right) scrambling.]

     QMC convergence rate up to O(N^{-1}) ⇒ up to O(ε^{-1-q}) complexity.
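     To make the double sum concrete without any library dependency, here is a self-contained randomised-QMC sketch. It uses a 2D rank-1 Fibonacci lattice with M random shifts modulo 1 (randomly shifted lattice rules, rather than the scrambled Sobol' points of the slides), and a hypothetical integrand Y with known integral 1.

     ```python
     import numpy as np

     rng = np.random.default_rng(0)

     def Y(x):
         # Hypothetical smooth integrand on [0,1]^2 with exact integral 1.
         return np.prod(2.0 * x, axis=1)

     # Rank-1 Fibonacci lattice rule: x_n = frac(n * z / N), a simple
     # low-discrepancy point set in 2D (N and z are Fibonacci numbers).
     N, z = 233, np.array([1, 144])
     base = (np.arange(N)[:, None] * z / N) % 1.0

     # M independent random shifts give an unbiased estimator plus a
     # practical error estimate from the spread of the M averages.
     M = 32
     estimates = []
     for _ in range(M):
         shift = rng.uniform(size=2)
         x = (base + shift) % 1.0
         estimates.append(Y(x).mean())
     est = np.mean(estimates)
     stderr = np.std(estimates, ddof=1) / np.sqrt(M)
     ```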

  6. Multilevel Monte Carlo [Giles 2008]
     Solve for P using a hierarchy of L+1 (possibly non-nested) meshes to obtain the approximations P_ℓ of different accuracy for ℓ = 0, ..., L, then

         E[P] ≈ E[P_L] = E[P_0] + Σ_{ℓ=1}^{L} E[P_ℓ − P_{ℓ−1}].

  7. Multilevel Monte Carlo [Giles 2008]
     Solve for P using a hierarchy of L+1 (possibly non-nested) meshes to obtain the approximations P_ℓ of different accuracy for ℓ = 0, ..., L, then

         E[P] ≈ E[P_L] = E[P_0] + Σ_{ℓ=1}^{L} E[P_ℓ − P_{ℓ−1}].

     Apply standard MC to each term on the RHS:

         E[P] ≈ (1/N_0) Σ_{n=1}^{N_0} P_0(ω_n^0) + Σ_{ℓ=1}^{L} (1/N_ℓ) Σ_{n=1}^{N_ℓ} [P_ℓ(ω_n^ℓ) − P_{ℓ−1}(ω_n^ℓ)].

     Under suitable conditions, the optimal N_ℓ are known and the complexity is O(ε^{-2}), i.e. O(ε^{-q}) better than standard MC.
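     A minimal MLMC sketch of this estimator, again with a hypothetical stand-in for the level-ℓ solve: `P_level` mimics a FEM approximation whose discretisation error shrinks geometrically with ℓ, so the correction terms have rapidly decaying variance and most samples can be taken on the cheap coarse levels.

     ```python
     import numpy as np

     rng = np.random.default_rng(0)

     def P_level(l, omega):
         # Hypothetical stand-in for the level-l approximation:
         # P_l -> exp(-omega) with discretisation error O(2^{-l}).
         return np.exp(-omega) * (1.0 + 2.0 ** (-l))

     L = 4
     N = [8000 // 2**l + 200 for l in range(L + 1)]  # more samples on coarser levels

     est = np.mean(P_level(0, rng.standard_normal(N[0])))
     for l in range(1, L + 1):
         # Same omegas on both levels: this is the MLMC coupling.
         omegas = rng.standard_normal(N[l])
         est += np.mean(P_level(l, omegas) - P_level(l - 1, omegas))
     # est approximates E[P_L] = e^{1/2} (1 + 2^{-L}) for this toy model.
     ```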

  8. Multilevel Quasi Monte Carlo [Giles and Waterhouse 2009]
     We now approximate each expectation in the telescoping sum with randomised quasi Monte Carlo (QMC). Rewrite E[P_ℓ − P_{ℓ−1}] as the s_ℓ-dimensional integral over [0,1]^{s_ℓ},

         E[P_ℓ − P_{ℓ−1}] = ∫_{[0,1]^{s_ℓ}} Y_ℓ(x) dx ≈ (1/M) Σ_{m=1}^{M} (1/N_ℓ) Σ_{n=1}^{N_ℓ} Y_ℓ(x̂_{n,m}^ℓ),

     where {x̂_{n,m}^ℓ}_{n=1}^{N_ℓ} is the m-th randomisation of a low-discrepancy point set {x_n^ℓ}_{n=1}^{N_ℓ}. We use M = 32 and randomly digitally-shifted Sobol' sequences¹.

     QMC integration convergence² up to O(N_ℓ^{-1}) ⇒ complexity up to O(ε^{-1}).

     ¹ Python-wrapped Intel® MKL library Sobol' sequence implementation augmented with Joe and Kuo's primitive polynomials and direction numbers (s_max = 21201) [Joe and Kuo 2008].
     ² Lots of recent work with randomly shifted lattice rules [Kuo, Schwab, Sloan 2015; Kuo, Scheichl, Schwab, Sloan, Ullmann 2017; Herrmann and Schwab 2017; ...].

  9. Gaussian field sampling
     Sampling the Gaussian field u(x,ω) ~ N(0, C) is hard!
     Lots of recent developments related to ML(Q)MC: [Kuo et al. 2015, Kuo et al. 2018, Graham et al. 2018, Drzisga et al. 2017, Herrmann and Schwab 2017, Osborn et al. 2018, C. et al. 2018, ...]

  10. Gaussian field sampling
      Sampling the Gaussian field u(x,ω) ~ N(0, C) is hard!
      Naïve approach: discretise u and compute a Cholesky factorisation of the covariance matrix.
      Better approaches:
      - Karhunen-Loève expansion.
      - FFT + circulant embeddings [Dietrich and Newsam 1997].
      - PDE approach [Lindgren et al. 2009] (see next).
      Lots of recent developments related to ML(Q)MC: [Kuo et al. 2015, Kuo et al. 2018, Graham et al. 2018, Drzisga et al. 2017, Herrmann and Schwab 2017, Osborn et al. 2018, C. et al. 2018, ...]

  11. PDE approach to Matérn field sampling [Lindgren et al. 2009]
      If the covariance of u is of the Matérn class with smoothness parameter ν, then u approximately satisfies the linear elliptic SPDE,

          Lu(x,ω) = (I − κ^{-2}Δ)^k u(x,ω) = η Ẇ,   x ∈ D ⊂ R^d,   ν = 2k − d/2 > 0,

      where Ẇ is spatial white noise, G ⊂ D, η ∈ R, and for today k ∈ N, η = 1.

      [Figure: spatial white noise in [0,1]².]

      Refs: [Abrahamsen 1997, Scheuerer 2010, Lindgren et al. 2009, Bolin et al. 2017, Khristenko et al. 2018]

  12. PDE approach to Matérn field sampling [Lindgren et al. 2009]
      Definition (Spatial White Noise Ẇ [Hida et al. 1993]). For any φ ∈ L²(D), define ⟨Ẇ, φ⟩ := ∫_D φ dW. For any φ_i, φ_j ∈ L²(D), b_i = ⟨Ẇ, φ_i⟩ and b_j = ⟨Ẇ, φ_j⟩ are zero-mean Gaussian random variables with

          E[b_i b_j] = ∫_D φ_i φ_j dx =: M_ij,   b ~ N(0, M).   (1)

      [Figure: spatial white noise in [0,1]².]

      Refs: [Abrahamsen 1997, Scheuerer 2010, Lindgren et al. 2009, Bolin et al. 2017, Khristenko et al. 2018]
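      The matrix M in (1) is exactly a FEM mass matrix. As a concrete check (an illustration with assumed P1 hat functions on a uniform 1D mesh, not part of the slides), assembling M_ij = ∫ φ_i φ_j dx by quadrature reproduces the well-known tridiagonal values 2h/3 and h/6:

      ```python
      import numpy as np

      # P1 hat functions on a uniform mesh of [0,1]; n interior nodes, h = 1/(n+1).
      n = 10
      h = 1.0 / (n + 1)
      nodes = np.linspace(0.0, 1.0, n + 2)

      def phi(i, x):
          # Hat function centred at node i, supported on [nodes[i-1], nodes[i+1]].
          return np.clip(1.0 - np.abs(x - nodes[i]) / h, 0.0, None)

      # Assemble M_ij = \int_D phi_i phi_j dx by fine trapezoidal quadrature.
      x = np.linspace(0.0, 1.0, 20001)
      w = np.full(x.size, x[1] - x[0])
      w[0] = w[-1] = 0.5 * (x[1] - x[0])
      Phi = np.vstack([phi(i, x) for i in range(1, n + 1)])
      M = (Phi * w) @ Phi.T
      # M is tridiagonal: 2h/3 on the diagonal, h/6 on the off-diagonals,
      # so b ~ N(0, M) has correlated, not independent, entries.
      ```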

  13. Overview
      - Introduction and Background
      - Multilevel Monte Carlo
      - Multilevel Quasi Monte Carlo
      - Conclusions

  14. Efficient white noise sampling for MLMC [C. et al. 2018]
      Let {V_ℓ}_{ℓ=0}^{L} be a hierarchy of (possibly non-nested) FEM approximation subspaces with V_ℓ = span({φ_i^ℓ}_{i=1}^{n_ℓ^dofs}) ⊆ H_0^1(D). On each MLMC level, we need to solve for u_ℓ and u_{ℓ−1},

          Lu_ℓ = Ẇ,   and   Lu_{ℓ−1} = Ẇ if ℓ > 0,

      where we use the same white noise sample on both levels to enforce the MLMC coupling.

  15. Efficient white noise sampling for MLMC [C. et al. 2018]
      Let {V_ℓ}_{ℓ=0}^{L} be a hierarchy of (possibly non-nested) FEM approximation subspaces with V_ℓ = span({φ_i^ℓ}_{i=1}^{n_ℓ^dofs}) ⊆ H_0^1(D). On each MLMC level, we need to solve for u_ℓ and u_{ℓ−1},

          Lu_ℓ = Ẇ,   and   Lu_{ℓ−1} = Ẇ if ℓ > 0,

      where we use the same white noise sample on both levels to enforce the MLMC coupling. After discretisation, the above yields the linear system,

          [ A_ℓ      0       ] [ u_ℓ     ]   [ b_ℓ     ]
          [ 0        A_{ℓ−1} ] [ u_{ℓ−1} ] = [ b_{ℓ−1} ] = b,   where b_i^ℓ = ⟨Ẇ, φ_i^ℓ⟩.

      Hence b ~ N(0, M), where M is the mass matrix over V_ℓ + V_{ℓ−1} (set V_{−1} = ∅), i.e.

          M = [ M_ℓ              M_{ℓ,ℓ−1} ]
              [ (M_{ℓ,ℓ−1})^T    M_{ℓ−1}   ],   with (M_{ℓ,ℓ−1})_{ij} = ∫_D φ_i^ℓ φ_j^{ℓ−1} dx if ℓ > 0.

      NOTE: we do not require the FEM approximation subspaces to be nested!

  16. How to sample b?
      Sampling b is hard!

  17. How to sample b?
      Sampling b is hard!
      Naïve approach:
      - Factorise the covariance M using Cholesky (cubic complexity!).
      - Works well if M is diagonal. Previous work under this assumption: [Lindgren et al. 2009, Osborn et al. 2017, Drzisga et al. 2017, Du and Zhang 2002].
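      The naïve approach can be sketched directly (a small illustration using the analytic 1D P1 mass matrix as an assumed stand-in for M): factorise M = H Hᵀ once at O(n³) cost, then b = Hz with z ~ N(0, I) has covariance H E[zzᵀ] Hᵀ = M, which the empirical covariance of many samples confirms.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # Assumed covariance: the tridiagonal P1 mass matrix on a uniform mesh of [0,1].
      n, h = 30, 1.0 / 31
      M = h * (np.diag(np.full(n, 2 / 3))
               + np.diag(np.full(n - 1, 1 / 6), 1)
               + np.diag(np.full(n - 1, 1 / 6), -1))

      # Naive approach: Cholesky factorisation M = H H^T (O(n^3) flops),
      # then each sample b = H z with z ~ N(0, I) is exactly N(0, M).
      H = np.linalg.cholesky(M)
      Z = rng.standard_normal((n, 100_000))
      B = H @ Z
      cov = (B @ B.T) / Z.shape[1]  # empirical covariance, approaches M
      ```

      The cubic factorisation cost is the bottleneck this slide flags; when M is diagonal the factor is just the entrywise square root and sampling is O(n).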
