Approximation of Stochastic Partial Differential Equations by a Kernel-based Collocation Method
Qi Ye, Department of Applied Mathematics, Illinois Institute of Technology
Joint work with Prof. I. Cialenco and Prof. G. E. Fasshauer
MCQMC 2012, February 2012


  1. Title: Approximation of Stochastic Partial Differential Equations by a Kernel-based Collocation Method. Qi Ye, Department of Applied Mathematics, Illinois Institute of Technology (qye3@iit.edu). Joint work with Prof. I. Cialenco and Prof. G. E. Fasshauer. MCQMC 2012, February 2012.

  2. Outline: 1. Introduction; 2. Background; 3. Kernel-based Collocation Methods; 4. Numerical Examples; 5. Acknowledgments.

  3. Introduction: Meshfree Methods, Statistical Learning, Stochastic Analysis.

  4. Introduction: Books and Monographs. [Slide shows book covers.]

  5. Outline: 1. Introduction; 2. Background; 3. Kernel-based Collocation Methods; 4. Numerical Examples; 5. Acknowledgments.

  6. Background: The method in a nutshell. Parabolic Stochastic Equations $\Rightarrow$ Elliptic Stochastic Equations. Here we only consider the simple high-dimensional elliptic SPDE
  $\Delta u = f + \xi$ in $D \subset \mathbb{R}^d$, $\quad u = 0$ on $\partial D$,
  where $\Delta = \sum_{j=1}^{d} \partial^2 / \partial x_j^2$ is the Laplacian operator. We suppose that $u \in H^m(D)$ (a Sobolev space) with $m > 2 + d/2$ a.s., $f : D \to \mathbb{R}$ is a deterministic function, and $\xi : D \times \Omega_\xi \to \mathbb{R}$ is a Gaussian field with mean zero and covariance kernel $W : D \times D \to \mathbb{R}$ defined on a probability space $(\Omega_\xi, \mathcal{F}_\xi, \mathbb{P}_\xi)$, i.e.,
  $\mathbb{E}(\xi_x) = 0, \quad \mathrm{Cov}(\xi_x, \xi_y) = W(x, y)$.

  7. Background: The method in a nutshell. The proposed numerical method for solving such an SPDE can be described as follows. Step 1: we choose a reproducing kernel $K : D \times D \to \mathbb{R}$ whose reproducing kernel Hilbert space $H_K(D)$ is embedded into $H^m(D)$. The noise covariance kernel $W$ and the smoothness of the exact solution $u$ guide the choice of the reproducing kernel $K$, which in turn determines the convergence rates. A kernel-choice sketch is given below.
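A minimal kernel-choice sketch for Step 1, in Python. The slides do not fix a specific kernel, so the Matérn (Sobolev-spline) family and the parameters nu and ell below are illustrative assumptions. The Matérn kernel with smoothness nu has an RKHS norm-equivalent to the Sobolev space H^{nu+d/2}, so choosing nu >= m - d/2 gives the embedding of H_K(D) into H^m(D) required above.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import gamma, kv


def matern_kernel(X, Y, nu=2.5, ell=1.0):
    """Matern kernel K(x, y); its RKHS is norm-equivalent to H^{nu + d/2},
    so taking nu >= m - d/2 embeds H_K(D) into the Sobolev space H^m(D)."""
    r = cdist(np.atleast_2d(X), np.atleast_2d(Y)) / ell
    r = np.where(r == 0.0, 1e-12, r)            # avoid the 0 * inf limit at r = 0
    arg = np.sqrt(2.0 * nu) * r
    return (2.0 ** (1.0 - nu) / gamma(nu)) * arg ** nu * kv(nu, arg)
```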

  8. Background: The method in a nutshell. Step 2: we simulate the Gaussian field with covariance structure $W$ at a finite collection of predetermined collocation points
  $X_D := \{x_1, \dots, x_N\} \subset D$, $\quad X_{\partial D} := \{x_{N+1}, \dots, x_{N+M}\} \subset \partial D$,
  i.e.,
  $y_j := f(x_j) + \xi_{x_j}$, $j = 1, \dots, N$, $\quad y_{N+j} := 0$, $j = 1, \dots, M$,
  and
  $\boldsymbol{\xi} := (\xi_{x_1}, \dots, \xi_{x_N}) \sim \mathcal{N}(0, \mathsf{W})$, $\quad \mathsf{W} := \big(W(x_j, x_k)\big)_{j,k=1}^{N,N}$.
  We also let the random vector $\boldsymbol{y}_\xi := (y_1, \dots, y_{N+M})^T$. A minimal simulation sketch is given below.
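A minimal simulation sketch for Step 2, in Python. The covariance kernel W, the source term f, and the point sets X_int (interior) and X_bdy (boundary) are assumed to be supplied by the user; the jitter added before the Cholesky factorization is a numerical safeguard, not part of the method.

```python
import numpy as np


def simulate_rhs(X_int, X_bdy, W, f, rng=np.random.default_rng(0)):
    """Draw one sample of xi ~ N(0, W) at the interior collocation points and
    assemble y_xi = (f(x_1) + xi_1, ..., f(x_N) + xi_N, 0, ..., 0)."""
    N, M = len(X_int), len(X_bdy)
    Wmat = np.array([[W(xj, xk) for xk in X_int] for xj in X_int])   # N x N covariance
    L = np.linalg.cholesky(Wmat + 1e-12 * np.eye(N))                 # jitter: W may be semi-definite
    xi = L @ rng.standard_normal(N)                                  # one noise sample
    y = np.concatenate([np.array([f(x) for x in X_int]) + xi, np.zeros(M)])
    return y, xi
```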

  9. Background: The method in a nutshell. Step 3: we also define its integral-type kernel
  $\ast K(x, y) := \int_D K(x, z) K(y, z) \, dz$, $\quad \ast K \in H^{m,m}(D \times D)$.
  Step 4: the kernel-based collocation solution is written as
  $u(x) \approx \hat{u}(x) := \sum_{k=1}^{N} c_k \, \Delta_2 \ast K(x, x_k) + \sum_{k=1}^{M} c_{N+k} \, \ast K(x, x_{N+k})$,
  where the unknown random coefficients $\boldsymbol{c} := (c_1, \dots, c_{N+M})^T$ are obtained by solving a random system of linear equations, i.e., $\ast\mathsf{K} \boldsymbol{c} = \boldsymbol{y}_\xi$. A sketch of the assembly and solve is given below.
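A sketch of Steps 3 and 4, assuming the integral-type kernel $\ast K$ and its Laplacian images $\Delta_2 \ast K$ and $\Delta_1 \Delta_2 \ast K$ are available as callables (Ks, L2Ks, L1L2Ks below are placeholder names). The block structure of the collocation matrix follows the matrix $\ast\mathsf{K}$ given on slide 15.

```python
import numpy as np


def collocation_solution(X_int, X_bdy, Ks, L2Ks, L1L2Ks, y):
    """Assemble the (N+M) x (N+M) collocation matrix *K, solve *K c = y_xi,
    and return a callable evaluating the collocation solution u_hat."""
    N, M = len(X_int), len(X_bdy)
    X = list(X_int) + list(X_bdy)
    A = np.empty((N + M, N + M))
    for j, xj in enumerate(X):
        interior_row = j < N
        for k, xk in enumerate(X_int):          # columns tied to interior points
            A[j, k] = L1L2Ks(xj, xk) if interior_row else L2Ks(xj, xk)
        for k, xk in enumerate(X_bdy):          # columns tied to boundary points
            # Delta_1 *K(x_j, x_{N+k}) = Delta_2 *K(x_{N+k}, x_j) by symmetry of *K
            A[j, N + k] = L2Ks(xk, xj) if interior_row else Ks(xj, xk)
    c = np.linalg.solve(A, y)                   # random coefficients, one set per noise sample
    def u_hat(x):                               # evaluate the collocation solution at x
        basis = [L2Ks(x, xk) for xk in X_int] + [Ks(x, xk) for xk in X_bdy]
        return float(np.asarray(basis) @ c)
    return u_hat
```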

  10. Background: Advantages. The kernel-based collocation method is a meshfree approximation method: it does not require an underlying triangular mesh as the Galerkin finite element method does. It can be applied to a high-dimensional domain $D$ with a complex boundary $\partial D$. To obtain the truncated Gaussian noise $\xi^n$ for the finite element method, one has to compute the eigenvalues and eigenfunctions of the noise covariance kernel $W$, which is difficult in general; the kernel-based collocation method avoids this issue entirely. Once the reproducing kernel is fixed, the error of the collocation solution depends only on the collocation points.

  11. Background: Difference from Finite Element Methods. Given a finite element basis $\phi$, we need to compute the right-hand-side integrals for the Galerkin finite element method. Popular methods:
  $\int_D \xi_x \phi(x) \, dx \approx \int_D \xi_x^n \phi(x) \, dx = \sum_{k=1}^{n} \zeta_k \sqrt{\lambda_k} \int_D e_k(x) \phi(x) \, dx$,
  where the truncated Gaussian noise is
  $\xi_x \approx \xi_x^n = \sum_{k=1}^{n} \zeta_k \sqrt{\lambda_k} \, e_k(x)$, $\quad \zeta_1, \dots, \zeta_n \sim \text{i.i.d. } \mathcal{N}(0, 1)$,
  and
  $W(x, y) \approx W_n(x, y) = \sum_{k=1}^{n} \lambda_k e_k(x) e_k(y)$.
  A sketch of this truncated expansion is given below.
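A short sketch of the truncated expansion above, assuming the eigenvalues lam and eigenfunctions e_funcs of the covariance kernel W are already known; computing them is exactly the difficulty mentioned on the previous slide.

```python
import numpy as np


def truncated_noise(lam, e_funcs, rng=np.random.default_rng(0)):
    """Return one sample path of the truncated Karhunen-Loeve noise
    xi^n_x = sum_k zeta_k * sqrt(lam_k) * e_k(x)."""
    zeta = rng.standard_normal(len(lam))        # i.i.d. N(0, 1) coefficients
    def xi_n(x):
        return sum(z * np.sqrt(l) * e(x) for z, l, e in zip(zeta, lam, e_funcs))
    return xi_n
```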

  12. Background: Difference from Finite Element Methods. Monte Carlo methods: for each fixed sample path $\omega \in \Omega_\xi$, $\xi_x(\omega)$ is a function defined on $D$. However, we do not know its exact form, so we can only use Monte Carlo methods to approximate the right-hand side, i.e.,
  $\int_D \xi_x \phi(x) \, dx \approx \sum_{j=1}^{N} \xi_{x_j} \phi(x_j)$.
  Kernel-based methods:
  $\xi_x \approx \hat{\xi}_x := \boldsymbol{w}(x)^T \mathsf{W}^{-1} \boldsymbol{\xi}$,
  where $\boldsymbol{w}(x) := (W(x, x_1), \dots, W(x, x_N))^T$ and $\mathsf{W} := \big(W(x_j, x_k)\big)_{j,k=1}^{N,N}$. A sketch of this kernel-based noise interpolant is given below.
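A minimal sketch of the kernel-based noise approximation $\hat{\xi}_x = \boldsymbol{w}(x)^T \mathsf{W}^{-1} \boldsymbol{\xi}$, assuming the covariance kernel W and a simulated noise vector xi (for example from the sketch after slide 8) are given.

```python
import numpy as np


def noise_interpolant(X_int, W, xi):
    """Return xi_hat(x) = w(x)^T W^{-1} xi, the kernel interpolant of the
    simulated noise values xi at the interior collocation points X_int."""
    Wmat = np.array([[W(xj, xk) for xk in X_int] for xj in X_int])
    coeffs = np.linalg.solve(Wmat, xi)              # W^{-1} xi, computed once
    def xi_hat(x):
        w_x = np.array([W(x, xj) for xj in X_int])  # w(x) = (W(x, x_1), ..., W(x, x_N))^T
        return float(w_x @ coeffs)
    return xi_hat
```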

  13. Outline: 1. Introduction; 2. Background; 3. Kernel-based Collocation Methods; 4. Numerical Examples; 5. Acknowledgments.

  14. Kernel-based Collocation Methods: Gaussian Fields. According to [Cialenco, Fasshauer and Ye 2011 SPDE, Theorem 3.1], for a given $\mu \in H_K(D)$ there exists a probability measure $\mathbb{P}^{\mu}$ defined on $(\Omega_K, \mathcal{F}_K) = (H_K(D), \mathcal{B}(H_K(D)))$ such that the stochastic fields $\Delta S$, $S$ given by
  $\Delta S_x(\omega) = \Delta S(x, \omega) := (\Delta \omega)(x)$, $\quad x \in D$, $\omega \in \Omega_K = H_K(D)$,
  $S_x(\omega) = S(x, \omega) := \omega(x)$, $\quad x \in D \cup \partial D$, $\omega \in \Omega_K = H_K(D)$,
  are Gaussian with means $\Delta \mu$, $\mu$ and covariance kernels $\Delta_1 \Delta_2 \ast K$, $\ast K$ defined on $(\Omega_K, \mathcal{F}_K, \mathbb{P}^{\mu})$, respectively. For any fixed $z \in \mathbb{R}$, we let
  $E_x(z) := \{\omega \in \Omega_K : \omega(x) = z\} = \{\omega \in \Omega_K : S_x(\omega) = z\}$.

  15. Kernel-based Collocation Methods: Gaussian Fields. [Cialenco, Fasshauer and Ye 2011 SPDE, Corollary 3.2] shows that the random vector
  $\boldsymbol{S} := (\Delta S_{x_1}, \dots, \Delta S_{x_N}, S_{x_{N+1}}, \dots, S_{x_{N+M}}) \sim \mathcal{N}(\boldsymbol{m}_\mu, \ast\mathsf{K})$,
  where
  $\boldsymbol{m}_\mu := (\Delta \mu(x_1), \dots, \Delta \mu(x_N), \mu(x_{N+1}), \dots, \mu(x_{N+M}))^T$,
  $\ast\mathsf{K} := \begin{pmatrix} \big(\Delta_1 \Delta_2 \ast K(x_j, x_k)\big)_{j,k=1}^{N,N} & \big(\Delta_1 \ast K(x_j, x_{N+k})\big)_{j,k=1}^{N,M} \\ \big(\Delta_2 \ast K(x_{N+j}, x_k)\big)_{j,k=1}^{M,N} & \big(\ast K(x_{N+j}, x_{N+k})\big)_{j,k=1}^{M,M} \end{pmatrix}$.
  For any given $\boldsymbol{y} = (y_1, \dots, y_{N+M})^T \in \mathbb{R}^{N+M}$, we let
  $E_X(\boldsymbol{y}) := \{\omega \in \Omega_K : \Delta \omega(x_1) = y_1, \dots, \omega(x_{N+M}) = y_{N+M}\} = \{\omega \in \Omega_K : \boldsymbol{S}(\omega) = \boldsymbol{y}\}$.

  16. Kernel-based Collocation Methods: Approximation and Convergence. For each fixed $x \in D$ and $\omega_2 \in \Omega_\xi$, we obtain the "optimal" estimator
  $u(x, \omega_2) \approx \hat{u}(x, \omega_2) = \operatorname*{argmax}_{z \in \mathbb{R}} \, \sup_{\mu \in H_K(D)} \mathbb{P}^{\mu}_{\xi}\big( E_x(z) \times \Omega_\xi \mid E_X(\boldsymbol{y}_\xi(\omega_2)) \big)$
  $= \operatorname*{argmax}_{z \in \mathbb{R}} \, \sup_{\mu \in H_K(D)} \mathbb{P}^{\mu}_{\xi}\big( S_x = z \mid \boldsymbol{S} = \boldsymbol{y}_\xi(\omega_2) \big)$
  $= \operatorname*{argmax}_{z \in \mathbb{R}} \, \sup_{\mu \in H_K(D)} p^{\mu}_x(z \mid \boldsymbol{y}_\xi(\omega_2))$
  $= \boldsymbol{k}(x)^T \ast\mathsf{K}^{-1} \boldsymbol{y}_\xi(\omega_2)$,
  where $\boldsymbol{k}(x) := (\Delta_2 \ast K(x, x_1), \dots, \Delta_2 \ast K(x, x_N), \ast K(x, x_{N+1}), \dots, \ast K(x, x_{N+M}))^T$ and
  $\Omega_{K\xi} := \Omega_K \times \Omega_\xi$, $\quad \mathcal{F}_{K\xi} := \mathcal{F}_K \otimes \mathcal{F}_\xi$, $\quad \mathbb{P}^{\mu}_{\xi} := \mathbb{P}^{\mu} \otimes \mathbb{P}_\xi$,
  so that $\Delta S$, $S$ and $\xi$ can be extended to the product space while preserving their original probability distributions.

  17. Kernel-based Collocation Methods: Approximation and Convergence. Error bound analysis: for any $\epsilon > 0$, we define
  $E^{\epsilon}_x := \big\{ \omega_1 \times \omega_2 \in \Omega_K \times \Omega_\xi : |\omega_1(x) - \hat{u}(x, \omega_2)| \ge \epsilon, \text{ s.t. } \Delta\omega_1(x_1) = y_1(\omega_2), \dots, \omega_1(x_{N+M}) = y_{N+M}(\omega_2) \big\}$.
  Let the fill distance be
  $h_X := \sup_{x \in D} \min_{1 \le j \le N+M} \|x - x_j\|_2$.
  A sketch of the fill-distance computation is given below.
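A small sketch estimating the fill distance $h_X$ numerically by maximizing the nearest-collocation-point distance over a dense set of test points; the uniform grid over $(0,1)^2$ matches the numerical example later in the talk, and the grid resolution is an arbitrary choice.

```python
import numpy as np
from scipy.spatial import cKDTree


def fill_distance(X_colloc, n_test=200):
    """Estimate h_X = sup_{x in D} min_j ||x - x_j||_2 over a dense grid in (0,1)^2."""
    t = np.linspace(0.0, 1.0, n_test)
    test_pts = np.array(np.meshgrid(t, t)).reshape(2, -1).T   # dense test points in D
    dists, _ = cKDTree(X_colloc).query(test_pts)              # distance to nearest x_j
    return dists.max()
```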

  18. Kernel-based Collocation Methods: Approximation and Convergence. We can deduce that
  $\sup_{\mu \in H_K(D)} \mathbb{P}^{\mu}_{\xi}(E^{\epsilon}_x) = \mathcal{O}\!\left( \frac{h_X^{\,m - 2 - d/2}}{\epsilon} \right)$,
  where $m$ is the order of the Sobolev space corresponding to the exact solution of the SPDE. Since $|u(x, \omega_2) - \hat{u}(x, \omega_2)| \ge \epsilon$ if and only if $u \in E^{\epsilon}_x$, we have
  $\sup_{\mu \in H_K(D)} \mathbb{P}^{\mu}_{\xi}\big( \|u - \hat{u}\|_{L^\infty(D)} \ge \epsilon \big) \le \sup_{\mu \in H_K(D),\, x \in D} \mathbb{P}^{\mu}_{\xi}(E^{\epsilon}_x) \to 0$
  as $h_X \to 0$.

  19. Outline: 1. Introduction; 2. Background; 3. Kernel-based Collocation Methods; 4. Numerical Examples; 5. Acknowledgments.

  20. Numerical Examples: Stochastic Laplace's Equations. Let the domain $D := (0, 1)^2 \subset \mathbb{R}^2$. We choose the deterministic function
  $f(x) := -2\pi^2 \sin(\pi x_1)\sin(\pi x_2) - 8\pi^2 \sin(2\pi x_1)\sin(2\pi x_2)$,
  and the covariance kernel of the Gaussian noise $\xi$ to be
  $W(x, y) := 4\pi^4 \sin(\pi x_1)\sin(\pi x_2)\sin(\pi y_1)\sin(\pi y_2) + 16\pi^4 \sin(2\pi x_1)\sin(2\pi x_2)\sin(2\pi y_1)\sin(2\pi y_2)$.
  Then the exact solution of the above elliptic SPDE has the form
  $u(x) := \sin(\pi x_1)\sin(\pi x_2) + \sin(2\pi x_1)\sin(2\pi x_2) + \zeta_1 \sin(\pi x_1)\sin(\pi x_2) + \tfrac{\zeta_2}{2} \sin(2\pi x_1)\sin(2\pi x_2)$,
  where $\zeta_1, \zeta_2 \sim \text{i.i.d. } \mathcal{N}(0, 1)$. A set-up sketch for this test problem is given below.
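A short set-up sketch for this test problem: the source term $f$, the noise covariance $W$, and one sample path of the closed-form exact solution $u$. The random seed and the evaluation point are arbitrary choices for illustration.

```python
import numpy as np


def s1(x):  # sin(pi x1) sin(pi x2)
    return np.sin(np.pi * x[0]) * np.sin(np.pi * x[1])


def s2(x):  # sin(2 pi x1) sin(2 pi x2)
    return np.sin(2 * np.pi * x[0]) * np.sin(2 * np.pi * x[1])


def f(x):                                   # deterministic source term
    return -2 * np.pi**2 * s1(x) - 8 * np.pi**2 * s2(x)


def W(x, y):                                # noise covariance kernel
    return 4 * np.pi**4 * s1(x) * s1(y) + 16 * np.pi**4 * s2(x) * s2(y)


rng = np.random.default_rng(2012)
zeta1, zeta2 = rng.standard_normal(2)       # i.i.d. N(0, 1)


def u_exact(x):                             # one sample path of the exact solution
    return (1 + zeta1) * s1(x) + (1 + zeta2 / 2) * s2(x)


print(u_exact(np.array([0.25, 0.25])))      # evaluate the sample path at one point
```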
