  1. On the weak approximation of solutions to stochastic partial differential equations
     Raphael Kruse (TU Berlin)
     (joint work with Adam Andersson and Stig Larsson)
     Berlin-Padova Young Researchers Meeting on Stochastic Analysis and Applications in Biology, Finance and Physics
     Berlin, October 23, 2014

  2. A semilinear SPDE with additive noise
     We consider
     $$\mathrm{d}X(t) + \big[ A X(t) + F(X(t)) \big]\,\mathrm{d}t = \mathrm{d}W(t), \quad t \in (0,T], \qquad X(0) = x_0 \in H, \tag{SPDE}$$
     where $H$ is a separable Hilbert space,
     ◮ $X \colon [0,T] \times \Omega \to H$,
     ◮ $-A$ is the generator of an analytic semigroup $S(t)$ on $H$,
     ◮ $(W(t))_{t \in [0,T]}$ is an $H$-valued $Q$-Wiener process with covariance operator $Q \colon H \to H$, $\operatorname{Tr}(Q) < \infty$,
     ◮ $F \colon H \to H$ is a nonlinear mapping, globally Lipschitz continuous,
     ◮ $x_0 \in H$ is a deterministic initial value.
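A canonical example fitting this framework (standard in this literature, added here only as an illustration) is the stochastic heat equation on the unit interval with homogeneous Dirichlet boundary conditions:

$$H = L^2(0,1), \qquad A = -\frac{\partial^2}{\partial \xi^2}, \quad \operatorname{dom}(A) = H^2(0,1) \cap H_0^1(0,1), \qquad \big(F(x)\big)(\xi) = f\big(x(\xi)\big)$$

for a globally Lipschitz $f \colon \mathbb{R} \to \mathbb{R}$ (a Nemytskii operator). Then (SPDE) is the formal equation $\mathrm{d}X(t,\xi) = \big( \partial_\xi^2 X(t,\xi) - f(X(t,\xi)) \big)\,\mathrm{d}t + \mathrm{d}W(t,\xi)$ with $X(t,0) = X(t,1) = 0$.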

  3. Existence and uniqueness
     ◮ Semigroup approach by [Da Prato, Zabczyk 1992].
     ◮ There exists a unique mild solution $X$ of (SPDE), given by the variation-of-constants formula
       $$X(t) = S(t) x_0 - \int_0^t S(t-\sigma) F(X(\sigma))\,\mathrm{d}\sigma + \int_0^t S(t-\sigma)\,\mathrm{d}W(\sigma), \qquad \mathbb{P}\text{-a.s. for } 0 \le t \le T.$$
     ◮ It holds that $\sup_{t \in [0,T]} \| X(t) \|_{L^p(\Omega;H)} < \infty$ for all $p \ge 2$.
     ◮ $X$ takes values in $\operatorname{dom}(A^{1/2})$, provided that $A$ is self-adjoint and positive definite with a compact inverse.

  4. Computational goal
     For a given mapping $\varphi \in C_p^2(H; \mathbb{R})$ we want to estimate
     $$\mathbb{E}\big[ \varphi(X(T)) \big].$$
     For this we need to discretize
     ◮ $H$ (spatial discretization),
     ◮ $[0,T]$ (temporal discretization),
     ◮ $H_0 = Q^{1/2}(H)$ (discretization of the noise),
     ◮ Monte Carlo methods ... (an estimator is sketched below).
     In this talk:
     ◮ discretization of $H$ by Galerkin finite element methods,
     ◮ temporal discretization of $[0,T]$ by the linearly implicit Euler scheme.
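Combining these ingredients, $\mathbb{E}[\varphi(X(T))]$ would ultimately be approximated by a standard Monte Carlo estimator (this display is an illustration added here, not from the slide):

$$\mathbb{E}\big[ \varphi(X(T)) \big] \approx \frac{1}{M} \sum_{m=1}^{M} \varphi\big( X_h^{N_k,(m)} \big),$$

where $X_h^{N_k,(m)}$, $m = 1, \ldots, M$, are i.i.d. samples of the fully discrete approximation. The total error then splits into the weak discretization error studied in this talk plus a statistical error of order $O(M^{-1/2})$.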

  5. The numerical scheme
     The numerical scheme is given, for $j \in \{1, \ldots, N_k\}$, by
     $$X_h^j = (I + kA_h)^{-1} \big( X_h^{j-1} - k P_h F(X_h^{j-1}) + P_h \Delta W^j \big), \qquad X_h^0 = P_h x_0,$$
     where
     ◮ $(V_h)_{h \in (0,1]}$ is a family of finite-dimensional subspaces of $H$,
     ◮ $A_h \colon V_h \to V_h$ is a discrete version of $A$,
     ◮ $P_h \colon H \to V_h$ is the orthogonal projector onto $V_h$,
     ◮ $k$ is the equidistant temporal step size,
     ◮ $\Delta W^j = W(t_j) - W(t_{j-1})$, $t_j = jk$, $j = 0, 1, \ldots, N_k$.
     Examples for $(V_h)_{h \in (0,1]}$:
     ◮ standard finite element method,
     ◮ spectral Galerkin method,
     ◮ ...
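As a concrete illustration of the scheme (a minimal sketch, not taken from the talk): for the stochastic heat equation on $(0,1)$ with Dirichlet boundary conditions, the spectral Galerkin choice of $V_h$ diagonalizes $A_h$, so one linearly implicit Euler step is a componentwise division. The parameter values, the eigenvalues of $Q$, and the nonlinearity $F$ below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # discretization parameters (illustrative)
    T = 1.0          # final time
    N_modes = 64     # dimension of V_h (spectral Galerkin space)
    N_k = 1000       # number of time steps
    k = T / N_k      # equidistant step size

    # spectral data of A = -d^2/dxi^2 on (0,1) with Dirichlet BC: lam_i = (i*pi)^2
    i = np.arange(1, N_modes + 1)
    lam = (np.pi * i) ** 2          # eigenvalues of A_h (diagonal in this basis)
    q = i ** (-2.0)                 # assumed eigenvalues of Q; Tr(Q) = sum(q) < infinity

    def F(x):
        # an artificial globally Lipschitz map on H, chosen only for the sketch
        return np.tanh(x)

    x = np.zeros(N_modes)           # coefficients of X_h^0 = P_h x_0 with x_0 = 0
    for j in range(1, N_k + 1):
        dW = rng.normal(0.0, np.sqrt(k * q))   # coefficients of P_h Delta W^j
        # one linearly implicit Euler step:
        # (I + k A_h) X_h^j = X_h^{j-1} - k P_h F(X_h^{j-1}) + P_h Delta W^j
        x = (x - k * F(x) + dW) / (1.0 + k * lam)

    print("||X_h^{N_k}||_H =", np.linalg.norm(x))

A finite element choice of $V_h$ would instead require solving a sparse linear system with matrix $I + kA_h$ in every step; the structure of the iteration is otherwise identical.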

  6. Strong vs. weak convergence
     Two different notions of convergence:
     Strong convergence: estimates of the error
     $$\Big( \mathbb{E}\big[ \| X(t_j) - X_h^j \|^2 \big] \Big)^{1/2}, \qquad j = 1, \ldots, N_k.$$
     $\Rightarrow$ Sample paths of $X$ and $X_h$ are close to each other.
     Weak convergence: estimates of the error
     $$\big| \mathbb{E}\big[ \varphi(X(t_{N_k})) - \varphi(X_h^{N_k}) \big] \big| \qquad \text{for } \varphi \in C_p^2(H, \mathbb{R}).$$
     $\Rightarrow$ The distribution of $X_h^{N_k}$ converges to the distribution of $X(t_{N_k})$.
     Both notions play an important role in setting up multilevel Monte Carlo (MLMC) methods.
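The role of the two error notions in MLMC, alluded to in the last line, can be made explicit. The telescoping decomposition below is the standard one (cf. Giles 2008) and is added here only for context:

$$\mathbb{E}\big[ \varphi(X_L) \big] = \mathbb{E}\big[ \varphi(X_0) \big] + \sum_{\ell=1}^{L} \mathbb{E}\big[ \varphi(X_\ell) - \varphi(X_{\ell-1}) \big],$$

where $X_\ell$ denotes the approximation on discretization level $\ell$. The weak rate governs the bias of the finest level $L$, while the strong rate governs the variance of the level corrections $\varphi(X_\ell) - \varphi(X_{\ell-1})$, and hence the sampling cost.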

  7. Main idea: Gelfand triples
     Let us consider a Gelfand triple $V \subset L^2(\Omega; H) \subset V^*$.
     By the mean value theorem, the weak error reads
     $$\big| \mathbb{E}\big[ \varphi(X(t_{N_k})) - \varphi(X_h^{N_k}) \big] \big| = \big| \big\langle \Phi_h^{N_k},\, X(t_{N_k}) - X_h^{N_k} \big\rangle_{L^2(\Omega;H)} \big|,$$
     where
     $$\Phi_h^n = \int_0^1 \varphi'\big( \rho X(t_n) + (1-\rho) X_h^n \big)\,\mathrm{d}\rho.$$
     Then, by duality,
     $$\big| \mathbb{E}\big[ \varphi(X(t_{N_k})) - \varphi(X_h^{N_k}) \big] \big| \le \big\| \Phi_h^{N_k} \big\|_{V}\, \big\| X(t_{N_k}) - X_h^{N_k} \big\|_{V^*}.$$
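The first identity is the fundamental theorem of calculus applied along the segment between the two (random) endpoints; written out for completeness:

$$\varphi(x) - \varphi(y) = \int_0^1 \varphi'\big( \rho x + (1-\rho) y \big)(x - y)\,\mathrm{d}\rho, \qquad x, y \in H.$$

Taking $x = X(t_{N_k})$, $y = X_h^{N_k}$ and expectations yields the $L^2(\Omega;H)$ inner product above, with $\Phi_h^{N_k}$ playing the role of the averaged derivative.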

  8. Road map to weak convergence
     In order to prove weak convergence, we
     1. determine a "nice" subspace $V$,
     2. prove $\| \Phi_h^{N_k} \|_V < \infty$.
     3. Then: convergence $\| X(t_{N_k}) - X_h^{N_k} \|_{V^*} \to 0$ implies weak convergence with the same order.
     Simplest choice: $V = L^2(\Omega; H)$ gives the well-known fact that strong convergence implies weak convergence.
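For this simplest choice $V = V^* = L^2(\Omega; H)$, the duality pairing is the inner product and the abstract estimate reduces to the Cauchy-Schwarz inequality (spelled out here):

$$\big| \mathbb{E}\big[ \varphi(X(t_{N_k})) - \varphi(X_h^{N_k}) \big] \big| \le \big\| \Phi_h^{N_k} \big\|_{L^2(\Omega;H)}\, \big\| X(t_{N_k}) - X_h^{N_k} \big\|_{L^2(\Omega;H)},$$

where the first factor is finite because $\varphi'$ grows at most polynomially and $X$, $X_h$ have bounded moments.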

  9. Road map to strong convergence
     The same steps also yield strong convergence:
     $$\mathbb{E}\big[ \| X(t_j) - X_h^j \|^2 \big] = \big\langle X(t_j) - X_h^j,\, X(t_j) - X_h^j \big\rangle_{L^2(\Omega;H)} \le \big\| X(t_j) - X_h^j \big\|_{V}\, \big\| X(t_j) - X_h^j \big\|_{V^*}.$$
     Thus, in order to prove strong convergence, we
     1. determine a "nice" subspace $V$,
     2. prove $\max_{j=1,\ldots,N_k} \| X(t_j) - X_h^j \|_V < \infty$.
     3. Then: convergence $\max_{j=1,\ldots,N_k} \| X(t_j) - X_h^j \|_{V^*} \to 0$ implies strong convergence with half the order.
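The "half the order" in step 3 is simply the square root taken at the end; with an illustrative rate $k^{2\gamma}$ in $V^*$ and a bounded $V$-norm:

$$\big\| X(t_j) - X_h^j \big\|_{L^2(\Omega;H)}^2 \le \big\| X(t_j) - X_h^j \big\|_{V}\, \big\| X(t_j) - X_h^j \big\|_{V^*} \le C\, k^{2\gamma} \quad \Longrightarrow \quad \big\| X(t_j) - X_h^j \big\|_{L^2(\Omega;H)} \le \sqrt{C}\, k^{\gamma}.$$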

  10. Weak convergence: main result
     Theorem (Weak convergence). Let $A$ be self-adjoint and positive definite with a compact inverse. Let $F \in C_b^2(H)$, $x_0 \in \operatorname{dom}(A^{1/2})$, and let $Q$ be of finite trace. Let $(V_h)_{h \in (0,1]}$ be suitable approximation spaces. Then for every $\varphi \in C_p^2(H, \mathbb{R})$ and $\gamma \in (0,1)$ there exists $C$ such that
     $$\big| \mathbb{E}\big[ \varphi(X(t_{N_k})) - \varphi(X_h^{N_k}) \big] \big| \le C \big( k^{\gamma} + h^{2\gamma} \big) \qquad \text{for all } h, k \in (0,1].$$
     ◮ The assumptions can be relaxed for white noise.
     ◮ The assumptions on $F$ can be relaxed to allow for more interesting Nemytskii operators.
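One way to see a temporal weak rate $k^\gamma$ with $\gamma$ close to 1 in practice is to run the scheme on a scalar analogue where all moments are available in closed form. The following sketch is not from the talk: it uses the Ornstein-Uhlenbeck equation $\mathrm{d}X = -\lambda X\,\mathrm{d}t + \mathrm{d}W$, the implicit Euler scheme $X_j = (X_{j-1} + \Delta W_j)/(1 + \lambda k)$, and $\varphi(x) = x^2$, for which both $\mathbb{E}[\varphi(X(T))]$ and $\mathbb{E}[\varphi(X_N)]$ are exact, so no Monte Carlo error intrudes. All parameter values are illustrative.

    import numpy as np

    lam, x0, T = 5.0, 1.0, 1.0  # illustrative parameters

    def weak_error(N):
        """Exact weak error |E[X(T)^2] - E[X_N^2]| for implicit Euler with N steps."""
        k = T / N
        r = 1.0 / (1.0 + lam * k)  # one-step multiplier of the scheme
        # exact law: X(T) ~ N(x0*exp(-lam*T), (1 - exp(-2*lam*T)) / (2*lam))
        exact = (x0 * np.exp(-lam * T)) ** 2 + (1.0 - np.exp(-2.0 * lam * T)) / (2.0 * lam)
        # scheme: X_N = r^N x0 + sum_{j=1}^N r^{N-j+1} dW_j with Var(dW_j) = k, hence
        # E[X_N^2] = (r^N x0)^2 + k r^2 (1 - r^{2N}) / (1 - r^2)
        approx = (r ** N * x0) ** 2 + k * r ** 2 * (1.0 - r ** (2 * N)) / (1.0 - r ** 2)
        return abs(exact - approx)

    for N in (10, 20, 40, 80, 160):
        e, e2 = weak_error(N), weak_error(2 * N)
        print(f"N = {N:4d}   weak error = {e:.3e}   observed order = {np.log2(e / e2):.2f}")

The observed order tends to 1, matching $k^\gamma$ as $\gamma \to 1$. In the infinite-dimensional setting the attainable rate is additionally limited by the spatial regularity of the noise, which is what the restriction $\gamma < 1$ in the theorem reflects.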

  11. Sketch of proof: stochastic convolution
     For simplicity we consider the equation (SPDE) with $F = 0$, $x_0 = 0$:
     $$\mathrm{d}X(t) + A X(t)\,\mathrm{d}t = \mathrm{d}W(t), \quad t \in (0,T], \qquad X(0) = 0. \tag{SPDE2}$$
     Then
     $$X(t) = W_A(t) = \int_0^t S(t-\sigma)\,\mathrm{d}W(\sigma)$$
     is the stochastic convolution. Its numerical approximation is
     $$X_h^n = \sum_{j=0}^{n-1} \int_{t_j}^{t_{j+1}} (I + kA_h)^{-(n-j)} P_h\,\mathrm{d}W(\sigma), \qquad n \in \{1, \ldots, N_k\}.$$
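The closed formula follows by unrolling the recursion of the numerical scheme (with $F = 0$ and $X_h^0 = 0$); one induction step reads

$$X_h^n = (I + kA_h)^{-1} \big( X_h^{n-1} + P_h \Delta W^n \big), \qquad \Delta W^n = \int_{t_{n-1}}^{t_n} \mathrm{d}W(\sigma),$$

so each increment $\Delta W^{j+1}$ picks up one factor $(I + kA_h)^{-1}$ per remaining step. The rational function $(1 + k\lambda)^{-m}$ thus replaces the semigroup exponential $e^{-\lambda t}$: $X_h^n$ is a discrete stochastic convolution.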

  12. Sketch of proof: $V = L^2(\Omega; H)$
     The Itô isometry gives
     $$\big\| X(T) - X_{h,k}^{N_k} \big\|_{L^2(\Omega;H)}^2 = \int_0^T \big\| E_{h,k}(T-\sigma)\, Q^{1/2} \big\|_{L_2(H)}^2\,\mathrm{d}\sigma \le C \operatorname{Tr}(Q) \int_0^T (T-\sigma)^{-\theta}\,\mathrm{d}\sigma\, \big( h^{\theta} + k^{\theta/2} \big)^2,$$
     where $E_{h,k}(t) = S(t) - (I + kA_h)^{-(j+1)} P_h$ for $t \in [t_j, t_{j+1})$, since
     $$\big\| E_{h,k}(t) \big\|_{L(H)} \le C\, t^{-\theta/2} \big( h^{\theta} + k^{\theta/2} \big).$$
     Therefore we must take $\theta \in [0,1)$, so that the singular integral is finite.
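The restriction on $\theta$ comes from the singular deterministic integral, which evaluates explicitly:

$$\int_0^T (T - \sigma)^{-\theta}\,\mathrm{d}\sigma = \frac{T^{1-\theta}}{1-\theta} < \infty \quad \text{if and only if} \quad \theta < 1.$$

Hence the strong error in $L^2(\Omega;H)$ is of order $O(h^{\theta} + k^{\theta/2})$ for every $\theta \in [0,1)$, i.e., arbitrarily close to order 1 in space and order 1/2 in time for trace-class noise.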

  13. Sobolev-Malliavin spaces
     For $p \in [2,\infty)$ let $D^{1,p}(H)$ be the space of all $H$-valued random variables $Z \colon \Omega \to H$ such that
     $$\| Z \|_{D^{1,p}(H)} = \Big( \| Z \|_{L^p(\Omega,H)}^p + \| DZ \|_{L^p(\Omega, L^2([0,T], L_2(H_0,H)))}^p \Big)^{1/p} < \infty,$$
     where $DZ$ denotes the Malliavin derivative of $Z$.
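As a basic example (standard Malliavin calculus, added here for orientation, not on the slide): the Malliavin derivative of a Wiener integral with deterministic operator-valued integrand is the integrand itself,

$$D_s \int_0^T \Phi(\sigma)\,\mathrm{d}W(\sigma) = \Phi(s), \qquad s \in [0,T],$$

so, in particular, $D_s W(t) = \mathbf{1}_{(0,t]}(s)\,\iota$ with $\iota \colon H_0 \hookrightarrow H$ the embedding, and such random variables belong to $D^{1,p}(H)$ for every $p$. In the weak error analysis these spaces serve as candidates for the "nice" space $V$ in the Gelfand triple.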
