Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques


  1. Institut für Numerische Mathematik und Optimierung. Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques. Oliver Ernst. Computational Methods with Applications, Harrachov, CR, August 19–25, 2007.

  2. Collaborators • Catherine Powell, David Silvester (University of Manchester, School of Mathematics, Manchester, UK) • Ingolf Busch, Michael Eiermann, Elisabeth Ullmann (TU Bergakademie Freiberg, Freiberg, Germany). Support: DAAD/British Council.

  3. Outline • Random fields and the Karhunen-Loève expansion • Discretization of the covariance operator • Solution of the discrete eigenvalue problem • A numerical example

  4. Random Fields

  5. Formally
  • A random field is a stochastic process indexed by a spatial coordinate $x \in D \subset \mathbb{R}^d$, $D$ bounded, i.e., a measurable function $a : D \times \Omega \to \mathbb{R}$, where $(\Omega, \mathcal{A}, P)$ is a given probability space.
  • For $\omega \in \Omega$ fixed, $a(\cdot, \omega)$ is a realization of the random field, i.e., a function $D \to \mathbb{R}$.
  • For $x \in D$ fixed, $a(x, \cdot)$ is a random variable (RV) w.r.t. $(\Omega, \mathcal{A}, P)$.
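
To make these definitions concrete, here is a small NumPy sketch (not part of the original slides) of a toy second-order random field on D = (0, 1): fixing a draw of the random input gives a realization, i.e. an ordinary function of x, while fixing x and redrawing gives samples of a random variable. The helper name toy_field and the particular sine construction are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def toy_field(x, xi):
        """Toy random field a(x, omega) = sum_m xi_m sin(m pi x) / m on D = (0, 1);
        the vector xi plays the role of the random input omega."""
        m = np.arange(1, xi.size + 1)
        return np.sin(np.pi * np.outer(x, m)) @ (xi / m)

    x = np.linspace(0.0, 1.0, 201)

    # a(., omega): one realization, i.e. an ordinary function on D
    realization = toy_field(x, rng.standard_normal(20))

    # a(x0, .): a random variable; sample it by redrawing the random input
    x0 = np.array([0.5])
    samples = np.array([toy_field(x0, rng.standard_normal(20))[0] for _ in range(2000)])
    print(samples.mean(), samples.std())   # mean and standard deviation of a(x0, .)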

  6. Notation
  • $\langle \xi \rangle := \int_\Omega \xi(\omega)\, dP(\omega)$: expected value of a RV $\xi : \Omega \to \mathbb{R}$
  • $\bar a(x) := \langle a(x, \cdot) \rangle$: mean of the RF $a$ at $x \in D$
  • $\mathrm{Cov}_a(x, y) := \langle (a(x, \cdot) - \bar a(x))(a(y, \cdot) - \bar a(y)) \rangle$: covariance of the RF $a$ at $x, y \in D$
  • $\mathrm{Var}_a(x) := \mathrm{Cov}_a(x, x)$: variance of the RF $a$ at $x \in D$
  • $\sigma_a(x) := \sqrt{\mathrm{Var}_a(x)}$: standard deviation of the RF $a$ at $x \in D$
  • $L^2_P(\Omega) := \{ \xi : \langle \xi^2 \rangle < \infty \}$: RVs of second order
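
The quantities above can be estimated from realizations by plain Monte Carlo averaging. A minimal sketch (reusing the toy_field helper assumed in the previous snippet; the sample count is arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 101)
    n_samples = 5000

    # One realization per row: A[s, i] = a(x_i, omega_s)
    A = np.array([toy_field(x, rng.standard_normal(20)) for _ in range(n_samples)])

    mean_a = A.mean(axis=0)                    # estimate of the mean of a at the grid points
    A0 = A - mean_a                            # centered samples
    cov_a = (A0.T @ A0) / (n_samples - 1)      # estimate of Cov_a(x_i, x_j)
    var_a = np.diag(cov_a)                     # Var_a(x) = Cov_a(x, x)
    sigma_a = np.sqrt(var_a)                   # standard deviation sigma_a(x)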

  7. A RF is of second order if $a(x, \cdot) \in L^2_P(\Omega)$ for all $x \in D$.
  Theorem (Karhunen-Loève expansion). Given a second-order RF $a = a(x, \omega)$ with continuous covariance function $c(x, y) := \mathrm{Cov}_a(x, y)$, denote by $\{(\lambda_m, a_m(x))\}$ the eigenpairs of the (compact) integral operator
  $C : L^2(D) \to L^2(D), \qquad (Cu)(x) = \int_D c(x, y)\, u(y)\, dy.$
  Then there exists a sequence $\{\xi_m\}_{m \in \mathbb{N}}$ of random variables with $\langle \xi_m \rangle = 0$ for all $m$ and $\langle \xi_m \xi_n \rangle = \delta_{m,n}$ for all $m, n$, such that the Karhunen-Loève (KL) expansion
  $a(x, \omega) = \bar a(x) + \sum_{m=1}^{\infty} \sqrt{\lambda_m}\, a_m(x)\, \xi_m(\omega) \qquad \text{(KL)}$
  converges uniformly on $D$ and in $L^2_P$.

  8. Note:
  • Covariance functions $c(x, y)$ are continuous on $D \times D$ as well as symmetric and of positive type.
  • Therefore covariance operators $C$ are compact, hence their spectra $\Lambda(C)$ consist of countably many eigenvalues accumulating at most at zero.
  • Covariance operators are selfadjoint and positive semidefinite.
  Analogy: singular value expansion of the integral operator
  $A : L^2(D) \to L^2_P, \quad f(x) \mapsto (Af)(\omega) := \int_D f(x)\, a(x, \omega)\, dx,$
  $A^* : L^2_P \to L^2(D), \quad \xi(\omega) \mapsto (A^*\xi)(x) = \int_\Omega \xi(\omega)\, a(x, \omega)\, dP(\omega),$
  with $C = A^* A$.

  9. Common Covariance Models
  $\mathrm{Cov}_a(x, y) = c(x, y) = c(\rho), \qquad \rho = \|x - y\|.$
  [Figure: plots of $c(\rho)$ for $\rho \in [-1, 1]$, one panel per model.]
  Bessel: $c(\rho) = \sigma^2\, \frac{\rho}{\ell}\, K_1\!\left(\frac{\rho}{\ell}\right)$, exponential: $c(\rho) = \sigma^2 e^{-\rho/\ell}$, Gaussian: $c(\rho) = \sigma^2 e^{-\rho^2/\ell^2}$.
  $\ell > 0$ is a measure of the "correlation length", here $\ell = 0.1, 1, 2$.
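
For later experiments it is convenient to have the three covariance models as functions of the separation rho. A sketch using SciPy's modified Bessel function K_1 (scipy.special.kv); the function names and parameter defaults are illustrative:

    import numpy as np
    from scipy.special import kv   # modified Bessel function of the second kind, K_nu

    def cov_exponential(rho, sigma=1.0, ell=1.0):
        return sigma**2 * np.exp(-np.abs(rho) / ell)

    def cov_gaussian(rho, sigma=1.0, ell=1.0):
        return sigma**2 * np.exp(-(rho / ell)**2)

    def cov_bessel(rho, sigma=1.0, ell=1.0):
        # c(rho) = sigma^2 (rho/ell) K_1(rho/ell); the limiting value at rho = 0 is sigma^2
        r = np.atleast_1d(np.abs(rho) / ell).astype(float)
        out = np.full(r.shape, sigma**2)
        nz = r > 0
        out[nz] = sigma**2 * r[nz] * kv(1, r[nz])
        return out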

  10. Variance
  For normalized eigenfunctions $a_m(x)$,
  $\mathrm{Var}_a(x) = c(x, x) = \sum_{m=1}^{\infty} \lambda_m\, a_m(x)^2,$
  $\int_D \mathrm{Var}_a(x)\, dx = \sum_{m=1}^{\infty} \lambda_m \underbrace{(a_m, a_m)_D}_{=1} = \operatorname{trace} C.$
  For constant variance (e.g., a stationary RF), $\mathrm{Var}_a \equiv \sigma^2 > 0$ and $\sum_m \lambda_m = |D|\, \sigma^2.$
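
The trace identity gives a handy sanity check for any discretization. A sketch using a simple midpoint (Nyström) approximation of C on D = (0, 1) with the exponential kernel from the previous snippet; grid size and parameters are arbitrary:

    import numpy as np

    sigma, ell = 1.0, 0.5
    n = 400
    x = (np.arange(n) + 0.5) / n          # midpoints of a uniform grid on D = (0, 1)
    w = 1.0 / n                           # quadrature weight |D| / n

    # Nystrom approximation: eigenvalues of the weighted kernel matrix w * c(x_i, x_j)
    C = cov_exponential(np.abs(x[:, None] - x[None, :]), sigma, ell) * w
    lam = np.linalg.eigvalsh(C)[::-1]     # sorted in descending order

    print(lam.sum())    # ~ trace C = |D| * sigma^2 = 1.0
    print(lam[:5])      # leading eigenvalue approximations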

  11. Truncated KL Expansion
  For computational purposes, the KL expansion is truncated after $M$ terms:
  $a^{(M)}(x, \omega) = \bar a(x) + \sum_{m=1}^{M} \sqrt{\lambda_m}\, a_m(x)\, \xi_m(\omega).$
  Truncation error:
  $\langle \| a - a^{(M)} \|_D^2 \rangle = \sum_{m=M+1}^{\infty} \lambda_m.$
  Choose $M$ such that a sufficient amount of the total variance of the RF is retained.
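
Numerically, the truncated expansion is built from the leading eigenpairs of the discretized covariance operator. A sketch that reuses the grid quantities C, x, w from the previous snippet, picks M from the retained-variance criterion, and draws realizations; taking the xi_m as independent standard Gaussians and the 95% threshold are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(2)

    # Eigenpairs of the discretized covariance operator (C, x, w as above)
    lam_all, V = np.linalg.eigh(C)
    lam_all, V = lam_all[::-1], V[:, ::-1]          # sort descending
    a_fun = V / np.sqrt(w)                          # L2(D)-normalized eigenfunctions on the grid

    # Choose M so that, say, 95% of the total variance is retained
    frac = np.cumsum(lam_all) / lam_all.sum()
    M = int(np.searchsorted(frac, 0.95) + 1)

    # a^(M)(x, omega) = mean + sum_{m<=M} sqrt(lam_m) a_m(x) xi_m(omega)
    mean_a = np.zeros_like(x)                       # zero-mean field for simplicity
    xi = rng.standard_normal((M, 10))               # 10 draws of (xi_1, ..., xi_M)
    fields = mean_a[:, None] + a_fun[:, :M] @ (np.sqrt(lam_all[:M])[:, None] * xi)

    print(M, "terms retain", frac[M - 1], "of the total variance")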

  12. Eigenvalue Decay
  Roughly: the smoother the kernel, the faster $\{\lambda_m\}_{m \in \mathbb{N}} \to 0$. More precisely, if $D \subset \mathbb{R}^d$ and the kernel function $c$ is
  piecewise $H^r$: $\lambda_m \le c_1\, m^{-r/d}$,
  piecewise smooth: $\lambda_m \le c_2\, m^{-r}$ for any $r > 0$,
  piecewise analytic: $\lambda_m \le c_3 \exp(-c_4\, m^{1/d})$,
  for suitable constants $c_1, c_2, c_3, c_4$.
  Note: piecewise smoothness of the kernel also leads to bounds on derivatives of the eigenfunctions $a_m$ in $L^\infty(D)$. Proven e.g. in [Schwab & Todor (2006)], [Todor (2006)].
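
These rates are easy to observe experimentally by comparing the discretized spectra of a smooth and a rougher kernel, e.g. Gaussian versus exponential (the latter is not differentiable at rho = 0 and its eigenvalues decay only algebraically, roughly like m^-2 in 1D). A sketch reusing the helpers above:

    import numpy as np

    n = 400
    x = (np.arange(n) + 0.5) / n
    w = 1.0 / n
    dist = np.abs(x[:, None] - x[None, :])

    lam_gauss = np.linalg.eigvalsh(cov_gaussian(dist, 1.0, 0.5) * w)[::-1]
    lam_exp   = np.linalg.eigvalsh(cov_exponential(dist, 1.0, 0.5) * w)[::-1]

    # The Gaussian (analytic) kernel decays essentially exponentially and reaches the
    # numerical noise floor within a few dozen modes; the exponential kernel does not.
    for m in (1, 5, 10, 20, 40):
        print(m, lam_gauss[m - 1], lam_exp[m - 1])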

  13. Galerkin Discretization
  • $\mathcal{T}_h$: admissible finite element triangulation of $D$.
  • Finite-dimensional subspace of piecewise polynomials $V_h = \{ \varphi : D \to \mathbb{R} : \varphi|_T \in P_k \ \forall T \in \mathcal{T}_h \} \subset L^2(D)$.
  • Discrete eigenvalue problem: find pairs $(\lambda_m^h, a_m^h)$ such that
  $(C a_m^h, \varphi) = \lambda_m^h (a_m^h, \varphi) \quad \forall \varphi \in V_h, \quad m = 1, 2, \ldots,$
  which corresponds to the generalized matrix eigenvalue problem
  $\mathbf{C} \mathbf{x} = \lambda \mathbf{M} \mathbf{x}, \quad [\mathbf{C}]_{i,j} = (C \varphi_j, \varphi_i), \quad [\mathbf{M}]_{i,j} = (\varphi_j, \varphi_i), \quad i, j = 1, \ldots, N = \dim V_h.$
  • $\mathbf{C}$ is large and dense; $\mathbf{M}$ can be made diagonal using a suitable basis.
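
A minimal 1D assembly sketch with piecewise-constant elements (k = 0), for which the mass matrix is diagonal; approximating the double element integrals by the midpoint rule and the chosen kernel and mesh are simplifying assumptions:

    import numpy as np

    # Uniform mesh of D = (0, 1); basis phi_i = indicator of element T_i
    N = 200
    h = 1.0 / N
    mid = (np.arange(N) + 0.5) * h          # element midpoints

    # Mass matrix: [M]_ij = (phi_j, phi_i) = h * delta_ij, i.e. diagonal
    M_diag = np.full(N, h)

    # Galerkin matrix: [C]_ij = int_{T_i} int_{T_j} c(x, y) dy dx ~ c(mid_i, mid_j) h^2
    C_mat = cov_exponential(np.abs(mid[:, None] - mid[None, :]), 1.0, 0.5) * h**2

    # With diagonal M, the generalized problem C x = lambda M x reduces to an ordinary
    # symmetric problem: (D^{-1/2} C D^{-1/2}) z = lambda z, x = D^{-1/2} z, D = diag(M)
    S = C_mat / np.sqrt(np.outer(M_diag, M_diag))
    lam, Z = np.linalg.eigh(S)
    lam, Z = lam[::-1], Z[:, ::-1]
    eigvecs = Z / np.sqrt(M_diag)[:, None]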

  14. Discretization Error
  The discrete operator is $C_h = P_h C P_h$, with $P_h$ the $L^2(D)$-orthogonal projection onto $V_h$; discrete eigenpairs $\{(\lambda_m^h, a_m^h)\}_{m=1}^{N}$.
  If the covariance operator is piecewise smooth, then for any $r > 0$
  $0 \le \lambda_m - \lambda_m^h \le K_r \left( h^{2(k+1)} \lambda_m^{1-r} + h^{4(k+1)} \lambda_m^{-2r} \right),$
  $\| (I - P_h) a_m \|_{L^2(D)} \le K_r\, \lambda_m^{-r}\, h^{k+1}.$
  [Todor (2006)]

  15. Solution of the Matrix Eigenvalue Problem
  • Only a fixed number of leading eigenpairs is required, which suggests a restarted Krylov subspace technique. We use the Thick-Restart Lanczos (TRL) method [Simon & Wu (2000)]. Idea: limit the dimension of the Krylov space to a fixed $m$, save some desired approximate eigenpairs, and generate a new Krylov space which contains these retained approximations (restart).
  • Krylov methods require an inexpensive matrix-vector product. We obtain this by replacing $\mathbf{C}$ with a hierarchical matrix approximation $\widetilde{\mathbf{C}}$, for which matrix-vector products can be computed in $O(N \log N)$ operations [Hackbusch (1999)].
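
Structurally, the computation is "fast matvec wrapped as an operator, fed to a restarted Lanczos solver". The slides use an H-matrix matvec together with TRL; the sketch below only illustrates that interface, substituting a dense product for the hierarchical one and SciPy's ARPACK-based eigsh for TRL, so it is a stand-in rather than the method described here:

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, eigsh

    def make_operator(C_mat):
        """Wrap a matrix-vector product as a LinearOperator; in the setting of the talk
        the dense product would be replaced by an O(N log N) H-matrix matvec."""
        N = C_mat.shape[0]
        return LinearOperator((N, N), matvec=lambda v: C_mat @ v, dtype=float)

    op = make_operator(S)                       # S: symmetrized Galerkin matrix from above
    lam, vecs = eigsh(op, k=20, which='LM')     # 20 leading eigenpairs, no shift-invert
    order = np.argsort(lam)[::-1]
    lam, vecs = lam[order], vecs[:, order]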

  16. Thick-Restart Lanczos Cycle
  (1) Given a Lanczos decomposition of the Krylov space $\mathcal{K}_m(A, v)$,
  $A Q_m = Q_m T_m + \beta_{m+1} q_{m+1} e_m^\top, \qquad Q_m = [q_1, \ldots, q_m], \quad Q_m^\top Q_m = I_m,$
  (2) compute the eigenpairs $T_m y_j = \vartheta_j y_j$, $j = 1, \ldots, m$,
  (3) select $k < m$ Ritz vectors to retain, $Y_k := [y_1, \ldots, y_k]$,
  (4) set $\widehat{Q}_k := Q_m Y_k$ and $\widehat{T}_k := Y_k^\top T_m Y_k$ to obtain
  $A \widehat{Q}_k = \widehat{Q}_k \widehat{T}_k + \beta_{m+1} \widehat{q}_{k+1} s^\top \quad \text{with} \quad \widehat{q}_{k+1} = q_{m+1} \ \text{and} \ s := Y_k^\top e_m,$
  (5) extend $\mathrm{span}\{\widehat{q}_1, \ldots, \widehat{q}_{k+1}\}$ to a Krylov space of order $m$ with the Lanczos-type decomposition
  $A \widehat{Q}_m = \widehat{Q}_m \widehat{T}_m + \widehat{\beta}_{m+1} \widehat{q}_{m+1} e_m^\top.$

  17. After a restart cycle, the projection $\widehat{T}_m$ of $A$ onto the new Krylov space in
  $A \widehat{Q}_m = \widehat{Q}_m \widehat{T}_m + \widehat{\beta}_{m+1} \widehat{q}_{m+1} e_m^\top$
  has the form
  $\widehat{T}_m = \begin{bmatrix} \widehat{T}_k & \beta_{m+1} s & & & \\ \beta_{m+1} s^\top & \alpha_{k+1} & \widehat{\beta}_{k+1} & & \\ & \widehat{\beta}_{k+1} & \ddots & \ddots & \\ & & \ddots & \ddots & \widehat{\beta}_m \\ & & & \widehat{\beta}_m & \alpha_m \end{bmatrix}.$
  Note: the leading $k \times k$ block is diagonal.

  18. Remarks:
  • Mathematically equivalent to the implicitly restarted Lanczos method and other augmented Krylov techniques, but more efficient.
  • Takes advantage of symmetry (ARPACK uses full recurrences).
  • The projected matrix $\widehat{T}_k$ is readily available ($= \mathrm{diag}(\vartheta_1, \ldots, \vartheta_k)$).
  • Eigenvector residual norms follow from coordinate calculations (as in standard symmetric Lanczos).
  • Well-known reorthogonalization techniques can be incorporated.
  • For the covariance problem: no shift-invert techniques are required.
  • Note: an efficient matrix-vector product is needed.
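
For completeness, a compact NumPy sketch of the restart cycle on the preceding slides, using full reorthogonalization for robustness. The function name, the basis size m, the number of retained Ritz pairs, and the convergence test are illustrative choices; a production TRL code handles reorthogonalization, breakdown (beta near zero), and locking much more carefully:

    import numpy as np

    def thick_restart_lanczos(matvec, n, nev=10, m=40, max_cycles=50, tol=1e-10, seed=0):
        """Approximate the nev largest eigenpairs of a symmetric operator (sketch)."""
        rng = np.random.default_rng(seed)
        Q = np.zeros((n, m + 1))               # Lanczos / Ritz basis
        T = np.zeros((m, m))                   # projected matrix (arrowhead + tridiagonal)
        q = rng.standard_normal(n)
        Q[:, 0] = q / np.linalg.norm(q)
        k, s = 0, np.zeros(0)                  # number of retained Ritz pairs, coupling vector

        for _ in range(max_cycles):
            for j in range(k, m):              # extend the basis to dimension m
                w = matvec(Q[:, j])
                if j == k and k > 0:           # first step after a restart: arrowhead coupling
                    w -= Q[:, :k] @ s
                    T[:k, j] = s
                    T[j, :k] = s
                alpha = Q[:, j] @ w
                T[j, j] = alpha
                w -= alpha * Q[:, j]
                if j > k:
                    w -= T[j, j - 1] * Q[:, j - 1]
                w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
                beta = np.linalg.norm(w)                   # breakdown (beta ~ 0) not handled
                Q[:, j + 1] = w / beta
                if j + 1 < m:
                    T[j + 1, j] = T[j, j + 1] = beta

            theta, Y = np.linalg.eigh(T)                   # Rayleigh-Ritz step
            idx = np.argsort(theta)[::-1]
            keep = min(nev + 10, m - 2)                    # "thick" set of retained pairs
            theta, Y = theta[idx[:keep]], Y[:, idx[:keep]]
            res = beta * np.abs(Y[m - 1, :])               # residuals beta_{m+1} |e_m^T y_j|
            if np.all(res[:nev] <= tol * np.abs(theta[:nev]) + 1e-300):
                return theta[:nev], Q[:, :m] @ Y[:, :nev]

            Q[:, :keep] = Q[:, :m] @ Y                     # restart: compress to Ritz vectors
            Q[:, keep] = Q[:, m]                           # q_{m+1} becomes q_{k+1}
            s = beta * Y[m - 1, :]                         # coupling vector beta_{m+1} Y_k^T e_m
            k = keep
            T[:] = 0.0
            T[:k, :k] = np.diag(theta)                     # leading k x k block is diagonal

        raise RuntimeError("TRL did not converge")

    # Example: leading KL eigenvalues from the symmetrized Galerkin matrix S above
    # lam, vecs = thick_restart_lanczos(lambda v: S @ v, S.shape[0], nev=10)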
