1. Solving large scale eigenvalue problems
Lecture 11, May 9, 2018: Jacobi–Davidson algorithms
http://people.inf.ethz.ch/arbenz/ewp/
Peter Arbenz, Computer Science Department, ETH Zürich
E-mail: arbenz@inf.ethz.ch

2. Survey of today's lecture
◮ Jacobi–Davidson algorithms
  ◮ Basic ideas
  ◮ Davidson's subspace expansion
  ◮ Jacobi's orthogonal component correction
  ◮ Correction equation
  ◮ JDQR, JDSYM, JDQZ
◮ Nonlinear Jacobi–Davidson

3. Jacobi–Davidson algorithm
◮ The Lanczos and Arnoldi methods are effective for computing extremal eigenvalues.
◮ Combined with the shift-and-invert spectral transformation, Lanczos and Arnoldi efficiently compute interior eigenvalues close to the shift σ. Linear systems of the form $(A - \sigma I)x = y$, or $(A - \sigma M)x = y$, have to be solved in each iteration step (see the sketch below).
◮ These systems have to be solved accurately, since otherwise the Lanczos/Arnoldi relation no longer holds. In most cases the matrix $A - \sigma I$ (or $A - \sigma M$) is LU or Cholesky factored.
◮ The Jacobi–Davidson (JD) algorithm is particularly attractive if this factorization is not feasible.
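The factorization mentioned above is typically computed once and reused for every solve. A minimal SciPy sketch of such a shift-and-invert operator (the function name `shift_invert_operator` and the diagonal test matrix are our illustration, not from the lecture):

```python
# A hedged sketch of a shift-and-invert operator: factor A - sigma*I once
# with a sparse LU, then apply (A - sigma*I)^{-1} as a LinearOperator.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def shift_invert_operator(A, sigma):
    """Return a LinearOperator applying (A - sigma*I)^{-1} via sparse LU."""
    n = A.shape[0]
    lu = spla.splu((A - sigma * sp.identity(n, format="csc")).tocsc())
    return spla.LinearOperator((n, n), matvec=lu.solve, dtype=A.dtype)

# Usage: eigenvalues mu of the operator relate to eigenvalues of A by
# lambda = sigma + 1/mu; the largest |mu| correspond to lambda nearest sigma.
A = sp.diags(np.arange(1.0, 101.0), format="csc")  # test spectrum 1,...,100
op = shift_invert_operator(A, sigma=50.3)
mu, _ = spla.eigs(op, k=3)
print(np.sort(50.3 + 1.0 / mu.real))               # ≈ [49., 50., 51.]
```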

4. The Davidson algorithm
Let $v_1, \ldots, v_m$ be a set of orthonormal vectors spanning the search space $R(V_m)$ with $V_m = [v_1, \ldots, v_m]$.
Galerkin approach: we look for an m-vector s for which the Galerkin condition holds,
$$A V_m s - \vartheta V_m s \perp v_1, \ldots, v_m.$$
This leads to the (small) eigenvalue problem
$$V_m^* A V_m s = \vartheta V_m^* V_m s = \vartheta s$$
with solutions $(\vartheta_j^{(m)}, s_j^{(m)})$, $j = 1, \ldots, m$. In the sequel we omit the superscript m.
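This Rayleigh–Ritz extraction is a few lines of NumPy. A minimal sketch, assuming A is Hermitian (the helper name `rayleigh_ritz` is ours):

```python
# A minimal Rayleigh-Ritz sketch: project A onto span(V), solve the small
# m x m eigenvalue problem, and lift the eigenvectors back as Ritz vectors.
import numpy as np

def rayleigh_ritz(A, V):
    """Return Ritz pairs (theta_j, u_j = V s_j) of A w.r.t. span(V)."""
    M = V.conj().T @ A @ V        # projected matrix V* A V
    theta, S = np.linalg.eigh(M)  # assumes A Hermitian; np.linalg.eig otherwise
    return theta, V @ S

# usage on a random symmetric test matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100)); A = (A + A.T) / 2
V, _ = np.linalg.qr(rng.standard_normal((100, 5)))  # orthonormal basis
theta, U = rayleigh_ritz(A, V)
r = A @ U[:, 0] - theta[0] * U[:, 0]                # residual A u - theta u
```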

5. The Davidson algorithm (cont.)
Consider the Ritz pair $(\vartheta_j, u_j = V_m s_j)$ and the residual $r_j = A u_j - \vartheta_j u_j$.
Can we improve $(\vartheta_j, u_j)$ if $\|r_j\|$ is still large? We try to find a better approximate eigenpair by expanding the search space.
Davidson [1] suggested computing the vector t from
$$(D_A - \vartheta_j I)\, t = r_j, \qquad D_A = \operatorname{diag}(A).$$
The vector t is then orthogonalized against $v_1, \ldots, v_m$. The resulting vector, after normalization, is chosen as $v_{m+1}$, by which $R(V_m)$ is expanded, i.e., $V_{m+1} = [v_1, \ldots, v_m, v_{m+1}]$. (A sketch of this expansion step follows below.)
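A sketch of Davidson's expansion step under the same assumptions (the helper name `davidson_expand` and the guard `eps` are ours):

```python
# A sketch of Davidson's expansion step: solve the diagonal system
# (D_A - theta*I) t = r, orthogonalize t against the current basis V,
# normalize, and append it as the new column v_{m+1}.
import numpy as np

def davidson_expand(A, V, theta, r, eps=1e-12):
    """Expand V by the Davidson correction for Ritz value theta, residual r."""
    d = np.diag(A) - theta
    d[np.abs(d) < eps] = eps      # guard against division by (almost) zero
    t = r / d                     # t = (D_A - theta*I)^{-1} r
    t -= V @ (V.conj().T @ t)     # orthogonalize against v_1, ..., v_m
    return np.column_stack([V, t / np.linalg.norm(t)])
```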

6. The Davidson algorithm (cont.)
◮ The method turned out to be successful in finding dominant eigenvalues of (strongly) diagonally dominant matrices.
◮ The matrix $D_A - \vartheta_j I$ has therefore often been viewed as a preconditioner for the matrix $A - \vartheta_j I$.
◮ A number of investigations were made with more sophisticated preconditioners $M - \vartheta_j I$.
◮ They led to the conclusion that $M - \vartheta_j I$ should not be 'too close' to $A - \vartheta_j I$, which contradicts the notion of a preconditioner as an easily invertible (factorizable) approximation of $A - \vartheta_j I$.

7. The Jacobi orthogonal component correction
◮ Jacobi (1846) introduced Jacobi rotations to solve the symmetric eigenvalue problem. He also presented an approach to improve an approximate eigenpair with an iterative procedure.
◮ Sleijpen and van der Vorst [2] used Jacobi's idea to improve Davidson's subspace expansion.
◮ Let $u_j$ be an approximation to the eigenvector x with $Ax = \lambda x$.
◮ Jacobi proposed to correct $u_j$ by a vector t, $t \perp u_j$, such that
$$A(u_j + t) = \lambda (u_j + t), \qquad t \perp u_j. \tag{1}$$
Sleijpen & van der Vorst called this the Jacobi orthogonal component correction (JOCC).

8. The Jacobi orthogonal component correction (cont.)
As $t \perp u_j$, we may split equation (1) into the part parallel to $u_j$ and the part orthogonal to $u_j$. (Let $\|u_j\| = 1$; then $u_j u_j^*$ is an orthogonal projector.)
The part parallel to $u_j$ is
$$u_j u_j^* A (u_j + t) = \lambda\, u_j u_j^* (u_j + t),$$
which, using $u_j^* u_j = 1$, $u_j^* t = 0$, and $u_j^* A u_j = \vartheta_j$, simplifies to the scalar equation
$$\vartheta_j + u_j^* A t = \lambda.$$
Here, $\vartheta_j = \rho(u_j)$ is the Rayleigh quotient of $u_j$.

9. The Jacobi orthogonal component correction (cont.)
The part orthogonal to $u_j$ is
$$(I - u_j u_j^*)\, A (u_j + t) = \lambda (I - u_j u_j^*)(u_j + t),$$
which is equivalent to (move t to the left, $u_j$ to the right)
$$(I - u_j u_j^*)(A - \lambda I)\, t = (I - u_j u_j^*)(-A u_j + \lambda u_j) = -(I - u_j u_j^*) A u_j = -(A - \vartheta_j I) u_j =: -r_j.$$
As $(I - u_j u_j^*)\, t = t$, we can 'symmetrize' this equation:
$$(I - u_j u_j^*)(A - \lambda I)(I - u_j u_j^*)\, t = -r_j.$$
If A is symmetric then the matrix above is indeed symmetric.
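The symmetry claim is easy to verify numerically; a minimal sketch (the matrix size and the shift standing in for λ are arbitrary choices of ours):

```python
# A quick numerical check that P (A - shift*I) P with P = I - u u* is
# symmetric whenever A is.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 30)); A = (A + A.T) / 2
u = rng.standard_normal(30); u /= np.linalg.norm(u)
P = np.eye(30) - np.outer(u, u)   # orthogonal projector I - u u*
B = P @ (A - 0.7 * np.eye(30)) @ P
print(np.linalg.norm(B - B.T))    # ~1e-15: projected matrix is symmetric
```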

10. The Jacobi–Davidson correction equation
Unfortunately, we do not know λ! So we replace λ by $\vartheta_j$ to get the Jacobi–Davidson correction equation
$$(I - u_j u_j^*)(A - \vartheta_j I)(I - u_j u_j^*)\, t = -r_j = -(A - \vartheta_j I) u_j, \qquad t \perp u_j.$$
As $r_j \perp u_j$ (in fact $r_j \perp V_m$), this equation is consistent if $A - \vartheta_j I$ is nonsingular.
The correction equation is, in general, solved iteratively by the GMRES or the MINRES algorithm. Often, only little accuracy in the solution is required. Note that $R(V_m)$ is not a Krylov space!

11. The Jacobi–Davidson correction equation (cont.)
Once t is (approximately) known we set
$$u_{j+1} = u_j + t$$
and
$$\vartheta_{j+1} = \vartheta_j + u_j^* A t.$$
If A is symmetric we may set
$$\vartheta_{j+1} = \rho(u_{j+1}).$$

12. Solving the Jacobi–Davidson correction equation
The matrix $(I - u_j u_j^*)(A - \vartheta_j I)(I - u_j u_j^*)$ in the correction equation
$$(I - u_j u_j^*)(A - \vartheta_j I)(I - u_j u_j^*)\, t = -r_j, \qquad t \perp u_j,$$
is evidently singular. If $\vartheta_j \notin \sigma(A)$ then $A - \vartheta_j I$ is nonsingular. Since $r_j^* u_j = 0$,
$$r_j \in R\big((I - u_j u_j^*)(A - \vartheta_j I)(I - u_j u_j^*)\big).$$
So there is a solution t (in fact, many). Uniqueness is obtained through the constraint $t^* u_j = 0$.
How can we solve the correction equation?

13. Solving the Jacobi–Davidson correction equation (cont.)
How can we solve the correction equation? Since $(I - u_j u_j^*)\, t = t$, it can be written as
$$(I - u_j u_j^*)(A - \vartheta_j I)\, t = (A - \vartheta_j I)\, t - \alpha u_j = -r_j$$
with the scalar $\alpha = u_j^*(A - \vartheta_j I)\, t$. Assuming that $\vartheta_j \notin \sigma(A)$, we get
$$t = \alpha (A - \vartheta_j I)^{-1} u_j - (A - \vartheta_j I)^{-1} r_j.$$
The constraint $u_j^* t = 0$ allows us to determine the free variable α:
$$0 = \alpha\, u_j^*(A - \vartheta_j I)^{-1} u_j - u_j^*(A - \vartheta_j I)^{-1} r_j,$$
whence
$$\alpha = \frac{u_j^*(A - \vartheta_j I)^{-1} r_j}{u_j^*(A - \vartheta_j I)^{-1} u_j}.$$
(A numerical check of these formulas follows below.)
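A small numerical check of these formulas, and of the claim on the next slide that $u_j + t$ is a Rayleigh quotient iteration step (all variable names are ours):

```python
# Build t from alpha and the two solves, then verify t ⟂ u and that
# u + t is parallel to (A - theta*I)^{-1} u.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50)); A = (A + A.T) / 2
u = rng.standard_normal(50); u /= np.linalg.norm(u)
theta = u @ A @ u                  # Rayleigh quotient rho(u)
r = A @ u - theta * u              # residual

B = A - theta * np.eye(50)
z_u = np.linalg.solve(B, u)        # (A - theta*I)^{-1} u
z_r = np.linalg.solve(B, r)        # (A - theta*I)^{-1} r (equals u exactly)
alpha = (u @ z_r) / (u @ z_u)
t = alpha * z_u - z_r
print(abs(u @ t))                             # ~0: constraint t ⟂ u
print(np.linalg.norm((u + t) - alpha * z_u))  # ~0: u + t = alpha*(A - theta*I)^{-1} u
```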

14. Solving the Jacobi–Davidson correction equation (cont.)
So, the next approximate eigenvector is (up to normalization)
$$u_{j+1} = u_j + t = u_j + \alpha (A - \vartheta_j I)^{-1} u_j - \underbrace{(A - \vartheta_j I)^{-1} r_j}_{=\, u_j} = \alpha (A - \vartheta_j I)^{-1} u_j,$$
which is a step of Rayleigh quotient iteration! This implies a fast convergence rate for this algorithm:
◮ quadratic for general matrices and
◮ cubic in the Hermitian case.

15. Iterative solution of the correction equation
In general the correction equation
$$\tilde{A}\, t := (I - u_j u_j^*)(A - \vartheta_j I)(I - u_j u_j^*)\, t = -r_j, \qquad (I - u_j u_j^*)\, t = t,$$
is solved iteratively with a Krylov space solver like GMRES or MINRES (see the sketch below).
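A matrix-free sketch of this iterative solve with SciPy's GMRES (the helper name `solve_correction` is ours; recent SciPy versions call the tolerance argument `rtol`, older ones `tol`):

```python
# Apply the projected operator inside GMRES by projecting, applying
# A - theta*I, and projecting again. Low accuracy is often enough.
import numpy as np
import scipy.sparse.linalg as spla

def solve_correction(A, u, theta, r, tol=1e-2):
    """Approximately solve (I-uu*)(A-theta*I)(I-uu*) t = -r with t ⟂ u."""
    n = A.shape[0]

    def matvec(t):
        t = t - u * (u @ t)              # project onto the complement of u
        y = A @ t - theta * t            # apply A - theta*I
        return y - u * (u @ y)           # project again

    op = spla.LinearOperator((n, n), matvec=matvec, dtype=float)
    t, _ = spla.gmres(op, -r, rtol=tol)  # 'rtol' in recent SciPy ('tol' in older)
    return t - u * (u @ t)               # enforce the constraint t ⟂ u
```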

16. JD algorithm to compute the eigenvalue closest to τ
1: Let $A \in \mathbb{R}^{n \times n}$. Let t be an initial vector. Set $V_0 = [\,]$, $m = 1$.
2: loop
3:   $t := (I - V_{m-1} V_{m-1}^*)\, t$; $v_m := t/\|t\|$; $V_m := [V_{m-1}, v_m]$;
4:   $M := V_m^* A V_m$;
5:   Rayleigh–Ritz step: compute the eigenpair $(\vartheta, s)$ of M closest to τ:
6:     $M s = \vartheta s$, $\|s\| = 1$; $u := V_m s$; $r := A u - \vartheta u$;
7:   if $\|r\| <$ tol then
8:     return $(\tilde{\lambda} = \vartheta,\ \tilde{x} = u)$
9:   end if
10:  (Approximately) solve the correction equation for t:
11:    $(I - u u^*)(A - \vartheta I)(I - u u^*)\, t = -r$, $t \perp u$;
12:  $m := m + 1$;
13: end loop
(A runnable Python sketch of this loop follows below.)
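Assembling the pieces above, a compact runnable sketch of the whole loop for symmetric A (all helper names and parameter choices are ours, not the lecture's reference implementation):

```python
# A compact sketch of the JD loop above for symmetric A.
import numpy as np
import scipy.sparse.linalg as spla

def jacobi_davidson(A, tau, t, tol=1e-8, maxit=50):
    """Approximate the eigenpair of symmetric A closest to tau."""
    n = A.shape[0]
    V = np.zeros((n, 0))
    for _ in range(maxit):
        t = t - V @ (V.T @ t)                       # step 3: orthogonalize t
        V = np.column_stack([V, t / np.linalg.norm(t)])
        theta_all, S = np.linalg.eigh(V.T @ A @ V)  # steps 4-6: Rayleigh-Ritz
        k = np.argmin(np.abs(theta_all - tau))      # Ritz value closest to tau
        theta, u = theta_all[k], V @ S[:, k]
        r = A @ u - theta * u
        if np.linalg.norm(r) < tol:                 # steps 7-9: convergence test
            return theta, u
        def matvec(x, u=u, theta=theta):            # steps 10-11: correction eq.
            x = x - u * (u @ x)
            y = A @ x - theta * x
            return y - u * (u @ y)
        op = spla.LinearOperator((n, n), matvec=matvec, dtype=float)
        t, _ = spla.gmres(op, -r, rtol=1e-2)        # solved only approximately
    return theta, u

# usage on a random symmetric test matrix
rng = np.random.default_rng(2)
A = rng.standard_normal((200, 200)); A = (A + A.T) / 2
lam, x = jacobi_davidson(A, tau=0.5, t=rng.standard_normal(200))
```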
