Solving large scale eigenvalue problems


Lecture 3, March 7, 2018: Newton methods. Peter Arbenz, Computer Science Department, ETH Zürich. E-mail: arbenz@inf.ethz.ch. http://people.inf.ethz.ch/arbenz/ewp/


  1. Solving large scale eigenvalue problems. Lecture 3, March 7, 2018: Newton methods. Peter Arbenz, Computer Science Department, ETH Zürich. E-mail: arbenz@inf.ethz.ch. http://people.inf.ethz.ch/arbenz/ewp/

  2. Survey of today's lecture
     ◮ Linear and nonlinear eigenvalue problems
     ◮ Eigenvalues as zeros of the determinant function
     ◮ Hyman's method for Hessenberg matrices
     ◮ Algorithmic differentiation
     ◮ Newton iterations
     ◮ Successive linear approximations

  3. Linear and nonlinear eigenvalue problems
     ◮ Linear eigenvalue problems: Find values λ ∈ C such that A − λI is singular. Or equivalently: find values λ ∈ C for which there is a nonzero (nontrivial) x such that (A − λI)x = 0 ⇔ Ax = λx.

  4. Linear and nonlinear eigenvalue problems (cont.)
     ◮ Nonlinear eigenvalue problems: More generally, find λ ∈ C such that A(λ)x = 0, where A(λ) is a matrix whose elements depend on λ.
     Example: A(λ) = Σ_{k=0}^{d} λ^k A_k. For d = 1: A(λ) = A_0 − λA_1 with A_0 = A, A_1 = I, which recovers the linear problem.

  5. Linear and nonlinear eigenvalue problems (cont.)
     ◮ Matrix polynomials can be linearized. Example: Ax + λKx + λ²Mx = 0. We can generate equivalent eigenvalue problems that are linear but of twice the size: with y = λx we get

        [ A  O ] [ x ]       [ −K  −M ] [ x ]
        [ O  I ] [ y ]  = λ  [  I   O ] [ y ]

     or

        [ A  K ] [ x ]       [ O  −M ] [ x ]
        [ O  I ] [ y ]  = λ  [ I   O ] [ y ] .

     Many other linearizations exist. (Cf. the transformation of higher order ODEs to first order systems.)
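
     Not part of the slides: a minimal sketch, assuming NumPy/SciPy, of how the first linearization above could be set up and handed to a generalized eigensolver. The helper name linearize_qevp and the small test matrices are made up for illustration.

         import numpy as np
         from scipy.linalg import eig

         def linearize_qevp(A, K, M):
             """Build the first block linearization of (A + lam*K + lam^2*M) x = 0:
             [A O; O I] [x; y] = lam * [-K -M; I O] [x; y],  with y = lam*x."""
             n = A.shape[0]
             I, O = np.eye(n), np.zeros((n, n))
             return np.block([[A, O], [O, I]]), np.block([[-K, -M], [I, O]])

         # Made-up 2x2 data, only to exercise the construction.
         A = np.array([[2.0, 1.0], [1.0, 3.0]])
         K = 0.5 * np.eye(2)
         M = np.eye(2)
         lhs, rhs = linearize_qevp(A, K, M)
         lam, V = eig(lhs, rhs)   # generalized eigenvalues of the 2n x 2n linear pencil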

  6. Numerical example
     ◮ Example: The matrix

        A = [ −0.9880   1.8000  −0.8793  −0.5977  −0.7819 ]
            [ −1.9417  −0.5835  −0.1846  −0.7250   1.0422 ]
            [  0.6003  −0.0287  −0.5446  −2.0667  −0.3961 ]
            [  0.8222   1.4453   1.3369  −0.6069   0.8043 ]
            [ −0.4187  −0.2939   1.4814  −0.2119  −1.2771 ]

     has eigenvalues given approximately by λ1 = −2, λ2 = −1 + 2.5i, λ3 = −1 − 2.5i, λ4 = 2i, and λ5 = −2i.
     It is known that closed form formulas for the roots of a polynomial do not generally exist if the polynomial is of degree 5 or higher. Thus we cannot expect to be able to solve the eigenvalue problem in a finite procedure.
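
     As a quick check (not on the slide), the quoted spectrum can be reproduced with a dense eigensolver; a sketch assuming NumPy. Since the printed entries are rounded to four digits, the computed eigenvalues only match the quoted ones approximately.

         import numpy as np

         A = np.array([
             [-0.9880,  1.8000, -0.8793, -0.5977, -0.7819],
             [-1.9417, -0.5835, -0.1846, -0.7250,  1.0422],
             [ 0.6003, -0.0287, -0.5446, -2.0667, -0.3961],
             [ 0.8222,  1.4453,  1.3369, -0.6069,  0.8043],
             [-0.4187, -0.2939,  1.4814, -0.2119, -1.2771],
         ])
         print(np.linalg.eigvals(A))   # roughly -2, -1 +/- 2.5i, +/- 2i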

  7. Numerical example (cont.)
     [Figure: the five eigenvalues plotted in the complex plane C.]
     For real matrices, the complex eigenvalues come in conjugate pairs: if λ is an eigenvalue, then so is λ̄.

  8. Zeros of the determinant
     Find values λ ∈ C such that A − λI is singular. Equivalently: find values λ ∈ C such that

        det A(λ) = 0.   (1)

     Apply a zero finder to eq. (1). Questions:
     1. What zero finder?
     2. How to compute f(λ) = det A(λ)?
     3. How to compute f′(λ) = (d/dλ) det A(λ)?

  9. Gaussian elimination with partial pivoting (GEPP)
     Let the factorization P(λ)A(λ) = L(λ)U(λ) be obtained by GEPP, with P a permutation matrix, L a unit lower triangular matrix, and U an upper triangular matrix. Then

        det P(λ) · det A(λ) = det L(λ) · det U(λ),

     i.e.,

        ±1 · det A(λ) = 1 · ∏_{i=1}^{n} u_ii(λ).
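
     A minimal sketch (assuming SciPy's lu_factor, i.e. LAPACK's GEPP; the function name det_from_gepp is made up) of evaluating f(λ) = det A(λ) from this factorization:

         import numpy as np
         from scipy.linalg import lu_factor

         def det_from_gepp(M):
             """det M = +/- prod(u_ii): sign of the row permutation times the
             product of the diagonal of U in P*M = L*U."""
             lu, piv = lu_factor(M)                                 # U sits on and above the diagonal of lu
             swaps = np.count_nonzero(piv != np.arange(len(piv)))   # number of row interchanges
             return (-1) ** swaps * np.prod(np.diag(lu))

         # For the standard problem, A(lam) = A - lam*I:
         f = lambda A, lam: det_from_gepp(A - lam * np.eye(A.shape[0]))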

 10. Newton iteration
     We need the derivative f′(λ) of f(λ) = det A(λ). By the product rule,

        f′(λ) = ±1 · Σ_{i=1}^{n} u′_ii(λ) ∏_{j≠i} u_jj(λ)
              = ±1 · Σ_{i=1}^{n} ( u′_ii(λ) / u_ii(λ) ) ∏_{j=1}^{n} u_jj(λ)
              = ( Σ_{i=1}^{n} u′_ii(λ) / u_ii(λ) ) f(λ).

     How do we compute the u′_ii? One possibility: algorithmic differentiation.
     See: Arbenz & Gander: Solving Nonlinear Eigenvalue Problems by Algorithmic Differentiation. Computing 36, 205–215 (1986).
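
     A sketch of the resulting Newton iteration on f(λ) = det A(λ), given callables for f and f′ (e.g. GEPP plus algorithmic differentiation, or Hyman's recurrences further below); the function name and the stopping test are my own choices, not from the slides.

         def newton(f, fprime, lam0, tol=1e-12, maxit=50):
             """Newton's method: lam_{k+1} = lam_k - f(lam_k)/f'(lam_k)."""
             lam = lam0
             for _ in range(maxit):
                 step = f(lam) / fprime(lam)
                 lam -= step
                 if abs(step) <= tol * (1.0 + abs(lam)):
                     break
             return lam

     Note that the formula on this slide gives f′/f directly, so the Newton correction can also be formed as −f/f′ = −1 / Σ_i (u′_ii(λ)/u_ii(λ)) without ever forming the determinant itself, which helps against over- and underflow.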

 11. Algorithmic differentiation
     Example: the Horner scheme to evaluate the polynomial

        f(z) = Σ_{i=0}^{n} c_i z^i,   i.e.   p_0(z) = c_0 + z(c_1 + z(c_2 + · · · + z(c_n))),

     by the recurrence

        p_n := c_n,
        p_i := z p_{i+1} + c_i,   i = n−1, n−2, . . . , 0,
        f(z) := p_0.

     Consider the p_i as functions (polynomials) in z.
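
     The Horner recurrence written out as code (a straightforward sketch, not from the slides, with coefficients stored as c = [c_0, ..., c_n]):

         def horner(c, z):
             """Evaluate f(z) = c[0] + c[1]*z + ... + c[n]*z**n by Horner's scheme."""
             p = c[-1]                      # p_n = c_n
             for ci in reversed(c[:-1]):    # i = n-1, ..., 0
                 p = z * p + ci             # p_i = z*p_{i+1} + c_i
             return p                       # f(z) = p_0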

 12. Algorithmic differentiation (cont.)
     Differentiating each assignment with respect to z yields

        dp_n := 0,                       p_n := c_n,
        dp_i := p_{i+1} + z dp_{i+1},    p_i := z p_{i+1} + c_i,   i = n−1, n−2, . . . , 0,
        f′(z) := dp_0,                   f(z) := p_0.

     We can proceed in a similar fashion for computing det A(λ): provided the derivatives a′_ij(λ) are available, we differentiate each single assignment in the algorithm of Gaussian elimination.
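
     The same loop with the derivative carried along, as the slide's recurrence prescribes (again a sketch; note that dp must be updated before p so that the old p_{i+1} is used):

         def horner_ad(c, z):
             """Algorithmic differentiation of Horner: returns (f(z), f'(z))."""
             p, dp = c[-1], 0.0             # p_n = c_n, dp_n = 0
             for ci in reversed(c[:-1]):    # i = n-1, ..., 0
                 dp = p + z * dp            # dp_i = p_{i+1} + z*dp_{i+1}
                 p = z * p + ci             # p_i  = z*p_{i+1} + c_i
             return p, dp                   # f(z) = p_0, f'(z) = dp_0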

 13. Discussion
     We restrict ourselves to the standard eigenvalue problem Ax = λx, i.e., A(λ) = A − λI. Then A′(λ) = −I.
     In the Newton method we have to compute the determinant for possibly many values λ. Computing the determinant costs 2/3 n³ flops (floating point operations). Can we do better?
     Idea: transform A by a similarity transformation to Hessenberg form.

 14. Hessenberg matrices
     Definition: A matrix H is a Hessenberg matrix if its elements below the first subdiagonal are zero, h_ij = 0 for i > j + 1.
     Any matrix A can be transformed into a Hessenberg matrix by a sequence of elementary Householder transformations; for details see the QR algorithm.
     Let S*AS = H, where S is unitary. Then Ax = λx ⇔ Hy = λy, x = Sy.
     We assume that H is unreduced, i.e., h_{i+1,i} ≠ 0 for all i.
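
     Such a reduction can be obtained, e.g., with SciPy (a sketch; the random test matrix is only for illustration). SciPy returns A = Q H Q*, so Q plays the role of S on the slide.

         import numpy as np
         from scipy.linalg import hessenberg

         A = np.random.rand(6, 6)                  # any test matrix
         H, Q = hessenberg(A, calc_q=True)         # A = Q @ H @ Q*, Q unitary, H = Q* A Q
         assert np.allclose(Q @ H @ Q.conj().T, A)
         assert np.allclose(np.tril(H, -2), 0.0)   # zeros below the first subdiagonal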

 15. Hessenberg matrices (cont.)
     Let λ be an eigenvalue of H and

        (H − λI)x = 0,   (2)

     i.e., x is an eigenvector of H associated with the eigenvalue λ. Then x_n ≠ 0 (proof by contradiction). W.l.o.g. we can set x_n = 1.
     If λ is an eigenvalue, then there are x_i, 1 ≤ i < n, such that (shown for n = 4)

        [ h11−λ   h12     h13     h14   ] [ x1 ]   [ 0 ]
        [ h21     h22−λ   h23     h24   ] [ x2 ]   [ 0 ]
        [ 0       h32     h33−λ   h34   ] [ x3 ] = [ 0 ]
        [ 0       0       h43     h44−λ ] [ 1  ]   [ 0 ]

 16. Hessenberg matrices (cont.)
     If λ is not an eigenvalue, then we determine the x_i such that

        [ h11−λ   h12     h13     h14   ] [ x1 ]   [ ∗ ]
        [ h21     h22−λ   h23     h24   ] [ x2 ]   [ 0 ]
        [ 0       h32     h33−λ   h34   ] [ x3 ] = [ 0 ]    (∗)
        [ 0       0       h43     h44−λ ] [ 1  ]   [ 0 ]

     With x_n = 1, the n−1 numbers x_{n−1}, x_{n−2}, . . . , x_1 are determined from equations n down to 2 of (∗):

        x_i = −( (h_{i+1,i+1} − λ) x_{i+1} + h_{i+1,i+2} x_{i+2} + · · · + h_{i+1,n} x_n ) / h_{i+1,i}.

     The first equation then gives

        (h_{1,1} − λ) x_1 + h_{1,2} x_2 + · · · + h_{1,n} x_n = c · f(λ).   (3)

 17. Hessenberg matrices (cont.)
     We can consider the x_i as functions of λ; in fact, x_i ∈ P_{n−i}. Therefore we can compute the derivatives x′_i by algorithmic differentiation and so obtain f′(λ). For i = n−1, . . . , 1 we have

        x′_i = −( −x_{i+1} + (h_{i+1,i+1} − λ) x′_{i+1} + h_{i+1,i+2} x′_{i+2} + · · · + h_{i+1,n−1} x′_{n−1} ) / h_{i+1,i}.

     Finally,

        c · f′(λ) = −x_1 + (h_{1,1} − λ) x′_1 + h_{1,2} x′_2 + · · · + h_{1,n−1} x′_{n−1}.
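
     Putting the last two slides together, a sketch of Hyman's recurrences in code (assuming NumPy; 0-based indexing, and the λ-independent factor c is left in since it cancels in the Newton quotient f/f′):

         import numpy as np

         def hyman(H, lam):
             """Return (c*f(lam), c*f'(lam)) for f(lam) = det(H - lam*I),
             H an unreduced upper Hessenberg matrix."""
             n = H.shape[0]
             x = np.zeros(n, dtype=complex)
             dx = np.zeros(n, dtype=complex)
             x[-1] = 1.0                                  # x_n = 1, x_n' = 0
             for i in range(n - 2, -1, -1):               # i = n-1, ..., 1 in slide numbering
                 row = H[i + 1, i + 1:]                   # h_{i+1,i+1}, ..., h_{i+1,n}
                 s  = (row[0] - lam) * x[i + 1] + row[1:] @ x[i + 2:]
                 ds = -x[i + 1] + (row[0] - lam) * dx[i + 1] + row[1:] @ dx[i + 2:]
                 x[i]  = -s  / H[i + 1, i]
                 dx[i] = -ds / H[i + 1, i]
             cf  = (H[0, 0] - lam) * x[0] + H[0, 1:] @ x[1:]              # eq. (3)
             cdf = -x[0] + (H[0, 0] - lam) * dx[0] + H[0, 1:] @ dx[1:]
             return cf, cdf

     Each evaluation costs O(n²) operations instead of the 2/3 n³ flops of a fresh factorization, which is the payoff of the Hessenberg reduction; the pair (c·f, c·f′) plugs directly into the Newton sketch above since the constant c cancels in the quotient.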

 18. Hessenberg matrices (matrix form)
     In matrix form, (∗) reads

        (H − λI) [ x(λ) ]   [ h(λ)ᵀ  h_1n ] [ x(λ) ]   [ p(λ) ]
                 [  1   ] = [ R(λ)   k(λ) ] [  1   ] = [  0   ]

     where h(λ)ᵀ = (h_11 − λ, h_12, . . . , h_{1,n−1}) is the first row of H − λI without its last entry, and R(λ) is upper triangular with the nonzero entries h_{i+1,i} on its diagonal, hence nonsingular.
     Computing p:

        R(λ) x(λ) + k(λ) = 0  ⟹  x(λ) = −R(λ)⁻¹ k(λ),
        p(λ) = h(λ)ᵀ x(λ) + h_1n.
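
     The block form suggests an equivalent implementation via a triangular solve (a sketch assuming SciPy; the function name is made up):

         import numpy as np
         from scipy.linalg import solve_triangular

         def hyman_matrix_form(H, lam):
             """p(lam) = h(lam)^T x(lam) + h_{1n}, where R(lam) x(lam) = -k(lam)."""
             n = H.shape[0]
             M = H - lam * np.eye(n)
             R, k = M[1:, :n - 1], M[1:, n - 1]        # rows 2..n of M, last column split off
             x = solve_triangular(R, -k, lower=False)  # R(lam) upper triangular, diag = subdiagonal of H
             return M[0, :n - 1] @ x + M[0, n - 1]     # first row of M applied to [x; 1]

     Up to rounding, this returns the same value c·f(λ) that the recurrence sketch above computes.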
