6. The Symmetric Eigenvalue Problem: A 'must' for engineers




  1. 6. The Symmetric Eigenvalue Problem: A 'must' for engineers . . .
     Outline: Motivation, Condition, Vector Iteration, QR Iteration, Reduction Algorithms
     Numerical Programming I (for CSE), Hans-Joachim Bungartz, page 1 of 28

  2. 6.1. Motivation
     6.1.1. An Example from Physics
     Consider the following system, consisting of two bodies B1, B2 with mass m, connected by springs with spring constant k. Let x1(t), x2(t) denote the displacements of the bodies B1, B2 at time t.
     [Figure: wall - spring - B1 - spring - B2 - spring - wall; all three springs have constant k, both bodies mass m; displacements x1, x2.]
     From Newton's law (F = m·ẍ) and Hooke's law (F = -k·x), we obtain the following linear second-order differential equations with constant coefficients:
         m·ẍ1 = -k·x1 + k·(x2 - x1)
         m·ẍ2 = -k·(x2 - x1) - k·x2

  3. • From calculus, we know that we can use the following complex ansatz to find a (non-trivial) solution of this differential equation:
           xj(t) = cj·e^(iωt),   cj, ω ∈ R,   j = 1, 2.
     • Since ẍj(t) = -cj·ω²·e^(iωt), we have:
           -m·c1·ω² = -k·c1 + k·(c2 - c1) = k·(-2c1 + c2)
           -m·c2·ω² = -k·(c2 - c1) - k·c2 = k·(c1 - 2c2)
     • We substitute λ := -m·ω²/k:
           λ·c1 = -2c1 + c2
           λ·c2 = c1 - 2c2
       Or, with c := (c1, c2)^T,
           ( -2   1 )
           (  1  -2 ) c = λc
       We have an instance of the symmetric eigenvalue problem!
     • For given initial values, the solution is unique.
     • We can obtain similar instances of the symmetric eigenvalue problem for systems with a higher number of bodies.
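The 2×2 eigenvalue problem above can be checked numerically. A minimal sketch using NumPy; the unit values m = 1 and k = 1 are assumptions for illustration:

```python
import numpy as np

# Coefficient matrix from lambda*c = ((-2, 1), (1, -2))*c
A = np.array([[-2.0, 1.0],
              [1.0, -2.0]])

# eigh exploits symmetry; eigenvalues are returned in ascending order
eigenvalues, eigenvectors = np.linalg.eigh(A)
print(eigenvalues)                # [-3. -1.]

# Recover the angular frequencies omega from lambda = -m*omega^2/k
m, k = 1.0, 1.0                   # assumed unit mass and spring constant
omega = np.sqrt(-eigenvalues * k / m)
print(omega)                      # [sqrt(3), 1]
```

The two eigenvalues -3 and -1 correspond to the two fundamental oscillation modes of the spring system.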

  4. 6.1.2. Eigenvalues in Numerics
     • A common problem in numerics is to solve systems of linear equations of the form Ax = b with A ∈ R^(n×n) symmetric, b ∈ R^n.
     • An important question is the condition of this problem, i.e. to what extent an error in b affects the correct solution x* (cf. chapter 5).
     • The maximum ratio of the relative error in the solution x* to the relative error in b (measured in the Euclidean norm) is the condition number κ(A) of solving Ax = b. It can be shown that, for symmetric A,
           κ(A) = |λmax(A)| / |λmin(A)|.
       That means the condition depends directly on the eigenvalues of A!
     • This number plays an important role in the iterative solution of linear equation systems (cf. chapter 7), for example:
       – Assume we have an approximate solution x̂.
       – This means we have exactly solved the system Ax = b̂ with b̂ := A·x̂.
       – However, if κ(A) is large, then even if b̂ - b is small, our solution may be far away from the solution of Ax = b!
     • Finally, the convergence speed of common iterative solvers is also greatly affected by the eigenvalues of A.
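For a concrete symmetric matrix, the eigenvalue formula for κ(A) can be compared against NumPy's built-in condition number. The example matrix below is an illustrative choice, not from the slides:

```python
import numpy as np

# Symmetric example matrix with eigenvalues 1 and 3
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam = np.linalg.eigvalsh(A)       # eigenvalues, ascending order
kappa_eig = np.abs(lam).max() / np.abs(lam).min()

# np.linalg.cond with p=2 computes the spectral condition number, which
# for symmetric A coincides with |lambda_max| / |lambda_min|
kappa_norm = np.linalg.cond(A, 2)
print(kappa_eig, kappa_norm)      # both 3.0
```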

  5. 6.2. Condition
     • Before trying to develop numerical algorithms for the symmetric eigenvalue problem, we should have a look at its condition!
     • Assume that instead of A we have a disturbed matrix A + εB, where ||B||2 = 1. Since A is symmetric, we assume that B is also symmetric (usually only one half of A is stored in memory).
     • Let λ be an eigenvalue of A and x an eigenvector to this eigenvalue, i.e. Ax = λx. Let x(ε) and λ(ε) be the disturbed values of x and λ, implicitly given by the equality
           (A + εB)·x(ε) = λ(ε)·x(ε).
     • Using Taylor series expansion around ε0 = 0, we have:
           (A + εB)·(x + x′(0)·ε + ½·x″(0)·ε² + ...) = (λ + λ′(0)·ε + ½·λ″(0)·ε² + ...)·(x + x′(0)·ε + ½·x″(0)·ε² + ...)
     • We are interested in the value of λ′(0). Comparing the coefficients of ε, then multiplying by x^T from the left and using x^T·A = (Ax)^T = λ·x^T (A is symmetric), we have:
           B·x + A·x′(0) = λ′(0)·x + λ·x′(0)
           x^T·B·x + x^T·A·x′(0) = λ′(0)·x^T·x + λ·x^T·x′(0)
           x^T·B·x + λ·x^T·x′(0) = λ′(0)·x^T·x + λ·x^T·x′(0)
           λ′(0) = (x^T·B·x) / (x^T·x)
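The first-order formula for λ′(0) can be verified numerically: for small ε, the shift of an eigenvalue of A + εB should match ε·x^T·B·x / x^T·x up to O(ε²). The random matrices and the seed below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # symmetric A
B = rng.standard_normal((n, n)); B = (B + B.T) / 2   # symmetric perturbation
B /= np.linalg.norm(B, 2)                            # normalize: ||B||_2 = 1

lam, vecs = np.linalg.eigh(A)
x = vecs[:, -1]                  # unit eigenvector of the largest eigenvalue

eps = 1e-6
lam_eps = np.linalg.eigvalsh(A + eps * B)[-1]

predicted = eps * (x @ B @ x) / (x @ x)   # first-order term eps * lambda'(0)
print(abs((lam_eps - lam[-1]) - predicted))   # remainder is O(eps^2), tiny
```

Note that |predicted| ≤ ε·||B||₂ = ε, in line with the well-conditioning result on the next slide.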

  6. • It follows that
           λ(ε) - λ(0) = (x^T·B·x / x^T·x)·ε + O(ε²).
     • Since |x^T·B·x / x^T·x| ≤ ||B||2, we get
           |λ(ε) - λ(0)| ≤ |ε|·||B||2 + O(ε²).
     • Finally, with our assumption ||B||2 ≤ 1 for the perturbation matrix B, we have
           |λ(ε) - λ(0)| = O(ε).
     • Thus, the symmetric eigenvalue problem is a well-conditioned problem!
     Remark: The asymmetric eigenvalue problem is ill-conditioned! Consider the asymmetric matrices
           A = ( 1    1         )       A + εB = ( 1        1         )
               ( 0    1 + 10⁻¹⁰ )                ( 10⁻¹⁰    1 + 10⁻¹⁰ )
     A has the eigenvalues λ1 = 1, λ2 = 1 + 10⁻¹⁰, while A + εB has the eigenvalues μ1/2 ≈ 1 ± 10⁻⁵. This means the error in the eigenvalues is about 10⁵ times larger than the error in the matrix!
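The 2×2 example from the remark can be reproduced directly; a small sketch using `numpy.linalg.eigvals`, which handles the non-symmetric case:

```python
import numpy as np

d = 1e-10
A = np.array([[1.0, 1.0],
              [0.0, 1.0 + d]])        # eigenvalues 1 and 1 + 1e-10
A_pert = np.array([[1.0, 1.0],
                   [d,   1.0 + d]])   # perturbation of size 1e-10

mu = np.linalg.eigvals(A_pert)
shift = np.abs(mu - 1.0).max()        # how far the eigenvalues moved
print(shift)                          # on the order of 1e-5
print(shift / d)                      # amplification on the order of 1e5
```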

  7. Naive Computation
     • A naive method to compute eigenvalues could be to determine the characteristic polynomial p(λ) of A and then use an iterative method, e.g. Newton's method, to solve p(λ) = 0.
     • However, this reduces the well-conditioned symmetric eigenvalue problem to the ill-conditioned problem of determining the roots of a polynomial!
     • Example: Consider a 12×12 matrix with the eigenvalues λi = i. Its characteristic polynomial is
           p(λ) = ∏_{i=1}^{12} (λ - i).
       The coefficient of λ⁷ is -6926634. Assume we have the polynomial q(λ) = p(λ) - 0.001·λ⁷, i.e. the relative error of the coefficient of λ⁷ is ε ≈ 1.44·10⁻¹⁰. However, the relative error of the eigenvalue λ9 in this case is ≈ 0.02!
     • We ought to have some better ideas . . .
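This root-perturbation example can be reproduced with NumPy; a sketch for illustration, where `np.poly` builds the coefficients from the roots and `np.roots` recovers the roots:

```python
import numpy as np

# Coefficients of p(lambda) = prod_{i=1}^{12} (lambda - i), highest power first;
# np.poly stores the coefficient of lambda^k at index 12 - k
coeffs = np.poly(np.arange(1, 13))
print(coeffs[12 - 7])             # coefficient of lambda^7: -6926634.0

# q(lambda) = p(lambda) - 0.001 * lambda^7: a tiny relative coefficient change
perturbed = coeffs.copy()
perturbed[12 - 7] -= 0.001

roots = np.roots(perturbed)
# Distance of each true eigenvalue i = 1..12 to the nearest perturbed root
deviation = [np.abs(roots - i).min() for i in range(1, 13)]
print(max(deviation))             # far larger than the coefficient perturbation
```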

  8. 6.3. Vector Iteration
     6.3.1. Power Iteration
     Let A ∈ R^(n×n) be symmetric. Then A has n eigenvectors u1, . . . , un that form a basis of R^n, and we can write any vector x ∈ R^n as a linear combination of u1, . . . , un:
         x = c1·u1 + c2·u2 + · · · + cn·un   (c1, . . . , cn ∈ R)
     Let λ1, . . . , λn be the eigenvalues of A corresponding to the eigenvectors u1, . . . , un. Then
         A·x   = λ1·c1·u1 + λ2·c2·u2 + · · · + λn·cn·un
         A²·x  = λ1²·c1·u1 + λ2²·c2·u2 + · · · + λn²·cn·un
         . . .
         A^k·x = λ1^k·c1·u1 + λ2^k·c2·u2 + · · · + λn^k·cn·un
               = λ1^k·[ c1·u1 + (λ2/λ1)^k·c2·u2 + · · · + (λn/λ1)^k·cn·un ]
     Assume that λ1 is dominant (i.e. |λ1| > |λ2| ≥ · · · ≥ |λn|).
     Then for k → ∞, we have (λ2/λ1)^k, . . . , (λn/λ1)^k → 0, and thus (1/λ1^k)·A^k·x → c1·u1.
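The convergence of A^k·x towards the dominant eigenvector can be observed on a small example; the matrix and start vector below are illustrative assumptions, and the iterates are normalized as discussed on the next slide:

```python
import numpy as np

# Symmetric matrix with eigenvalues 3 (dominant) and 1;
# the dominant eigenvector is (1, 1)/sqrt(2)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

x = np.array([1.0, 0.0])          # start vector with c1 != 0
for _ in range(50):
    x = A @ x
    x /= np.linalg.norm(x)        # normalize to avoid overflow

u1 = np.array([1.0, 1.0]) / np.sqrt(2.0)
print(np.abs(x @ u1))             # ~1.0: x aligns (up to sign) with u1
```

The alignment error shrinks like (λ2/λ1)^k = (1/3)^k, so 50 steps are far more than enough here.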

  9. • By choosing an appropriate start vector x(0) ∈ R^n and performing the iteration x(k+1) = A·x(k), we can calculate an approximation of an eigenvector of A belonging to the eigenvalue λ1.
     • "Appropriate" means that u1^T·x(0) ≠ 0. (Otherwise, x(k) will converge to the first eigenvector ui with ui^T·x(0) ≠ 0.) If x(0) is chosen randomly with uniform probability, then the probability that u1^T·x(0) = 0 is negligible (mathematically, it is zero).
     • The entries of the iteration vector get arbitrarily large (if |λ1| > 1) or arbitrarily small (if |λ1| < 1). To avoid numerical problems, we should therefore normalize x(k) after each iteration step.
     • Calculating eigenvalues from eigenvectors: Let x be an eigenvector of A belonging to the eigenvalue λ. Then
           A·x = λ·x   ⇒   x^T·A·x = λ·x^T·x.
       If x is normalized, i.e. ||x|| = 1, then λ = x^T·A·x.
     • The term x^T·A·x / x^T·x is also called the Rayleigh quotient.

  10. Power Iteration: Algorithm
     • Choose a random x(0) ∈ R^n such that ||x(0)|| = 1.
     • for k = 0, 1, 2, . . . :
       1. w(k) := A·x(k)
       2. λ(k) := (x(k))^T·w(k)
       3. x(k+1) := w(k) / ||w(k)||
       4. stop if the approximation is "sufficiently accurate"
     Remarks:
     • We can consider our approximation x(k) "sufficiently accurate" if
           ||w(k) - λ(k)·x(k)|| ≤ ε·||w(k)||.
     • The cost per iteration step is determined by the cost of computing the matrix-vector product A·x(k), which is at most O(n²).
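The algorithm above translates directly into code. A sketch in NumPy; the example matrix, tolerance, and seed are illustrative choices:

```python
import numpy as np

def power_iteration(A, tol=1e-10, max_iter=10_000, seed=42):
    """Power iteration with Rayleigh-quotient eigenvalue estimate
    and relative residual stopping criterion (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)                  # ||x(0)|| = 1
    lam = 0.0
    for _ in range(max_iter):
        w = A @ x                           # w(k) = A x(k)
        lam = x @ w                         # lambda(k) = x(k)^T w(k)
        if np.linalg.norm(w - lam * x) <= tol * np.linalg.norm(w):
            break                           # "sufficiently accurate"
        x = w / np.linalg.norm(w)           # x(k+1) = w(k) / ||w(k)||
    return lam, x

# Symmetric test matrix with eigenvalues 3 + sqrt(3), 3, and 3 - sqrt(3)
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam, x = power_iteration(A)
print(lam)                                  # approximates 3 + sqrt(3)
```

The convergence rate is governed by |λ2/λ1| ≈ 3/4.73, so the tolerance is reached after a few dozen iterations here.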
