

  1. Rank-one Modification of Symmetric Eigenproblem Zack 11/8/2013

  2. Eigenproblem
  • A = QDQ^T
  • Q is an orthogonal matrix: QQ^T = Q^T Q = I
  • D is a diagonal matrix with d_1 ≤ d_2 ≤ ⋯ ≤ d_n

  3. Rank-one modification of the eigenproblem
  • Ã = A + σuu^T
  • A = QDQ^T is known.
  • Goal: compute the eigenvalue decomposition of Ã: Ã = Q̃D̃Q̃^T

  4. Ã = A + σuu^T = Q(D + σzz^T)Q^T, where z = Q^T u
  • Define C = D + σzz^T; the problem reduces to the eigenvalue decomposition of C

  5. • The eigenvalues of the matrix C satisfy det(D + σzz^T − λI) = 0
  • det(D + σzz^T − λI) = det(D − λI) det(I + σ(D − λI)^{-1}zz^T) = ∏_{j=1}^n (d_j − λ) · (1 + σ Σ_{j=1}^n ζ_j²/(d_j − λ)), where ζ_j denotes the j-th component of z
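
Below is a minimal numerical check of this reduction and of the secular equation, written as a NumPy sketch. The setup (random Q, D, u and the value of σ) is made up for illustration and is not part of the slides.

```python
import numpy as np

# Build a symmetric A = Q D Q^T and its rank-one modification A~ = A + sigma*u*u^T.
rng = np.random.default_rng(0)
n = 6
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthogonal Q
d = np.sort(rng.standard_normal(n))                # diagonal of D, d_1 <= ... <= d_n
u = rng.standard_normal(n)
sigma = 0.7

A_tilde = Q @ np.diag(d) @ Q.T + sigma * np.outer(u, u)

z = Q.T @ u                                        # A~ = Q (D + sigma*z*z^T) Q^T
C = np.diag(d) + sigma * np.outer(z, z)

def secular(lam):
    """1 + sigma * sum_j zeta_j^2 / (d_j - lam)."""
    return 1.0 + sigma * np.sum(z**2 / (d - lam))

# The eigenvalues of A~ coincide with those of C ...
print(np.allclose(np.linalg.eigvalsh(A_tilde), np.linalg.eigvalsh(C)))
# ... and each of them is (numerically) a root of the secular function.
for lam in np.linalg.eigvalsh(C):
    print(f"{lam: .6f}  {secular(lam): .2e}")
```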

  6. Secular equation: 1 + σ Σ_{j=1}^n ζ_j²/(d_j − λ) = 0

  7. 1 + σ Σ_{j=1}^n ζ_j²/(d_j − λ) = 0
  • Define λ = d_j + σt and δ_k = (d_k − d_j)/σ; then
  w_j(t) = 1 + Σ_{k=1}^n ζ_k²/(δ_k − t) = 0,  t ∈ (0, δ_{j+1})
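
A short, self-contained sketch of this change of variables (toy numbers, NumPy assumed; it also assumes σ > 0, which is what places the root in (0, δ_{j+1})):

```python
import numpy as np

# Change of variables: lambda = d_j + sigma*t, delta_k = (d_k - d_j)/sigma.
d = np.array([1.0, 2.0, 3.0])           # made-up diagonal of D
z = np.array([0.6, 0.3, 0.8])           # made-up z = Q^T u
sigma = 0.5
j = 0                                   # 0-based index; slide notation: j = 1

delta = (d - d[j]) / sigma              # delta_j = 0 < delta_{j+1} < ...
w = lambda t: 1.0 + np.sum(z**2 / (delta - t))   # shifted secular function w_j(t)

C = np.diag(d) + sigma * np.outer(z, z)
lam = np.linalg.eigvalsh(C)[j]          # the eigenvalue lying in (d_j, d_{j+1})
t_star = (lam - d[j]) / sigma

print(0.0 < t_star < delta[j + 1])      # True: the root sits in (0, delta_{j+1})
print(w(t_star))                        # ~ 0 up to roundoff
```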

  8. Newton's method would be an obvious choice for finding the solution. However, Newton's method is based on a local linear approximation to the function. Since our functions are rational, it seems more natural to develop a method based on a local approximation by a simple rational function f(t).

  9. Step 1: choose an approximating function with the same pole structure, for example
  f(t) = a + b/t + c/(δ − t), where δ = δ_{j+1}, so that lim_{t→0+} f(t) = −∞ and lim_{t→δ−} f(t) = +∞
  Step 2: iteration
  1. Choose t_0 ∈ (0, δ)
  2. Solve for a_0, b_0, c_0 from f(t_0) = w_j(t_0), f'(t_0) = w_j'(t_0), f''(t_0) = w_j''(t_0) …
  3. Solve f(t_1) = 0 for the next iterate t_1
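
The iteration can be sketched concretely for the three-parameter model f(t) = a + b/t + c/(δ − t): the derivative conditions give a 2×2 linear system for b and c, a follows from the value condition, and f(t) = 0 reduces to a quadratic in t. The data below are made up; this is only an illustration of the scheme, not the presenter's code.

```python
import numpy as np

# One step: fit f(t) = a + b/t + c/(delta - t) to w, w', w'' at t0,
# then take the root of f in (0, delta) as the next iterate.
zsq    = np.array([0.36, 0.09, 0.64])          # zeta_k^2 (toy values)
deltas = np.array([0.0, 2.0, 4.0])             # delta_j = 0 < delta_{j+1} < ...
delta  = deltas[1]                             # right pole, delta_{j+1}

w   = lambda t: 1.0 + np.sum(zsq / (deltas - t))
dw  = lambda t: np.sum(zsq / (deltas - t)**2)
d2w = lambda t: 2.0 * np.sum(zsq / (deltas - t)**3)

def rational_step(t0):
    # f'(t0) = -b/t0^2 + c/(delta-t0)^2,  f''(t0) = 2b/t0^3 + 2c/(delta-t0)^3
    M = np.array([[-1.0 / t0**2, 1.0 / (delta - t0)**2],
                  [ 2.0 / t0**3, 2.0 / (delta - t0)**3]])
    b, c = np.linalg.solve(M, [dw(t0), d2w(t0)])
    a = w(t0) - b / t0 - c / (delta - t0)
    # f(t) = 0  <=>  -a*t^2 + (a*delta - b + c)*t + b*delta = 0
    roots = np.roots([-a, a * delta - b + c, b * delta])
    roots = roots[np.isreal(roots)].real
    return roots[(roots > 0) & (roots < delta)][0]

t = 1.0                                        # any point in (0, delta)
for _ in range(6):
    t = rational_step(t)
print(t, w(t))                                 # w(t) ~ 0
```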

  10. • Gragg's method: f(t) = a + b/t + c/(δ − t). Converges from any point in (0, δ) with a cubic order of convergence.
  • Fixed weight 1 method (FW1): f(t) = a − ζ_j²/t + b/(δ − t)
  • Fixed weight 2 method (FW2): f(t) = a + b/t + ζ_{j+1}²/(δ − t)
  The fixed weight methods converge from any point in (0, δ) with a quadratic order of convergence.
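
The fixed-weight idea fits in a few lines: for FW1 the coefficient of the 1/t pole is frozen at ζ_j², so only first-derivative information is needed for the two remaining parameters. Again a hedged sketch with made-up numbers, not the presenter's implementation.

```python
import numpy as np

# FW1: f(t) = a - zeta_j^2/t + b/(delta - t), matched to w and w' at t0.
zsq    = np.array([0.36, 0.09, 0.64])     # zeta_k^2 (toy values); j = 1 in slide numbering
deltas = np.array([0.0, 2.0, 4.0])
j, delta = 0, deltas[1]                   # 0-based j; delta = delta_{j+1}

w  = lambda t: 1.0 + np.sum(zsq / (deltas - t))
dw = lambda t: np.sum(zsq / (deltas - t)**2)

def fw1_step(t0):
    b = (dw(t0) - zsq[j] / t0**2) * (delta - t0)**2      # >= 0 by construction
    a = w(t0) + zsq[j] / t0 - b / (delta - t0)
    # f(t) = 0  <=>  -a*t^2 + (a*delta + zeta_j^2 + b)*t - zeta_j^2*delta = 0
    roots = np.roots([-a, a * delta + zsq[j] + b, -zsq[j] * delta])
    roots = roots[np.isreal(roots)].real
    return roots[(roots > 0) & (roots < delta)][0]

t = 1.0
for _ in range(8):
    t = fw1_step(t)
print(t, w(t))                            # w(t) ~ 0
```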

  11. • Divide w_j(t) into two parts:
  w_j(t) = 1 + Σ_{k=1}^j ζ_k²/(δ_k − t) + Σ_{k=j+1}^n ζ_k²/(δ_k − t) = 1 + ψ(t) + ϕ(t)

  12. ψ(t) = Σ_{k=1}^j ζ_k²/(δ_k − t)
  ϕ(t) = Σ_{k=j+1}^n ζ_k²/(δ_k − t)
  δ_1 < ⋯ < δ_j = 0 < δ_{j+1} < ⋯ < δ_n

  13. • Choose an approximating function for each of ψ(t) and ϕ(t):
  ψ(t) ≈ a + b/t
  ϕ(t) ≈ c + d/(δ − t)
  This method is named “the middle way”. For this method, convergence cannot be guaranteed unless the starting point lies close enough to the root. In case of convergence, the order is quadratic.
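
A compact sketch of one “middle way” step under these model choices: ψ and ϕ are each matched to their value and first derivative at the current iterate, and the equation for the next iterate is again a quadratic. Data and names are illustrative only.

```python
import numpy as np

# "Middle way": psi(t) ~ a + b/t and phi(t) ~ c + d/(delta - t), each matched
# to its value and first derivative at t0; then solve 1 + model = 0.
zsq    = np.array([0.36, 0.09, 0.64])      # zeta_k^2 (toy values)
deltas = np.array([-2.0, 0.0, 2.0])        # here j = 2 in slide numbering, so delta_j = 0
j, delta = 1, deltas[2]                    # 0-based j; right pole at delta_{j+1}

psi  = lambda t: np.sum(zsq[:j + 1] / (deltas[:j + 1] - t))
dpsi = lambda t: np.sum(zsq[:j + 1] / (deltas[:j + 1] - t)**2)
phi  = lambda t: np.sum(zsq[j + 1:] / (deltas[j + 1:] - t))
dphi = lambda t: np.sum(zsq[j + 1:] / (deltas[j + 1:] - t)**2)
w    = lambda t: 1.0 + psi(t) + phi(t)

def middle_way_step(t0):
    b = -t0**2 * dpsi(t0)                  # psi(t) ~ a + b/t
    a = psi(t0) - b / t0
    d = (delta - t0)**2 * dphi(t0)         # phi(t) ~ c + d/(delta - t)
    c = phi(t0) - d / (delta - t0)
    s = 1.0 + a + c
    # s + b/t + d/(delta - t) = 0  <=>  -s*t^2 + (s*delta - b + d)*t + b*delta = 0
    roots = np.roots([-s, s * delta - b + d, b * delta])
    roots = roots[np.isreal(roots)].real
    return roots[(roots > 0) & (roots < delta)][0]

t = 0.5                                    # should start close enough to the root
for _ in range(8):
    t = middle_way_step(t)
print(t, w(t))                             # w(t) ~ 0
```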

  14. “Approaching from the left” (BNS1):
  ψ(t) ≈ a/(b − t)
  ϕ(t) ≈ c + d/(δ − t)
  Converges from any point in (0, t*] (t* denotes the root of w_j) and the order of convergence is quadratic.
  “Approaching from the right” (BNS2):
  ψ(t) ≈ a + b/t
  ϕ(t) ≈ c/(d − t)
  Converges from any point in [t*, δ) and the order of convergence is quadratic.
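
For contrast with the previous sketch, “approaching from the left” (BNS1) replaces the ψ model by a single movable pole a/(b − t); started at or below the root, the iterates stay to its left. Same caveats apply: toy data, illustrative names, not the presenter's code.

```python
import numpy as np

# BNS1: psi(t) ~ a/(b - t) (matched to psi and psi' at t0),
#       phi(t) ~ c + d/(delta - t) (matched to phi and phi' at t0).
zsq    = np.array([0.36, 0.09, 0.64])
deltas = np.array([-2.0, 0.0, 2.0])
j, delta = 1, deltas[2]

psi  = lambda t: np.sum(zsq[:j + 1] / (deltas[:j + 1] - t))
dpsi = lambda t: np.sum(zsq[:j + 1] / (deltas[:j + 1] - t)**2)
phi  = lambda t: np.sum(zsq[j + 1:] / (deltas[j + 1:] - t))
dphi = lambda t: np.sum(zsq[j + 1:] / (deltas[j + 1:] - t)**2)
w    = lambda t: 1.0 + psi(t) + phi(t)

def bns1_step(t0):
    b = t0 + psi(t0) / dpsi(t0)            # pole location of the psi model (b < 0 here)
    a = psi(t0)**2 / dpsi(t0)
    d = (delta - t0)**2 * dphi(t0)
    c = phi(t0) - d / (delta - t0)
    # (1+c)(b-t)(delta-t) + a*(delta-t) + d*(b-t) = 0, a quadratic in t
    coeffs = [1.0 + c,
              -((1.0 + c) * (b + delta) + a + d),
              (1.0 + c) * b * delta + a * delta + d * b]
    roots = np.roots(coeffs)
    roots = roots[np.isreal(roots)].real
    return roots[(roots > 0) & (roots < delta)][0]

t = 0.05                                   # a point in (0, t*], i.e. left of the root
for _ in range(8):
    t = bns1_step(t)
print(t, w(t))                             # w(t) ~ 0, approached from the left
```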

  15. Starting points
  1 + Σ_{k=1}^n ζ_k²/(δ_k − t) = 0
  1 − ζ_j²/t + ζ_{j+1}²/(δ_{j+1} − t) + Σ_{k=1, k≠j,j+1}^n ζ_k²/(δ_k − t) = 0
  1 − ζ_j²/t_0 + ζ_{j+1}²/(δ_{j+1} − t_0) + Σ_{k=1, k≠j,j+1}^n ζ_k²/(δ_k − δ_{j+1}) ≈ 0
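
The last relation is a quadratic in t_0, and its root in (0, δ_{j+1}) can serve as the starting point. A small sketch (toy data matching the earlier sketches, j = 1 in the slide's numbering; NumPy assumed):

```python
import numpy as np

# Starting point: keep the j and j+1 terms exactly, freeze the remaining terms
# at t = delta_{j+1}, and solve the resulting quadratic for t0.
zsq    = np.array([0.36, 0.09, 0.64])    # zeta_k^2 (toy values)
deltas = np.array([0.0, 2.0, 4.0])
j, delta = 0, deltas[1]                  # 0-based j; delta = delta_{j+1}

others = np.ones(len(zsq), dtype=bool)
others[[j, j + 1]] = False
K = 1.0 + np.sum(zsq[others] / (deltas[others] - delta))   # frozen part

# K - zeta_j^2/t + zeta_{j+1}^2/(delta - t) = 0
# <=>  -K*t^2 + (K*delta + zeta_j^2 + zeta_{j+1}^2)*t - zeta_j^2*delta = 0
roots = np.roots([-K, K * delta + zsq[j] + zsq[j + 1], -zsq[j] * delta])
roots = roots[np.isreal(roots)].real
t0 = roots[(roots > 0) & (roots < delta)][0]

w = lambda t: 1.0 + np.sum(zsq / (deltas - t))
print(t0, w(t0))      # t0 ~ 0.26, close to the true root (~0.29) for this data
```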

  16. Calculating the eigenvector
  • Let q_j be the j-th eigenvector of Ã, so that (Ã − d̃_j I)q_j = 0
  • With z = Q^T u and x_j = Q^T q_j, this becomes (D − d̃_j I + σzz^T)x_j = 0

  17. Calculating the eigenvector
  • Theorem. If B is invertible and ρ ≠ 0, the following two statements are equivalent:
  (B + ρuv^T)x = b
  and
  x = B^{-1}b − θB^{-1}u, with μθ = v^T B^{-1}b, where μ = 1/ρ + v^T B^{-1}u
  • Therefore x_j = −θ(D − d̃_j I)^{-1}z, i.e. x_j is proportional to (D − d̃_j I)^{-1}z
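
The proportionality x_j ∝ (D − d̃_j I)^{-1}z can be checked directly in floating point (the scalar −θ only fixes the normalization). A short NumPy sketch with made-up data:

```python
import numpy as np

# Eigenvectors of A~ from x_j ∝ (D - lam_j I)^{-1} z, then q_j = Q x_j.
rng = np.random.default_rng(1)
n = 5
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
d = np.sort(rng.standard_normal(n))
u = rng.standard_normal(n)
sigma = 0.9

A_tilde = Q @ np.diag(d) @ Q.T + sigma * np.outer(u, u)
z = Q.T @ u

for lam in np.linalg.eigvalsh(A_tilde):
    x = z / (d - lam)                      # (D - lam*I)^{-1} z, up to the scalar -theta
    q = Q @ (x / np.linalg.norm(x))        # normalized eigenvector of A~
    print(np.linalg.norm(A_tilde @ q - lam * q))   # ~ 0 for every eigenvalue
```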

  18. Q&A
