IDR as a Deflation Method


  1. Netherlands-Japan, NLA2012, Delft, April 10, 2012. IDR as a Deflation Method. Gerard Sleijpen, Department of Mathematics, http://www.staff.science.uu.nl/~sleij101/

  2. Gerard Sleijpen. Joint work with Martin van Gijzen, Delft University of Technology, and Tijmen Collignon, Delft Institute of Applied Mathematics, Delft, the Netherlands.

  3. Program • Introduction • Sonneveld spaces • Inducing dimension reduction • IDR and deflation

  5. Solve $Ax = b$ for $x$. $A$ is an $n \times n$ non-singular (complex) matrix.

  6. Krylov subspace methods. To solve $Ax = b$ for $x$:
• Expansion. Build a Krylov subspace $\mathcal{K}_k(A, b)$.
• Extraction. Extract an approximate solution $x_k$ from $\mathcal{K}_k(A, b)$.

  7. Convergence depends on the angle between $b$ and $A\,\mathcal{K}_k(A, b)$.

  8. Convergence depends on spectral properties of $A$:
– how well the eigenvalues are clustered,
– the “conditioning” of the eigenvector basis.

  9. The quality of the extraction depends on the specific method and on practical issues such as efficiency and stability.

  10. Two ways to improve the spectral properties:
• Preconditioning. To cluster eigenvalues, . . . Apply the iterative method to $K^{-1}Ax = K^{-1}b$.
• Deflation. To replace small eigenvalues by 0: remove known components.
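
The preconditioning route is easy to make concrete. A minimal sketch, assuming SciPy's GMRES and an incomplete LU factorization in the role of $K$ (the tridiagonal matrix below is an illustrative stand-in, not an example from the talk):

```python
# Sketch: Krylov iteration with and without preconditioning.
# GMRES is applied to K^{-1} A x = K^{-1} b, with K an incomplete LU of A.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# Nonsymmetric tridiagonal stand-in with a poorly clustered spectrum
A = sp.diags([-1.0, 2.2, -1.2], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

counts = {}
def counter(name):
    counts[name] = 0
    def cb(pr_norm):
        counts[name] += 1
    return cb

# Plain GMRES: convergence is governed by the spectrum of A
x, info = spla.gmres(A, b, restart=50,
                     callback=counter("plain"), callback_type="pr_norm")

# ILU-preconditioned GMRES: the preconditioned spectrum clusters near 1
ilu = spla.spilu(A, drop_tol=1e-3)
M = spla.LinearOperator((n, n), matvec=ilu.solve)
x, info = spla.gmres(A, b, M=M, restart=50,
                     callback=counter("ilu"), callback_type="pr_norm")

print(counts)   # the preconditioned run needs far fewer inner iterations
```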

  15. Program • Introduction • Sonneveld spaces • Inducing dimension reduction • IDR and deflation

  16. $s \in \mathbb{N}$, typically $s = 1, 2, 4, 8$. $\widetilde{R}$ is a full rank $n \times s$ matrix. Terminology. $\widetilde{R}$ is called the initial shadow residual or the IDR test matrix.

  17. Definition. • Block Krylov subspace of order $k$ generated by $A^*$ and $\widetilde{R}$: $\mathcal{K}_k(A^*, \widetilde{R}) \equiv \big\{ \sum_{j<k} (A^*)^j \widetilde{R}\,\gamma_j \;\big|\; \gamma_j \in \mathbb{C}^s \big\}$.
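
Read literally, the definition says: stack the blocks $\widetilde{R}, A^*\widetilde{R}, \ldots, (A^*)^{k-1}\widetilde{R}$ as columns; their span is $\mathcal{K}_k(A^*, \widetilde{R})$, generically of dimension $ks$. A minimal sketch, assuming random real test data (so $A^* = A^T$):

```python
# Sketch: a column basis of the block Krylov subspace
#   K_k(A*, R~) = span{ R~, A* R~, ..., (A*)^{k-1} R~ }.
# Random real data is an assumption for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n, s, k = 50, 4, 6
A = rng.standard_normal((n, n))
R = rng.standard_normal((n, s))          # full-rank n x s test matrix R~

blocks = [R]
for _ in range(k - 1):
    blocks.append(A.T @ blocks[-1])      # next block (A*)^j R~ (A real here)
Kk = np.hstack(blocks)                   # n x (k*s) generator matrix

# Generically the blocks are linearly independent: dim = k*s
print(np.linalg.matrix_rank(Kk), k * s)
```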

  18. Example. Bi-CG generates residuals $r_k^{\text{Bi-CG}} \perp \mathcal{K}_k(A^*, \widetilde{r}_0)$, here $\widetilde{R} = [\widetilde{r}_0]$: $r_{k+1}^{\text{Bi-CG}} = r_k^{\text{Bi-CG}} - \alpha_k A u_k \perp \mathcal{K}_{k+1}(A^*, \widetilde{r}_0)$, with $u_k$ such that $A u_k \perp \mathcal{K}_k(A^*, \widetilde{r}_0)$.
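
The stated orthogonality can be observed numerically by running a bare-bones textbook Bi-CG recursion (real, unpreconditioned; a generic sketch, not the speaker's code) and testing $r_k \perp \mathcal{K}_k(A^T, \widetilde{r}_0)$:

```python
# Sketch: textbook Bi-CG (real case) with a check that the residual
# r_k stays orthogonal to the shadow space K_k(A^T, r~0).
# The shifted random matrix is an assumption to ensure convergence.
import numpy as np

rng = np.random.default_rng(1)
n, k_max = 60, 10
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)

x = np.zeros(n)
r = b - A @ x
rt = r.copy()                       # shadow residual, common choice r~0 = r0
p, pt = r.copy(), rt.copy()
rho = rt @ r

shadow = [rt / np.linalg.norm(rt)]  # normalized basis of K_k(A^T, r~0)
for k in range(k_max):
    Ap = A @ p
    alpha = rho / (pt @ Ap)
    x += alpha * p
    r -= alpha * Ap                 # r_{k+1}
    rt -= alpha * (A.T @ pt)
    rho, rho_old = rt @ r, rho
    beta = rho / rho_old
    p = r + beta * p
    pt = rt + beta * pt
    # r_{k+1} _|_ K_{k+1}(A^T, r~0): inner products are ~ rounding error
    print(k + 1, np.max(np.abs(np.column_stack(shadow).T @ r)))
    v = A.T @ shadow[-1]
    shadow.append(v / np.linalg.norm(v))
```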

  19. Definition. $P_k$ a polynomial of exact degree $k$. • Sonneveld subspace of order $k$ generated by $A$ and $\widetilde{R}$: $\mathcal{S}(P_k, A, \widetilde{R}) \equiv \big\{ P_k(A)\,v \;\big|\; v \perp \mathcal{K}_k(A^*, \widetilde{R}) \big\}$.

  20. Example. $\widetilde{R} = [\widetilde{r}_0]$, $r_k^{\text{Bi-CGSTAB}} = P_k(A)\,r_k^{\text{Bi-CG}}$. Note that $r_k^{\text{Bi-CG}} \perp \mathcal{K}_k(A^*, \widetilde{r}_0)$. In Bi-CGSTAB: $P_{k+1}(\lambda) = (1 - \omega_k \lambda) P_k(\lambda)$, where, with $r_k' \equiv P_k(A)\,r_{k+1}^{\text{Bi-CG}}$, $\omega_k \equiv \operatorname{argmin}_{\omega \in \mathbb{C}} \|r_k' - \omega A r_k'\|_2$. Theorem. $r_k^{\text{Bi-CGSTAB}} \in \mathcal{S}(P_k, A, \widetilde{r}_0)$.
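
This one-dimensional least-squares problem has the closed-form minimizer $\omega_k = (Ar_k')^* r_k' / \|Ar_k'\|_2^2$. A quick sketch with made-up complex data, comparing the formula against a generic least-squares solve:

```python
# Sketch: omega = argmin_w || r' - w * A r' ||_2 in closed form.
# The random complex data stands in for r'_k and A.
import numpy as np

rng = np.random.default_rng(2)
n = 30
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
rp = rng.standard_normal(n) + 1j * rng.standard_normal(n)

Arp = A @ rp
omega = np.vdot(Arp, rp) / np.vdot(Arp, Arp)    # <Ar', r'> / ||Ar'||^2

# Same answer from the generic one-column least-squares problem
omega_ls = np.linalg.lstsq(Arp.reshape(-1, 1), rp, rcond=None)[0][0]
print(abs(omega - omega_ls))                    # ~ machine precision
```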

  21. Property. Bi-CGSTAB $\sim$ IDR($s$) for $s = 1$ (i.e. $\widetilde{R} = [\widetilde{r}_0]$). [Sonneveld & van Gijzen 2008; Sleijpen, Sonneveld & van Gijzen 2010]

  22. [Figure: convergence history of linear solvers for the $1000 \times 1000$ matrix "meier01": $\log_{10}\|r_k\|_2$ versus the number of matrix-vector products (# MV), for bicgstab, BiCGstab(2), and bi-idrs(8).]

  23. [Figure: the same convergence history with idr(2)stab(2) added.]

Test problem: $u_{xx} + u_{yy} + u_{zz} + 1000\,u_x = f$, where $f$ is defined by the solution $u(x, y, z) = \exp(xyz)\sin(\pi x)\sin(\pi y)\sin(\pi z)$. Discretized with $10 \times 10 \times 10$ volumes. No preconditioning.
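
The test problem can be reproduced approximately. The sketch below uses central finite differences on the 10 x 10 x 10 interior grid of the unit cube (the talk discretizes with finite volumes, so this matrix is only a stand-in) and counts matrix-vector products, the quantity on the horizontal axis of the plots:

```python
# Sketch: u_xx + u_yy + u_zz + 1000 u_x = f on the unit cube,
# central differences on a 10x10x10 interior grid (1000 unknowns);
# an approximation of the finite-volume matrix used in the talk.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

m = 10
h = 1.0 / (m + 1)
I = sp.identity(m)
D2 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(m, m)) / h**2
D1 = sp.diags([-1.0, 1.0], [-1, 1], shape=(m, m)) / (2 * h)

A = (sp.kron(sp.kron(D2, I), I) + sp.kron(sp.kron(I, D2), I)
     + sp.kron(sp.kron(I, I), D2) + 1000 * sp.kron(sp.kron(D1, I), I)).tocsr()

# f is defined by the exact solution u = exp(xyz) sin(pi x) sin(pi y) sin(pi z)
pts = np.arange(1, m + 1) * h
X, Y, Z = np.meshgrid(pts, pts, pts, indexing="ij")  # x slowest, matching kron order
u = (np.exp(X * Y * Z) * np.sin(np.pi * X) * np.sin(np.pi * Y)
     * np.sin(np.pi * Z)).ravel()
b = A @ u                                            # discrete right-hand side

mv = [0]
def counting_matvec(v):
    mv[0] += 1
    return A @ v

Aop = spla.LinearOperator(A.shape, matvec=counting_matvec, dtype=A.dtype)
x, info = spla.bicgstab(Aop, b)
print("# MV:", mv[0], " residual:", np.linalg.norm(b - A @ x))
```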

  24. IDR Theorem. With $P_{k+1}(\lambda) \equiv (\alpha - \omega\lambda) P_k(\lambda)$, $\omega \neq 0$:
• $(\alpha I - \omega A)\big(\mathcal{S}(P_k, A, \widetilde{R}) \cap \widetilde{R}^\perp\big) = \mathcal{S}(P_{k+1}, A, \widetilde{R})$;
• $\mathcal{S}(P_{k+1}, A, \widetilde{R}) \subset \mathcal{S}(P_k, A, \widetilde{R})$;
• $\mathcal{S}(P_{k+1}, A, \widetilde{R}) \neq \mathcal{S}(P_k, A, \widetilde{R})$ if $A$ has no eigenvector in $\widetilde{R}^\perp$ and $\mathcal{S}(P_k, A, \widetilde{R}) \neq \{0\}$.

  25. If the zeros of $P_{k+1}$ are not eigenvalues of $A$, then increasing $k$ leads to: dimension increase of $\mathcal{K}_k(A^*, \widetilde{R})$ = dimension reduction of $\mathcal{S}(P_k, A, \widetilde{R})$.
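
This count can be checked from the definition: take an orthonormal basis of $\mathcal{K}_k(A^*, \widetilde{R})^\perp$, apply $P_k(A)$ as a product of factors $(I - \omega_j A)$, and measure the rank. A sketch with random real data and random $\omega_j$ (so the zeros $1/\omega_j$ avoid the spectrum almost surely):

```python
# Sketch: dim S(P_k, A, R~) = n - k*s when the zeros of P_k are not
# eigenvalues of A. Here P_k(t) = prod_j (1 - omega_j t), data random.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(3)
n, s, k = 40, 3, 5
A = rng.standard_normal((n, n))
R = rng.standard_normal((n, s))
omegas = rng.standard_normal(k)

# Orthonormal basis V of K_k(A^T, R~)^perp  (A real, so A* = A^T)
blocks = [R]
for _ in range(k - 1):
    blocks.append(A.T @ blocks[-1])
V = null_space(np.hstack(blocks).T)

# Apply P_k(A) = prod_j (I - omega_j A) to the basis
S = V
for w in omegas:
    S = S - w * (A @ S)

print(np.linalg.matrix_rank(S), n - k * s)   # the two should agree
```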

  26. Corollary. Let $\mathcal{G}_k \equiv \mathcal{S}(P_k, A, \widetilde{R})$ and $\mathcal{G}_k' \equiv \mathcal{G}_k \cap \widetilde{R}^\perp$. With $P_{k+1}(\lambda) \equiv (1 - \omega_k \lambda) P_k(\lambda)$, we have $\mathcal{G}_{k+1} \equiv (I - \omega_k A)\,\mathcal{G}_k' \subset \mathcal{G}_k$ and $A\,\mathcal{G}_k' \subset \mathcal{G}_k$.
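
The recursion of the corollary can be traced with explicit bases: start from $\mathcal{G}_0 = \mathbb{R}^n$, intersect with $\widetilde{R}^\perp$, map by $(I - \omega_k A)$, and watch the dimension drop by $s$ per step while $A\,\mathcal{G}_k'$ stays inside $\mathcal{G}_k$. A sketch under the same random-data assumptions as above:

```python
# Sketch: iterate G_{k+1} = (I - omega_k A)(G_k ∩ R~^perp) with explicit
# orthonormal bases; dim G_k should drop by s per step and A G'_k ⊂ G_k.
import numpy as np
from scipy.linalg import null_space, orth

rng = np.random.default_rng(4)
n, s = 40, 3
A = rng.standard_normal((n, n))
R = rng.standard_normal((n, s))

B = np.eye(n)                          # orthonormal basis of G_0 = R^n
for k in range(5):
    omega = rng.standard_normal()
    C = null_space(R.T @ B)            # coefficient basis of G_k ∩ R~^perp
    Gp = B @ C                         # orthonormal basis of G'_k
    # containment check: A G'_k minus its projection onto G_k
    res = np.linalg.norm(A @ Gp - B @ (B.T @ (A @ Gp)))
    B = orth(Gp - omega * (A @ Gp))    # orthonormal basis of G_{k+1}
    print(f"dim G_{k+1} = {B.shape[1]},  containment residual = {res:.1e}")
```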
