Summary of the course, 2014h ed.
Anton Evgrafov, November 19, 2014


  1. Summary of the course, 2014h ed. Anton Evgrafov, November 19, 2014

  2. Linear algebra background:
     ◮ Gaussian elimination, LU/Cholesky/LDL factorization;
     ◮ Rank, null space, range, adjoint;
     ◮ Eigenvalues, eigenvectors, eigenspaces; algebraic and geometric multiplicities;
     ◮ Induced and Frobenius matrix norms;
     ◮ Symmetric/Hermitian/normal/unitary matrices;
     ◮ Similarity (unitary or not), diagonalization, Jordan normal form, Schur factorization;
     ◮ Solution perturbation ∼ κ(A)

  3. QR factorization: Q^*Q = I, A = QR, R_ij = 0 for i > j.
     Used for:
     ◮ Solving least-squares problems (e.g. in GMRES/DQGMRES/MINRES/SYMMLQ)
     ◮ The QR algorithm for eigenvalues
     Computational algorithms:
     ◮ (Modified) Gram-Schmidt orthogonalization
     ◮ Householder reflections
     ◮ Givens rotations
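
A minimal numpy sketch of the modified Gram-Schmidt variant; the function name and the random test matrix are illustrative, not from the slides. Householder- and Givens-based versions are organized differently but produce the same Q and R.

    import numpy as np

    def mgs_qr(A):
        """QR factorization by modified Gram-Schmidt (sketch for small dense A)."""
        A = np.array(A, dtype=float)
        m, n = A.shape
        Q = np.zeros((m, n))
        R = np.zeros((n, n))
        V = A.copy()
        for j in range(n):
            R[j, j] = np.linalg.norm(V[:, j])
            Q[:, j] = V[:, j] / R[j, j]
            for k in range(j + 1, n):          # orthogonalize the remaining columns against q_j
                R[j, k] = Q[:, j] @ V[:, k]
                V[:, k] -= R[j, k] * Q[:, j]
        return Q, R

    A = np.random.rand(6, 4)
    Q, R = mgs_qr(A)
    print(np.allclose(Q.T @ Q, np.eye(4)), np.allclose(Q @ R, A))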

  4. Diagonalization methods (rare instances where the following are known):
     1. Eigenvalues/eigenvectors: A = QΛQ^T
     2. A fast algorithm (e.g. FFT) for v ↔ Qv
     ⟹ Ax = b is solved as x = Q[Λ^{-1}(Q^T b)]
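
A small numpy sketch of the solve-by-diagonalization idea, assuming A is symmetric so that numpy.linalg.eigh provides A = QΛQ^T; the "fast transform" ingredient (e.g. an FFT for Poisson-type or circulant matrices) is not shown, and the test matrix is illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5)); A = A + A.T + 10 * np.eye(5)   # symmetric, well conditioned
    b = rng.standard_normal(5)

    lam, Q = np.linalg.eigh(A)      # A = Q diag(lam) Q^T
    x = Q @ ((Q.T @ b) / lam)       # x = Q [Lambda^{-1} (Q^T b)]
    print(np.allclose(A @ x, b))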

  5. Matrix-splitting methods: A = M − N,
     x_{k+1} = M^{-1}(N x_k + b),  e_{k+1} = M^{-1} N e_k.
     Theorem: ‖e_k‖ → 0 for every x_0 ⟺ ρ(M^{-1}N) < 1.
     Asymptotically ‖e_{k+1}‖/‖e_k‖ ∼ ρ(M^{-1}N). Slow!
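
A generic splitting iteration as a short numpy sketch (function names and arguments are illustrative); the iteration is written in the equivalent correction form x ← x + M^{-1}(b − Ax), and the second function gives the asymptotic rate ρ(M^{-1}N).

    import numpy as np

    def splitting_iteration(A, b, M, x0, iters=50):
        """Stationary iteration x_{k+1} = M^{-1}(N x_k + b), N = M - A.
        Converges for every x0 iff rho(M^{-1} N) < 1 (sketch, not production code)."""
        x = x0.copy()
        for _ in range(iters):
            x = x + np.linalg.solve(M, b - A @ x)
        return x

    def asymptotic_rate(A, M):
        """Spectral radius of the iteration matrix M^{-1} N = I - M^{-1} A."""
        return np.max(np.abs(np.linalg.eigvals(np.eye(len(A)) - np.linalg.solve(M, A))))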

  6. Matrix-splitting methods, standard examples:
     ◮ Jacobi: M = diag A
     ◮ Gauss-Seidel: M = triu A
     ◮ Block variations
     Uses:
     ◮ Smoothers in multigrid
     ◮ Preconditioners: additive Schwarz ≈ block Jacobi; multiplicative Schwarz ≈ block Gauss-Seidel
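
The same iteration with the two standard choices of M, as a self-contained sketch on the 1D Poisson model problem (an illustrative test case); note that M = tril(A) is the forward Gauss-Seidel sweep, while the slide's M = triu(A) is the backward sweep.

    import numpy as np

    n = 50
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson model problem
    b = np.ones(n)

    M_jacobi = np.diag(np.diag(A))   # Jacobi: M = diag(A)
    M_gs     = np.tril(A)            # forward Gauss-Seidel sweep (triu(A) gives the backward sweep)

    for M in (M_jacobi, M_gs):
        x = np.zeros(n)
        for _ in range(200):                       # x_{k+1} = x_k + M^{-1}(b - A x_k)
            x += np.linalg.solve(M, b - A @ x)
        print(np.linalg.norm(b - A @ x))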

  7. Projection operators: P^2 = P
     ◮ I − P is also a projector
     ◮ Null space/range: ker P ⊕ range P = C^n
     ◮ y = Px ⟺ y ∈ range P and x − y = (I − P)x ∈ ker P
             ⟺ y ∈ range P and x − y = (I − P)x ⊥ [ker P]^⊥
     ◮ Orthogonal projectors: P^T = P
     Used in this course: analysis of projection methods
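
A quick numpy check of these properties for the orthogonal projector onto range(V), P = V(V^T V)^{-1}V^T; the matrix V is an arbitrary illustrative example.

    import numpy as np

    rng = np.random.default_rng(2)
    V = rng.standard_normal((6, 2))
    P = V @ np.linalg.solve(V.T @ V, V.T)       # orthogonal projector onto range(V)

    print(np.allclose(P @ P, P))                # P^2 = P
    print(np.allclose(P.T, P))                  # orthogonal projector: P^T = P
    x = rng.standard_normal(6)
    print(np.allclose(V.T @ (x - P @ x), 0))    # x - Px is orthogonal to range(V)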

  8. Projection/Petrov-Galerkin framework: dim K = dim L ≪ dim C^n.
     Find x ∈ x_0 + K such that r = b − Ax ⊥ L.

  9. Projection/Petrov-Galerkin framework (continued).
     ◮ Error projection (A SPD, L = K). Theorem: x ∈ argmin_{y ∈ K} ‖y − A^{-1}b‖_A.

  10. Projection/Petrov-Galerkin framework (continued).
     ◮ Residual projection (L = AK). Theorem: x ∈ argmin_{y ∈ K} ‖b − Ay‖_2.
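
A sketch of one generic projection step, assuming bases V of K and W of L are given as matrix columns (the function name and test matrices are illustrative): the large problem reduces to the small system (W^T A V) y = W^T r_0. With W = AV this is the residual-projection case, so the residual norm cannot increase.

    import numpy as np

    def projection_step(A, b, x0, V, W):
        """One Petrov-Galerkin step: find x in x0 + range(V) with b - Ax ⟂ range(W).
        Sketch; assumes the small matrix W^T A V is nonsingular."""
        r0 = b - A @ x0
        y = np.linalg.solve(W.T @ A @ V, W.T @ r0)
        return x0 + V @ y

    rng = np.random.default_rng(3)
    A = rng.standard_normal((8, 8)) + 8 * np.eye(8)
    b = rng.standard_normal(8)
    x0 = np.zeros(8)
    V = rng.standard_normal((8, 3))            # basis of the search space K
    x = projection_step(A, b, x0, V, A @ V)    # L = A K: residual projection
    print(np.linalg.norm(b - A @ x) <= np.linalg.norm(b - A @ x0))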

  11. 1D variations:
     ◮ L = K = span{e_k} ⟺ Gauss-Seidel
     ◮ L = K = span{r}, r = b − Ax ⟺ steepest descent
     ◮ ...

  12. Krylov subspace methods: K = K_m(A, r_0) = span{r_0, A r_0, ..., A^{m-1} r_0}
     ◮ L = K: FOM, CG, SYMMLQ
     ◮ L = AK: GMRES, MINRES

  13. Arnoldi: generating an orthonormal basis for K_m(A, r_0):
     V_m^T V_m = I,  range V_m = K_m(A, r_0),  A V_m = V_{m+1} H̄_m,  [H̄_m]_ij = 0 if i > j + 1.
     Algorithms: (modified) Gram-Schmidt, Householder.
     Arnoldi is used in: FOM, GMRES.
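
A modified Gram-Schmidt Arnoldi sketch in numpy (no breakdown handling; names and the random test matrix are illustrative), verifying the relation A V_m = V_{m+1} H̄_m.

    import numpy as np

    def arnoldi(A, r0, m):
        """Arnoldi with modified Gram-Schmidt: V_{m+1} has orthonormal columns and
        H_bar is (m+1) x m upper Hessenberg with A V_m = V_{m+1} H_bar (sketch)."""
        n = len(r0)
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        V[:, 0] = r0 / np.linalg.norm(r0)
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):              # orthogonalize against previous vectors
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            V[:, j + 1] = w / H[j + 1, j]
        return V, H

    rng = np.random.default_rng(4)
    A = rng.standard_normal((30, 30))
    r0 = rng.standard_normal(30)
    V, H = arnoldi(A, r0, 5)
    print(np.allclose(A @ V[:, :5], V @ H))     # A V_m = V_{m+1} H_bar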

  14. Cannot store the Arnoldi vectors:
     ◮ Restarts: FOM(k), GMRES(k)
     ◮ Incomplete (partial) orthogonalization: IOM/DIOM, QGMRES/DQGMRES

  15. Lanczos: A = A^T ⟹ H_m = H_m^T ⟹ tridiagonal!
     In Gram-Schmidt one only needs to orthogonalize against the two previous Arnoldi vectors
     (three-term recurrence) ⟹ computational savings O(m^2 n) → O(mn).
     Lanczos is used in: D-Lanczos (≈ CG), MINRES (≈ DQGMRES on symmetric A), SYMMLQ,
     the Lanczos method for eigenvalues (+ Rayleigh-Ritz).
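
A sketch of the three-term Lanczos recurrence (without the reorthogonalization a robust implementation needs; names and the test matrix are illustrative), checking that T_m = V_m^T A V_m is tridiagonal.

    import numpy as np

    def lanczos(A, r0, m):
        """Symmetric Lanczos: three-term recurrence producing tridiagonal T_m = V_m^T A V_m.
        Sketch without reorthogonalization (loss of orthogonality is ignored)."""
        n = len(r0)
        V = np.zeros((n, m))
        alpha = np.zeros(m)
        beta = np.zeros(m)                  # beta[j] couples v_j and v_{j+1}
        V[:, 0] = r0 / np.linalg.norm(r0)
        for j in range(m):
            w = A @ V[:, j]
            if j > 0:
                w -= beta[j - 1] * V[:, j - 1]
            alpha[j] = V[:, j] @ w
            w -= alpha[j] * V[:, j]
            if j + 1 < m:
                beta[j] = np.linalg.norm(w)
                V[:, j + 1] = w / beta[j]
        T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
        return V, T

    rng = np.random.default_rng(5)
    B = rng.standard_normal((40, 40)); A = B + B.T      # symmetric test matrix
    V, T = lanczos(A, rng.standard_normal(40), 8)
    print(np.allclose(V.T @ A @ V, T, atol=1e-8))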

  16. "Typical" convergence theorem (GMRES, diagonalizable case A = XΛX^{-1}):
     Theorem: ‖r_k‖_2 ≤ κ(X) ‖r_0‖_2 min_p max_i |p(λ_i)|,
     the minimum taken over polynomials p of degree k with p(0) = 1.

  17. Preconditioning:
     ◮ Left: M^{-1}Ax = M^{-1}b
     ◮ Right: AM^{-1}u = b, x = M^{-1}u.
     Goal: M^{-1}A is easier for Krylov methods than A (e.g. has a few clustered eigenvalues);
     M^{-1} ≈ A^{-1} but "simple" to apply (it is applied in every Krylov iteration).
     Examples: ILU, SAI, multigrid, domain decomposition (DD), ...
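
A preconditioned CG sketch to illustrate where M^{-1} enters (one application per iteration); the diagonal (Jacobi) preconditioner and the diagonally dominant test matrix are illustrative stand-ins for ILU/SAI/multigrid/DD preconditioners.

    import numpy as np

    def pcg(A, b, Minv_apply, x0, tol=1e-8, maxiter=200):
        """Preconditioned conjugate gradients (sketch). Minv_apply(r) returns M^{-1} r;
        M should be SPD and cheap to apply, e.g. a diagonal (Jacobi) preconditioner."""
        x = x0.copy()
        r = b - A @ x
        z = Minv_apply(r)
        p = z.copy()
        rz = r @ z
        for _ in range(maxiter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = Minv_apply(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    rng = np.random.default_rng(6)
    n = 200
    d = rng.uniform(2.5, 100.0, n)                      # widely varying diagonal
    A = np.diag(d) + np.eye(n, k=1) + np.eye(n, k=-1)   # SPD (strictly diagonally dominant)
    b = np.ones(n)
    x = pcg(A, b, lambda r: r / d, np.zeros(n))         # Jacobi preconditioner M = diag(A)
    print(np.linalg.norm(b - A @ x))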

  18. Two-grid algorithm:
     1. (Pre-)smooth the error/residual: underrelaxed Jacobi, GS, ...
     2. Restrict the residual to the coarse grid: r_H = I_h^H r_h
     3. Approximately solve A_H e_H = −r_H
     4. Interpolate onto the fine grid: e_h = I_H^h e_H
     5. Apply the correction: x_h = x_h − e_h
     6. (Post-)smooth the error/residual after the correction
     7. Repeat
     Multigrid: recursion at step 3.
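
A two-grid sketch for the 1D Poisson model problem, assuming weighted Jacobi smoothing, linear-interpolation prolongation, full-weighting restriction, and a Galerkin coarse operator (all of these concrete choices are illustrative); replacing the direct coarse solve in step 3 by recursion gives multigrid.

    import numpy as np

    def poisson1d(n):
        """1D Poisson matrix (Dirichlet), n interior points."""
        return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

    def two_grid(A_h, b, x, n_smooth=3, omega=2.0 / 3.0):
        """One two-grid cycle (sketch); assumes an odd number of interior points."""
        n = len(b)
        nc = (n - 1) // 2
        P = np.zeros((n, nc))                  # linear-interpolation prolongation I_H^h
        for j in range(nc):
            P[2 * j,     j] = 0.5
            P[2 * j + 1, j] = 1.0
            P[2 * j + 2, j] = 0.5
        R = 0.5 * P.T                          # full-weighting restriction I_h^H
        D = np.diag(A_h)
        for _ in range(n_smooth):              # 1. pre-smoothing (weighted Jacobi)
            x = x + omega * (b - A_h @ x) / D
        r_H = R @ (b - A_h @ x)                # 2. restrict the residual
        A_H = R @ A_h @ P                      # Galerkin coarse operator
        e_H = np.linalg.solve(A_H, -r_H)       # 3. solve A_H e_H = -r_H (recursion -> multigrid)
        x = x - P @ e_H                        # 4.-5. interpolate and apply the correction
        for _ in range(n_smooth):              # 6. post-smoothing
            x = x + omega * (b - A_h @ x) / D
        return x

    n = 63
    A = poisson1d(n); b = np.ones(n); x = np.zeros(n)
    for _ in range(10):                        # 7. repeat
        x = two_grid(A, b, x)
    print(np.linalg.norm(b - A @ x))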

  19. Domain decomposition:
     ◮ PDE level: overlapping (Schwarz) vs. non-overlapping (Schur). Then discretize and solve,
       exploiting the special block structure of the discretized system.
     ◮ Discrete level: additive/multiplicative Schwarz

  20. SVD and its applications: A = UΣV^*
     ◮ U, V unitary
     ◮ Σ diagonal with nonnegative entries σ_i = [λ_i(AA^*)]^{1/2} = [λ_i(A^*A)]^{1/2}
     Uses:
     ◮ Least squares
     ◮ Low-rank approximation of A
     ◮ Range space
     ◮ Matrix norms
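
A small numpy illustration of the low-rank-approximation use: truncating the SVD to k terms gives the best rank-k approximation in the 2-norm, with error σ_{k+1} (Eckart-Young); the test matrix and k are arbitrary.

    import numpy as np

    rng = np.random.default_rng(7)
    A = rng.standard_normal((20, 10))
    U, s, Vh = np.linalg.svd(A, full_matrices=False)    # A = U diag(s) V^*

    k = 3
    A_k = U[:, :k] @ np.diag(s[:k]) @ Vh[:k, :]         # best rank-k approximation
    print(np.linalg.norm(A - A_k, 2), s[k])             # 2-norm error equals sigma_{k+1}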

  21. Eigenvalue algorithms for small problems.
     Idea: an eigenvalue-revealing similarity transformation (Schur factorization) A = QRQ^T,
     with R upper triangular.
     Phase I: reduction to Hessenberg form (non-symmetric) / tridiagonal form (symmetric):
     A = QHQ^T, Q^TQ = I, H_ij = 0 for i > j + 1.
     Phase I algorithms: Householder transformations.
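
Phase I in practice: scipy.linalg.hessenberg performs the Householder reduction to upper Hessenberg form; the random test matrix is illustrative.

    import numpy as np
    from scipy.linalg import hessenberg

    rng = np.random.default_rng(8)
    A = rng.standard_normal((6, 6))
    H, Q = hessenberg(A, calc_q=True)           # A = Q H Q^T, H_ij = 0 for i > j + 1
    print(np.allclose(Q @ H @ Q.T, A), np.allclose(np.tril(H, -2), 0))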

  22. Eigenvalue algorithms for small problems. Phase II algorithms:
     ◮ Rayleigh quotient: λ ≈ r(v) = [v^T A v]/[v^T v]
     ◮ Power iteration: v_k = A^k v_0 / ‖·‖; eigenvector of the largest-in-magnitude eigenvalue;
       slow convergence
     ◮ Inverse iteration: v_k = A^{-k} v_0 / ‖·‖; smallest-in-magnitude eigenvalue
     ◮ Shifted inverse iteration: v_k = (A − μI)^{-k} v_0 / ‖·‖; eigenvalue closest to μ
     ◮ Rayleigh quotient iteration: v_{k+1} = (A − λ_k I)^{-1} v_k / ‖·‖, λ_k = r(v_k).
       Cubically convergent for symmetric matrices!
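
A sketch of the last item, Rayleigh quotient iteration, in numpy (function name, iteration count and the symmetric test matrix are illustrative); each step is one inverse-iteration step shifted by the current Rayleigh quotient.

    import numpy as np

    def rayleigh_quotient_iteration(A, v0, iters=5):
        """Rayleigh quotient iteration (sketch); cubically convergent for symmetric A."""
        v = v0 / np.linalg.norm(v0)
        lam = v @ A @ v                                          # Rayleigh quotient r(v)
        for _ in range(iters):
            w = np.linalg.solve(A - lam * np.eye(len(v)), v)     # (A - lam I)^{-1} v
            v = w / np.linalg.norm(w)
            lam = v @ A @ v
        return lam, v

    rng = np.random.default_rng(9)
    B = rng.standard_normal((10, 10)); A = B + B.T
    lam, v = rayleigh_quotient_iteration(A, rng.standard_normal(10))
    print(np.linalg.norm(A @ v - lam * v))       # residual of the computed eigenpair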

  23. Eigenvalue algorithms for small problems. Phase II algorithms (continued):
     ◮ Simultaneous (block) versions of power iteration, and their relation to
     ◮ the QR algorithm with shifts: Q_k R_k = A_k − μ_k I, A_{k+1} = R_k Q_k + μ_k I
     ◮ Power/inverse power iteration for the first/last column of Q_k
     ◮ Wilkinson shift
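
A sketch of the shifted QR algorithm with simple deflation; the shift used here is the trailing diagonal entry rather than the more robust Wilkinson shift, and the function name, tolerance and symmetric test matrix are illustrative.

    import numpy as np

    def qr_algorithm(A, tol=1e-12, max_iters=1000):
        """Shifted QR iteration Q_k R_k = A_k - mu_k I, A_{k+1} = R_k Q_k + mu_k I,
        deflating the last row/column once its subdiagonal entry is negligible (sketch)."""
        Ak = np.array(A, dtype=float)
        m = Ak.shape[0]
        eigs = []
        for _ in range(max_iters):
            if m == 1:
                eigs.append(Ak[0, 0])
                break
            mu = Ak[m - 1, m - 1]                    # simple shift (Wilkinson's is more robust)
            Q, R = np.linalg.qr(Ak[:m, :m] - mu * np.eye(m))
            Ak[:m, :m] = R @ Q + mu * np.eye(m)
            if abs(Ak[m - 1, m - 2]) < tol:          # deflate a converged eigenvalue
                eigs.append(Ak[m - 1, m - 1])
                m -= 1
        return np.sort(eigs)

    rng = np.random.default_rng(10)
    B = rng.standard_normal((6, 6)); A = B + B.T     # symmetric => real eigenvalues
    print(np.allclose(qr_algorithm(A), np.sort(np.linalg.eigvalsh(A))))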

  24. Perturbation analysis for symmetric eigenvalue problems:
     ◮ Eigenvalues (symmetric matrices): |λ(A) − λ(A + E)| ≤ ‖E‖_2;
       |α − λ(A)| ≤ ‖Av − αv‖_2 for some eigenvalue λ(A), for any ‖v‖_2 = 1
     ◮ Eigenvectors (symmetric matrices): (1/2) sin(2·angle) ∼ ‖perturbation‖_2 / (eigenvalue gap)
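
A quick numerical check of the eigenvalue bound, assuming numpy and randomly generated symmetric A and E (illustrative only): sorted eigenvalues of A and A + E differ by at most ‖E‖_2.

    import numpy as np

    rng = np.random.default_rng(11)
    B = rng.standard_normal((8, 8)); A = B + B.T
    E = 1e-3 * rng.standard_normal((8, 8)); E = E + E.T
    dl = np.abs(np.linalg.eigvalsh(A) - np.linalg.eigvalsh(A + E))
    print(dl.max() <= np.linalg.norm(E, 2))          # |lambda_i(A) - lambda_i(A+E)| <= ||E||_2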

  25. Eigenvalue algorithms for large problems:
     ◮ Lanczos for tridiagonalization: T_k = Q_k^T A Q_k
     ◮ Rayleigh-Ritz idea: λ(A) ≈ λ(T_k); error estimates are available
     ◮ Use Phase II algorithms on T_k.
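
In practice, the Lanczos/Rayleigh-Ritz combination for large symmetric problems is available as scipy.sparse.linalg.eigsh (ARPACK's implicitly restarted Lanczos); the sparse 1D Poisson test matrix and parameters below are illustrative.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import eigsh

    n = 2000
    A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')   # sparse 1D Poisson
    vals = eigsh(A, k=3, which='LA', return_eigenvectors=False)            # 3 largest eigenvalues
    print(np.sort(vals))   # close to 2 - 2*cos(j*pi/(n+1)) for j near n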

  26. Most important omissions:
     ◮ Bi-orthogonalization Krylov methods (≈ non-symmetric Lanczos): BiCG, BiCGStab, QMR,
       TFQMR (transpose-free QMR). Most need v ↦ A^T v in addition to v ↦ Av
     ◮ Multigrid cycles: V, W, F, full multigrid
     ◮ Algebraic multigrid
     ◮ Other eigenvalue algorithms for "small" problems: Jacobi (most accurate);
       divide and conquer (fastest for symmetric matrices)
     ◮ Other eigenvalue algorithms for "large" non-symmetric problems:
       ≈ Rayleigh-Ritz idea + Arnoldi or bi-orthogonalization
     ◮ Algorithms for the SVD
