Compact Fourier Analysis for Multigrid Methods




  1. Compact Fourier Analysis for Multigrid Methods. Cortona 2008. Thomas Huckle, joint work with Christos Kravvaritis.

  2. Overview: 1. Multigrid Method; 2. Structured Matrices and Generating Functions; 3. Compact Fourier Analysis for Multigrid Methods based on the symbol.

  3. 1. Multigrid Method. Problem: solve the linear system A x = b. Apply an iterative solver such as Gauss-Seidel via the splitting A = M + N = (L + D) + L^T:
  $$x_0 \text{ given}, \qquad x_{k+1} = x_k + M^{-1}(b - A x_k) = M^{-1} b + \left(I - M^{-1} A\right) x_k .$$
  Convergence depends on the spectral radius of the iteration matrix R = I - M^{-1} A.

  4. 1. Multigrid Method (continued). Observation: fast convergence in the eigenspaces belonging to small eigenvalues of R = I - M^{-1} A, i.e. large eigenvalues of A; slow convergence in the eigenspaces belonging to large eigenvalues of R, i.e. small eigenvalues of A. Idea: a multi-iterative method combining different iterations that remove the error in different subspaces.
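The splitting iteration above can be checked numerically. The sketch below (not from the slides; matrix size n is an illustrative choice) builds the 1D model matrix, forms the Gauss-Seidel splitting M = L + D, and verifies that the spectral radius of R = I - M^{-1} A is below one, which is exactly the convergence condition stated on the slide.

```python
import numpy as np

# 1D model problem: A = tridiag(-1, 2, -1) on n interior points.
n = 32
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Gauss-Seidel splitting A = M + N with M = L + D (strictly lower part plus diagonal).
M = np.tril(A)

# Iteration x_{k+1} = x_k + M^{-1}(b - A x_k); convergence is governed by
# the spectral radius of the iteration matrix R = I - M^{-1} A.
R = np.eye(n) - np.linalg.solve(M, A)
rho = max(abs(np.linalg.eigvals(R)))
```

For this symmetric positive definite model matrix the iteration converges, but `rho` is close to 1 for large n, which is why smoothing alone is slow and a coarse-grid correction is needed.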

  5. Laplacian. Consider an elliptic PDE in 1D, discretized on grid points x_{i-1}, x_i, x_{i+1}, x_{i+2} with mesh width h:
  $$\frac{d^2}{dx^2} u(x) = u_{xx} = f(x); \qquad u(x_0) = a, \quad u(x_{n+1}) = b.$$
  Central differences give
  $$u_{xx}(x_i) \approx \frac{u_{i-1} - 2 u_i + u_{i+1}}{h^2},$$
  leading to the linear system
  $$A u = \begin{pmatrix} 2 & -1 & & \\ -1 & 2 & \ddots & \\ & \ddots & \ddots & -1 \\ & & -1 & 2 \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{pmatrix} = \begin{pmatrix} -h^2 f_1 + u_0 \\ -h^2 f_2 \\ \vdots \\ -h^2 f_n + u_{n+1} \end{pmatrix} = b .$$
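As a concrete instance of the discretization above, the following sketch (names n, alpha, beta, f are illustrative, not from the slides) assembles A and b for u'' = 2 on (0,1) with u(0) = 0, u(1) = 1, whose exact solution is u(x) = x^2; central differences are exact for quadratics, so the discrete solution matches the exact one.

```python
import numpy as np

# Assemble the 1D model problem u'' = f on (0,1) with Dirichlet data.
n = 50
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)      # interior grid points x_1, ..., x_n
alpha, beta = 0.0, 1.0            # boundary values u(0), u(1)
f = 2.0 * np.ones(n)              # u'' = 2 has exact solution u(x) = x^2

A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = -h**2 * f                     # right-hand side from the slide's system
b[0] += alpha                     # boundary contributions enter the first
b[-1] += beta                     # and last equation

u = np.linalg.solve(A, b)
```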

  6. Fourier Analysis. The eigenvectors s_k of the matrix A are closely related to the sine function:
  $$\lambda_k = 2\left(1 - \cos\frac{k\pi}{n+1}\right); \qquad s_k = \left(\sin\frac{k j \pi}{n+1}\right)_{j=1}^{n} \to \sin(kx), \quad x \in [0, \pi].$$
  [Plot: sin(kx) for k = 1, 2, 3.]
  Low frequency: k = 1, eigenmode sin(x), eigenvalue 2(1 - cos(pi/(n+1))) ≈ pi^2/(n+1)^2 ≈ 0.
  High frequency: k = n, eigenmode sin(nx), eigenvalue 2(1 - cos(n pi/(n+1))) ≈ 4.
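The eigenpair formulas above can be verified directly; this is a numerical check (not on the slides) that the sine vectors and the values 2(1 - cos(k pi/(n+1))) really are eigenpairs of tridiag(-1, 2, -1).

```python
import numpy as np

n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
j = np.arange(1, n + 1)

def eigenpair(k):
    # Claimed eigenvector and eigenvalue from the Fourier analysis.
    s_k = np.sin(k * j * np.pi / (n + 1))
    lam_k = 2 * (1 - np.cos(k * np.pi / (n + 1)))
    return lam_k, s_k

# Residual norms ||A s_k - lambda_k s_k|| for a low, medium, and high frequency.
residuals = [np.linalg.norm(A @ s - lam * s)
             for lam, s in map(eigenpair, (1, n // 2, n))]
```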

  7. Fourier Analysis for Jacobi. Error reduction of m Jacobi iteration steps, considered for the different eigenmodes:
  $$e_m = \sum_k \alpha_k \left(\cos\frac{k\pi}{n+1}\right)^m s_k .$$
  Error reduction for k = 1 (low frequency component): cos(pi/(n+1)) ≈ 1.
  Error reduction for k = n/2 (medium frequency component): cos(n pi/(2(n+1))) ≈ 0.
  Error reduction for k = n (high frequency component): cos(n pi/(n+1)) ≈ -1.
  [Plot: lambda(k) = cos(k pi/(n+1)).]
  Very good error reduction for medium frequencies; poor error reduction for low and high frequencies.
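The per-mode damping factors above can be observed by applying one Jacobi sweep to each eigenmode. A small check (not from the slides; n is an illustrative choice):

```python
import numpy as np

# Jacobi iteration matrix for A = tridiag(-1, 2, -1): J = I - D^{-1} A with D = 2I.
n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
J = np.eye(n) - A / 2
j = np.arange(1, n + 1)

factors = {}
for k in (1, n // 2, n):
    s_k = np.sin(k * j * np.pi / (n + 1))
    # Amplitude ratio after one sweep; all entries agree since s_k is an eigenvector.
    factors[k] = (J @ s_k)[0] / s_k[0]
```

The measured factors match cos(k pi/(n+1)): near 1 for k = 1, near 0 for k = n/2, near -1 for k = n.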

  8. Fourier Analysis for Damped Jacobi. Is it possible to modify the Jacobi iteration such that it removes the high frequency error in the same way as Gauss-Seidel? Solution: damped Jacobi,
  $$x^{(k+1)} = x^{(k)} + \omega D^{-1} r^{(k)} = x^{(k)} + \omega D^{-1}\left(b - A x^{(k)}\right) = \omega D^{-1} b + \left(I - \omega D^{-1} A\right) x^{(k)} = \frac{\omega}{2} b + \left(I - \frac{\omega}{2} A\right) x^{(k)},$$
  using D = 2I. Error reduction for the eigenmode s_k:
  $$\left(I - \frac{\omega}{2} A\right) s_k = \left(1 - \omega \left(1 - \cos\frac{k\pi}{n+1}\right)\right) s_k = \left(1 - \omega + \omega \cos\frac{k\pi}{n+1}\right) s_k .$$

  9. Error Reduction for Damped Jacobi. The high frequency modes are related to pi/2 <= x <= pi, resp. n/2 <= k <= n:
  k = n/2, x = pi/2: 1 - omega (1 - cos(pi/2)) = 1 - omega;
  k = n, x = pi: 1 - omega + omega cos(pi) = 1 - 2 omega.
  To minimize this function, we have to choose omega such that these two values have the same absolute value but different sign:
  $$1 - 2\omega = \omega - 1 \ \Rightarrow\ \omega_{opt} = \frac{2}{3}.$$
  Error reduction for these high frequency modes: 1 - omega_opt = 1/3.
  [Plot: error reduction depending on k and omega.]
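The optimality of omega = 2/3 can be confirmed by sampling the smoothing factor, i.e. the worst damping over the high frequencies x in [pi/2, pi]. This sketch (grids and the helper `smoothing_factor` are illustrative, not from the slides) recovers the value 1/3 and the optimal omega numerically:

```python
import numpy as np

# High-frequency range of the symbol 1 - omega*(1 - cos x).
x = np.linspace(np.pi / 2, np.pi, 1001)

def smoothing_factor(omega):
    # Worst-case damping of high-frequency modes for a given omega.
    return np.max(np.abs(1 - omega * (1 - np.cos(x))))

# Search a grid of damping parameters for the minimizer.
omegas = np.linspace(0.1, 1.0, 91)
best = min(omegas, key=smoothing_factor)
```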

  10. Coarsening. Similarly, the high frequency error components are reduced by Gauss-Seidel and Red-Black Gauss-Seidel. After reducing the high frequency error by some smoothing iterations with damped Jacobi or GS, the residual b - Ax^{(k)} is smooth and can be represented on a coarser grid. Mapping from fine to coarse representation: a smooth function in the fine discretization (u_1, u_2, u_3, ...)^T is mapped to its coarse discretization (u_2, u_4, u_6, ...)^T by a projection or restriction, e.g. trivial injection.

  11. Coarsening (continued). Better coarsening by a weighted mean:
  $$x_i \to \frac{x_{i-1} + 2 x_i + x_{i+1}}{4}, \qquad i = 2, 4, 6, \ldots$$
  Fine to coarse via the restriction matrix R:
  $$u_{coarse} = R u = \frac{1}{4} \begin{pmatrix} 1 & 2 & 1 & & & \\ & & 1 & 2 & 1 & \\ & & & & \ddots & \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{pmatrix} = \begin{pmatrix} u_{1,coarse} \\ u_{2,coarse} \\ \vdots \\ u_{n/2,coarse} \end{pmatrix}.$$
  Coarse to fine via the prolongation P = 2 R^T:
  $$u = P u_{coarse}, \qquad u_i = \begin{cases} u_{(i+1)/2,\,coarse} & \text{for } i \text{ odd}, \\ \left(u_{i/2,\,coarse} + u_{i/2+1,\,coarse}\right)/2 & \text{for } i \text{ even}. \end{cases}$$
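The restriction and prolongation matrices above are easy to build explicitly. A sketch (assuming the common convention n = 2*nc + 1 fine interior points for nc coarse points; the helper name `restriction` is illustrative):

```python
import numpy as np

def restriction(nc):
    # Full-weighting restriction with stencil [1 2 1]/4 per coarse point.
    n = 2 * nc + 1
    R = np.zeros((nc, n))
    for i in range(nc):
        R[i, 2 * i : 2 * i + 3] = [0.25, 0.5, 0.25]
    return R

nc = 7
R = restriction(nc)
P = 2 * R.T                     # prolongation P = 2 R^T, as on the slide

# P copies coarse values to the odd fine points and averages neighbouring
# coarse values at the even interior fine points.
uc = np.arange(1.0, nc + 1)
v = P @ uc
```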

  12. PDE in 2D.
  $$u_{xx} + u_{yy} = f(x, y); \qquad u(x, y) = g(x, y) \text{ on the boundary } x = x_0, x_{n+1},\ y = y_0, y_{n+1},$$
  discretized on a grid (x_i, y_j) by the five-point stencil
  $$\frac{u_{i-1,j} - 2 u_{i,j} + u_{i+1,j}}{h^2} + \frac{u_{i,j-1} - 2 u_{i,j} + u_{i,j+1}}{h^2} = f_{i,j}.$$
  With the 1D matrix A_1 = tridiag(-1, 2, -1), the 2D matrix is
  $$A = A_1 \otimes I + I \otimes A_1 = \begin{pmatrix} A_1 + 2I & -I & \\ -I & \ddots & \ddots \\ & \ddots & \ddots \end{pmatrix},$$
  a block tridiagonal matrix with diagonal entries 4 and entries -1 for the four grid neighbours.
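The Kronecker-product structure above maps directly to code. A small check (grid size m is illustrative) that A_1 (x) I + I (x) A_1 produces the five-point stencil with 4 on the diagonal and -1 for the neighbours:

```python
import numpy as np

m = 5                                           # interior points per direction
A1 = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)

# 2D Laplacian on an m x m interior grid via Kronecker products.
A = np.kron(A1, np.eye(m)) + np.kron(np.eye(m), A1)
```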

  13. Twogrid and Multigrid. (1) Apply a few smoothing iterations (damped Jacobi, GS, RB-GS). (2) Coarse grid correction:
  - Consider the residual equation A(x^{(k)} + x_correction) = b related to the current approximate solution x^{(k)}, i.e. A x_correction = b - A x^{(k)} = r_k.
  - Restrict the residual to the coarse grid via R.
  - Solve the residual equation on the coarse grid.
  - Prolongate the solution back to the fine grid.
  - Add the correction to x^{(k)}.
  Repeat until convergence. V-cycle multigrid: apply presmoothing, coarsening, prolongation, and postsmoothing recursively on each level, solving exactly on the coarsest grid. This works if the coarse system is "similar" to the original problem.
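The two-grid steps listed above can be sketched end to end. This is a minimal illustration (not the authors' code; damped Jacobi smoother with omega = 2/3, full-weighting R, P = 2 R^T, Galerkin coarse matrix, exact coarse solve, and the grid sizes are illustrative choices):

```python
import numpy as np

def two_grid(A, b, x, R, P, pre=2, post=2, omega=2/3):
    D = np.diag(A)
    for _ in range(pre):                          # (1) pre-smoothing
        x = x + omega * (b - A @ x) / D
    Ac = R @ A @ P                                # Galerkin coarse-grid operator
    e_c = np.linalg.solve(Ac, R @ (b - A @ x))    # (2) solve restricted residual eq.
    x = x + P @ e_c                               # prolongate and add correction
    for _ in range(post):                         # post-smoothing
        x = x + omega * (b - A @ x) / D
    return x

nc = 15
n = 2 * nc + 1
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
R = np.zeros((nc, n))
for i in range(nc):
    R[i, 2 * i : 2 * i + 3] = [0.25, 0.5, 0.25]
P = 2 * R.T

b = np.ones(n) * (1 / (n + 1)) ** 2
x = np.zeros(n)
for _ in range(10):                               # repeat until converged
    x = two_grid(A, b, x, R, P)
```

A full multigrid V-cycle would call itself recursively in place of the exact coarse solve.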

  14. Two Grid Error Reduction. Smoothing with M leads to error reduction by I - M^{-1} A. Error reduction by the coarse grid correction x_new = x_old + P A_coarse^{-1} P^T (b - A x_old):
  $$e_{new} = x - x_{new} = e_{old} - P A_{coarse}^{-1} P^T A e_{old} = \left(I - P A_{coarse}^{-1} P^T A\right) e_{old}.$$
  The coarse matrix A_coarse is given as the Galerkin projection A_coarse = P^T A P = R A R^T.

  15. Two Grid Error Reduction (continued). Overall error reduction by pre- and post-smoothing with m_pre, resp. m_post iteration steps with M_pre, resp. M_post, and the coarse grid correction:
  $$\left(I - M_{post}^{-1} A\right)^{m_{post}} \cdot \left(I - P \left(P^T A P\right)^{-1} P^T A\right) \cdot \left(I - M_{pre}^{-1} A\right)^{m_{pre}}.$$


