CS 294-73 Software Engineering for Scientific Computing
Lecture 16: Multigrid (structured grids revisited)
Guest lecturer: Hans Johansen, hjohansen@lbl.gov
Laplacian, Poisson’s equation, Heat Equation

Laplacian: Δφ = Σ_{d=1}^{D} ∂²φ/∂x_d²
Poisson’s equation: Δφ = f on Ω, with φ = 0 on ∂Ω
Heat equation: ∂φ/∂t = Δφ

[Figure: grid with interior nodes (blue: Ω) and boundary nodes (black: ∂Ω).]

10/17/2017 CS294-73 – Lecture 16
Discretizing on a rectangle (or a union of rectangles)

(Δ_h φ)_i = (1/h²) Σ_{d=1}^{D} (φ_{i+e^d} − 2 φ_i + φ_{i−e^d})

Defined only on interior (blue) nodes. Values at black nodes are set to zero if we are using the boundary conditions given above.
Solving Poisson’s equation

Want to solve the linear system of equations Δ_h φ = f for φ defined on the interior (blue) grid points i, with the values at the black points set to zero by the boundary conditions. This makes sense, since the stencil for the operator requires only nearest neighbors.
Point Jacobi (ref: Briggs, et al., A Multigrid Tutorial)

As in the unstructured grid case, we can use point Jacobi to solve these equations:

φ^{l+1} = φ^l + λ (Δ_h φ^l − f)

where λ is the relaxation parameter for our iterative scheme. Each iteration corrects the k-th eigenmode of Δ_h at a different rate.

[Figure: N=16, k=1 error mode after 16 iterations … O(N²) operations to get this far …]
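As a concrete illustration (not from the slides), here is a minimal 1D Python sketch of this iteration applied to the smooth k=1 eigenmode on an N=16 grid with homogeneous Dirichlet values; the helper name `jacobi_sweep` and the choice λ = h²/4 are assumptions for the demo:

```python
import numpy as np

def jacobi_sweep(phi, f, h, lam):
    # phi <- phi + lam * (Lap_h(phi) - f); boundary values are held at zero
    lap = np.zeros_like(phi)
    lap[1:-1] = (phi[:-2] - 2.0 * phi[1:-1] + phi[2:]) / h**2
    return phi + lam * (lap - f)

N = 16
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
phi = np.sin(np.pi * x)          # k=1 eigenmode as the initial error (f = 0)
f = np.zeros_like(phi)
for _ in range(16):
    phi = jacobi_sweep(phi, f, h, lam=h**2 / 4.0)
# even after O(N) sweeps, the smooth k=1 mode is only mildly reduced
```

Because sin(πx) is an exact eigenmode of the discrete operator, each sweep multiplies it by the same factor, just under 1, which is why the error barely moves in the figure.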
Smoothing properties of point Jacobi

Define the error δ^l = φ − φ^l. Even though we don’t know the error, we can compute the residual

R^l = f − Δ_h φ^l, so that Δ_h δ^l = R^l

to provide a measure for the error: e.g. convergence if ||R^l|| < ε. We can also see how the error behaves under point Jacobi:

δ^{l+1} = δ^l + λ Δ_h δ^l
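The relation Δ_h δ = R can be checked numerically; a short Python sketch (the helper `lap1d` and the manufactured solution are my constructions for the demo):

```python
import numpy as np

def lap1d(u, h):
    # discrete Laplacian on interior nodes; boundary entries stay zero
    out = np.zeros_like(u)
    out[1:-1] = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2
    return out

N = 32
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
phi_exact = np.sin(2.0 * np.pi * x) * x * (1.0 - x)  # zero on the boundary
f = lap1d(phi_exact, h)                # manufactured right-hand side
phi = phi_exact + 0.1 * np.sin(5.0 * np.pi * x)  # an iterate with known error
delta = phi_exact - phi                # the (normally unknown) error
R = f - lap1d(phi, h)                  # the computable residual
# on interior nodes, lap1d(delta, h) equals R
```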
Smoothing properties of point Jacobi

For example, in 1D choose λ = h²/4. Then

φ^{l+1}_i = ¼ φ^l_{i−1} + ½ φ^l_i + ¼ φ^l_{i+1} − (h²/4) f_i
Smoothing properties of point Jacobi
• The value at the new iteration is an average of the values at the old iteration: a weighted sum with positive weights that sum to 1.
• The max and min values are always strictly decreasing (for f = 0). It smooths the local error very rapidly.
• The error equation looks like forward Euler for the heat equation – it smooths higher wavenumbers faster.
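The averaging property is easy to demonstrate; this 1D Python sketch (my construction, using the λ = h²/4 weights ¼, ½, ¼ with f = 0) applies repeated sweeps to a random error field and records the max:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32
phi = np.zeros(N + 1)
phi[1:-1] = rng.uniform(-1.0, 1.0, N - 1)   # rough error field, zero BCs

# with lambda = h^2/4 and f = 0 the sweep is the weighted average
#   phi_i <- 1/4 phi_{i-1} + 1/2 phi_i + 1/4 phi_{i+1}
maxes = []
for _ in range(20):
    maxes.append(np.max(np.abs(phi)))
    phi[1:-1] = 0.25 * phi[:-2] + 0.5 * phi[1:-1] + 0.25 * phi[2:]
maxes.append(np.max(np.abs(phi)))
# maxes is non-increasing: each new value is a convex combination of old ones
```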
Point Jacobi – smoothing example

N=128, k=1, for 128 iterations: 64× the work, but worse error! What about other modes? k=2, 4, 8 for 128 iterations: each closer to the final solution.

Point Jacobi for Poisson requires O(N^{D+2}) work for a specified level of error (and leaves “smooth” error). Why? Discrete Fourier analysis …

[Figures: k=1, 2, 4, 8 error modes after 128 iterations.]
Discrete Fourier Analysis of Point Jacobi

Assume periodic BCs for Poisson, and recall the discrete Fourier modes (h = 1/N, z = e^{2πih}):

(φ_k)_j = z^{kj} = e^{2πikjh}

And finite difference operators are diagonalized:

(Δ_h φ_k)_j = Λ(k) (φ_k)_j,  Λ(k) = (z^k − 2 + z^{−k})/h² = (2/h²)(cos(2πkh) − 1)
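This diagonalization can be checked numerically; a short Python sketch on a periodic grid (N = 32 and k = 5 are arbitrary choices for the demo):

```python
import numpy as np

N = 32
h = 1.0 / N
jj = np.arange(N)
k = 5
mode = np.exp(2j * np.pi * k * jj * h)   # (phi_k)_j = z^{kj}
# periodic discrete Laplacian via index wrap-around
lap_mode = (np.roll(mode, -1) - 2.0 * mode + np.roll(mode, 1)) / h**2
Lam = (2.0 / h**2) * (np.cos(2.0 * np.pi * k * h) - 1.0)
# lap_mode equals Lam * mode: the Fourier mode is an eigenvector of Delta_h
```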
Discrete Fourier Analysis of Point Jacobi

So what does point Jacobi do in Fourier space?

δ^{l+1} = δ^l + λ Δ_h δ^l ⇒ F_k(δ^{l+1}) = F_k(δ^l) + λ Λ(k) F_k(δ^l) = (1 + λ Λ(k)) F_k(δ^l) = ω(k) F_k(δ^l)

where each error mode’s damping factor is:

ω(k) = 1 + λ Λ(k) = 1 + λ (z^k − 2 + z^{−k})/h² = 1 + (2λ/h²)(cos(2πkh) − 1)
Discrete Fourier Analysis of Point Jacobi

What does this look like for β = 2πkh, k ∈ [0, N/2] (real spectrum)? Absorbing the h² into λ,

ω(β) = 1 + 2λ(cos β − 1)

• λ = 1/4: highest-k modes completely damped
• λ = 1/8: meh
• λ = 1/3: most high-k modes damped
• λ = 1/2: +/− (highest-k) modes undamped!
No choice helps the lowest-k modes.

[Figure: ω(β) for λ = 1/8, 1/4, 1/3, 1/2 over β ∈ [0, π].]
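A quick numeric check of ω(β) = 1 + 2λ(cos β − 1) at the highest mode β = π reproduces these labels (the helper `omega` is mine):

```python
import numpy as np

def omega(beta, lam):
    # per-sweep damping factor; lam here is dimensionless (the h^2 is absorbed)
    return 1.0 + 2.0 * lam * (np.cos(beta) - 1.0)

# at beta = pi (highest mode):
#   lam = 1/4 -> 0 (completely damped),  lam = 1/2 -> -1 (undamped, sign flips),
#   lam = 1/3 -> -1/3,                   lam = 1/8 -> 1/2 ("meh")
# at small beta (smooth modes), omega stays near 1 for every choice of lam
```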
Point Jacobi Fourier Analysis: Conclusions
• A few iterations of point Jacobi can be tuned to reduce the “high frequency” error with just a few sweeps.
• These modes can be fixed locally – across a small number of points – while the “smooth” error persists globally.
• As the grid is refined (N increases, h decreases), smooth errors (k=1) are less and less responsive to point Jacobi: even with the largest stable max(λ) = h²/2,

ω(1) = 1 + (2λ/h²)(cos(2πh) − 1) ≈ 1 − Ch²

• Ideally, we’d use PJ on those modes separately on a coarse grid N_c, h_c, where λ_c is stable and has a good convergence rate … Multigrid!
• Then somehow blend the answers?
Multigrid
• Do a few iterations of point Jacobi on your grid to obtain φ^l.
• Compute the residual R = f − Δ_h φ^l.
• Average the residual down onto a grid coarsened by a factor of 2: R_c = Av(R).
• Apply point Jacobi to solve the residual equation Δ_{2h} δ_c = R_c for δ_c.
• Interpolate the correction back onto the fine grid solution: φ^l := φ^l + I(δ_c).
• Smooth again using point Jacobi.
Multigrid
• If the number of grid points is 2^M + 1, we can apply this recursively.

multigrid(phi, f, h, level)
    phi := phi + lambda*(L(phi) - f);    (p times)
    if (level > 0)
        R = f - L(phi);
        Rcoarse = Av(R);
        delta = 0.;
        call multigrid(delta, Rcoarse, 2h, level-1);
        phi += Int(delta);
    endif;
    phi := phi + lambda*(L(phi) - f);    (p times)
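A runnable 1D version of this recursive scheme can be sketched in Python (not from the slides; the helper names `lap`, `smooth`, `average`, `interpolate` and the choices λ = h²/4, p = 2 are my assumptions, and the residual is taken as f − L(φ) so the interpolated correction is added):

```python
import numpy as np

def lap(u, h):
    # discrete Laplacian on interior nodes; boundary entries stay zero
    out = np.zeros_like(u)
    out[1:-1] = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2
    return out

def smooth(u, f, h, p=2):
    # p sweeps of point Jacobi with lambda = h^2/4
    for _ in range(p):
        u = u + (h**2 / 4.0) * (lap(u, h) - f)
    return u

def average(r):
    # full weighting (1/4, 1/2, 1/4) onto the coarse grid
    rc = np.zeros((len(r) - 1) // 2 + 1)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def interpolate(dc):
    # linear interpolation back to the fine grid
    d = np.zeros(2 * (len(dc) - 1) + 1)
    d[::2] = dc
    d[1::2] = 0.5 * (dc[:-1] + dc[1:])
    return d

def multigrid(phi, f, h, level):
    phi = smooth(phi, f, h)                 # pre-smooth
    if level > 0:
        R = f - lap(phi, h)                 # residual
        Rc = average(R)                     # restrict
        delta = multigrid(np.zeros_like(Rc), Rc, 2.0 * h, level - 1)
        phi = phi + interpolate(delta)      # correct
    return smooth(phi, f, h)                # post-smooth

# solve Lap_h(phi) = f with a manufactured discrete solution
N = 64
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
phi_exact = np.sin(np.pi * x)
f = lap(phi_exact, h)
phi = np.zeros_like(f)
for _ in range(20):
    phi = multigrid(phi, f, h, level=5)
```

Each V-cycle reduces the error by a roughly constant factor independent of N, so a fixed number of cycles (each O(N) work in 1D) reaches a given tolerance, versus the O(N³) work of point Jacobi alone in 1D.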
Averaging and Interpolation
• Conservation of total charge.
• Adjoint & order conditions, other considerations (see Trottenberg, et al., Multigrid).
• For our second-order accurate discretization on a nodal-point grid, we can use the trapezoidal rule for averaging and bilinear interpolation:

A = (1/16) × [ 1 2 1 ; 2 4 2 ; 1 2 1 ],   I = [ ¼ ½ ¼ ; ½ 1 ½ ; ¼ ½ ¼ ]

Even if the grid is not a power of 2, we can apply a direct solve at the bottom. 3 levels in 3D leads to a reduction by 512 in the number of unknowns.
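In 1D the corresponding stencils are (¼, ½, ¼) for averaging and (½, 1, ½) for interpolation, and the adjoint condition can be checked directly; a Python sketch (the helper names and the small grid sizes are mine):

```python
import numpy as np

def average(r):
    # 1D full weighting (1/4, 1/2, 1/4) onto coarse interior nodes
    rc = np.zeros((len(r) - 1) // 2 + 1)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def interpolate(dc):
    # 1D linear interpolation, coarse -> fine
    d = np.zeros(2 * (len(dc) - 1) + 1)
    d[::2] = dc
    d[1::2] = 0.5 * (dc[:-1] + dc[1:])
    return d

nf, nc = 9, 5                                          # N=8 fine, N=4 coarse
A = np.array([average(e) for e in np.eye(nf)]).T       # coarse x fine matrix
P = np.array([interpolate(e) for e in np.eye(nc)]).T   # fine x coarse matrix
# interior rows of A equal (1/2) * P^T: restriction is the scaled adjoint of
# interpolation, and each interior row of A sums to 1 (conservation)
```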
Multigrid Convergence: 1D example

[Figure: N=128, RHS of step functions.]
[Figure: N=128, exact solution.]
Multigrid Convergence: 1D example

N=128, point Jacobi only:
  Iterations   Max Error
  N            ~0.9
  8·N          ~0.45
  N²           ~2e-6

Coarse solution only:
  Depth   Max Error
  1       ~9e-4
  2       ~4e-3
  3       ~2e-2
  4       ~6e-2
  5       ~0.25
Multigrid Convergence: 1D example

What’s the error from a coarse grid solution? Look at the Fourier modes …
• The solution has lots of high-k (wave number) content.
• The solution on the 1st coarse level matches low k well, but has high-k error (piecewise linear interpolation).
• The solution on the 2nd coarse level has mid-k error too (PJ less effective).
• The solution on the 3rd coarse level has errors in most k (solving the wrong problem).
• The MG V-cycle gets all k.

[Figure: Fourier mode amplitudes of the solution and of the coarse-level solutions.]