1. On the regularizing power of multigrid-type algorithms
   Marco Donatelli, Stefano Serra Capizzano
   Università dell'Insubria, Dip. Fisica e Matematica - Sede di Como
   Via Valleggio 11 - 22100 Como, Italy (donatelli@uninsubria.it)

2. Outline
   • Image restoration using boundary conditions (BC)
   • Spectral properties of the coefficient matrices
   • Multi-Grid Methods (MGM)
   • Two-Level (TL) regularization
   • Multigrid regularization
   • Numerical experiments
   • Direct multigrid regularization

3. Image restoration with BCs
   Using boundary conditions (BC), the restored image f is obtained by solving
       A f = g + n
   where
   • g = blurred image
   • n = noise (random vector)
   • A = two-level matrix depending on the PSF and on the BCs

   BC                     Structure of A
   Dirichlet              Toeplitz
   periodic               circulant
   Neumann (reflective)   DCT-III
   anti-reflective        DST-I + low-rank

4. Generating function of the PSF
   • 1D problem with a Gaussian PSF:
     x = -5 : 0.1 : 5 (101 points), a = e^{-x^2} (PSF coefficients),
     a = (a_{-50}, ..., a_0, ..., a_{50}) with a_i = a_{-i},
     generating function z(y) = \sum_{i=-50}^{50} a_i e^{-\mathrm{i}\,iy} = a_0 + 2 \sum_{i=1}^{50} a_i \cos(iy).
   • The eigenvalues of A(z) are approximately a uniform sampling of z on [0, π].
   [Figure: plot of z on [0, π], large at the low frequencies and decaying to essentially zero at the high frequencies]
   • The ill-conditioned subspace consists mainly of the high frequencies.
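A minimal sketch of this construction (assuming NumPy/SciPy are available; the matrix size n = 256 is an illustrative choice, not from the slides):

```python
import numpy as np
from scipy.linalg import toeplitz

# Sampled Gaussian PSF, as on the slide: x = -5:0.1:5, a = exp(-x^2)
x = np.arange(-5, 5.05, 0.1)        # 101 points
a = np.exp(-x ** 2)
c = len(a) // 2                     # index of the central coefficient a_0

# Banded symmetric Toeplitz matrix generated by the PSF coefficients
n = 256                             # illustrative size
col = np.zeros(n)
col[: c + 1] = a[c:]                # first column: a_0, a_1, ..., a_50
A = toeplitz(col)

# Generating function; by the symmetry a_i = a_{-i} it is real-valued:
# z(y) = a_0 + 2 * sum_{i=1}^{50} a_i * cos(i*y)
y = np.linspace(0, np.pi, n)
z = a[c] + 2 * sum(a[c + i] * np.cos(i * y) for i in range(1, c + 1))

# The eigenvalues of A approximately sample z uniformly on [0, pi]:
eigs = np.sort(np.linalg.eigvalsh(A))
print(np.abs(eigs - np.sort(z)).max())   # small relative to max(z) for large n
```

Plotting z (or the sorted eigenvalues) shows the symbol decaying to essentially zero at the high frequencies, which is exactly the ill-conditioned subspace the slide refers to.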

5. Smoothing
   • Iterative regularizing methods (e.g. Landweber, CG, ...) first reduce the error in the low frequencies (well-conditioned subspace).
   • Example: f = sin(x), x ∈ [0, π], and g = A f. Solve the linear system A f̃ = g by Richardson iteration; a runnable sketch follows below.
   [Figure: error plots — initial error, after 1 iteration, after 5 iterations]
   • The error is still highly oscillating after ten iterations.
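A runnable sketch of this experiment (assumptions: NumPy/SciPy; the noise level 1e-3 and the size n = 256 are illustrative choices):

```python
import numpy as np
from scipy.linalg import toeplitz

# Gaussian blur matrix, built as in the previous sketch
x = np.arange(-5, 5.05, 0.1)
a = np.exp(-x ** 2)
c = len(a) // 2
n = 256
col = np.zeros(n)
col[: c + 1] = a[c:]
A = toeplitz(col)

# Exact solution f = sin(x) on [0, pi]; data g = A f (plus a little noise)
t = np.linspace(0, np.pi, n)
f = np.sin(t)
rng = np.random.default_rng(0)
g = A @ f + 1e-3 * rng.standard_normal(n)

# Richardson iteration: f_{k+1} = f_k + omega * (g - A f_k)
omega = 1.0 / np.linalg.norm(A, 2)       # ensures convergence since A is spd
fk = np.zeros(n)
for k in range(10):
    fk = fk + omega * (g - A @ fk)

# The smooth error components decay quickly; the oscillatory ones survive.
err_hat = np.abs(np.fft.rfft(f - fk))
print("low-freq error energy:", err_hat[:10].sum(),
      "high-freq error energy:", err_hat[-10:].sum())
```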

6. Multigrid structure
   • Idea: project the system onto a subspace of lower dimension, solve the resulting system there, and interpolate the solution back in order to improve the previous approximation in the larger space.
   • The j-th iteration of the Two-Grid Method (TGM) for the system A x = b:
       (1) x̃ = Smooth(A, x^{(j)}, b, ν)
       (2) r_1 = P (b - A x̃)
       (3) A_1 = P A P^H
       (4) e_1 = A_1^{-1} r_1
       (5) x^{(j+1)} = x̃ + P^H e_1
   • Multigrid (MGM): step (4) becomes a recursive application of the algorithm.
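A minimal sketch of one TGM iteration following steps (1)-(5) (assumptions: NumPy; Richardson is used as the smoother and P is any projector of compatible size):

```python
import numpy as np

def smooth(A, x, b, nu, omega=None):
    """nu steps of Richardson iteration, a simple smoother."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2)
    for _ in range(nu):
        x = x + omega * (b - A @ x)
    return x

def tgm_step(A, x, b, P, nu=1):
    """One iteration of the Two-Grid Method for A x = b."""
    xs = smooth(A, x, b, nu)          # (1) pre-smoothing
    r1 = P @ (b - A @ xs)             # (2) restrict the residual
    A1 = P @ A @ P.conj().T           # (3) coarse-grid operator
    e1 = np.linalg.solve(A1, r1)      # (4) exact coarse solve
    return xs + P.conj().T @ e1       # (5) coarse-grid correction
```

In the MGM, step (4) is replaced by one or more recursive calls to the same procedure on A_1.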

7. Algebraic Multigrid (AMG)
   • AMG uses information on the coefficient matrix and no geometric information on the problem.
   • Different classical smoothers have a similar behavior: in the initial iterations they are unable to effectively reduce the error in the subspace generated by the eigenvectors associated with the small eigenvalues (ill-conditioned subspace)
     ⇓
     the projector is chosen so as to project the error equation onto that subspace.
   • A good choice of the projector leads to an MGM with rapid convergence.
   • For instance, for Toeplitz matrices and matrix algebras, see [Aricò, Donatelli, Serra Capizzano, SIMAX, Vol. 26-1, pp. 186-214].

8. Geometric Multigrid
   • MGM is an optimal solver for elliptic PDEs.
   • For elliptic PDEs (e.g. the Poisson problem) the ill-conditioned subspace is made of the low frequencies (complementary with respect to the Gaussian blur case).
   [Figure: generating function of the 1D Poisson problem on [0, π], small at the low frequencies]
   • For the projector a simple and powerful choice is

       P = \frac{1}{4} \begin{pmatrix} 1 & 2 & 1 & & & \\ & & 1 & 2 & 1 & \\ & & & & \ddots & \end{pmatrix}   (full weighting),

     with 2 P^T giving linear interpolation.
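A minimal sketch (NumPy) of the full-weighting projector and the associated linear interpolation on a 1D grid; the choice n = 2^k - 1 keeps the coarse grids nested:

```python
import numpy as np

def full_weighting(n):
    """Full-weighting restriction from n = 2^k - 1 fine points to (n-1)//2 coarse ones."""
    m = (n - 1) // 2
    P = np.zeros((m, n))
    for i in range(m):
        j = 2 * i + 1                    # fine-grid index under coarse point i
        P[i, j - 1 : j + 2] = [1, 2, 1]  # stencil (1 2 1)
    return P / 4.0

P = full_weighting(7)
interp = 2 * P.T                         # linear interpolation back to the fine grid
```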

9. Image restoration and Multigrid
   • In image deblurring the ill-conditioned subspace is related to the high frequencies, while the well-conditioned subspace is generated by the low frequencies.
   • In order to obtain rapid convergence, algebraic multigrid projects onto the high frequencies, where the noise "lives" ⇒ the noise explodes already at the first iteration (it requires Tikhonov regularization [NLAA, in press]).
   • In this case geometric multigrid projects onto the well-conditioned subspace (low frequencies) ⇒ it converges slowly, but it can be a good iterative regularizer.
   If we have an iterative regularizing method, we can improve its regularizing properties by using it as a smoother in a multigrid algorithm.

10. Projector structure
   • In order to apply the MGM recursively it is necessary to maintain the same structure at each level (Toeplitz, circulant, ...).
   • Projector: P_i = K_{N_i} T_{N_i}(2 + 2\cos(x)), where i is the recursion level and

       T_{N_i}(2 + 2\cos(x)) = \begin{pmatrix} 2 & 1 & & \\ 1 & 2 & \ddots & \\ & \ddots & \ddots & 1 \\ & & 1 & 2 \end{pmatrix} ∈ R^{N_i × N_i}.

   • K_{N_i} ∈ R^{N_{i-1} × N_i} is a down-sampling (cutting) matrix that keeps every second entry; its exact 0/1 pattern differs for the Toeplitz & DST-I, DCT-III, and circulant cases, so that the structure is preserved at the coarse level.
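A minimal sketch (NumPy) of this projector in the Toeplitz / DST-I case, where the cutting matrix K_{N_i} keeps every second row of T_{N_i} (the sampling patterns for the DCT-III and circulant cases differ at the boundary):

```python
import numpy as np

def T_matrix(N):
    """Tridiagonal Toeplitz matrix generated by 2 + 2*cos(x): stencil (1 2 1)."""
    return 2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)

def K_cut(N):
    """Down-sampling matrix keeping the even-indexed entries (Toeplitz/DST-I case)."""
    M = (N - 1) // 2                     # coarse size, assuming N = 2^k - 1
    K = np.zeros((M, N))
    for i in range(M):
        K[i, 2 * i + 1] = 1.0
    return K

N = 7
P = K_cut(N) @ T_matrix(N)               # each row carries the stencil (1 2 1),
                                         # i.e. full weighting up to the factor 1/4
```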

11. Two-Level (TL) regularization
   • Two-Level (TL) regularization (a specialization of the TGM):
     1. No smoothing at step (1): x̃ = x^{(j)}
     2. At step (4), e_1 = A_1^{-1} r_1 is replaced by Smooth(A_1, e_1, r_1, ν)
     Any regularizing iterative method can be used as the smoother; a sketch follows below.
   • Since on the finer grid we do not apply the smoother, we can project the system A x = b instead of the error equation A e = r.
   • P = full weighting applied to the observed image b leads to a reblurring effect followed by down-sampling (noise damping, like a low-pass filter).
   • P^T = linear interpolation reconstructs exactly piecewise linear functions, damping the high oscillations arising from the noise.
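A minimal sketch of one TL step under these two modifications (assumptions: NumPy; `coarse_smooth` is any regularizing iteration, e.g. the Richardson smoother from the TGM sketch; the error-equation form is used, as in the TGM):

```python
import numpy as np

def tl_step(A, x, b, P, coarse_smooth, nu=1):
    """One Two-Level regularization step: no smoothing on the fine grid,
    and a regularizing smoother instead of the exact coarse solve."""
    r1 = P @ (b - A @ x)                                # reblur + down-sample the residual
    A1 = P @ A @ P.conj().T                             # coarse operator
    e1 = coarse_smooth(A1, np.zeros_like(r1), r1, nu)   # replaces e1 = A1^{-1} r1
    return x + P.conj().T @ e1                          # interpolate the correction back
```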

12. Multigrid regularization
   • Applying the Two-Level algorithm recursively, we obtain a multigrid method (V-cycle).
   • Using a greater number of recursive calls (e.g. W-cycle), the algorithm "works" more in the well-conditioned subspace, but it becomes more difficult to define an early stopping criterion.
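One plausible recursive structure for this method, sketched under the assumption that, as in the TL scheme, smoothing is applied only on the coarse levels; gamma = 1 gives a V-cycle and gamma = 2 a W-cycle:

```python
import numpy as np

def mgm_step(A, x, b, projector, smooth, nu=1, gamma=1, min_size=8):
    """Recursive Two-Level regularization (sketch, assumptions as above)."""
    P = projector(A.shape[0])
    r1 = P @ (b - A @ x)
    A1 = P @ A @ P.conj().T
    if A1.shape[0] <= min_size:
        e1 = smooth(A1, np.zeros_like(r1), r1, nu)   # TL: smoother replaces the solve
    else:
        e1 = np.zeros_like(r1)
        for _ in range(gamma):                       # gamma recursive calls per level
            e1 = mgm_step(A1, e1, r1, projector, smooth, nu, gamma, min_size)
        e1 = smooth(A1, e1, r1, nu)                  # smoothing on the coarse level
    return x + P.conj().T @ e1
```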

13. Computational cost
   • Let n_0 × n_0 = n × n be the problem size at the finest level, with n_0 = n = 2^α, α ∈ N; at level j the problem size is n_j × n_j, where n_j = 2^{α-j}.
   • Projection j → j+1 costs (7/4) n_j^2 flops; interpolation j+1 → j costs (7/8) n_j^2 flops.
   • Let W(n) be the computational cost of one smoother iteration for a problem of size n × n, with W(n) = c n^2 + O(n), c ≫ 1. The computational cost at the j-th level is about
       c_j = W(n_j) + (21/8) n_j^2 flops.
   • The total cost of one MGM iteration is

       \sum_{j=0}^{\log_2(n)-1} c_j < \frac{4}{3} \left( W(n) + \frac{21}{8} n^2 \right) ≈ \frac{4}{3} W(n).
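A quick numeric check (Python, illustrative) of the geometric-series argument behind the 4/3 factor: summing n_j^2 over the levels gives n^2 (1 + 1/4 + 1/16 + ...) < (4/3) n^2.

```python
alpha = 10
n = 2 ** alpha
sizes = [2 ** (alpha - j) for j in range(alpha)]   # n_j = 2^(alpha - j)
total = sum(nj ** 2 for nj in sizes)
print(total / n ** 2)                              # -> 1.333..., i.e. about 4/3
```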

14. Example 1 (airplane)
   • Periodic BCs
   • Gaussian PSF (A spd)
   • SNR = 100
   [Figures: original image (inner part 128 × 128), blurred image with SNR = 100, image restored with MGM]

15. Restoration error (example 1)
   Relative restoration error e_j = ‖f̄ − f^{(j)}‖_2 / ‖f̄‖_2 as the number of iterations increases, when solving A f = g + n (RichN = Landweber, CGN = CG for the normal equations).

   Method         min_j e_j   argmin_j e_j
   CG             0.1215         4
   Richardson     0.1218         8
   TL(CG)         0.1132         8
   TL(Rich)       0.1134        16
   MGM(Rich,1)    0.1127        12
   MGM(Rich,2)    0.1129         5
   CGN            0.1135       178
   RichN          0.1135       352

   [Figure: relative error vs. number of iterations]

16. Example 2 (SNR = 10)
   • Same image and PSF but much more noise: SNR = 10.
   • For CG and Richardson it is necessary to resort to the normal equations.

   Method          min_j e_j   argmin_j e_j
   CGN             0.1625        30
   RichN           0.1630        59
   TL(CGN)         0.1611        48
   TL(RichN)       0.1613        97
   MGM(RichN,1)    0.1618        69
   MGM(RichN,2)    0.1621        26
   MGM(Rich,1)     0.1648         3
   MGM(Rich,2)     0.1630         1

   [Figure: relative error vs. number of iterations]

17. Example 3 (Saturn)
   • Periodic BCs (exact)
   • Gaussian PSF (λ_min(A) ≈ −10^{-4}, so A is not positive definite)
   • SNR = 50
   [Figures: original image (inner part 128 × 128), PSF, blurred image with SNR = 50]

18. Restoration error (example 3)
   Relative restoration error e_j = ‖f̄ − f^{(j)}‖_2 / ‖f̄‖_2 as the number of iterations increases, when solving the linear system A f = g + n.

   Method          min_j e_j   argmin_j e_j
   CG              0.2033         6
   Richardson      0.2035        12
   TL(CG)          0.1539        18
   TL(Rich)        0.1547        30
   MGM(Rich,1)     0.1421        22
   MGM(Rich,2)     0.1374         8
   CGN             0.1302      2500
   MGM(CGN,1)      0.1297       250
   MGM(RichN,2)    0.1305      1700

   [Figure: relative error vs. number of iterations]

19. Restored images
   [Figures: images restored by CG, MGM(Rich,2), and CGN (normal equations)]

                          CG       MGM(Rich,2)   CGN (normal equations)
   Minimum error          0.2033   0.1374        0.1302
   Number of iterations   6        8             2500

20. Direct multigrid regularization
   • Trend of the error after only one iteration of MGM(Rich, γ), varying γ.
   • It is a direct regularization method with regularization parameter γ.

   γ    e_1
   1    0.25747
   2    0.18944
   3    0.15723
   4    0.14241
   5    0.13658
   6    0.13543
   7    0.13674
   8    0.13947

   (For comparison, CGN reaches a minimum error equal to 0.1302 after 2500 iterations.)
   [Figure: relative error vs. number of iterations for MGM(Rich,1), MGM(Rich,2), MGM(Rich,4), MGM(Rich,6)]
   • The computational cost increases with γ, but not by much (e.g. γ = 8 ⇒ O(N^{1.5})).
