

  1. Ill-Posed Inverse Problems in Image Processing
  Introduction, Structured matrices, Spectral filtering, Regularization, Noise revealing
  I. Hnětynková (1), hnetynko@karlin.mff.cuni.cz; M. Plešinger (2), martin.plesinger@sam.math.ethz.ch; Z. Strakoš (3), strakos@cs.cas.cz
  1,3 Faculty of Mathematics and Physics, Charles University, Prague
  2 Seminar of Applied Mathematics, Dept. of Math., ETH Zürich
  1,2,3 Institute of Computer Science, Academy of Sciences of the Czech Republic
  SNA '11, January 24–28

  2. Recapitulation of Lectures I and II – Linear system
  Consider an ill-posed (square nonsingular) problem
    A x = b,   A ∈ R^{N×N},   x, b ∈ R^N,   b = b^exact + b^noise,
  where
  ◮ A is a discretization of a smoothing operator,
  ◮ singular values of A decay,
  ◮ singular vectors of A represent increasing frequencies,
  ◮ b^exact is smooth and satisfies the discrete Picard condition,
  ◮ b^noise is unknown white noise,
  ◮ ‖b^exact‖ ≫ ‖b^noise‖, but ‖A^{-1} b^exact‖ ≪ ‖A^{-1} b^noise‖.
  We want to approximate x^exact = A^{-1} b^exact.
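A minimal MATLAB sketch of these properties on a concrete test problem; it assumes the function shaw from Hansen's Regularization Tools (the "[Regularization Toolbox]" cited later in this lecture) as the discretized smoothing operator.

    % Inspect the decay of the singular values and the increasing oscillation
    % of the singular vectors of a discretized smoothing operator.
    [A, b_exact, x_exact] = shaw(400);     % assumed: shaw from Regularization Tools
    [U, S, V] = svd(A);
    semilogy(diag(S));                     % singular values decay rapidly
    title('singular values \sigma_j');
    figure;
    plot([V(:,1), V(:,5), V(:,20)]);       % higher index = higher frequency content
    legend('v_1', 'v_5', 'v_{20}');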

  3. Recapitulation of Lectures I and II – Linear system
  Discrete Picard condition (DPC): on average, the components |(b^exact, u_j)| of the true
  right-hand side in the left singular subspaces of A decay faster than the singular values
  σ_j of A, j = 1, ..., N.
  White noise: the components |(b^noise, u_j)|, j = 1, ..., N, do not exhibit any trend.
  Denote δ_noise ≡ ‖b^noise‖ / ‖b^exact‖ the (usually unknown) noise level in the data.
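The DPC and its violation by noise can be inspected with a Picard plot. A sketch, again assuming shaw from Hansen's Regularization Tools; the noise level 10^{-4} is an illustrative choice.

    % Picard plot: compare sigma_j with the spectral coefficients |(b, u_j)|
    % of a noisy right-hand side.
    [A, b_exact, ~] = shaw(400);
    noise = randn(400, 1);
    b = b_exact + 1e-4 * norm(b_exact) * noise / norm(noise);   % delta_noise = 1e-4
    [U, S, ~] = svd(A);
    semilogy(1:400, diag(S), '-', 1:400, abs(U' * b), '.');
    legend('\sigma_j', '|(b, u_j)|');
    xlabel('singular value number');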

  4. Recapitulation of Lectures I and II – Linear system
  Singular values and DPC (SHAW(400)):
  [figure: semilogarithmic plot over the first 60 singular value numbers, showing σ_j in double
  precision arithmetic, σ_j in high precision arithmetic, and |(b^exact, u_j)| in high precision
  arithmetic, with values ranging from about 10^0 down to 10^{-40}]

  5. Recapitulation of Lectures I and II – Linear system
  Violation of DPC for different noise levels (SHAW(400)):
  [figure: semilogarithmic plot over singular value numbers 1, ..., 400, showing σ_j and
  |(b, u_j)| for δ_noise = 10^{-14}, 10^{-8}, and 10^{-4}]

  6. Recapitulation of Lectures I and II – Naive solution
  The components of the naive solution
    x^naive ≡ A^{-1} b = Σ_{j=1}^{k} (u_j^T b^exact / σ_j) v_j + Σ_{j=1}^{k} (u_j^T b^noise / σ_j) v_j
                       + Σ_{j=k+1}^{N} (u_j^T b^exact / σ_j) v_j + Σ_{j=k+1}^{N} (u_j^T b^noise / σ_j) v_j
  (in each pair, the first sum contributes to x^exact and the second is amplified noise)
  corresponding to small σ_j's are dominated by amplified noise. Regularization is used to
  suppress the effect of errors and extract the essential information about the solution.
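A sketch of the naive solution versus a truncated SVD (TSVD) solution that discards the terms with small σ_j; shaw is assumed from Hansen's Regularization Tools, and the truncation level k = 18 is only illustrative.

    % Naive solution A^{-1} b expressed in the SVD basis, and its TSVD truncation.
    [A, b_exact, x_exact] = shaw(400);
    noise = randn(400, 1);
    b = b_exact + 1e-14 * norm(b_exact) * noise / norm(noise);
    [U, S, V] = svd(A);  sigma = diag(S);
    coeff   = (U' * b) ./ sigma;             % u_j^T b / sigma_j
    x_naive = V * coeff;                     % dominated by amplified noise
    k = 18;                                  % illustrative truncation level
    x_tsvd  = V(:, 1:k) * coeff(1:k);        % keep only the first k terms
    [norm(x_naive - x_exact), norm(x_tsvd - x_exact)] / norm(x_exact)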

  7. Recapitulation of Lectures I and II – Regularization methods
  Direct regularization (TSVD, Tikhonov regularization): suitable for solving small ill-posed problems.
  Projection regularization: suitable for solving large ill-posed problems. Regularization is often
  based on regularizing Krylov subspace iterations.
  Hybrid methods: here the outer iterative regularization is combined with an inner direct
  regularization of the projected small problem (i.e., of the reduced model). The algorithm is
  stopped when the regularized solution of the reduced model matches some selected stopping
  criterion based, e.g., on the discrepancy principle, the generalized cross validation, the
  L-curve criterion, or the normalized cumulative periodograms.
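A minimal sketch of one direct method from this list, Tikhonov regularization implemented through SVD filter factors; shaw is assumed from Hansen's Regularization Tools, and the value of λ is illustrative, not chosen by any of the parameter-choice criteria above.

    % Direct Tikhonov regularization via the SVD:
    % x_lambda = sum_j  sigma_j^2/(sigma_j^2 + lambda^2) * (u_j^T b)/sigma_j * v_j.
    [A, b_exact, x_exact] = shaw(400);
    noise = randn(400, 1);
    b = b_exact + 1e-4 * norm(b_exact) * noise / norm(noise);
    [U, S, V] = svd(A);  sigma = diag(S);
    lambda = 1e-3;                                 % illustrative regularization parameter
    phi = sigma.^2 ./ (sigma.^2 + lambda^2);       % Tikhonov filter factors
    x_tikh = V * (phi .* (U' * b) ./ sigma);
    norm(x_tikh - x_exact) / norm(x_exact)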

  8. Outline of the tutorial
  ◮ Lecture I – Problem formulation: Mathematical model of blurring, System of linear algebraic
    equations, Properties of the problem, Impact of noise.
  ◮ Lecture II – Regularization: Basic regularization techniques (TSVD, Tikhonov), Criteria for
    choosing regularization parameters, Iterative regularization, Hybrid methods.
  ◮ Lecture III – Noise revealing: Golub-Kahan iterative bidiagonalization and its properties,
    Propagation of noise, Determination of the noise level, Noise vector approximation, Open problems.

  9. Outline of Lecture III
  ◮ 9. Golub-Kahan iterative bidiagonalization and its properties: Basic algorithm, LSQR method.
  ◮ 10. Propagation of noise: Spectral properties of bidiagonalization vectors, Noise amplification.
  ◮ 11. Determination of the noise level: Motivation, Connection of GK with the Lanczos
    tridiagonalization, Approximation of the Riemann-Stieltjes distribution function, Estimate based
    on distribution functions, Identification of the noise revealing iteration.
  ◮ 12. Noise vector approximation: Basic formula, Noise subtraction, Numerical illustration
    (SHAW and ELEPHANT image deblurring problem).
  ◮ 13. Open problems.

  10. 9. Golub-Kahan iterative bidiagonalization and its properties

  11. 9. Golub-Kahan iterative bidiagonalization and its properties – Basic algorithm
  Golub-Kahan iterative bidiagonalization (GK) of A: given w_0 = 0, s_1 = b / β_1, where β_1 = ‖b‖,
  for j = 1, 2, ...
    α_j w_j = A^T s_j − β_j w_{j−1},   ‖w_j‖ = 1,
    β_{j+1} s_{j+1} = A w_j − α_j s_j,   ‖s_{j+1}‖ = 1.
  Then w_1, ..., w_k is an orthonormal basis of K_k(A^T A, A^T b), and s_1, ..., s_k is an
  orthonormal basis of K_k(A A^T, b). [Golub, Kahan: '65]
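A minimal MATLAB sketch of this recursion; the function name gk_bidiag and its interface are ours, and later sketches in this section assume it is saved as gk_bidiag.m.

    function [S, W, L, Lplus] = gk_bidiag(A, b, k)
    % Golub-Kahan bidiagonalization (plain version, no reorthogonalization).
    % Returns S = [s_1,...,s_{k+1}], W = [w_1,...,w_k], L = L_k, Lplus = L_{k+}.
      N = size(A, 1);
      S = zeros(N, k+1);  W = zeros(N, k);
      alpha = zeros(k, 1);  beta = zeros(k+1, 1);
      beta(1) = norm(b);  S(:,1) = b / beta(1);
      w_prev = zeros(N, 1);
      for j = 1:k
        w = A' * S(:,j) - beta(j) * w_prev;
        alpha(j) = norm(w);   W(:,j) = w / alpha(j);
        s = A * W(:,j) - alpha(j) * S(:,j);
        beta(j+1) = norm(s);  S(:,j+1) = s / beta(j+1);
        w_prev = W(:,j);
      end
      L = diag(alpha);
      for j = 2:k
        L(j, j-1) = beta(j);                % subdiagonal entries beta_2, ..., beta_k
      end
      Lplus = [L; zeros(1, k)];
      Lplus(k+1, k) = beta(k+1);            % L_{k+} = [L_k; beta_{k+1} e_k^T]
    end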

  12. 9. Golub-Kahan iterative bidiagonalization and its properties – Basic algorithm
  Let S_k = [s_1, ..., s_k], W_k = [w_1, ..., w_k] be the associated matrices with orthonormal
  columns. Denote by L_k the k × k lower bidiagonal matrix with α_1, ..., α_k on the diagonal and
  β_2, ..., β_k on the subdiagonal, and by
    L_{k+} = [ L_k ; β_{k+1} e_k^T ]
  the (k+1) × k matrix obtained by appending the row β_{k+1} e_k^T; these bidiagonal matrices
  contain the normalization coefficients. Then GK can be written in the matrix form
    A^T S_k = W_k L_k^T,   A W_k = [S_k, s_{k+1}] L_{k+} = S_{k+1} L_{k+}.
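A quick numerical check of these two matrix relations, assuming the gk_bidiag sketch above and shaw from Hansen's Regularization Tools.

    % Both residuals should be at the level of machine precision.
    [A, b_exact, ~] = shaw(400);
    noise = randn(400, 1);
    b = b_exact + 1e-14 * norm(b_exact) * noise / norm(noise);
    k = 10;
    [S, W, L, Lplus] = gk_bidiag(A, b, k);   % S has k+1 columns, i.e. S_{k+1}
    norm(A' * S(:, 1:k) - W * L')            % A^T S_k = W_k L_k^T
    norm(A  * W         - S * Lplus)         % A W_k   = S_{k+1} L_{k+}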

  13. 9. Golub-Kahan iterative bidiagonalization and its properties – LSQR method
  Regularization based on GK belongs among the popular approaches for solving large ill-posed
  problems. First the problem is projected onto a Krylov subspace using k steps of
  bidiagonalization (regularization by projection),
    A x ≈ b   →   S_{k+1}^T A W_k y = L_{k+} y ≈ β_1 e_1 = S_{k+1}^T b.
  Then, e.g., the LSQR method minimizes the residual,
    min_{x ∈ x_0 + K_k(A^T A, A^T b)} ‖A x − b‖ = min_{y ∈ R^k} ‖L_{k+} y − β_1 e_1‖,
  i.e., the approximation has the form x_k = W_k y_k, where y_k is a least squares solution of the
  projected problem, [Paige, Saunders: '82].
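A sketch of this projected LSQR step using the gk_bidiag helper defined above; the number of steps k = 15 is illustrative only.

    % Solve the small projected least squares problem and lift back, x_k = W_k y_k.
    [A, b_exact, x_exact] = shaw(400);
    noise = randn(400, 1);
    b = b_exact + 1e-14 * norm(b_exact) * noise / norm(noise);
    k = 15;                                  % illustrative number of GK steps
    [S, W, L, Lplus] = gk_bidiag(A, b, k);
    beta1 = norm(b);
    e1 = zeros(k+1, 1);  e1(1) = 1;
    y_k = Lplus \ (beta1 * e1);              % least squares solution of L_{k+} y = beta_1 e_1
    x_k = W * y_k;                           % approximation x_k = W_k y_k
    norm(x_k - x_exact) / norm(x_exact)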

  14. 9. Golub-Kahan iterative bidiagonalization and its properties – LSQR method
  Choice of the Krylov subspace: the vector b is dominated by low frequencies (data) and A^T has
  the smoothing property. Thus A^T b, and also
    K_k(A^T A, A^T b) = span{ A^T b, (A^T A) A^T b, ..., (A^T A)^{k−1} A^T b },
  are dominated by low frequencies.

  15. 9. Golub-Kahan iterative bidiagonalization and its properties – LSQR method
  Here k is in fact the regularization parameter:
  ◮ If k is too small, then the projected problem L_{k+} y ≈ β_1 e_1 does not contain enough
    information about the solution of the original system.
  ◮ If k is too large, then the projected problem is contaminated by noise.
  Moreover, the projected problem may inherit a part of the ill-posedness of the original problem.
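This trade-off can be seen by running the projected LSQR step for increasing k. A sketch assuming the gk_bidiag helper above; without reorthogonalization the picture for larger k may be distorted by loss of orthogonality, and the noise level 10^{-4} is an illustrative choice.

    % Relative error of the projected solution as a function of k:
    % it first decreases and then grows once noise takes over.
    [A, b_exact, x_exact] = shaw(400);
    noise = randn(400, 1);
    b = b_exact + 1e-4 * norm(b_exact) * noise / norm(noise);
    kmax = 30;
    [S, W, L, Lplus] = gk_bidiag(A, b, kmax);
    beta1 = norm(b);  err = zeros(kmax, 1);
    for k = 1:kmax
      e1 = zeros(k+1, 1);  e1(1) = 1;
      y = Lplus(1:k+1, 1:k) \ (beta1 * e1);   % leading block of L_{kmax+} is L_{k+}
      err(k) = norm(W(:, 1:k) * y - x_exact) / norm(x_exact);
    end
    semilogy(1:kmax, err);
    xlabel('number of GK steps k');  ylabel('relative error of x_k');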

  16. 9. Golub-Kahan iterative bidiagonalization and its properties – LSQR method
  Therefore, in hybrid methods, some form of inner regularization (TSVD, Tikhonov regularization)
  is applied to the (small) projected problem. The method then, however, requires:
  ◮ stopping criteria for GK,
  ◮ a parameter choice method for the inner regularization.
  This usually requires solving the problem for many values of the regularization parameter and
  many iterations.
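A sketch of one hybrid step: Tikhonov regularization applied to the projected bidiagonal problem, written as a stacked least squares problem. It reuses the gk_bidiag helper above; k and λ are illustrative values, not produced by any stopping or parameter-choice rule.

    % Hybrid step: inner Tikhonov regularization of the projected problem,
    % min || L_{k+} y - beta_1 e_1 ||^2 + lambda^2 ||y||^2, then x = W_k y.
    [A, b_exact, x_exact] = shaw(400);
    noise = randn(400, 1);
    b = b_exact + 1e-4 * norm(b_exact) * noise / norm(noise);
    k = 20;  lambda = 1e-3;                  % illustrative values only
    [S, W, L, Lplus] = gk_bidiag(A, b, k);
    beta1 = norm(b);
    e1 = zeros(k+1, 1);  e1(1) = 1;
    y_lambda = [Lplus; lambda * eye(k)] \ [beta1 * e1; zeros(k, 1)];
    x_hybrid = W * y_lambda;
    norm(x_hybrid - x_exact) / norm(x_exact)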

  17. 10. Propagation of noise

  18. 10. Propagation of noise – Spectral properties of bidiagonalization vectors
  GK starts with the normalized noisy right-hand side s_1 = b / ‖b‖. Consequently, the vectors s_j
  contain information about the noise.
  Consider the problem SHAW(400) from [Regularization Toolbox] with a noisy right-hand side (the
  noise was artificially added using the MATLAB function randn). As an example we set
    δ_noise ≡ ‖b^noise‖ / ‖b^exact‖ = 10^{−14}.
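A sketch of this setup; the exact scaling convention (normalizing the randn vector so that ‖b^noise‖ / ‖b^exact‖ equals δ_noise exactly) is our assumption, and shaw is again taken from Hansen's Regularization Tools.

    % Build the noisy SHAW(400) test problem with noise level delta_noise = 1e-14.
    [A, b_exact, x_exact] = shaw(400);
    noise = randn(400, 1);                   % white noise, as on the slide
    delta_noise = 1e-14;
    b_noise = delta_noise * norm(b_exact) * noise / norm(noise);
    b = b_exact + b_noise;                   % noisy right-hand side used below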

  19. 10. Propagation of noise – Spectral properties of bidiagonalization vectors
  Components of several bidiagonalization vectors s_j computed via GK with double reorthogonalization:
  [figure: ten panels with the 400 components of s_1, s_6, s_11, s_16, s_17, s_18, s_19, s_20,
  s_21, and s_22, each plotted on the vertical range −0.2 to 0.2]
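Reproducing such vectors calls for GK with reorthogonalization, since plain GK loses orthogonality quickly in floating point. A sketch of a reorthogonalized variant (the function name gk_bidiag_reorth is ours; the reorthogonalization is done twice per vector to mimic the "double reorthogonalization" mentioned on the slide); it is used in the sketch after the next slide.

    function [S, W, L, Lplus] = gk_bidiag_reorth(A, b, k)
    % GK with reorthogonalization of both vector sequences (twice per vector).
      N = size(A, 1);
      S = zeros(N, k+1);  W = zeros(N, k);
      alpha = zeros(k, 1);  beta = zeros(k+1, 1);
      beta(1) = norm(b);  S(:,1) = b / beta(1);
      w_prev = zeros(N, 1);
      for j = 1:k
        w = A' * S(:,j) - beta(j) * w_prev;
        for t = 1:2
          w = w - W(:, 1:j-1) * (W(:, 1:j-1)' * w);   % reorthogonalize against previous w's
        end
        alpha(j) = norm(w);   W(:,j) = w / alpha(j);
        s = A * W(:,j) - alpha(j) * S(:,j);
        for t = 1:2
          s = s - S(:, 1:j) * (S(:, 1:j)' * s);       % reorthogonalize against previous s's
        end
        beta(j+1) = norm(s);  S(:,j+1) = s / beta(j+1);
        w_prev = W(:,j);
      end
      L = diag(alpha);
      for j = 2:k
        L(j, j-1) = beta(j);
      end
      Lplus = [L; zeros(1, k)];  Lplus(k+1, k) = beta(k+1);
    end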

  20. 10. Propagation of noise – Spectral properties of bidiagonalization vectors
  The first 80 spectral coefficients of the vectors s_j in the basis of the left singular vectors
  u_j of A:
  [figure: ten panels with |U^T s_j| for j = 1, 6, 11, 16, 17, 18, 19, 20, 21, 22, plotted on a
  logarithmic scale over the first 80 coefficients]
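A sketch computing these spectral coefficients, assuming the gk_bidiag_reorth helper above and the noisy SHAW(400) problem built earlier; the selected column indices follow the panels of the figure.

    % Spectral coefficients |u_i^T s_j| of the bidiagonalization vectors.
    [A, b_exact, ~] = shaw(400);
    noise = randn(400, 1);
    b = b_exact + 1e-14 * norm(b_exact) * noise / norm(noise);
    [S, W, L, Lplus] = gk_bidiag_reorth(A, b, 22);
    [U, ~, ~] = svd(A);
    C = abs(U' * S);                         % C(i,j) = |u_i^T s_j|
    semilogy(C(1:80, [1 6 11 16 17 18 19 20 21 22]));
    xlabel('index i of the left singular vector u_i');
    legend('s_1','s_6','s_{11}','s_{16}','s_{17}','s_{18}','s_{19}','s_{20}','s_{21}','s_{22}');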
