  1. Ill-Posed Inverse Problems in Image Processing: Introduction, Structured matrices, Spectral filtering, Regularization, Noise revealing.
I. Hnětynková¹ (hnetynko@karlin.mff.cuni.cz), M. Plešinger² (martin.plesinger@sam.math.ethz.ch), Z. Strakoš³ (strakos@cs.cas.cz)
¹,³ Faculty of Mathematics and Physics, Charles University, Prague; ² Seminar for Applied Mathematics, Dept. of Math., ETH Zürich; ¹,²,³ Institute of Computer Science, Academy of Sciences of the Czech Republic.
SNA '11, January 24–28

  2. Recapitulation of Lecture I: Linear system
Consider the problem
  Ax = b,  b = b^exact + b^noise,  A ∈ R^{N×N},  x, b ∈ R^N,
where
◮ A is a discretization of a smoothing operator,
◮ the singular values of A decay,
◮ the singular vectors of A represent increasing frequencies,
◮ b^exact is smooth and satisfies the discrete Picard condition,
◮ b^noise is unknown white noise, with ‖b^exact‖ ≫ ‖b^noise‖, but ‖A⁻¹ b^exact‖ ≪ ‖A⁻¹ b^noise‖.
We want to approximate x^exact = A⁻¹ b^exact.
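A minimal 1-D sketch of this setting may help before the regularization techniques below. The Gaussian blur width, the grid size, and the noise level are illustrative choices, not taken from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
t = np.linspace(0, 1, N)

# Discretized Gaussian blurring operator: a smoothing matrix whose
# singular values decay rapidly.
A = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * 0.01 ** 2))
A /= A.sum(axis=1, keepdims=True)

x_exact = np.sin(np.pi * t) ** 2                    # smooth exact solution
b_exact = A @ x_exact
b_noise = 1e-4 * np.linalg.norm(b_exact) * rng.standard_normal(N) / np.sqrt(N)
b = b_exact + b_noise

# ||b_exact|| >> ||b_noise||, yet the inverted noise dominates:
print(np.linalg.norm(b_exact), np.linalg.norm(b_noise))
print(np.linalg.norm(np.linalg.solve(A, b_exact)),
      np.linalg.norm(np.linalg.solve(A, b_noise)))
```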

  3. Recapitulation of Lecture I: Right-hand side
Smooth right-hand side (including noise):
[Figure: the right-hand side image B.]

  4. Recapitulation of Lecture I: Violation of the discrete Picard condition
Violation of the discrete Picard condition in the noisy b:
[Figure: singular values σ_i of A, projections u_i^T b of the right-hand side onto the left singular subspaces, and the noise level, on a logarithmic scale.]
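A quick way to inspect the (violated) discrete Picard condition numerically is to plot σ_j against |u_j^T b|. This sketch reuses A and b from the snippet above.

```python
import numpy as np
import matplotlib.pyplot as plt

# A and b come from the previous sketch.
U, s, Vt = np.linalg.svd(A)
proj = np.abs(U.T @ b)        # |u_j^T b|, projections on the left singular vectors

plt.semilogy(s, label='singular values sigma_j')
plt.semilogy(proj, label='projections |u_j^T b|')
plt.xlabel('index j')
plt.legend()
plt.show()
```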

  5. Recapitulation of Lecture I: Solution
Using the SVD A = U Σ V^T, the filtered solution is

  $x_{\mathrm{filtered}} = V\,\Phi\,\Sigma^{-1} U^T b = \sum_{j=1}^{N} \phi_j \frac{u_j^T b}{\sigma_j}\, v_j,$

where Φ = diag(φ_1, ..., φ_N). In particular, in the image deblurring problem

  $X_{\mathrm{filtered}} = \sum_{j=1}^{N} \phi_j \frac{u_j^T \mathrm{vec}(B)}{\sigma_j}\, V_j,$

where the V_j are the singular images. The filter factors φ_j are given by some filter function, φ_j = φ(j, A, b, ...); for φ_j = 1, j = 1, ..., N, we get the naive solution.
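A compact sketch of the spectral-filtering formula, assuming the SVD of A is available. The filter factors phi are supplied by the caller; the TSVD, SSVD, and Tikhonov choices follow in the slides below.

```python
import numpy as np

def filtered_solution(U, s, Vt, b, phi):
    """Spectral filtering x = V * diag(phi) * Sigma^{-1} * U^T * b.

    U, s, Vt are the SVD factors of A (as returned by np.linalg.svd);
    phi is a vector of filter factors. phi = ones gives the naive solution.
    """
    coeffs = phi * (U.T @ b) / s      # phi_j * (u_j^T b) / sigma_j
    return Vt.T @ coeffs              # sum_j coeffs_j * v_j

# Example: the naive solution (phi_j = 1 for all j).
# U, s, Vt = np.linalg.svd(A)
# x_naive = filtered_solution(U, s, Vt, b, np.ones_like(s))
```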

  6. Recapitulation of Lecture I: Singular images
Singular images V_j (Gaussian blur, zero boundary conditions, artificial colors):
[Figure: singular images V_j.]

  7. Recapitulation of Lecture I: Naive solution
The naive solution is dominated by high-frequency noise:
[Figure: the naive solution.]

  8. Outline of the tutorial
◮ Lecture I – Problem formulation: Mathematical model of blurring, System of linear algebraic equations, Properties of the problem, Impact of noise.
◮ Lecture II – Regularization: Basic regularization techniques (TSVD, Tikhonov), Criteria for choosing regularization parameters, Iterative regularization, Hybrid methods.
◮ Lecture III – Noise revealing: Golub–Kahan iterative bidiagonalization and its properties, Propagation of noise, Determination of the noise level, Noise vector approximation, Open problems.

  9. Outline of Lecture II
◮ 5. Basic regularization techniques: Truncated SVD, Selective SVD, Tikhonov regularization.
◮ 6. Choosing regularization parameters: Discrepancy principle, Generalized cross validation, L-curve, Normalized cumulative periodogram.
◮ 7. Iterative regularization: Landweber iteration, Cimmino iteration, Kaczmarz's method, Projection methods, Regularizing Krylov subspace iterations.
◮ 8. Hybrid methods: Introduction, Projection methods with inner Tikhonov regularization.

  10. 5. Basic regularization techniques

  11. 5. Basic regularization techniques: Truncated SVD
The simplest regularization technique is the truncated SVD (TSVD). Noise affects x_naive through the components corresponding to the smallest singular values:

  $x_{\mathrm{naive}} = \underbrace{\sum_{j=1}^{k} \frac{u_j^T b}{\sigma_j}\, v_j}_{\text{data dominated}} + \underbrace{\sum_{j=k+1}^{N} \frac{u_j^T b}{\sigma_j}\, v_j}_{\text{noise dominated}}$
  $\phantom{x_{\mathrm{naive}}} = \sum_{j=1}^{k} \frac{u_j^T b^{\mathrm{exact}}}{\sigma_j}\, v_j + \sum_{j=1}^{k} \frac{u_j^T b^{\mathrm{noise}}}{\sigma_j}\, v_j + \sum_{j=k+1}^{N} \frac{u_j^T b^{\mathrm{exact}}}{\sigma_j}\, v_j + \sum_{j=k+1}^{N} \frac{u_j^T b^{\mathrm{noise}}}{\sigma_j}\, v_j.$

  12. Idea: Omit the noise dominated part. Define

  $x_{\mathrm{TSVD}}(k) \equiv \sum_{j=1}^{k} \frac{u_j^T b}{\sigma_j}\, v_j = \sum_{j=1}^{N} \phi_j \frac{u_j^T b}{\sigma_j}\, v_j, \qquad \phi_j = \begin{cases} 1 & \text{for } j \le k, \\ 0 & \text{for } j > k. \end{cases}$

A part of the noise is still present in the solution, $\sum_{j=1}^{k} \frac{u_j^T b^{\mathrm{noise}}}{\sigma_j} v_j$, and a part of the useful information is lost, $\sum_{j=k+1}^{N} \frac{u_j^T b^{\mathrm{exact}}}{\sigma_j} v_j$.
If k is too small, x_TSVD(k) is overregularized (too smooth); if k is too large, x_TSVD(k) is underregularized (noisy).
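A minimal sketch of TSVD filtering, reusing the helper filtered_solution from the spectral-filtering sketch above. The truncation level k is an illustrative input chosen by the caller.

```python
import numpy as np

def tsvd_solution(U, s, Vt, b, k):
    phi = np.zeros_like(s)
    phi[:k] = 1.0                     # step filter: phi_j = 1 for j <= k, else 0
    return filtered_solution(U, s, Vt, b, phi)
```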

  13. 5. Basic regularization techniques: Truncated SVD
The TSVD filter function, k = 2983:
[Figure: singular values σ_i, TSVD-filtered projections φ(i) u_i^T b, the noise level, and the filter function φ(i), on a logarithmic scale.]

  14. 5. Basic regularization techniques: Truncated SVD
The TSVD solution, k = 2983:
[Figure: the TSVD solution for k = 2983.]

  15. 5. Basic regularization techniques: Truncated SVD
Advantages:
◮ Simple idea, simple implementation, simple analysis: A is replaced by $U \Phi^{\dagger} \Sigma V^T$ with $\Phi = \operatorname{diag}(I_k, 0_{N-k})$, i.e. by the rank-k approximation of A.
Disadvantages:
◮ We have to compute the SVD of A (or at least the first k singular triplets).
◮ The choice of the regularization parameter k is usually based on knowledge of the norm of b^noise, which is either revealed by the SVD analysis or given explicitly as additional information.
◮ The noise-dominated part still contains some information useful for the reconstruction, and this information is lost (step filter function).

  16. 5. Basic regularization techniques: Selective SVD
An approach similar to TSVD is the selective SVD (SSVD). Assume ‖b^noise‖ is known. Then

  $\|b^{\mathrm{noise}}\| = \Big(\sum_{j=1}^{N} (u_j^T b^{\mathrm{noise}})^2\Big)^{1/2} \equiv \Delta_{\mathrm{noise}}, \qquad |u_j^T b^{\mathrm{noise}}| \approx \frac{\Delta_{\mathrm{noise}}}{N^{1/2}} \equiv \varepsilon,$

because the u_j represent frequencies and b^noise represents white noise. We define

  $x_{\mathrm{SSVD}}(\varepsilon) \equiv \sum_{|u_j^T b| > \varepsilon} \frac{u_j^T b}{\sigma_j}\, v_j = \sum_{j=1}^{N} \phi_j \frac{u_j^T b}{\sigma_j}\, v_j, \qquad \phi_j = \begin{cases} 1 & \text{for } |u_j^T b| > \varepsilon, \\ 0 & \text{for } |u_j^T b| \le \varepsilon. \end{cases}$
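A minimal sketch of SSVD filtering: keep only the components whose projection |u_j^T b| exceeds the threshold ε ≈ ‖b^noise‖ / N^{1/2}. The noise norm is assumed to be known (or estimated) by the caller, and filtered_solution is the helper defined above.

```python
import numpy as np

def ssvd_solution(U, s, Vt, b, noise_norm):
    eps = noise_norm / np.sqrt(len(b))            # threshold epsilon
    phi = (np.abs(U.T @ b) > eps).astype(float)   # selective step filter
    return filtered_solution(U, s, Vt, b, phi)
```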

  17. 5. Basic regularization techniques: Tikhonov approach
The classical Tikhonov approach is based on penalizing the norm of the solution,

  $x_{\mathrm{Tikhonov}}(\lambda) \equiv \arg\min_x \{\, \|b - Ax\|^2 + \lambda^2 \|Lx\|^2 \,\},$

where
◮ ‖b − Ax‖ represents the residual norm,
◮ ‖Lx‖ represents the (L^T L)-(semi)norm of the solution; often L = I_N (we restrict ourselves to this case), or L is a discretized first- or second-order derivative operator,
◮ λ is the (positive) penalty parameter; clearly $\lim_{\lambda \to 0} x_{\mathrm{Tikhonov}}(\lambda) = x_{\mathrm{naive}}$.

  18. 5. Basic regularization techniques: Tikhonov approach
The Tikhonov minimization problem can be rewritten as

  $x_{\mathrm{Tikhonov}}(\lambda) = \arg\min_x \{\, \|b - Ax\|^2 + \lambda^2 \|Lx\|^2 \,\} = \arg\min_x \left\| \begin{bmatrix} b \\ 0 \end{bmatrix} - \begin{bmatrix} A \\ -\lambda L \end{bmatrix} x \right\|^2,$

i.e. to get the Tikhonov solution we solve the least squares (LS) problem

  $\begin{bmatrix} A \\ -\lambda L \end{bmatrix} x = \begin{bmatrix} b \\ 0 \end{bmatrix}.$

In particular, we do not have to compute the SVD of A.
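A sketch of Tikhonov regularization via the stacked least-squares problem above, with L = I_N. No SVD of A is needed; lam is the penalty parameter λ supplied by the caller.

```python
import numpy as np

def tikhonov_lstsq(A, b, lam):
    N = A.shape[1]
    # Stacked system [A; -lambda*I] x ~ [b; 0]; the sign of the lambda block
    # does not affect the least-squares solution.
    A_stacked = np.vstack([A, -lam * np.eye(N)])
    b_stacked = np.concatenate([b, np.zeros(N)])
    x, *_ = np.linalg.lstsq(A_stacked, b_stacked, rcond=None)
    return x
```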

  19. 5. Basic regularization techniques: Tikhonov approach
A solution of the Tikhonov LS problem

  $\begin{bmatrix} A \\ -\lambda L \end{bmatrix} x = \begin{bmatrix} b \\ 0 \end{bmatrix}$

can be analyzed through the system of normal equations

  $\begin{bmatrix} A \\ -\lambda L \end{bmatrix}^T \begin{bmatrix} A \\ -\lambda L \end{bmatrix} x = \begin{bmatrix} A \\ -\lambda L \end{bmatrix}^T \begin{bmatrix} b \\ 0 \end{bmatrix}, \qquad (A^T A + \lambda^2 L^T L)\, x = A^T b.$

With the SVD of A, A = U Σ V^T, and L = I_N = V V^T we get

  $(\Sigma^2 + \lambda^2 I_N)\, y = \Sigma\, U^T b, \qquad \text{where } y = V^T x \text{ and } x = V y.$
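A small check of this derivation (with L = I_N): the normal-equations solution should agree with the stacked least-squares solution from the tikhonov_lstsq sketch above. A, b, and lam are illustrative inputs.

```python
import numpy as np

def tikhonov_normal_eq(A, b, lam):
    N = A.shape[1]
    # Solve (A^T A + lambda^2 I) x = A^T b directly.
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(N), A.T @ b)

# Example (reusing A and b from the earlier sketch):
# lam = 1e-3
# print(np.linalg.norm(tikhonov_normal_eq(A, b, lam) - tikhonov_lstsq(A, b, lam)))
```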

  20. 5. Basic regularization techniques: Tikhonov approach
Thus

  $x_{\mathrm{Tikhonov}}(\lambda) = V (\Sigma^2 + \lambda^2 I_N)^{-1} \Sigma\, U^T b,$

which gives

  $x_{\mathrm{Tikhonov}}(\lambda) = \sum_{j=1}^{N} \frac{\sigma_j}{\sigma_j^2 + \lambda^2} (u_j^T b)\, v_j = \sum_{j=1}^{N} \frac{\sigma_j^2}{\sigma_j^2 + \lambda^2} \frac{u_j^T b}{\sigma_j}\, v_j = \sum_{j=1}^{N} \phi_j \frac{u_j^T b}{\sigma_j}\, v_j,$

where

  $\phi_j = \frac{\sigma_j^2}{\sigma_j^2 + \lambda^2} \approx \begin{cases} 1 & \text{for } \sigma_j \gg \lambda, \\ \sigma_j^2/\lambda^2 & \text{for } \sigma_j \ll \lambda, \end{cases} \qquad 0 < \phi_j < 1.$
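A minimal sketch of Tikhonov regularization expressed as spectral filtering with the filter factors φ_j = σ_j² / (σ_j² + λ²), reusing the helper filtered_solution from above.

```python
import numpy as np

def tikhonov_filtered(U, s, Vt, b, lam):
    phi = s**2 / (s**2 + lam**2)      # smooth Tikhonov filter factors
    return filtered_solution(U, s, Vt, b, phi)
```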

  21. 5. Basic regularization techniques: Tikhonov approach
The behavior of the Tikhonov filter function:

  22. 5. Basic regularization techniques: Tikhonov approach
The Tikhonov filter function, λ = 8 × 10⁻⁴:
[Figure: singular values σ_i, Tikhonov-filtered projections φ(i) u_i^T b, the noise level, and the filter function φ(i), on a logarithmic scale.]

  23. 5. Basic regularization techniques: Tikhonov approach
The Tikhonov solution, λ = 8 × 10⁻⁴:
[Figure: the Tikhonov solution for λ = 8 × 10⁻⁴.]

  24. 5. Basic regularization techniques: Tikhonov approach
Advantages:
◮ Simple idea; with L = I_N, simple analysis: A is replaced by $U \Phi^{-1} \Sigma V^T$ with $\Phi = (\Sigma^2 + \lambda^2 I_N)^{-1} \Sigma^2$.
◮ We do not have to compute the SVD of A (compare with TSVD).
◮ The solution is given by an LS problem.
◮ The filter function is smooth (compare with TSVD).
Disadvantages:
◮ With L ≠ I_N the analysis is more complicated.
◮ We have to choose the penalty parameter λ (at this moment it is not clear how to do it).
