

  1. Pietro Dell’Acqua Fast and accurate numerical techniques for deblurring models

  2. The image restoration problem [Figure: recorded image vs. true image.] From the knowledge of the observed data (the effect), we want to find an approximation of the true image (the cause).

  3. Blurring model Classical image deblurring problem with space-invariant blurring. Under this assumption the image formation process is modelled by

  b(s) = \int h(s - t) \, \bar{x}(t) \, dt + \eta(s), \qquad s \in \mathbb{R}^2,

  where h is the known impulse response of the imaging system, i.e. the point spread function (PSF), \bar{x} denotes the true physical object, and \eta takes into account measurement errors and noise. [Figure: point spread function.]

  4. Discrete problem We have to solve the linear system Ax = b, where A is the blurring matrix and b = A\bar{x} + \eta is the blurred and noisy image. The associated system of normal equations

  A^H A x = A^H b

  is solved in order to find an approximate least-squares solution. A is a large ill-conditioned matrix; A = A(h_{PSF}, BCs) is a structured matrix, with structure determined by the PSF and by the boundary conditions (BCs).
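To make the discrete model concrete, here is a minimal NumPy sketch that builds b = A\bar{x} + \eta under periodic BCs, where A acts as a circular convolution applied via the 2-D FFT. The image size, Gaussian PSF, and noise level are illustrative choices, not taken from the slides.

```python
import numpy as np

def blur_periodic(x, psf):
    """Apply the space-invariant blur A under periodic BCs:
    a circular convolution, computed with the 2-D FFT."""
    h = np.zeros_like(x, dtype=float)
    h[:psf.shape[0], :psf.shape[1]] = psf
    # Shift so the PSF centre sits at the origin of the periodic grid.
    h = np.roll(h, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(x)))

rng = np.random.default_rng(0)
x_true = rng.random((64, 64))                 # stand-in for the true image
g = np.exp(-0.1 * np.arange(-7, 8) ** 2)
psf = np.outer(g, g) / np.outer(g, g).sum()   # normalized Gaussian PSF
b = blur_periodic(x_true, psf) + 1e-3 * rng.standard_normal(x_true.shape)  # b = A x + eta
```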

  5. Structured matrices
  • Zero BCs: block Toeplitz with Toeplitz blocks (BTTB)
  • Periodic BCs: block circulant with circulant blocks (BCCB) → FFT (Fast Fourier Transform)
  • Reflective BCs: block Toeplitz+Hankel with Toeplitz+Hankel blocks → DCT (Discrete Cosine Transform), for symmetric PSFs
  • Anti-Reflective BCs: block Toeplitz+Hankel with Toeplitz+Hankel blocks plus a low-rank matrix → ART (Anti-Reflective Transform), for symmetric PSFs
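As a sanity check on this BCs-transform correspondence, the following sketch (my own, with an arbitrary 1-D PSF [1, 2, 1]/4 and n = 8) verifies the 1-D analogue of the BCCB case: a circulant matrix is diagonalized by the unitary DFT, and its eigenvalues are the FFT of its first column.

```python
import numpy as np

n = 8
c = np.zeros(n)
c[[0, 1, n - 1]] = [0.5, 0.25, 0.25]       # first column, from the PSF [1, 2, 1] / 4
C = np.column_stack([np.roll(c, k) for k in range(n)])  # circulant matrix
lam = np.fft.fft(c)                        # eigenvalues via the FFT
F = np.fft.fft(np.eye(n)) / np.sqrt(n)     # unitary DFT matrix
assert np.allclose(F @ C @ F.conj().T, np.diag(lam))  # F C F^H = diag(lam)
```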

  6. Research activity

  7. Optimal preconditioning Let A = A(h) be the Anti-Reflective matrix generated by the generic PSF h_{PSF} = (h_{i_1, i_2}), i_1 = -q_1, \dots, q_1, i_2 = -q_2, \dots, q_2, and let P = P(s) \in \mathcal{AR}^{2D}_n be the Anti-Reflective matrix generated by the symmetrized PSF s_{PSF} = (s_{i_1, i_2}), with the same index ranges. We look for the optimal preconditioner P^* = P(s^*) in the sense that

  P^* = \arg\min_{P \in \mathcal{AR}^{2D}_n} \| A - P \|_F^2, \qquad s^* = \arg\min_s \| A(h) - P(s) \|_F^2,

  where \| \cdot \|_F is the Frobenius norm, defined as \| A \|_F = \sqrt{ \sum_{i,j} | a_{i,j} |^2 }.

  8. Optimal preconditioning The result is known for Reflective BCs. Given a generic PSF h_{PSF}, the optimal preconditioner in the DCT matrix algebra is generated by the strongly symmetric PSF s_{PSF} given by

  1D: \quad s_{\pm i} = \frac{h_{-i} + h_i}{2}, \qquad 2D: \quad s_{\pm i_1, \pm i_2} = \frac{h_{-i_1, -i_2} + h_{-i_1, i_2} + h_{i_1, -i_2} + h_{i_1, i_2}}{4},

  which is a symmetrization of the original PSF.
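In code the symmetrization amounts to averaging a centred PSF array with its flips; a minimal sketch (function names are mine):

```python
import numpy as np

def symmetrize_psf_1d(h):
    """s_{+-i} = (h_{-i} + h_i) / 2 for a PSF stored as a centred 1-D array."""
    return (h + h[::-1]) / 2

def symmetrize_psf_2d(h):
    """s_{+-i1,+-i2} = (h_{-i1,-i2} + h_{-i1,i2} + h_{i1,-i2} + h_{i1,i2}) / 4."""
    return (h + h[::-1, :] + h[:, ::-1] + h[::-1, ::-1]) / 4
```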

  9. Geometrical idea of the proof - 1D [Figure: a point R, its swapped point R^S, and Q^*, the optimal approximation of both.] We simply observe that if we consider in the Cartesian plane a point R = (R_x, R_y), its optimal approximation Q^*, among the points Q = (Q_x, Q_y) such that Q_x = Q_y, is obtained as the intersection of the line y = x with the perpendicular line passing through R, that is

  \begin{cases} y - R_y = -(x - R_x) \\ y = x \end{cases}

  hence Q^*_x = Q^*_y = (R_x + R_y)/2. The same holds true if we consider the swapped point R^S = (R_y, R_x), since the two points share the same distance, i.e. d(R, Q^*) = d(R^S, Q^*). Clearly, owing to the linearity of the obtained expression, this result extends to any linear combination of coordinates.
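A quick numerical check of this claim (the coordinates of R are arbitrary): a brute-force scan of the line y = x recovers t* = (R_x + R_y)/2, and R and its swapped point are equidistant from Q*.

```python
import numpy as np

R = np.array([3.0, 1.0])                  # an arbitrary point R = (Rx, Ry)
RS = R[::-1]                              # its swapped point R^S
t_star = R.mean()                         # claimed optimum (Rx + Ry) / 2
ts = np.linspace(-10, 10, 200001)         # brute-force scan of the line y = x
d2 = (ts - R[0]) ** 2 + (ts - R[1]) ** 2  # squared distance to R along the line
assert abs(ts[d2.argmin()] - t_star) < 1e-3
Q = np.array([t_star, t_star])
assert np.isclose(np.linalg.norm(R - Q), np.linalg.norm(RS - Q))  # d(R,Q*) = d(R^S,Q*)
```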

  10. Geometrical idea of the proof - 1D For the sake of simplicity we report a small example. [The entrywise display of A - P does not survive extraction: A - P is written in terms of the points defined below, minus the matrix of their hatted optimal approximations.] Here we set the points

  \Theta_i = (\vartheta^x_i, \vartheta^y_i) = (h_{-i}, h_i)

  \Omega_i = (\omega^x_i, \omega^y_i) = \Big( \vartheta^x_i + 2 \sum_{j=i+1}^{q} \vartheta^x_j, \; \vartheta^y_i + 2 \sum_{j=i+1}^{q} \vartheta^y_j \Big)

  N_i = (\nu^x_i, \nu^y_i) = (h_{-i} - h_i, h_i - h_{-i}) = (\vartheta^x_i - \vartheta^{Sx}_i, \vartheta^y_i - \vartheta^{Sy}_i)

  Z_0 = (\zeta^x_0, \zeta^y_0) = (h_0 - h_{-2}, h_0 - h_2) = (\vartheta^x_0 - \vartheta^x_2, \vartheta^y_0 - \vartheta^y_2)

  Z_1 = (\zeta^x_1, \zeta^y_1) = (h_{-1} - h_{-3}, h_1 - h_3) = (\vartheta^x_1 - \vartheta^x_3, \vartheta^y_1 - \vartheta^y_3)

  Z_2 = (\zeta^x_2, \zeta^y_2) = (h_{-1} - h_3, h_1 - h_{-3}) = (\vartheta^x_1 - \vartheta^{Sx}_3, \vartheta^y_1 - \vartheta^{Sy}_3)

  11. Geometrical idea of the proof - 2D We simply observe that if we consider in the 4-dimensional space a point R = (R_x, R_y, R_z, R_w), its optimal approximation Q^* among the points Q = (Q_x, Q_y, Q_z, Q_w) belonging to the line L: x = y = z = w = t is obtained by minimizing the distance

  d^2(L, R) = (t - R_x)^2 + (t - R_y)^2 + (t - R_z)^2 + (t - R_w)^2 = 4 t^2 - 2 t (R_x + R_y + R_z + R_w) + R_x^2 + R_y^2 + R_z^2 + R_w^2.

  This is a trinomial of the form \alpha t^2 + \beta t + \gamma, with \alpha > 0, and we find the minimum by using the formula for the abscissa of the vertex of a parabola:

  t^* = -\frac{\beta}{2 \alpha} = \frac{R_x + R_y + R_z + R_w}{4}.

  Hence the point Q^* is defined by Q^*_x = Q^*_y = Q^*_z = Q^*_w = t^*. The same holds true if we consider any swapped point R^S (not unique, but depending on the permutation at hand), since all such points share the same distance, i.e. d(R, Q^*) = d(R^S, Q^*). Again, thanks to the linearity of the obtained expression, this result extends to any linear combination of coordinates.
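The 4-dimensional statement can be checked the same way (the random point is purely illustrative): the vertex abscissa -\beta/(2\alpha) coincides with the coordinate mean, and any permuted copy of R is equidistant from Q*.

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.standard_normal(4)              # a point in 4-dimensional space
alpha, beta = 4.0, -2.0 * R.sum()       # coefficients of the parabola in t
t_star = -beta / (2 * alpha)            # abscissa of the vertex
assert np.isclose(t_star, R.mean())     # equals (Rx + Ry + Rz + Rw) / 4
Q = np.full(4, t_star)                  # Q* on the line x = y = z = w
RS = rng.permutation(R)                 # a swapped point (one permutation)
assert np.isclose(np.linalg.norm(R - Q), np.linalg.norm(RS - Q))
```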

  12. Iterative regularization methods

  Van Cittert method: x_k = x_{k-1} + \tau (b - A x_{k-1})

  Landweber method: x_k = x_{k-1} + \tau A^H (b - A x_{k-1})

  Steepest descent method: x_k = x_{k-1} + \tau_{k-1} A^H (b - A x_{k-1}), with \tau_{k-1} = \| r_{k-1} \|_2^2 / \| A r_{k-1} \|_2^2 and r_{k-1} = A^H (b - A x_{k-1})

  Lucy-Richardson method (LR): x_k = x_{k-1} \cdot A^H \big( b / (A x_{k-1}) \big)

  Image Space Reconstruction Algorithm (ISRA): x_k = x_{k-1} \cdot \big( A^H b \big) / \big( A^H A x_{k-1} \big)

  (products and quotients in LR and ISRA are entrywise)
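For reference, a minimal sketch of two of these iterations, with the blur and its adjoint passed as callables A and AH (e.g. the FFT-based blur above); the zero and constant starting guesses and the division guard eps are my choices, not prescribed by the slide.

```python
import numpy as np

def landweber(A, AH, b, tau, n_iter):
    """Landweber: x_k = x_{k-1} + tau * A^H (b - A x_{k-1})."""
    x = np.zeros_like(b)
    for _ in range(n_iter):
        x = x + tau * AH(b - A(x))
    return x

def lucy_richardson(A, AH, b, n_iter, eps=1e-12):
    """LR: x_k = x_{k-1} * A^H(b / (A x_{k-1})), entrywise,
    for nonnegative data and a PSF summing to one."""
    x = np.full_like(b, b.mean())              # positive constant start
    for _ in range(n_iter):
        x = x * AH(b / np.maximum(A(x), eps))  # eps guards the division
    return x
```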

  13. The idea All the algorithms presented base the update of the iterate on the "key" quantities

  b - A x_{k-1} \qquad \text{or} \qquad \frac{b}{A x_{k-1}},

  which both give information on the distance between the blurred data b and the blurred iterate A x_{k-1}. A^H can be seen as a reblurring operator, whose role is basically to help the method manage the noise. Our idea is to pick a new matrix Z to replace A^H. We notice that in principle Z can be chosen as another operator, not necessarily related to a blurring process.

  14. Z variant

  Z-Landweber method: x_k = x_{k-1} + \tau Z (b - A x_{k-1})

  Z-Steepest descent method: x_k = x_{k-1} + \tau_{k-1} Z (b - A x_{k-1}), with \tau_{k-1} = \frac{r_{k-1}^H r_{k-1}}{r_{k-1}^H Z A r_{k-1}} and r_{k-1} = Z (b - A x_{k-1})

  Z-LR: x_k = x_{k-1} \cdot Z \big( b / (A x_{k-1}) \big)

  Z-ISRA: x_k = x_{k-1} \cdot (Z b) / (Z A x_{k-1})
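The Z variants change a single line in each method: the callable Z takes the place of A^H. A sketch under the same assumptions as the previous one:

```python
import numpy as np

def z_landweber(A, Z, b, tau, n_iter):
    """Z-Landweber: x_k = x_{k-1} + tau * Z(b - A x_{k-1})."""
    x = np.zeros_like(b)
    for _ in range(n_iter):
        x = x + tau * Z(b - A(x))
    return x

def z_isra(A, Z, b, n_iter, eps=1e-12):
    """Z-ISRA: x_k = x_{k-1} * (Z b) / (Z A x_{k-1}), entrywise."""
    x = np.full_like(b, b.mean())              # positive constant start
    zb = Z(b)                                  # fixed numerator
    for _ in range(n_iter):
        x = x * zb / np.maximum(Z(A(x)), eps)  # eps guards the division
    return x
```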

  15. Link with preconditioning The conventional preconditioned system is

  D A^H A x = D A^H b,

  where D is the preconditioner, whose role is to suitably approximate the (generalized) inverse of the normal matrix A^H A. The new strategy leads to the preconditioned system

  Z A x = Z b,

  whose aim is to allow iterative methods to become faster and more stable.

  16. Filters

  • p-Low Pass Filter: d_j = \begin{cases} 0 & \text{if } |\lambda_j| < \zeta \\ 1 / |\lambda_j|^p & \text{if } |\lambda_j| \ge \zeta \end{cases}

  • p-Hanke-Nagy-Plemmons Filter: d_j = \begin{cases} 1 & \text{if } |\lambda_j| < \zeta \\ 1 / |\lambda_j|^p & \text{if } |\lambda_j| \ge \zeta \end{cases}

  • p-Tyrtyshnikov-Yeremin-Zamarashkin Filter: d_j = \begin{cases} 1 / \zeta^p & \text{if } |\lambda_j| < \zeta \\ 1 / |\lambda_j|^p & \text{if } |\lambda_j| \ge \zeta \end{cases}

  • Tikhonov Filter: d_j = \frac{1}{|\lambda_j|^2 + \alpha}

  By using each filter we can define the eigenvalues of Z as z_j = \bar{\lambda}_j d_j.
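A compact NumPy rendering of the four filters (reading the branches exactly as above; the function name, defaults, and the ValueError are mine):

```python
import numpy as np

def filter_factors(lam, kind, zeta=1e-2, alpha=1e-2, p=1):
    """Filter factors d_j for the eigenvalues lam_j; the eigenvalues
    of Z are then z_j = conj(lam_j) * d_j."""
    mag = np.abs(lam)
    big = mag >= zeta
    inv = np.where(big, 1.0 / np.maximum(mag, zeta) ** p, 0.0)  # 1/|lam_j|^p above the threshold
    if kind == "lowpass":   # d_j = 0 below the threshold
        return inv
    if kind == "hnp":       # Hanke-Nagy-Plemmons: d_j = 1 below the threshold
        return np.where(big, inv, 1.0)
    if kind == "tyz":       # Tyrtyshnikov-Yeremin-Zamarashkin: d_j = 1/zeta^p below
        return np.where(big, inv, 1.0 / zeta ** p)
    if kind == "tikhonov":  # d_j = 1 / (|lam_j|^2 + alpha)
        return 1.0 / (mag ** 2 + alpha)
    raise ValueError(f"unknown filter: {kind}")
```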

  17. BCCB preconditioning: D vs Z [Figures: Reflective and Anti-Reflective BCs; RRE vs. regularization parameter for the Tikhonov filter (\alpha) and the T.Y.Z. filter (\zeta).] For all filters the Z variant shows a higher stability, by which we mean that the iterative methods compute a good restoration even when the regularization parameters \zeta and \alpha are very small.

  18. A general Z algorithm Let c_j denote the eigenvalues of the BCCB matrix associated with (h_{PSF}, 'periodic'). Then, for any BCs, we can perform the following algorithm.

  Z ← Algorithm(h_{PSF}, BCs)
  1. get (c_j)_{j=1}^{n^2} by computing the FFT of h_{PSF}
  2. get z_j by applying a filter to the c_j
  3. get w_{PSF} by computing the IFFT of (z_j)_{j=1}^{n^2}
  4. generate Z from (w_{PSF}, BCs)

  The algorithm is consistent: in fact, if the filter is the identity, i.e. there is no filtering, we have Z = A^H. Clearly an analogous algorithm can be applied to create the preconditioner D.
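A sketch of this algorithm for a 2-D PSF, reusing filter_factors from the sketch above; building the structured matrix Z from (w_PSF, BCs) is left to the machinery of the chosen boundary conditions. With the identity filter (d_j = 1) the returned array is the flipped PSF, consistent with Z = A^H.

```python
import numpy as np

def z_psf(psf, shape, kind="tikhonov", **kw):
    """General Z algorithm: FFT of the PSF gives the BCCB eigenvalues c_j,
    a filter turns them into z_j = conj(c_j) d_j, and the inverse FFT
    gives the PSF w_PSF that generates Z under the chosen BCs."""
    h = np.zeros(shape)
    h[:psf.shape[0], :psf.shape[1]] = psf
    h = np.roll(h, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    c = np.fft.fft2(h)                              # step 1: eigenvalues c_j
    z = np.conj(c) * filter_factors(c, kind, **kw)  # step 2: filtered z_j
    w = np.real(np.fft.ifft2(z))                    # step 3: w_PSF by IFFT
    return w                                        # step 4: generate Z from (w_PSF, BCs)
```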
