

  1. Lake Como School for Advanced Studies: Computational Methods for Inverse Problems and Applications in Image Processing. SVD Filters. James Nagy, Emory University, Atlanta, GA, USA.

  2. Review: TSVD Regularization
Suppose $A = U \Sigma V^T$ and $b = A x_{\text{true}} + \eta = b_{\text{true}} + \eta$. The naive inverse solution is
$$x_{\text{inv}} = A^{-1} b = V \Sigma^{-1} U^T b = \sum_{i=1}^{n} \frac{u_i^T b}{\sigma_i}\, v_i = \underbrace{\sum_{i=1}^{n} \frac{u_i^T b_{\text{true}}}{\sigma_i}\, v_i}_{x_{\text{true}}} + \underbrace{\sum_{i=1}^{n} \frac{u_i^T \eta}{\sigma_i}\, v_i}_{\text{error}}$$
The goal is to balance: reconstructing the "good" SVD components $\dfrac{u_i^T b_{\text{true}}}{\sigma_i}$ (large $\sigma_i$), while avoiding reconstruction of the "bad" SVD components $\dfrac{u_i^T \eta}{\sigma_i}$ (small $\sigma_i$). One approach is TSVD:
$$x_{\text{tsvd}} = \sum_{i=1}^{k} \frac{u_i^T b}{\sigma_i}\, v_i$$
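
As a concrete illustration, here is a minimal MATLAB sketch of the TSVD sum above; it assumes U, S, V (e.g., from svd of a small test matrix A), a data vector b, and a chosen truncation index k are already in the workspace:
    sigma  = diag(S);                                 % singular values
    bhat   = U' * b;                                  % coefficients u_i^T b
    x_tsvd = V(:, 1:k) * (bhat(1:k) ./ sigma(1:k));   % sum_{i=1}^k (u_i^T b / sigma_i) v_i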

  3. SVD Filter-based Regularization
The TSVD idea can be generalized:
$$x_{\text{filt}} = \sum_{i=1}^{n} \phi_i\, \frac{u_i^T b}{\sigma_i}\, v_i, \qquad \text{where } \phi_i \approx \begin{cases} 1 & \text{for "large" } \sigma_i \\ 0 & \text{for "small" } \sigma_i \end{cases}$$
Examples:
TSVD: $\phi_i = \begin{cases} 1 & i = 1, 2, \ldots, k \\ 0 & i = k+1, \ldots, n \end{cases}$; we must choose the regularization parameter $k$.
Tikhonov: $\phi_i = \dfrac{\sigma_i^2}{\sigma_i^2 + \alpha^2}$; we must choose the regularization parameter $\alpha$.
Exponential: $\phi_i = 1 - e^{-\sigma_i^2/\alpha^2}$; we must choose the regularization parameter $\alpha$.
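
A minimal sketch of computing the three sets of filter factors in MATLAB, assuming a vector sigma of singular values and parameters k and alpha are given:
    phi_tsvd = [ones(k, 1); zeros(length(sigma) - k, 1)];   % 1 for i <= k, 0 otherwise
    phi_tik  = sigma.^2 ./ (sigma.^2 + alpha^2);            % Tikhonov filter factors
    phi_exp  = 1 - exp(-sigma.^2 / alpha^2);                % exponential filter factors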

  4. Tikhonov Regularization in Variational Form
Tikhonov regularization is often formulated as:
$$\min_x \left\{ \|Ax - b\|_2^2 + \alpha^2 \|x\|_2^2 \right\} \;\Longleftrightarrow\; \min_x \left\| \begin{bmatrix} A \\ \alpha I \end{bmatrix} x - \begin{bmatrix} b \\ 0 \end{bmatrix} \right\|_2^2 \qquad (1)$$
Replacing $A$ with its SVD, it is easy to show that the solution of the second minimization problem in (1) is
$$x_{\text{tik}} = \sum_{i=1}^{n} \frac{\sigma_i^2}{\sigma_i^2 + \alpha^2}\, \frac{u_i^T b}{\sigma_i}\, v_i$$
We will return to variational forms of regularization, including generalizations, later.
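
A minimal sketch that checks the equivalence in (1) on a small, square test problem; the variables A, b, and alpha are assumed to be given:
    n    = size(A, 2);
    x_ls = [A; alpha * eye(n)] \ [b; zeros(n, 1)];              % stacked least-squares form in (1)
    [U, S, V] = svd(A);
    sigma = diag(S);
    x_tik = V * ((sigma .* (U' * b)) ./ (sigma.^2 + alpha^2));  % SVD filter formula
    % norm(x_ls - x_tik) should be at the level of rounding error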

  5. SVD Filtering in Matrix Form
We can write the filtered solution as
$$x_{\text{filt}} = A_{\text{filt}}^{\dagger} b = V \Sigma_{\text{filt}}^{\dagger} U^T b$$
where
$$\Sigma_{\text{tsvd}}^{\dagger} = \mathrm{diag}\!\left( \frac{1}{\sigma_1}, \ldots, \frac{1}{\sigma_k}, 0, \ldots, 0 \right), \quad \Sigma_{\text{tik}}^{\dagger} = \mathrm{diag}\!\left( \frac{\sigma_i}{\sigma_i^2 + \alpha^2} \right), \quad \Sigma_{\text{exp}}^{\dagger} = \mathrm{diag}\!\left( \frac{1 - e^{-\sigma_i^2/\alpha^2}}{\sigma_i} \right)$$
Note: with this notation, we can replace the singular value decomposition with unitary spectral decompositions.
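
For instance, a minimal sketch of the matrix form with the exponential filter, assuming U, S, V, b, and a chosen alpha are available:
    sigma     = diag(S);
    Sigma_exp = diag((1 - exp(-sigma.^2 / alpha^2)) ./ sigma);   % Sigma^dagger_exp
    x_exp     = V * (Sigma_exp * (U' * b));                      % x_filt = V Sigma^dagger_filt U^T b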

  6. FFT-based Filtering
In some cases $A = F^* \Lambda F$, where $F$ is the unitary Fourier transform. In this case,
$$x_{\text{filt}} = A_{\text{filt}}^{\dagger} b = F^* \Lambda_{\text{filt}}^{\dagger} F b = \mathrm{ifft2}\!\left( \Lambda_{\text{filt}}^{\dagger}\, \mathrm{fft2}(b) \right)$$
In other cases we can use the cosine transform, $A = C^T \Lambda C$, and
$$x_{\text{filt}} = A_{\text{filt}}^{\dagger} b = C^T \Lambda_{\text{filt}}^{\dagger} C b = \mathrm{idct2}\!\left( \Lambda_{\text{filt}}^{\dagger}\, \mathrm{dct2}(b) \right)$$
where, for example,
$$\Lambda_{\text{tsvd}}^{\dagger} = \mathrm{diag}\!\left( \frac{1}{\lambda_1}, \ldots, \frac{1}{\lambda_k}, 0, \ldots, 0 \right), \quad \Lambda_{\text{tik}}^{\dagger} = \mathrm{diag}\!\left( \frac{\bar{\lambda}_i}{|\lambda_i|^2 + \alpha^2} \right)$$
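
A minimal sketch of FFT-based Tikhonov filtering for a periodic (BCCB) blurring operator; the PSF array P, its center index vector center, the blurred image B, and alpha are assumptions here, not part of the slides:
    lambda = fft2(circshift(P, 1 - center));                        % eigenvalues of A from the shifted PSF
    bhat   = fft2(B);                                                % fft2(b)
    xhat   = (conj(lambda) ./ (abs(lambda).^2 + alpha^2)) .* bhat;   % apply Lambda^dagger_tik elementwise
    X      = real(ifft2(xhat));                                      % filtered reconstruction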

  7. Finding "Optimal" Regularization Parameters
Here we assume $x_{\text{true}}$ is known, and we want to find regularization parameters to minimize $\|x_{\text{filt}} - x_{\text{true}}\|_2^2$, where $x_{\text{filt}} = A_{\text{filt}}^{\dagger} b = V \Sigma_{\text{filt}}^{\dagger} U^T b$. Then
$$\|x_{\text{filt}} - x_{\text{true}}\|_2^2 = \|V^T x_{\text{filt}} - V^T x_{\text{true}}\|_2^2 = \|V^T V \Sigma_{\text{filt}}^{\dagger} U^T b - V^T x_{\text{true}}\|_2^2 = \|\Sigma_{\text{filt}}^{\dagger} \hat{b} - \hat{x}_{\text{true}}\|_2^2$$
where $\hat{b} = U^T b$ and $\hat{x}_{\text{true}} = V^T x_{\text{true}}$.

  8. "Optimal" TSVD Regularization Parameter
Observe that
$$e_k = \|x_k - x_{\text{true}}\|_2^2 = \|\Sigma_{\text{filt}}^{\dagger} \hat{b} - \hat{x}_{\text{true}}\|_2^2 = \sum_{i=1}^{k} \left( \frac{\hat{b}_i}{\sigma_i} - \hat{x}_i \right)^2 + \sum_{i=k+1}^{n} \hat{x}_i^2 = \sum_{i=1}^{k} y_i^2 + \sum_{i=k+1}^{n} \hat{x}_i^2$$
where $y_i = \hat{b}_i/\sigma_i - \hat{x}_i$. Then observe that
$$e_k = e_{k-1} - \hat{x}_k^2 + y_k^2$$
Recursively compute $e_k$, and use MATLAB's min function to find the minimum value and the corresponding index.
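
A minimal sketch of this recursion, assuming U, S, V, b, and x_true are available (e.g., from a small test problem with [U, S, V] = svd(full(A))):
    sigma = diag(S);
    bhat  = U' * b;                              % \hat{b} = U^T b
    xhat  = V' * x_true;                         % \hat{x}_true = V^T x_true
    y     = bhat ./ sigma - xhat;
    n     = length(sigma);
    e     = zeros(n, 1);
    e(1)  = sum(xhat.^2) - xhat(1)^2 + y(1)^2;   % e_1, starting from e_0 = ||xhat||^2
    for k = 2:n
        e(k) = e(k-1) - xhat(k)^2 + y(k)^2;      % recursion e_k = e_{k-1} - xhat_k^2 + y_k^2
    end
    [e_min, k_opt] = min(e);                     % optimal truncation index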

  9. "Optimal" Tikhonov Regularization Parameter
Observe that
$$e(\alpha) = \|x_\alpha - x_{\text{true}}\|_2^2 = \|\Sigma_{\text{filt}}^{\dagger} \hat{b} - \hat{x}_{\text{true}}\|_2^2 = \sum_{i=1}^{n} \left( \frac{\sigma_i \hat{b}_i}{\sigma_i^2 + \alpha^2} - \hat{x}_i \right)^2$$
Find $\alpha$ to minimize the function
$$E(\alpha) = \sum_{i=1}^{n} \left( \frac{\sigma_i \hat{b}_i}{\sigma_i^2 + \alpha^2} - \hat{x}_i \right)^2$$
Use, for example, MATLAB's fminbnd function to find the minimum of $E(\alpha)$. Note that we can assume $\sigma_n \leq \alpha \leq \sigma_1$.
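
A minimal sketch using fminbnd, assuming sigma, bhat, and xhat are defined as in the previous sketch:
    E = @(alpha) sum((sigma .* bhat ./ (sigma.^2 + alpha^2) - xhat).^2);
    alpha_opt = fminbnd(E, sigma(end), sigma(1));    % search on [sigma_n, sigma_1]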

  10. Guides to Choosing Regularization Parameters
In each filtering example, we must choose a regularization parameter. Do we need, or can we use, additional information to help? Often we need to make assumptions about the noise, e.g., Gaussian white noise. It can be helpful to know prior information, such as $\|\eta\|_2$. Some approaches attempt to extract noise information from the data. Methods we will discuss: (1) Discrete Picard Condition, (2) Discrepancy Principle, (3) Generalized Cross Validation, (4) L-Curve.

  11. Using DPC to Choose the TSVD Cutoff
Assume $b = A x_{\text{true}} + \eta$, and let $k$ denote the TSVD truncation index, so that
$$x_k = \sum_{i=1}^{k} \frac{u_i^T b}{\sigma_i}\, v_i$$
Let $k_{\text{DPC}}$ be the index where the coefficients $|u_i^T b|$ level off. Then
$$x_{\text{DPC}} = \sum_{i=1}^{k_{\text{DPC}}} \frac{u_i^T b}{\sigma_i}\, v_i$$
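
A minimal sketch of a Picard plot that can be used to inspect where the coefficients $|u_i^T b|$ level off, assuming U, S, and b are available:
    sigma = diag(S);
    coef  = abs(U' * b);                  % |u_i^T b|
    figure, semilogy(sigma, '-'), hold on
    semilogy(coef, 'o'), hold off
    legend('\sigma_i', '|u_i^T b|')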

  12. Discrepancy Principle
We assume $b = A x_{\text{true}} + \eta$. Find a filtered solution $x_{\text{filt}}$ so that
$$\|A x_{\text{filt}} - b\|_2 \approx \|\eta\|_2$$
Remark: it is often recommended to use $\|A x_{\text{filt}} - b\|_2 = \delta \|\eta\|_2$, where $\delta > 1$ (e.g., $\delta = 1.01$).

  13. Discrepancy Principle: Implementations
To implement, first recall that if $A = U \Sigma V^T$, then we can write $x_{\text{filt}} = A_{\text{filt}}^{\dagger} b$ with $A_{\text{filt}}^{\dagger} = V \Sigma_{\text{filt}}^{\dagger} U^T$. Therefore,
$$\|A x_{\text{filt}} - b\|_2^2 = \|U \Sigma V^T V \Sigma_{\text{filt}}^{\dagger} U^T b - b\|_2^2 = \|(\Sigma \Sigma_{\text{filt}}^{\dagger} - I)\, U^T b\|_2^2$$
For each filtering method, we need to find the regularization parameter so that
$$\|(\Sigma \Sigma_{\text{filt}}^{\dagger} - I)\, U^T b\|_2^2 \approx \|\eta\|_2^2$$

  14. Discrepancy Principle: Implementations
TSVD:
$$\|(\Sigma \Sigma_k^{\dagger} - I)\, U^T b\|_2^2 = \sum_{i=k+1}^{n} (u_i^T b)^2 \approx \|\eta\|_2^2$$
Find the smallest $k$ so that $\displaystyle\sum_{i=k+1}^{n} (u_i^T b)^2 \leq \|\eta\|_2^2$.
Tikhonov:
$$\|(\Sigma \Sigma_\alpha^{\dagger} - I)\, U^T b\|_2^2 = \sum_{i=1}^{n} \left( \frac{\alpha^2\, u_i^T b}{\sigma_i^2 + \alpha^2} \right)^2$$
Let $\varepsilon = \|\eta\|_2$, and define
$$D(\alpha) = \sum_{i=1}^{n} \left( \frac{\alpha^2\, u_i^T b}{\sigma_i^2 + \alpha^2} \right)^2 - \varepsilon^2$$
Find $\alpha$ so that $D(\alpha) = 0$ (use, e.g., MATLAB's fzero function).
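
A minimal sketch of both implementations, assuming U, sigma, b, and a noise-norm estimate eta_norm = $\|\eta\|_2$ are available:
    bhat = U' * b;
    % TSVD: smallest k with sum_{i > k} (u_i^T b)^2 <= ||eta||_2^2
    tail = flipud(cumsum(flipud(bhat.^2)));          % tail(j) = sum_{i >= j} bhat_i^2
    k_dp = find([tail(2:end); 0] <= eta_norm^2, 1);
    % Tikhonov: root of D(alpha), assuming D changes sign on [sigma_n, sigma_1]
    D = @(alpha) sum((alpha^2 * bhat ./ (sigma.^2 + alpha^2)).^2) - eta_norm^2;
    alpha_dp = fzero(D, [sigma(end), sigma(1)]);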

  15. Generalized Cross Validation
A statistically based approach that tries to find the regularization parameter to minimize
$$G(\cdot) = \frac{n\, \|b - A x_{\text{filt}}\|_2^2}{\left( \mathrm{trace}(I - A A_{\text{filt}}^{\dagger}) \right)^2}$$
To implement, we need:
$$\|b - A x_{\text{filt}}\|_2^2 = \|b - U \Sigma V^T V \Sigma_{\text{filt}}^{\dagger} U^T b\|_2^2 = \|(I - \Sigma \Sigma_{\text{filt}}^{\dagger})\, U^T b\|_2^2$$
$$\mathrm{trace}(I - A A_{\text{filt}}^{\dagger}) = \mathrm{trace}(I - U \Sigma V^T V \Sigma_{\text{filt}}^{\dagger} U^T) = \mathrm{trace}(U (I - \Sigma \Sigma_{\text{filt}}^{\dagger}) U^T) = \mathrm{trace}(I - \Sigma \Sigma_{\text{filt}}^{\dagger})$$

  16. GCV for TSVD
Suppose $A$ is $n \times n$. Then
$$\|b - A x_{\text{filt}}\|_2^2 = \|(I - \Sigma \Sigma_{\text{filt}}^{\dagger})\, U^T b\|_2^2 = \sum_{i=k+1}^{n} (u_i^T b)^2, \qquad \mathrm{trace}(I - A A_{\text{filt}}^{\dagger}) = \mathrm{trace}(I - \Sigma \Sigma_{\text{filt}}^{\dagger}) = n - k$$
Therefore, we need to find $k$ to minimize
$$G(k) = \frac{n \displaystyle\sum_{i=k+1}^{n} (u_i^T b)^2}{(n - k)^2}$$
Since this is a discrete function, compute all $G(k)$ values, and use MATLAB's min function to find the minimum. Question: how does this change if $A$ is $m \times n$, $m > n$?
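
A minimal sketch for the square case, assuming U and b are available:
    bhat = U' * b;
    n    = length(bhat);
    tail = flipud(cumsum(flipud(bhat.^2)));       % tail(j) = sum_{i >= j} bhat_i^2
    k    = (1:n-1)';
    G    = n * tail(2:n) ./ (n - k).^2;           % G(k) for k = 1, ..., n-1
    [G_min, k_gcv] = min(G);                      % discrete minimization with min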

  17. GCV for Tikhonov
Suppose $A$ is $n \times n$. Then
$$\|b - A x_{\text{filt}}\|_2^2 = \|(I - \Sigma \Sigma_{\text{filt}}^{\dagger})\, U^T b\|_2^2 = \sum_{i=1}^{n} \left( \frac{\alpha^2\, u_i^T b}{\sigma_i^2 + \alpha^2} \right)^2, \qquad \mathrm{trace}(I - A A_{\text{filt}}^{\dagger}) = \mathrm{trace}(I - \Sigma \Sigma_{\text{filt}}^{\dagger}) = \sum_{i=1}^{n} \frac{\alpha^2}{\sigma_i^2 + \alpha^2}$$
Therefore, we need to find $\alpha$ to minimize
$$G(\alpha) = \frac{n \displaystyle\sum_{i=1}^{n} \left( \frac{\alpha^2\, u_i^T b}{\sigma_i^2 + \alpha^2} \right)^2}{\left( \displaystyle\sum_{i=1}^{n} \frac{\alpha^2}{\sigma_i^2 + \alpha^2} \right)^2} = \frac{n \displaystyle\sum_{i=1}^{n} \left( \frac{u_i^T b}{\sigma_i^2 + \alpha^2} \right)^2}{\left( \displaystyle\sum_{i=1}^{n} \frac{1}{\sigma_i^2 + \alpha^2} \right)^2}$$
Here, $G(\alpha)$ is a continuous function; use, e.g., MATLAB's fminbnd function, with $\sigma_n \leq \alpha \leq \sigma_1$. Question: how does this change if $A$ is $m \times n$, $m > n$?
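
A minimal sketch using fminbnd on the second form of $G(\alpha)$, assuming U, sigma, and b are available:
    bhat = U' * b;
    n    = length(bhat);
    G = @(alpha) n * sum((bhat ./ (sigma.^2 + alpha^2)).^2) / sum(1 ./ (sigma.^2 + alpha^2))^2;
    alpha_gcv = fminbnd(G, sigma(end), sigma(1));   % search on [sigma_n, sigma_1]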

  18. MATLAB Demos
We can use tomography and deblurring test problems from IR Tools. One approach to testing SVD filtering: create a small test problem, use full to construct the matrix explicitly, and then use MATLAB's svd. For example,
    [A, b_true, x_true] = PRtomo(32); A = full(A); [U, S, V] = svd(A);
Another example,
    [A, b_true, x_true] = PRblurspeckle(64); A = full(A); [U, S, V] = svd(A);
Larger problems are more challenging; for spatially invariant deblurring, use wrappers for codes from the Hansen, Nagy, O'Leary (HNO) book: IRtik_dct.m, IRtik_fft, IRtik_sep, IRtsvd_dct, IRtsvd_fft, IRtsvd_sep.

  19. Code for Demo SVD1
    n = 32;
    PRoptions = PRset('CTtype', 'fancurved');            % fan-beam CT geometry
    [A, b_true, x_true, ProbInfo] = PRtomo(n, PRoptions);
    s = svd(full(A));                                     % singular values, full set of angles
    figure(1), clf, semilogy(s, 'o', 'LineWidth', 2)
    PRoptions90 = PRset(PRoptions, 'angles', 0:2:90);     % restrict projection angles to [0, 90]
    [A90, b90, x90, ProbInfo90] = PRtomo(n, PRoptions90);
    s90 = svd(full(A90));
    figure(2), clf, semilogy(s90, 'o', 'LineWidth', 2)
    PRoptions45 = PRset(PRoptions, 'angles', 0:2:45);     % restrict projection angles to [0, 45]
    [A45, b45, x45, ProbInfo45] = PRtomo(n, PRoptions45);
    s45 = svd(full(A45));
    figure(3), clf, semilogy(s45, 'o', 'LineWidth', 2)

  20. Code for Demo SVD2
    PRoptions1 = PRset('BlurLevel', 'mild');
    [A1, b1, x1, ProbInfo1] = PRblur(n, PRoptions1);      % mild blur
    s1 = svd(full(A1));
    figure(1), clf, semilogy(s1, 'o', 'LineWidth', 2)
    PRoptions2 = PRset('BlurLevel', 'medium');
    [A2, b2, x2, ProbInfo2] = PRblur(n, PRoptions2);      % medium blur
    s2 = svd(full(A2));
    figure(2), clf, semilogy(s2, 'o', 'LineWidth', 2)
    PRoptions3 = PRset('BlurLevel', 'severe');
    [A3, b3, x3, ProbInfo3] = PRblur(n, PRoptions3);      % severe blur
    s3 = svd(full(A3));
    figure(3), clf, semilogy(s3, 'o', 'LineWidth', 2)
