Fractional-Tikhonov regularization on graphs (applied to signal and image restoration)


  1. Fractional-Tikhonov regularization on graphs (applied to signal and image restoration)

     Davide Bianchi, Università degli Studi dell'Insubria, Dip. di Scienze e Alta Tecnologia. 23rd of May, 2018.

  2. Our model problem

        y^δ = K x∗ + noise

     • K represents the blur and is severely ill-conditioned (a compact integral operator of the first kind);
     • y^δ is the known measured data (the blurred and noisy image);
     • ‖noise‖ ≤ δ.
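The forward model can be simulated in a few lines of NumPy. As a stand-in assumption, K below is a hypothetical 1d Gaussian blur matrix rather than the heat(n, κ) operator from Regtools used later in the slides; the 2% noise level matches the examples.

```python
import numpy as np

# Minimal sketch of the model problem y^δ = K x∗ + noise.
# Assumption: K is a hypothetical row-normalized 1d Gaussian blur matrix.
n = 100
t = np.linspace(0, 1, n)
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.03**2))
K /= K.sum(axis=1, keepdims=True)                # row-normalized blur

x_star = (t > 0.5).astype(float)                 # the step signal of the examples
rng = np.random.default_rng(0)
noise = rng.standard_normal(n)
noise *= 0.02 * np.linalg.norm(K @ x_star) / np.linalg.norm(noise)  # 2% noise
y_delta = K @ x_star + noise                     # measured data, ||noise|| <= delta

print(np.linalg.cond(K))                         # huge: K is severely ill-conditioned
```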

  3. Singular value expansion and generalized inverse

     Since K is compact, we can write

        Kx = Σ_{m=1}^{+∞} σ_m ⟨x, v_m⟩ u_m,

     where (σ_m; v_m, u_m)_{m∈ℕ} is the singular value expansion of K.

     Generalized inverse. We define K† : D(K†) ⊆ Y → X as

        K† y = Σ_{m : σ_m > 0} σ_m^{-1} ⟨y, u_m⟩ v_m,

        D(K†) = { y ∈ Y : Σ_{m : σ_m > 0} σ_m^{-2} |⟨y, u_m⟩|² < ∞ }.
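In the discrete setting the generalized inverse can be sketched directly from the SVD. The test matrix below is purely illustrative, built to have one exactly-zero singular value.

```python
import numpy as np

# Sketch of the generalized inverse via the SVD K = U diag(sigma) V^T,
# keeping only the components with sigma_m > 0.
rng = np.random.default_rng(1)
K = rng.standard_normal((8, 6)) @ np.diag([1.0, 0.5, 0.1, 1e-2, 1e-4, 0.0])
U, s, Vt = np.linalg.svd(K, full_matrices=False)

def generalized_inverse_apply(y, tol=1e-12):
    # K† y = sum over {m : sigma_m > 0} of sigma_m^{-1} <y, u_m> v_m
    keep = s > tol
    return Vt[keep].T @ ((U.T @ y)[keep] / s[keep])

y = rng.standard_normal(8)
x_dagger = generalized_inverse_apply(y)
print(np.allclose(x_dagger, np.linalg.pinv(K) @ y))   # matches NumPy's pinv
```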

  4. In the noise-free case we have x† = K† y, but, due to the ill-posedness of the problem, x^δ = K† y^δ is not a good approximation of x†. Since we are dealing with data affected by noise, i.e., with y^δ, we cannot use K† to compute an approximate solution: we have to regularize the operator K†.

  5. Filter-based regularization methods

     We substitute the operator K† with a one-parameter family of continuous linear operators {R_α}_{α ∈ (0, α₀)}:

        K† y^δ = Σ_{m : σ_m > 0} σ_m^{-1} ⟨y^δ, u_m⟩ v_m
           ⇓
        R_α y^δ = Σ_{m : σ_m > 0} F_α(σ_m) σ_m^{-1} ⟨y^δ, u_m⟩ v_m.

     The map α = α(δ, y^δ) is called the choice rule.
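In finite dimensions a filter-based regularizer is a one-line modification of the SVD solve; this sketch takes the filter as a function argument (the matrix and data are arbitrary test values).

```python
import numpy as np

# Sketch: R_alpha multiplies each SVD coefficient sigma_m^{-1} <y, u_m>
# by the filter value F_alpha(sigma_m).
def filtered_solve(K, y, filter_fn):
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    pos = s > 0
    return Vt[pos].T @ (filter_fn(s[pos]) / s[pos] * (U.T @ y)[pos])

# Sanity check on arbitrary data: the trivial filter F = 1 gives back K^{-1} y.
rng = np.random.default_rng(2)
K = rng.standard_normal((5, 5))
y = rng.standard_normal(5)
x = filtered_solve(K, y, lambda s: np.ones_like(s))
print(np.allclose(x, np.linalg.solve(K, y)))
```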

  6. Fractional Tikhonov filter functions

     • Standard Tikhonov filter: F_α(σ_m) = σ_m² / (σ_m² + α), with α > 0.
     • Fractional Tikhonov filter: F_{α,γ}(σ_m) = ( σ_m² / (σ_m² + α) )^γ, with α > 0 and γ ∈ [1/2, ∞) (Klann and Ramlau, 2008).
     • Weighted/fractional Tikhonov filter: F_{α,r}(σ_m) = σ_m^{r+1} / (σ_m^{r+1} + α), with α > 0 and r ∈ [0, +∞) (Hochstenbach and Reichel, 2011).

     For 1/2 ≤ γ < 1 and 0 ≤ r < 1, the fractional and weighted filters smooth the reconstructed solution less than standard Tikhonov does.
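The three filters translate directly into NumPy functions; this sketch only checks the limiting cases (γ = 1 and r = 1 both recover standard Tikhonov) and the weaker damping for γ = 1/2.

```python
import numpy as np

# The three filter functions of the slide.
def tikhonov(s, alpha):
    return s**2 / (s**2 + alpha)

def fractional(s, alpha, gamma):          # Klann-Ramlau
    return (s**2 / (s**2 + alpha)) ** gamma

def weighted(s, alpha, r):                # Hochstenbach-Reichel
    return s**(r + 1) / (s**(r + 1) + alpha)

s = np.logspace(-4, 0, 50)
alpha = 1e-2
# gamma = 1 and r = 1 both recover the standard Tikhonov filter
print(np.allclose(fractional(s, alpha, 1.0), tikhonov(s, alpha)))
print(np.allclose(weighted(s, alpha, 1.0), tikhonov(s, alpha)))
# for gamma = 1/2 the filter damps every component less than standard Tikhonov
print(np.all(fractional(s, alpha, 0.5) >= tikhonov(s, alpha)))
```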

  7. An easy 1d example of oversmoothing, part 1

     Blur taken from heat(n, κ) in Regtools, with n = 100, κ = 1 and 2% noise. True solution x† : [0, 1] → ℝ such that

        x†(t) = 0 if 0 ≤ t ≤ 0.5,   x†(t) = 1 if 0.5 < t ≤ 1.

     [Plot: the true step solution and the blurred data with 2% noise, together with the reconstructions by standard Tikhonov and by fractional Tikhonov with r = 0.35 and r = 3.]

  8–11. Let's reformulate the problem

     • Tikhonov: argmin_{x ∈ ℝⁿ} ‖Kx − y‖²₂ + α‖x‖²₂.
     • Fractional Tikhonov: argmin_{x ∈ ℝⁿ} ‖Kx − y‖²_W + α‖x‖²₂, with W = (KK∗)^{(r−1)/2}.
     • Generalized Tikhonov: argmin_{x ∈ ℝⁿ} ‖Kx − y‖²₂ + α‖Lx‖²₂, with L positive semi-definite and ker(L) ∩ ker(K) = {0}; ker(L) should 'approximate the features' of x†.
     • Generalized fractional Tikhonov: argmin_{x ∈ ℝⁿ} ‖Kx − y‖²_W + α‖Lx‖²₂.
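The generalized fractional Tikhonov problem can be sketched through its normal equations, (Kᵀ W K + α LᵀL) x = Kᵀ W y with W = (KK∗)^{(r−1)/2}; here the matrix power is formed via an eigendecomposition of KKᵀ, and the data is arbitrary test input.

```python
import numpy as np

# Sketch of generalized fractional Tikhonov via its normal equations,
#   (K^T W K + alpha L^T L) x = K^T W y,   W = (K K^T)^{(r-1)/2}.
def gen_frac_tikhonov(K, L, y, alpha, r):
    lam, Q = np.linalg.eigh(K @ K.T)               # K K^T is symmetric PSD
    lam = np.clip(lam, 0.0, None)
    W = (Q * lam ** ((r - 1) / 2)) @ Q.T           # matrix power via eigenvectors
    A = K.T @ W @ K + alpha * (L.T @ L)
    return np.linalg.solve(A, K.T @ W @ y)

# Sanity check on arbitrary data: r = 1 gives W = I, i.e. standard Tikhonov.
rng = np.random.default_rng(3)
n = 20
K = rng.standard_normal((n, n))
y = rng.standard_normal(n)
x_frac = gen_frac_tikhonov(K, np.eye(n), y, alpha=0.1, r=1.0)
x_tik = np.linalg.solve(K.T @ K + 0.1 * np.eye(n), K.T @ y)
print(np.allclose(x_frac, x_tik))
```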

  12. Laplacian: finite-difference approximation

     Poisson (Sturm-Liouville) problem on [0, 1]:

        −∆x(t) = f(t),  t ∈ (0, 1),
        α₁ x(0) + β₁ x′(0) = γ₁,
        α₂ x(1) + β₂ x′(1) = γ₂.

     If we consider homogeneous Dirichlet boundary conditions (x(0) = x(1) = 0) and the 3-point-stencil FD approximation

        −∆x(t) ≈ ( −x(t − h) + 2x(t) − x(t + h) ) / h²,   h² = n⁻²,

     then

        L = [  2 −1          0 ]
            [ −1  2 −1         ]
            [     ⋱  ⋱  ⋱      ]
            [  0       −1    2 ],   ker(L) = {0}.
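The Dirichlet matrix can be built and its trivial kernel verified in a few lines:

```python
import numpy as np

# The Dirichlet finite-difference Laplacian L = tridiag(-1, 2, -1).
def laplacian_dirichlet(n):
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

L = laplacian_dirichlet(10)
# ker(L) = {0}: the smallest eigenvalue is strictly positive
print(np.linalg.eigvalsh(L).min())
```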

  13. Laplacian: finite-difference approximation (continued)

     If we consider homogeneous Neumann boundary conditions (x′(0) = x′(1) = 0) and the 3-point-stencil FD approximation

        −∆x(t) ≈ ( −x(t − h) + 2x(t) − x(t + h) ) / h²,   h² = n⁻²,

     then

        L = [  1 −1          0 ]
            [ −1  2 −1         ]
            [     ⋱  ⋱  ⋱      ]
            [  0       −1    1 ],   ker(L) = span{1}.
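Likewise for the Neumann matrix, whose kernel is spanned by the constant vector:

```python
import numpy as np

# The Neumann finite-difference Laplacian: as the Dirichlet one,
# but with 1 instead of 2 in the two corner entries.
def laplacian_neumann(n):
    L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1.0
    return L

L = laplacian_neumann(10)
print(np.allclose(L @ np.ones(10), 0))       # constants lie in the kernel
print(np.linalg.matrix_rank(L))              # 9: the kernel is one-dimensional
```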

  14. An easy 1d example of oversmoothing, part 2

     Blur taken from heat(n, κ) in Regtools, with n = 100, κ = 1 and 2% noise; same step true solution x† as in part 1.

     [Plot: the true solution and the Tikhonov reconstructions with the Dirichlet and the Neumann Laplacian as L.]

  15. Graph Laplacian

     • An image/signal x can be represented by a weighted undirected graph G = (V, E, w):
       ◦ the nodes v_i ∈ V are the pixels of the image/signal, and x_i ≥ 0 is the color intensity of x at v_i;
       ◦ an edge e_{i,j} ∈ E ⊆ V × V exists if the pixels v_i and v_j are connected, i.e., v_i ∼ v_j;
       ◦ w : E → ℝ is a (positive) similarity weight function, w(e_{i,j}) = w_{i,j}.
     • The graph Laplacian is defined as (∆_w^{(n)} x)_i = Σ_{v_j ∼ v_i} w_{i,j} (x_i − x_j).

     Remark.

        ∫_{([0,1],µ)} x″(t) φ(t) dµ(t) = ∫_{([0,1],µ)} x(t) φ″(t) dµ(t).
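The definition (∆_w x)_i = Σ_{j ∼ i} w_{i,j}(x_i − x_j) is equivalent to the familiar matrix form ∆_w = D − W, with D the diagonal matrix of weighted degrees; the 4-node weight matrix below is a hypothetical example.

```python
import numpy as np

# Graph Laplacian from a symmetric weight matrix: Delta_w = D - W.
def graph_laplacian(W):
    return np.diag(W.sum(axis=1)) - W

# A small hypothetical weighted path graph on 4 nodes.
W = np.array([[0., 1., 0., 0.],
              [1., 0., 2., 0.],
              [0., 2., 0., 3.],
              [0., 0., 3., 0.]])
Delta = graph_laplacian(W)
print(np.allclose(Delta @ np.ones(4), 0))    # constants are always in the kernel
```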

  16. Graph Laplacian: example

     In the 1d case, if we define v_i ∼ v_j iff i = j + 1 or i = j − 1, and

        w_{i,j} = 1 if i ≠ j,   w_{i,j} = 0 if i = j,

     then it holds

        ∆_w^{(n)} = L_w^{(n)} = [  1 −1          0 ]
                                [ −1  2 −1         ]
                                [     ⋱  ⋱  ⋱      ]
                                [  0       −1    1 ].
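A quick numerical check of this example: the path-graph Laplacian coincides with the Neumann finite-difference matrix from slide 13.

```python
import numpy as np

def graph_laplacian(W):
    # (Delta_w x)_i = sum_{j ~ i} w_ij (x_i - x_j)
    return np.diag(W.sum(axis=1)) - W

# 1d path graph: w_ij = 1 iff |i - j| = 1.
n = 6
W = np.eye(n, k=1) + np.eye(n, k=-1)
Delta = graph_laplacian(W)

# It coincides with the Neumann finite-difference Laplacian.
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0
print(np.allclose(Delta, L))
```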

  17. Question. Why should the red points be connected?

     [Plot: the step signal from the earlier example, with two points highlighted in red.]

  18. Answer. They should not; indeed

        L_w^{(n)} = [ L_w^{(n/2)}      0       ]
                    [     0       L_w^{(n/2)}  ].

     [Plot: the true solution and the Tikhonov + graph Laplacian reconstruction.]
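The block structure can be verified numerically: removing the single edge that crosses the jump disconnects the path into two components, and the graph Laplacian becomes block-diagonal (the node indices here are illustrative).

```python
import numpy as np

def graph_laplacian(W):
    return np.diag(W.sum(axis=1)) - W

# Cut the one edge joining the two halves of the path graph.
n = 8
W = np.eye(n, k=1) + np.eye(n, k=-1)
W[n // 2 - 1, n // 2] = W[n // 2, n // 2 - 1] = 0.0   # remove the crossing edge

L = graph_laplacian(W)
half = graph_laplacian(np.eye(n // 2, k=1) + np.eye(n // 2, k=-1))
print(np.allclose(L[:n // 2, :n // 2], half))          # diagonal blocks
print(np.allclose(L[:n // 2, n // 2:], 0))             # off-diagonal blocks vanish
```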

  19. Fractional Tikhonov + graph Laplacian

     [Plot: the true solution and the fractional Tikhonov + graph Laplacian reconstruction with r = 4.]

  20. Remark 1/2

     [Plot: the true solution and the reconstruction labelled 'Tik. + graph, 1 p. con.'.]

  21. Remark 2/2

     [Plot: the true solution and the reconstruction labelled 'Tik. + graph, 5 p. con.'.]

  22. Another example: heat(n, 1), 5% noise

        L_w^{(n)} = diag( L_w^{(n/4)}, L_w^{(n/4)}, L_w^{(n/4)}, L_w^{(n/4)} ).

  23. Yet another example: deriv2(n, 3), 2% noise

     [Plot: the true solution and the reconstructions by Tikhonov and by fractional Tikhonov + graph.]

        L_w^{(n)} = [ L_w^{(n/2)}      0       ]
                    [     0       L_w^{(n/2)}  ],

        L_w^{(n/2)} = [  0  0  ⋯  ⋯  0 ]
                      [ −1  2 −1  ⋯  0 ]
                      [     ⋱  ⋱  ⋱    ]
                      [    −1  2 −1    ]
                      [  0  0  ⋯  ⋯  0 ],   ker(L_w^{(n/2)}) = span{1, t}.

  24. A third example: heat(n, 1), 2% noise

     [Plot: the true solution and the reconstructions by Tikhonov and by fractional Tikhonov + graph.]

  25. Some references

     • Shuman, D. I., Narang, S. K., Frossard, P., Ortega, A., and Vandergheynst, P., The emerging field of signal processing on graphs: extending high-dimensional data analysis to networks and other irregular domains, IEEE Signal Processing Magazine, 30(3), 83–98 (2013).
     • Faber, X. W. C., Spectral convergence of the discrete Laplacian on models of a metrized graph, New York Journal of Mathematics, 12, 97–121 (2006).
     • Bianchi, D., and Donatelli, M., On generalized iterated Tikhonov regularization with operator-dependent seminorms, Electronic Transactions on Numerical Analysis, 47, 73–99 (2017).
     • Gerth, D., Klann, E., Ramlau, R., and Reichel, L., On fractional Tikhonov regularization, Journal of Inverse and Ill-posed Problems, 23(6), 611–625 (2015).
     • Bianchi, D., and Donatelli, M., Fractional-Tikhonov regularization on graphs for image restoration, preprint.
