
Total variation denoising with iterated conditional expectation - PowerPoint PPT Presentation



  1. Total variation denoising with iterated conditional expectation. Cécile Louchet¹ and Lionel Moisan². ¹Université d'Orléans, Institut Denis Poisson, France. ²Université Paris Descartes, MAP5, France. Workshop IHP "Statistical modeling for shapes and imaging", March 12th, 2019.

  2. TV restoration of images. Image formation model: v = Au + n, where v ∈ R^Ω' is the observed image, A : R^Ω → R^Ω' is a linear operator (A = Id → denoising; A = k∗· → deblurring, ...), n is Gaussian additive white noise ∼ N(0, σ²), and u ∈ R^Ω is the image that we want to estimate. Rudin-Osher-Fatemi image recovery: choose û_ROF = argmin_{u ∈ R^Ω} E(u) := ‖Au − v‖² + λ TV(u), with Total Variation TV(u) = ‖∇u‖₁ and λ ≥ 0 a user-controlled regularity parameter.
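
To make the objective concrete, here is a minimal sketch (not part of the slides) that evaluates the ROF energy E(u) = ‖Au − v‖² + λ TV(u) in the denoising case A = Id, using the anisotropic 4-neighbor total variation that the talk adopts later; the function names and the finite-difference convention are illustrative assumptions.

```python
import numpy as np

def anisotropic_tv(u):
    """Anisotropic total variation: sum of absolute horizontal and vertical differences."""
    return np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum()

def rof_energy(u, v, lam):
    """ROF objective E(u) = ||u - v||^2 + lambda * TV(u) for the denoising case A = Id."""
    return np.sum((u - v) ** 2) + lam * anisotropic_tv(u)

# tiny illustrative call: at u = v the data term vanishes and only lambda * TV(v) remains
rng = np.random.default_rng(0)
v = rng.normal(size=(8, 8))
print(rof_energy(v, v, lam=10.0))
```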

  3. TV restoration of images: Bayesian viewpoint. û_ROF is a Maximum A Posteriori in a Bayes framework: û_ROF = argmin_u ‖Au − v‖² + λ TV(u) = argmax_u (1/Z) e^{−‖Au−v‖²/(2σ²)} · (1/Z') e^{−β TV(u)} (where β = λ/(2σ²)) = argmax_u P(v|u) P(u) = argmax_u P(u|v), with P(v|u) = (1/Z) e^{−‖Au−v‖²/(2σ²)} (image formation model) and P(u) = (1/Z') e^{−β TV(u)} (prior distribution).

  4. Restoration with TV-LSE. We have û_ROF = argmax_u P(u|v). Definition: image restoration by the TV-Least Square Estimator [1]: û_LSE = E[u|v] = (1/Z) ∫_{R^Ω} u e^{−(‖Au−v‖² + λ TV(u))/(2σ²)} du. No staircasing in LSE denoising (A = Id): ∀x, y ∈ Ω, the set {v ∈ R^Ω : û_LSE(x) = û_LSE(y)} has measure 0. Computation of TV-LSE: for each x ∈ Ω, û_LSE(x) = (1/Z) ∫_{R^Ω} u(x) e^{−E(u)/(2σ²)} du, an integral over R^Ω where |Ω| = number of pixels ≈ 10⁶... MCMC techniques apply, but with convergence in O(1/√N). [1] Louchet, Moisan (2013). Posterior expectation of the total variation model: Properties and experiments. SIIMS.
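
To illustrate the MCMC route mentioned on this slide, here is a minimal Metropolis-within-Gibbs sketch (my own illustration, not the authors' code) that averages samples of the posterior ∝ e^{−(‖u−v‖² + λTV(u))/(2σ²)} to approximate û_LSE in the denoising case; the proposal scale, sweep counts and the anisotropic 4-neighbor TV convention are assumptions.

```python
import numpy as np

def local_energy(u, v, x, y, val, lam):
    """Energy terms that involve pixel (x, y) when it takes value `val`:
    squared data fidelity plus lam times the absolute differences with the 4 neighbors."""
    h, w = u.shape
    e = (val - v[x, y]) ** 2
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < h and 0 <= ny < w:
            e += lam * abs(val - u[nx, ny])
    return e

def tv_lse_mcmc(v, lam, sigma, n_sweeps=2000, burn_in=500, step=None, seed=0):
    """Metropolis-within-Gibbs estimate of the posterior mean E[u | v] (slow, for illustration)."""
    rng = np.random.default_rng(seed)
    step = step if step is not None else sigma      # random-walk proposal scale (heuristic)
    u = v.astype(float).copy()
    mean = np.zeros_like(u)
    n_kept = 0
    h, w = u.shape
    for sweep in range(n_sweeps):
        for x in range(h):
            for y in range(w):
                prop = u[x, y] + step * rng.normal()
                d_e = local_energy(u, v, x, y, prop, lam) - local_energy(u, v, x, y, u[x, y], lam)
                if np.log(rng.uniform()) < -d_e / (2 * sigma ** 2):
                    u[x, y] = prop
        if sweep >= burn_in:        # Monte Carlo average; error decays like O(1/sqrt(N))
            mean += u
            n_kept += 1
    return mean / n_kept
```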

  5. Outline. 1. TV denoising with Iterated Conditional Expectations. 2. Other (imaging?) tasks with ICE: deblurring and inverse problems regularized with TV; TV-ICE denoising for Poisson noise; ICE of a convex functional; ICE of a convex set.

  6. Outline. 1. TV denoising with Iterated Conditional Expectations. 2. Other (imaging?) tasks with ICE: deblurring and inverse problems regularized with TV; TV-ICE denoising for Poisson noise; ICE of a convex functional; ICE of a convex set.

  7. The idea of TV-ICE denoising. Recall that in the case A = Id:
û_LSE(x) = ∫_{R^Ω} u(x) e^{−(‖u−v‖² + λ TV(u))/(2σ²)} du / ∫_{R^Ω} e^{−(‖u−v‖² + λ TV(u))/(2σ²)} du.
Idea: integrate one variable at a time,
f_{v(x)}(u(N_x)) := ∫_R u(x) e^{−(‖u−v‖² + λ TV(u))/(2σ²)} du(x) / ∫_R e^{−(‖u−v‖² + λ TV(u))/(2σ²)} du(x).
This is the posterior expectation of u(x) conditionally on u(x^c). It depends on the values of u(x^c), but we can iterate: convergence, hopefully? From now on, N_x is the 4-neighbor system and TV(u) = (1/2) Σ_{x∈Ω} Σ_{y∈N_x} |u(y) − u(x)|.
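
Before the closed formula on the next slide, here is a brute-force quadrature sketch (an illustration, not from the talk) of the one-pixel conditional expectation f_{v(x)}(u(N_x)) under the 4-neighbor anisotropic TV above; the integration range and resolution are arbitrary choices.

```python
import numpy as np

def f_cond_expectation_quadrature(t, neighbors, lam, sigma, n_points=20001):
    """Posterior expectation of u(x) given its 4 neighbors a_j and v(x) = t, by brute-force
    1-D quadrature of s * w(s) / integral of w(s), with
    w(s) = exp(-((s - t)^2 + lam * sum_j |s - a_j|) / (2 * sigma^2))."""
    a = np.asarray(neighbors, dtype=float)
    half_width = 10 * sigma + np.ptp(np.append(a, t))      # generous integration range (heuristic)
    s = np.linspace(t - half_width, t + half_width, n_points)
    log_w = -((s - t) ** 2 + lam * np.abs(s[:, None] - a[None, :]).sum(axis=1)) / (2 * sigma ** 2)
    w = np.exp(log_w - log_w.max())                        # rescale for numerical stability
    return float(np.sum(s * w) / np.sum(w))

# illustrative call: neighbors (a, b, c, d) and noisy value t = v(x)
print(f_cond_expectation_quadrature(t=100.0, neighbors=[90, 95, 105, 120], lam=20.0, sigma=10.0))
```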

  8. One iteration: closed formula. If u(N_x) = {a, b, c, d} with a ≤ b ≤ c ≤ d and if v(x) = t, then
f_t(u(N_x)) = t − [Σ_{i=0}^{n} μ_i I^t_{μ_i,ν_i}(a_i, a_{i+1})] / [Σ_{i=0}^{n} I^t_{μ_i,ν_i}(a_i, a_{i+1})],
where n = 4, {a_1, ..., a_4} = u(N_x) and −∞ = a_0 ≤ a_1 ≤ ··· ≤ a_4 ≤ a_5 = +∞,
∀i, μ_i = (λ/2) Σ_{j=1}^{n} ε_{i,j} and ν_i = −λ Σ_{j=1}^{n} ε_{i,j} a_j, with ε_{i,j} = 1 if i ≥ j and −1 otherwise,
and I^t_{μ,ν}(a, b) = e^{−(2μt − μ² + ν)/(2σ²)} [erf((b − t + μ)/(σ√2)) − erf((a − t + μ)/(σ√2))].
(The same formula holds for any n ∈ N* and any sorted n-uple (a_j).)
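
A minimal implementation sketch of the closed formula as reconstructed above (my own code, not the authors'); it can be cross-checked against the quadrature sketch after slide 7. The handling of the ±∞ endpoints via math.erf and the variable names are assumptions.

```python
import math

def f_closed_form(t, neighbors, lam, sigma):
    """Closed-form conditional expectation f_t(u(N_x)) for the 4-neighbor TV-ICE update,
    following the formula reconstructed above."""
    a = sorted(float(x) for x in neighbors)              # a_1 <= ... <= a_n
    n = len(a)
    bounds = [-math.inf] + a + [math.inf]                # a_0 = -inf, a_{n+1} = +inf
    num = den = 0.0
    for i in range(n + 1):                               # interval (a_i, a_{i+1})
        eps = [1.0 if i >= j else -1.0 for j in range(1, n + 1)]
        mu = 0.5 * lam * sum(eps)
        nu = -lam * sum(e * aj for e, aj in zip(eps, a))
        weight = math.exp(-(2 * mu * t - mu ** 2 + nu) / (2 * sigma ** 2))
        erf_diff = (math.erf((bounds[i + 1] - t + mu) / (sigma * math.sqrt(2)))
                    - math.erf((bounds[i] - t + mu) / (sigma * math.sqrt(2))))
        I = weight * erf_diff
        num += mu * I
        den += I
    return t - num / den

# should closely match the quadrature sketch given after slide 7
print(f_closed_form(t=100.0, neighbors=[90, 95, 105, 120], lam=20.0, sigma=10.0))
```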

  9. Theorem and definition of TV-ICE. Consider an image v : Ω → R and λ, σ > 0. The sequence (u_n)_{n≥0} defined recursively from any u_0 by u_{n+1}(x) = f_{v(x)}(u_n(N_x)) for all x ∈ Ω converges linearly to an image û_ICE independent of u_0, which satisfies ∀x ∈ Ω, û_ICE(x) = E_{u|v}[u(x) | u(x^c) = û_ICE(x^c)]. Idea of the proof: define F_v by u_{n+1} = F_v(u_n), i.e. F_v(u)(x) = f_{v(x)}(u(N_x)). F_v is C¹ and monotone (w_1 ≤ w_2 ⇒ F_v(w_1) ≤ F_v(w_2)) and satisfies f_{t−c}(w(N_x) − c) = f_t(w(N_x)) − c; together this implies ‖Jac F_v‖_∞ < 1. Moreover K_w = [min(min_Ω v, min_Ω w), max(max_Ω v, max_Ω w)]^Ω satisfies F_v(K_w) ⊂ K_w for any w.
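
For intuition, a minimal sketch (my illustration) of the fixed-point iteration u_{n+1}(x) = f_{v(x)}(u_n(N_x)), reusing the hypothetical f_closed_form from the previous sketch; the Jacobi-style sweep, the fixed iteration count and the replicated-border handling are assumptions.

```python
import numpy as np

def tv_ice_denoise(v, lam, sigma, n_iter=100):
    """Jacobi-style TV-ICE iteration: every pixel is updated from the previous iterate u_n."""
    u = v.astype(float).copy()        # u_0 = v (the theorem says the limit is the same for any u_0)
    h, w = u.shape
    for _ in range(n_iter):
        u_new = u.copy()
        for x in range(h):
            for y in range(w):
                # 4 neighbors with replicated borders (a convenience choice, not from the slides)
                nbrs = [u[max(x - 1, 0), y], u[min(x + 1, h - 1), y],
                        u[x, max(y - 1, 0)], u[x, min(y + 1, w - 1)]]
                u_new[x, y] = f_closed_form(v[x, y], nbrs, lam, sigma)
        u = u_new
    return u
```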

  10. Properties of TV-ICE denoising. ICE is not LSE (proof: LSE is a prox, ICE is not). No staircasing: let x and y be neighboring pixels; the set {v ∈ R^Ω : û_ICE(x) = û_ICE(y)} has measure 0 (proof: v ↦ û_ICE is C¹). Recovery of edges: v(x) − 2λ ≤ û_ICE(x) ≤ v(x) + 2λ, so a strong local contrast essentially persists (proof: f_t(a, b, c, d) is a weighted average, with positive coefficients, of t, t ± λ and t ± 2λ, hence it belongs to [t − 2λ, t + 2λ]). This latter property is shared with û_ROF and û_LSE.
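
A quick numerical sanity check (illustration only) of the recovery-of-edges bound, again reusing the hypothetical f_closed_form sketch from slide 8: for random neighbor configurations the update should stay in [t − 2λ, t + 2λ].

```python
import random

random.seed(0)
lam, sigma, t = 15.0, 10.0, 100.0
for _ in range(1000):
    nbrs = [random.uniform(0, 255) for _ in range(4)]
    val = f_closed_form(t, nbrs, lam, sigma)           # reuses the sketch given after slide 8
    assert t - 2 * lam - 1e-9 <= val <= t + 2 * lam + 1e-9
print("recovery-of-edges bound holds on 1000 random neighbor configurations")
```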

  11.–14. [Figure slides: visual comparison of the noisy image with the ROF, ICE and LSE denoising results]

  15. Convergence curves for different algorithms of TV-denoising. [Log-scale plot, y-axis from 1e-12 to 100, versus iteration number from 0 to 1000; curves: TV-LSE, ROF-dual, ROF-primal-dual, ROF-Weiss-Nesterov, Huber-ROF, TV-ICE]

  16. Outline. 1. TV denoising with Iterated Conditional Expectations. 2. Other (imaging?) tasks with ICE: deblurring and inverse problems regularized with TV; TV-ICE denoising for Poisson noise; ICE of a convex functional; ICE of a convex set.

  17. Outline. 1. TV denoising with Iterated Conditional Expectations. 2. Other (imaging?) tasks with ICE: deblurring and inverse problems regularized with TV; TV-ICE denoising for Poisson noise; ICE of a convex functional; ICE of a convex set.

  18. The idea of TV-ICE restoration. Definition of the ICE sequence: start with an arbitrary u_0 and for all n ∈ N set u_{n+1}(x) = (1/Z) ∫_R u_n(x) e^{−(‖Au_n − v‖² + λ TV(u_n))/(2σ²)} du_n(x). Is it computable? Does it converge, u_n → û_ICE? The iterations are easy to deduce from the denoising case! Case where ‖Aδ_x‖² does not depend on x: we have w_{n+1} = u_n − γ A*(Au_n − v) and u_{n+1} = F_{w_{n+1}}(u_n) with parameters (γσ², γλ), where γ = ‖Aδ_x‖^{−2}.
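
As one reading of that two-step structure, here is a sketch (my interpretation, not the authors' code) alternating an explicit gradient step on the data term with a denoising-type ICE update applied with the data image w_{n+1}; the generic operator interface, the initialization and the way the rescaled parameters (γσ², γλ) enter the closed-form update are assumptions, and it reuses the hypothetical f_closed_form from slide 8.

```python
import numpy as np

def tv_ice_restore(v, A, At, gamma, lam, sigma, n_iter=100):
    """TV-ICE-style restoration sketch for a linear operator A (with adjoint At) such that
    ||A delta_x||^2 = 1/gamma for every pixel x: an explicit step on the data term followed by a
    denoising-type ICE update with the rescaled parameters (gamma*sigma^2, gamma*lam)."""
    u = At(v).astype(float)                      # crude initialization (assumption)
    h, wd = u.shape
    sigma_eff = np.sqrt(gamma) * sigma           # noise variance gamma * sigma^2
    lam_eff = gamma * lam
    for _ in range(n_iter):
        w = u - gamma * At(A(u) - v)             # w_{n+1} = u_n - gamma A*(A u_n - v)
        u_new = np.empty_like(u)
        for x in range(h):
            for y in range(wd):
                nbrs = [u[max(x - 1, 0), y], u[min(x + 1, h - 1), y],
                        u[x, max(y - 1, 0)], u[x, min(y + 1, wd - 1)]]
                u_new[x, y] = f_closed_form(w[x, y], nbrs, lam_eff, sigma_eff)   # F_{w_{n+1}}(u_n)
        u = u_new
    return u
```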

  19. Convergence condition. Assumptions: γ = ‖Aδ_x‖^{−2} does not depend on x, and A1_Ω = 1_Ω'. Theorem: if γ < 2, then (u_n)_{n∈N} converges linearly to a limit û_ICE independent of u_0 such that ∀x ∈ Ω, û_ICE(x) = E_{u|v}[u(x) | u(x^c) = û_ICE(x^c)]. But for each γ ≥ 2 there are always cases of non-convergence. Deconvolution: if A = k∗· with Σk = 1, then γ = 1/‖k‖². Gaussian blur: γ < 2 ⇔ σ_A ≲ 0.5 pixel → k should be very concentrated. Zooming: if A is block-averaging, blocks should have size < 2. → Very limited applications!
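
A small check of the deconvolution condition γ = 1/‖k‖² < 2 for a normalized Gaussian kernel (illustration only; the truncation radius and the discrete sampling are arbitrary choices, so the exact crossover value is only indicative), showing how quickly the condition fails as the blur widens.

```python
import numpy as np

def gamma_for_gaussian_blur(sigma_a, radius=10):
    """gamma = 1 / ||k||^2 for a normalized, truncated 2-D Gaussian kernel of std sigma_a."""
    xs = np.arange(-radius, radius + 1)
    X, Y = np.meshgrid(xs, xs)
    k = np.exp(-(X ** 2 + Y ** 2) / (2 * sigma_a ** 2))
    k /= k.sum()                                  # enforce sum(k) = 1
    return 1.0 / np.sum(k ** 2)

for s in (0.3, 0.45, 0.5, 1.0):
    g = gamma_for_gaussian_blur(s)
    print(f"sigma_A = {s}: gamma = {g:.2f} -> natural ICE iteration converges: {g < 2}")
```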

  20. 4 possible strategies to ensure convergence. Base iteration: (1) w_{n+1} = u_n − γ A*(Au_n − v) = (I − γ A*A) u_n + γ A*v; (2) u_{n+1}(x) = F_{w_{n+1}}(u_n).
1st strategy: averaging on u. Replace step (2) with u_{n+1} = (1 − r) u_n + r F_{w_{n+1}}(u_n). Observation: r ≤ min(1, 2/(γ ρ(A*A))) ⇒ linear convergence.
2nd strategy: averaging on w. Replace step (1) with w_{n+1} = (1 − s) w_n + s (u_n − γ A*(Au_n − v)). Observation: s ≤ min(1, 2/(γ ρ(A*A))) ⇒ linear convergence.
3rd strategy: set γ free. Replace step (1) with w_{n+1} = u_n − τ A*(Au_n − v), τ > 0. Observation: τ < 2 ⇒ linear convergence.
4th strategy: "implicitize". Replace step (1) with w_{n+1} = (I + γ A*A)^{−1}(u_n + γ A*v). Observation: linear convergence.
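
A sketch of how the first two strategies modify the basic loop (illustrative only; the relaxation parameters r and s are assumed to be chosen within the bounds stated above), reusing the hypothetical f_closed_form and operator interface of the earlier sketches.

```python
import numpy as np

def tv_ice_restore_relaxed(v, A, At, gamma, lam, sigma, r=1.0, s=1.0, n_iter=100):
    """Averaged TV-ICE restoration: relaxation on w (2nd strategy) and on u (1st strategy).
    r = s = 1 recovers the basic iteration; according to the slide, choosing
    r, s <= min(1, 2 / (gamma * rho(A*A))) guarantees linear convergence."""
    u = At(v).astype(float)
    w = u.copy()
    sigma_eff, lam_eff = np.sqrt(gamma) * sigma, gamma * lam
    h, wd = u.shape
    for _ in range(n_iter):
        w = (1 - s) * w + s * (u - gamma * At(A(u) - v))      # averaging on w
        ice = np.empty_like(u)
        for x in range(h):
            for y in range(wd):
                nbrs = [u[max(x - 1, 0), y], u[min(x + 1, h - 1), y],
                        u[x, max(y - 1, 0)], u[x, min(y + 1, wd - 1)]]
                ice[x, y] = f_closed_form(w[x, y], nbrs, lam_eff, sigma_eff)
        u = (1 - r) * u + r * ice                             # averaging on u
    return u
```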

  21. Application to image deblurring. Framework: A = k∗· ⇒ γ = 1/‖k‖². If ‖k‖² > 1/2, the natural strategy applies: w_{n+1} = u_n − γ ǩ∗(k∗u_n − v); u_{n+1} = F_{w_{n+1}}(u_n). Otherwise, the averaging and free-gamma strategies always apply: w_{n+1} = (1 − s) w_n + s (u_n − τ ǩ∗(k∗u_n − v)); u_{n+1} = (1 − r) u_n + r F_{w_{n+1}}(u_n). The implicit strategy is available only for periodic boundary conditions: w_{n+1} = (I + γ A*A)^{−1}(u_n + γ A*v) = F^{−1}[(F u_n + γ (Fk)* · F v) / (1 + γ |Fk|²)]; u_{n+1} = F_{w_{n+1}}(u_n).
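
The implicit w-step is a standard Tikhonov/Wiener-type filter; here is a minimal numpy sketch of that single step under periodic convolution (the kernel padding and centering convention is an assumption).

```python
import numpy as np

def implicit_w_step(u, v, k, gamma):
    """One implicit step w = (I + gamma A*A)^{-1} (u + gamma A* v) for A = periodic convolution
    with kernel k, computed in the Fourier domain. k is assumed to be zero-padded to the image
    size with its center at pixel (0, 0)."""
    K = np.fft.fft2(k)
    U, V = np.fft.fft2(u), np.fft.fft2(v)
    W = (U + gamma * np.conj(K) * V) / (1 + gamma * np.abs(K) ** 2)
    return np.real(np.fft.ifft2(W))
```

Replacing the explicit gradient step of the earlier sketches by this w-step gives the implicit strategy of slide 20, at the price of requiring periodic boundary conditions.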

  22. [Figure: deblurring example. Periodic convolution with a constant (uniform) kernel supported on a disc of radius 3.5 pixels, plus Gaussian noise with standard deviation 2]
