A homotopy method in regularization of total variation denoising problems
Luis A. Melara, Colorado College
Joint work with: A. J. Kearsley (NIST) and R. A. Tapia (Rice University)
MCSD, NIST, Gaithersburg, Maryland, USA
Tuesday 30 August 2005
OUTLINE
• Motivation
• Previous Work
• Problem Formulation
• Computational Method
• Results
• Conclusions
Image Restoration
Three categories:
• Statistical methods.
• Transform-based methods.
• Optimization-based methods.
Optimization-based methods
• Well suited to images containing sharp edges.
• The restored image solves a constrained optimization problem.
Bounded Variation Seminorm
Let $x \in \Omega \subset \mathbb{R}^2$ and let $u : \Omega \rightarrow \mathbb{R}$. The seminorm is
$$\int_\Omega \|\nabla u\|_2 \, dx = \int_\Omega \sqrt{\left(\frac{\partial u}{\partial x}\right)^2 + \left(\frac{\partial u}{\partial y}\right)^2} \, dx.$$
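Illustrative sketch (added here, not in the original talk): a discrete version of this seminorm with forward differences, assuming $u$ is sampled on a uniform grid with mesh size $h$; the function name is hypothetical.

import numpy as np

def tv_seminorm(u, h=1.0):
    # Approximate int sqrt(u_x^2 + u_y^2) dx with forward differences
    # on a uniform grid of mesh size h.
    ux = np.diff(u, axis=1) / h   # approximates du/dx
    uy = np.diff(u, axis=0) / h   # approximates du/dy
    ux = ux[:-1, :]               # trim to a common interior shape
    uy = uy[:, :-1]
    return np.sum(np.sqrt(ux**2 + uy**2)) * h**2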
Contaminated image $u_{obs}$
Observed image $u_{obs}$:
$$u_{obs}(x) = u_{exact}(x) + \eta(x).$$
• $\eta \in H^1(\Omega)$ is noise.
• $u_{exact}$ is the exact image.
Define
$$\sigma = \left( \int_\Omega |u_{exact} - u_{obs}|^2 \, dx \right)^{1/2} > 0.$$
Equality Constrained Optimization Problem (EC)
Find $u \in H^1(\Omega)$ solving
$$\min_{u \in H^1(\Omega)} \int_\Omega \|\nabla u\|_2 \, dx \quad \text{s.t.} \quad \frac{1}{2}\left( \|u - u_{obs}\|^2_{L^2(\Omega)} - \sigma^2 \right) = 0.$$
Lagrangian Functional
To solve EC, we write the Lagrangian functional
$$\ell(u, \lambda) = \int_\Omega \|\nabla u\|_2 \, dx + \frac{\lambda}{2}\left( \|u - u_{obs}\|^2_{L^2(\Omega)} - \sigma^2 \right),$$
where $\lambda \in \mathbb{R}$ is the Lagrange multiplier.
Euler-Lagrange Equations
Find $u \in H^1(\Omega)$ such that
$$M(u, \lambda) = -\nabla \cdot \left( \frac{\nabla u}{\|\nabla u\|_2} \right) + \lambda (u - u_{obs}) = 0, \quad x \in \Omega,$$
and
$$\frac{\partial u}{\partial n} = 0, \quad x \in \Gamma,$$
where $\Gamma$ is the boundary of $\Omega$.
Previous Work
Rudin, Osher and Fatemi (1992):
$$\frac{\partial u}{\partial s} = \nabla \cdot \left( \frac{\nabla u}{\|\nabla u\|_2} \right) - \lambda (u - u_{obs}),$$
where
• $s > 0$, $x \in \Omega$,
• $u(x, y, 0)$ is given, and
• $\partial u / \partial n = 0$ for $x \in \Gamma$.
At the steady-state solution:
$$\lambda = \frac{1}{2\sigma^2} \int_\Omega \left( \|\nabla u\|_2 - \frac{\nabla u_{obs} \cdot \nabla u}{\|\nabla u\|_2} \right) dx.$$
Regularization
If $\nabla u = 0$, then $M(u, \lambda)$ is undefined. $\Rightarrow$ Regularized Lagrangian functional:
$$\ell_\epsilon(u, \lambda) = \int_\Omega \sqrt{\|\nabla u\|_2^2 + \epsilon^2} \, dx + \frac{\lambda}{2}\left( \|u - u_{obs}\|^2_{L^2(\Omega)} - \sigma^2 \right).$$
Associated Euler-Lagrange Equations
Find $u \in H^1(\Omega)$ such that
$$M_\epsilon(u, \lambda) = -\nabla \cdot \left( \frac{\nabla u}{\sqrt{\|\nabla u\|_2^2 + \epsilon^2}} \right) + \lambda (u - u_{obs}) = 0, \quad x \in \Omega,$$
with $\partial u / \partial n = 0$ for $x \in \Gamma$. For $\epsilon > 0$, $M_\epsilon(u, \lambda)$ is well-defined.
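Hedged sketch (not from the talk): $M_\epsilon$ evaluated on a uniform grid with centered differences, the Neumann condition imposed by edge padding. This is an illustrative finite-difference rendering, not the finite element discretization used later.

import numpy as np

def M_eps(u, u_obs, lam, eps, h=1.0):
    # M_eps(u, lam) = -div( grad u / sqrt(|grad u|^2 + eps^2) ) + lam (u - u_obs)
    up = np.pad(u, 1, mode='edge')               # edge padding ~ du/dn = 0 on Gamma
    ux = (up[1:-1, 2:] - up[1:-1, :-2]) / (2*h)  # centered d/dx
    uy = (up[2:, 1:-1] - up[:-2, 1:-1]) / (2*h)  # centered d/dy
    d = np.sqrt(ux**2 + uy**2 + eps**2)          # regularized gradient magnitude
    px, py = ux / d, uy / d                      # components of grad u / d
    pxp = np.pad(px, 1, mode='edge')
    pyp = np.pad(py, 1, mode='edge')
    div = (pxp[1:-1, 2:] - pxp[1:-1, :-2]) / (2*h) \
        + (pyp[2:, 1:-1] - pyp[:-2, 1:-1]) / (2*h)
    return -div + lam * (u - u_obs)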
Related Previous Work
Dobson, Oman and Vogel (1996). Solve
$$\min_{u \in H^1(\Omega)} \frac{1}{2} \|u - u_{obs}\|^2_{L^2(\Omega)} + \gamma \int_\Omega \sqrt{\|\nabla u\|_2^2 + \epsilon^2} \, dx,$$
where $\gamma \in \mathbb{R}$ is a small penalization parameter (Majava, 2001). Newton's method is used.
Hessian of Lagrangian
The Hessian $A$ of the Lagrangian is given by
$$\langle Am, m \rangle = \int_\Omega \left[ -\nabla \cdot \left( \frac{\nabla m}{\sqrt{\|\nabla u\|_2^2 + \epsilon^2}} \right) + \nabla \cdot \left( \frac{(\nabla u \cdot \nabla m)\, \nabla u}{\left( \sqrt{\|\nabla u\|_2^2 + \epsilon^2} \right)^3} \right) \right] m + \lambda m^2 \, dx,$$
for $m \in H^1(\Omega)$. Denote $J_\epsilon(u; m, m) = \langle Am, m \rangle$.
Idea
Newton's method $\Rightarrow$ Kantorovich Theorem. Show a direct relationship between the regularization parameter $\epsilon$ and the radius of the Kantorovich ball.
Proposition 1
Given $m, p \in H^1(\Omega)$, the following inequality holds:
$$||| J_\epsilon(m; \cdot, \cdot) - J_\epsilon(p; \cdot, \cdot) ||| \leq c_1(\epsilon) \, \|m - p\|_{H^1(\Omega)}, \qquad (1)$$
with
$$c_1(\epsilon) = 3 N(Q) \, \frac{\epsilon^2}{h} \cdot \max_{Q_k \subset \Omega} \left\{ \frac{1}{D_1^2}, \frac{1}{D_2^2}, \frac{1}{D_1 D_2} \right\},$$
where $D_1 = \sqrt{\|\nabla m\|_2^2 + \epsilon^2}$ and $D_2 = \sqrt{\|\nabla p\|_2^2 + \epsilon^2}$.
Proposition 2
Let $u \in H^1(\Omega)$. If the Jacobian $J_\epsilon(u; \cdot, \cdot)$ is symmetric positive definite, then it has a bounded inverse with
$$||| J_\epsilon(u; \cdot, \cdot)^{-1} ||| \leq c_2(\epsilon) = \left[ \min_{Q_k \subset \Omega} \left( \frac{\epsilon^2}{(D_3)^3}, \, \lambda \right) \right]^{-1},$$
where $D_3 = \sqrt{\|\nabla u\|_2^2 + \epsilon^2}$.
Definition
Let
$$B_p(u^{(0)}, r) = \{ u : \|u - u^{(0)}\|_p < r \}$$
be the $p$-ball of radius $r$ centered at $u^{(0)}$.
Theorem (Kantorovich)
Consider a function $L_\epsilon : \mathbb{R}^n \rightarrow \mathbb{R}^n$ defined on a convex set $C \subseteq \mathbb{R}^n$, with Jacobian operator $J_\epsilon$, and assume $J_\epsilon$ is Lipschitz with constant $\alpha_L$. Assume that for some starting point $u^{(0)} \in C$ the following all hold:
1. $||| J_\epsilon(u; \cdot, \cdot) - J_\epsilon(v; \cdot, \cdot) ||| \leq \alpha_L \|u - v\|$, $\forall u, v \in C$,
2. $||| J_\epsilon(u^{(0)}; \cdot, \cdot)^{-1} ||| \leq \alpha_1$,
3. $||| J_\epsilon(u^{(0)}; \cdot, \cdot)^{-1} L_\epsilon(u^{(0)}; \cdot) ||| \leq \alpha_2$.
If $\delta = \alpha_L \alpha_1 \alpha_2 \leq 1/2$, and if $B_p(u^{(0)}, r) \subseteq C$ with
$$r = \alpha_2 \left( 1 - \sqrt{1 - 2\delta} \right) / \delta,$$
then the Newton sequence $\{u^{(n)}\}$ given by
$$u^{(n+1)} = u^{(n)} - J_\epsilon(u^{(n)}; \cdot, \cdot)^{-1} L_\epsilon(u^{(n)}; \cdot)$$
is well-defined, remains in the ball $B_p(u^{(0)}, r)$, and converges to the unique solution of $L_\epsilon(u^*; \cdot) = 0$ inside $B_p(u^{(0)}, r)$.
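A one-dimensional toy illustration of the Kantorovich quantities (an assumed example, not from the talk): $L(u) = u^2 - 2$ with Jacobian $J(u) = 2u$.

import numpy as np

u0 = 2.0
alpha_L = 2.0                              # |J(u) - J(v)| = 2 |u - v|
alpha_1 = 1.0 / abs(2.0 * u0)              # bound on |J(u0)^{-1}|
alpha_2 = abs((u0**2 - 2.0) / (2.0 * u0))  # |J(u0)^{-1} L(u0)|
delta = alpha_L * alpha_1 * alpha_2        # = 0.25 <= 1/2, theorem applies
r = alpha_2 * (1.0 - np.sqrt(1.0 - 2.0 * delta)) / delta

u = u0
for _ in range(6):                         # Newton iterates stay in B(u0, r)
    u = u - (u**2 - 2.0) / (2.0 * u)
# r equals |u0 - sqrt(2)| here: the radius bound is tight for this quadratic.
print(delta, r, u, np.sqrt(2.0))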
Summary
Large $\epsilon$ $\Rightarrow$ nice convergence but an undesirable solution $u^*$.
Small $\epsilon$ $\Rightarrow$ bad convergence but a desirable solution $u^*$.
From Proposition 1, Proposition 2 and the Kantorovich Theorem:
• Large $\epsilon_0$ $\Rightarrow$ $\| u^{(n)}(u^{(0)}) - u^*_0 \| \rightarrow 0$.
• Reduce $\epsilon_0$ $\Rightarrow$ $\| u^{(n)}(u^*_0) - u^*_1 \| \rightarrow 0$.
• Reduce $\epsilon_1$ $\Rightarrow$ $\| u^{(n)}(u^*_1) - u^*_2 \| \rightarrow 0$.
• ...
Homotopy
[Figure: nested Kantorovich balls of radii $r(\epsilon_1)$, $r(\epsilon_2)$, $r(\epsilon_3)$ along the homotopy path from $u^{(0)}$ to $u^*$.]
From Propositions 1 and 2: $\alpha_L$ and $\alpha_1$ depend on $\epsilon$ $\Rightarrow$ the radius $r$ of $B_p(u, r)$ depends on $\epsilon$.
Numerical Implementation
Let
$$\hat{Q} = (0, 1) \times (0, 1), \qquad \Omega = \bigcup_{k=1}^{N(Q)} Q_k,$$
where $N(Q)$ is the number of elements in $\Omega$. Define the affine map $F$:
$$F(\hat{x}) = D \hat{x} + d = x, \quad \hat{x} \in \hat{Q}, \; x \in Q_k,$$
with
$$D = \begin{pmatrix} h & 0 \\ 0 & h \end{pmatrix}, \qquad d = \begin{pmatrix} ih \\ jh \end{pmatrix},$$
where $i, j = 1, \ldots, N$; $N$ is the number of partitions along the $x$- and $y$-axes, and $h = 1/N$ is the mesh size along both axes.
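A minimal sketch of this element map (added for illustration; the function name is an assumption):

import numpy as np

def element_map(i, j, h):
    # F(xhat) = D @ xhat + d maps the reference square Qhat = (0,1)x(0,1)
    # to the element Q_k with lower-left corner (ih, jh).
    D = np.array([[h, 0.0], [0.0, h]])
    d = np.array([i * h, j * h])
    return lambda xhat: D @ np.asarray(xhat) + d

F = element_map(2, 3, h=0.25)
print(F([0.0, 0.0]), F([1.0, 1.0]))   # corners of Q_k: (0.5, 0.75) and (0.75, 1.0)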
Visual Description
[Figure: the affine map $F$ takes the reference element $\hat{Q}$ to the element $Q_k$.]
Here $\hat{Q} = (0, 1) \times (0, 1)$ and $Q_k = (ih, (i+1)h) \times (jh, (j+1)h) \subset \Omega$.
Function composition
Using the affine map $F$,
$$u(x) = \left( \hat{u} \circ F^{-1} \right)(x) = \hat{u}(F^{-1}(x)).$$
By the chain rule:
$$\nabla u(x) = \nabla F^{-1} \cdot \hat{\nabla} \hat{u}\left( F^{-1}(x) \right),$$
with $\nabla = (\partial/\partial x, \partial/\partial y)$ and $\hat{\nabla} = (\partial/\partial \hat{x}, \partial/\partial \hat{y})$.
Function Approximation
Approximation of $u$:
$$u = \sum_{k=1}^{N(h)} u_k \varphi_k,$$
where $u_k = u(ih, jh)$ for some $i, j$, and $N(h)$ is the number of nodes in $\Omega$. The $k$th basis function $\varphi_k \in P_1$, where $P_1 = \mathrm{span}\{1, x, y\}$.
Objective Function
$$\int_\Omega \sqrt{\|\nabla u\|_2^2 + \epsilon^2} \, dx = \sum_{k=1}^{N(Q)} \int_{Q_k} \sqrt{\|\nabla u\|_2^2 + \epsilon^2} \, dx.$$
Use $F$ to obtain, in each $Q_k$:
$$\int_{Q_k} \sqrt{\|\nabla u\|_2^2 + \epsilon^2} \, dx = \int_{\hat{Q}} \sqrt{\|\nabla F^{-1} \hat{\nabla} \hat{u}\|_2^2 + \epsilon^2} \, \det(\hat{\nabla} F) \, d\hat{x}.$$
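A hedged sketch of the elementwise assembly, assuming one-point quadrature per element (the talk does not specify the quadrature rule):

import numpy as np

def tv_objective(u, eps, h):
    # Approximates sum_k int_{Q_k} sqrt(|grad u|^2 + eps^2) dx with u given
    # at the nodes of a uniform grid; the element gradient is averaged from
    # the four surrounding nodal values, and det(grad F) = h^2 per element.
    ux = (u[:-1, 1:] - u[:-1, :-1] + u[1:, 1:] - u[1:, :-1]) / (2.0 * h)
    uy = (u[1:, :-1] - u[:-1, :-1] + u[1:, 1:] - u[:-1, 1:]) / (2.0 * h)
    return np.sum(np.sqrt(ux**2 + uy**2 + eps**2)) * h**2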
Equality Constraint
Similarly,
$$\frac{1}{2} \left( \int_\Omega |u - u_{obs}|^2 \, dx - \sigma^2 \right) = \frac{1}{2} \left( \sum_{k=1}^{N(Q)} \int_{Q_k} |u - u_{obs}|^2 \, dx - \sigma^2 \right).$$
Again, use the mapping $F$. Idea:
$$\int_{Q_k} |u|^2 \, dx = \int_{\hat{Q}} |\hat{u}|^2 \, \det(\hat{\nabla} F) \, d\hat{x}.$$
Optimization Method
Let $u \in H^1(\Omega)$. Let
$$f(u) = \int_\Omega \sqrt{\|\nabla u\|_2^2 + \epsilon^2} \, dx$$
and
$$g(u) = \frac{1}{2} \left( \int_\Omega |u - u_{obs}|^2 \, dx - \sigma^2 \right).$$
Augmented Lagrangian $L(u, \lambda, \rho)$:
$$L(u, \lambda, \rho) = f(u) + \lambda \, g(u) + \frac{\rho}{2} \, g(u)^2,$$
where $\lambda \in \mathbb{R}$ and $\rho \in \mathbb{R}$.
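A minimal discrete sketch of $g$ and $L$, reusing tv_objective from the earlier sketch; a nodal sum times $h^2$ stands in for the $L^2$ integral (an assumption, not the authors' code):

import numpy as np

def g_constraint(u, u_obs, sigma, h):
    # Discrete g(u) = (1/2) ( int |u - u_obs|^2 dx - sigma^2 ).
    return 0.5 * (np.sum((u - u_obs)**2) * h**2 - sigma**2)

def aug_lagrangian(u, u_obs, lam, rho, sigma, eps, h):
    # Discrete L(u) = f(u) + lam g(u) + (rho/2) g(u)^2.
    g = g_constraint(u, u_obs, sigma, h)
    return tv_objective(u, eps, h) + lam * g + 0.5 * rho * g**2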
Notation: Discrete Approximations
• Denote the discrete approximation of $f(u)$ by $f(\mathbf{u})$, $\mathbf{u} \in \mathbb{R}^{N(h)}$.
• Denote the discrete approximation of $g(u)$ by $g(\mathbf{u})$, $\mathbf{u} \in \mathbb{R}^{N(h)}$.
• Simplify $L(u, \lambda, \rho)$ and denote $L(\mathbf{u}) = L(\mathbf{u}, \lambda, \rho)$, $\mathbf{u} \in \mathbb{R}^{N(h)}$.
Here $N(h)$ is the number of nodes in $\Omega$.
Gradient of $L(\mathbf{u})$
Find $\mathbf{u} \in \mathbb{R}^{N(h)}$ such that $\nabla L(\mathbf{u}) = 0$, where $\nabla L(\mathbf{u}) \in \mathbb{R}^{N(h)}$ with
$$(\nabla L(\mathbf{u}))_i = (\nabla f(\mathbf{u}))_i + \lambda (\nabla g(\mathbf{u}))_i + \rho \, g(\mathbf{u}) (\nabla g(\mathbf{u}))_i = \frac{\partial f(\mathbf{u})}{\partial \mathbf{u}_i} + \lambda \frac{\partial g(\mathbf{u})}{\partial \mathbf{u}_i} + \rho \, g(\mathbf{u}) \frac{\partial g(\mathbf{u})}{\partial \mathbf{u}_i},$$
for $i = 1, \ldots, N(h)$.
Hessian of $L(\mathbf{u})$
The $ij$th component of the Hessian $A \in \mathbb{R}^{N(h) \times N(h)}$ of $L(\mathbf{u})$ is
$$A_{ij} = \frac{\partial^2 L(\mathbf{u})}{\partial \mathbf{u}_i \partial \mathbf{u}_j},$$
where $i, j = 1, \ldots, N(h)$.
Newton's Method
Given $\mathbf{u}^{(0)}$, the Newton sequence $\{\mathbf{u}^{(n)}\}$ is generated by
$$\mathbf{u}^{(n+1)} = \mathbf{u}^{(n)} + \alpha \mathbf{s}^{(n)},$$
where $\alpha \in \mathbb{R}$ and $\mathbf{s}^{(n)} \in \mathbb{R}^{N(h)}$ solves
$$A^{(n)} \mathbf{s}^{(n)} = -\nabla L(\mathbf{u}^{(n)}).$$
Sufficient Decrease Criteria
Set $\alpha := 1$. If the inequality
$$L(\mathbf{u}^{(n)} + \alpha \mathbf{s}^{(n)}) < L(\mathbf{u}^{(n)}) + 10^{-4} \alpha \, \nabla L(\mathbf{u}^{(n)}) \cdot \mathbf{s}^{(n)}$$
is not met, then set $\alpha := \alpha/2$ and test again. Otherwise, update
$$\mathbf{u}^{(n+1)} = \mathbf{u}^{(n)} + \alpha \mathbf{s}^{(n)}$$
and reset $\alpha := 1$.
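A generic backtracking sketch of this test, for flattened NumPy vectors; the safeguard on a tiny step is an assumption, not in the talk:

def armijo_step(L, u, s, grad_L):
    # Halve alpha until the sufficient decrease inequality above holds.
    alpha, slope = 1.0, grad_L @ s
    while L(u + alpha * s) >= L(u) + 1e-4 * alpha * slope:
        alpha /= 2.0
        if alpha < 1e-12:        # assumed safeguard against a zero step
            break
    return u + alpha * s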
Lagrange multiplier update
Least-squares update formula:
$$\lambda = -\frac{\nabla f(\mathbf{u}) \cdot \nabla g(\mathbf{u})}{\nabla g(\mathbf{u}) \cdot \nabla g(\mathbf{u})}.$$
This is a diagonalized multiplier method; its convergence properties follow from Tapia (1977).
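In the discrete setting this update is one line (a sketch; the function name is assumed):

import numpy as np

def ls_multiplier(grad_f, grad_g):
    # lambda = -(grad f . grad g) / (grad g . grad g), with flattened vectors.
    gf, gg = grad_f.ravel(), grad_g.ravel()
    return -np.dot(gf, gg) / np.dot(gg, gg)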
Algorithm
Initialize $\lambda$, $\rho$, $\epsilon$.
do i = 1, ITERS1
    do j = 1, ITERS2
        Solve $\nabla L(\mathbf{u}) = 0$ (Newton's method)
        $\lambda^+$ := least-squares update
    end do
    $\epsilon^+ := 10^{-i}$
    Reinitialize $\lambda^+$
end do
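A Python skeleton of this loop (hedged: newton_solve and ls_update are placeholders for a damped Newton solver of $\nabla L(\mathbf{u}) = 0$ and the least-squares update above; the initial values are assumptions):

def homotopy_driver(u, newton_solve, ls_update, iters1=5, iters2=3):
    # The outer loop reduces eps by a factor of ten; each Newton solve is
    # warm-started at the solution for the previous, larger eps.
    lam, eps = 0.0, 1.0                    # assumed initializations
    for i in range(1, iters1 + 1):
        for _ in range(iters2):
            u = newton_solve(u, lam, eps)  # solve grad L(u) = 0
            lam = ls_update(u)             # least-squares multiplier update
        eps = 10.0 ** (-i)                 # eps^+ := 10^{-i}
        lam = 0.0                          # reinitialize the multiplier
    return u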
Numerical Examples
Let $\Omega = (0, 1) \times (0, 1) \subset \mathbb{R}^2$, with true image $u_{exact}$. Component $(u_{obs})_i$ is given by
$$(u_{obs})_i = (u_{exact})_i + \texttt{rand(1)} \cdot C,$$
where $C \approx 10^{-1}$ and rand.m (a Matlab built-in function) produces uniformly distributed random numbers on $(0, 1)$.
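A sketch of the contamination step in NumPy, with an assumed piecewise-constant test image standing in for the unspecified $u_{exact}$:

import numpy as np

h = 1.0 / 64
u_exact = np.zeros((65, 65))
u_exact[16:48, 16:48] = 1.0                           # assumed test image on Omega
C = 1e-1
u_obs = u_exact + C * np.random.rand(*u_exact.shape)  # uniform noise, like rand.m
sigma = np.sqrt(np.sum((u_exact - u_obs)**2) * h**2)  # noise level sigma > 0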