Compressive Strategies for Inverse Problems

Gerd Teschke
joint work with C. Borries, R. Ramlau, and M. Zhariy

Institute for Computational Mathematics in Science and Technology
Neubrandenburg University of Applied Sciences, Germany
Konrad-Zuse-Institute Berlin (ZIB), Germany

AIP 2009, Wien

Gerd Teschke (23. Juli 2009) 1/29
1 Compressive strategies
2 Compression by adaptive recovery
3 Compression by acceleration
4 Nonlinear sensing and sparse recovery
Compressive strategies
Inverse problems and sparse recovery

- A growing research topic for the past decade, with breakthroughs in many applications
- Several approaches: statistical and deterministic, with numerous generalizations and extensions on the theme
- Observation: in practice, numerical schemes are often rather slow
- Compressive algorithm = reduction of operations and iterations
  - steepest descent, domain decomposition, semi-smooth Newton methods, projection methods [Bredies, Daubechies, Fornasier, Lorenz, Loris, ...]
  - adaptive approximation [Dahmen, Dahlke, DeVore, Fornasier, Raasch, Stevenson, ...]
  - compressive sampling techniques [Candes, Donoho, DeVore, Eldar, Rauhut, ...]
Compression by adaptive recovery
Solve Fx = y, i.e. minimize D(x) = ‖Fx − y‖².
The data are often noisy: ‖y^δ − y‖ ≤ δ.
Solve the normal equation F*Fx = F*y^δ.
Landweber iteration:

  x^δ_{m+1} = x^δ_m − γ F*(Fx^δ_m − y^δ)
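As a concrete illustration, the Landweber iteration for a linear operator given as a matrix can be sketched as follows (a minimal sketch; the matrix, step size, and iteration count below are illustrative assumptions, not from the talk):

```python
import numpy as np

def landweber(F, y, gamma, n_iter=500, x0=None):
    """Landweber iteration x_{m+1} = x_m - gamma * F^T (F x_m - y).

    Converges to a least-squares solution for 0 < gamma < 2 / ||F^T F||.
    """
    x = np.zeros(F.shape[1]) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = x - gamma * F.T @ (F @ x - y)
    return x

# Illustrative well-posed example: the iterate approaches F^{-1} y.
F = np.array([[2.0, 0.0], [0.0, 1.0]])
y = np.array([2.0, 3.0])
x = landweber(F, y, gamma=0.2)
```

For noisy data y^δ, the iteration has to be stopped early; the iteration index m then plays the role of the regularization parameter.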
Preassigned system of functions (frame) {φ_λ : λ ∈ Λ} ⊂ H.
Associated analysis operator A : X → ℓ₂ via x ↦ x = {⟨x, φ_λ⟩}_{λ∈Λ},
and synthesis operator A* : ℓ₂ → X via x ↦ Σ_{λ∈Λ} x_λ φ_λ.

Discretized normal equation: with S = AF*FA*, y^δ = AF*y^δ, and x = A*x,

  Sx = y^δ.

Corresponding sequence-space Landweber iteration:

  x^δ_{m+1} = x^δ_m − β(Sx^δ_m − y^δ)
Assume that we have the following three routines at our disposal:

RHS_ε[g] → g_ε. This routine determines a finitely supported g_ε ∈ ℓ₂ satisfying ‖g_ε − AF*g‖ ≤ ε.

APPLY_ε[f] → w_ε. This routine determines, for a finitely supported f ∈ ℓ₂ and an infinite matrix S, a finitely supported w_ε satisfying ‖w_ε − Sf‖ ≤ ε.

COARSE_ε[f] → f_ε. This routine creates, for a finitely supported f ∈ ℓ₂, a vector f_ε by replacing all but N coefficients of f by zeros such that ‖f_ε − f‖ ≤ ε.
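A greedy realization of COARSE for a finite vector can be sketched as follows (an illustrative sketch; the error-budget bookkeeping is an assumption modeled on the stated accuracy requirement ‖f_ε − f‖ ≤ ε):

```python
import numpy as np

def coarse(f, eps):
    """COARSE_eps[f]: zero out the smallest coefficients of f while the
    accumulated l2 error stays within eps, so that ||f_eps - f|| <= eps."""
    f_eps = np.asarray(f, dtype=float).copy()
    err2 = 0.0
    for i in np.argsort(np.abs(f_eps)):        # indices by ascending magnitude
        if err2 + f_eps[i] ** 2 > eps ** 2:    # budget exhausted: stop
            break
        err2 += f_eps[i] ** 2
        f_eps[i] = 0.0
    return f_eps

# Small coefficients are dropped, large ones survive.
f_eps = coarse([3.0, 0.1, 0.2, 5.0], eps=0.3)
```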
This leads to an inexact/approximate variant of the sequence-space iteration:

  x̃^δ_{m+1} = COARSE_{r^δ(m)}[ x̃^δ_m − β APPLY_{r^δ(m)}[x̃^δ_m] + β RHS_{r^δ(m)}[y^δ] ].

The subscript is related to the iteration index m by a specific refinement strategy of the form r^δ : ℕ → ℕ.
A proper choice of the refinement strategy r^δ(m) enables convergence and regularization results.
A-posteriori parameter rules are preferable, but several rules require frequent evaluation of the residual discrepancy ‖Fx^δ_m − y^δ‖.

Definition (approximate residual)
For some x ∈ ℓ₂ and some integer m ≥ 0, the approximate residual discrepancy RES is defined by

  (RES_m[x, y])² := ⟨APPLY_m[x], x⟩ − 2⟨RHS_m[y], x⟩ + ‖y‖².   (1)
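With exact routines, i.e. APPLY_m[x] = Sx and RHS_m[y] = AF*y, definition (1) reduces to the exact residual ‖Fx − y‖. A small sanity check of this identity (the matrix F and the trivial frame A = I are illustrative assumptions):

```python
import numpy as np

def approx_res(S, y_seq, x, y_norm2):
    """RES^2 = <APPLY[x], x> - 2 <RHS[y], x> + ||y||^2, evaluated here
    with the exact routines APPLY[x] = S x and RHS[y] = y_seq."""
    r2 = x @ (S @ x) - 2.0 * (y_seq @ x) + y_norm2
    return np.sqrt(max(r2, 0.0))

# With A = I: S = F^T F, y_seq = F^T y, and RES equals ||F x - y||.
F = np.array([[1.0, 2.0], [3.0, 4.0]])
y = np.array([1.0, -1.0])
x = np.array([0.5, -0.5])
res = approx_res(F.T @ F, F.T @ y, x, y @ y)
```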
Lemma (monotone decay)
Let δ > 0, 0 < c < 1, 0 < β < 2/(3‖S‖), and m₀ ≥ 1. If there exists for 0 ≤ m ≤ m₀ a refinement strategy r^δ(m) such that the approximate discrepancies RES_{r^δ(m)}[x̃^δ_m, y^δ] fulfill

  c (RES_{r^δ(m)}[x̃^δ_m, y^δ])² > (δ² + C_{r^δ(m)}(x̃^δ_m)) / (1 − (3/2)β‖S‖),   (2)

then, for 0 ≤ m ≤ m₀, the approximation errors ‖x̃^δ_m − x†‖ decrease monotonically.
U(i) Let r(0) be the smallest integer ≥ 0 with

  c (RES_{r(0)}[f̃₀, g])² ≥ C_{r(0)}(f̃₀) / (1 − (3/2)β‖S‖);   (3)

if no r(0) with (3) exists, stop the iteration and set m₀ = 0.

U(ii) If, for m ≥ 1,

  c (RES_{r(m−1)}[f̃_m, g])² ≥ C_{r(m−1)}(f̃_m) / (1 − (3/2)β‖S‖),   (4)

set r(m) = r(m−1).

U(iii) If

  c (RES_{r(m−1)}[f̃_m, g])² < C_{r(m−1)}(f̃_m) / (1 − (3/2)β‖S‖),   (5)

set r(m) = r(m−1) + j, where j is the smallest integer with

  c (RES_{r(m−1)+j}[f̃_m, g])² ≥ C_{r(m−1)+j}(f̃_m) / (1 − (3/2)β‖S‖).   (6)

U(iv) If there is no integer j with (6), stop the iteration and set m₀ = m.
Theorem (regularization result)
Let x† denote the solution of the inverse problem and let x† be the sequence of associated frame coefficients, i.e. x† = A*x†. Suppose x̃_m is computed by the inexact Landweber iteration with exact data y, in combination with the above updating rule for the refinement strategy r. Then, for x̃₀ arbitrarily chosen, the sequence x̃_m converges in norm towards x†, i.e.

  lim_{m→∞} x̃_m = x†.
Compression by acceleration
So far: sparse recovery by an adaptive variant of

  x_{n+1} = x_n + γ F*(y − Fx_n).

Directly involving sparsity:

  x_{n+1} = S_α(x_n + γ F*(y − Fx_n)).

Extension to nonlinear operators:

  x_{n+1} = S_α(x_n + γ F′(x_{n+1})*(y − F(x_n))),

a rather slow and numerically intensive iteration.

Consider instead

  min { D(x) = ‖F(x) − y‖² } on B_R := { x ∈ ℓ₂ : ‖x‖₁ ≤ R },
  x_{n+1} = P_R(x_n + γ_n F*(y − Fx_n)).
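The projection P_R onto the ℓ₁-ball is itself a soft-thresholding operation, with the threshold chosen so that the result has ℓ₁-norm exactly R. A sorting-based sketch (this standard construction is an illustrative assumption, not an implementation from the talk):

```python
import numpy as np

def project_l1(x, R):
    """Euclidean projection P_R onto the l1-ball {x : ||x||_1 <= R}.

    Applies soft thresholding sign(x) * max(|x| - theta, 0) with the
    threshold theta determined by sorting the magnitudes."""
    x = np.asarray(x, dtype=float)
    if np.abs(x).sum() <= R:
        return x.copy()                        # already inside the ball
    u = np.sort(np.abs(x))[::-1]               # descending magnitudes
    cssv = np.cumsum(u)
    k = np.arange(1, len(u) + 1)
    rho = np.nonzero(u * k > cssv - R)[0][-1]  # last index kept active
    theta = (cssv[rho] - R) / (rho + 1.0)
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

p = project_l1([3.0, 1.0], R=2.0)   # lands on the boundary of the ball
```

One step of the projected iteration above is then x = project_l1(x + gamma * F.T @ (y - F @ x), R).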
Lemma (necessary condition)
If the vector x̃_R ∈ ℓ₂ is a minimizer of D(x) = ‖F(x) − y‖² on B_R, then for any γ > 0,

  P_R(x̃_R + γ F′(x̃_R)*(y − F(x̃_R))) = x̃_R,

which is equivalent to

  ⟨F′(x̃_R)*(y − F(x̃_R)), w − x̃_R⟩ ≤ 0 for all w ∈ B_R.
Define r := max{ 2 sup_{x∈B_R} ‖F′(x)‖², 2L √D(x₀) }.
Reason: this guarantees ‖F(x_{n+1}) − F(x_n)‖² ≤ (r/2)‖x_{n+1} − x_n‖².

The sequence {β_n}_{n∈ℕ} satisfies Condition (B) if there exists n₀ such that:

(B1)  β̄ := sup{β_n : n ∈ ℕ} < ∞ and inf{β_n : n ∈ ℕ} ≥ 1,
(B2)  β_n ‖F(x_{n+1}) − F(x_n)‖² ≤ (r/2)‖x_{n+1} − x_n‖² for all n ≥ n₀,
(B3)  β_n L √D(x_n) ≤ r/2.
Lemma (surrogate functional technique, γ = β/r)
Assume F is twice Fréchet differentiable and β ≥ 1. For arbitrary fixed x ∈ B_R, assume β L √D(x) ≤ r/2 and define the functional Φ_β(·, x) by

  Φ_β(w, x) := ‖F(w) − y‖² − ‖F(w) − F(x)‖² + (r/β)‖w − x‖².   (7)

Then there exists a unique w ∈ B_R that minimizes the restriction of Φ_β(·, x) to B_R. We denote this minimizer by ŵ; it is given by

  ŵ = P_R( x + (β/r) F′(ŵ)*(y − F(x)) ).
Lemma (contraction)
Assume β L √D(x) ≤ r/2. Then the map

  Ψ(ŵ) := P_R( x + (β/r) F′(ŵ)*(y − F(x)) )

is contractive, and the associated fixed-point iteration therefore converges to a unique fixed point.
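The implicit step ŵ can thus be computed by simply iterating Ψ. A minimal sketch for callables F, F′ and a projection routine (all names, the inner iteration count, and the linear test operator are assumptions for illustration):

```python
import numpy as np

def implicit_step(F, Fprime, project, x, y, r, beta, n_inner=50):
    """Approximate the fixed point w = P_R(x + (beta/r) F'(w)^T (y - F(x)))
    of the contraction Psi by plain fixed-point iteration."""
    w = np.asarray(x, dtype=float).copy()
    residual = y - F(x)                  # fixed during the inner iteration
    for _ in range(n_inner):
        w = project(x + (beta / r) * (Fprime(w).T @ residual))
    return w

# For linear F(x) = A x the derivative is constant, so Psi reaches its
# fixed point immediately and the step is a projected Landweber step.
A = np.array([[1.0, 0.0], [1.0, 1.0]])
x0 = np.array([0.0, 0.0])
y = np.array([1.0, 2.0])
w = implicit_step(lambda v: A @ v, lambda v: A, lambda v: v, x0, y,
                  r=4.0, beta=1.0)
```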
Lemma (monotone decay)
Let the iteration now be given by

  x_{n+1} = P_R( x_n + (β_n/r) F′(x_{n+1})*(y − F(x_n)) ),

where the β_n satisfy Condition (B) with respect to {x_n}_{n∈ℕ}. Then the sequence D(x_n) is monotonically decreasing and lim_{n→∞} ‖x_{n+1} − x_n‖ = 0.