Inverse Problems and Regularization – An Introduction
Stefan Kindermann
Industrial Mathematics Institute, University of Linz, Austria
What are Inverse Problems?
One possible definition [Engl, Hanke, Neubauer '96]: Inverse problems are concerned with determining causes for a desired or an observed effect.
Direct problem:  Cause (parameter, unknown, solution of the inverse problem, ...)  ⇒  Effect (data, observation, ...)
Inverse problem:  Effect  ⇒  Cause
Direct and Inverse Problems
The classification as direct or inverse is in most cases based on the well-/ill-posedness of the associated problems:
Cause  ⇒ (stable)  Effect
Effect  ⇒ (unstable)  Cause
Inverse problems ∼ ill-posed (ill-conditioned) problems
What are Inverse Problems?
A central feature of inverse problems is their ill-posedness.
Well-posedness in the sense of Hadamard [Hadamard '23]:
• Existence of a solution (for all admissible data)
• Uniqueness of a solution
• Continuous dependence of the solution on the data
Well-posedness in the sense of Nashed [Nashed '87]:
A problem is well-posed if the set of data/observations is a closed set (the range of the forward operator is closed).
Abstract Inverse Problem
Abstract inverse problem: solve the equation
$F(x) = y$
for $x \in X$ (Banach/Hilbert/... space), given data $y \in Y$ (Banach/Hilbert/... space), where $F^{-1}$ does not exist or is not continuous.
$F$ ... forward operator
We want "$x^\dagger = F^{-1}(y)$";  $x^\dagger$ ... (generalized) solution
Abstract Inverse Problem
• If the forward operator is linear ⇒ linear inverse problem.
• A linear inverse problem is well-posed in the sense of Nashed if the range of $F$ is closed.
Theorem: A linear operator with finite-dimensional range is always well-posed (in Nashed's sense).
"Ill-posedness lives in infinite-dimensional spaces"
Abstract Inverse Problem
"Ill-posedness lives in infinite-dimensional spaces"
Problems with a small number of parameters usually do not need regularization.
Discretization acts as regularization/stabilization.
Ill-posedness in finite-dimensional spaces ∼ ill-conditioning.
Measure of ill-posedness: decay of the singular values of the forward operator.
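As a numerical aside (not from the original slides), the following Python sketch discretizes one classical forward operator, the integration operator on [0,1], and looks at the decay of its singular values; the operator, grid size, and quadrature are assumptions chosen only for illustration.

```python
import numpy as np

# Hypothetical example: discretize the integration operator
# (F x)(t) = \int_0^t x(s) ds on [0,1] with n grid points.
n = 200
h = 1.0 / n
F = h * np.tril(np.ones((n, n)))   # lower-triangular quadrature matrix

# The singular values decay roughly like 1/k, so the discretized
# problem is badly conditioned -- the finite-dimensional shadow of
# the ill-posedness of the continuous problem.
sigma = np.linalg.svd(F, compute_uv=False)
print("largest / smallest singular value:", sigma[0], sigma[-1])
print("condition number:", sigma[0] / sigma[-1])
```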
Methodologies in studying Inverse Problems
• Deterministic inverse problems (regularization, worst-case convergence, infinite-dimensional, no assumptions on the noise)
• Statistics (estimators, average-case analysis, often finite-dimensional, noise is a random variable with a specific structure)
• Bayesian inverse problems (posterior distribution, finite-dimensional, analysis of the posterior distribution by estimators, specific assumptions on noise and prior)
• Control theory ($x$ = control, $F(x)$ = state, convergence of the state, not the control, infinite-dimensional, no assumptions)
Deterministic Inverse Problems and Regularization
Try to solve $F(x) = y$ when "$x^\dagger = F^{-1}(y)$" does not exist.
Notation: $x^\dagger$ ... the "true" (unknown) solution (minimal-norm solution).
Even if $F^{-1}(y)$ exists, it might not be computable [Pour-El, Richards '88].
Deterministic Inverse Problems and Regularization
Data noise: usually we do not have the exact data $y = F(x^\dagger)$ but only noisy data
$y^\delta = F(x^\dagger) + \text{noise}$
Amount of noise: noise level $\delta = \|F(x^\dagger) - y^\delta\|$
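A minimal sketch of this data model, reusing the hypothetical discretized operator from the snippet above; the true solution x_true and the noise magnitude are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200
h = 1.0 / n
F = h * np.tril(np.ones((n, n)))                    # assumed forward operator (as above)
x_true = np.sin(np.pi * np.linspace(0.0, 1.0, n))   # assumed "true" solution x^dagger

y_exact = F @ x_true                                # exact data y = F(x^dagger)
y_delta = y_exact + 1e-3 * rng.standard_normal(n)   # noisy data y^delta
delta = np.linalg.norm(y_exact - y_delta)           # noise level delta
print("noise level delta =", delta)
```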
Deterministic Inverse Problems and Regularization
Method to solve ill-posed problems: regularization. Approximate the inverse $F^{-1}$ by a family of stable operators $R_\alpha$:
$F(x) = y$, "$x^\dagger = F^{-1}(y)$"  ⇒  $x_\alpha = R_\alpha(y)$, $R_\alpha \sim F^{-1}$
$R_\alpha$ ... regularization operators
$\alpha$ ... regularization parameter
Regularization
$\alpha$ small ⇒ $R_\alpha$ good approximation of $F^{-1}$, but unstable
$\alpha$ large ⇒ $R_\alpha$ stable, but bad approximation of $F^{-1}$
$\alpha$ controls the trade-off between approximation and stability:
Total error = approximation error + propagated data error
[Figure: $\|x_\alpha - x^\dagger\|$ as a function of $\alpha$; the approximation error grows and the propagated data error decays with increasing $\alpha$, so the total error has a minimum at an intermediate $\alpha$.]
How to select $\alpha$: parameter choice rules
Example: Tikhonov Regularization
Tikhonov regularization [Phillips '62; Tikhonov '63]
Let $F: X \to Y$ be linear between Hilbert spaces. A least-squares solution of $F(x) = y$ is given by the normal equations
$F^* F x = F^* y$
Tikhonov regularization: solve the regularized problem
$F^* F x + \alpha x = F^* y$, i.e. $x_\alpha = (F^* F + \alpha I)^{-1} F^* y$
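For the linear, discretized setting this can be solved directly. A minimal sketch, assuming $F$ is given as a matrix and reusing the hypothetical y_delta from the earlier snippet; the value of alpha is chosen ad hoc here (parameter choice rules come later).

```python
import numpy as np

def tikhonov(F, y, alpha):
    """Solve the regularized normal equations (F^T F + alpha I) x = F^T y."""
    n = F.shape[1]
    return np.linalg.solve(F.T @ F + alpha * np.eye(n), F.T @ y)

# Usage with the assumed F and y_delta from the snippets above:
# x_alpha = tikhonov(F, y_delta, alpha=1e-4)
```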
Example: Tikhonov Regularization
Error estimate (under some conditions):
$\|x_\alpha - x^\dagger\|^2 \le C\,\alpha^{\nu} + \dfrac{\delta^2}{\alpha}$
(total error ≤ approximation error + stability / propagated data error)
Theory of linear and nonlinear problems in Hilbert spaces: [Tikhonov, Arsenin '77; Groetsch '84; Hofmann '86; Baumeister '87; Louis '89; Kunisch, Engl, Neubauer '89; Bakushinskii, Goncharskii '95; Engl, Hanke, Neubauer '96; Tikhonov, Leonov, Yagola '98; ...]
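The role of the two terms becomes clear by balancing them. Under the assumption $\nu = 2\mu$ (matching the Hölder source condition used later), minimizing the right-hand side over $\alpha$ gives

$C\,\alpha^{2\mu} \approx \dfrac{\delta^2}{\alpha} \;\Longrightarrow\; \alpha_* \sim \delta^{\frac{2}{2\mu+1}}, \qquad \|x_{\alpha_*} - x^\dagger\| \le C'\,\delta^{\frac{2\mu}{2\mu+1}}$

which is exactly the optimal-order rate and a-priori choice appearing on the later slides.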
Example: Landweber Iteration
Landweber iteration [Landweber '51]: solve the normal equations by the Richardson iteration.
Landweber iteration:
$x_{k+1} = x_k - F^*\big(F(x_k) - y\big), \qquad k = 0, 1, \ldots$
The iteration index is the regularization parameter: $\alpha = \frac{1}{k}$.
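A minimal sketch of the iteration for a linear problem, assuming $F$ is given as a matrix; the relaxation parameter tau is an addition not written on the slide, with tau ≤ 1/‖F‖² assumed so that the iteration behaves as expected.

```python
import numpy as np

def landweber(F, y, k_max, tau=None):
    """Landweber iteration x_{k+1} = x_k - tau * F^T (F x_k - y).

    tau is an assumed relaxation parameter (tau <= 1 / ||F||^2);
    the stopping index k_max plays the role of the regularization
    parameter (alpha ~ 1/k)."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(F, 2) ** 2
    x = np.zeros(F.shape[1])
    for _ in range(k_max):
        x = x - tau * F.T @ (F @ x - y)
    return x

# Usage with the assumed F and y_delta from the snippets above:
# x_k = landweber(F, y_delta, k_max=500)
```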
Example: Landweber Iteration
Error estimate (under some conditions):
$\|x_k - x^\dagger\|^2 \le \dfrac{C}{k^{\nu}} + k\,\delta^2$
(total error ≤ approximation error + stability term): semiconvergence.
Iterative regularization methods: parameter choice = choice of the stopping index $k$.
Theory: [Landweber '51; Fridman '56; Bialy '59; Strand '74; Vasilev '83; Groetsch '85; Natterer '86; Hanke, Neubauer, Scherzer '95; Bakushinskii, Goncharskii '95; Engl, Hanke, Neubauer '96; ...]
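Semiconvergence can be seen directly in the hypothetical example from the earlier snippets (assumed F, y_delta, x_true): the error decreases at first and grows again once the iterates start fitting the noise.

```python
import numpy as np

def landweber_errors(F, y_delta, x_true, k_max=2000):
    """Track ||x_k - x_true|| along the Landweber iteration to exhibit
    semiconvergence (assumed matrix F, noisy data y_delta, known x_true)."""
    tau = 1.0 / np.linalg.norm(F, 2) ** 2
    x = np.zeros(F.shape[1])
    errors = []
    for _ in range(k_max):
        x = x - tau * F.T @ (F @ x - y_delta)
        errors.append(np.linalg.norm(x - x_true))
    return errors

# Typical picture: the error has a minimum at some finite k, which is
# exactly the stopping index a parameter choice rule tries to locate.
# errors = landweber_errors(F, y_delta, x_true)
# print("best stopping index:", int(np.argmin(errors)) + 1)
```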
Notion of Convergence
Does the regularized solution converge to the true solution as the noise level tends to 0?
$\lim_{\delta \to 0} x_\alpha = x^\dagger$
(Worst-case) convergence, for a given parameter choice rule:
$\lim_{\delta \to 0}\ \sup\{\, \|x_\alpha - x^\dagger\| \mid \|y^\delta - F(x^\dagger)\| \le \delta \,\} = 0$
Convergence in expectation:
$E\|x_\alpha - x^\dagger\|^2 \to 0$ as $E\|y^\delta - F(x^\dagger)\|^2 \to 0$
Theory of Regularization of Inverse Problems
Convergence depends on $x^\dagger$.
Question of speed: convergence rates
$\|x_\alpha - x^\dagger\| \le f(\alpha)$ or $\|x_\alpha - x^\dagger\| \le f(\delta)$
Theoretical Results
[Schock '85]: Convergence can be arbitrarily slow!
Theorem: For ill-posed problems in the sense of Nashed there cannot be a function $f$ with $\lim_{\delta \to 0} f(\delta) = 0$ such that for all $x^\dagger$
$\|x_\alpha - x^\dagger\| \le f(\delta)$
Uniform bounds on the convergence rates are impossible.
Convergence rates are possible if $x^\dagger$ lies in some smoothness class.
Theoretical Results
Convergence rates require a source condition $x^\dagger \in \mathcal{M}$.
Convergence rates ∼ modulus of continuity of the inverse:
$\Omega(\delta, \mathcal{M}) = \sup\{\, \|x_1^\dagger - x_2^\dagger\| \mid \|F(x_1^\dagger) - F(x_2^\dagger)\| \le \delta,\ x_1^\dagger, x_2^\dagger \in \mathcal{M} \,\}$
Theorem [Tikhonov, Arsenin '77; Morozov '92; Traub, Wozniakowski '80]: For an arbitrary regularization map and an arbitrary parameter choice rule (with $R_\alpha(0) = 0$)
$\|x_\alpha - x^\dagger\| \ge \Omega(\delta, \mathcal{M})$
Theoretical Results
Standard smoothness classes: for linear ill-posed problems in Hilbert spaces we can take
$\mathcal{M} = X_\mu = \{\, x^\dagger = (F^* F)^{\mu}\,\omega \mid \omega \in X \,\}$
(Hölder source condition = abstract smoothness condition)
$\Omega(\delta, X_\mu) = C\,\delta^{\frac{2\mu}{2\mu+1}}$
is the best convergence rate for Hölder source conditions.
A regularization operator and a parameter choice rule such that
$\|x_\alpha - x^\dagger\| = C\,\delta^{\frac{2\mu}{2\mu+1}}$
are called order optimal.
Theoretical Results
Special case: $x^\dagger = F^* \omega$.
Such source conditions can be generalized to nonlinear problems, e.g.
$x^\dagger = F'(x^\dagger)^* \omega, \qquad x^\dagger = \big(F'(x^\dagger)^* F'(x^\dagger)\big)^{\nu} \omega$
Theoretical Results
Many regularization methods have been shown to be order optimal. A significant amount of theoretical results in regularization theory deals with this issue:
• convergence of the method together with its parameter choice rule,
• optimal-order convergence under a source condition.
The source condition itself does not have to be known.
Parameter Choice Rules
How to choose the regularization parameter? Classification:
• a-priori: $\alpha = \alpha(\delta)$
• a-posteriori: $\alpha = \alpha(\delta, y^\delta)$
• heuristic: $\alpha = \alpha(y^\delta)$
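The slides do not name a particular a-posteriori rule at this point; a classical example is Morozov's discrepancy principle, which picks the largest $\alpha$ whose residual is of the order of the noise level. A hedged Python sketch, where the safety factor tau and the $\alpha$ grid are assumptions and `solve` stands for any regularization method, e.g. the hypothetical `tikhonov` function sketched earlier.

```python
import numpy as np

def discrepancy_principle(F, y_delta, delta, solve, tau=1.1):
    """A-posteriori rule alpha = alpha(delta, y^delta): choose the largest
    alpha on a grid with ||F x_alpha - y^delta|| <= tau * delta.

    `solve(F, y, alpha)` is assumed to return the regularized solution."""
    x_alpha = None
    for alpha in 10.0 ** np.arange(0, -12, -0.5):    # from large to small alpha
        x_alpha = solve(F, y_delta, alpha)
        if np.linalg.norm(F @ x_alpha - y_delta) <= tau * delta:
            return alpha, x_alpha
    return alpha, x_alpha                             # fall back to smallest alpha

# Usage (assumed F, y_delta, delta, tikhonov from the snippets above):
# alpha, x_alpha = discrepancy_principle(F, y_delta, delta, tikhonov)
```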
Bakushinskii Veto
Bakushinskii veto [Bakushinskii '84]: A parameter choice without knowledge of $\delta$ cannot yield a convergent regularization in the worst case (for ill-posed problems).
Knowledge of $\delta$ is needed!
⇒ heuristic parameter choice rules are nonconvergent in the worst case.
a-priori Rules
Example of an a-priori rule: if $x^\dagger \in X_\mu$, then
$\alpha = \delta^{\frac{2}{2\mu+1}}$
yields the optimal order for Tikhonov regularization.
+ easy to implement
− needs information on the source condition
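In the hypothetical numerical example, this rule is a one-liner once $\delta$ is known and a smoothness index $\mu$ is assumed; in practice $\mu$ is rarely known, which is exactly the drawback listed above.

```python
def a_priori_alpha(delta, mu):
    """A-priori rule alpha = delta^(2 / (2*mu + 1)); mu is the (usually
    unknown) smoothness index from the Hölder source condition."""
    return delta ** (2.0 / (2.0 * mu + 1.0))

# Usage with the hypothetical noise level delta from the earlier snippets:
# alpha = a_priori_alpha(delta, mu=0.5)
# x_alpha = tikhonov(F, y_delta, alpha)   # assumed solver and data from above
```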