Regularization of ill-posed problems


Regularization of ill-posed problems

Uno Hämarik, University of Tartu, Estonia

Content
1. Ill-posed problems (definition and examples)
2. Regularization of ill-posed problems with noisy data
3. Parameter choice rules for exact noise level
4. Iterative methods
5. Discretization methods
6. Lavrentiev and Tikhonov methods and modifications
7. Parameter choice rules for approximate noise level

The talk is based on joint research with G. Vainikko, T. Raus, R. Palm (all Tartu, Estonia), U. Tautenhahn (Zittau, Germany), and R. Plato (Berlin, Germany). Ideas for collaboration with the Finnish Inverse Problems Society and the Finnish Centre of Excellence in Inverse Problems are welcome!

1 Ill-posed problems

Problem
    $Au = f$    (1)
$A \in \mathcal{L}(H, F)$; $H, F$ Hilbert spaces.

1.1 Definition. Problem (1) is well-posed if
1) for all $f \in F$, (1) has a solution $u^* \in H$;
2) for all $f \in F$, the solution of (1) is unique;
3) the solution $u^*$ depends continuously on the data.
If one of 1)–3) is not satisfied, (1) is an ill-posed problem. If $A$ is compact, then $R(A)$ is nonclosed $\Rightarrow$ $A^{-1}$ (if it exists) is unbounded $\Rightarrow$ 3) is not satisfied.

1.2 Example 1. Differentiation of a function $f \in C^1[0,1]$. If $f_n(t) = f(t) + \frac{1}{n}\sin n^2 t$, then for $n \to \infty$
    $\|f_n - f\|_\infty \to 0$, but $\|f_n' - f'\|_\infty \to \infty$.

1.3 Example 2. Integral equation of the first kind
    $(Au)(t) \equiv \int_0^1 K(t,s)\,u(s)\,ds = f(t)$  $(0 \le t \le 1)$, $K(t,s)$ smooth.

Ex. 2.1. $K(t,s) \equiv 1$: $\int_0^1 u(s)\,ds = f(t)$. A solution $u^*$ exists $\Leftrightarrow$ $f(t) \equiv \mathrm{const}$; the set of solutions is very large.

Ex. 2.2. $\int_0^t u(s)\,ds = f(t)$, $u^* = f'$; a solution $u^* \in L^2[0,1]$ exists $\Leftrightarrow$ $f(0) = 0$, $f \in H^1[0,1]$.
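A quick numerical illustration of Example 1 (a Python sketch; the function $f(t) = \sin t$ and the grid are invented for the purpose): the data error tends to zero in the sup-norm while the derivative error grows without bound.

    import numpy as np

    # Example 1: f_n(t) = f(t) + (1/n) sin(n^2 t).  The data error is 1/n -> 0,
    # but the derivative error is sup |n cos(n^2 t)| = n -> infinity.
    t = np.linspace(0.0, 1.0, 200001)   # fine grid (illustrative choice)
    f, fp = np.sin(t), np.cos(t)        # f(t) = sin t is an invented example

    for n in (10, 100, 1000):
        fn = f + np.sin(n**2 * t) / n
        fnp = fp + n * np.cos(n**2 * t)          # exact derivative of f_n
        print(f"n={n:4d}  ||f_n - f||_inf = {np.max(np.abs(fn - f)):.1e}  "
              f"||f_n' - f'||_inf = {np.max(np.abs(fnp - fp)):.1e}")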

1.4 Example 3. System of linear equations with a large condition number of the matrix. Example:
    $x_1 + 10 x_2 = 11 + \varepsilon$
    $10 x_1 + 100.1 x_2 = 110.1$
$\varepsilon = 0 \Rightarrow x_1 = 1,\ x_2 = 1$;
$\varepsilon = 0.1 \Rightarrow x_1 = 101.1,\ x_2 = -9$.

2 Regularization of ill-posed problems with noisy data

2.1 Noisy data. Instead of $f \in F$, available is $f_\delta \in F$ with $\|f_\delta - f\| \le \delta$.

well-posed problems: $A^{-1} f_\delta$ exists and is unique for every $\delta > 0$, and $A^{-1} f_\delta \to A^{-1} f = u^*$ as $\delta \to 0$;
ill-posed problems: $A^{-1} f_\delta$ may not exist; if $A^{-1} f_\delta$ exists, then generally $A^{-1} f_\delta \not\to A^{-1} f$ as $\delta \to 0$.

Hence noisy data are no problem for well-posed problems, but a serious problem for ill-posed problems.

Remark. In Section 7 we assume $\lim_{\delta \to 0} \frac{\|f_\delta - f\|}{\delta} \le c$, where the constant $c$ is unknown.

2.2 Regularization.
1. Choose some parametric solution method that converges in the case of exact data: for the approximate solution $u_r$ it holds that $u_r \to u^*$ as $r \to \infty$ (typically this convergence is monotone; the parameter is $r = n \in \mathbb{N}$ in iterative and projection methods, $r \in \mathbb{R}$ in the Tikhonov method).
2. In the case of noisy data, choose the regularization parameter $r = r(\delta)$ so that
    $u_{r(\delta)} \to u^*$ as $\delta \to 0$.    (2)
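The instability in Example 3 is easy to reproduce; a minimal NumPy check of the system exactly as printed above:

    import numpy as np

    # Example 3: the matrix has det = 0.1 and a condition number of about 1e5,
    # so a perturbation of size 0.1 in the data changes the solution by ~100.
    A = np.array([[ 1.0,  10.0],
                  [10.0, 100.1]])
    f = np.array([11.0, 110.1])

    print("cond(A) =", np.linalg.cond(A))                  # ~1e5
    print("eps = 0  :", np.linalg.solve(A, f))             # [1, 1]
    print("eps = 0.1:", np.linalg.solve(A, f + [0.1, 0]))  # [101.1, -9]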

The main problem is how to choose $r$ to guarantee (2):
    $\|u^* - u_r\| \le \|u^* - u_r^0\| + \|u_r^0 - u_r\|$,
where $u_r^0$ is the approximate solution for $f_\delta = f$.

2.3 Special regularization methods.
1) Case $A = A^* \ge 0$: Lavrentiev method
    $(A + \alpha I) u_\alpha = f_\delta$,    (3)
where $\alpha > 0$ is the regularization parameter ($r = \alpha^{-1}$). Since $\|(A + \alpha I)^{-1}\| \le \alpha^{-1}$, problem (3) is well-posed for every $\alpha > 0$.

2) General case: Tikhonov method. Since $A^*A = (A^*A)^* \ge 0$, pass from $Au = f_\delta$ to the normal equation $A^*Au = A^*f_\delta$ and apply the Lavrentiev method to it:
    $(A^*A + \alpha I) u_\alpha = A^* f_\delta$.
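A minimal sketch of the Tikhonov method on a discretized problem. The smooth kernel, the exact solution, and the noise level below are invented for illustration; the method itself is exactly the regularized normal equation above.

    import numpy as np

    rng = np.random.default_rng(0)
    k = 100
    s = np.linspace(0.0, 1.0, k)
    A = np.exp(-10.0 * (s[:, None] - s[None, :])**2) / k   # invented smooth kernel
    u_true = np.sin(2 * np.pi * s)
    delta = 1e-3
    f_delta = A @ u_true + delta * rng.standard_normal(k) / np.sqrt(k)

    def tikhonov(A, f_delta, alpha):
        """Solve the regularized normal equation (A^T A + alpha I) u = A^T f_delta."""
        return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ f_delta)

    # typically the error first decreases and then grows again as alpha -> 0,
    # because for small alpha the noise dominates
    for alpha in (1e-2, 1e-4, 1e-6, 1e-8, 1e-10):
        u_alpha = tikhonov(A, f_delta, alpha)
        print(f"alpha = {alpha:.0e}   error = {np.linalg.norm(u_alpha - u_true):.3f}")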

3 Parameter choice rules for exact noise level

3.1 Discrepancy principle. $r(\delta) = r_D$:
    $\|A u_{r_D} - f_\delta\| \approx b\delta$, $b = \mathrm{const} > 1$.

3.2 Monotone error rule (ME-rule). The idea: choose $r = r_{ME}(\delta)$ as the largest $r$ for which we are able to prove that $\|u_r - u^*\|$ is monotonically decreasing for $r \in [0, r_{ME}]$ (assuming $\|f_\delta - f\| \le \delta$):
a) in methods with $r \in \mathbb{R}$: $\frac{d}{dr}\|u_r - u^*\|^2 \le 0$ for $r \in (0, r_{ME})$;
b) in methods with $r = n \in \mathbb{N}$: $\|u_n - u^*\| < \|u_{n-1} - u^*\|$ for $n = 1, 2, \ldots, n_{ME}$.

In the Tikhonov method $\alpha = \alpha_{ME}$ is the solution of the equation
    $\frac{(Au_\alpha - f_\delta,\ Av_\alpha - f_\delta)}{\|Av_\alpha - f_\delta\|} = \delta$, where $v_\alpha = (\alpha I + A^*A)^{-1}(\alpha u_\alpha + A^* f_\delta)$.

The ME-rule is quasioptimal:
    $\|u_{\alpha_{ME}} - u^*\| \le \mathrm{const} \inf_{\alpha > 0} \|u_\alpha - u^*\|$,
and order-optimal: if $u^* \in R((A^*A)^{p/2})$, then
    $\|u_{\alpha_{ME}} - u^*\| \le \mathrm{const}\, \delta^{p/(p+1)}$  $(p \le 2)$.

The discrepancy principle is not quasioptimal, but it is order-optimal for $p \le 1$. If $u^* \in R(A^*A)$, then $\|u_{\alpha_{ME}} - u^*\| = O(\delta^{2/3})$, while $\|u_{\alpha_D} - u^*\| = O(\delta^{1/2})$.
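A sketch of the discrepancy principle for the Tikhonov method: decrease $\alpha$ along a geometric grid and keep the first (largest) $\alpha$ with $\|Au_\alpha - f_\delta\| \le b\delta$. The test problem is again invented; only the stopping rule comes from 3.1.

    import numpy as np

    rng = np.random.default_rng(1)
    k = 100
    s = np.linspace(0.0, 1.0, k)
    A = np.exp(-10.0 * (s[:, None] - s[None, :])**2) / k   # invented test operator
    u_true = np.sin(2 * np.pi * s)
    delta = 1e-3
    f_delta = A @ u_true + delta * rng.standard_normal(k) / np.sqrt(k)

    b = 1.1                                    # b = const > 1 as in the rule
    for alpha in 10.0 ** np.arange(0, -13, -1):
        u_alpha = np.linalg.solve(A.T @ A + alpha * np.eye(k), A.T @ f_delta)
        if np.linalg.norm(A @ u_alpha - f_delta) <= b * delta:
            break                              # largest grid alpha with small residual
    print(f"chosen alpha = {alpha:.0e}, error = {np.linalg.norm(u_alpha - u_true):.3f}")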

4 Iterative methods

    $u_n = u_{n-1} + A^* z_{n-1}$, $n = 1, 2, \ldots$ $(u_0 = 0)$.    (4)

Here $n$ is the regularization parameter.

Discrepancy principle: the stopping index $n_D$ is the first $n$ with $\|Au_n - f_\delta\| \le b\delta$, $b = \mathrm{const} > 1$.

ME-rule: the stopping index $n_{ME}$ is the first $n$ with
    $(A(u_n + u_{n+1})/2 - f_\delta,\ z_n) \le \delta \|z_n\|$.

4.1 Linear methods
a) Landweber method: (4) with $z_n = \beta(f_\delta - Au_n)$, $\beta \in (0, 2\|A^*A\|^{-1})$.
b) Implicit iteration method:
    $\beta u_n + A^*Au_n = \beta u_{n-1} + A^* f_\delta$, $n = 1, 2, \ldots$; $u_0 = 0$, $\beta > 0$,
which is (4) with $z_n = \beta^{-1}(f_\delta - Au_{n+1})$.
In both methods $n_{ME} = n_D$ or $n_{ME} = n_D - 1$, and both rules are quasioptimal and order-optimal for all $p > 0$.

4.2 Conjugate gradient (CG) type methods
a) CGLS applies CG to the equation $A^*Au = A^*f_\delta$ and gives
    $u_k = \arg\min \{\|f_\delta - Au\| : u \in K_k\}$,
    $K_k = \mathrm{span}\{A^*f_\delta, A^*AA^*f_\delta, \ldots, (A^*A)^{k-1}A^*f_\delta\}$.
Algorithm: take $u_0 = 0$, $r_0 = f_\delta$, $v_{-1} = 0$, $\|p_{-1}\| = \infty$ (so that $\sigma_0 = 0$) and compute for $n = 0, 1, 2, \ldots$:
    $p_n = A^* r_n$, $\sigma_n = \|p_n\|^2 / \|p_{n-1}\|^2$, $v_n = r_n + \sigma_n v_{n-1}$,
    $q_n = A^* v_n$, $s_n = A q_n$, $\beta_n = \|p_n\|^2 / \|s_n\|^2$,
    $u_{n+1} = u_n + \beta_n q_n$, $r_{n+1} = r_n - \beta_n s_n$.
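The CGLS recursion above, as a NumPy sketch stopped by the discrepancy principle. The test problem (a discretized version of the integration operator from Ex. 2.2) and all constants are invented for illustration.

    import numpy as np

    def cgls(A, f_delta, delta, b=1.1, max_iter=200):
        """CGLS recursion from the slide; stop by the discrepancy principle."""
        u = np.zeros(A.shape[1])
        r = f_delta.copy()                  # r_0 = f_delta, since u_0 = 0
        v = np.zeros(A.shape[0])
        p_norm2_old = np.inf                # ||p_{-1}||^2 = inf, so sigma_0 = 0
        for n in range(max_iter):
            p = A.T @ r                     # p_n = A* r_n
            p_norm2 = p @ p
            sigma = p_norm2 / p_norm2_old   # sigma_n = ||p_n||^2 / ||p_{n-1}||^2
            v = r + sigma * v               # v_n = r_n + sigma_n v_{n-1}
            q = A.T @ v                     # q_n = A* v_n
            sv = A @ q                      # s_n = A q_n
            beta = p_norm2 / (sv @ sv)      # beta_n = ||p_n||^2 / ||s_n||^2
            u = u + beta * q                # u_{n+1}
            r = r - beta * sv               # r_{n+1}
            p_norm2_old = p_norm2
            if np.linalg.norm(r) <= b * delta:   # discrepancy principle
                break
        return u, n + 1

    rng = np.random.default_rng(2)
    k = 120
    A = np.tril(np.ones((k, k))) / k        # discretized integration operator
    t = (np.arange(k) + 0.5) / k
    u_true = np.cos(np.pi * t)
    delta = 1e-3
    f_delta = A @ u_true + delta * rng.standard_normal(k) / np.sqrt(k)
    u, n_D = cgls(A, f_delta, delta)
    print(f"stopped at n_D = {n_D}, relative error = "
          f"{np.linalg.norm(u - u_true) / np.linalg.norm(u_true):.3f}")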

b) CGME applies CG to the equation $AA^*w = f_\delta$ with $u = A^*w$ and, in the case $f_\delta = f$, gives
    $u_k = \arg\min \{\|u^* - u\| : u \in K_k\}$.
Algorithm: take $u_0 = 0$, $r_0 = f_\delta$, $v_{-1} = 0$, $\|r_{-1}\| = \infty$ and compute for $n = 0, 1, 2, \ldots$:
    $\sigma_n = \|r_n\|^2 / \|r_{n-1}\|^2$, $v_n = r_n + \sigma_n v_{n-1}$, $q_n = A^* v_n$,
    $\beta_n = \|r_n\|^2 / \|q_n\|^2$, $u_{n+1} = u_n + \beta_n q_n$, $r_{n+1} = r_n - \beta_n A q_n$.

In both methods the ME-rule is applicable with $z_n = \beta_n v_n$. In CGLS the ordinary discrepancy principle works well, but in CGME one can stop at the first $n$ with
    $\left( \sum_{i=0}^{n} \|f_\delta - Au_i\|^{-2} \right)^{-1/2} \le b\delta$, $b = \mathrm{const} > 1$.

5 Discretization methods

5.1 Numerical differentiation. Let $f \in C^m[0,1]$, $m \in \{1, 2, 3\}$, $\|f_\delta - f\|_{C[0,1]} \le \delta$. Approximate $u^* = f'$ by
    $u_h(t) = \frac{f_\delta(t+h) - f_\delta(t-h)}{2h}$  (for $t \in [h, 1-h]$).
Here $h$ is the regularization parameter. The error bound
    $\|u_h - u^*\|_{C[0,1]} \le c \left( h^{m-1} + \frac{\delta}{h} \right)$
is minimized for $h \approx \delta^{1/m}$, giving $\|u_h - u^*\|_{C[0,1]} = O(\delta^{(m-1)/m})$, $m \in \{2, 3\}$.
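A sketch of the step-size rule $h \approx \delta^{1/m}$ from 5.1, with an invented smooth $f = \sin$ (so $m = 3$) and uniformly bounded invented noise; the optimal step is compared with a much smaller and a much larger one.

    import numpy as np

    rng = np.random.default_rng(3)
    f, fp = np.sin, np.cos        # invented smooth f; u* = f'
    delta = 1e-4
    noise = lambda x: delta * (2.0 * rng.random(np.shape(x)) - 1.0)  # |error| <= delta

    def u_h(t, h):
        """Central difference of the noisy data, for t in [h, 1-h]."""
        return ((f(t + h) + noise(t + h)) - (f(t - h) + noise(t - h))) / (2 * h)

    m = 3
    h_opt = delta ** (1.0 / m)    # h ~ delta^(1/m) from the error bound above
    for h in (h_opt / 10, h_opt, 10 * h_opt):
        t = np.linspace(h, 1 - h, 1000)
        err = np.max(np.abs(u_h(t, h) - fp(t)))
        print(f"h = {h:.2e}   sup-error = {err:.2e}")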

5.2 Projection methods. $H_n \subset H$, $F_n \subset F$, $\dim H_n = \dim F_n < \infty$; $P_n: H \to H_n$ and $Q_n: F \to F_n$ are orthoprojectors. Find $u_n \in H_n$ such that
    $(Au_n - f_\delta, v_n) = 0$ for all $v_n \in F_n$.
Here $n$ is the regularization parameter.

5.2.1 Least error method: $H_n = A^* F_n$. If $f_\delta = f$, then $u_n = P_n u^*$. Let $N(A) = \{0\}$, $N(A^*) = \{0\}$, $\|v - Q_n v\| \to 0$ ($n \to \infty$, for all $v \in F$), and $F_n \subset F_{n+1}$ ($n \ge 1$). ME-rule: find $n = n(\delta)$ as the first $n \in \mathbb{N}$ in $u_n = A^* v_n$ ($v_n \in F_n$) for which
    $\frac{(v_n - v_{n+1}, f_\delta)}{\|v_n - v_{n+1}\|} \le \delta$.
Then $\|u_{n_{ME}} - u^*\| \to 0$ as $\delta \to 0$.

5.2.2 Least squares method: $F_n = A H_n$. Let $N(A) = \{0\}$ and
    $\|u - P_n u\| \to 0$ as $n \to \infty$ (for all $u \in H$).    (5)
Let there exist $m \in \mathbb{N}$ for which
    $\|(I - P_n)(A^*A)^{1/(2m)}\| \le \mathrm{const}\, (\kappa_n + \kappa_{n+1})^{-1/m}$  $(n \ge 1)$,    (6)
where $\kappa_n \equiv \sup_{w_n \in H_n} \frac{\|w_n\|}{\|A w_n\|}$. If $n_D = n(\delta)$ is chosen by the discrepancy principle, then
    $\|u_{n_D} - u^*\| \to 0$ as $\delta \to 0$.    (7)

5.2.3 Galerkin method: $F_n = H_n$. Let $H = F$, $F_n = H_n$, $A = A^* > 0$, and let (5), (6) hold. If $n_D = n(\delta)$ is chosen by the discrepancy principle with $b$ large enough, then (7) holds.
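A sketch of the least squares method of 5.2.2 with the dimension $n$ chosen by the discrepancy principle; the operator, the cosine basis of $H_n$, and the noise level are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(4)
    k = 200
    t = (np.arange(k) + 0.5) / k
    A = np.tril(np.ones((k, k))) / k               # discretized integration operator
    u_true = np.cos(np.pi * t)
    delta = 1e-3
    f_delta = A @ u_true + delta * rng.standard_normal(k) / np.sqrt(k)

    b = 1.5
    for n in range(1, 40):
        B = np.cos(np.pi * np.outer(t, np.arange(n)))       # basis of H_n: n cosine modes
        c = np.linalg.lstsq(A @ B, f_delta, rcond=None)[0]  # least squares over H_n
        u_n = B @ c
        if np.linalg.norm(A @ u_n - f_delta) <= b * delta:  # discrepancy principle
            break
    print(f"chosen n = {n}, error = {np.linalg.norm(u_n - u_true):.3e}")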

5.2.4 Collocation method

    $(Au)(t) \equiv \int_0^1 K(t,s)\,u(s)\,ds = f(t)$  $(0 \le t \le 1)$,
$A: L^2(0,1) \to L^2(0,1)$, $N(A) = \{0\}$, $f \in C[0,1]$,
    $\int_0^1 |K(t,s)|^2\,ds \le \mathrm{const}$  $(0 \le t \le 1)$,
    $\int_0^1 |K(t',s) - K(t,s)|^2\,ds \to 0$ as $t' \to t$  $(0 \le t, t' \le 1)$.

Given is a knot set $\{t_i \in [0,1]: t_i \ne t_j \text{ for } i \ne j,\ i, j \in I\}$, where $I$ is an index set. Let $\{K(t_i, s),\ i \in I\}$ be a linearly independent system. Let the index sets $I_n$ satisfy $I_n \subset I_{n+1} \subset I$ ($n \ge 1$) and
    $\Delta_n = \sup_{t \in [0,1]} \inf_{i \in I_n} |t - t_i| \to 0$ as $n \to \infty$.

The approximate solution is $u_n = \sum_{i \in I_n} c_i^{(n)} K(t_i, s)$, where $\{c_i^{(n)}\}$ is the solution of the system
    $\sum_{i \in I_n} c_i^{(n)} \int_0^1 K(t_i, s) K(t_j, s)\,ds = f(t_j)$  $(j \in I_n)$.

Given are $\{\delta_i\}$, $i \in I$, with $|f_\delta(t_i) - f(t_i)| \le \delta_i$.

ME-rule for the choice of the discretization level $n_{ME} = n(\delta)$: $n_{ME}$ is the first $n = 1, 2, \ldots$ for which
    $\|u_{n+1}\|^2 - \|u_n\|^2 \le \sum_{i \in I_{n+1} \setminus I_n} |c_i^{(n+1)}| \delta_i + \sum_{i \in I_n} |c_i^{(n)} - c_i^{(n+1)}| \delta_i$.

Then $\|u_{n_{ME}} - u^*\|_{L^2(0,1)} \to 0$ provided that $\lim_{n \to \infty} \sum_{i \in I_n} \delta_i^2 = 0$.
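A sketch of the collocation method above with an invented kernel $K(t,s) = e^{-50(t-s)^2}$, invented knots, and $u^*(s) = s$ generating the data; the integrals are replaced by a midpoint quadrature.

    import numpy as np

    K = lambda t, s: np.exp(-50.0 * (t - s) ** 2)    # invented smooth kernel

    q = (np.arange(400) + 0.5) / 400                 # quadrature nodes on [0, 1]
    w = 1.0 / 400                                    # quadrature weight

    def collocate(knots, f_vals):
        """Solve sum_i c_i int K(t_i,s) K(t_j,s) ds = f(t_j) for the c_i."""
        Phi = K(knots[:, None], q[None, :])          # Phi[i, :] = K(t_i, .)
        G = w * Phi @ Phi.T                          # Gram matrix of the K(t_i, .)
        c = np.linalg.solve(G, f_vals)
        return lambda s: c @ K(knots[:, None], s)    # u_n = sum_i c_i K(t_i, .)

    u_star = lambda s: s                             # invented exact solution
    knots = np.linspace(0.1, 0.9, 6)
    f = w * K(knots[:, None], q[None, :]) @ u_star(q)   # f(t_j) = (A u*)(t_j)
    rng = np.random.default_rng(5)
    delta_i = 1e-6                                   # bound on the knot-wise data error
    u_n = collocate(knots, f + delta_i * (2 * rng.random(6) - 1))
    print("L2 error:", np.sqrt(w * np.sum((u_n(q) - u_star(q)) ** 2)))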
