1 Local Convergence of the Lavrentiev Method for the Cauchy Problem via a Carleman Inequality
DU Duc Thang (Vietnam National University, Hanoi)
Faten JELASSI (LAMSIN, École Nationale d'Ingénieurs de Tunis, Tunisia)
In collaboration with: Faker BEN BELGACEM (Université de Technologie de Compiègne, France)
Partially granted by:
- NAFOSTED, Vietnam (for DU Duc Thang)
- MERST, Tunisia (the LR99ES-20 contract, for Faten JELASSI)
2 The Data Completion Problem
Find u such that
−div(a∇u) + bu = f, in Ω,
u = g, on Γ_C,
a∂_n u = ϕ, on Γ_C,
u = ?, on Γ_I.
The problem is ill-posed:
• Uniqueness: TRUE (by the Holmgren Theorem)
• Existence: not guaranteed
• Stability: not valid, a serious difficulty when computations are intended
3 Many Motivations
1. Geophysics/seismic prospecting
2. Identification of cracks, contact resistivity, corrosion factor, ... (many examples in inverse problems and engineering)
3. Electrical activity in the brain cortex (EEG, MEG) and in the heart myocardium (ECG)
4. Computed Tomography; Electrical Impedance Tomography; ...
4 Different Approaches
1. Bi-Laplacian problem, quasi-reversibility: Klibanov, Santosa (1991), Cao, Pereverzev (2007), Bourgeois, Dardé (2010)
2. Backus-Gilbert method, moment problem: Cheng, Hon, Wei, Yamamoto (2001)
3. Optimal control problem: Fursikov (1987), Kabanikhin, Karchevsky (1995), Chakib, Nachaoui (2006), Ben Abda, Henry, Jday (2009)
4. Variational formulation via the Holmgren Theorem: Ben Belgacem, El Fekih, Azaïez (2005), Andrieux, Baranger, Ben Abda (2006)
5 The Variational Formulation (1)
Duplicate u into u_D(λ, g) and u_N(λ, ϕ):
−div(a∇u_D) + b u_D = f, in Ω,        −div(a∇u_N) + b u_N = f, in Ω,
u_D(λ, g) = g, on Γ_C,                a∂_n u_N(λ, ϕ) = ϕ, on Γ_C,
u_D(λ, g) = λ, on Γ_I,                u_N(λ, ϕ) = λ, on Γ_I.
Cauchy problem ⟹ Steklov-Poincaré problem.
By the Holmgren Theorem, the right λ ∈ H^{1/2}(Γ_I) is the one satisfying
a∂_n u_D(λ, g) = a∂_n u_N(λ, ϕ), on Γ_I.
Then (again by Holmgren) u_D(λ, g) = u_N(λ, ϕ) = u in Ω.
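A sketch of why matching the normal derivatives recovers the Cauchy solution (my unpacking of the Holmgren step above):

```latex
If $a\partial_n u_D(\lambda,g)=a\partial_n u_N(\lambda,\varphi)$ on $\Gamma_I$,
set $v=u_D(\lambda,g)-u_N(\lambda,\varphi)$.  Then
\[
  -\operatorname{div}(a\nabla v)+bv=0 \ \text{in }\Omega,\qquad
  v=0 \ \text{and}\ a\partial_n v=0 \ \text{on }\Gamma_I,
\]
so $v$ has vanishing Cauchy data on $\Gamma_I$, and the Holmgren (unique
continuation) argument gives $v\equiv 0$.  The common function $u=u_D=u_N$
then carries both $u=g$ and $a\partial_n u=\varphi$ on $\Gamma_C$, i.e.\ it
solves the original Cauchy problem.
```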
6 The Variational Formulation (2)
Find λ ∈ H^{1/2}(Γ_I) such that, for all µ,
∫_Ω [(a∇u_D(λ)·∇u_D(µ) + b u_D(λ) u_D(µ)) − (a∇u_N(λ)·∇u_N(µ) + b u_N(λ) u_N(µ))] dx
  = −∫_Ω (a∇ŭ_D(g)·∇u_D(µ) + b ŭ_D(g) u_D(µ)) dx − ∫_{Γ_C} ϕ u_N(µ) dγ.
(⟺) Find λ such that s(λ, µ) = (s_D(λ, µ) − s_N(λ, µ)) = ℓ(µ), ∀µ.
Steklov-Poincaré operator: Find λ such that
Sλ = (S_D − S_N)λ = ℓ,
or, in the preconditioned form (S_D is the preconditioner),
Tλ = f.
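A minimal numerical sketch of this step, assuming the bilinear forms s_D, s_N and the right-hand side ℓ have already been discretized (e.g. by finite elements) into matrices SD, SN and a vector ell acting on the unknown trace on Γ_I; all names below are hypothetical stand-ins for a real assembly routine:

```python
import numpy as np

def steklov_poincare_system(SD, SN, ell):
    """Form the gap operator S = S_D - S_N and its S_D-preconditioned
    version T = S_D^{-1} S together with f = S_D^{-1} ell (dense stand-ins)."""
    S = SD - SN                   # Steklov-Poincare gap operator
    T = np.linalg.solve(SD, S)    # preconditioned operator T
    f = np.linalg.solve(SD, ell)  # preconditioned right-hand side
    return S, T, f
```

In the ill-posed regime S is (nearly) singular, which is what the Lavrentiev regularization of the next slide is meant to cure.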
7 Lavrentiev's Regularization - Global Convergence
Find λ_̺ such that
̺ s_D(λ_̺, µ) + s(λ_̺, µ) = ℓ(µ), ∀µ.
If λ is the exact solution, then
lim_{̺→0} √̺ ‖λ − λ_̺‖_{s_D} = 0, and ‖λ − λ_̺‖_s ≤ 2‖λ‖_{s_D}.
Noisy data: (g_ε, ϕ_ε) = (g, ϕ) + (δg, δϕ), with size(δg, δϕ) = ε.
Lavrentiev regularization problem for noisy data: Find λ_ε such that
̺ s_D(λ_ε, µ) + s(λ_ε, µ) = ℓ_ε(µ), ∀µ.
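A sketch of the regularized solve on synthetic discrete stand-ins (the matrices below are random surrogates, not an actual discretization of s_D and s):

```python
import numpy as np

def lavrentiev_solve(SD, S, ell_eps, rho):
    """Solve the Lavrentiev-regularized system (rho*S_D + S) lam = ell_eps."""
    return np.linalg.solve(rho * SD + S, ell_eps)

if __name__ == "__main__":
    # Synthetic stand-ins: SD symmetric positive definite, S symmetric with
    # rapidly decaying eigenvalues (mimicking the ill-posed gap operator).
    rng = np.random.default_rng(0)
    n = 50
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    SD = Q @ np.diag(np.linspace(1.0, 2.0, n)) @ Q.T
    S = Q @ np.diag(np.exp(-np.arange(n, dtype=float))) @ Q.T
    lam_exact = rng.standard_normal(n)
    ell_eps = S @ lam_exact + 1e-6 * rng.standard_normal(n)   # noisy data
    for rho in [1e-2, 1e-4, 1e-6]:
        lam_rho = lavrentiev_solve(SD, S, ell_eps, rho)
        print(rho, np.linalg.norm(lam_rho - lam_exact))
```

The error typically decreases and then deteriorates as rho shrinks below the noise level, which is the variance/bias trade-off quantified on the later slides.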
8 Extended-Domain Lavrentiev Regularization
Find λ♭ ∈ H^{1/2}(Γ♭_I) such that
s♭(λ♭, µ) = ℓ♭(µ), ∀µ ∈ H^{1/2}(Γ♭_I).
Extended-Domain Lavrentiev Regularization: Find λ♭_{ε,̺} such that
̺ s♭_D(λ♭_{ε,̺}, µ) + s♭(λ♭_{ε,̺}, µ) = ℓ♭_ε(µ), ∀µ.
Retrieve the solution on the real domain:
u_{ε,̺} = u♭_N(λ♭_{ε,̺}, ϕ_ε)|_Ω (∈ H¹(Ω)),    λ_{ε,̺} = u♭_N(λ♭_{ε,̺}, ϕ_ε)|_{Γ_I}.
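The extended-domain variant has the same algebraic shape; a brief sketch, with all objects (extended-domain matrices, the Neumann solver and the restriction masks) hypothetical:

```python
import numpy as np

def extended_lavrentiev(SDb, Sb, ellb_eps, rho):
    """Extended-domain Lavrentiev solve on the artificial boundary Gamma_I^b:
    (rho * S_D^b + S^b) lam_b = ell_eps^b."""
    return np.linalg.solve(rho * SDb + Sb, ellb_eps)

# Retrieval on the physical domain (hypothetical helpers, shown as comments):
# ub_N    = solve_neumann_extended(lam_b, phi_eps)  # nodal values on Omega^b
# u_eps   = ub_N[mask_Omega]                        # restriction to Omega
# lam_eps = ub_N[mask_GammaI]                       # trace on Gamma_I
```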
9 Discrepancy Principle of Morozov
The Kohn-Vogelius functional on Ω♭:
KV♭_ε(λ♭) = |u♭_D(λ♭, g_ε) − u♭_N(λ♭, ϕ_ε)|²_{H¹(Ω♭)}.
We have that
KV_ε(λ♭) = |u♭_D(λ♭, g_ε) − u♭_N(λ♭, ϕ_ε)|²_{H¹(Ω)} ≈ KV♭_ε(λ♭) ≈ ε².
The Discrepancy Principle of Morozov: fix σ > 1 and find ̺ = ̺(ε) verifying
KV_ε(λ♭_{ε,̺}) = σε².
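A sketch of how the discrepancy equation can be solved for ̺ in practice, assuming a user-supplied routine kv(rho) that performs the extended-domain Lavrentiev solve for that rho and returns the Kohn-Vogelius value (hypothetical), and assuming the usual monotonicity of the discrepancy in rho:

```python
import numpy as np

def morozov_rho(kv, eps, sigma=1.5, rho_lo=1e-12, rho_hi=1.0,
                tol=1e-3, maxit=60):
    """Find rho with kv(rho) = sigma * eps**2 by bisection on log(rho).

    Assumes kv is increasing in rho and that the target value is bracketed
    by [kv(rho_lo), kv(rho_hi)]."""
    target = sigma * eps**2
    lo, hi = np.log(rho_lo), np.log(rho_hi)
    for _ in range(maxit):
        mid = 0.5 * (lo + hi)
        if kv(np.exp(mid)) < target:
            lo = mid          # discrepancy too small: increase rho
        else:
            hi = mid          # discrepancy too large: decrease rho
        if hi - lo < tol:
            break
    return np.exp(0.5 * (lo + hi))
```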
10 Variance? Bias?
Thm. 1 (Variance). There holds (ε = noise size)
‖λ_{ε,̺} − λ_̺‖_{H^{1/2}(Γ_I)} ≤ C ε ̺^{−1/(2(1+2β))}.
Thm. 2 (Bias). If the Cauchy data are exact (noise free), then
lim_{̺→0} ‖λ_̺ − λ‖_{H^{1/2}(Γ_I)} = 0.
Rem. 1. The Lavrentiev regularization method converges if
lim_{ε→0} ε ̺^{−1/2} = 0.
The Extended-Domain Lavrentiev method is more resistant to noise.
Rem. 2. The deduced analytical results hold only for particular geometries (circles, annuli, rectangles, etc.).
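A worked parameter choice illustrating Rem. 1 against Thm. 1 (the exponent bookkeeping is mine):

```latex
Take $\varrho(\epsilon)=\epsilon^{2\gamma}$ with $0<\gamma<1$.  Then
\[
  \epsilon\,\varrho^{-1/2}=\epsilon^{1-\gamma}\longrightarrow 0,
\]
so the condition of Rem.~1 holds, while the variance bound of Thm.~1 gives
\[
  \|\lambda_{\epsilon,\varrho}-\lambda_{\varrho}\|_{H^{1/2}(\Gamma_I)}
  \le C\,\epsilon\,\varrho^{-\frac{1}{2(1+2\beta)}}
  = C\,\epsilon^{\,1-\frac{\gamma}{1+2\beta}}\longrightarrow 0
  \quad\text{already for any }\gamma<1+2\beta,
\]
a weaker requirement on $\varrho(\epsilon)$, which is one way to read the claim
that the extended-domain method is more resistant to noise.
```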
11 Harmonic Extension and the General Source Condition
General Source Condition (GSC) for the problem (Tλ = f):
λ ∈ R(T^p) ⟺ λ = T^p χ.
This condition is widely used in the analysis of regularization methods. Controversial! Rejected by some mathematicians (M. Klibanov, ...).
A concrete meaning of it, for the Cauchy problem, is provided in Thm. 3. Recall that λ = u|_{Γ_I}, where u is the solution of the Cauchy problem. Then
λ ∈ R(T^p) ⟺ ∃ u♭ ∈ H¹(Ω♭), harmonic, such that u = (u♭)|_Ω.
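One standard way to unpack (GSC), sketched under the assumption that T is compact, self-adjoint and positive in the relevant inner product with eigenpairs (t_k, e_k); this spectral framing is mine, not stated on the slide:

```latex
If $T=\sum_k t_k\,(\cdot\,,e_k)\,e_k$ with $t_1\ge t_2\ge\dots>0$, then
\[
  \lambda\in R(T^{\,p})
  \iff \lambda=T^{\,p}\chi=\sum_k t_k^{\,p}\,(\chi,e_k)\,e_k
  \iff \sum_k \frac{(\lambda,e_k)^2}{t_k^{\,2p}}<\infty ,
\]
i.e.\ the generalized Fourier coefficients of $\lambda$ decay at least as fast
as the $p$-th power of the spectrum.  Thm.~3 trades this abstract decay
condition for the concrete requirement that the Cauchy solution $u$ extends
harmonically to the larger domain $\Omega^\flat$.
```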
12 Convergence Results
Thm. 4 (A priori convergence). Assume that λ satisfies (GSC). The choice ̺ = ε^{2(1+2β)/(1+2p)} yields
‖λ_{ε,̺} − λ‖_{H^{1/2}(Γ_I)} ≤ C ε^{2p/(2p+1)}.
Thm. 5 (A posteriori convergence). Assume that λ satisfies (GSC). The choice of ̺ = ̺(ε) from the (DP) of Morozov provides
‖λ_{ε,̺} − λ‖_{H^{1/2}(Γ_I)} ≤ C ε^{2p/(2p+1)}.
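A sketch of the exponent bookkeeping behind Thm. 4, combining the variance bound of Thm. 1 with a bias bound of order ̺^{p/(1+2β)} under (GSC); the latter is my assumption, it is not written on the slides:

```latex
With $\varrho=\epsilon^{\frac{2(1+2\beta)}{1+2p}}$,
\[
  \underbrace{C\,\epsilon\,\varrho^{-\frac{1}{2(1+2\beta)}}}_{\text{variance (Thm.~1)}}
  = C\,\epsilon^{\,1-\frac{1}{1+2p}} = C\,\epsilon^{\frac{2p}{2p+1}},
  \qquad
  \underbrace{C\,\varrho^{\frac{p}{1+2\beta}}}_{\text{bias under (GSC), assumed}}
  = C\,\epsilon^{\frac{2p}{2p+1}},
\]
so the two contributions balance and the total error is
$O\big(\epsilon^{\frac{2p}{2p+1}}\big)$, as stated in Thm.~4.
```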
13 A Carleman Inequality
[Figure: the domain Ω with the accessible boundary Γ_C, the inaccessible boundary Γ_I and the subdomain Ω_τ.]
Given θ ∈ C²(Ω̄) satisfying
|∇θ(x)| > 0, ∀x ∈ Ω̄,
θ(x) > 0, ∀x ∈ Ω̄ \ Γ_I,
θ(x) = 0, x ∈ Γ_I,
define the weight function ψ(x) = e^{θ(x)}, x ∈ Ω̄, and Ω_τ = {x ∈ Ω, ψ(x) ≥ 1 + τ}.
Carleman estimate with boundary terms:
(1/ζ) ∫_Ω [a(∇v)² + ζ²bv²] e^{2ζψ} dx ≤ C ( ∫_Ω [−div(a∇v) + bv]² e^{2ζψ} dx + ∫_Γ [(a∂_n v)² + ζ²v²] e^{2ζψ} dγ ),
for all v ∈ H²(Ω).
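A concrete example of an admissible weight, for one of the particular geometries mentioned earlier (an annulus whose inner circle is Γ_I); this specific choice is mine:

```latex
For $\Omega=\{\,r_I<|x|<r_E\,\}$ with $\Gamma_I=\{|x|=r_I\}$ and
$\Gamma_C=\{|x|=r_E\}$, take $\theta(x)=|x|^2-r_I^2$.  Then
$|\nabla\theta(x)|=2|x|\ge 2r_I>0$ on $\overline\Omega$, $\theta>0$ on
$\overline\Omega\setminus\Gamma_I$ and $\theta=0$ on $\Gamma_I$, so
$\psi=e^{\theta}$ is an admissible Carleman weight.  Here
\[
  \sigma=\max_{\Gamma_C}(\psi-1)=e^{\,r_E^2-r_I^2}-1,
  \qquad
  \Omega_\tau=\{\,x\in\Omega:\ |x|^2\ge r_I^2+\log(1+\tau)\,\},
\]
so the sets $\Omega_\tau$ are annular neighbourhoods of $\Gamma_C$.
```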
14 Local Bias Estimate?
Thm. 6. Let β > 0 be a small parameter. There exist q = q(τ) ∈ [0, 1/2[ and C = C(τ) such that
‖u_N(λ_̺, ϕ) − u‖_{H¹(Ω_τ)} ≤ C ̺^q ‖λ‖.
Proof (sketch). Set w_̺ = u_N(λ_̺, ϕ) − u and consider two small parameters (τ, η) such that β > τ > η > 0. The cut-off function ξ = ξ_{τ,η} is chosen so that
0 ≤ ξ_{τ,η}(x) ≤ 1, ∀x ∈ Ω,
ξ_{τ,η}(x) = 1, ∀x ∈ Ω_τ,
ξ_{τ,η}(x) = 0, ∀x ∈ Ω \ Ω_η.
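One admissible construction of such a cut-off (a sketch; composing with ψ is my choice, not necessarily the one used in the proof):

```latex
Pick $\chi\in C^\infty(\mathbb{R})$ with $0\le\chi\le 1$, $\chi(t)=0$ for
$t\le 1+\eta$ and $\chi(t)=1$ for $t\ge 1+\tau$, and set
$\xi_{\tau,\eta}(x)=\chi\big(\psi(x)\big)$ for $x\in\overline\Omega$.
Then $\xi_{\tau,\eta}=1$ on $\Omega_\tau$, $\xi_{\tau,\eta}=0$ on
$\Omega\setminus\Omega_\eta$, and $\nabla\xi_{\tau,\eta}$ is supported in
$\Omega_\eta\setminus\Omega_\tau$, the region where the commutator terms of
the next slide live.
```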
15 Proof (cont.)
The Carleman inequality is applied to v := w_̺ ξ:
(1/ζ) ∫_{Ω_β} [a(∇(w_̺ξ))² + ζ²b(w_̺ξ)²] e^{2ζψ} dx ≤ C ( ∫_Ω [−div(a∇(w_̺ξ)) + b w_̺ξ]² e^{2ζψ} dx + ∫_Γ [(a∂_n(w_̺ξ))² + ζ²(w_̺ξ)²] e^{2ζψ} dγ ),
which yields, after some calculations and simplifications,
(1/ζ) ∫_{Ω_β} [a(∇w_̺)² + ζ²b w_̺²] dx ≤ C ( e^{2ζ(τ−β)} ∫_{Ω_η\Ω_τ} [a(∇w_̺)² + b w_̺²] dx + ζ² e^{2ζ(σ−β)} ∫_{Γ_C} w_̺² dγ ),
where σ = max{ψ(x) − 1, x ∈ Γ_C}.
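A sketch of where the two exponential factors in the last display come from (my reading of the "calculations and simplifications"):

```latex
Since $w_\varrho$ solves the homogeneous equation,
$-\operatorname{div}(a\nabla(w_\varrho\xi))+b\,w_\varrho\xi$ reduces to
commutator terms involving $\nabla\xi$, supported in
$\Omega_\eta\setminus\Omega_\tau$ where $\psi\le 1+\tau$; on $\Gamma_C$ one has
$\psi\le 1+\sigma$, while near $\Gamma_I$ the cut-off vanishes; and on
$\Omega_\beta$ one has $\xi=1$ and $\psi\ge 1+\beta$.  Bounding $e^{2\zeta\psi}$
accordingly on each region and dividing the Carleman estimate by
$e^{2\zeta(1+\beta)}$ produces the factors $e^{2\zeta(\tau-\beta)}$ (volume
term) and $e^{2\zeta(\sigma-\beta)}$ (boundary term).
```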
16 Proof (cont.)
An important inequality,
‖w_{N,̺}‖_{H^{1/2}(Γ_C)} + ‖a∂_n w_{D,̺}‖_{H^{−1/2}(Γ_C)} ≤ C √̺ ‖λ‖_{s_D},
leads to
‖w_̺‖²_{H¹(Ω_β)} ≤ C ( ρ² ‖λ_̺ − λ‖²_{s_D} + (̺/ρ^{2s}) ‖λ‖²_{s_D} ),
where s = (σ − β)/(β − τ) and ρ = e^{−ζ(β−τ)} tends to zero as ζ grows large. Choosing ζ (that is, ρ) to balance the two terms yields
‖w_̺‖_{H¹(Ω_β)} ≤ C ̺^{1/(2(1+s))} ‖λ‖_{s_D}.
The proof is complete with q = 1/(2(1+s)).
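The balancing step spelled out, a sketch in which the two norm factors are treated as comparable to ‖λ‖_{s_D} (presumably what the global-convergence estimates are used for):

```latex
Equating the two terms,
\[
  \rho^2 = \varrho\,\rho^{-2s}
  \iff \rho = \varrho^{\frac{1}{2(1+s)}} ,
\]
which is admissible since $\rho=e^{-\zeta(\beta-\tau)}$ can be made as small
as desired by taking $\zeta$ large.  Both terms are then of order
$\varrho^{\frac{1}{1+s}}\|\lambda\|_{s_D}^2$, and taking square roots gives
\[
  \|w_\varrho\|_{H^1(\Omega_\beta)}
  \le C\,\varrho^{\frac{1}{2(1+s)}}\,\|\lambda\|_{s_D},
  \qquad\text{i.e.}\qquad q=\frac{1}{2(1+s)} .
\]
```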
17 Some Remarks
• No use of the General Source Condition on the restricted area Ω_β.
• Super-convergence result for the bias: under a smoothness assumption on λ we may have
‖λ_̺ − λ‖_{s_D} ≤ C ̺^p for some p ∈ [0, 1/2[,
and an interpolation inequality then gives
‖w_̺‖_{H¹(Ω_β)} ≤ C ̺^{(1−µ)p + q} = C ̺^{(1−µ)p + µ/2}, with µ = 1/(1 + s).
• β → 0 ⟹ µ → 0.
• When β grows, Ω_β reduces to a thin band concentrated around Γ_C ⟹ µ → 1.
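The two limiting regimes of µ, sketched with the bookkeeping made explicit (my computation):

```latex
With $s=\dfrac{\sigma-\beta}{\beta-\tau}$ one has
$\mu=\dfrac{1}{1+s}=\dfrac{\beta-\tau}{\sigma-\tau}$, so that
\[
  \beta\to 0 \ \Longrightarrow\ \mu\to 0
  \ \Longrightarrow\ \text{rate } (1-\mu)p+\tfrac{\mu}{2}\to p,
  \qquad
  \beta\uparrow\sigma \ \Longrightarrow\ \mu\to 1
  \ \Longrightarrow\ \text{rate } \to \tfrac12 :
\]
the local rate improves from the global rate $p$ away from $\Gamma_C$ up to
(almost) $\tfrac12$ on thin bands $\Omega_\beta$ concentrated near $\Gamma_C$.
```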