  1. A guaranteed a posteriori error estimator for certified boundary variation algorithm
Matteo Giacomini
CMAP - Centre de Mathématiques Appliquées, École Polytechnique
IPSA - Institut Polytechnique des Sciences Avancées
6th Workshop FreeFem++ Days - LJLL UPMC, December 11th 2014
Joint work with Olivier Pantz (CMAP, École Polytechnique) and Karim Trabelsi (IPSA)
Dec. 11th 2014 (FreeFem++) - M. Giacomini - Error estimates for shape optimization - 1 / 20

  2. Outline
1. Shape optimization and shape identification problems
   - A scalar model problem
   - Differentiation with respect to the shape
   - The boundary variation algorithm
2. A posteriori estimators of the error in the shape derivative
   - Goal-oriented residual-type error estimator
   - Validation of the goal-oriented estimator
3. Adaptive shape optimization procedure
   - A first test case
   - Improved adaptive shape optimization procedure

  4. Electrical Impedance Tomography
Neumann problem (N):
  − k Δu_N + u_N = 0        in Ω \ Γ
  [[u_N]] = 0               on Γ
  [[k ∇u_N · n]] = 0        on Γ
  k_1 ∇u_N · n = g          on ∂Ω
Dirichlet problem (D):
  − k Δu_D + u_D = 0        in Ω \ Γ
  [[u_D]] = 0               on Γ
  [[k ∇u_D · n]] = 0        on Γ
  u_D = U_D                 on ∂Ω
with k = k_0 χ_ω + k_1 (1 − χ_ω) and [[·]] denoting the jump across the interface Γ.

  5. Electrical Impedance Tomography
(Same Neumann and Dirichlet problems as on the previous slide.)
Objective functional:
  J(ω) = 1/2 ∫_Ω k ∇(u_N(ω) − u_D(ω)) · ∇(u_N(ω) − u_D(ω)) dx + 1/2 ∫_Ω (u_N(ω) − u_D(ω))² dx

  6. Electrical Impedance Tomography
Bilinear form:
  a(u, v) = ∫_Ω (k ∇u · ∇v + u v) dx
Neumann problem (N): find u_N ∈ H¹(Ω) s.t.
  a(u_N, v) = F_N(v)  ∀ v ∈ W_N = H¹(Ω),    F_N(v) = ∫_∂Ω g v dσ
Dirichlet problem (D): find u_D ∈ H¹_{U_D}(Ω) s.t.
  a(u_D, v) = F_D(v)  ∀ v ∈ W_D = H¹_0(Ω),  F_D(v) = 0
with k = k_0 χ_ω + k_1 (1 − χ_ω).
Objective functional:
  J(ω) = 1/2 a(u_N(ω) − u_D(ω), u_N(ω) − u_D(ω))

  7. Shape optimization approach
PDE-constrained optimization problem:
  ω* = argmin_ω J(ω)
⇒ Optimization variable: shape and location of the inclusion ω.
Shape optimization strategy: given the domain Ω^(0), set ℓ = 0 and iterate:
  1. Compute the solutions u_N^(ℓ) and u_D^(ℓ);
  2. Compute a descent direction θ^(ℓ) and an admissible step µ^(ℓ);
  3. Move the interface: Γ^(ℓ+1) = (I + µ^(ℓ) θ^(ℓ)) Γ^(ℓ);
  4. While the stopping criterion is not fulfilled, set ℓ = ℓ + 1 and repeat.

  8. Shape optimization approach
(Same PDE-constrained problem and iterative strategy as on the previous slide.)
Shape optimization vs. classical optimization:
  Problem:   min_{ω ∈ U_ad} J(ω), J : U_ad → R, U_ad = {open sets in Ω ⊂ R^d}   |   min_{x ∈ R^n} f(x), f : R^n → R
  Direction: θ at ω s.t. ⟨dJ(ω), θ⟩ < 0                                          |   steepest descent at x: v(x) = −∇f(x)
  Update:    ω^(ℓ+1) = (I + µ^(ℓ) θ^(ℓ)) ω^(ℓ)                                    |   x^(ℓ+1) = x^(ℓ) + µ^(ℓ) v(x^(ℓ))
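The classical side of the analogy is easy to make concrete. A minimal steepest-descent sketch (toy quadratic, made-up step size `mu`), mirroring steps 1-4 of the strategy above:

```python
import numpy as np

def steepest_descent(grad, x0, mu=0.1, tol=1e-8, max_iter=1000):
    """Classical analogue of the boundary variation loop:
    v(x) = -grad f(x), then x <- x + mu * v(x) until the direction stalls."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        v = -grad(x)                 # descent direction: <grad f(x), v> < 0
        if np.linalg.norm(v) < tol:  # stopping criterion fulfilled
            break
        x = x + mu * v               # x^(l+1) = x^(l) + mu^(l) v(x^(l))
    return x

# toy example: f(x) = |x|^2 / 2, so grad f(x) = x and the minimizer is x* = 0
x_star = steepest_descent(lambda x: x, [1.0, -2.0])
```

In the shape setting the update (I + µθ)ω plays the role of x + µv, with θ a vector field deforming the interface rather than a vector in R^n.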

  9. Shape derivative
Let θ ∈ W^{1,∞}(Ω, R²) be an admissible smooth deformation of Ω s.t. the external boundary is fixed: θ = 0 on ∂Ω.
⇒ Small perturbation of the domain: Ω(θ) = (I + θ)Ω.
If the map J_Ω : θ ↦ J(Ω(θ)) is differentiable at θ = 0, we define the shape derivative:
  ⟨dJ(ω), θ⟩ = ⟨J'_Ω(0), θ⟩ = lim_{t ↘ 0} [J((I + tθ)Ω) − J(Ω)] / t
Shape derivative of the objective functional J(ω) [1]:
  ⟨dJ(ω), θ⟩ = 1/2 ∫_Ω k M(θ) ∇(u_N(ω) + u_D(ω)) · ∇(u_N(ω) − u_D(ω)) dx
             − 1/2 ∫_Ω (∇ · θ) (u_N(ω) + u_D(ω)) (u_N(ω) − u_D(ω)) dx
  M(θ) = ∇θ + ∇θ^T − (∇ · θ) I
[1] O. Pantz. Sensibilité de l'équation de la chaleur aux sauts de conductivité. C. R. Acad. Sci. Paris, Ser. I, 341:333-337 (2005).
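The limit definition can be sanity-checked numerically on a functional much simpler than the EIT one: for J(Ω) = |Ω|, the Hadamard formula gives ⟨dJ(Ω), θ⟩ = ∫_∂Ω θ · n ds. A sketch on the unit disk sampled as a polygon (this toy functional is not the one from the talk; it only illustrates the finite-difference check of a shape derivative):

```python
import numpy as np

def polygon_area(pts):
    """Shoelace formula for a counter-clockwise polygon."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)

# unit disk sampled as a 2000-gon; deformation field theta(x) = x (dilation),
# so theta . n = 1 on the boundary and the exact shape derivative of the
# area is the perimeter: <dJ, theta> = 2*pi
phi = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
pts = np.stack([np.cos(phi), np.sin(phi)], axis=1)
theta = pts.copy()

dJ_exact = 2.0 * np.pi
t = 1e-6
# finite-difference version of lim_{t->0} [J((I + t theta) W) - J(W)] / t
dJ_fd = (polygon_area(pts + t * theta) - polygon_area(pts)) / t
```

The same check, with the volumetric formula above in place of the perimeter, is what the error estimator of the next slides certifies for the discretized EIT derivative.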

  10. The boundary variation algorithm
Gradient-based strategy: find θ ∈ [H¹(Ω)]^d such that
  ⟨θ, δθ⟩_{[H¹(Ω)]^d} + ⟨dJ(ω), δθ⟩ = 0   ∀ δθ ∈ [H¹(Ω)]^d
⇒ Descent direction: θ such that ⟨dJ(ω), θ⟩ < 0.
There are two possible approaches to numerical shape optimization:
  - Discretize-then-Optimize
  - Optimize-then-Discretize

  11. The boundary variation algorithm
(Same gradient-based strategy and two approaches as on the previous slide.)
⇒ Discretized gradient-based strategy: find θ_h ∈ [X_h^p]^d such that
  ⟨θ_h, δθ_h⟩_{[X_h^p]^d} + ⟨d_h J, δθ_h⟩ ≃ 0   ∀ δθ_h ∈ [X_h^p]^d
⇒ Certified descent direction: θ_h such that ⟨d_h J, θ_h⟩ + |E_h| < 0
Numerical error in the shape derivative:
  E_h = ⟨dJ(ω_h), θ_h⟩ − ⟨d_h J, θ_h⟩
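The certification logic behind the last inequality is a one-line test: if a guaranteed bound E ≥ |E_h| is available, then ⟨d_h J, θ_h⟩ + E < 0 forces the exact derivative ⟨dJ(ω_h), θ_h⟩ to be negative as well. A sketch with hypothetical names:

```python
def is_certified_descent(dhJ_theta, error_bound):
    """True when the discrete derivative stays negative even in the worst
    case allowed by the guaranteed bound E >= |E_h|, since
    <dJ(w_h), theta_h> = <d_h J, theta_h> + E_h <= <d_h J, theta_h> + E."""
    return dhJ_theta + abs(error_bound) < 0.0

assert is_certified_descent(-1.0, 0.3)       # certified: -1.0 + 0.3 < 0
assert not is_certified_descent(-0.2, 0.5)   # inconclusive: refine the mesh
```

When the test fails the sign of the true derivative is unknown, which is exactly the situation the adaptive loop of the later slides resolves by mesh refinement.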

  12. Why couple error estimates with shape optimization?
[Figure: initial interface]

  13. Why couple error estimates with shape optimization?
[Figures: reconstructed interface; objective functional]

  14. The adaptive boundary variation algorithm
Given the domain Ω^(0) and tol = 10^{-8}, set ℓ = 0 and iterate:
  1. Construct a coarse mesh T_h^(ℓ) ⊂ Ω^(ℓ);
  2. Compute the primal solutions u_{h,N}^(ℓ) and u_{h,D}^(ℓ);
  3. Compute a descent direction θ_h^(ℓ);
  4. Compute the upper bound E of the numerical error |E_h|;
  5. While ⟨d_h J, θ_h^(ℓ)⟩ + E > 0, repeat:
     (a) Adapt the computational mesh T_h^(ℓ);
     (b) Re-compute the primal solutions u_{h,N}^(ℓ) and u_{h,D}^(ℓ);
     (c) Re-compute a descent direction θ_h^(ℓ);
     (d) Re-compute the upper bound E of |E_h| and ⟨d_h J, θ_h^(ℓ)⟩ + E;
  6. Compute an admissible step size µ_h^(ℓ);
  7. Move the mesh: T_h^(ℓ+1) = (I + µ_h^(ℓ) θ_h^(ℓ)) T_h^(ℓ);
  8. While |⟨d_h J, θ_h^(ℓ)⟩| + E > tol, set ℓ = ℓ + 1 and repeat.
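Steps 4-5 can be sketched on a toy problem where the "derivative" is a 1D integral and a guaranteed bound is available analytically. Everything here is illustrative, not the EIT setting: the integrand, the midpoint-rule "solver", and the estimator constant C are all made up so that E = C/n² provably dominates |E_h|:

```python
import numpy as np

def dhJ(n):
    """Midpoint-rule approximation of dJ = int_0^1 (x^2 - 0.4) dx = -1/15,
    standing in for the discrete shape derivative <d_h J, theta_h>."""
    x = (np.arange(n) + 0.5) / n
    return float(np.mean(x**2 - 0.4))

def certified_derivative(n=2, C=1.0, max_refinements=30):
    """Refine until the guaranteed bound certifies the sign of dhJ(n).
    For f(x) = x^2 the midpoint error is exactly 1/(12 n^2), so E = C/n^2
    with C = 1 is a true (if crude) guaranteed bound on |E_h|."""
    while True:
        val = dhJ(n)
        E = C / n**2
        if val + E < 0 or val - E > 0:   # sign certified, as in step 5
            return val, E, n
        n *= 2                            # adapt and re-compute (steps 5a-5d)
        max_refinements -= 1
        assert max_refinements > 0, "refinement loop failed to certify"

val, E, n = certified_derivative()
```

With the exact derivative negative (−1/15), the loop refines until val + E < 0, i.e. until the approximate derivative is a certified descent value; in the real algorithm this guarantees the interface is moved along a genuine descent direction on every iteration.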
