
A priori and a posteriori analyses of the DPG method
Jay Gopalakrishnan, Portland State University
ICERM Workshop on Robust Discretization and Fast Solvers for Computable Multi-Physics Models, Brown University, May 2013
Thanks: AFOSR, NSF


  1. A priori and a posteriori analyses of the DPG method
Jay Gopalakrishnan, Portland State University
ICERM Workshop on Robust Discretization and Fast Solvers for Computable Multi-Physics Models, Brown University, May 2013
Thanks: AFOSR, NSF
Jay Gopalakrishnan 1/38

  2. Contents
Principal collaborator in DPG research: Leszek Demkowicz.
◮ Three avenues to DPG methods
◮ A priori error analysis
◮ A posteriori error analysis
◮ Fast solvers
◮ Examples

  3. Three avenues to DPG methods
[Diagram: least-squares Galerkin methods, Petrov-Galerkin methods, and mixed Galerkin methods all lead to DPG, the Petrov-Galerkin method with optimal test space.]

  4. “Petrov-Galerkin” schemes (PG)
PG schemes are distinguished by different trial and test (Hilbert) spaces.
The problem: P.D.E. + boundary conditions.
    ↓
Variational form: Find x in a trial space X satisfying b(x, y) = ℓ(y) for all y in a test space Y.
    ↓
Discretization: Find x_h in a discrete trial space X_h ⊂ X satisfying b(x_h, y_h) = ℓ(y_h) for all y_h in a discrete test space Y_h ⊂ Y.
For PG schemes, X_h ≠ Y_h in general.
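The discretization step above can be sketched in a few lines of linear algebra. The example below is not from the talk: the model problem −u″ = 1 on (0,1) with u(0) = u(1) = 0, the uniform mesh, and all variable names are illustrative assumptions, and it uses the simplest choice X_h = Y_h (standard Galerkin with piecewise-linear hat functions) rather than a genuine Petrov-Galerkin pairing:

```python
import numpy as np

# Assumed model problem: -u'' = 1 on (0,1), u(0) = u(1) = 0.
# Trial space = test space: continuous piecewise linears on a uniform mesh.
n = 9                      # number of interior nodes
h = 1.0 / (n + 1)          # mesh width

# Stiffness matrix: b(phi_j, phi_i) = integral of phi_j' phi_i'
A = (1.0 / h) * (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))

# Load vector: ell(phi_i) = integral of 1 * phi_i = h
b = h * np.ones(n)

# Discrete problem: find u_h in X_h with b(u_h, y_h) = ell(y_h) for all y_h in Y_h.
u = np.linalg.solve(A, b)
```

For this particular 1D problem, linear finite elements happen to be nodally exact, so `u` agrees with the exact solution u(x) = x(1 − x)/2 at the mesh nodes.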

  5. Elements of theory
Variational formulation:
  Exact inf-sup condition  C ‖x‖_X ≤ sup_{y ∈ Y} |b(x, y)| / ‖y‖_Y,  plus a uniqueness condition  ⟹  wellposedness.
Babuška–Brezzi theory:
  Discrete inf-sup condition  C ‖x_h‖_X ≤ sup_{y_h ∈ Y_h} |b(x_h, y_h)| / ‖y_h‖_Y  ⟹  ‖x − x_h‖_X ≤ C inf_{w_h ∈ X_h} ‖x − w_h‖_X.
Difficulty: The exact inf-sup condition does not imply the discrete inf-sup condition.

  6. Elements of theory (continued)
[Slide 5 repeated, then:]
Is there a way to find a stable test space for any given trial space (thus giving a stable method automatically)?

  7. The ideal method
Pick any X_h ⊆ X. The ideal DPG method finds x_h ∈ X_h such that
    b(x_h, y) = ℓ(y)  for all y ∈ Y_h^opt := T(X_h),
where T : X → Y is defined by (Tw, y)_Y = b(w, y) for all w ∈ X, y ∈ Y.  [Demkowicz + G 2011]

  8. The ideal method (continued)
Rationale:
Q: Which function y maximizes |b(x, y)| / ‖y‖_Y for any given x?
A: y = Tx is the maximizer. ← Optimal test function.
DPG idea: If the discrete test space contains the optimal test functions, the exact inf-sup condition ⟹ the discrete inf-sup condition.
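The maximizer claim follows in one line from the definition of T and the Cauchy–Schwarz inequality in Y (a sketch, not taken from the slides):

```latex
|b(x,y)| \;=\; |(Tx,\,y)_Y| \;\le\; \|Tx\|_Y \,\|y\|_Y ,
```

with equality precisely when y is a scalar multiple of Tx. Hence sup_{y ∈ Y} |b(x, y)| / ‖y‖_Y = ‖Tx‖_Y, attained at y = Tx.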

  9. The ideal method (continued)
[Slide 7's definition repeated.]
Assumptions:
[A.1] {w ∈ X : b(w, y) = 0 for all y ∈ Y} = {0}.
[A.2] There exist C_1, C_2 > 0 such that C_1 ‖y‖_Y ≤ sup_{w ∈ X} |b(w, y)| / ‖w‖_X ≤ C_2 ‖y‖_Y.
Theorem (DPG quasioptimality)
[A.1–A.2] ⟹ ‖x − x_h‖_X ≤ (C_2 / C_1) inf_{w_h ∈ X_h} ‖x − w_h‖_X.
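A proof sketch (standard, but not spelled out on the slide): [A.1] and [A.2], via duality, give the norm equivalence C_1 ‖z‖_X ≤ ‖Tz‖_Y ≤ C_2 ‖z‖_X, and the iDPG solution is the best approximation in the energy norm ‖Tz‖_Y, because b(x − x_h, y) = (T(x − x_h), y)_Y = 0 for all y ∈ Y_h^opt = T(X_h). Combining the two facts,

```latex
\|x - x_h\|_X
\;\le\; \tfrac{1}{C_1}\,\|T(x - x_h)\|_Y
\;\le\; \tfrac{1}{C_1}\,\|T(x - w_h)\|_Y
\;\le\; \tfrac{C_2}{C_1}\,\|x - w_h\|_X
\qquad \text{for all } w_h \in X_h .
```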

  10. The ideal method (continued)
But … can we really compute Tx?
For a few problems, Tx can be calculated in closed form.
When Tx cannot be hand-calculated, we overcome two difficulties:
◮ Redesign the formulation so that T is local (by hybridization).
◮ Approximate T by a computable (finite-rank) T_r.

  11. The ideal method (continued)
The ideal DPG method = iDPG method.

  12. Trivial Example 1: Standard FEM is an iDPG method
Problem: Given F ∈ H^{-1}(Ω), find u ∈ H^1_0(Ω) solving ∫_Ω ∇u · ∇v = F(v) for all v ∈ H^1_0(Ω).
Recall the ideal DPG method (slide 7): find x_h ∈ X_h such that b(x_h, y) = ℓ(y) for all y ∈ Y_h^opt := T(X_h), where (Tw, y)_Y = b(w, y).

  13. Trivial Example 1 (continued)
Set X = Y = H^1_0(Ω) and (v, y)_Y = ∫_Ω ∇v · ∇y.

  14. Trivial Example 1 (continued)
Then (·, ·)_Y = b(·, ·) ⟹ T = identity, so Y_h^opt = X_h.

  15. Next
Three avenues to DPG methods
◮ Petrov-Galerkin with optimal test functions ✦
◮ Least-squares Galerkin method
A priori error analysis
◮ Ideal DPG method ✦
A posteriori error analysis
Fast solvers
Examples
◮ Example 1 (Standard FEM) ✦

  16. Trivial Example 2: The L^2-based least squares method is an ideal DPG method
Problem: Given f ∈ L^2(Ω) and a linear continuous bijective A : X → L^2(Ω), find u ∈ X satisfying Au = f.
[Slide 7's definition recalled.]

  17. Trivial Example 2 (continued)
Set Y = L^2(Ω), b(x, y) = (Ax, y)_Y, ℓ(y) = (f, y)_Y.

  18. Trivial Example 2 (continued)
Then (Tw, y)_Y = (Aw, y) ⟹ T = A ⟹ Y_h^opt = A X_h, so the iDPG equations become the normal equations:
    (Ax_h, Aw_h)_Y = (f, Aw_h)_Y  for all w_h ∈ X_h.
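In a finite-dimensional analogue this reduction is easy to check numerically. The sketch below is not from the talk: the matrix A, the data f, and the sizes are all hypothetical, with the Euclidean inner product on ℝ³ standing in for (·, ·)_Y on L^2(Ω):

```python
import numpy as np

# Hypothetical discrete analogue: A plays the role of the injective operator
# A : X -> L2, and the Euclidean inner product stands in for ( . , . )_Y.
A = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [1.0, 0.0]])
f = np.array([2.0, 1.0, 1.0])

# With b(x, y) = (Ax, y)_Y we get T = A, so Y_h^opt = A X_h, and the iDPG
# equations (A x_h, A w_h)_Y = (f, A w_h)_Y are exactly the normal equations:
x_h = np.linalg.solve(A.T @ A, A.T @ f)

# Equivalently, x_h minimizes ||A x - f||_Y: the residual A x_h - f is
# Y-orthogonal to the optimal test space A X_h.
residual = A @ x_h - f
```

Here the residual turns out to be zero because this particular f lies in the range of A; in general only the orthogonality A^T (A x_h − f) = 0 holds.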

  19. The least-squares avenue
[Diagram from slide 3 repeated: least-squares Galerkin, Petrov-Galerkin, and mixed Galerkin methods all lead to DPG with optimal test space.]

  20. Definitions
Riesz map: R_Y : Y → Y*, (R_Y y)(v) = (y, v)_Y for all y, v ∈ Y.
Operator generated by the form: B : X → Y*, (Bx)(y) = b(x, y) for all x ∈ X, y ∈ Y.
Trial-to-test operator: T : X → Y was defined by (Tw, y)_Y = b(w, y) for all w ∈ X, y ∈ Y; hence T = R_Y^{-1} ∘ B.
Energy norm on X: |||z|||_X := ‖Tz‖_Y.
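In finite dimensions these definitions translate directly into linear algebra: the Y-inner product becomes a Gram matrix G (the Riesz map in a basis), the form becomes a matrix B, and T = R_Y^{-1} ∘ B becomes a linear solve. A minimal sketch, with all matrices and sizes hypothetical:

```python
import numpy as np

# Hypothetical finite-dimensional setting: dim Y = 3, dim X = 2.
G = np.array([[2.0, 0.0, 0.0],      # Gram matrix of ( . , . )_Y, i.e. the Riesz map R_Y
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
B = np.array([[1.0, 0.0],           # B[i, j] = b(x_j, y_i), so (B x)(y) = b(x, y)
              [0.0, 1.0],
              [1.0, 1.0]])

# Trial-to-test operator T = R_Y^{-1} o B, computed column by column.
T = np.linalg.solve(G, B)

# The defining identity (T w, y)_Y = b(w, y) for all w, y reads: G @ T == B.

# Energy norm |||z|||_X = ||T z||_Y = sqrt((T z)^T G (T z)).
def energy_norm(z):
    Tz = T @ z
    return float(np.sqrt(Tz @ G @ Tz))
```

The test-space Gram matrix G needs to be inverted (one solve per trial basis function), which is why the practical DPG method localizes T element by element.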
