
Treating Time as Just Another Space Variable - Randolph E. Bank



1. Treating Time as Just Another Space Variable
Randolph E. Bank, Department of Mathematics, University of California, San Diego
With Panayot Vassilevski and Ludmil Zikatanov
Space-Time Methods for PDEs, RICAM, November 7, 2016

2. Outline of Talk
1. Overview
2. Parallel Adaptive Meshing Paradigm
3. Domain Decomposition Solver

3. Time as a Space Variable
Think of this
$$\mathcal{L}u = u_t - \nabla\cdot(a\nabla u) + b\cdot\nabla u + cu = f$$
as this
$$\mathcal{L}u = -\tilde\nabla\cdot(A\tilde\nabla u) + B\cdot\tilde\nabla u + cu = f, \qquad
\tilde\nabla u = \begin{pmatrix}\nabla u\\ u_t\end{pmatrix}, \quad
A = \begin{pmatrix}a & 0\\ 0 & 0\end{pmatrix}, \quad
B = \begin{pmatrix}b\\ 1\end{pmatrix}.$$
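As a quick check that the reformulation reproduces the parabolic operator, expand the space-time divergence and advection terms using the definitions above:
$$\begin{aligned}
-\tilde\nabla\cdot(A\tilde\nabla u) + B\cdot\tilde\nabla u + cu
&= -\tilde\nabla\cdot\begin{pmatrix} a\nabla u \\ 0 \end{pmatrix}
 + \begin{pmatrix} b \\ 1 \end{pmatrix}\cdot\begin{pmatrix}\nabla u \\ u_t\end{pmatrix} + cu\\
&= -\nabla\cdot(a\nabla u) + b\cdot\nabla u + u_t + cu = \mathcal{L}u.
\end{aligned}$$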

4. Why?
This provides new and expanded opportunities for
- Discretization
- Adaptivity
- Parallel Computation
but... one now has a problem in d + 1 space dimensions.

5. PLTMG Examples
- Discretization: artificial diffusion inspired by the Scharfetter-Gummel discretization.
- Adaptivity: hp adaptivity based on interpolation error estimates and recovered derivatives.
- Parallel computation: the Bank-Holst parallel adaptive meshing paradigm.
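For reference, here is a minimal sketch of the classical Scharfetter-Gummel (exponentially fitted) flux for a 1-D model flux J = -a u' + b u. This is the standard scheme the slide cites as inspiration, not PLTMG's actual artificial-diffusion implementation; the function names and test values are illustrative only.

```python
import numpy as np

def bernoulli(z):
    """Bernoulli function B(z) = z / (exp(z) - 1), evaluated stably near z = 0."""
    z = np.asarray(z, dtype=float)
    small = np.abs(z) < 1e-8
    safe = np.where(small, 1.0, z)               # avoid 0/0 in the generic branch
    return np.where(small, 1.0 - 0.5 * z, safe / np.expm1(safe))

def sg_flux(u_left, u_right, a, b, h):
    """Scharfetter-Gummel approximation of the flux J = -a u' + b u on an
    interval of length h with endpoint values u_left and u_right."""
    z = b * h / a                                # local Peclet number
    return (a / h) * (bernoulli(-z) * u_left - bernoulli(z) * u_right)

# Convection-dominated edge: the flux is carried almost entirely by the upwind value.
print(sg_flux(1.0, 0.0, a=1e-3, b=1.0, h=0.1))   # ~ b * u_left = 1.0
```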

6. Example I
$$u_t - u_{xx} + 100\sin(2\pi t)\,u_x = 1 \quad\text{in } \Omega = (0,1)\times(0,2),$$
$$u(0,t) = u(1,t) = 0 \ \text{ for } 0 \le t \le 2, \qquad u(x,0) = 0 \ \text{ for } 0 \le x \le 1.$$
Weak form: find $u_h \in S_h$ such that $B(u_h, v) = (1, v)$ for all $v \in S_h$, where
$$B(u,v) = \int_\Omega u_x v_x + \epsilon\, u_t v_t + 100\sin(2\pi t)\, u_x v + u_t v$$
and $\epsilon = 10^{-6}$.
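As a rough illustration of treating the whole space-time slab as one system (and not of PLTMG's hp finite element discretization), the sketch below discretizes Example I by finite differences: backward differences in t, central differences in x, and a single sparse solve over all of Omega. The grid sizes are arbitrary choices, and the small epsilon u_t v_t stabilization term of the weak form is omitted for brevity.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Space-time grid on Omega = (0,1) x (0,2); sizes are arbitrary illustrative choices.
nx, nt = 101, 201
x, t = np.linspace(0.0, 1.0, nx), np.linspace(0.0, 2.0, nt)
hx, ht = x[1] - x[0], t[1] - t[0]
idx = lambda i, j: j * nx + i              # flatten (space index i, time index j)

rows, cols, vals = [], [], []
rhs = np.zeros(nx * nt)
for j in range(nt):
    b = 100.0 * np.sin(2.0 * np.pi * t[j])
    for i in range(nx):
        k = idx(i, j)
        if i == 0 or i == nx - 1 or j == 0:
            # Dirichlet data: u = 0 on x = 0, x = 1 and on t = 0
            rows.append(k); cols.append(k); vals.append(1.0)
            continue
        # u_t (backward in t) - u_xx (central) + b u_x (central) = 1
        stencil = {idx(i, j):     1.0 / ht + 2.0 / hx**2,
                   idx(i, j - 1): -1.0 / ht,
                   idx(i + 1, j): -1.0 / hx**2 + b / (2.0 * hx),
                   idx(i - 1, j): -1.0 / hx**2 - b / (2.0 * hx)}
        for c, v in stencil.items():
            rows.append(k); cols.append(c); vals.append(v)
        rhs[k] = 1.0

A = sp.csr_matrix((vals, (rows, cols)), shape=(nx * nt, nx * nt))
u = spla.spsolve(A, rhs).reshape(nt, nx)   # the entire space-time solution in one solve
print(u[-1, nx // 2])                      # value at (x, t) = (0.5, 2)
```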

7. Example II
$$u_t + u u_x = 0 \quad\text{in } \Omega, \qquad u(0,t) = 1 \ \text{ for } 0 \le t \le 2,$$
$$u(x,0) = \begin{cases} 1 & 0 \le x \le 0.25,\\ 1.5 - 2x & 0.25 \le x \le 0.75,\\ 0 & 0.75 \le x \le 2,\end{cases}$$
where $\Omega = \{(x,y) \mid x > 0,\ y > 0,\ x^2 + y^2 < 4\}$.
Weak form: find $u_h \in S_{h,D}$ such that $B(u_h, v) = 0$ for all $v \in S_{h,0}$, where
$$B(u,v) = \int_\Omega \epsilon\,(u_x v_x + u_t v_t) + u_t v + u u_x v$$
and $\epsilon = 10^{-3}$.

8. Some Remarks on Analysis
Much analysis for static problems applies, possibly with minor technical challenges, e.g.
$$A = \begin{pmatrix}a & 0\\ 0 & 0\end{pmatrix} \ \to\ \begin{pmatrix}a & 0\\ 0 & \epsilon\end{pmatrix}.$$
Rescale time if needed to avoid thin domains: with space length scale $(0,L)$ and time scale $(0,T)$, set
$$\hat t = \kappa t, \qquad \kappa = \frac{L}{T}, \qquad \frac{\partial u}{\partial t} = \kappa\,\frac{\partial u}{\partial \hat t}$$
(see Bank, Vassilevski, Zikatanov, 2015). Possibly take VERY big time steps.
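Spelling out the effect of this rescaling (a one-line check, consistent with the formulas above): with $\hat t = \kappa t$ and $\kappa = L/T$, the time interval $(0,T)$ is mapped to $(0,L)$, and the equation becomes
$$\kappa\,\frac{\partial u}{\partial \hat t} - \nabla\cdot(a\nabla u) + b\cdot\nabla u + cu = f,$$
so the space-time domain is no longer thin, with the factor $\kappa$ now multiplying the time derivative.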

9. Motivation for Parallel Adaptive Paradigm
1. Make existing sequential adaptive meshing codes parallel with minimal recoding.
2. Allow adaptive meshing with low load balancing and communication costs.

10. Parallel Adaptive Mesh Paradigm (joint with Michael Holst)
Step I: On the coarse mesh, solve the entire problem. Compute a posteriori error estimates. Partition the coarse mesh to achieve equal error.
Step II: Each processor gets the complete coarse mesh. Each processor independently solves the entire problem but adaptively refines mainly its own subregion.
Step III: Glue together the meshes provided by each processor. Compute the global solution using the initial guess provided by the local solutions.
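A self-contained 1-D toy of the three steps (illustration only: the "error indicator" is just the curvature of a known function, a "mesh" is a sorted point set, no PDE is actually solved, and all names and parameters are made up for this sketch):

```python
import numpy as np

# Toy of Steps I-III: indicator = curvature of a known function with two layers.
f = lambda m: np.tanh(25.0 * (m - 0.3)) + 0.5 * np.tanh(50.0 * (m - 0.8))
indicator = lambda m: np.abs(np.gradient(np.gradient(f(m), m), m))[:-1]  # crude per-element proxy

def refine(mesh, marked):
    """Bisect the marked elements of a 1-D mesh."""
    mids = 0.5 * (mesh[:-1] + mesh[1:])
    return np.sort(np.concatenate([mesh, mids[marked]]))

# Step I: coarse estimate, then partition [0,1] into subregions of roughly equal error.
coarse = np.linspace(0.0, 1.0, 17)
eta = indicator(coarse)
cuts = np.searchsorted(np.cumsum(eta) / eta.sum(), [0.25, 0.5, 0.75])
boundaries = np.concatenate([[0.0], coarse[cuts + 1], [1.0]])        # 4 "processors"

# Step II: each processor refines adaptively, concentrating on its own subregion
# (here: exclusively on it, for brevity).
local_meshes = []
for p in range(4):
    mesh = coarse.copy()
    for _ in range(5):
        mids = 0.5 * (mesh[:-1] + mesh[1:])
        inside = (mids >= boundaries[p]) & (mids <= boundaries[p + 1])
        mesh = refine(mesh, inside & (indicator(mesh) > np.median(indicator(mesh))))
    local_meshes.append(mesh)

# Step III: glue the locally refined meshes into one global mesh
# (the global solve with the local solutions as initial guess is omitted).
global_mesh = np.unique(np.concatenate(local_meshes))
print(len(coarse), [len(m) for m in local_meshes], len(global_mesh))
```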

11. Load Balance - 16 Processors [figure]

12. Dual Problem Weights in Step II (thesis of Jeff Ovall)
We weight error estimates outside $\Omega_i$ to discourage refinement there. The weights are based on dual problems: find $\psi_i \in S_h(\Omega - \Omega_i)$ such that
$$B^*(\psi_i, v) \equiv B(v, \psi_i) = 0 \quad\text{for all } v \in S_h(\Omega - \Omega_i),$$
where $\psi_i \equiv 1$ on $\bar\Omega_i$. This provides some extra refinement outside the inflow (upwind) part of $\partial\Omega_i$. The goal of Step II is to create a good adaptive mesh (the accurate solution is computed in Step III).

13. Motivation for DD Solver: An Embarrassment of Riches
We follow the same philosophy as the adaptive meshing paradigm:
1. Want low communication.
2. Use the existing partition generated by Steps I-II.
3. Use the existing sequential multigraph solver on each processor.
4. Initial guess provided by the fine-grid part of the solution on all processors.
5. Use meshes generated by adaptive refinement - a built-in coarse grid (maximum overlap).

14. Global Saddle Point System - 2 Subdomains (thesis of Shaoying Lu)
$$\begin{pmatrix}
A_{11} & A_{1\gamma} & 0 & 0 & 0\\
A_{\gamma 1} & A_{\gamma\gamma} & 0 & 0 & I\\
0 & 0 & A_{\nu\nu} & A_{\nu 2} & -I\\
0 & 0 & A_{2\nu} & A_{22} & 0\\
0 & I & -I & 0 & 0
\end{pmatrix}
\begin{pmatrix}\delta U_1\\ \delta U_\gamma\\ \delta U_\nu\\ \delta U_2\\ \Lambda\end{pmatrix}
=
\begin{pmatrix}R_1\\ R_\gamma\\ R_\nu\\ R_2\\ U_\nu - U_\gamma\end{pmatrix}$$
The identity $I$ appears because the global mesh is conforming. $A_{11}$, $A_{22}$ correspond to interior mesh points; $A_{\gamma\gamma}$, $A_{\nu\nu}$ correspond to the interface. $\Lambda$ is a Lagrange multiplier (not computed or updated).

15. Local Saddle Point System - 2 Subdomains
$$\begin{pmatrix}
A_{11} & A_{1\gamma} & 0 & 0 & 0\\
A_{\gamma 1} & A_{\gamma\gamma} & 0 & 0 & I\\
0 & 0 & \bar A_{\nu\nu} & \bar A_{\nu 2} & -I\\
0 & 0 & \bar A_{2\nu} & \bar A_{22} & 0\\
0 & I & -I & 0 & 0
\end{pmatrix}
\begin{pmatrix}\delta U_1\\ \delta U_\gamma\\ \delta \bar U_\nu\\ \delta \bar U_2\\ \Lambda\end{pmatrix}
=
\begin{pmatrix}R_1\\ R_\gamma\\ R_\nu\\ 0\\ U_\nu - U_\gamma\end{pmatrix}$$
Reordering the unknowns and equations so that $\Lambda$ and $\delta\bar U_\nu$ come first gives
$$\begin{pmatrix}
0 & -I & 0 & I & 0\\
-I & \bar A_{\nu\nu} & 0 & 0 & \bar A_{\nu 2}\\
0 & 0 & A_{11} & A_{1\gamma} & 0\\
I & 0 & A_{\gamma 1} & A_{\gamma\gamma} & 0\\
0 & \bar A_{2\nu} & 0 & 0 & \bar A_{22}
\end{pmatrix}
\begin{pmatrix}\Lambda\\ \delta\bar U_\nu\\ \delta U_1\\ \delta U_\gamma\\ \delta \bar U_2\end{pmatrix}
=
\begin{pmatrix}U_\nu - U_\gamma\\ R_\nu\\ R_1\\ R_\gamma\\ 0\end{pmatrix}$$

16. Local Schur Complement System - 2 Subdomains
$$\begin{pmatrix}
A_{11} & A_{1\gamma} & 0\\
A_{\gamma 1} & A_{\gamma\gamma} + \bar A_{\nu\nu} & \bar A_{\nu 2}\\
0 & \bar A_{2\nu} & \bar A_{22}
\end{pmatrix}
\begin{pmatrix}\delta U_1\\ \delta U_\gamma\\ \delta \bar U_2\end{pmatrix}
=
\begin{pmatrix}R_1\\ R_\gamma + R_\nu + \bar A_{\nu\nu}(U_\nu - U_\gamma)\\ 0 + \bar A_{2\nu}(U_\nu - U_\gamma)\end{pmatrix}$$
The matrix is the stiffness matrix for the conforming mesh on processor 1. We expect $R_1 \approx 0$, $R_2 \approx 0$ at all steps; this approximation substantially cuts communication and calculation costs. Processor 1 sends $R_\gamma$, $U_\gamma$ and receives $R_\nu$, $U_\nu$. We use $\delta U_1$ and $\delta U_\gamma$ to update $U_1$ and $U_\gamma$; we discard $\delta\bar U_2$.
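A small numerical sanity check of this elimination (illustration only; the block sizes and random data below are arbitrary): build the local saddle point system from the previous slide, solve it directly, and compare with the solution of the Schur complement system above.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, ng, n2 = 5, 3, 4            # sizes of the U_1, interface, and U_2 blocks (arbitrary)

A11, A1g = rng.random((n1, n1)) + n1 * np.eye(n1), rng.random((n1, ng))
Ag1, Agg = rng.random((ng, n1)), rng.random((ng, ng)) + ng * np.eye(ng)
Bnn, Bn2 = rng.random((ng, ng)) + ng * np.eye(ng), rng.random((ng, n2))   # "barred" blocks
B2n, B22 = rng.random((n2, ng)), rng.random((n2, n2)) + n2 * np.eye(n2)
R1, Rg, Rn = rng.random(n1), rng.random(ng), rng.random(ng)
Ugap = rng.random(ng)            # plays the role of U_nu - U_gamma
I, Z = np.eye(ng), np.zeros

# Local saddle point system, unknowns (dU1, dUg, dUn_bar, dU2_bar, Lambda)
K = np.block([
    [A11,         A1g,         Z((n1, ng)), Z((n1, n2)), Z((n1, ng))],
    [Ag1,         Agg,         Z((ng, ng)), Z((ng, n2)), I          ],
    [Z((ng, n1)), Z((ng, ng)), Bnn,         Bn2,         -I         ],
    [Z((n2, n1)), Z((n2, ng)), B2n,         B22,         Z((n2, ng))],
    [Z((ng, n1)), I,           -I,          Z((ng, n2)), Z((ng, ng))]])
full = np.linalg.solve(K, np.concatenate([R1, Rg, Rn, np.zeros(n2), Ugap]))

# Schur complement system in (dU1, dUg, dU2_bar) from this slide
S = np.block([[A11,         A1g,       Z((n1, n2))],
              [Ag1,         Agg + Bnn, Bn2        ],
              [Z((n2, n1)), B2n,       B22        ]])
schur = np.linalg.solve(S, np.concatenate([R1, Rg + Rn + Bnn @ Ugap, B2n @ Ugap]))

# The Schur solution should match the (dU1, dUg, dU2_bar) pieces of the full solve.
print(np.allclose(schur, np.concatenate([full[:n1 + ng],
                                         full[n1 + 2 * ng: n1 + 2 * ng + n2]])))
```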

17. Summary of Calculation on Processor 1
1. Locally compute $R_1$ and $R_\gamma$.
2. Exchange boundary data (send $R_\gamma$ and $U_\gamma$; receive $R_\nu$ and $U_\nu$).
3. Locally compute the right-hand side of the Schur complement system.
4. Locally solve the Schur complement system via the multigraph iteration.
5. Update $U_1$ and $U_\gamma$ using $\delta U_1$ and $\delta U_\gamma$.
The update could be local ($U_1 \leftarrow U_1 + \delta U_1$; $U_\gamma \leftarrow U_\gamma + \delta U_\gamma$) or could require communication. Here we do a Newton line search.
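The same five steps in code form, as a minimal sketch under stated assumptions (not PLTMG): an mpi4py-style communicator stands in for the actual message passing, a dense direct solve stands in for the multigraph iteration, and a simple additive update replaces the Newton line search mentioned above. All argument names are illustrative.

```python
import numpy as np

def dd_update(comm, peer, A11, A1g, Ag1, Agg, Bnn, Bn2, B2n, B22, F1, Fg, U1, Ug):
    """One DD update on this processor (illustrative sketch, not PLTMG).

    A11..Agg, F1, Fg   : fine stiffness blocks and loads for this subdomain,
    Bnn, Bn2, B2n, B22 : its coarse ("barred") blocks for the other subdomain,
    U1, Ug             : current interior and interface values,
    comm               : mpi4py-style communicator, peer : rank of the other processor.
    """
    # 1. locally compute the residuals R_1 and R_gamma
    R1 = F1 - A11 @ U1 - A1g @ Ug
    Rg = Fg - Ag1 @ U1 - Agg @ Ug

    # 2. exchange boundary data: send (R_gamma, U_gamma), receive (R_nu, U_nu)
    Rn, Un = comm.sendrecv((Rg, Ug), dest=peer, source=peer)

    # 3. right-hand side of the local Schur complement system
    n1, ng, n2 = len(U1), len(Ug), B22.shape[0]
    rhs = np.concatenate([R1, Rg + Rn + Bnn @ (Un - Ug), B2n @ (Un - Ug)])

    # 4. solve the Schur complement system (dense solve stands in for multigraph)
    S = np.block([[A11, A1g, np.zeros((n1, n2))],
                  [Ag1, Agg + Bnn, Bn2],
                  [np.zeros((n2, n1)), B2n, B22]])
    d = np.linalg.solve(S, rhs)

    # 5. update U_1 and U_gamma; the coarse piece d[n1+ng:] is discarded
    #    (a Newton line search could scale the correction here)
    return U1 + d[:n1], Ug + d[n1:n1 + ng]
```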

18. The Rate of Convergence (joint with Panayot Vassilevski)
Theorem: Under suitable hypotheses, the rate of convergence $\gamma$ of the DD algorithm is bounded by
$$\gamma \le C\left(\frac{H}{d}\right)^2,$$
where $C$ is independent of $N$, $p$, $h$, $H$, and $d$. In practice $H \sim d$, and the observed rate of convergence is constant (at least for $p \le 256$ and $N \le 25$ million). The proof makes heavy use of interior estimates.

19. Global Solution, N = 14095115 [figure]
