SpECTRE: Towards improved simulations of relativistic astrophysical systems
Nils Deppe
May 1, 2019
github.com/sxs-collaboration/spectre
Table of Contents
1 Background and motivation
2 Numerical methods
3 SpECTRE implementation
Simulations of GRMHD coupled to Einstein’s equations are complicated, difficult, and interesting
Simulation Goals
• Accretion disks
• Binary neutron star mergers
• Core-collapse supernova explosions
[Image credit: Event Horizon Telescope Collaboration]
Need For High Accuracy
• Gravitational waveforms for LIGO/Virgo and space-based detectors
• LIGO/Virgo follow-up waveforms
• Accretion for the Event Horizon Telescope
• Improved understanding of heavy element generation
[Figure: Abbott et al. 2017]
General Equations to Solve
• Hyperbolic equations in general form:
  $\partial_t U + \partial_i F^i(U) + B^i \cdot \partial_i U = S(U)$
• Elliptic equations of the form:
  $\partial^2 U = S(U, \partial U)$
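As an illustrative example (not on the original slide, and with an assumed sign convention), the flat-space scalar wave equation $\partial_t^2 \psi = \delta^{ij}\partial_i\partial_j\psi$ fits the hyperbolic form above with $U = (\psi, \Pi, \Phi_i)$, $B^i = 0$, and auxiliary variables $\Pi \equiv -\partial_t\psi$, $\Phi_i \equiv \partial_i\psi$:

```latex
% Illustrative only: flat-space scalar wave in first-order flux-balance form,
% matching \partial_t U + \partial_i F^i(U) = S(U).
\begin{align}
  \partial_t \psi &= -\Pi, \\
  \partial_t \Pi + \partial_i\!\left(\delta^{ij} \Phi_j\right) &= 0, \\
  \partial_t \Phi_j + \partial_j \Pi &= 0.
\end{align}
```

Here the only nonzero source is $S_\psi = -\Pi$, and the fluxes are $F^i_{\Pi} = \delta^{ij}\Phi_j$ and $F^i_{\Phi_j} = \delta^i_{\ j}\,\Pi$.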
2 Numerical methods
Vacuum Evolutions: Spectral Methods
• Smooth solutions
• Non-overlapping grids
• Exponential convergence
• General grids
Hydrodynamics: Finite Volume Methods
• Work on shocks
• Typically Cartesian grids
• Polynomial convergence
• Overlapping grids
[Figure: u(x) vs. x]
Parallelism
Current codes:
• Message passing (MPI) + some threading
• Spectral Einstein Code (SpEC):
  • Spectral methods: one element per core
  • Finite volume: ∼100,000–150,000 cells per core
• Pseudospectral methods: ∼50 cores
• Finite volume methods: ∼20,000 cores
Discontinuous Galerkin Method
• Exponential convergence for smooth solutions
• Shock capturing
• Non-overlapping deformed grids
• hp-adaptivity
• Local time stepping
• Nearest-neighbor communication
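To make the nearest-neighbor communication claim concrete (my notation, not from the slides, and restricted to the conservative case $B^i = 0$): multiplying the evolution equation by a test function $\phi$ on an element $\Omega_k$ and integrating by parts gives the semidiscrete DG statement

```latex
% Assumed notation: semidiscrete DG weak form on one element \Omega_k for the
% conservative case \partial_t U + \partial_i F^i(U) = S(U); G is the boundary
% correction (numerical flux) on \partial\Omega_k.
\frac{d}{dt}\int_{\Omega_k} \phi\, U \, dV
  = \int_{\Omega_k} \left( \partial_i \phi \, F^i(U) + \phi\, S(U) \right) dV
  - \oint_{\partial\Omega_k} \phi\, G \, dS
```

The only place a neighboring element enters is the boundary correction $G$ in the surface integral, which is why DG needs only nearest-neighbor communication.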
Boundary Data
• Boundary fluxes communicated between elements
• Nearest-neighbor only, good for parallelization
[Figure: neighboring elements Ω^{k−1} and Ω^k exchanging fluxes]
Boundary Correction
• Consider element $\Omega^{k-1}$:
  $G^{k-1} = \frac{1}{2}\left(F^{i,+} n^+_i + F^{i,-} n^-_i\right) - \frac{C}{2}\left(u^+ - u^-\right)$
[Figure: interface between $\Omega^{k-1}$ and $\Omega^k$ with outward normals $n^+_i$ and $n^-_i$, fluxes $F^{i,+}$ and $F^{i,-}$, and states $u^+$ and $u^-$]
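Below is a minimal sketch (placeholder names, not SpECTRE's interface) of how this correction could be evaluated for a single scalar field at one interface point; taking $C$ to be the maximum characteristic speed across the interface gives the Rusanov / local Lax-Friedrichs flux listed later under Numerical Schemes.

```cpp
// Minimal sketch, not SpECTRE's API: the boundary correction
//   G = (F^{i,+} n_i^+ + F^{i,-} n_i^-)/2 - (C/2)(u^+ - u^-)
// for one scalar field at one interface point, where "+" and "-" label the
// two sides of the interface and C is the maximum characteristic speed.
double boundary_correction(const double flux_dot_normal_plus,
                           const double flux_dot_normal_minus,
                           const double u_plus, const double u_minus,
                           const double max_char_speed) {
  // Average of the fluxes projected onto each side's outward-pointing normal...
  const double average_flux =
      0.5 * (flux_dot_normal_plus + flux_dot_normal_minus);
  // ...minus a dissipative penalty proportional to the jump in u.
  return average_flux - 0.5 * max_char_speed * (u_plus - u_minus);
}
```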
The DG Algorithm Summary
1 Compute time derivatives
2 Send boundary data
3 Integrate in time
4 Send data for limiting
5 Apply limiter
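A schematic of how these five steps could fit together for one time step is sketched below; all types and function names are placeholders, not SpECTRE's actual interface, and in SpECTRE the steps are expressed as asynchronous per-element tasks (Charm++) rather than a synchronous loop.

```cpp
// Schematic only: placeholder types with empty bodies, following the five
// steps listed above. Not SpECTRE's interface.
#include <vector>

struct Element {};  // would hold the evolved variables of one DG element

void compute_time_derivatives(std::vector<Element>& /*elements*/) {}           // 1
void send_boundary_data(std::vector<Element>& /*elements*/) {}                 // 2
void integrate_in_time(std::vector<Element>& /*elements*/, double /*dt*/) {}   // 3
void send_limiter_data(std::vector<Element>& /*elements*/) {}                  // 4
void apply_limiter(std::vector<Element>& /*elements*/) {}                      // 5

void take_step(std::vector<Element>& elements, const double dt) {
  compute_time_derivatives(elements);  // volume contributions to dU/dt
  send_boundary_data(elements);        // exchange interface data with neighbors
  integrate_in_time(elements, dt);     // e.g. one Runge-Kutta substep
  send_limiter_data(elements);         // neighbor data needed by the limiter
  apply_limiter(elements);             // control oscillations near shocks
}
```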
3 SpECTRE implementation
SpECTRE Design Goals
• Modular and extensible
• Correctness: unit tests, integration tests, physics tests, etc.
• Maintainability: GitHub, documentation, tools, etc.
• Scalability: task-based parallelism (Charm++)
• Efficiency: vectorization, hardware-specific code (Blaze, LIBXSMM)
• General framework for hyperbolic (Cornell, Caltech, CalState Fullerton, UNH) and elliptic (AEI) PDEs
Available Physical Systems
• Scalar wave
• Curved scalar wave (mostly)
• Newtonian Euler (in code review)
• Relativistic Euler (mostly)
• GRMHD
• Generalized harmonic (in code review)
Numerical Schemes
Limiters:
• Minmod (MUSCL, ΛΠ_1, ΛΠ_N)
• Krivodonova
• SimpleWENO (in code review)
• HWENO (in code review)
• Multipatch FV/FD subcell (in progress)
Planned limiters:
• Moe-Rossmanith-Seal (MRS)
• Hierarchical Barth-Jespersen and vertex-based
Numerical fluxes:
• Rusanov (local Lax-Friedrichs)
• HLL
• Upwind
Planned numerical fluxes:
• HLLC
• Roe
• Marquina
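As a concrete illustration of the simplest entry in the list above, here is a sketch of minmod-type (MUSCL) slope limiting for cell averages on a uniform 1D grid; this is illustrative only, not SpECTRE's Minmod implementation.

```cpp
// Illustrative minmod-type (MUSCL) limiting on a uniform 1D grid; not
// SpECTRE's Minmod implementation.
#include <algorithm>

// minmod(a, b, c): the argument of smallest magnitude if all three share a
// sign, and zero otherwise.
double minmod(const double a, const double b, const double c) {
  if (a > 0.0 && b > 0.0 && c > 0.0) { return std::min({a, b, c}); }
  if (a < 0.0 && b < 0.0 && c < 0.0) { return std::max({a, b, c}); }
  return 0.0;
}

// Limit the slope in a cell using the one-sided differences of the cell means
// of its neighbors (dx is the cell width).
double limited_slope(const double u_left, const double u_center,
                     const double u_right, const double unlimited_slope,
                     const double dx) {
  return minmod(unlimited_slope, (u_center - u_left) / dx,
                (u_right - u_center) / dx);
}
```

When the limited slope differs from the unlimited one, the reconstruction is flattened toward the cell mean, suppressing spurious oscillations at the cost of local accuracy.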
Convergence for Smooth Problems: Alfvén Wave
[Figure: L1 error in v_z versus polynomial order (P1–P11) at fixed numbers of elements N_x = 4, 8, 16, 32, and versus number of elements N_x = 1–32 at fixed orders P3, P5, P7, P9]
Single Black Hole Evolutions
• Generalized harmonic system
• Excised cube in center
[Figure: L2 norm of the gauge constraint H_a + Γ_a and errors in the evolved variables g_ab, Π_ab, Φ_iab versus time/mass, from t = 0 to t = 2000]
Komissarov Slow Shock
256 × 1 × 1 elements, 3³ points per element
[Figure: three panels comparing the Krivodonova, SimpleWENO, and HWENO limiters, each showing the solution at t = 0.00, 0.48, 0.96, 1.44, 1.92 over 0 ≤ x ≤ 1.4]
Cylindrical Blast Wave
128² × 1 elements, 2³ points per element
[Figure: three panels comparing the Krivodonova, SimpleWENO, and HWENO limiters]
Cylindrical Blast Wave
128² × 1 elements, 3³ points per element
[Figure: three panels comparing the Krivodonova, SimpleWENO, and HWENO limiters]
Fishbone-Moncrief Disk
• Torus around a black hole
• Code comparison project
• χ = 0.9375, ρ_max ≈ 77
• Orbital period T_orb ≈ 247
• Hexahedron: [−40, 40] × [2, 40] × [−8, 8]
Fishbone-Moncrief Disk
[Figure: rest mass density ρ at t = 600]
Fishbone-Moncrief Disk
[Figure: error in rest mass density ρ at t = 600]
Scaling: Bondi Accretion GRMHD
• Run on the Blue Waters supercomputer, NCSA, UIUC, IL, USA
• Green is perfect speedup for fixed problem size (strong scaling)
• Blue shows actual weak scaling (flat is ideal)
[Figure: runtime (s) versus number of threads (10³–10⁵) for 245,760 and 1,646,592 elements]
Summary
• Improved vacuum and GRMHD simulations necessary for experiment
• Current methods difficult to scale to new machines
• Discontinuous Galerkin as a new alternative method
• SpECTRE as a general hyperbolic and elliptic PDE solver (not just DG)
• Successful scaling to the largest machines available
• Limiting and primitive recovery remain an open problem