
Multilevel optimization by space-filling curves in adaptive atmospheric modeling. Jörn Behrens, Technische Universität München, Center for Math. Sciences (M3), 85747 Garching, Germany. behrens@ma.tum.de, www-m3.ma.tum.de/m3/behrens


  1. Multilevel optimization by space-filling curves in adaptive atmospheric modeling. Jörn Behrens, Scientific Computing, Technische Universität München, Center for Math. Sciences (M3), 85747 Garching, Germany. behrens@ma.tum.de, www-m3.ma.tum.de/m3/behrens. Funding: DEKLIM Project No. 01 LD 0037; DFG Grant No. BE2314/3-1.

  2. Introduction – why adaptive modeling? Scale interaction; sensitivity analysis (local – global scale); fronts (large gradients); embedded local phenomena; filamentation in tracers; point sources for tracers; efficient utilization of computing resources.

  3. Adaptive Algorithm (flowchart with yes/no decision branches).

  4. Modular Adaptive Software: main program with modules for the dynamic kernel (conservative SLM), sub-grid processes, system solver, diagnostics and visualization, I/O and data management, and the grid generator amatos (http://www.amatos.info).

  5. Refinement Strategy, in 2D and 3D. Rivara (1984), Bänsch (1991). Grids created with amatos.
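The refinement strategy cited here (Rivara 1984, Bänsch 1991) is bisection refinement. A minimal 2D sketch of one longest-edge bisection step, with function names of my own choosing (the actual amatos implementation differs and also handles conformity and 3D):

```python
import math

def longest_edge_bisect(tri):
    """Bisect a triangle at the midpoint of its longest edge (Rivara-style sketch)."""
    a, b, c = tri
    # rotate the vertex list so that (a, b) becomes the longest edge
    rotations = [(a, b, c), (b, c, a), (c, a, b)]
    a, b, c = max(rotations, key=lambda t: math.dist(t[0], t[1]))
    m = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)  # midpoint of the longest edge
    return [(a, m, c), (m, b, c)]  # two children sharing the new vertex m

# one refinement step on the unit right triangle
root = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
children = longest_edge_bisect(root)
```

In a full implementation the neighbor across the bisected edge must be refined as well to keep the triangulation conforming; that propagation step is omitted here.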

  6. Complex Geometries: polygonal domain and bitmapped domain. Grids created by amatos (with F. Klaschka).

  7. Data Management and Numerics

  8. Levels. Grid level: domain decomposition, parallelization, DD solvers. System level: matrix ordering, sparse storage, prevention of fill-in. Cache level: data layout, access optimization.

  9. Grid Level: Parallelization. The partitioning problem: distribute cells into equally sized sets (partitions); partitions should be connected; partitions have to be re-calculated frequently; data movement has to be minimized; the algorithm has to be parallel and of low computational effort.

  10. Data Management and Parallelization

  11. Grid Level: Parallelization. Space-filling curve for load balancing: cells are numbered along the curve and contiguous segments of the numbering are assigned to processors 1–4. Roberts et al. (1997), Griebel & Zumbusch (1999).
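The load-balancing idea above can be sketched in a few lines: order the cells by their position along a space-filling curve, then cut the ordered list into equal contiguous chunks. The cited work uses Hilbert-type curves; this sketch substitutes the simpler Morton (Z-order) key as a stand-in, and the function names are mine:

```python
def morton_key(ix: int, iy: int, bits: int = 16) -> int:
    """Interleave the bits of integer cell coordinates (Z-order / Morton curve)."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b)
        key |= ((iy >> b) & 1) << (2 * b + 1)
    return key

def partition(cells, nproc):
    """Sort cells along the curve, then cut into nproc contiguous chunks."""
    ordered = sorted(cells, key=lambda c: morton_key(*c))
    size, rem = divmod(len(ordered), nproc)
    parts, start = [], 0
    for p in range(nproc):
        stop = start + size + (1 if p < rem else 0)  # spread the remainder
        parts.append(ordered[start:stop])
        start = stop
    return parts

cells = [(x, y) for x in range(4) for y in range(4)]  # toy 4x4 grid
parts = partition(cells, 4)
```

Because the curve preserves locality, each contiguous chunk tends to be a spatially compact (usually connected) patch, which is exactly the property the partitioning problem on the previous slide asks for.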

  12. Grid Level: Parallelization. Algorithm for triangles: (1) reserve one bit per refinement level; (2) set the bits while refining, so each triangle carries a binary key (e.g. 0110, 1011) encoding its position along the curve. J. B., J. Zimmermann (2000), N. Rakowski (2003).
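An illustrative reimplementation of the bit-per-level idea (not the actual amatos code): on each bisection the two children inherit the parent's key with one new bit appended, and left-aligning the keys to a common depth yields a total order along the curve even for leaves at different refinement levels.

```python
def refine(tri):
    """Bisect: each child inherits the parent key plus one new bit (0 or 1)."""
    key, level = tri
    return [(key << 1, level + 1), ((key << 1) | 1, level + 1)]

def sfc_index(tri, max_level):
    """Left-align keys to a common depth so leaves of different levels compare."""
    key, level = tri
    return key << (max_level - level)

# refine the root triangle twice -> four leaves ordered along the curve
root = (0, 0)  # (key, level)
leaves = [g for child in refine(root) for g in refine(child)]
order = [t[0] for t in sorted(leaves, key=lambda t: sfc_index(t, 2))]
```

Setting bits during refinement means the curve index comes for free with the grid adaptation; no separate traversal is needed when the partitions are re-computed.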

  13. Results: Tracer Advection. Artificial tracer in the Arctic stratosphere; comparison of load balancing and edge-cut. SFC: J. B., J. Zimmermann (2000); Metis: G. Karypis, V. Kumar (1998).

  14. Cache Level: Data Management. Nearest-neighbor communication (vertex-wise); connectivity matrix with different orderings.

  15. Cache Level: Data Management. Connectivity matrix with different orderings: cache misses and distance structure.

  16. Cache Level: Data Management. Nearest-neighbor communication (element-wise): cache misses and distance structure.

  17. System Level: FEM support. Main data objects: nodes, edges, triangles. FEM signature: • unknowns on nodes • unknowns on edges • unknowns on triangles • position in barycentric coordinates (for edges and triangles).
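The FEM signature above can be pictured as a small record attached to each element type. A hypothetical sketch (field names are mine, not the amatos interface):

```python
from dataclasses import dataclass, field

@dataclass
class FEMSignature:
    """Where the degrees of freedom of an element type live (hypothetical layout)."""
    unknowns_on_nodes: int
    unknowns_on_edges: int
    unknowns_on_triangles: int
    # barycentric positions of edge/interior unknowns
    positions: list = field(default_factory=list)

# linear (P1) elements: one unknown per node, none on edges or interiors
p1 = FEMSignature(1, 0, 0)
# quadratic (P2) elements: node unknowns plus one per edge at the edge midpoint
p2 = FEMSignature(1, 1, 0, positions=[(0.5, 0.5, 0.0)])
```

Keeping the element description in such a signature lets the grid generator allocate and renumber unknowns generically, without hard-coding a particular finite-element type.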

  18. System Level: Matrix Ordering. Structure of the matrix under different orderings: tree-sorted, quotient minimum degree, reverse Cuthill-McKee, reverse SFC. System with ~200,000 unknowns, solved with ILU-preconditioned BiCGStab; comparison of iteration counts and run time. J. B., N. Rakowski, S. Frickenhaus, et al. (2003).
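One of the orderings compared on this slide, reverse Cuthill-McKee, is available in SciPy, so the reorder-then-solve pipeline can be sketched on a toy problem. This is a minimal illustration on a small scrambled Laplacian, not the ~200,000-unknown system or the tree-sorted/SFC orderings of the slide:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee
from scipy.sparse.linalg import bicgstab, spilu, LinearOperator

# Toy sparse system: 2D 5-point Laplacian on a 20x20 grid, rows/cols shuffled
n = 20
T = sp.diags([-1, 2, -1], [-1, 0, 1], (n, n))
A = sp.kronsum(T, T).tocsr()
rng = np.random.default_rng(0)
p = rng.permutation(A.shape[0])
A = A[p][:, p]                      # scrambled ordering (poor locality, more fill-in)
b = np.ones(A.shape[0])

# Reverse Cuthill-McKee reordering to shrink bandwidth and ILU fill-in
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
Ar = A[perm][:, perm].tocsc()
br = b[perm]

# ILU-preconditioned BiCGStab on the reordered system
ilu = spilu(Ar, drop_tol=1e-4)
M = LinearOperator(Ar.shape, ilu.solve)
x_r, info = bicgstab(Ar, br, M=M)

# undo the permutation to get the solution in the original ordering
x = np.empty_like(x_r)
x[perm] = x_r
```

The point of the reordering is that the ILU factors of the banded matrix `Ar` stay sparse, which keeps both the preconditioner setup and each BiCGStab iteration cheap.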

  19. Example 1: Linear Advection. Simulation of tracer transport. Resolution of wind data: 50 x 50 km; situation in January 1990, 70 hPa layer (18,000 m). A. Rinke et al. (1997).

  20. Example 1: Linear Advection. Simulation of tracer transport: 50 km uniform resolution vs. 5 km local resolution. J. B., K. Dethloff, W. Hiller, A. Rinke (2000).

  21. Example 1: Linear Advection. Simulation of tracer transport. Costs: uniform vs. adaptive.

  22. Example 2: Shallow Water Equations. Flow over an isolated mountain; equations in vorticity-divergence form. Plots of geopotential and vorticity. M. Läuter (2003).
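The slide cites the vorticity-divergence form without reproducing it. For reference, the rotating shallow-water system in this form is commonly written as below (relative vorticity ζ, divergence δ, geopotential Φ, Coriolis parameter f, velocity u); the exact formulation used by Läuter (2003) may differ in detail:

```latex
\begin{aligned}
\frac{\partial \zeta}{\partial t} &= -\nabla\cdot\bigl((\zeta + f)\,\mathbf{u}\bigr),\\
\frac{\partial \delta}{\partial t} &= \mathbf{k}\cdot\nabla\times\bigl((\zeta + f)\,\mathbf{u}\bigr)
  - \nabla^{2}\Bigl(\Phi + \tfrac{1}{2}\lvert\mathbf{u}\rvert^{2}\Bigr),\\
\frac{\partial \Phi}{\partial t} &= -\nabla\cdot(\Phi\,\mathbf{u}).
\end{aligned}
```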

  23. Example 3: Inverse Modeling. Problem: given the wind and the tracer distribution, what is the source density of the tracer?

  24. Conclusions. Triangular grid generation for simplicity and complex domains; adaptive grid refinement for accuracy and efficiency; SFC for partitioning in parallel applications; SFC ordering for efficient data access and matrix reordering; examples from tracer transport to the dynamical core.

  25. Jörn Behrens, TU München. www-m3.ma.tum.de/m3/behrens, behrens@ma.tum.de
