Application Performance Evaluation Studies of Multi-Core Nodes and the Gemini Network

  1. Application Performance Evaluation Studies of Multi-Core Nodes and the Gemini Network
     Mike Ashworth, Xiaohu Guo, Charles Moulinec, Stephen Pickles, Martin Plummer, Andrew Porter, Andrew Sunderland and Ilian Todorov
     Computational Science & Engineering Department, STFC Daresbury Laboratory, Warrington WA4 4AD, UK
     23rd May 2011, CUG 2011, Fairbanks

  2. Outline
     Introduction to the UK HECToR system
     Applications
     • DL_POLY_4 (see paper)
     • fd3d
     • Fluidity-ICOM
     • PFARM
     • POLCOMS
     • ScaLAPACK
     • Telemac
     • WRF
     Conclusions

  3. HECToR - High End Computing Technology Resource. UK National HPC service. http://www.hector.ac.uk/

  4. HECToR XT4 vs. XE6
     HECToR phase2a XT4:
     • Processor: AMD Barcelona quad-core
     • Core: 2.3 GHz clock frequency; SSE SIMD FPU (4 flops/cycle = 9.2 GF peak)
     • Memory: 16 GB/node symmetric; DDR2; 12 GB/s peak @ 800 MHz
     • Interconnect: SeaStar
     HECToR phase2b XE6:
     • Processor: AMD Magny-Cours 24-core
     • Core: 2.1 GHz clock frequency; SSE SIMD FPU (4 flops/cycle = 8.4 GF peak)
     • Memory: 16 GB/node symmetric; DDR3; 85 GB/s peak @ 1333 MHz
     • Interconnect: Gemini
     HECToR 'interim' phase2b XT6: same network as the XT4, same processors as the XE6
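
The per-core peak figures quoted above follow directly from the clock frequency and the 4 flops/cycle SSE rate; as a quick check (the node-level totals are my own arithmetic and are not quoted on the slide):

    \[
    \begin{aligned}
    \text{XT4: } & 4\ \text{flops/cycle} \times 2.3\ \text{GHz} = 9.2\ \text{GFlop/s per core}, \quad 4 \times 9.2 = 36.8\ \text{GFlop/s per node}\\
    \text{XE6: } & 4\ \text{flops/cycle} \times 2.1\ \text{GHz} = 8.4\ \text{GFlop/s per core}, \quad 24 \times 8.4 = 201.6\ \text{GFlop/s per node}
    \end{aligned}
    \]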

  5. fd3d

  6. Large subduction earthquakes
     On 19th Sep 1985 a large Ms 8.1 subduction earthquake occurred on the Mexican Pacific coast, with an epicentre about 340 km from Mexico City, causing about 30,000 deaths and losses of $7 billion.
     On 12th May 2008 the Ms 7.9 Sichuan, China, earthquake caused about 70,000 deaths and $80 billion in losses.
     On 11th March 2011 the Mw 9.0 Tohoku, Japan, earthquake resulted in about 15,000 deaths and $15-$30 billion in losses.
     There is therefore a seismological, engineering and socio-economic interest in modelling these types of events, particularly given the scarcity of observational instrumental data for them.

  7. fd3d earthquake simulation code
     • Seismic wave propagation
     • 3D velocity-stress equations
     • Structured grid
     • Explicit scheme: 2nd order accurate in time, 4th order accurate in space
     • Regular grid partitioning
     • Halo exchange
     [Figure: regular grid partitioning of the roughly 500 km x 600 km x 124 km domain around Mexico City, showing the hypocentre near Caleta and the i, j, k grid axes.]
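
fd3d's own source is not shown in the slides; the C/MPI fragment below is only a generic sketch of the regular-grid partitioning and halo-exchange pattern described above (the array sizes, the single exchanged face and all names are invented for the illustration).

    #include <mpi.h>
    #include <stdlib.h>

    /* Illustrative 3D halo exchange on a regular Cartesian partition.
       NX, NY, NZ are the local subdomain sizes; one ghost layer per face. */
    #define NX 64
    #define NY 64
    #define NZ 64

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int dims[3] = {0, 0, 0}, periods[3] = {0, 0, 0};
        int nprocs, rank;
        MPI_Comm cart;

        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
        MPI_Dims_create(nprocs, 3, dims);                 /* regular grid partitioning */
        MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 0, &cart);
        MPI_Comm_rank(cart, &rank);

        /* Local field with one ghost plane on each face. */
        size_t n = (size_t)(NX + 2) * (NY + 2) * (NZ + 2);
        double *u = calloc(n, sizeof(double));

        /* The x- and y-faces are strided and would normally use datatypes built
           with MPI_Type_create_subarray; only the contiguous z-faces are
           exchanged here, to keep the sketch short. */
        int zlo, zhi;
        MPI_Cart_shift(cart, 2, 1, &zlo, &zhi);

        int face = (NX + 2) * (NY + 2);                   /* points in one z-plane */
        /* Send the top interior plane up and receive the lower ghost plane,
           then the reverse for the bottom interior plane. */
        MPI_Sendrecv(u + NZ * face, face, MPI_DOUBLE, zhi, 0,
                     u,             face, MPI_DOUBLE, zlo, 0, cart, MPI_STATUS_IGNORE);
        MPI_Sendrecv(u + 1 * face,        face, MPI_DOUBLE, zlo, 1,
                     u + (NZ + 1) * face, face, MPI_DOUBLE, zhi, 1, cart, MPI_STATUS_IGNORE);

        free(u);
        MPI_Finalize();
        return 0;
    }

At the domain edges MPI_Cart_shift returns MPI_PROC_NULL, so the same exchange code works without special-casing the boundaries.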

  8. Classic signature of 3D halo exchange

  9. fd3d performance on XT4, XT6 and XE6
     62.5 m resolution model of the Parkfield, CA, quake
     Little performance difference between the systems
     Comms speed-up on XE6 more than offset by memory contention
     [Figure: performance vs. number of cores (0 to 6144) for the Cray XT4, XT6 and XE6.]

  10. Fluidity-ICOM

  11. Unstructured Mesh Ocean Modelling
     Fluidity-ICOM is built on top of Fluidity, an adaptive unstructured finite element code for computational fluid dynamics
     The Imperial College Ocean Model (ICOM) has the capability to efficiently resolve a wide range of scales simultaneously
     This offers the opportunity to resolve both basin-scale circulation and small-scale processes at the same time

  12. Fluidity-ICOM on the Cray XT4 and XE6
     10 million vertex benchmark case
     Performance of the momentum solve is much worse on the XE6
     Presumed due to memory contention between 24 cores on a node vs. the quad-core XT4
     Part of ongoing performance investigations
     [Figure: momentum-solve performance vs. number of cores (1024, 2048, 4096) for the Cray XT4 and XE6.]

  13. Fluidity-ICOM on the Cray XT4 and XE6
     Current work focusing on hybrid MPI-OpenMP
     Momentum matrix assembly
     Efficiency is good out to 6 threads / 4 tasks per node
     Allows us to reduce MPI tasks to 4 per node and decrease the memory footprint
     [Figure: momentum matrix assembly efficiency (0.3 to 1.0) vs. number of cores (1, 2, 4, 6, 12, 24).]
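
Neither the Fluidity-ICOM source nor its OpenMP directives appear in the slides; the fragment below is only a generic sketch of the hybrid MPI-OpenMP pattern being described (funnelled MPI plus an OpenMP-parallel assembly-style loop), with all names and sizes invented for the example.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    /* Generic hybrid MPI + OpenMP skeleton: a few MPI tasks per node, each
       using several OpenMP threads for on-node work such as matrix assembly. */
    int main(int argc, char **argv)
    {
        int provided, rank;
        double local_sum = 0.0, global_sum = 0.0;

        /* MPI_THREAD_FUNNELED: only the master thread makes MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Threaded "assembly" loop: each element's contribution is independent,
           so the loop parallelises across the threads of this task. */
        #pragma omp parallel for reduction(+:local_sum)
        for (int e = 0; e < 100000; ++e)
            local_sum += 1.0 / (e + 1.0);          /* stand-in for per-element work */

        /* Only the master thread communicates, matching MPI_THREAD_FUNNELED. */
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("%d threads per task, global sum = %f\n",
                   omp_get_max_threads(), global_sum);

        MPI_Finalize();
        return 0;
    }

On a 24-core XE6 node a code of this shape would typically be launched with a few MPI tasks per node and several threads each, matching the "6 threads / 4 tasks per node" sweet spot reported on the slide.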

  14. PFARM

  15. Atomic, Molecular and Optical Physics
     Electron and photon collisions with atoms and ions
     Applications in ...
     Astrophysics: understanding the scattering and excitation processes which power light emission from nebulae
     Lasers: the exciting new field of high-powered lasers. Short, very high intensity pulses of light can blow atoms apart; this process could one day be used to control the outcome of chemical reactions, among the many applications envisaged.

  16. External region code EXAS on XT4 and XE6
     FeIII scattering case involving 21080 scattering energies
     Timing reveals that initialization costs increase markedly on the XE6 and grow with core count
     Subject for future optimization
     [Figure: time in seconds (0 to 2500) vs. number of cores (2048 to 16384) on the XT4 and XE6.]

  17. Internal region code RAD on XT4 and XE6
     Electron-oxygen atom scattering case
     OpenMP utilized for up to 6 threads per task (XE6), 4 threads per task (XT4)
     Subject for current optimization project; initial improvement shown
     XE6 slower than XT4 by the clock ratio 2.1/2.3, e.g. at 3 threads
     [Figure: time in seconds (0 to 2500) vs. number of threads (1 to 6) for the XT4, XE6 and XE6 (initial).]
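
A quick check of the clock-ratio argument (the percentage is my own arithmetic, not a figure from the slide):

    \[
    \frac{t_{\mathrm{XE6}}}{t_{\mathrm{XT4}}} \approx \frac{2.3\ \mathrm{GHz}}{2.1\ \mathrm{GHz}} \approx 1.10,
    \]

i.e. roughly 10% longer runtimes on the XE6 at equal thread counts for clock-bound work.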

  18. POLCOMS

  19. High-Resolution Coastal Ocean Modelling
     POLCOMS is the finest resolution model to date to simulate the circulation, temperature and salinity of the Northwest European continental shelf
     Important for understanding the transport of nutrients, pollutants and dissolved carbon around shelf seas
     We have worked with POL on coupling with ERSEM, WAM and CICE, on data assimilation, and on optimisation for HPC platforms
     [Figure: summer surface temperature, 2 km resolution.]

  20. Coupled Marine Ecosystem Model
     [Figure: schematic of the coupled marine ecosystem model, linking the physical model, pelagic ecosystem model and benthic model; labelled elements include irradiation, heat flux, cloud cover, wind stress, river inputs, temperature (°C), C/N/P/Si fluxes, sediments and the open boundary.]

  21. POLCOMS Halo Exchange on XT4, XT6, XE6
     Exchange performance (higher is better); pure MPI, one task per core:

     Operation   XT4 (240)   XT6 (240)   XE6 (240)   XT4 (360)   XT6 (360)   XE6 (360)
     2D               6818        2700       36913        7366        2272       30628
     3D               3273        1174        6451        3841        1229        7552
     Mixed-D          3250        1171        6032        3670        1194        7292

     2D latency limited: XE6/XT6 10x speed-up
     3D bandwidth limited: XE6/XT6 5x speed-up
     XT6 performance poor: the network is poorly matched to 24-way nodes
     XE6 MUCH improved
     See the paper for multi-core aware partitioning (Pickles, CUG 2010)
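
The latency-limited vs. bandwidth-limited distinction can be read off the usual simple message-cost model (a textbook approximation, not taken from the paper):

    \[
    t_{\mathrm{msg}} \approx \alpha + \frac{m}{\beta},
    \]

where \alpha is the per-message latency, \beta the sustained bandwidth and m the message size. A 2D halo exchanges only a thin strip of boundary points, so m is small and the \alpha term dominates; a 3D or mixed-D halo exchanges that strip for every vertical level, so m is much larger and the m/\beta term dominates. The larger XE6/XT6 gain for the 2D exchange is consistent with Gemini's biggest improvement over SeaStar being in message latency.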

  22. ScaLAPACK

  23. Subset of LAPACK routines redesigned for distributed memory MIMD parallel computers
     Widely used in a range of STFC applications, including PRMAT, CRYSTAL, GAMESS-UK, KPPW and CASTEP, which are dependent upon efficient parallel symmetric diagonalizations

  24. ScaLAPACK: Timings on XT4, XT6, XE6
     Timings for parallel PDSYEVD-based eigensolves for the CRYSTAL 20480 matrix on the Cray XT4, XT6 and XE6 platforms
     "mp" indicates hybrid multi-threading with 2 MPI tasks per 12-core processor
     XE6 50% faster on 6144 cores; XT6/XE6 42% faster on 3072 cores
     XE6 faster at high core counts (lower MPI overheads) and for hybrid execution
     [Figure: time in seconds (0 to 100) vs. number of cores (384 to 6144) for XT4 libsci, XT6 libsci_mc12_mp, XE6 libsci and XE6 libsci_mc12_mp.]
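
The slides do not show the driver that calls ScaLAPACK; the C sketch below illustrates a PDSYEVD-based eigensolve in the general style being timed. It assumes a reference ScaLAPACK/BLACS installation whose Fortran routines are callable from C with trailing underscores (Cblacs_*, descinit_, numroc_, pdsyevd_); the matrix size, block size and test matrix are invented for the example.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* BLACS / ScaLAPACK prototypes, assuming the usual C interface to BLACS
       and trailing-underscore Fortran symbols for ScaLAPACK itself. */
    void Cblacs_get(int icontxt, int what, int *val);
    void Cblacs_gridinit(int *icontxt, const char *order, int nprow, int npcol);
    void Cblacs_gridinfo(int icontxt, int *nprow, int *npcol, int *myrow, int *mycol);
    void Cblacs_gridexit(int icontxt);
    int  numroc_(const int *n, const int *nb, const int *iproc, const int *isrcproc, const int *nprocs);
    void descinit_(int *desc, const int *m, const int *n, const int *mb, const int *nb,
                   const int *irsrc, const int *icsrc, const int *ictxt, const int *lld, int *info);
    void pdsyevd_(const char *jobz, const char *uplo, const int *n,
                  double *a, const int *ia, const int *ja, const int *desca,
                  double *w, double *z, const int *iz, const int *jz, const int *descz,
                  double *work, const int *lwork, int *iwork, const int *liwork, int *info);

    int main(int argc, char **argv)
    {
        int rank, npes, ctxt, nprow = 2, npcol = 2, myrow, mycol;
        int n = 1000, nb = 64, izero = 0, ione = 1, info;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &npes);
        if (npes != nprow * npcol) {                  /* sketch assumes exactly 4 tasks */
            if (rank == 0) fprintf(stderr, "run with %d MPI tasks\n", nprow * npcol);
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        Cblacs_get(-1, 0, &ctxt);                     /* default system context */
        Cblacs_gridinit(&ctxt, "Row", nprow, npcol);  /* 2 x 2 process grid */
        Cblacs_gridinfo(ctxt, &nprow, &npcol, &myrow, &mycol);

        /* Local extents of the block-cyclically distributed n x n matrix. */
        int mloc = numroc_(&n, &nb, &myrow, &izero, &nprow);
        int nloc = numroc_(&n, &nb, &mycol, &izero, &npcol);
        int lld  = mloc > 1 ? mloc : 1;

        int desc[9];
        descinit_(desc, &n, &n, &nb, &nb, &izero, &izero, &ctxt, &lld, &info);

        double *a = malloc((size_t)mloc * nloc * sizeof(double));
        double *z = malloc((size_t)mloc * nloc * sizeof(double));
        double *w = malloc((size_t)n * sizeof(double));

        /* Fill the local pieces of a simple test matrix: diag(1, 2, ..., n). */
        for (int jl = 0; jl < nloc; ++jl) {
            int jg = (jl / nb) * npcol * nb + mycol * nb + jl % nb;   /* global column */
            for (int il = 0; il < mloc; ++il) {
                int ig = (il / nb) * nprow * nb + myrow * nb + il % nb;
                a[il + (size_t)jl * lld] = (ig == jg) ? ig + 1.0 : 0.0;
            }
        }

        /* Workspace query, then the parallel eigensolve itself. */
        double wkopt; int iwkopt, lwork = -1, liwork = -1;
        pdsyevd_("V", "U", &n, a, &ione, &ione, desc, w, z, &ione, &ione, desc,
                 &wkopt, &lwork, &iwkopt, &liwork, &info);
        lwork = (int)wkopt; liwork = iwkopt;
        double *work  = malloc((size_t)lwork * sizeof(double));
        int    *iwork = malloc((size_t)liwork * sizeof(int));
        pdsyevd_("V", "U", &n, a, &ione, &ione, desc, w, z, &ione, &ione, desc,
                 work, &lwork, iwork, &liwork, &info);

        if (rank == 0)
            printf("pdsyevd info = %d, smallest eigenvalue = %g\n", info, w[0]);

        free(a); free(z); free(w); free(work); free(iwork);
        Cblacs_gridexit(ctxt);
        MPI_Finalize();
        return 0;
    }

In the "mp" runs on the slide, a driver like this would be linked against the threaded libsci (libsci_mc12_mp) and launched with 2 MPI tasks per 12-core processor, leaving the remaining parallelism to threads inside the library.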

  25. Telemac

  26. Telemac: free surface flows
     The software suite Telemac, dedicated to free surface flows, has seen growing success since 1993 and has been widely distributed throughout the world, with more than 200 licences and several hundreds of users
     Example: simulation of the Malpasset dam break flood wave in 1959, with a 26000 element mesh (the run, 1000 time steps of 4 s, takes 10 s on an 8-core desktop computer)

  27. Telemac: time to solution
     Model from a study of the impact of fresh water released from a hydro-electric power plant into the Berre lagoon (in the south of France)
     3-D model based on 0.4 M 2-D triangles
     31 layers would yield 12 M triangles
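
The 12 M figure is simply the 2-D mesh replicated over the vertical layers (my arithmetic, rounded as on the slide):

    \[
    0.4\ \mathrm{M\ triangles} \times 31\ \mathrm{layers} \approx 12\ \mathrm{M\ triangles}.
    \]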

  28. Telemac: Craypat breakdown
     Craypat breakdown of time between USER, MPI_SYNC and MPI functions
     Cray XT4: 256, 512, 1024, 2048 and 4096 cores
     Cray XE6: 384, 512, 768, 1024, 1536, 2048, 3072 and 4096 cores
     Note the different scales on the two charts
     [Figure: time in seconds per category (1 USER, 2 MPI_SYNC, 3 MPI) for each core count; XT4 axis up to 1600 s, XE6 axis up to 1000 s.]

  29. WRF

  30. WRF Weather Model
     Craypat timings for the Great North Run, a nested model of three grids
     Great North Run nested grids for regional climate modelling: 69k, 103k and 128k points respectively
