

  1. The Integrative Role of COW's and Supercomputers in Research and Education Activities
  Don Morton, Ganesh Prabu, Daniel Sandholdt, Lee Slater
  Department of Computer Science, The University of Montana

  2. Introduction
  • Thesis - workstation clusters and supercomputers can be used together in environments that benefit everyone
  • COW's (e.g. Beowulf) - training and development activities in HPC
  • Supercomputers (e.g. Cray T3E) - large-scale production runs

  3. Acknowledgements
  • Arctic Region Supercomputing Center
  • SGI/CRI
  • National Science Foundation
  • Pallas

  4. Outline
  • Background
  • Current Computing Environments
  • Case Study - Parallel Programming Course
  • Research and Development Activities
  • COW/Supercomputer Integration Issues
  • Conclusions

  5. Background
  • 1991 - 80486, Linux
  • 1993-94 - PVM, RS6000, T3D
  • 1994-97 - Cameron University, ARSC
  • 1997-Present - U. Montana, ARSC

  6. Current Computing Environments
  [Network diagram, UM Scientific Computing Lab: nodes p1-p8, 100BaseT hub, LittleHouse.prairie.edu, frontend.scinet.prairie.edu, elk.prairie.edu, scinet.prairie.edu, 10BaseT hub, Internet]

  7. Case Study - Parallel Programming Course
  • Graduate (masters) course
  • Goals
    – Hands-on experience using common, portable programming tools
    – Explore the concept of training on COW's, then moving to supercomputers

  8. Parallel Programming Course Outline
  • Discuss basic concepts of parallel programming
  • Implement a solution to the n-body problem with PVM, then MPI, then HPF (see the MPI sketch after this list)
  • Introduce performance analysis tools
  • Lab session based on Linux/T3E portability issues
  • Special projects
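The slides do not reproduce the course's n-body code; the following is only a minimal sketch of the SPMD pattern such an MPI exercise typically follows: every process gathers all body positions and masses, then computes the forces on the bodies it owns. The problem size, 1D positions, and toy initial conditions are illustrative assumptions, not material from the presentation.

    /* Hedged sketch of a 1D gravitational n-body force step in MPI (C).      */
    #include <mpi.h>
    #include <math.h>
    #include <stdlib.h>

    #define N 1024                      /* total number of bodies (assumed)   */
    #define G 6.67e-11                  /* gravitational constant             */

    int main(int argc, char **argv)
    {
        int rank, nprocs, i, j, nlocal, gi;
        double *x, *m, *f, *xall, *mall;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        nlocal = N / nprocs;                     /* assumes N divisible by P   */
        x    = malloc(nlocal * sizeof(double));  /* my bodies' positions (1D)  */
        m    = malloc(nlocal * sizeof(double));  /* my bodies' masses          */
        f    = malloc(nlocal * sizeof(double));  /* forces on my bodies        */
        xall = malloc(N * sizeof(double));       /* all positions, gathered    */
        mall = malloc(N * sizeof(double));       /* all masses, gathered       */

        for (i = 0; i < nlocal; i++) {           /* toy initial conditions     */
            x[i] = (double)(rank * nlocal + i);
            m[i] = 1.0;
        }

        /* Every process obtains every body's position and mass ...            */
        MPI_Allgather(x, nlocal, MPI_DOUBLE, xall, nlocal, MPI_DOUBLE, MPI_COMM_WORLD);
        MPI_Allgather(m, nlocal, MPI_DOUBLE, mall, nlocal, MPI_DOUBLE, MPI_COMM_WORLD);

        /* ... then computes the force on each body it owns (O(N*N/P) work)    */
        for (i = 0; i < nlocal; i++) {
            gi = rank * nlocal + i;              /* global index of local body */
            f[i] = 0.0;
            for (j = 0; j < N; j++) {
                if (j == gi) continue;
                double r = xall[j] - xall[gi];   /* signed separation          */
                f[i] += G * mall[gi] * mall[j] * r / (fabs(r) * r * r);
            }
        }

        free(x); free(m); free(f); free(xall); free(mall);
        MPI_Finalize();
        return 0;
    }

Because only standard MPI-1 calls appear, the same source should compile on both the cluster and the T3E, consistent with the later observation that the MPI codes ported easily.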

  9. Lab Session - Linux/T3E
  • Port Linux PVM n-body code to T3E PVM
  • Port Linux MPI n-body code to T3E MPI
  • Vampir analysis of MPI n-body code
  • Performance modeling and analysis of an MPI Jacobi program on the T3E (a halo-exchange sketch follows this list)
  • Analysis and improvement of an MPI code
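The slides give no source for the Jacobi exercise either; below is a hedged sketch, assuming a 1D row decomposition and a fixed iteration count, of the ghost-row exchange pattern whose communication and computation costs such a performance model would capture. Grid size, iteration count, and boundary treatment are assumptions.

    /* Hedged sketch of an MPI Jacobi iteration with ghost-row exchange (C).   */
    #include <mpi.h>
    #include <stdlib.h>

    #define NX    512                  /* global grid is NX x NX (assumed)     */
    #define ITERS 100                  /* fixed iteration count (assumed)      */

    int main(int argc, char **argv)
    {
        int rank, nprocs, iter, i, j, nrows, up, down;
        double (*u)[NX], (*unew)[NX], (*tmp)[NX];
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        nrows = NX / nprocs;                     /* rows owned by this process */
        up    = (rank == 0)          ? MPI_PROC_NULL : rank - 1;
        down  = (rank == nprocs - 1) ? MPI_PROC_NULL : rank + 1;

        /* local rows plus one ghost row above and below, initialised to zero  */
        u    = calloc(nrows + 2, sizeof *u);
        unew = calloc(nrows + 2, sizeof *unew);

        for (iter = 0; iter < ITERS; iter++) {
            /* halo exchange: send first/last owned rows, fill ghost rows      */
            MPI_Sendrecv(u[1],         NX, MPI_DOUBLE, up,   0,
                         u[nrows + 1], NX, MPI_DOUBLE, down, 0,
                         MPI_COMM_WORLD, &status);
            MPI_Sendrecv(u[nrows],     NX, MPI_DOUBLE, down, 1,
                         u[0],         NX, MPI_DOUBLE, up,   1,
                         MPI_COMM_WORLD, &status);

            /* 5-point Jacobi update over the locally owned interior points    */
            for (i = 1; i <= nrows; i++)
                for (j = 1; j < NX - 1; j++)
                    unew[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] +
                                         u[i][j-1] + u[i][j+1]);

            tmp = u; u = unew; unew = tmp;       /* swap old and new solutions */
        }

        free(u); free(unew);
        MPI_Finalize();
        return 0;
    }

A generic per-iteration cost model, for example roughly 2(ts + tw*NX) for the two row exchanges plus c*NX*NX/P for the update, can then be fitted against measured timings; the slides do not specify which model the lab actually used.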

  10. Linux PVM to T3E PVM
  • Network PVM and Cray MPP PVM have significant differences
    – Heterogeneous vs. homogeneous SPMD
    – Dynamic vs. static task allocation
    – Cray-specific PVM calls
    – Need to be aware of different datatype sizes
  • Portable codes must be written in SPMD style, with conditional compilation (see the next slide)

  11. Conditional Compilation for Portable PVM

    #ifdef _CRAYMPP
    // In Cray MPP PVM, the "global" group is indicated by a null pointer
    #define GROUPNAME (char *) 0
    #else
    #define GROUPNAME "alltasks"
    #endif
    ......
    #ifdef _CRAYMPP
    // Cray MPP PVM does not support joining a "global" group, so we simply
    // use the Cray-specific routine for getting the PE number
    mype = pvm_get_PE(mytid);
    #else
    mype = pvm_joingroup(GROUPNAME);
    #endif
    ........
    #ifndef _CRAYMPP
    // This is not executed for Cray MPP PVM - pvm_spawn() is not
    // implemented - all tasks start up SPMD at the beginning
    if (mype == 0)   // I'm the master, spawn the others
        info = pvm_spawn(argv[0], (char**) 0, PvmTaskDefault, (char*) 0,
                         ntasks - 1, &tid_list[1]);
    #endif

  12. Comments on Porting PVM and MPI Codes
  • PVM is difficult to port until the network vs. Cray MPP differences are understood
  • MPI ports easily
  • Cray MPP is less forgiving of programmer errors than other systems
  • In general, experienced students found the transition from Linux to the T3E straightforward

  13. Performance Analysis
  • Use of Vampir as a common tool
    – Vampirtrace - library of routines for generating tracefiles
    – Vampir - viewer for looking at tracefiles

  14. Vampir

  15. Special Projects
  • Conversion of C++ MPI Jacobi program to Fortran
  • Conversion of C++ MPI Jacobi program to C++ PVM
  • Porting of Linux C++ parallel finite element code to T3E

  16. Porting of Linux C++ Parallel Finite Element Code to T3E
  [Charts: linear diffusion wall time (sec) vs. number of processors P on the Cray T3E and on SCINET, for problem sizes N = 120, 240, 360, 480, and 600 elements]

  17. Research and Development Activities
  • Parallel, adaptive finite element methods
  • Parallelisation of hydrologic model for arctic ecosystems
  • Coupling of parallel thermal and hydrologic models

  18. Parallel, Adaptive Finite Element Methods
  [Figures: heterogeneous absolute permeabilities; homogeneous absolute permeabilities]

  19. 3D Isosurface (Oil/Water Interface)

  20. Timings - Linux Cluster
  Wall time (seconds) required for a single timestep with 4548 unknowns.
  • 100 MHz Pentiums
  • 100 Mb/s Fast Ethernet
  [Chart: seconds (log scale) for mesh distribution, mesh modification, and distributed solution on 4 and 8 processors; labelled values include 1130.5 and 209.7 seconds]

  21. Timings - Cray T3E
  Wall time (seconds) required for a single timestep with 4548 unknowns.
  [Chart: seconds (log scale) for mesh distribution, mesh modification, and distributed solution on 4, 8, 16, and 32 processors; labelled values include 165.7, 26.1, 4.5, and 4.2 seconds]

  22. Parallelisation of Hydrologic Model

  23. Time Measurements
  • 6448 elements
  • Use of MPI+METIS+Shmem on the Cray, MPI+METIS on Linux (a hedged METIS sketch follows this slide)
  [Chart: wall time (seconds) for a single timestep on the T3D, T3E, and Linux cluster using 2, 4, 8, and 16 processors]
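The presentation does not show the partitioning step, only that METIS is used alongside MPI (and SHMEM on the Cray). Purely as a hedged illustration of that step, the sketch below partitions a tiny graph with the METIS 4.x C routine METIS_PartGraphRecursive; the example graph, option values, and part count are assumptions, not the authors' code, and a real run would build the graph from the finite element mesh.

    /* Hedged sketch: splitting a small CSR graph into 2 parts with METIS 4.x. */
    #include <stdio.h>
    #include <metis.h>                   /* METIS 4.x header; defines idxtype  */

    int main(void)
    {
        int nvtxs = 5, nparts = 2, wgtflag = 0, numflag = 0, edgecut;
        int options[5] = {0, 0, 0, 0, 0};        /* options[0]=0 -> defaults   */

        /* CSR adjacency for the chain graph 0-1-2-3-4 (illustrative only)     */
        idxtype xadj[]   = {0, 1, 3, 5, 7, 8};
        idxtype adjncy[] = {1, 0, 2, 1, 3, 2, 4, 3};
        idxtype part[5];

        METIS_PartGraphRecursive(&nvtxs, xadj, adjncy, NULL, NULL,
                                 &wgtflag, &numflag, &nparts, options,
                                 &edgecut, part);

        printf("edgecut = %d\n", edgecut);       /* edges cut by the partition */
        return 0;
    }

Each MPI process would then take ownership of the elements whose part[] entry matches its rank; how the SHMEM path on the T3D/T3E replaces MPI data movement is not detailed in the slides.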

  24. Coupling of Thermal and Hydro Models
  • Background - previously existing hydro and thermal models
  • Benefits of coupling - increased detail, capturing the feedback loops inherent in arctic ecosystems

  25. Coupled Models

  26. MPI Inter-communicators
  [Diagram: process groups within MPI_COMM_WORLD connected by an inter-communicator]
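As a hedged illustration of the mechanism the diagram depicts (not the authors' coupling code), the sketch below splits MPI_COMM_WORLD into two halves standing in for the thermal and hydro models and joins them with MPI_Intercomm_create; the group sizes, tag, and exchanged value are assumptions.

    /* Hedged sketch: two model groups inside MPI_COMM_WORLD coupled through   */
    /* an MPI inter-communicator.  Assumes at least two processes.             */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int world_rank, world_size, color, remote_leader;
        MPI_Comm local_comm, intercomm;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
        MPI_Comm_size(MPI_COMM_WORLD, &world_size);

        /* first half of MPI_COMM_WORLD plays "thermal", second half "hydro"   */
        color = (world_rank < world_size / 2) ? 0 : 1;
        MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &local_comm);

        /* local leader is rank 0 of each group; the remote leader is the      */
        /* other group's first rank within the peer communicator               */
        remote_leader = (color == 0) ? world_size / 2 : 0;
        MPI_Intercomm_create(local_comm, 0, MPI_COMM_WORLD, remote_leader,
                             99, &intercomm);

        /* the two leaders exchange one coupling value across the groups       */
        if (world_rank == 0) {                       /* thermal leader         */
            double t_surface = 271.3;                /* illustrative value     */
            MPI_Send(&t_surface, 1, MPI_DOUBLE, 0, 0, intercomm);
        } else if (world_rank == world_size / 2) {   /* hydro leader           */
            double t_surface;
            MPI_Recv(&t_surface, 1, MPI_DOUBLE, 0, 0, intercomm, &status);
            printf("hydro received surface value %.1f\n", t_surface);
        }

        MPI_Comm_free(&intercomm);
        MPI_Comm_free(&local_comm);
        MPI_Finalize();
        return 0;
    }

Inside an inter-communicator, ranks are addressed relative to the remote group, so each leader simply sends to and receives from rank 0 of the other model; this is what lets two separately written SPMD codes be coupled without merging their communicators.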

  27. Non-coupled vs. Coupled Simulation

  28. COW/Supercomputer Integration Issues
  • Code written on COW's should run on the T3E, and vice versa
  • Integration should focus on creating similar programming environments
    – Users should be able to run programs identically on COW's and supercomputers
    – Scripts (mostly on the COW side) can aid in this

  29. COW/Supercomputer Integration Issues (continued)
  • Portable analysis tools (e.g. Vampir, pgprof)
  • Affordable, portable, integrated debuggers (TotalView?)

  30. Conclusions
  • COW's and supercomputers have complementary roles in HPC
  • Local COW's are an ideal training and development platform
  • Supercomputers will always be needed
  • Increased use of COW's for training and development should result in more HPC experts, and greater demand for supercomputers
