A View on MPI's Recent Past, Present, and Future
25 Years of MPI Symposium, Argonne National Lab/EuroMPI/USA Conference, Chicago, IL
Torsten Hoefler


  1. 25 Years of MPI Symposium: A View on MPI's Recent Past, Present, and Future. Argonne National Lab/EuroMPI/USA Conference, Chicago, IL. Torsten Hoefler (on behalf of nobody: neither my institution, nor myself, nor the MPI collectives WG!) “Abstract is good, but ... a bit much like a technical talk?” Thanks for organizing this!

  2. My personal journey with MPI. [Figure: map of disposable income distribution, binned from <17k to <33k]

  3. Nonblocking collective operations – first discussed in MPI-1! ▪ MPI_I<collective>(args, MPI_Request *req); [Figure: speedup for Jacobi/CG with nonblocking collectives, from EuroPVM/MPI'06, six years later, and ten years later]
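
A minimal sketch of what this interface enables (not from the slides; MPI-3 syntax, with the overlap region left as a placeholder): a nonblocking allreduce is started, independent work proceeds, and the result is collected with MPI_Wait.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double local = (double)rank, sum = 0.0;
        MPI_Request req;
        /* Start the reduction; it can progress in the background. */
        MPI_Iallreduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM,
                       MPI_COMM_WORLD, &req);
        /* ... independent computation overlaps with communication here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        if (rank == 0) printf("sum = %.1f\n", sum);
        MPI_Finalize();
        return 0;
    }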

  4. But wait, nonblocking barriers, seriously? … turns out to be very useful after all: Dynamic Sparse Data Exchange with MPI_Issend() + MPI_Ibarrier()
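
The protocol behind this pairing is worth spelling out (a sketch in the spirit of the slide, not taken from it; the helper name sparse_exchange and the single-int payload are assumptions): each rank posts synchronous sends to the destinations it knows, drains whatever arrives, and joins a nonblocking barrier once its own sends complete. Because MPI_Issend completes only after the receiver matches it, a completed barrier means no message is still in flight.

    #include <mpi.h>
    #include <stdlib.h>

    /* Hypothetical helper: each rank knows only its own destinations. */
    void sparse_exchange(const int *dests, int ndests, int payload, MPI_Comm comm) {
        MPI_Request *sreqs = malloc(ndests * sizeof(MPI_Request));
        for (int i = 0; i < ndests; i++)
            /* Synchronous send: completion implies the receiver matched it. */
            MPI_Issend(&payload, 1, MPI_INT, dests[i], 0, comm, &sreqs[i]);

        MPI_Request barrier = MPI_REQUEST_NULL;
        int done = 0;
        while (!done) {
            int flag;
            MPI_Status st;
            MPI_Iprobe(MPI_ANY_SOURCE, 0, comm, &flag, &st);
            if (flag) {
                int msg;
                MPI_Recv(&msg, 1, MPI_INT, st.MPI_SOURCE, 0, comm,
                         MPI_STATUS_IGNORE);
                /* ... process the received message ... */
            }
            if (barrier == MPI_REQUEST_NULL) {
                int sent;
                MPI_Testall(ndests, sreqs, &sent, MPI_STATUSES_IGNORE);
                if (sent) /* All local sends delivered: join the barrier. */
                    MPI_Ibarrier(comm, &barrier);
            } else {
                /* The barrier completes only once every rank has joined,
                   so no messages can remain undelivered. */
                MPI_Test(&barrier, &done, MPI_STATUS_IGNORE);
            }
        }
        free(sreqs);
    }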

  5. Neighborhood Collectives ▪ Just datatypes for collectives – default collectives are “contiguous”, neighbor collectives are user-defined. 1994 → 2004, 2004 → 2014: we need to focus on optimizing MPI-3 now (similar issues for RMA etc.)
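
As a sketch of how these are used (not from the slides; the 2D periodic grid and the one-value-per-neighbor payload are assumptions), a halo exchange becomes a single collective over a Cartesian topology, which also declares the communication pattern to MPI up front:

    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int size;
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int dims[2] = {0, 0}, periods[2] = {1, 1};
        MPI_Dims_create(size, 2, dims); /* factor ranks into a 2D grid */

        MPI_Comm cart;
        MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &cart);

        /* A 2D Cartesian topology has 4 neighbors: one value goes to and
           comes from each of them in a single call. */
        double sendbuf[4] = {1, 2, 3, 4}, recvbuf[4];
        MPI_Neighbor_alltoall(sendbuf, 1, MPI_DOUBLE,
                              recvbuf, 1, MPI_DOUBLE, cart);

        MPI_Finalize();
        return 0;
    }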

  6. State of MPI today – programming has changed dramatically. [Figure: programming until 10 years ago vs. today's programming] And the domain scientists?

  7. HPC community codes towards the end of Moore's law (i.e., the age of acceleration): '07: Fortran + MPI; '12: Fortran + MPI + C++ (DSL) + CUDA; '13: Fortran + MPI + C++ (DSL) + CUDA + OpenACC; '??: Fortran + MPI + C++ (DSL) + CUDA + OpenACC + XXX. What's with the MPI community, and how can we help?

  8. MPI's own Innovator's Dilemma: MPI+X? Data-centric parallel programming – turn MPI's principles into a language! Replace MPI? Rethink MPI! ▪ We should have a bold research strategy to go forward! Examples: Distributed CUDA: run MPI right on your GPU (SC'16); streaming Processing in the Network: CUDA for network cards (SC'17); MPI for Big Data: distributed join algorithms on thousands of cores (VLDB'17)

  9. Let's move MPI to new heights! Torsten. https://spcl.inf.ethz.ch/Jobs/
