Introduction to Parallel Computing


  1. Introduction to Parallel Computing • Irene Moulitsas • Programming using the Message-Passing Paradigm

  2. MPI Background • MPI: Message Passing Interface • Began at Supercomputing ’92 • Vendors: IBM, Intel, Cray • Library writers: PVM • Application specialists: National Laboratories, Universities

  3. Why MPI? • One of the oldest message-passing libraries • Widespread adoption; portable • Minimal requirements on the underlying hardware • Explicit parallelization: intellectually demanding, but achieves high performance and scales to large numbers of processors

  4. MPI Programming Structure • Asynchronous: hard to reason about; non-deterministic behavior • Loosely synchronous: processes synchronize to perform interactions; easier to reason about • SPMD: Single Program, Multiple Data

  5. MPI Features • Communicator information • Point-to-point communication • Collective communication • Topology support • Error handling

  6. Six Golden MPI Functions • The full MPI standard defines about 125 functions • Many complete programs need only six of them: MPI_Init, MPI_Finalize, MPI_Comm_size, MPI_Comm_rank, MPI_Send, MPI_Recv

  7. MPI Functions: Initialization • MPI_Init must be called by all processes before any other MPI function • Returns MPI_SUCCESS on success • Prototypes are declared in “mpi.h”

  8. MPI Functions: Communicator • Communicators have type MPI_Comm • MPI_COMM_WORLD is the predefined communicator containing all processes

  9. Hello World!

  10. Hello World! (correct)
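
A minimal sketch of what the corrected Hello World of slides 9–10 presumably looks like: MPI_Init comes before any other MPI call, MPI_Finalize comes last, and each process queries its rank and the communicator size so the greeting identifies its sender.

      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char *argv[]) {
          int rank, size;
          MPI_Init(&argc, &argv);                /* must precede all other MPI calls */
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id: 0 .. size-1   */
          MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of processes in the job   */
          printf("Hello World! I am %d of %d\n", rank, size);
          MPI_Finalize();                        /* must be the last MPI call        */
          return 0;
      }

With most MPI installations this is built with mpicc and launched with mpirun (for example, mpirun -np 4 ./hello), though the exact commands vary.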

  11. MPI Functions: Send, Recv • The source argument may name a rank or use the wildcard MPI_ANY_SOURCE • The returned MPI_Status records the MPI_SOURCE, MPI_TAG, and MPI_ERROR of the message actually received (see the sketch after slide 12)

  12. MPI Functions: Datatypes • Each MPI datatype mirrors a C type (MPI_CHAR, MPI_INT, MPI_FLOAT, MPI_DOUBLE, ...); MPI_BYTE and MPI_PACKED have no C equivalent
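
A hedged sketch tying slides 11–12 together, assuming at least two processes: rank 1 sends one MPI_INT (the MPI datatype matching a C int) and rank 0 receives with the MPI_ANY_SOURCE wildcard, then reads the actual sender and tag out of the MPI_Status.

      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char *argv[]) {
          int rank, value;
          MPI_Status status;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          if (rank == 1) {
              value = 42;
              MPI_Send(&value, 1, MPI_INT, 0, 7, MPI_COMM_WORLD);  /* dest 0, tag 7 */
          } else if (rank == 0) {
              /* wildcard source and tag; the status records what actually arrived */
              MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                       MPI_COMM_WORLD, &status);
              printf("got %d from rank %d, tag %d\n",
                     value, status.MPI_SOURCE, status.MPI_TAG);
          }
          MPI_Finalize();
          return 0;
      }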

  13. Send/Receive Examples

  14. Blocking Non-Buffered Communication • The send completes only when the matching receive has been posted (a handshake), so mismatched ordering can idle processes or deadlock; a sketch follows slide 17

  15. Send/Receive Examples

  16. Blocking Buffered Communication • The send copies the message into a buffer and returns without waiting for the receiver, trading copying overhead for less idling; deadlock remains possible once buffer space is exhausted

  17. Send/Receive Examples
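
To make slides 14–17 concrete, here is a sketch (mine, not from the deck) of the classic pitfall: under blocking non-buffered semantics, two partners that both send first can deadlock, and buffered sends only postpone the failure until buffer space runs out. The pairing assumes an even number of processes.

      #include <stdio.h>
      #include <mpi.h>

      #define N 1024

      int main(int argc, char *argv[]) {
          int rank, size, partner;
          int out[N] = {0}, in[N];
          MPI_Status status;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);
          partner = rank ^ 1;     /* pair 0<->1, 2<->3, ... (size assumed even) */

          /* UNSAFE: if both partners block in MPI_Send waiting for a matching
             receive (non-buffered semantics), neither reaches MPI_Recv:
             MPI_Send(out, N, MPI_INT, partner, 0, MPI_COMM_WORLD);
             MPI_Recv(in,  N, MPI_INT, partner, 0, MPI_COMM_WORLD, &status);  */

          /* SAFE: order the calls so one side of each pair receives first.   */
          if (rank % 2 == 0) {
              MPI_Send(out, N, MPI_INT, partner, 0, MPI_COMM_WORLD);
              MPI_Recv(in,  N, MPI_INT, partner, 0, MPI_COMM_WORLD, &status);
          } else {
              MPI_Recv(in,  N, MPI_INT, partner, 0, MPI_COMM_WORLD, &status);
              MPI_Send(out, N, MPI_INT, partner, 0, MPI_COMM_WORLD);
          }
          MPI_Finalize();
          return 0;
      }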

  18. MPI Functions: SendRecv • MPI_Sendrecv performs a send and a receive in a single call, letting the library schedule the two so that paired exchanges cannot deadlock
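
A small sketch of a ring shift with MPI_Sendrecv: each process passes its rank to the right-hand neighbor and receives from the left, with no manual ordering needed.

      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char *argv[]) {
          int rank, size, left, right, out, in;
          MPI_Status status;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);
          right = (rank + 1) % size;            /* neighbor to send to        */
          left  = (rank - 1 + size) % size;     /* neighbor to receive from   */
          out = rank;
          /* one deadlock-free call: send right, receive from the left */
          MPI_Sendrecv(&out, 1, MPI_INT, right, 0,
                       &in,  1, MPI_INT, left,  0, MPI_COMM_WORLD, &status);
          printf("rank %d received %d\n", rank, in);
          MPI_Finalize();
          return 0;
      }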

  19. MPI Functions: ISend, IRecv • Non-blocking: the call starts the transfer and returns immediately • Completion is tracked through an MPI_Request handle

  20. MPI Functions: Test, Wait • MPI_Test checks, without blocking, whether the operation has finished • MPI_Wait blocks until the operation has finished
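
A sketch of the non-blocking pattern of slides 19–20, again assuming an even process count: both transfers are started immediately, useful work could overlap them, and MPI_Waitall (an array variant of MPI_Wait) blocks until both requests complete. The buffers must not be touched until then.

      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char *argv[]) {
          int rank, out, in;
          MPI_Request reqs[2];
          MPI_Status  stats[2];
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          int partner = rank ^ 1;               /* assumes an even process count */
          out = rank * 100;
          /* both calls return immediately; no deadlock regardless of ordering  */
          MPI_Irecv(&in,  1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[0]);
          MPI_Isend(&out, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[1]);
          /* ... computation could overlap the communication here ...           */
          MPI_Waitall(2, reqs, stats);          /* block until both complete    */
          printf("rank %d got %d\n", rank, in);
          MPI_Finalize();
          return 0;
      }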

  21. Non-Blocking Non-Buffered Communication

  22. Example

  23. Example

  24. Example

  25. MPI Functions: Synchronization • MPI_Barrier returns only after every process in the communicator has entered it
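
A minimal barrier sketch: the "before" lines may interleave arbitrarily, but no "after" line can print until every process has reached the barrier.

      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char *argv[]) {
          int rank;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          printf("rank %d: before barrier\n", rank);  /* any order          */
          MPI_Barrier(MPI_COMM_WORLD);                /* all must arrive    */
          printf("rank %d: after barrier\n", rank);
          MPI_Finalize();
          return 0;
      }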

  26. Collective Communications • One-to-all broadcast • All-to-one reduction • All-to-all broadcast and reduction • All-reduce and prefix-sum • Scatter and gather • All-to-all personalized

  27. MPI Functions: Broadcast
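
A one-to-all broadcast sketch with MPI_Bcast: before the call only the root (rank 0 here) holds the value; afterwards every process does.

      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char *argv[]) {
          int rank, n = 0;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          if (rank == 0) n = 100;        /* only the root has the value ...   */
          MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);  /* ... now all do    */
          printf("rank %d: n = %d\n", rank, n);
          MPI_Finalize();
          return 0;
      }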

  28. MPI Functions: Scatter & Gather
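
A sketch in which the root deals one integer to each process with MPI_Scatter and collects the modified values back, in rank order, with MPI_Gather; the send-side arguments are significant only on the root.

      #include <stdio.h>
      #include <stdlib.h>
      #include <mpi.h>

      int main(int argc, char *argv[]) {
          int rank, size, *full = NULL, mine;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);
          if (rank == 0) {                       /* root owns the whole array */
              full = malloc(size * sizeof(int));
              for (int i = 0; i < size; i++) full[i] = i * i;
          }
          /* deal one element to each process */
          MPI_Scatter(full, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);
          mine += rank;                          /* local work on the piece   */
          /* collect the pieces back on the root, in rank order */
          MPI_Gather(&mine, 1, MPI_INT, full, 1, MPI_INT, 0, MPI_COMM_WORLD);
          if (rank == 0) {
              for (int i = 0; i < size; i++) printf("%d ", full[i]);
              printf("\n");
              free(full);
          }
          MPI_Finalize();
          return 0;
      }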

  29. MPI Functions: All Gather
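
MPI_Allgather behaves like MPI_Gather except that every process, not just a root, receives the concatenated result. A small sketch:

      #include <stdio.h>
      #include <stdlib.h>
      #include <mpi.h>

      int main(int argc, char *argv[]) {
          int rank, size, mine;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);
          int *all = malloc(size * sizeof(int));
          mine = rank * 10;
          /* like MPI_Gather, but every process gets the full result */
          MPI_Allgather(&mine, 1, MPI_INT, all, 1, MPI_INT, MPI_COMM_WORLD);
          printf("rank %d sees:", rank);
          for (int i = 0; i < size; i++) printf(" %d", all[i]);
          printf("\n");
          free(all);
          MPI_Finalize();
          return 0;
      }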

  30. MPI Functions: All-to-All Personalized
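
In the all-to-all personalized exchange each process sends a distinct message to every other process; in MPI this is MPI_Alltoall. A sketch:

      #include <stdio.h>
      #include <stdlib.h>
      #include <mpi.h>

      int main(int argc, char *argv[]) {
          int rank, size;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);
          int *sendbuf = malloc(size * sizeof(int));
          int *recvbuf = malloc(size * sizeof(int));
          for (int i = 0; i < size; i++)
              sendbuf[i] = rank * size + i;  /* a distinct value per peer     */
          /* sendbuf[i] goes to rank i; recvbuf[i] arrives from rank i */
          MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT,
                       MPI_COMM_WORLD);
          printf("rank %d received:", rank);
          for (int i = 0; i < size; i++) printf(" %d", recvbuf[i]);
          printf("\n");
          free(sendbuf); free(recvbuf);
          MPI_Finalize();
          return 0;
      }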

  31. MPI Functions: Reduction

  32. MPI Functions: Operations • Predefined MPI_Op reduction operations include MPI_MAX, MPI_MIN, MPI_SUM, MPI_PROD, MPI_LAND, MPI_LOR, MPI_BAND, MPI_BOR, MPI_MAXLOC, and MPI_MINLOC

  33. MPI Functions: All-reduce • Same as MPI_Reduce, except that every process receives the result of the MPI_Op operation
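
A sketch covering slides 31–33: MPI_Reduce combines one value per process with MPI_SUM and delivers the total to the root only, while MPI_Allreduce performs the same combine but delivers the total everywhere.

      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char *argv[]) {
          int rank, size, local, total = 0, total_all = 0;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);
          local = rank + 1;                   /* one contribution per process */
          /* the sum arrives only on the root (rank 0) */
          MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
          /* same combine, but the sum arrives on every process */
          MPI_Allreduce(&local, &total_all, 1, MPI_INT, MPI_SUM,
                        MPI_COMM_WORLD);
          if (rank == 0) printf("root total = %d\n", total); /* size*(size+1)/2 */
          printf("rank %d allreduce total = %d\n", rank, total_all);
          MPI_Finalize();
          return 0;
      }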

  34. MPI Functions: Prefix Scan
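
MPI_Scan computes an inclusive prefix operation: with MPI_SUM, rank k receives the sum of the contributions from ranks 0 through k. A sketch:

      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char *argv[]) {
          int rank, local, prefix;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          local = rank + 1;
          /* inclusive prefix sum: rank k gets local_0 + ... + local_k */
          MPI_Scan(&local, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
          printf("rank %d prefix sum = %d\n", rank, prefix);
          MPI_Finalize();
          return 0;
      }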

  35. MPI Names

  36. MPI Functions: Topology
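
A sketch of MPI's Cartesian topology support (the slide's specifics are not preserved here): MPI_Dims_create picks a balanced grid shape, MPI_Cart_create builds a communicator with that layout, and MPI_Cart_coords maps ranks to grid coordinates.

      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char *argv[]) {
          int rank, size, coords[2];
          int dims[2] = {0, 0}, periods[2] = {1, 1}; /* wrap-around edges    */
          MPI_Comm cart;
          MPI_Init(&argc, &argv);
          MPI_Comm_size(MPI_COMM_WORLD, &size);
          MPI_Dims_create(size, 2, dims);   /* factor size into a 2D grid    */
          /* reorder=1 lets MPI renumber ranks to match the hardware */
          MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart);
          MPI_Comm_rank(cart, &rank);       /* rank may differ in the new comm */
          MPI_Cart_coords(cart, rank, 2, coords);
          printf("rank %d is at (%d,%d) in a %dx%d grid\n",
                 rank, coords[0], coords[1], dims[0], dims[1]);
          MPI_Comm_free(&cart);
          MPI_Finalize();
          return 0;
      }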

  37. Performance Evaluation • Elapsed (wall-clock) time is the metric that matters for a parallel program; MPI provides MPI_Wtime to measure it
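
A common timing sketch: barriers bracket the measured region so all processes start together and the slowest one is included, MPI_Wtime returns elapsed wall-clock seconds, and MPI_Wtick reports the timer's resolution.

      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char *argv[]) {
          int rank;
          double t0, t1;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Barrier(MPI_COMM_WORLD);    /* start everyone together         */
          t0 = MPI_Wtime();               /* wall-clock time, in seconds     */
          /* ... code under measurement goes here ...                        */
          MPI_Barrier(MPI_COMM_WORLD);    /* wait for the slowest process    */
          t1 = MPI_Wtime();
          if (rank == 0) printf("elapsed: %f s (resolution %g s)\n",
                                t1 - t0, MPI_Wtick());
          MPI_Finalize();
          return 0;
      }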

  38. Matrix/Vector Multiply
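
One common way to parallelize slide 38's computation, sketched under assumptions of my own (a row-block distribution, with N divisible by the process count; the deck may use a different decomposition): each process stores N/p full rows of A and N/p entries of x, assembles the whole x with MPI_Allgather, and computes its N/p entries of y = Ax.

      #include <stdio.h>
      #include <stdlib.h>
      #include <mpi.h>

      #define N 8    /* matrix dimension; assumed divisible by process count */

      int main(int argc, char *argv[]) {
          int rank, size;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);
          int nlocal = N / size;

          double *A    = malloc(nlocal * N * sizeof(double)); /* my rows of A */
          double *xloc = malloc(nlocal * sizeof(double));     /* my piece of x */
          double *x    = malloc(N * sizeof(double));          /* full x        */
          double *y    = malloc(nlocal * sizeof(double));     /* my piece of y */

          for (int i = 0; i < nlocal; i++) {       /* toy data: all ones */
              xloc[i] = 1.0;
              for (int j = 0; j < N; j++) A[i * N + j] = 1.0;
          }
          /* every process needs all of x for its row-times-vector products */
          MPI_Allgather(xloc, nlocal, MPI_DOUBLE, x, nlocal, MPI_DOUBLE,
                        MPI_COMM_WORLD);
          for (int i = 0; i < nlocal; i++) {       /* local dot products */
              y[i] = 0.0;
              for (int j = 0; j < N; j++) y[i] += A[i * N + j] * x[j];
          }
          printf("rank %d: y[%d] = %g (expect %d)\n",
                 rank, rank * nlocal, y[0], N);
          free(A); free(xloc); free(x); free(y);
          MPI_Finalize();
          return 0;
      }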
