Design of Parallel Algorithms: Communication Algorithms


1. Design of Parallel Algorithms: Communication Algorithms

2. Topic Overview
   - One-to-All Broadcast and All-to-One Reduction
   - All-to-All Broadcast and Reduction
   - All-Reduce and Prefix-Sum Operations
   - Scatter and Gather
   - All-to-All Personalized Communication
   - Improving the Speed of Some Communication Operations

3. Basic Communication Operations: Introduction
   - Many interactions in practical parallel programs occur in well-defined patterns involving groups of processors.
   - Efficient implementations of these operations can improve performance, reduce development effort and cost, and improve software quality.
   - Efficient implementations must leverage the underlying architecture. For this reason, we refer to specific architectures here.
   - We select a descriptive set of architectures to illustrate the process of algorithm design.

4. Basic Communication Operations: Introduction
   - Group communication operations are built using point-to-point messaging primitives.
   - Recall from our discussion of architectures that communicating a message of size m over an uncongested network takes time t_s + t_w m.
   - We use this as the basis for our analyses. Where necessary, we take congestion into account explicitly by scaling the t_w term.
   - We assume that the network is bidirectional and that communication is single-ported.

5. One-to-All Broadcast and All-to-One Reduction
   - One processor has a piece of data (of size m) it needs to send to everyone.
   - The dual of one-to-all broadcast is all-to-one reduction.
   - In all-to-one reduction, each processor has m units of data. These data items must be combined piece-wise (using some associative operator, such as addition or min), and the result made available at a target processor. (A toy example follows below.)
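These two duals correspond directly to the standard MPI collectives MPI_Bcast and MPI_Reduce. The toy program below is our illustration, not part of the slides; it broadcasts one value from rank 0 and reduces the piece-wise sum back to it.

```c
/* Toy illustration (not from the slides): one-to-all broadcast followed
   by its dual, an all-to-one sum reduction, both rooted at rank 0. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, p;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    double x = (rank == 0) ? 3.0 : 0.0;
    MPI_Bcast(&x, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);   /* everyone gets 3.0 */

    double sum = 0.0;
    MPI_Reduce(&x, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum = %g (expected %g)\n", sum, 3.0 * p);

    MPI_Finalize();
    return 0;
}
```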

6. One-to-All Broadcast and All-to-One Reduction
   Figure: One-to-all broadcast and all-to-one reduction among processors.

7. One-to-All Broadcast and All-to-One Reduction on Rings
   - The simplest way is to send p − 1 messages from the source to the other p − 1 processors; this is not very efficient.
   - Use recursive doubling: the source sends the message to a selected processor. We now have two independent broadcast problems, each defined over half of the machine. (A sketch follows below.)
   - Reduction can be performed in an identical fashion by inverting the process.
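A minimal sketch of the recursive-doubling broadcast under the stated assumptions (p a power of two, source fixed at rank 0); the function name is ours. Because every send is paired with a matching receive in the same step, the sketch does not depend on buffered communication.

```c
/* Sketch: recursive-doubling one-to-all broadcast rooted at rank 0.
   Assumes the communicator size p is a power of two. */
#include <mpi.h>

void bcast_recursive_doubling(void *buf, int count, MPI_Datatype type,
                              MPI_Comm comm)
{
    int rank, p;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &p);
    /* In step 1 the source sends halfway across; each later step halves
       the distance, so the two halves proceed independently. */
    for (int mask = p / 2; mask >= 1; mask /= 2) {
        if (rank % (2 * mask) == 0)          /* already holds the data */
            MPI_Send(buf, count, type, rank + mask, 0, comm);
        else if (rank % mask == 0)           /* receives it this step */
            MPI_Recv(buf, count, type, rank - mask, 0, comm,
                     MPI_STATUS_IGNORE);
    }
}
```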

8. One-to-All Broadcast
   Figure: One-to-all broadcast on an eight-node ring. Node 0 is the source of the broadcast. Each message transfer step is shown by a numbered, dotted arrow from the source of the message to its destination. The number on an arrow indicates the time step during which the message is transferred.

9. All-to-One Reduction
   Figure: Reduction on an eight-node ring with node 0 as the destination of the reduction.

10. Broadcast and Reduction: Example
   Consider the problem of multiplying a matrix with a vector.
   - The n × n matrix is assigned to an n × n (virtual) processor grid. The vector is assumed to be on the first row of processors.
   - The first step of the product requires a one-to-all broadcast of each vector element along the corresponding column of processors. This can be done concurrently for all n columns.
   - Each processor computes the local product of the vector element and its local matrix entry.
   - In the final step, the results of these products are accumulated in the first row using n concurrent all-to-one reduction operations along the columns (using the sum operation). (A sketch follows below.)
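A hedged sketch of one such step using MPI sub-communicators; the function name and the (row, col) bookkeeping are our own conventions, not the slides'. Each rank holds one matrix entry a and, on the first row, one vector element x. Splitting the grid by column gives n independent column communicators, so the broadcast and the reduction run concurrently across all columns, as the slide describes.

```c
#include <mpi.h>

/* One step of the n x n grid algorithm for the rank at (row, col):
   'a' is its matrix entry; 'x' is its vector element (only meaningful
   on row 0 before the call). Returns the column sum, valid on row 0. */
double matvec_entry(double a, double x, int row, int col, MPI_Comm grid)
{
    MPI_Comm col_comm;
    /* one sub-communicator per column; ranks ordered by row index */
    MPI_Comm_split(grid, col, row, &col_comm);

    /* one-to-all broadcast of the vector element down the column */
    MPI_Bcast(&x, 1, MPI_DOUBLE, 0, col_comm);

    double prod = a * x;                 /* local product */

    /* all-to-one sum reduction back to the first row */
    double y = 0.0;
    MPI_Reduce(&prod, &y, 1, MPI_DOUBLE, MPI_SUM, 0, col_comm);

    MPI_Comm_free(&col_comm);
    return y;
}
```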

11. Broadcast and Reduction: Matrix-Vector Multiplication Example
   Figure: One-to-all broadcast and all-to-one reduction in the multiplication of a 4 × 4 matrix with a 4 × 1 vector.

12. Broadcast and Reduction on a Mesh
   - We can view each row and column of a square mesh of p nodes as a linear array of √p nodes.
   - Broadcast and reduction operations can be performed in two steps: the first step does the operation along a row, and the second step does it along each column concurrently. (A sketch follows below.)
   - This process generalizes to higher dimensions as well.
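A minimal sketch of the two-step mesh broadcast from node (0, 0), again with illustrative names and MPI sub-communicators standing in for the rows and columns of the mesh.

```c
#include <mpi.h>

/* Sketch: two-step one-to-all broadcast on a sqrt(p) x sqrt(p) grid,
   rooted at node (0, 0). 'row' and 'col' are the caller's coordinates. */
void mesh_bcast(double *buf, int m, int row, int col, MPI_Comm grid)
{
    MPI_Comm row_comm, col_comm;
    MPI_Comm_split(grid, row, col, &row_comm);  /* one comm per row */
    MPI_Comm_split(grid, col, row, &col_comm);  /* one comm per column */

    /* Step 1: broadcast along the first row only. */
    if (row == 0)
        MPI_Bcast(buf, m, MPI_DOUBLE, 0, row_comm);

    /* Step 2: broadcast down every column concurrently. */
    MPI_Bcast(buf, m, MPI_DOUBLE, 0, col_comm);

    MPI_Comm_free(&row_comm);
    MPI_Comm_free(&col_comm);
}
```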

13. Broadcast and Reduction on a Mesh: Example
   Figure: One-to-all broadcast on a 16-node mesh.

14. Broadcast and Reduction on a Hypercube
   - A hypercube with 2^d nodes can be regarded as a d-dimensional mesh with two nodes in each dimension.
   - The mesh algorithm can be generalized to a hypercube, and the operation is carried out in d (= log p) steps.

15. Broadcast and Reduction on a Hypercube: Example
   Figure: One-to-all broadcast on a three-dimensional hypercube. The binary representations of node labels are shown in parentheses.

16. Broadcast and Reduction Algorithms
   - All of the algorithms described above are adaptations of the same algorithmic template.
   - We illustrate the algorithm for a hypercube, but, as has been seen, it can be adapted to other architectures.
   - The hypercube has 2^d nodes, and my_id is the label of a node.
   - A broadcast from node 0 follows directly from the way the address bits map onto the recursive construction of the hypercube.
   - To support arbitrary source processors, we use a mapping from physical to virtual processors and always broadcast from processor 0 in the virtual processor space.
   - XOR with the root's label gives a self-inverse mapping: apply it once to go from virtual to physical, and a second time to go from physical back to virtual. (A sketch follows below.)
   - The pseudocode in this chapter assumes buffered communication! It must be modified appropriately to yield correct MPI implementations.
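A hedged sketch of the template for an arbitrary root (the function name is ours): XOR-ing with the root's label converts physical ranks to virtual ranks and, applied again, converts them back. Because each send is matched by a receive in the same step, this variant also sidesteps the buffered-communication caveat above.

```c
#include <mpi.h>

/* Sketch: hypercube one-to-all broadcast from an arbitrary root.
   Assumes the communicator size p is a power of two (p = 2^d). */
void cube_bcast(void *buf, int count, MPI_Datatype type,
                int root, MPI_Comm comm)
{
    int rank, p;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &p);
    int vrank = rank ^ root;               /* virtual rank: root -> 0 */
    for (int mask = p / 2; mask >= 1; mask /= 2) {
        if (vrank % (2 * mask) == 0)       /* holders of the data send */
            MPI_Send(buf, count, type, (vrank + mask) ^ root, 0, comm);
        else if (vrank % mask == 0)        /* partners receive */
            MPI_Recv(buf, count, type, (vrank - mask) ^ root, 0, comm,
                     MPI_STATUS_IGNORE);
    }
}
```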

17. Broadcast and Reduction Algorithms
   Figure: One-to-all broadcast of a message X from the source on a hypercube.

18. Broadcast and Reduction Algorithms
   Figure: Single-node accumulation on a d-dimensional hypercube. Each node contributes a message X containing m words, and node 0 is the destination. (A sketch follows below.)
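The dual accumulation, sketched under the same assumptions (our names, p a power of two, destination fixed at node 0), runs the broadcast schedule in reverse and applies the associative operator, here addition, at each merge.

```c
#include <mpi.h>

/* Sketch: sum-reduction of m words per node to node 0 on a hypercube.
   'tmp' is caller-provided scratch space for m doubles. */
void cube_reduce_sum(double *buf, double *tmp, int m, MPI_Comm comm)
{
    int rank, p;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &p);            /* assumed p = 2^d */
    for (int mask = 1; mask < p; mask *= 2) {
        if (rank % (2 * mask) == 0) {   /* keep accumulating */
            MPI_Recv(tmp, m, MPI_DOUBLE, rank + mask, 0, comm,
                     MPI_STATUS_IGNORE);
            for (int i = 0; i < m; i++)
                buf[i] += tmp[i];       /* piece-wise associative op */
        } else if (rank % mask == 0) {  /* send partial result; done */
            MPI_Send(buf, m, MPI_DOUBLE, rank - mask, 0, comm);
        }
    }
}
```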

19. Cost Analysis
   - The broadcast or reduction procedure involves log p point-to-point message transfers, each at a time cost of t_s + t_w m.
   - The total time is therefore:
     $T_{\mathrm{comm}} = \sum_{i=1}^{\log p} (t_s + t_w m) = (t_s + t_w m)\log p$

20. Useful Identities for the Analysis of More Complex Algorithms to Come
   - Geometric series:
     $\sum_{k=1}^{n} r^k = \frac{r\,(r^n - 1)}{r - 1} \;\Rightarrow\; \sum_{i=1}^{\log p} 2^{\,i-1} = p - 1$
   - Arithmetic series (Gauss's summation formula; the slide labels it "Euler's Identity"):
     $\sum_{k=1}^{n} k = \frac{n(n+1)}{2}$
   (A worked instance of the first identity follows below.)
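As a quick check (our derivation, not on the slide), the implication on the right follows by shifting the index and summing the resulting geometric series directly:

```latex
\[
\sum_{i=1}^{\log p} 2^{\,i-1}
  \;=\; \sum_{k=0}^{\log p - 1} 2^{k}
  \;=\; \frac{2^{\log p} - 1}{2 - 1}
  \;=\; p - 1 .
\]
```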

21. All-to-All Broadcast and Reduction
   - A generalization of broadcast in which each processor is a source as well as a destination.
   - Each process sends the same m-word message to every other process, but different processes may broadcast different messages.

22. All-to-All Broadcast and Reduction
   Figure: All-to-all broadcast and all-to-all reduction.

23. All-to-All Broadcast and Reduction on a Ring
   - Can be thought of as a one-to-all broadcast in which every processor is a root node.
   - Naïve implementation: perform p one-to-all broadcasts. This is not the most efficient approach, as processors often sit idle waiting for messages to arrive in each independent broadcast.
   - A better approach performs the operation in p − 1 steps (see the sketch after this list):
     - Each node first sends to one of its neighbors the data it needs to broadcast.
     - In subsequent steps, it forwards the data received from one of its neighbors to its other neighbor.
     - The algorithm terminates in p − 1 steps.
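A sketch of the p − 1 step ring algorithm using MPI_Sendrecv (the function name is ours; result must have room for all p blocks): each step forwards the most recently received block to the right-hand neighbor while a new block arrives from the left.

```c
#include <mpi.h>
#include <string.h>

/* Sketch: all-to-all broadcast on a logical ring. Each rank contributes
   an m-word block 'mine'; 'result' (p*m doubles) ends up holding every
   rank's block, with rank r's block at result + r*m. */
void ring_allgather(const double *mine, double *result, int m,
                    MPI_Comm comm)
{
    int rank, p;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &p);
    int left  = (rank - 1 + p) % p;
    int right = (rank + 1) % p;

    memcpy(result + rank * m, mine, m * sizeof(double));
    int have = rank;                     /* block forwarded this step */
    for (int step = 1; step < p; step++) {
        int next = (have - 1 + p) % p;   /* block arriving from the left */
        MPI_Sendrecv(result + have * m, m, MPI_DOUBLE, right, 0,
                     result + next * m, m, MPI_DOUBLE, left, 0,
                     comm, MPI_STATUS_IGNORE);
        have = next;
    }
}
```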

24. All-to-All Broadcast and Reduction on a Ring
   Figure: All-to-all broadcast on an eight-node ring.

25. All-to-All Broadcast and Reduction on a Ring
   Figure: All-to-all broadcast on a p-node ring.

26. Analysis of the Ring All-to-All Broadcast Algorithm
   - The algorithm takes p − 1 steps, and in each step a node sends and receives a message of size m.
   - Therefore the communication time is:
     $T_{\mathrm{all\text{-}to\text{-}all\text{-}ring}} = \sum_{i=1}^{p-1} (t_s + t_w m) = (t_s + t_w m)(p - 1)$
   - Note that the bisection width of the ring is 2, while the communication pattern requires the transmission of p/2 messages from one half of the network to the other. All-to-all broadcast on a ring therefore requires Ω(p) time, so this algorithm is asymptotically optimal.

27. All-to-All Broadcast on a Mesh
   - Performed in two phases: in the first phase, each row of the mesh performs an all-to-all broadcast using the procedure for the linear array.
   - In this phase, all nodes collect √p messages corresponding to the √p nodes of their respective rows. Each node consolidates this information into a single message of size m√p.
   - The second communication phase is a column-wise all-to-all broadcast of the consolidated messages. (A sketch follows below.)
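A compact sketch of the two phases, with MPI_Allgather standing in for the linear-array procedure on each row and column communicator; function and variable names are illustrative, not from the slides.

```c
#include <mpi.h>
#include <stdlib.h>

/* Sketch: two-phase all-to-all broadcast on a q x q grid (q = sqrt(p)).
   Each rank contributes one double; 'result' (p doubles) ends up with
   node (r, c)'s word at result[r*q + c]. */
void mesh_allgather(double mine, double *result, int q,
                    int row, int col, MPI_Comm grid)
{
    MPI_Comm row_comm, col_comm;
    MPI_Comm_split(grid, row, col, &row_comm);  /* one comm per row */
    MPI_Comm_split(grid, col, row, &col_comm);  /* one comm per column */

    /* Phase 1: all-to-all broadcast along the row; every node ends up
       with a consolidated message of q words. */
    double *rowbuf = malloc(q * sizeof(double));
    MPI_Allgather(&mine, 1, MPI_DOUBLE, rowbuf, 1, MPI_DOUBLE, row_comm);

    /* Phase 2: column-wise all-to-all broadcast of the consolidated
       messages; 'result' now holds all p words. */
    MPI_Allgather(rowbuf, q, MPI_DOUBLE, result, q, MPI_DOUBLE, col_comm);

    free(rowbuf);
    MPI_Comm_free(&row_comm);
    MPI_Comm_free(&col_comm);
}
```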

28. All-to-All Broadcast on a Mesh
   Figure: All-to-all broadcast on a 3 × 3 mesh. The groups of nodes communicating with each other in each phase are enclosed by dotted boundaries. By the end of the second phase, all nodes get (0, 1, 2, 3, 4, 5, 6, 7, 8) (that is, a message from each node).

29. All-to-All Broadcast on a Mesh
   Figure: All-to-all broadcast on a square mesh of p nodes.

30. Mesh-Based All-to-All Broadcast: Analysis
   - The algorithm proceeds in two steps: first a ring all-to-all broadcast over the rows with message size m, then a ring all-to-all broadcast over the columns with message size √p·m.
   - Time for communication:
     $T_{\mathrm{comm}} = \underbrace{(t_s + t_w m)(\sqrt{p} - 1)}_{\text{step 1}} + \underbrace{(t_s + t_w \sqrt{p}\,m)(\sqrt{p} - 1)}_{\text{step 2}} = 2\,t_s(\sqrt{p} - 1) + t_w m\,(p - 1)$
   - Due to the single-port assumption, all-to-all broadcast requires at least t_w m(p − 1) time, since each processor must receive p − 1 distinct m-word messages over a single port. Therefore this algorithm is asymptotically optimal.
