

Multithreaded distributed BFS with regularized communication pattern on the cluster with Angara interconnect. T-Platforms, March 3, 2016. Artem Osipov, Alexander Daryin. GraphHPC-2016, www.t-platforms.com


  1. Multithreaded distributed BFS with regularized communication pattern on the cluster with Angara interconnect. T-Platforms, March 3, 2016. Artem Osipov, Alexander Daryin. GraphHPC-2016, www.t-platforms.com

  2. BFS Revisited
  • Breadth-First Search (BFS) on distributed memory systems
  • Part of the Graph500 benchmark
  • Performance is limited by:
    • memory access latency
    • network latency
    • network bandwidth

  3. Compressed Sparse-Row Graph (CSR)
  [Diagram: CSR layout with a rowstarts array and a columns array; each column entry is a global_id, which encodes the owner rank and the local_id of the target vertex]
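For concreteness, here is a minimal C++ sketch of this layout, assuming 32-bit local ids and a (rank, local_id) pair per column entry; the type and field names are illustrative, not taken from the slides.

    #include <cstdint>
    #include <vector>

    // Distributed CSR: a vertex v owned by this rank keeps its outgoing edges in
    // columns[rowstarts[v] .. rowstarts[v+1]); each column entry is a global id
    // that encodes the owner rank and the local id of the target vertex.
    struct GlobalId {
        uint32_t rank;      // owner process of the target vertex
        uint32_t local_id;  // index of the target vertex on that process
    };

    struct CsrGraph {
        std::vector<uint64_t> rowstarts;  // size = local vertex count + 1
        std::vector<GlobalId> columns;    // size = local edge count
    };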

  4. Reference Algorithm
  Data: CSR (rowstarts R, columns C), current queue Q
  function ProcessQueue(R, C, Q)
      for v in Q do
          for e in {R[v] .. R[v+1]} do
              send (v, local(C[e])) to owner(C[e])

  5. Reference Algorithm
  function ProcessQueue(R, C, Q)
      for v in Q do
          for e in {R[v] .. R[v+1]} do
              send (v, local(C[e])) to owner(C[e])
  • Irregular communication pattern: connection overhead
  • Small message size: low bandwidth
  • Message coalescing, i.e. a separate send buffer for each peer process: large memory overhead, and how does it combine with multiple threads? (see the sketch below)
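A minimal C++ sketch of per-peer message coalescing as described in the bullets above; the buffer capacity, the message layout and the use of MPI_Send are illustrative assumptions, not the authors' implementation.

    #include <cstddef>
    #include <cstdint>
    #include <vector>
    #include <mpi.h>

    // One send buffer per destination rank: small (parent, target) messages are
    // accumulated and shipped in large blocks, trading memory for bandwidth.
    struct EdgeMsg { uint32_t parent_local; uint32_t target_local; };

    class CoalescingSender {
    public:
        CoalescingSender(int nprocs, std::size_t capacity)
            : buffers_(nprocs), capacity_(capacity) {}

        void send(int dest, EdgeMsg m, MPI_Comm comm) {
            buffers_[dest].push_back(m);
            if (buffers_[dest].size() == capacity_) flush(dest, comm);
        }

        void flush(int dest, MPI_Comm comm) {
            if (buffers_[dest].empty()) return;
            MPI_Send(buffers_[dest].data(),
                     static_cast<int>(buffers_[dest].size() * sizeof(EdgeMsg)),
                     MPI_BYTE, dest, /*tag=*/0, comm);
            buffers_[dest].clear();
        }

    private:
        std::vector<std::vector<EdgeMsg>> buffers_;  // one buffer per peer rank
        std::size_t capacity_;
    };

With several worker threads, each thread needs its own set of per-peer buffers, which is exactly the memory overhead the slide points out.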

  6. Goals
  • Optimize memory access
  • Regularize the communication pattern
  • Reduce memory consumption
  • Allow for a multithreaded design
  • Possible solution:
    • partition graph data for each peer node
    • process partitions independently (and possibly concurrently)

  7. Regularized Communication Pattern
  • Partition graph data for each peer node
  • Process partitions independently (and concurrently)
  • Problem: unacceptable memory overhead when using the standard CSR representation for each partition, since |rowstarts| = |V|
  • Solution: pack rowstarts

  8. Packed Partitioned CSR
  [Diagram: rowstarts packed per destination rank (dst 0 .. dst 3) as (local_id, offset) pairs, alongside the corresponding column lists]

  9. Packed CSR Algorithm
  Data: Packed CSR (vertex indices V, row offsets D, columns C), current queue Q (bit mask), number of processes NP
  function ProcessQueue(V, D, C, Q)
      for p in {0 .. NP-1} do
          for i in {0 .. |V[p]|-1} do
              if V[p][i] in Q then
                  for e in {D[p][i] .. D[p][i+1]} do
                      send (V[p][i], C[e]) to p
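A minimal C++ sketch of the packed partitioned layout and of the traversal above; the container choices and the bit-mask representation of Q are assumptions for illustration.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Partition for one destination rank p: only source vertices that actually
    // have edges towards p are listed, so the index can be much smaller than |V|.
    struct Partition {
        std::vector<uint32_t> vertex_ids;   // V[p][i]: local id of a source vertex
        std::vector<uint64_t> row_offsets;  // D[p]: size = vertex_ids.size() + 1
        std::vector<uint32_t> columns;      // C: local id of the target on rank p
    };

    // Walk the current frontier (bit mask over local vertices) one destination
    // rank at a time, which yields a regular communication pattern.
    template <typename SendFn>
    void process_queue(const std::vector<Partition>& parts,
                       const std::vector<bool>& in_queue, SendFn send) {
        for (std::size_t p = 0; p < parts.size(); ++p) {
            const Partition& part = parts[p];
            for (std::size_t i = 0; i < part.vertex_ids.size(); ++i) {
                const uint32_t v = part.vertex_ids[i];
                if (!in_queue[v]) continue;
                for (uint64_t e = part.row_offsets[i]; e < part.row_offsets[i + 1]; ++e)
                    send(static_cast<int>(p), v, part.columns[e]);  // coalesced per rank p
            }
        }
    }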

  10. Direction Optimization
  • Standard algorithm: when sending an (u, v) edge for an already visited u, we hope that v is unvisited
  • At some point it becomes easier to hit a visited v than an unvisited one
  • Backward stepping algorithm: send an (u, v) edge for an unvisited u and hope that v is visited
  • To reduce communication, use edge probing
  • To increase the effectiveness of probing, sort columns by degree (see the sketch below)
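A minimal C++ sketch of the degree sort from the last bullet, assuming the degree of each target vertex (indexed by its column entry) is available to the sorting routine; the function and parameter names are illustrative.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Reorder each adjacency list so that the highest-degree targets come first;
    // the single edge probed in the first backward pass is then the one most
    // likely to hit an already visited vertex.
    void sort_columns_by_degree(const std::vector<uint64_t>& rowstarts,
                                std::vector<uint32_t>& columns,
                                const std::vector<uint32_t>& degree) {
        for (std::size_t v = 0; v + 1 < rowstarts.size(); ++v) {
            std::sort(columns.begin() + static_cast<std::ptrdiff_t>(rowstarts[v]),
                      columns.begin() + static_cast<std::ptrdiff_t>(rowstarts[v + 1]),
                      [&](uint32_t a, uint32_t b) { return degree[a] > degree[b]; });
        }
    }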

  11. Backward Stepping
  Data: Packed CSR (vertex indices V, row offsets D, columns C), bit mask of visited vertices M, number of processes NP
  function ProcessUnvisited()
      // probe first edges
      for p in {0 .. NP-1} do
          for i in {0 .. |V[p]|-1} do
              if V[p][i] not in M then
                  send (V[p][i], C[D[p][i]]) to p
      flush send buffers
      wait for all acks
      // probe other edges
      for p in {0 .. NP-1} do
          for i in {0 .. |V[p]|-1} do
              if V[p][i] not in M then
                  for e in {D[p][i]+1 .. D[p][i+1]} do
                      send (V[p][i], C[e]) to p
      flush send buffers

  12. Multithreading
  • Basic concepts:
    • All MPI communications go through the main thread
    • All received data is handled by a dedicated thread
    • The other threads prepare data to send
    • All communication between threads goes through concurrent queues (the Intel Threading Building Blocks package was used); see the sketch below
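A minimal C++ sketch of these thread roles using tbb::concurrent_queue; the buffer contents, the spin-waits and the single simulated round trip are simplifications for illustration, not the authors' design (which is shown in the diagrams that follow).

    #include <tbb/concurrent_queue.h>
    #include <cstdint>
    #include <thread>
    #include <utility>
    #include <vector>

    // Illustrative unit of work handed between threads: a block of coalesced
    // messages destined for (or received from) one peer rank.
    struct Buffer {
        int peer_rank;
        std::vector<uint64_t> payload;
    };

    int main() {
        tbb::concurrent_queue<Buffer> to_main;   // workers -> main thread (ready to send)
        tbb::concurrent_queue<Buffer> received;  // main thread -> message handler

        // Worker thread: scans its part of the packed CSR and produces buffers.
        std::thread worker([&] {
            to_main.push(Buffer{0, {1, 2, 3}});
        });

        // Handler thread: consumes received buffers and updates BFS state.
        std::thread handler([&] {
            Buffer b;
            while (!received.try_pop(b)) { /* spin; a real design would block */ }
            // ... update the visited mask / next-level queue from b.payload ...
        });

        // Main thread: the only one touching MPI; here one round trip is simulated
        // by moving a buffer from the outgoing queue straight to the handler.
        Buffer out;
        while (!to_main.try_pop(out)) { /* wait for a worker buffer */ }
        received.push(std::move(out));

        worker.join();
        handler.join();
        return 0;
    }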

  13. Multithreading
  [Diagram: thread 0 performs MPI communication and coordinates the worker threads via send/recv buffers and a dst list; thread 1 processes received messages; threads 2..k-1 each process the packed CSR for one destination at a time]

  14. Forward stepping
  [Diagram: per-round state machines for MainThread(0), MsgHandlerThread(1) and QueueThread(2..k-1); states include InitNewRound, WaitMainThread, RanksToProcess, RecvRequests, SendRequests, ProcessLocal, ProcessGlobal, ProcessRecv, Send/Recv BuffersQueue and ActionsQueue hand-offs, and an Allgather of the updated count]

  15. Backward stepping
  [Diagram: per-round state machines for MainThread(0) and MsgHandlerThread(1); states include InitNewRound, WaitMainThread, FwdRecvRequests, ProcessLocal, ProcessBckRecv, Send/Recv BuffersQueue and ActionsQueue hand-offs, and an Allgather of the updated count]

  16. Backward stepping
  [Diagram: per-round state machines for MainThread(0) and QueueThread(2..k-1); states include InitNewRound, WaitMainThread, RanksToProcess, FwdSendRequests, BckRecvRequests, BckSendRequests, ProcessFwdRecv, ProcessGlobal, Send/Recv BuffersQueue and ActionsQueue hand-offs, and an Allgather of the updated count]

  17. Performance: “Angara K1”, 16 nodes
  [Plot: performance versus graph scale (22–30) for several per-node process / OpenMP-thread configurations on 16 nodes]

  18. Performance: “Angara K1”, 32 nodes
  [Plot: performance versus graph scale (22–30) for several per-node process / OpenMP-thread configurations (32x1, 32x4, 32x8) on 32 nodes]

  19. Future optimizations
  • Graph redistribution (maximal clustering per node)
  • Multithreaded heavy

  20. THANK YOU! www.t-platforms.com
