  1. A Quest for Unified, Global View Parallel Programming Models for Our Future. Kenjiro Taura, University of Tokyo. [title-slide figure: a tree of dynamically created tasks, labeled T0 through T199] 1 / 52

  2. Acknowledgements ▶ Jun Nakashima (MassiveThreads) ▶ Shigeki Akiyama, Wataru Endo (MassiveThreads/DM) ▶ An Huynh (DAGViz) ▶ Shintaro Iwasaki (Vectorization) 2 / 52


  4. What is task parallelism? ▶ like most CS terms, the definition is vague ▶ I don’t consider the opposition “data parallelism vs. task parallelism” useful ▶ imagine lots of tasks, each working on a piece of data ▶ is it data parallel or task parallel? ▶ let’s instead ask: ▶ what’s useful from the programmer’s viewpoint ▶ what distinctions are useful to make from the implementer’s viewpoint 4 / 52

  5.–9. What is task parallelism? A system supports task parallelism when: 1. a logical unit of concurrency (that is, a task) can be created dynamically, at an arbitrary point of execution; 2. and cheaply; 3. and tasks are automatically mapped onto hardware parallelism (cores, nodes, . . . ); 4. and cheaply context-switched 5 / 52

  10. What are they good for? ▶ generality: “creating tasks at arbitrary points” unifies many superficially different patterns ▶ parallel nested loops, parallel recursions ▶ they trivially compose ▶ programmability: cheap task creation + automatic load balancing allow a straightforward, processor-oblivious decomposition of the work (divide-and-conquer-until-trivial; see the sketch below) ▶ performance: dynamic scheduling is a basis for hiding latency and tolerating noise 6 / 52
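To make the processor-oblivious, divide-and-conquer style concrete, here is a minimal sketch using Intel TBB's task_group (the deck's own examples use the analogous mtbb::task_group on MassiveThreads). The function name, array type, and the cutoff of 1024 are illustrative assumptions, not taken from the talk.

    #include <tbb/task_group.h>
    #include <cstddef>

    // Processor-oblivious parallel sum: divide until the range is trivially
    // small, spawn a task for one half, keep working on the other, then join.
    long parallel_sum(const long *a, std::size_t n) {
      if (n < 1024) {                 // illustrative cutoff ("until trivial")
        long s = 0;
        for (std::size_t i = 0; i < n; i++) s += a[i];
        return s;
      }
      long left = 0, right = 0;
      tbb::task_group tg;
      tg.run([&] { left = parallel_sum(a, n / 2); });   // a task created at an arbitrary point
      right = parallel_sum(a + n / 2, n - n / 2);       // the current task keeps working
      tg.wait();                                        // fork/join synchronization
      return left + right;
    }

The recursion never mentions the number of cores; the work stealing scheduler maps the dynamically created tasks onto whatever hardware parallelism is available.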

  11. Our goal ▶ programmers use tasks (+ higher-level syntax on top) as the unified means to express parallelism ▶ the system maps tasks to hardware parallelism ▶ cores within a node ▶ nodes ▶ SIMD lanes within a core! 7 / 52

  12. Rest of the talk ▶ Intra-node Task Parallelism ▶ Task Parallelism in Distributed Memory ▶ Need Good Performance Analysis Tools ▶ Compiler Optimizations and Vectorization ▶ Concluding Remarks 8 / 52


  14. Agenda ▶ Intra-node Task Parallelism ▶ Task Parallelism in Distributed Memory ▶ Need Good Performance Analysis Tools ▶ Compiler Optimizations and Vectorization ▶ Concluding Remarks 10 / 52

  15.–18. Taxonomy ▶ library or frontend: is it implemented as a library for ordinary C/C++ compilers, or does it heavily rely on a tailored frontend? ▶ tasks suspendable or atomic: can tasks suspend/resume in the middle, or do they always run to completion? ▶ synchronization patterns arbitrary or pre-defined: can tasks synchronize in an arbitrary topology, or only in pre-defined synchronization patterns (e.g., bag-of-tasks, fork/join)? (a small sketch of the “arbitrary topology” case follows below) ▶ tasks untied or tied: can tasks migrate between workers after they have started? 11 / 52
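As an illustration of what an “arbitrary synchronization topology” means compared with fork/join, here is a hedged sketch in which one task waits on a value produced by a task that is neither its parent nor its child. It uses plain C++ std::async and std::promise/std::future only for self-containedness; the runtimes in the next slide's table would express the same dependence with their own primitives (e.g., mutexes and condition variables).

    #include <future>
    #include <cstdio>

    // Two sibling tasks synchronize directly with each other (producer -> consumer),
    // a dependence that does not follow the parent-waits-for-child (fork/join) shape.
    int main() {
      std::promise<int> p;
      std::future<int> f = p.get_future();

      auto producer = std::async(std::launch::async, [&p] {
        p.set_value(42);                  // hand a value to a non-parent, non-child task
      });
      auto consumer = std::async(std::launch::async, [&f] {
        std::printf("got %d\n", f.get()); // blocks until the sibling produces the value
      });

      producer.wait();
      consumer.wait();
      return 0;
    }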

  19. Instantiations

                        library/    suspendable   untied    sync
                        frontend    tasks         tasks     topology
        OpenMP tasks    frontend    yes           yes       fork/join
        TBB             library     yes           no        fork/join
        Cilk            frontend    yes           yes       fork/join
        Quark           library     no            no        arbitrary
        Nanos++         library     yes           yes       arbitrary
        Qthreads        library     yes           yes       arbitrary
        Argobots        library     yes           yes?      arbitrary
        MassiveThreads  library     yes           yes       arbitrary

      12 / 52
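To make one row of the table concrete, the OpenMP-tasks entry (compiler frontend, suspendable tasks, untied tasks, fork/join synchronization) corresponds to code like the following sketch; the fib example is the textbook illustration and is not taken from the slides.

    #include <cstdio>

    // Fork/join task parallelism with OpenMP tasks: "untied" allows a suspended
    // task to resume on a different worker; taskwait is the join.
    long fib(int n) {
      if (n < 2) return n;
      long x, y;
      #pragma omp task untied shared(x) firstprivate(n)
      x = fib(n - 1);
      y = fib(n - 2);
      #pragma omp taskwait
      return x + y;
    }

    int main() {
      long r = 0;
      #pragma omp parallel
      #pragma omp single
      r = fib(30);
      std::printf("fib(30) = %ld\n", r);
      return 0;
    }

Whether the runtime actually migrates untied tasks is implementation-defined; the clause only permits it.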

  20. MassiveThreads ▶ https://github.com/massivethreads/massivethreads ▶ design philosophy: user-level threads (ULT) behind an ordinary thread API, as you know it ▶ tid = myth_create(f, arg) ▶ myth_join(tid) ▶ myth_yield() to switch among threads (useful for latency hiding) ▶ mutexes and condition variables to build arbitrary synchronization patterns ▶ efficient work stealing scheduler (locally LIFO and child-first; steal the oldest task first) ▶ an (experimental) customizable work stealing scheduler [Nakashima and Taura; ROSS 2013]. A minimal usage sketch follows below. 13 / 52
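The sketch below exercises the thread-style API named on this slide. It assumes the conventional pthread-like signatures myth_thread_t myth_create(void *(*f)(void *), void *arg) and int myth_join(myth_thread_t t, void **result), and the header name myth/myth.h; consult the MassiveThreads distribution for the exact names and link against its library when building.

    #include <myth/myth.h>   // assumed header name; check the MassiveThreads distribution
    #include <cstdio>

    // A user-level thread body: takes and returns void*, pthread style.
    void *child(void *arg) {
      long x = (long)arg;
      std::printf("child got %ld\n", x);
      return (void *)(x + 1);
    }

    int main() {
      // create a user-level thread at an arbitrary point of execution
      myth_thread_t t = myth_create(child, (void *)41);
      // myth_yield() could be called here to let other user-level threads run
      void *ret = nullptr;
      myth_join(t, &ret);    // join the thread and collect its return value
      std::printf("child returned %ld\n", (long)ret);
      return 0;
    }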

  21. User-facing APIs on MassiveThreads
      ▶ TBB’s task_group and parallel_for (but with MassiveThreads’ untied, work stealing scheduler underneath)
      ▶ Chapel tasks on top of MassiveThreads (currently broken orz)
      ▶ SML# (Ueno @ Tohoku University): ongoing
      ▶ Tapas (Fukuda @ RIKEN), a domain specific language for particle simulation

      A TBB interface on MassiveThreads:

          quicksort(a, p, q) {
            if (q - p < th) {
              ...
            } else {
              mtbb::task_group tg;
              r = partition(a, p, q);
              tg.run([=] { quicksort(a, p, r - 1); });
              quicksort(a, r, q);
              tg.wait();
            }
          }

      14 / 52

  22. Important performance metrics
      ▶ low local creation/sync overhead
      ▶ low local context switches
      ▶ reasonably low load balancing (migration) overhead
      ▶ somewhat sequential scheduling order

      Microbenchmark:

          parent() {
            π0:
            spawn { γ: ... };
            π1:
          }

      op               measures what   time (cycles)
      local create     π0 → γ          ≈ 140
      work steal       π0 → π1         ≈ 900
      context switch   myth_yield      ≈ 80

      (Haswell i7-4500U (1.80 GHz), GCC 4.9)

      15 / 52
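As one way such numbers might be gathered, here is a hedged sketch that times the local-create path (the π0 → γ switch plus the subsequent join) with the CPU cycle counter. The iteration count, the use of __rdtsc, and the empty thread body are illustrative assumptions; measuring the work-steal and myth_yield rows would need separate, more careful setups (pinning workers, forcing steals, and so on).

    #include <myth/myth.h>   // assumed header name, as in the earlier sketch
    #include <x86intrin.h>   // __rdtsc
    #include <cstdio>

    // An (almost) empty user-level thread body; with a child-first scheduler the
    // cost is dominated by creating the thread and switching into it.
    void *child(void *) { return nullptr; }

    int main() {
      const int iters = 100000;
      unsigned long long t0 = __rdtsc();
      for (int i = 0; i < iters; i++) {
        myth_thread_t t = myth_create(child, nullptr);  // local create: π0 -> γ
        myth_join(t, nullptr);                          // join the already-finished child
      }
      unsigned long long t1 = __rdtsc();
      // create + join timed together, so this is an upper bound on the create cost alone
      std::printf("create+join: ~%llu cycles/iteration\n", (t1 - t0) / iters);
      return 0;
    }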

  23. Comparison to other systems
      [bar chart: cost in clocks of the parent()/spawn microbenchmark from the previous slide, with “child” and “parent” series, for Cilk, CilkPlus, MassiveThreads, OpenMP, Qthreads, and TBB; visible values range from 72–167 clocks for the cheapest cases up to ≈ 7000 clocks for the most expensive]
      Summary:
      ▶ Cilk(Plus), known for its superb local creation performance, sacrifices work stealing performance
      ▶ TBB’s local creation overhead is equally good, but it is “parent-first” and tasks are tied to a worker once started
      16 / 52

  24.–28. Further research agenda (1) ▶ task runtimes for ever larger scale systems are vital ▶ ⇒ “locality-/cache-/hierarchy-/topology-/whatever-aware” schedulers are obviously important ▶ ⇒ hence proposals for hierarchical/customizable schedulers ▶ ⇒ yet, IMO, there is no clear demonstration of a scheduler that clearly outperforms simple greedy work stealing over many workloads ▶ the question, it seems, ultimately comes down to this: when no tasks exist near you but some may exist far from you, do you steal one or not (stay idle)? 17 / 52
