Breaking the Linear-Memory Barrier in Massively Parallel Computing

Breaking the Linear-Memory Barrier in Massively Parallel Computing: MIS on Trees with Strongly Sublinear Memory. Sebastian Brandt, Manuela Fischer, Jara Uitto, ETH Zurich. Model: Massively Parallel Computing (MPC).


  1. Local Memory in MPC: N machines, T memory per machine, N · T = Õ(n + m).
  ▪ Superlinear memory, T = Õ(n^(1+ε)) with 0 < ε ≤ 1: machines see all nodes; often trivial, often unrealistic; admits O(1)-round algorithms based on a very simple sampling approach, Lattanzi et al. [SPAA'11].
  ▪ Linear memory, T = Θ̃(n): machines see all nodes; the usual assumption for most problems, although Õ(n) local memory is prohibitively large and sparse graphs become trivial.
  ▪ Strongly sublinear memory, T = Õ(n^ε) with 0 ≤ ε < 1: no machine sees all nodes; only direct simulation of LOCAL/PRAM algorithms known.
  Algorithms have been stuck at this linear-memory barrier!

  2. Algorithms have been stuck at this linear-memory barrier. Fundamentally?

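As a back-of-the-envelope illustration of the three regimes (a sketch with illustrative numbers, not from the talk), the machine count N follows from the global memory budget N · T = Õ(n + m) once the local memory T is fixed; the polylog factors hidden in the Õ are ignored here:

```python
# Illustrative only: rough machine counts for the three MPC memory regimes,
# ignoring the polylog factors hidden in the O-tilde notation.
def machines_needed(n, m, eps, regime):
    """Return (local memory T, number of machines N) with N * T ~ n + m."""
    if regime == "strongly_sublinear":   # T = n^eps, 0 <= eps < 1
        T = n ** eps
    elif regime == "linear":             # T = n
        T = n
    elif regime == "superlinear":        # T = n^(1 + eps), 0 < eps <= 1
        T = n ** (1 + eps)
    else:
        raise ValueError(regime)
    N = (n + m) // T + 1                 # enough machines to hold the input
    return int(T), int(N)

n, m = 10**6, 10**7
print(machines_needed(n, m, 0.5, "strongly_sublinear"))  # T = 1000, many machines
print(machines_needed(n, m, 0.5, "linear"))
```

With strongly sublinear memory (ε = 0.5 here), each machine holds only √n words, so no machine can see the whole node set, which is exactly the locality the talk addresses.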
  4. Breaking the Linear-Memory Barrier: Efficient MPC Graph Algorithms with Strongly Sublinear Memory

  5. Breaking the Linear-Memory Barrier: Efficient MPC Graph Algorithms with Strongly Sublinear Memory. T = Õ(n^ε) local memory, N = Õ(m/n^ε) machines, poly(log log n) rounds.

  6. Breaking the Linear-Memory Barrier: Efficient MPC Graph Algorithms with Strongly Sublinear Memory. T = Õ(n^ε) local memory, N = Õ(m/n^ε) machines, poly(log log n) rounds. Conditional lower bound of Ω(log log n) rounds: Ghaffari, Kuhn, Uitto [FOCS'19].

  8. Breaking the Linear-Memory Barrier: Efficient MPC Graph Algorithms with Strongly Sublinear Memory. T = Õ(n^ε) local memory, N = Õ(m/n^ε) machines, poly(log log n) rounds.
  Imposed locality: machines see only a subset of the nodes, regardless of the sparsity of the graph.
  Our approach to cope with locality: enhance LOCAL algorithms with global communication.
  ▪ exponentially faster than LOCAL algorithms due to shortcuts
  ▪ polynomially less memory than most MPC algorithms

  17. Breaking the Linear-Memory Barrier: Efficient MPC Graph Algorithms with Strongly Sublinear Memory. T = Õ(n^ε) local memory, N = Õ(m/n^ε) machines, poly(log log n) rounds: the best we can hope for, by GKU [FOCS'19].
  Imposed locality: machines see only a subset of the nodes, regardless of the sparsity of the graph.
  Our approach to cope with locality: enhance LOCAL algorithms with global communication.
  ▪ exponentially faster than LOCAL algorithms due to shortcuts
  ▪ polynomially less memory than most MPC algorithms

  20. Problem: Maximal Independent Set (MIS)

  24. Maximal Independent Set (MIS). Independent set: a set of pairwise non-adjacent nodes. Maximal: no node can be added without violating independence.

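The two defining properties translate directly into checks on an adjacency list. The sketch below (sequential Python with hypothetical helper names, not the talk's MPC algorithm) verifies independence and maximality and builds an MIS greedily:

```python
# A minimal sketch of the two defining properties of an MIS,
# on an adjacency-list graph (helper names are illustrative).
def is_independent(adj, S):
    # no two nodes of S are adjacent
    return all(v not in S for u in S for v in adj[u])

def is_maximal_independent(adj, S):
    # maximal: every node outside S already has a neighbor in S
    return is_independent(adj, S) and all(
        u in S or any(v in S for v in adj[u]) for u in adj)

def greedy_mis(adj):
    # sequential greedy construction, O(n + m) time
    S = set()
    for u in adj:
        if all(v not in S for v in adj[u]):
            S.add(u)
    return S

# path a-b-c: both {a, c} and {b} are MISs; {a} alone is not maximal
path = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
mis = greedy_mis(path)
print(is_maximal_independent(path, mis))   # True
```

The greedy loop is inherently sequential; the rest of the talk is about how to get the same guarantee in few parallel rounds.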
  26. MIS: State of the Art. N machines, T memory per machine, N · T = Õ(n + m); the same three memory regimes: superlinear T = Õ(n^(1+ε)), linear T = Θ̃(n), strongly sublinear T = Õ(n^ε).

  27. Superlinear memory: O(1) rounds, Lattanzi et al. [SPAA'11].

  28. Linear memory: O(log log n) rounds, Ghaffari et al. [PODC'18].

  29. Strongly sublinear memory: O(log n) rounds, Luby's algorithm.

  30. Strongly sublinear memory: Õ(√log n) rounds, Ghaffari and Uitto [SODA'19].

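Luby's algorithm, the classical O(log n)-round baseline cited above, can be sketched as a sequential simulation of its synchronous rounds: every undecided node draws a random priority and joins the MIS when it beats all undecided neighbors; winners and their neighbors then drop out. (Illustrative Python, not an MPC implementation.)

```python
import random

# Sequential simulation of Luby's randomized MIS algorithm.
# Each while-iteration corresponds to one synchronous round.
def luby_mis(adj, rng=None):
    rng = rng or random.Random(0)
    live = set(adj)                     # nodes still undecided
    mis = set()
    while live:
        # each live node draws a fresh random priority
        r = {u: rng.random() for u in live}
        # a node joins the MIS if it beats all its live neighbors
        winners = {u for u in live
                   if all(r[u] > r[v] for v in adj[u] if v in live)}
        mis |= winners
        # winners and their neighbors are decided and drop out
        removed = set(winners)
        for u in winners:
            removed |= set(adj[u]) & live
        live -= removed
    return mis

path = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(sorted(luby_mis(path)))
```

Each round the node with the globally highest priority always wins, so the loop terminates; the well-known analysis shows O(log n) rounds suffice with high probability.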
  31. MIS: State of the Art on Trees, in the same three memory regimes.

  32. On trees, the linear and superlinear regimes both admit a trivial O(1)-round solution; in the strongly sublinear regime the best bounds remain Luby's O(log n) and Ghaffari and Uitto's Õ(√log n) [SODA'19].

  33. Our Result: O(log³ log n) rounds in the strongly sublinear regime on trees.

  34. Our Result: an O(log³ log n)-round MPC algorithm with T = Õ(n^ε) memory that w.h.p. computes an MIS on trees.

  35. In comparison, Ghaffari and Uitto [SODA'19] use Õ(√log n) rounds with T = Õ(n^ε) memory.

  36. Ghaffari et al. [PODC'18] use Õ(log log n) rounds with T = Õ(n) memory.

  37. A conditional Ω(log log n)-round lower bound holds for T = Õ(n^ε): Ghaffari, Kuhn, and Uitto [FOCS'19].

  38. Algorithm

  41. Algorithm Outline
  1) Shattering: break the graph into small components
     i) Degree Reduction
     ii) LOCAL Shattering, Ghaffari [SODA'16]
  2) Post-Shattering: solve the problem on the remaining components
     i) Gathering of Components
     ii) Local Computation

  42. Algorithm Outline
  1) Shattering: break the graph into small components (the main LOCAL technique, Beck [RSA'91])
     i) Degree Reduction
     ii) LOCAL Shattering, Ghaffari [SODA'16]
  2) Post-Shattering: solve the problem on the remaining components
     i) Gathering of Components
     ii) Local Computation

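The post-shattering phase can be sketched sequentially: gather each small remaining component (assumed small enough to fit into one machine's memory) and solve MIS on it locally. The helper names below are illustrative, not from the paper:

```python
from collections import deque

# Sketch of post-shattering (sequential simulation): find the connected
# components of the remaining graph, then solve each one locally.
def components(adj):
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        comp, q = [], deque([s])   # BFS to gather one component
        seen.add(s)
        while q:
            u = q.popleft()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        comps.append(comp)
    return comps

def post_shattering(adj):
    mis = set()
    for comp in components(adj):   # each small component fits on one machine
        for u in comp:             # solve locally, e.g. greedily
            if all(v not in mis for v in adj[u]):
                mis.add(u)
    return mis
```

In the actual MPC algorithm the gathering is done in parallel via global communication; the point of shattering is that the surviving components are small enough for this to be possible with Õ(n^ε) local memory.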
  55. Polynomial Degree Reduction: Subsample-and-Conquer

  56. Polynomial Degree Reduction: Subsample-and-Conquer
  Subsample: subsample nodes independently.
  Conquer: compute a random MIS in the subsampled graph
  ▪ gather connected components
  ▪ locally compute a random 2-coloring
  ▪ add a color class to the MIS

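One Subsample-and-Conquer round can be sketched as a sequential simulation (illustrative Python; the sampling probability p is an assumed parameter, and the authors' MPC version differs in details): sample nodes independently, gather the components of the sampled forest, 2-color each component by BFS (subgraphs of a tree are bipartite), and add one random color class per component to the MIS; chosen nodes then knock out their tree neighbors.

```python
import random
from collections import deque

# Sequential sketch of one Subsample-and-Conquer round on a tree.
def subsample_and_conquer_round(adj, p=0.5, rng=None):
    rng = rng or random.Random(1)
    # Subsample: keep each node independently with probability p
    sampled = {u for u in adj if rng.random() < p}
    # restrict the tree to sampled nodes: a forest of (small) components
    sub = {u: [v for v in adj[u] if v in sampled] for u in sampled}
    mis, seen = set(), set()
    for s in sub:
        if s in seen:
            continue
        # gather one component and 2-color it by BFS (forests are bipartite)
        color, comp, q = {s: 0}, [s], deque([s])
        seen.add(s)
        while q:
            u = q.popleft()
            for v in sub[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    seen.add(v)
                    comp.append(v)
                    q.append(v)
        # add a random color class of this component to the MIS
        c = rng.randrange(2)
        mis |= {u for u in comp if color[u] == c}
    # MIS nodes knock out all their neighbors in the original tree
    removed = mis | {v for u in mis for v in adj[u]}
    return mis, removed

tree = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
mis, removed = subsample_and_conquer_round(tree)
```

The chosen set is independent: within a component a single color class of a proper 2-coloring is independent, and distinct components of the sampled forest are non-adjacent by construction. The removal of MIS neighbors is what drives the polynomial degree reduction.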