Local Memory in MPC
M machines with S memory per machine, M · S = Õ(m + n)

Strongly Sublinear Memory: S = Õ(n^ε), 0 ≤ ε < 1
▪ No machine sees all nodes.
▪ for most problems, only direct simulation of LOCAL/PRAM algorithms known

Linear Memory: S = Õ(n)
▪ Machines see all nodes.
▪ usual assumption; sparse graphs trivial
▪ Algorithms have been stuck at this linear-memory barrier! Fundamentally?

Superlinear Memory: S = Õ(n^(1+δ)), 0 < δ ≤ 1
▪ Machines see all nodes.
▪ often trivial: for many problems, admits O(1)-round algorithms based on a very simple sampling approach (Lattanzi et al. [SPAA'11])
▪ often unrealistic: Õ(n^(1+δ)) local memory is prohibitively large
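To make the strongly sublinear regime concrete, here is a minimal Python sketch (illustrative only; partition_edges and the parameter choices are not from the talk) of distributing an edge list over machines whose local memory is only S = n^ε words:

```python
import random

def partition_edges(edges, n, eps=0.5):
    """Split an edge list across MPC machines, each holding at most
    S = n**eps edges (strongly sublinear local memory). Illustrative only."""
    S = max(1, int(n ** eps))                       # local memory budget per machine
    random.shuffle(edges)                           # any balanced assignment works
    return [edges[i:i + S] for i in range(0, len(edges), S)], S

# Example: a tree on n nodes has m = n - 1 edges, so M = ceil(m / n^eps)
# machines suffice and the total memory M * S stays near-linear in m + n.
n = 10_000
edges = [(i, random.randrange(i)) for i in range(1, n)]   # a random recursive tree
machines, S = partition_edges(edges, n)
print(f"S = {S} edges per machine, M = {len(machines)} machines")
```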
Breaking the Linear-Memory Barrier:
Efficient MPC Graph Algorithms with Strongly Sublinear Memory

S = O(n^ε) local memory, M = O(m/n^ε) machines, poly(log log n) rounds

Conditional lower bound of Ω(log log n) rounds: Ghaffari, Kuhn, Uitto [FOCS'19]
Breaking the Linear-Memory Barrier:
Efficient MPC Graph Algorithms with Strongly Sublinear Memory

Imposed Locality: machines see only a subset of the nodes, regardless of the sparsity of the graph.

Our Approach to Cope with Locality: enhance LOCAL algorithms with global communication
▪ exponentially faster than LOCAL algorithms due to shortcuts (the best we can hope for: GKU [FOCS'19])
▪ polynomially less memory than most MPC algorithms
Problem: Maximal Independent Set (MIS)
Maximal Independent Set (MIS)
Independent Set: a set of pairwise non-adjacent nodes
Maximal: no node can be added without violating independence
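For intuition only, a minimal sequential sketch of these two properties (this is not the MPC algorithm; the adjacency-dict representation is just for illustration):

```python
def greedy_mis(adj):
    """Sequential greedy MIS on a graph given as {node: set(neighbors)}:
    scan the nodes and add every node that has no chosen neighbor yet.
    The result is independent and maximal by construction. Illustrative only."""
    mis, blocked = set(), set()
    for v in adj:                        # any scan order yields some MIS
        if v not in blocked:
            mis.add(v)
            blocked.add(v)
            blocked |= adj[v]            # neighbors may no longer join
    return mis

# Tiny example: the path 0-1-2-3 has {0, 2} as one possible MIS.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(greedy_mis(adj))                   # {0, 2}
```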
MIS: State of the Art
M machines with S memory per machine, M · S = Õ(m + n)

Strongly Sublinear Memory, S = Õ(n^ε), 0 ≤ ε < 1 (no machine sees all nodes):
▪ Luby's Algorithm: O(log n) rounds
▪ Ghaffari and Uitto [SODA'19]: Õ(√log n) rounds

Linear Memory, S = Õ(n) (machines see all nodes):
▪ Ghaffari et al. [PODC'18]: O(log log n) rounds

Superlinear Memory, S = Õ(n^(1+δ)), 0 < δ ≤ 1 (machines see all nodes):
▪ Lattanzi et al. [SPAA'11]: O(1) rounds
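Luby's algorithm, the O(log n)-round baseline above, in a minimal centralized sketch (illustrative; a real LOCAL/MPC implementation runs each round in parallel across nodes or machines):

```python
import random

def luby_mis(adj):
    """Luby's randomized MIS: in each round, every surviving node picks a random
    value; local minima join the MIS and are removed together with their neighbors.
    Terminates in O(log n) rounds with high probability. Illustrative only."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}    # working copy
    mis = set()
    while adj:
        r = {v: random.random() for v in adj}
        # A node joins if it beats all of its remaining neighbors.
        winners = {v for v in adj if all(r[v] < r[u] for u in adj[v])}
        mis |= winners
        removed = winners | {u for v in winners for u in adj[v]}
        adj = {v: nbrs - removed for v, nbrs in adj.items() if v not in removed}
    return mis

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: set()}      # a triangle plus an isolated node
print(luby_mis(adj))                                    # e.g. {0, 3}
```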
MIS: State of the Art on Trees
M machines with S memory per machine, M · S = Õ(m + n)

Strongly Sublinear Memory, S = Õ(n^ε) (no machine sees all nodes):
▪ Luby's Algorithm: O(log n) rounds
▪ Ghaffari and Uitto [SODA'19]: Õ(√log n) rounds
▪ Our Result: O(log^3 log n) rounds

Linear Memory, S = Õ(n) (machines see all nodes):
▪ Ghaffari et al. [PODC'18]: O(log log n) rounds
▪ Trivial solution: O(1) rounds (a tree has only n - 1 edges, so a single machine with Õ(n) memory can store the whole graph and solve locally)

Superlinear Memory, S = Õ(n^(1+δ)) (machines see all nodes):
▪ Lattanzi et al. [SPAA'11]: O(1) rounds
▪ Trivial solution: O(1) rounds
Our Result
O(log^3 log n)-round MPC algorithm with S = Õ(n^ε) memory that w.h.p. computes an MIS on trees.

Compare:
▪ Ghaffari and Uitto [SODA'19]: Õ(√log n) rounds with S = Õ(n^ε) memory
▪ Ghaffari et al. [PODC'18]: O(log log n) rounds with S = Õ(n) memory
▪ Ghaffari, Kuhn, and Uitto [FOCS'19]: conditional Ω(log log n)-round lower bound for S = Õ(n^ε)
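The result restated as a theorem, for reference (my phrasing; taking ε to be any constant in (0, 1) is an assumption based on the memory regime above):

```latex
% Assumes \usepackage{amsmath,amsthm} and \newtheorem{theorem}{Theorem}.
\begin{theorem}[informal]
For any constant $\varepsilon \in (0,1)$, there is an MPC algorithm with
$S = \tilde{O}(n^{\varepsilon})$ memory per machine that, with high probability,
computes a maximal independent set of an $n$-node tree in $O(\log^{3}\log n)$ rounds.
\end{theorem}
```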
Algorithm
Algorithm Outline

1) Shattering: break the graph into small components
   i) Degree Reduction
   ii) LOCAL Shattering (main LOCAL technique: Ghaffari [SODA'16], Beck [RSA'91])

2) Post-Shattering: solve the problem on the remaining components
   i) Gathering of Components
   ii) Local Computation
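A schematic sketch of this shatter-then-finish pattern (illustrative Python; the shattering phase is stubbed with plain Luby-style rounds rather than the paper's degree reduction and LOCAL shattering, and all helper choices are my own):

```python
import random

def shatter_then_finish(adj, shatter_rounds=3):
    """Skeleton of the shattering framework on {node: set(neighbors)}:
    a few randomized rounds put most nodes into the MIS or remove them;
    the small leftover components are then gathered and finished locally."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    mis = set()

    # 1) Shattering (stub): a few Luby-style rounds shrink the graph.
    for _ in range(shatter_rounds):
        r = {v: random.random() for v in adj}
        winners = {v for v in adj if all(r[v] < r[u] for u in adj[v])}
        mis |= winners
        removed = winners | {u for v in winners for u in adj[v]}
        adj = {v: nbrs - removed for v, nbrs in adj.items() if v not in removed}

    # 2) Post-shattering: gather each remaining component, finish it locally.
    unvisited = set(adj)
    while unvisited:
        comp, stack = set(), [next(iter(unvisited))]
        while stack:                                   # gather one component
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] & unvisited)
        unvisited -= comp
        for v in comp:                                 # local greedy finish
            if all(u not in mis for u in adj[v]):
                mis.add(v)
    return mis
```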
Polynomial Degree Reduction: Subsample-and-Conquer

Subsample: subsample nodes independently

Conquer: compute a random MIS in the subsampled graph
▪ gather connected components
▪ locally compute a random 2-coloring
▪ add a color class to the MIS
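A minimal sketch of one Subsample-and-Conquer step on a forest (illustrative Python; the sampling probability p and the helper structure are my own assumptions, not the paper's parameters):

```python
import random

def subsample_and_conquer_step(adj, p=0.5):
    """One step on a forest given as {node: set(neighbors)}: sample each node
    independently, gather the components of the sampled forest, 2-color each
    tree component, and add one random color class to the independent set.
    Returns (chosen nodes, residual graph). Illustrative only."""
    sampled = {v for v in adj if random.random() < p}
    sub = {v: adj[v] & sampled for v in sampled}        # induced sampled forest

    chosen = set()
    unvisited = set(sub)
    while unvisited:                                    # gather one component at a time
        root = next(iter(unvisited))
        color, stack = {root: 0}, [root]
        while stack:                                    # DFS 2-coloring of the tree component
            v = stack.pop()
            for u in sub[v]:
                if u not in color:
                    color[u] = 1 - color[v]
                    stack.append(u)
        unvisited.difference_update(color)
        keep = random.randrange(2)                      # pick a random color class
        chosen |= {v for v, c in color.items() if c == keep}

    # Remove the chosen nodes and their neighbors from the original graph.
    removed = chosen | {u for v in chosen for u in adj[v]}
    residual = {v: adj[v] - removed for v in adj if v not in removed}
    return chosen, residual

# Tiny example: the path 0-1-2-3-4 (a tree).
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(subsample_and_conquer_step(adj))
```

The chosen nodes are independent in the original graph: any two adjacent sampled nodes land in the same component, where the proper 2-coloring keeps them in different classes.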