  1. Lecture 13: SOS Lower Bounds for Planted Clique Part II

  2. Lecture Outline • Part I: Relaxed k-clique Equations and Theorem Statement • Part II: Pseudo-Calibration/Moment Matching • Part III: Decomposition of Graph Matrices via Minimum Vertex Separators • Part IV: Attempt #1: Bounding with Square Terms • Part V: Approximate PSD Decomposition • Part VI: Further Work and Open Problems

  3. Part I: Relaxed k-clique Equations and Theorem Statement

  4. Relaxed Planted Clique Equations • Flaw in the current analysis: Need to relax the k-clique equations slightly to make the combinatorics easier to analyze • Relaxed k-clique Equations: x_i^2 = x_i for all i, x_i x_j = 0 if (i,j) ∉ E(G), and (1 − ε)k ≤ Σ_i x_i ≤ (1 + ε)k

  5. Planted Clique SOS Lower Bound • Theorem 1.1 of [BHK+16]: ∃c > 0 such that if k ≤ n^{1/2 − c√(d/log n)}, with high probability degree d SOS cannot prove that the relaxed k-clique equations are infeasible. • Note: For d = 4 there is a lower bound of Ω̃(√n) for the original k-clique equations.

  6. High Level Idea • High level idea: Show that it is hard to distinguish between the random distribution G(n, 1/2) and the planted distribution where we put each vertex in the planted clique with probability k/n. • Remark: We take this planted distribution to make the combinatorics easier. If we could analyze the planted distribution where the clique has size exactly k, we would satisfy the constraint Σ_i x_i = k exactly.

  7. Part II: Pseudo-Calibration/Moment Matching

  8. Choosing Pseudo-Expectation Values • Last lecture, Pessimist disproved our first attempt for pseudo-expectation values, the MW moments. • How can we come up with better pseudo-expectation values?

  9. Pseudo-Calibration/Moment Matching • Setup: We are trying to distinguish between a random distribution (G(n, 1/2)) and a planted distribution (G(n, 1/2) + planted clique) • Pseudo-calibration/moment matching: The pseudo-expectation values over the random distribution should match the actual expected values over the planted distribution in expectation for all low degree tests.

  10. Review: Discrete Fourier Analysis • Requirements for discrete Fourier analysis 1. An inner product 2. An orthonormal basis of Fourier characters • This gives us Fourier decompositions and Parseval’s Theorem

  11. Fourier Analysis over the Hypercube • Example: Fourier analysis on {−1,1}^n • Inner product: f · g = (1/2^n) Σ_x f(x)g(x) • Fourier characters: χ_A(x) = Π_{i∈A} x_i • Fourier decomposition: f = Σ_A f̂_A χ_A where f̂_A = f · χ_A • Parseval's Theorem: Σ_A f̂_A^2 = f · f
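
The hypercube setup above can be sanity-checked numerically. The following sketch is not from the lecture (n = 3 and the majority function are arbitrary demo choices); it verifies that the characters χ_A are orthonormal, that f decomposes as Σ_A f̂_A χ_A, and that Parseval's Theorem holds:

```python
from itertools import combinations, product
from math import prod

n = 3
points = list(product([-1, 1], repeat=n))
subsets = [A for r in range(n + 1) for A in combinations(range(n), r)]

def inner(f, g):
    # <f,g> = 2^{-n} sum_x f(x) g(x): uniform measure on {-1,1}^n
    return sum(f(x) * g(x) for x in points) / 2 ** n

def chi(A):
    # Fourier character chi_A(x) = prod_{i in A} x_i (the empty product is 1)
    return lambda x: prod(x[i] for i in A)

# Requirement 2: the chi_A are orthonormal
for A in subsets:
    for B in subsets:
        assert inner(chi(A), chi(B)) == (1 if A == B else 0)

# Fourier decomposition and Parseval for f = majority on 3 bits
f = lambda x: 1 if sum(x) > 0 else -1
fhat = {A: inner(f, chi(A)) for A in subsets}
for x in points:
    assert abs(f(x) - sum(fhat[A] * chi(A)(x) for A in subsets)) < 1e-9
assert abs(sum(c * c for c in fhat.values()) - inner(f, f)) < 1e-9  # Parseval
```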

  12. Fourier Analysis over G(n, 1/2) • Inner product: f · g = E_{G∼G(n,1/2)}[f(G)g(G)] • Fourier characters: χ_E(G) = (−1)^{|E∖E(G)|}
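
As a small illustrative check (again not from the lecture; n = 3 is an arbitrary choice, giving 3 possible edges and 2^3 graphs), the characters χ_E(G) = (−1)^{|E∖E(G)|} are orthonormal under this inner product:

```python
from itertools import combinations

n = 3
all_edges = list(combinations(range(n), 2))
# every subset of the possible edges: these serve both as the possible
# graphs G (encoded as the set of present edges) and as the index sets E
edge_sets = [frozenset(E) for r in range(len(all_edges) + 1)
             for E in combinations(all_edges, r)]

def chi(E):
    # chi_E(G) = (-1)^{|E \ E(G)|}
    return lambda G: (-1) ** len(E - G)

def inner(f, g):
    # <f,g> = E_{G ~ G(n,1/2)}[f(G) g(G)]: uniform average over all graphs
    return sum(f(G) * g(G) for G in edge_sets) / len(edge_sets)

# The characters chi_E form an orthonormal basis
for E in edge_sets:
    for F in edge_sets:
        assert inner(chi(E), chi(F)) == (1 if E == F else 0)
```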

  13. Pseudo-Calibration Equation • Pseudo-Calibration Equation: E_{G∼G(n,1/2)}[Ẽ[x_V] · χ_E] = E_{G∼planted dist}[x_V · χ_E] • We want this equation to hold for all small V and E

  14. Pseudo-Calibration Calculation • To calculate E_{G∼planted dist}[x_V · χ_E], first choose the planted clique and then choose the rest of the graph • x_V = 0 if any i ∈ V is not in the planted clique • E[χ_E(G)] = 0 whenever E is not fully contained in the planted clique • Def: Define V(E) = endpoints of edges in E • If V ∪ V(E) ⊆ planted clique then x_V χ_E = 1 • E_{G∼planted dist}[x_V · χ_E] = (k/n)^{|V∪V(E)|}

  15. Calculation Picture • [Figure: a graph with the sets V and V(E) drawn inside the planted clique] • If all the vertices are in the planted clique then x_V χ_E(G) = 1. Otherwise, either x_V = 0 (because an i ∈ V is missing) or E[χ_E] = 0 because each edge outside the clique is present with probability 1/2
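
The calculation on these two slides can be verified exactly on a tiny instance. In the sketch below (illustrative only; n = 4 and p = k/n = 1/4 are arbitrary choices, and Fractions keep the arithmetic exact), each vertex joins the planted clique independently with probability p, and we check that E_{planted}[x_V · χ_E] = p^{|V∪V(E)|}:

```python
from fractions import Fraction
from itertools import combinations

n, p = 4, Fraction(1, 4)

def planted_expectation(V, E):
    # Sum over all possible cliques S of P(S) * E[x_V chi_E | S].
    # Conditioned on S: x_V = [V ⊆ S]; an edge of E inside S is present
    # for sure (factor +1), and any edge of E not inside S is uniform and
    # averages to 0, so E[x_V chi_E | S] = [V ∪ V(E) ⊆ S].
    total = Fraction(0)
    for r in range(n + 1):
        for S in combinations(range(n), r):
            prob = p ** len(S) * (1 - p) ** (n - len(S))
            if set(V) <= set(S) and all(u in S and v in S for (u, v) in E):
                total += prob
    return total

V, E = (0,), ((1, 2),)
covered = set(V) | {v for e in E for v in e}   # V ∪ V(E), here {0, 1, 2}
assert planted_expectation(V, E) == p ** len(covered)
```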

  16. Fourier Coefficients of Ẽ[x_V] • From the pseudo-calibration calculation, the Fourier coefficient of Ẽ[x_V] at χ_E is E_{G∼G(n,1/2)}[Ẽ[x_V] · χ_E] = (k/n)^{|V∪V(E)|} • We take Ẽ[x_V] = Σ_{E: |V∪V(E)| ≤ D} (k/n)^{|V∪V(E)|} χ_E where D is a truncation parameter and then normalize so that Ẽ[x_∅] = Ẽ[1] = 1 • Good exercise: What happens if we don't truncate at all?

  17. Graph Matrix Decomposition • Ignoring the normalization, M = Σ_H (k/n)^{|V(H)|} R_H where we sum over ALL H with at most D vertices which have no isolated vertices outside of U and V.

  18. Part III: Decomposition of Graph Matrices via Minimum Vertex Separators

  19. Proof Sketch • How can we show M ≽ 0 with high probability? • High level idea: 1. Find an approximate PSD decomposition M_fact of M 2. Handle the error M_fact − M. Unfortunately, this error is not small enough to ignore, so we carefully show that M_fact − M ≼ M_fact with high probability. We briefly sketch the ideas for this in Appendix I. For the full details, see [BHK+16]

  20. Technical Minefield • Warning: This analysis is a technical minefield • [Figure: minefield cartoon with labels "Mine handled correctly" and "Not quite correct, see Appendix II"]

  21. Decomposition via Separators • How can we handle all of the different R_H? • Key idea: Decompose each H into three parts σ, τ, σ'^T based on the leftmost and rightmost minimum vertex separators S and T of H • [Figure: H drawn from its left side U to its right side V, with S and T marking off σ, τ, and σ'^T]

  22. Separator Definitions • Definition: Given a graph H with distinguished sets of vertices U and V, a vertex separator S is a set of vertices such that any path from U to V must intersect S. • Definition: A leftmost minimum vertex separator S is a minimum vertex separator such that for any vertex separator S' of minimum size, any path from U to S' intersects S. • A rightmost minimum vertex separator is defined analogously.

  23. Existence of Minimum Separators • Lemma 6.3 of [BHK+16]: Leftmost and rightmost minimum vertex separators always exist and are unique.
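
For intuition, the lemma can be checked by brute force on a small example. The sketch below (a hypothetical toy graph, not one of the shapes from the lecture) enumerates all vertex subsets to find the minimum vertex separators between U and V, then confirms that the leftmost one exists and is unique:

```python
from itertools import combinations

# Toy graph: U = {0,1,2} funnels through vertex 3, then a path 3-4-5 to V = {5}
vertices = range(6)
edges = {(0, 3), (1, 3), (2, 3), (3, 4), (4, 5)}
adj = {v: set() for v in vertices}
for (u, v) in edges:
    adj[u].add(v); adj[v].add(u)
U, V = {0, 1, 2}, {5}

def separates(S, A, B):
    # True iff every path from A to B intersects S: BFS in the graph minus S
    seen = set(A) - set(S)
    stack = list(seen)
    while stack:
        x = stack.pop()
        for y in adj[x] - set(S) - seen:
            seen.add(y); stack.append(y)
    return not (seen & (set(B) - set(S)))

# Find all minimum vertex separators by increasing size
min_seps = []
for r in range(len(vertices) + 1):
    found = [set(S) for S in combinations(vertices, r) if separates(S, U, V)]
    if found:
        min_seps = found
        break

# Leftmost minimum separator: separates U from every minimum separator
leftmost = [S for S in min_seps
            if all(separates(S, U, T) for T in min_seps)]
assert len(leftmost) == 1          # exists and is unique, as Lemma 6.3 says
assert leftmost[0] == {3}
```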

  24. Left, Middle, and Right Parts • Let S, T be the leftmost and rightmost minimum vertex separators of H • Definition: We take the left part σ of H to be the part of H between U and S, we take the middle part τ of H to be the part of H between S and T, and we take the right part σ'^T of H to be the part of H between T and V

  25. Conditions on Parts • σ, τ, σ'^T satisfy the following: • The unique minimum vertex separator of σ is V_σ = S (where V_σ is the right side of σ) • The leftmost and rightmost minimum vertex separators of τ are U_τ = S and V_τ = T (where U_τ and V_τ are the left and right sides of τ) • The unique minimum vertex separator of σ'^T is U_{σ'^T} = T (where U_{σ'^T} is the left side of σ'^T)

  26. Approximate Decomposition • Claim: If s is the size of the minimum vertex separator of H, R_H ≈ R_σ R_τ R_{σ'}^T • Idea: There is a bijection between injective mappings π: V(H) → V(G) and triples of injective mappings π_1: V(σ) → V(G), π_2: V(τ) → V(G), and π_3: V(σ'^T) → V(G) such that 1. π_1, π_2 agree on S and π_2, π_3 agree on T 2. Collectively, π_1, π_2, π_3 don't map two different vertices of H to the same vertex of G

  27. Approximate Decomposition • Claim: If s is the size of the minimum vertex separator of H, R_H ≈ R_σ R_τ R_{σ'}^T • Corollary: (k/n)^{|V(H)|} R_H ≈ [(k/n)^{|V(σ)| − s/2} R_σ] · [(k/n)^{|V(τ)| − s} R_τ] · [(k/n)^{|V(σ')| − s/2} R_{σ'}^T] (the exponents add up to |V(H)| since |V(σ)| + |V(τ)| + |V(σ')| = |V(H)| + 2s)

  28. [Figure: the decomposition R_H ≈ R_σ × R_τ × R_{σ'}^T, drawing each part with its sides among U, S, T, V]

  29. Intersection Terms • Warning! There will be terms where π_1, π_2, π_3 map multiple vertices to the same vertex. We call these intersection terms. • We sketch how to handle intersection terms in Appendix I. For now, we sweep this under the rug.

  30. Part IV: Attempt #1: Bounding With Square Terms

  31. Bounding With Square Terms • How can we handle all of the R_σ R_τ R_{σ'}^T terms? • One idea: We can bound R_σ R_τ R_{σ'}^T + (R_σ R_τ R_{σ'}^T)^T using the fact that (a R_σ − b (R_τ R_{σ'}^T)^T)(a R_σ − b (R_τ R_{σ'}^T)^T)^T ≽ 0, which gives ab (R_σ R_τ R_{σ'}^T + (R_σ R_τ R_{σ'}^T)^T) ≼ a^2 R_σ R_σ^T + b^2 (R_τ R_{σ'}^T)^T (R_τ R_{σ'}^T)
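
The square-term trick rests on the matrix identity a^2 AA^T + b^2 BB^T − ab(AB^T + BA^T) = (aA − bB)(aA − bB)^T ≽ 0. The sketch below (illustrative only; random 3×3 matrices stand in for R_σ and (R_τ R_{σ'}^T)^T, and a, b are arbitrary) checks this identity entrywise and then checks positivity via x^T M x = |(aA − bB)^T x|^2 ≥ 0:

```python
import random

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def scale_add(c1, X, c2, Y):
    # c1*X + c2*Y, entrywise
    return [[c1 * X[i][j] + c2 * Y[i][j] for j in range(len(X[0]))]
            for i in range(len(X))]

random.seed(0)
d, a, b = 3, 2.0, 0.5
A = [[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]
B = [[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]

# M = a^2 AA^T + b^2 BB^T - ab(AB^T + BA^T) should equal (aA-bB)(aA-bB)^T
cross = scale_add(1.0, matmul(A, transpose(B)), 1.0, matmul(B, transpose(A)))
M = scale_add(1.0, scale_add(a * a, matmul(A, transpose(A)),
                             b * b, matmul(B, transpose(B))),
              -a * b, cross)
C = scale_add(a, A, -b, B)
S = matmul(C, transpose(C))
assert all(abs(M[i][j] - S[i][j]) < 1e-9 for i in range(d) for j in range(d))

# Since M is a square, x^T M x >= 0 for every x, i.e. M is PSD
for _ in range(100):
    x = [random.gauss(0, 1) for _ in range(d)]
    quad = sum(x[i] * M[i][j] * x[j] for i in range(d) for j in range(d))
    assert quad >= -1e-9
```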
