Lecture 6: Linear Programming for Sparsest Cut. Sparsest Cut and SOS - PowerPoint PPT Presentation


  1. Lecture 6: Linear Programming for Sparsest Cut

  2. Sparsest Cut and SOS • The SOS hierarchy captures the algorithms for sparsest cut, but they were discovered directly without thinking about SOS (and this is how we’ll present them) • Why we are covering sparsest cut in detail: 1. Quite interesting in its own right 2. Illustrates the kinds of things SOS can capture 3. Determining whether SOS can do better is a major open problem for SOS.

  3. Lecture Outline • Part I: Sparsest cut • Part II: Linear programming relaxation and analysis via metric embeddings • Part III: Bourgain’s Theorem • Part IV: Tight example: expanders

  4. Part I: Sparsest Cut

  5. Flaw of Minimum Cut • We’ve seen that MIN-CUT can be solved efficiently • However, MIN-CUT may not be the best way to decompose a graph • Example:

  6. Flaw of Minimum Cut • MIN-CUT: • Desired Cut:

  7. Sparsest Cut Problem • Idea: Divide the # of cut edges by the # of pairs which could have been cut • Definition: Given a cut C = (S, S̄), define ρ(C) = (# of edges cut) / (|S| ⋅ |S̄|) • Sparsest cut problem: Minimize ρ(C) • Can also have a weighted version: ρ(C) = Σ_{i,j: i∈S, j∈S̄, (i,j)∈E(G)} w(i, j) / Σ_{i,j: i∈S, j∈S̄} w(i, j)
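The definition above can be checked directly by brute force on a small graph. The sketch below (function names are mine, not from the lecture) enumerates every nontrivial cut, which is exponential in |V| and only meant to illustrate the objective, not to be an efficient algorithm:

```python
from itertools import combinations

def sparsity(edges, S, V):
    """rho(C) = (# edges cut) / (|S| * |S-bar|) for the cut C = (S, S-bar)."""
    cut = sum(1 for (u, v) in edges if (u in S) != (v in S))
    return cut / (len(S) * len(V - S))

def sparsest_cut(edges, V):
    """Exact sparsest cut by enumerating all nontrivial cuts (exponential time)."""
    best = None
    for r in range(1, len(V)):
        for S in combinations(sorted(V), r):
            rho = sparsity(edges, set(S), V)
            if best is None or rho < best[0]:
                best = (rho, set(S))
    return best

# Two triangles joined by a single bridge edge: the sparsest cut is the bridge,
# with rho = 1 / (3 * 3) = 1/9.
V = {0, 1, 2, 3, 4, 5}
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
rho, S = sparsest_cut(edges, V)
```

This also shows why minimum cut alone is misleading: cutting off a single vertex of the triangle gives fewer cut edges per vertex pair only after the |S| ⋅ |S̄| normalization is applied.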

  8. Linear Programming for Sparsest Cut • Theorem [LR99]: There is a linear programming relaxation for sparsest cut which gives an O(log n) approximation.

  9. Part II: Linear Programming Relaxation and Analysis via Metric Embeddings

  10. Metric and Pseudo-metric Spaces • Definition: A metric space (X, d) is a set of points X and a distance function d: X × X → ℝ≥0 where 1. ∀x₁, x₂ ∈ X, d(x₁, x₂) = d(x₂, x₁) 2. ∀x₁, x₂ ∈ X, d(x₁, x₂) = 0 ⬄ x₁ = x₂ 3. ∀x₁, x₂, x₃ ∈ X, d(x₁, x₃) ≤ d(x₁, x₂) + d(x₂, x₃) • Example 1: Euclidean space: d(x, y) = ‖y − x‖ • Example 2: ℓ₁ distance: d(x, y) = Σᵢ |yᵢ − xᵢ| • Without the second condition, this is called a pseudo-metric space

  11. Cut Spaces • A cut C = (S, S̄) induces a pseudo-metric space on a graph G: Take d(u, v) = 0 if u, v ∈ S or u, v ∈ S̄, and otherwise take d(u, v) = c for some c > 0. • We call this a cut space.

  12. Problem Reformulation • Reformulation: Minimize Σ_{i<j, (i,j)∈E(G)} d(i, j) / Σ_{i<j} d(i, j) over all cut spaces • First issue: Objective function is nonlinear • Fix: Set the denominator equal to 1. • Modified Reformulation: Minimize Σ_{i<j, (i,j)∈E(G)} d(i, j) over all cut spaces normalized so that Σ_{i<j} d(i, j) = 1

  13. Problem Relaxation • Want to minimize Σ_{i<j, (i,j)∈E(G)} d(i, j) over all cut spaces normalized so that Σ_{i<j} d(i, j) = 1 • Relaxation: Minimize Σ_{i<j, (i,j)∈E(G)} d(i, j) over all pseudo-metrics normalized so that Σ_{i<j} d(i, j) = 1. Linear program constraints: 1. ∀i, j, d(i, j) = d(j, i) ≥ 0 2. ∀i, j, k, d(i, k) ≤ d(i, j) + d(j, k) 3. Σ_{i<j} d(i, j) = 1
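To see why this is a relaxation, note that every normalized cut pseudo-metric satisfies the three LP constraints, so the LP optimum lower-bounds the sparsest cut value. A minimal sketch of that feasibility check (helper names are mine):

```python
from itertools import combinations, permutations

def cut_metric(S, V):
    """Pseudo-metric of the cut (S, S-bar): d(u,v) = 1 iff u, v are separated."""
    return {(u, v): (1.0 if (u in S) != (v in S) else 0.0)
            for u, v in combinations(sorted(V), 2)}

def normalize(d):
    """Scale so that the sum over all pairs i < j equals 1 (LP constraint 3)."""
    total = sum(d.values())
    return {p: v / total for p, v in d.items()}

def is_lp_feasible(d, V, tol=1e-9):
    """Check nonnegativity, every triangle inequality, and normalization.
    Symmetry is implicit in storing only pairs with i < j."""
    get = lambda u, v: d[(u, v)] if u < v else d[(v, u)]
    if any(v < -tol for v in d.values()):
        return False
    for i, j, k in permutations(sorted(V), 3):
        if get(i, k) > get(i, j) + get(j, k) + tol:
            return False
    return abs(sum(d.values()) - 1.0) <= tol

V = {0, 1, 2, 3}
d = normalize(cut_metric({0, 1}, V))
```

A triangle inequality for a cut metric always holds: if i and k are separated, then any j lies on one side, so at least one of the pairs (i, j), (j, k) is also separated.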

  14. ℓ₁ Spaces • Definition: We say that a pseudo-metric (X, d) is an ℓ₁ space if there is a mapping f: X → ℝⁿ such that ∀x, y ∈ X, d(x, y) = Σᵢ |f(y)ᵢ − f(x)ᵢ| • In this case, we may as well pretend we are already in ℝⁿ with the ℓ₁ distance function • Lemma: For the sparsest cut relaxation, there is no gap between ℓ₁ spaces and cut spaces!

  15. ℓ₁ Space Example • If x₁ = (1, 2), x₂ = (0, 3), and x₃ = (4, 4), then in the ℓ₁ metric, d(x₁, x₂) = 2, d(x₁, x₃) = 5, and d(x₂, x₃) = 5 (figure: the three points plotted on a grid)
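The three distances on the slide can be reproduced with a one-line ℓ₁ distance function:

```python
def l1(x, y):
    """l1 (Manhattan) distance: sum of coordinate-wise absolute differences."""
    return sum(abs(b - a) for a, b in zip(x, y))

# The slide's example points.
x1, x2, x3 = (1, 2), (0, 3), (4, 4)
```

For instance d(x₁, x₃) = |4 − 1| + |4 − 2| = 5.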

  16. Decomposing ℓ₁ Pseudo-metrics • Lemma: Any finite ℓ₁ space can be decomposed as a linear combination of cut spaces. • Proof sketch: We can work coordinate by coordinate. For a single coordinate, the line metric is a weighted sum of threshold cuts: each gap between consecutive coordinate values contributes a cut separating the points below the gap from the points above it, weighted by the size of the gap. (figure: a one-coordinate example on the axis from −2 to 2, written as a weighted sum 1 × + 2 × + 1 × of three threshold cuts)
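The single-coordinate step of the proof sketch can be carried out explicitly. The sketch below (function names are mine) builds one threshold cut per gap between consecutive coordinate values and checks that the weighted cuts recombine to the original line metric; the chosen points give the weights 1, 2, 1, consistent with the slide's picture:

```python
def line_metric_as_cuts(values):
    """Decompose the 1-D l1 metric on `values` into weighted threshold cuts.
    Returns (weight, S) pairs, where S holds the indices on the low side."""
    pts = sorted(set(values))
    cuts = []
    for lo, hi in zip(pts, pts[1:]):
        S = frozenset(i for i, v in enumerate(values) if v <= lo)
        cuts.append((hi - lo, S))  # weight = size of the gap
    return cuts

def recombined_distance(cuts, i, j):
    """Sum the weights of the cuts that separate points i and j."""
    return sum(w for w, S in cuts if (i in S) != (j in S))

values = [-2, -1, 1, 2]  # point i sits at coordinate values[i]
cuts = line_metric_as_cuts(values)
```

Summing this decomposition over all n coordinates of an ℓ₁ embedding proves the lemma for any finite ℓ₁ space.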

  17. Useful Lemma • Lemma: If a, b ≥ 0 and c, d > 0 then min(a/c, b/d) ≤ (a + b)/(c + d) ≤ max(a/c, b/d) • Proof: Without loss of generality, assume that a/c ≤ b/d. Take a′ = bc/d ≥ a and take b′ = ad/c ≤ b. Now a/c = (a + b′)/(c + d) ≤ (a + b)/(c + d) ≤ (a′ + b)/(c + d) = b/d • Together with the previous decomposition, this shows that for any ℓ₁ space, there is always a cut space which is as good or better.
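As a sanity check, the lemma (sometimes called the mediant inequality) can be verified exhaustively over a small grid with exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product

def mediant_bounds(a, b, c, d):
    """Check min(a/c, b/d) <= (a+b)/(c+d) <= max(a/c, b/d) exactly."""
    lo, hi = sorted([Fraction(a, c), Fraction(b, d)])
    return lo <= Fraction(a + b, c + d) <= hi

# Exhaustive check for a, b >= 0 and c, d > 0 over a small range.
ok = all(mediant_bounds(a, b, c, d)
         for a, b, c, d in product(range(6), range(6), range(1, 6), range(1, 6)))
```

Applied to the decomposition: the value of an ℓ₁ metric is a ratio of sums over its constituent cuts, so it is sandwiched between the worst and best cut ratios, and the best cut is at least as good.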

  18. Metric Embeddings and Distortion • Often want to embed a more complicated metric space into a simpler one. This embedding won’t be perfect, but may still be useful • Given metric spaces (X, d), (Y, d′) and a map f: X → Y: 1. Define the expansion of f to be max_{u,v∈X} d′(f(u), f(v)) / d(u, v) 2. Define the contraction of f to be max_{u,v∈X} d(u, v) / d′(f(u), f(v)) 3. Define the distortion of f to be the product of the expansion and the contraction of f
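These three quantities are easy to compute for finite spaces. A minimal sketch (the example and names are mine, not from the lecture): embed the shortest-path metric of a 4-cycle into the real line by sending vertex i to coordinate i, which stretches the pair (0, 3) by a factor of 3:

```python
from itertools import combinations

def expansion_contraction(points, d, d2, f):
    """Expansion = max d'(f(u),f(v))/d(u,v); contraction = max d(u,v)/d'(f(u),f(v));
    distortion = their product. Assumes no pair is at distance 0 in either space."""
    exp = max(d2(f(u), f(v)) / d(u, v) for u, v in combinations(points, 2))
    con = max(d(u, v) / d2(f(u), f(v)) for u, v in combinations(points, 2))
    return exp, con, exp * con

d_cycle = lambda u, v: min(abs(u - v), 4 - abs(u - v))  # shortest path on C4
d_line = lambda a, b: abs(a - b)                        # target metric
f = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}                    # "unroll" the cycle
exp, con, dist = expansion_contraction([0, 1, 2, 3], d_cycle, d_line, f.get)
```

Here every pair except (0, 3) is preserved exactly, so the contraction is 1 and the distortion equals the expansion, 3.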

  19. Metric Embeddings into ℓ₁ • If the pseudo-metric given by our linear program can be embedded into ℓ₁ with distortion β, this gives a β-approximation for the value of the sparsest cut. • Question: How well can general finite pseudo-metric spaces be embedded into ℓ₁?

  20. Part III: Bourgain’s Theorem

  21. Bourgain’s Theorem • Theorem [Bou85]: Every metric on n points can be embedded into an ℓ₁ metric with distortion O(log n). Moreover, O(log² n) coordinates are sufficient • Note: the bound on the number of coordinates is due to Linial, London, and Rabinovich [LLR95]

  22. Fréchet Embeddings • Def: Given a set of points S, define d(x, S) = min_{s∈S} d(x, s) • Fréchet embedding: Gives a value to each point based on its distance from some subset S of points and takes the difference between these values. In other words, d_S(x, y) = |d(y, S) − d(x, S)| • Proposition: For any S, d_S(x, y) ≤ d(x, y)

  23. Fréchet Embedding Example • Start with the distance metric d(u, v) = length of the shortest path from u to v on the graph shown. If we take S to be the set of red vertices, we get the values shown for d(v, S). (figure: a graph with each vertex labeled by its distance to the nearest red vertex)

  24. Fréchet Embeddings Bound • d(x, S) = min_{s∈S} d(x, s) • d_S(x, y) = |d(y, S) − d(x, S)| • Proposition: For any S, d_S(x, y) ≤ d(x, y) • Proof: Let s be the point in S of minimal distance from x. Then d(y, S) ≤ d(y, s) ≤ d(x, y) + d(x, s) = d(x, y) + d(x, S) • By symmetry, d(x, S) ≤ d(x, y) + d(y, S), so d_S(x, y) = |d(y, S) − d(x, S)| ≤ d(x, y), as needed.
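The proposition says every Fréchet coordinate is non-expansive. A small sketch (names mine) checks this over every nonempty subset S of a path metric, in the spirit of the example two slides back:

```python
from itertools import combinations

def frechet_coord(d, S, x):
    """d(x, S) = min over s in S of d(x, s)."""
    return min(d(x, s) for s in S)

def is_nonexpansive(points, d, S):
    """Verify |d(y, S) - d(x, S)| <= d(x, y) for every pair x, y."""
    return all(abs(frechet_coord(d, S, y) - frechet_coord(d, S, x)) <= d(x, y)
               for x, y in combinations(points, 2))

# Path graph 0 - 1 - 2 - 3 with shortest-path distance |u - v|.
d = lambda u, v: abs(u - v)
points = [0, 1, 2, 3]
ok = all(is_nonexpansive(points, d, set(S))
         for r in (1, 2, 3) for S in combinations(points, r))
```

This is exactly the property used next: summing many Fréchet coordinates can only expand distances by the total weight placed on them.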

  25. Bourgain’s Theorem Proof Idea • Proof idea: Choose many Fréchet embeddings, with a coordinate for each one. • Resulting expansion is at most the sum of the weights on the embeddings (this will be O(log n) for us) • Challenge: Ensure that the contraction is O(1). In other words, ensure that some of the Fréchet embeddings preserve some of the distance between each pair of points x and y.

  26. Bad Case #1 • Issue: Could have that d_S(x, y) ≪ d(x, y). In fact, d_S(x, y) can easily be zero! • Case 1: All points in S are far from x and y and d(x, S) = d(y, S). • Example: (figure: x and y with their nearest point in S far away)

  27. Bad Case #2 • Case 2: There are two points s_x and s_y in S where s_x is very close to x and s_y is very close to y. If so, can have that d(x, S) = d(x, s_x) = d(y, s_y) = d(y, S) • Example: (figure: s_x next to x and s_y next to y)

  28. Attempt #1 • Want S to contain exactly one point p which is very close to x or y. • Let d = d(x, y). Pick S so that S has precisely one point p which is within distance d/3 of either x or y. • Can be accomplished with constant probability by taking a random S of the appropriate size.

  29. Attempt #1 • Attempt #1: Pick S so that S has precisely one point p which is within distance d/3 of either x or y. • Danger: S also contains point(s) of distance slightly more than d/3 from the other point.

  30. Attempt #1 • Possible fix: Require that S contains exactly one point within distance d/3 of x or y and no other points within distance d/2 of x or y • This implies d_S(x, y) ≥ d/2 − d/3 = d/6 • However, may be too much to ask for…
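The d/6 arithmetic can be checked on a toy instance on the real line (the specific points are my own illustration): x = 0, y = 6, so d = 6; S has one point within d/3 = 2 of x and its only other point at distance at least d/2 = 3 from both x and y:

```python
def frechet(d, S, p):
    """d(p, S) = distance from p to the nearest point of S."""
    return min(d(p, s) for s in S)

d = lambda a, b: abs(a - b)
x, y = 0.0, 6.0          # d(x, y) = 6
S = {1.5, 10.0}          # 1.5 is within d/3 = 2 of x; 10.0 is at distance
                         # >= d/2 = 3 from both x and y
dS = abs(frechet(d, S, y) - frechet(d, S, x))
```

Here d(x, S) = 1.5 ≤ d/3 while d(y, S) = 4 ≥ d/2, so the coordinate keeps d_S(x, y) = 2.5, comfortably above the guaranteed d/6 = 1.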
