From Weak to Strong LP Gaps for all CSPs



  1. From Weak to Strong LP Gaps for all CSPs Mrinalkanti Ghosh joint work with: Madhur Tulsiani

  2. MAX k-CSP - n variables - m constraints

  3. MAX k-CSP - n variables taking boolean values. - m constraints: each is a k-ary boolean predicate. - Satisfy as many as possible.

  4. MAX k-CSP - n variables taking boolean values. - m constraints: each is a k-ary boolean predicate. - Satisfy as many as possible. Max-3-SAT: x_1 ∨ x_22 ∨ x_19, x_3 ∨ x_9 ∨ x_23, x_5 ∨ x_7 ∨ x_9, …

  5. MAX k-CSP - n variables taking boolean values. - m constraints: each is a k-ary boolean predicate. - Satisfy as many as possible. Max-3-SAT: x_1 ∨ x_22 ∨ x_19, x_3 ∨ x_9 ∨ x_23, x_5 ∨ x_7 ∨ x_9, … Max-Cut: x_1 ≠ x_2, x_2 ≠ x_5, x_3 ≠ x_4, … [Figure: a cut in a graph on vertices x_1, …, x_7.]

  6. MAX k-CSP - n variables taking boolean values. - m constraints: each is a k-ary boolean predicate. - Satisfy as many as possible. Max-3-SAT: x_1 ∨ x_22 ∨ x_19, x_3 ∨ x_9 ∨ x_23, x_5 ∨ x_7 ∨ x_9, … Max-Cut: x_1 ≠ x_2, x_2 ≠ x_5, x_3 ≠ x_4, … [Figure: a cut in a graph on vertices x_1, …, x_7.] Approximation Problem: Approximate the fraction of constraints simultaneously satisfiable.

  7. MAX k-CSP - n variables taking values in some finite domain. - m constraints: each is a non-negative k-ary function. - Satisfy as many as possible. Max-3-SAT: x_1 ∨ x_22 ∨ x_19, x_3 ∨ x_9 ∨ x_23, x_5 ∨ x_7 ∨ x_9, … Max-Cut: x_1 ≠ x_2, x_2 ≠ x_5, x_3 ≠ x_4, … [Figure: a cut in a graph on vertices x_1, …, x_7.] Approximation Problem: Approximate the fraction of constraints simultaneously satisfiable.

  8. CSPs and Relaxations MAX k-CSP(f): for the i-th constraint, let S_{C_i} := (x_{i_1}, …, x_{i_k}). Then:

C_i ≡ f(x_{i_1} + b_{i_1}, …, x_{i_k} + b_{i_k}) ≡ Σ_{α ∈ {0,1}^{S_{C_i}}} f(α + b_{C_i}) · x_{(S_{C_i}, α)},

with x_{(S_{C_i}, α)} = indicator of the assignment α to S_{C_i}.
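To make the expansion concrete, here is a small worked example (ours, with k = 2 for readability, not on the slides): take the constraint C = x_1 ∨ ¬x_2, i.e. f = OR and b_C = (0, 1). Then f(α + b_C) = 1 exactly for α ∈ {(0,0), (1,0), (1,1)}, so the expansion gives C ≡ x_{(S_C, 00)} + x_{(S_C, 10)} + x_{(S_C, 11)}: the sum of the indicators of the three assignments satisfying C.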

  9. CSPs and Relaxations MAX k-CSP(f): for the i-th constraint, let S_{C_i} := (x_{i_1}, …, x_{i_k}). Then:

C_i ≡ f(x_{i_1} + b_{i_1}, …, x_{i_k} + b_{i_k}) ≡ Σ_{α ∈ {0,1}^{S_{C_i}}} f(α + b_{C_i}) · x_{(S_{C_i}, α)},

with x_{(S_{C_i}, α)} = indicator of the assignment α to S_{C_i}.

Σ_{α ∈ {0,1}^{S_C} : α(i) = b} x_{(S_C, α)} = x_{(i, b)}   ∀ C ∈ Φ, i ∈ S_C, b ∈ {0,1}

Σ_{b ∈ {0,1}} x_{(i, b)} = 1   ∀ i ∈ [n]

x_{(S, α)} ≥ 0

  10. CSPs and Relaxations MAX k-CSP(f): for the i-th constraint, let S_{C_i} := (x_{i_1}, …, x_{i_k}). Then:

C_i ≡ f(x_{i_1} + b_{i_1}, …, x_{i_k} + b_{i_k}) ≡ Σ_{α ∈ {0,1}^{S_{C_i}}} f(α + b_{C_i}) · x_{(S_{C_i}, α)},

with x_{(S_{C_i}, α)} = indicator of the assignment α to S_{C_i}.

maximize E_{C ∈ Φ} [ Σ_{α ∈ {0,1}^{S_C}} f(α + b_C) · x_{(S_C, α)} ]

subject to:

Σ_{α ∈ {0,1}^{S_C} : α(i) = b} x_{(S_C, α)} = x_{(i, b)}   ∀ C ∈ Φ, i ∈ S_C, b ∈ {0,1}

Σ_{b ∈ {0,1}} x_{(i, b)} = 1   ∀ i ∈ [n]

x_{(S, α)} ≥ 0

  11. CSPs and Relaxations MAX k-CSP(f): for the i-th constraint, let S_{C_i} := (x_{i_1}, …, x_{i_k}). Then:

C_i ≡ f(x_{i_1} + b_{i_1}, …, x_{i_k} + b_{i_k}) ≡ Σ_{α ∈ {0,1}^{S_{C_i}}} f(α + b_{C_i}) · x_{(S_{C_i}, α)},

with x_{(S_{C_i}, α)} = indicator of the assignment α to S_{C_i}.

maximize E_{C ∈ Φ} [ Σ_{α ∈ {0,1}^{S_C}} f(α + b_C) · x_{(S_C, α)} ]

subject to:

Σ_{α ∈ {0,1}^{S_C} : α(i) = b} x_{(S_C, α)} = x_{(i, b)}   ∀ C ∈ Φ, i ∈ S_C, b ∈ {0,1}

Σ_{b ∈ {0,1}} x_{(i, b)} = 1   ∀ i ∈ [n]

x_{(S, α)} ≥ 0

This relaxation (the Basic LP) has #constraints = Θ(m · 2^k).
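As a concrete illustration, here is a minimal sketch (ours, not from the talk) of the Basic LP above for a toy Max-Cut instance, using scipy; the variable names EDGES and idx are illustrative. On the triangle, the LP value is 1 while only 2/3 of the edges are simultaneously cuttable, so this tiny instance already exhibits an integrality gap.

```python
# Basic LP for Max-Cut on a triangle: a minimal sketch, assuming scipy.
from itertools import product
from scipy.optimize import linprog

EDGES = [(0, 1), (1, 2), (0, 2)]  # triangle: at most 2/3 of edges cuttable
n = 3

# LP variables: x_(i,b) for each vertex i and bit b, plus x_(S,alpha)
# for each edge S and local assignment alpha in {0,1}^2.
idx = {}
for i in range(n):
    for b in (0, 1):
        idx[("v", i, b)] = len(idx)
for e in EDGES:
    for a in product((0, 1), repeat=2):
        idx[("e", e, a)] = len(idx)
N = len(idx)

# Objective E_C [ sum_alpha f(alpha) x_(S_C,alpha) ] with f = "not equal".
# linprog minimizes, so we negate.
c = [0.0] * N
for e in EDGES:
    for a in product((0, 1), repeat=2):
        if a[0] != a[1]:
            c[idx[("e", e, a)]] = -1.0 / len(EDGES)

A_eq, b_eq = [], []
# Consistency: sum over alpha with alpha(i) = b of x_(S,alpha) = x_(i,b).
for e in EDGES:
    for pos, i in enumerate(e):
        for b in (0, 1):
            row = [0.0] * N
            for a in product((0, 1), repeat=2):
                if a[pos] == b:
                    row[idx[("e", e, a)]] = 1.0
            row[idx[("v", i, b)]] = -1.0
            A_eq.append(row)
            b_eq.append(0.0)
# Normalization: x_(i,0) + x_(i,1) = 1 for every vertex i.
for i in range(n):
    row = [0.0] * N
    row[idx[("v", i, 0)]] = row[idx[("v", i, 1)]] = 1.0
    A_eq.append(row)
    b_eq.append(1.0)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * N)
print("Basic LP value:", -res.fun)  # ~1.0, versus integral optimum 2/3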

  12. Extended Formulation and Sherali-Adams Relaxation - Extended Formulation: Defined by a feasible polytope P, and a way of encoding instances Φ as a (linear) objective function w_Φ.

  13. Extended Formulation and Sherali-Adams Relaxation - Extended Formulation: Defined by a feasible polytope P, and a way of encoding instances Φ as a (linear) objective function w_Φ. - Optimize the objective ⟨w_Φ, x⟩ (depending on Φ) over P.

  14. Extended Formulation and Sherali-Adams Relaxation - Extended Formulation: Defined by a feasible polytope P, and a way of encoding instances Φ as a (linear) objective function w_Φ. - Optimize the objective ⟨w_Φ, x⟩ (depending on Φ) over P. - Introduce additional variables y. Optimize over the polytope P = {x | ∃y : Ex + Fy = g, y ≥ 0}. Size equals #variables + #constraints. [Image from [Fiorini-Rothvoss-Tiwari-11].]
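A standard example of why the auxiliary variables y help (a textbook illustration, not from these slides): the cross-polytope {x ∈ ℝ^n : Σ_i |x_i| ≤ 1} has 2^n facets, yet it equals {x | ∃y ≥ 0 : −y_i ≤ x_i ≤ y_i ∀i, Σ_i y_i ≤ 1}, an extended formulation with only O(n) variables and constraints.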

  15. Extended Formulation and Sherali-Adams Relaxation - Extended Formulation: Defined by a feasible polytope P, and a way of encoding instances Φ as a (linear) objective function w_Φ. - Sherali-Adams: a Sherali-Adams relaxation of level t is an Extended Formulation with #variables = (n choose t) · 2^t. [Image from [Fiorini-Rothvoss-Tiwari-11].]

  16. Extended Formulation and Sherali-Adams Relaxation - Extended Formulation: Defined by a feasible polytope P, and a way of encoding instances Φ as a (linear) objective function w_Φ. - Sherali-Adams: a Sherali-Adams relaxation of level t is an Extended Formulation with #variables = (n choose t) · 2^t. [Image from [Fiorini-Rothvoss-Tiwari-11].] - Variables: x_{(S, α)}, |S| ≤ t, α ∈ {0,1}^S.

  17. Extended Formulation and Sherali-Adams Relaxation EF: - Extended Formulation: Defined by a feasible polytope P, and a way of encoding instances Φ as a (linear) objective function w_Φ. SA: - Sherali-Adams: a Sherali-Adams relaxation of level t is an Extended Formulation with #variables = (n choose t) · 2^t. [Figure: overlapping sets S and T with local distributions D_S and D_T agreeing on the common marginal D_{S ∩ T}.] - Feasible point in SA(t): a family {D_S}_{|S| ≤ t} of consistent distributions, with D_S a distribution on {0,1}^S.

  18. Extended Formulation and Sherali-Adams Relaxation EF: - Extended Formulation: Defined by a feasible polytope P, and a way of encoding instances Φ as a (linear) objective function w_Φ. SA: - Sherali-Adams: a Sherali-Adams relaxation of level t is an Extended Formulation with #variables = (n choose t) · 2^t. [Figure: overlapping sets S and T with local distributions D_S and D_T agreeing on the common marginal D_{S ∩ T}.] - Feasible point in SA(t): a family {D_S}_{|S| ≤ t} of consistent distributions, with D_S a distribution on {0,1}^S. Basic: [Figure: constraints C_1 and C_2, each with its own local distribution.] - Similarly for a Basic LP solution.
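To make the consistency condition concrete, here is a minimal sketch (ours, not from the talk) that checks whether a family {D_S} of local distributions agrees on all common marginals; marginal and is_consistent are illustrative names. The example point is the level-2 family witnessing the Max-Cut gap on a triangle: each edge gets the uniform distribution on {01, 10}, each singleton the uniform distribution on {0, 1}.

```python
# Checking SA(t) consistency of a family of local distributions: a sketch.
from itertools import combinations, product

def marginal(dist, S, T):
    """Marginalize a distribution on the variable tuple T
    (dict: assignment tuple -> prob) onto the subset S of T."""
    pos = [T.index(i) for i in S]
    out = {}
    for a, p in dist.items():
        key = tuple(a[j] for j in pos)
        out[key] = out.get(key, 0.0) + p
    return out

def is_consistent(D, tol=1e-9):
    """D maps each subset S (a sorted tuple of variables, |S| <= t)
    to a distribution on {0,1}^S. Check pairwise agreement on S ∩ T."""
    for S, T in combinations(D, 2):
        common = tuple(sorted(set(S) & set(T)))
        if common not in D:
            continue
        mS = marginal(D[S], common, S)
        mT = marginal(D[T], common, T)
        for a in product((0, 1), repeat=len(common)):
            if abs(mS.get(a, 0.0) - mT.get(a, 0.0)) > tol:
                return False
    return True

# Level-2 point for the triangle: edges uniform on {01, 10},
# singletons uniform on {0, 1}. All pairwise marginals agree.
D = {(i,): {(0,): 0.5, (1,): 0.5} for i in range(3)}
for S in [(0, 1), (1, 2), (0, 2)]:
    D[S] = {(0, 1): 0.5, (1, 0): 0.5}
print(is_consistent(D))  # True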

  19. Result

  20. Result

  21. Result Main Theorem: For all CSPs, if Basic LP has an integrality gap of (c, s), then for all ε > 0 there exist large enough instance(s) with an integrality gap of (c − ε, s + ε) for SA(Õ_ε(log n)).

  22. Result With [Kothari-Meka-Raghavendra-17]: For all CSPs, if Basic LP has a (c, s) gap, then so does any LP Extended Formulation of size n^{Õ(log n)}. (Ignoring ε losses.)

  23. Hard Instance [Recap figures. Basic: one local distribution per constraint (C_1, C_2); SA: overlapping sets S, T with D_S and D_T agreeing on D_{S ∩ T}.]

  24. Hard Instance [Same recap figures as above.] Use the hard instance Φ_0 of the basic relaxation as a template to build the new hard instance on n variables and m = ∆ · n constraints.

  25. Hard Instance #variables = n and #constraints = m = ∆ · n .

  26. Hard Instance #variables = n and #constraints = m = ∆ · n. - For each variable in Φ_0, create a bucket with a large number of variables. [Figure: the 9 template variables x_1, …, x_9 of Φ_0, each with its own bucket b_1, …, b_9 of n/9 fresh variables.]

  27. Hard Instance #variables = n and #constraints = m = ∆ · n. - For each variable in Φ_0, create a bucket with a large number of variables. - Independently, sample each constraint as: [Same bucket figure as above.]

  28. Hard Instance #variables = n and #constraints = m = ∆ · n. - For each variable in Φ_0, create a bucket with a large number of variables. - Independently, sample each constraint as: Sample a constraint C from Φ_0. [Same bucket figure as above.]

  29. Hard Instance #variables = n and #constraints = m = ∆ · n. - For each variable in Φ_0, create a bucket with a large number of variables. - Independently, sample each constraint as: Sample a constraint C from Φ_0. For each variable x in S_C, choose y_x ∈ B_x u.a.r. [Same bucket figure as above.]

  30. Hard Instance #variables = n and #constraints = m = ∆ · n. - For each variable in Φ_0, create a bucket with a large number of variables. - Independently, sample each constraint as: Sample a constraint C from Φ_0. For each variable x in S_C, choose y_x ∈ B_x u.a.r. Put the constraint C on the variables {y_x}_{x ∈ S_C}. [Same bucket figure as above.]

  31. Hard Instance #variables = n and #constraints = m = ∆ · n. - For each variable in Φ_0, create a bucket with a large number of variables. - Independently, sample each constraint as: Sample a constraint C from Φ_0. For each variable x in S_C, choose y_x ∈ B_x u.a.r. Put the constraint C on the variables {y_x}_{x ∈ S_C}. [Same bucket figure as above.] W.h.p., the generated instance hypergraph has o(n) cycles of length at most η log n, for some η > 0.
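Here is a minimal sketch (ours; the talk shows no code) of this sampling procedure, using the triangle Max-Cut instance as the template Φ_0. The names build_instance, template, and Delta are illustrative.

```python
# Random hard instance from a template: copy each sampled template
# constraint onto uniformly random members of the corresponding buckets.
import random

def build_instance(template, n_template_vars, n, Delta, seed=0):
    """template: list of (predicate, scope) pairs with scopes over
    [n_template_vars]. Returns a random instance on n new variables."""
    rng = random.Random(seed)
    # One bucket of n / n_template_vars fresh variables per template variable.
    size = n // n_template_vars
    buckets = [list(range(x * size, (x + 1) * size))
               for x in range(n_template_vars)]
    instance = []
    for _ in range(Delta * n):                    # m = Delta * n, i.i.d.
        pred, scope = rng.choice(template)        # sample C from Phi_0
        new_scope = tuple(rng.choice(buckets[x])  # y_x u.a.r. in B_x
                          for x in scope)
        instance.append((pred, new_scope))
    return instance

# Template Phi_0: Max-Cut on a triangle (the "not equal" predicate).
def neq(a, b):
    return a != b

phi0 = [(neq, (0, 1)), (neq, (1, 2)), (neq, (0, 2))]
inst = build_instance(phi0, 3, n=900, Delta=10)
print(len(inst), inst[0][1])  # 9000 constraints over buckets of 300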
