  1. Optimal Inapproximability of Max CSPs over large alphabet
  Pasin Manurangsi (UC Berkeley), Preetum Nakkiran (Harvard), Luca Trevisan (UC Berkeley)
  RANDOM-APPROX 2016

  2-3. Max k-CSP_R
  Maximum Constraint Satisfaction Problem:
  ◮ Variables take values in an alphabet of size R.
  ◮ Constraints involve k variables each.
  ◮ Goal: find an assignment maximizing the number of satisfied constraints.

  Example: For k = 2, R = 3, a 2-CSP_3 instance is given by a list of constraints:
    (x_1 = 0 ∧ x_2 = 2)
    (x_1 = 1 ∧ x_3 = 2)
    ...
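To make the definition concrete, here is a small illustrative sketch (my own, not from the talk): a toy 2-CSP_3 instance encoded as conjunction constraints and solved by brute force. The encoding and variable names are arbitrary choices.

from itertools import product

# A toy 2-CSP_3 instance over variables x1, x2, x3 with alphabet {0, 1, 2}.
# Each constraint fixes the values of k = 2 variables; e.g. (x1 = 0 AND x2 = 2)
# is encoded as the dict {1: 0, 2: 2}.
R, n = 3, 3
constraints = [
    {1: 0, 2: 2},   # x1 = 0 AND x2 = 2
    {1: 1, 3: 2},   # x1 = 1 AND x3 = 2
    {2: 2, 3: 2},   # x2 = 2 AND x3 = 2
]

def satisfied(assignment, constraint):
    # assignment maps variable index -> value in {0, ..., R-1}
    return all(assignment[v] == a for v, a in constraint.items())

def value(assignment):
    return sum(satisfied(assignment, c) for c in constraints)

# Brute force over all R^n assignments (only feasible for tiny instances;
# the point of the talk is that the general problem is hard even to approximate).
best = max((dict(zip(range(1, n + 1), vals)) for vals in product(range(R), repeat=n)),
           key=value)
print(best, value(best))   # {1: 0, 2: 2, 3: 2} satisfying 2 of the 3 constraints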

  4-7. Hardness of Max k-CSP_R
  ◮ NP-hard to solve exactly (contains Max-Cut, Max 3-SAT).
  ◮ NP-hard to approximate (PCP theorem).
  ◮ Boolean CSPs (R = 2): the optimal approximation factor is O(k / 2^k).
  ◮ Non-boolean CSPs (R > 2): not resolved prior to this work.

  8. Hardness of Approximation
  There is a trivial (1/R^k)-approximation for Max k-CSP_R: output a uniformly random assignment. It satisfies each (satisfiable) constraint with probability at least 1/R^k, since it hits some fixed satisfying assignment of the constraint's k variables with probability 1/R^k.
  Q: Can we do better? Is it hard to do much better?
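A quick numerical check of this baseline, as a sketch with made-up random conjunction constraints (not from the talk): under a uniformly random assignment, each constraint that fixes k variables is satisfied with probability exactly 1/R^k, so the expected satisfied fraction matches 1/R^k.

import random

# Estimate the fraction of constraints satisfied by a uniformly random assignment,
# for m random conjunction constraints that each fix k out of n variables.
def random_assignment_fraction(n, R, k, m, trials=2000):
    constraints = [
        {v: random.randrange(R) for v in random.sample(range(n), k)}
        for _ in range(m)
    ]
    total = 0.0
    for _ in range(trials):
        x = [random.randrange(R) for _ in range(n)]
        total += sum(all(x[v] == a for v, a in c.items()) for c in constraints) / m
    return total / trials

n, R, k, m = 50, 3, 2, 200
print(random_assignment_fraction(n, R, k, m), "vs. 1/R^k =", 1 / R**k)
# The estimate concentrates around 1/R^k = 1/9: the trivial algorithm is a
# (1/R^k)-approximation, and the question is how much better one can do.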

  9-14. Prior Work: Non-boolean Max CSP
  Approximation factors (ignoring constants, and for large R):

                    Algorithm          UG-Hardness        NP-Hardness
    k = 2           log R / R          log R / R          log R / √R
    k = 3           1 / R^2            —                  1 / R
    3 ≤ k ≤ O(1)    1 / R^(k−1)        —                  1 / R^(k−2)

  For constant k ≥ 3, there is a factor-R gap between the best hardness and the best approximation.

  15-18. Our results

                    Algorithm                          UG-Hardness              NP-Hardness
    k = 2           log R / R                          log R / R                log R / √R
    k = 3           1 / R^2  →  log R / R^2 (new)      log R / R^2 (new)        1 / R
    3 ≤ k ≤ O(1)    1 / R^(k−1) → log R / R^(k−1) (new)  log R / R^(k−1) (new)  1 / R^(k−2)

  We give matching UG-hardness and approximation algorithms for any k and R. The gap is reduced to O(1) for constant k.
  (The original paper had a polylog(R) gap; the improvement was suggested by Rishi Saket, Subhash Khot, and Venkat Guruswami.)

  19-21. Dictator Testing
  UG-hardness of approximation is equivalent to dictator testing.
  Dictator: f(x_1, x_2, ..., x_n) = x_i.

  Problem: Given oracle access to f : [R]^n → [R], determine whether f is a dictator or "far from a dictator".
  ◮ Completeness c: if f is a dictator, accept w.p. ≥ c.
  ◮ Soundness s: if f is "far from" a dictator, accept w.p. ≤ s.
  "Far from a dictator" ≡ all low-degree influences are small (a Fourier condition).

  22-23. Examples
  f : [R]^n → [R]. "Far from a dictator" ≡ all low-degree influences are small.

  Example: Plurality on n coordinates is far from a dictator (no single coordinate is influential).

  Example: f(x_1, x_2, ..., x_n) := x_1 ⊕_R x_2 is NOT far from a dictator.
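To make "far from a dictator" a bit more concrete, here is a rough sketch (my own, not from the talk). It estimates a simple rerandomization notion of influence, Pr[f(x) ≠ f(x with coordinate i resampled)]; the talk's formal condition is about low-degree Fourier influences, so this is only an illustration. Function choices and parameters are arbitrary.

import random

# Monte Carlo estimate of a simple notion of influence of coordinate i on
# f : [R]^n -> [R]: the probability that resampling coordinate i changes f.
def influence(f, n, R, i, trials=20000):
    changed = 0
    for _ in range(trials):
        x = [random.randrange(R) for _ in range(n)]
        y = list(x)
        y[i] = random.randrange(R)
        changed += (f(x) != f(y))
    return changed / trials

R, n = 3, 15
dictator = lambda x: x[0]
plurality = lambda x: max(range(R), key=lambda a: x.count(a))   # most frequent value
mod_sum = lambda x: (x[0] + x[1]) % R                           # x1 (+)_R x2

for name, f in [("dictator", dictator), ("plurality", plurality), ("x1 +_R x2", mod_sum)]:
    print(name, [round(influence(f, n, R, i), 2) for i in range(3)])
# Dictator: one highly influential coordinate. Plurality: every coordinate has the
# same small influence (vanishing as n grows), so it is far from a dictator.
# x1 (+)_R x2: two highly influential coordinates, so it is NOT far from a dictator.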

  24-25. UG-Hardness of Approximation
  A k-query dictator test over alphabet R with (completeness, soundness) = (c, s)
    ⟺  it is UG-hard to distinguish k-CSP_R instances with OPT ≈ c from instances with OPT ≈ s.
  ⟹  It is UG-hard to approximate Max k-CSP_R to within a factor better than ≈ s/c.

  26-31. UG-hardness of boolean 2-CSP [Khot, Kindler, Mossel, O'Donnell]
  2-query boolean dictator test, for f : {0,1}^n → {0,1} with E[f] = 1/2:
  ◮ Pick x ∼ {0,1}^n uniformly.
  ◮ Pick "noise" η ∼ {0,1}^n, each coordinate Bernoulli(p).
  ◮ Accept iff f(x) = f(x ⊕ η).

  For p ≈ 0.15:
  ◮ Completeness: if f is a dictator, the test accepts w.p. ≥ 1 − p ≈ 0.85.
  ◮ Soundness: if f is "far from" a dictator, the test accepts w.p. ≲ 0.74.
  ◮ Ratio: s/c ≈ 0.878567... = α_GW.
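A minimal Monte Carlo sketch of this test (mine, not from the talk), comparing a dictator with the majority function; majority is a natural "far from dictator" function, and its acceptance probability at this noise rate lands near the 0.74 soundness figure. Parameters are arbitrary choices.

import random

# KKMO-style 2-query boolean dictator test: pick x uniform in {0,1}^n, flip each
# coordinate independently with probability p, and accept iff f agrees on both points.
def accept_prob(f, n, p=0.15, trials=50000):
    accepted = 0
    for _ in range(trials):
        x = [random.getrandbits(1) for _ in range(n)]
        y = [xi ^ int(random.random() < p) for xi in x]
        accepted += (f(x) == f(y))
    return accepted / trials

n = 101   # odd, so majority never ties
dictator = lambda x: x[0]
majority = lambda x: int(2 * sum(x) > len(x))

print("dictator:", accept_prob(dictator, n))   # completeness: ~ 1 - p = 0.85
print("majority:", accept_prob(majority, n))   # ~ 0.74 for large n: the noise on
                                               # many coordinates "adds up"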

  32-34. Why it works
  The verifier accepts iff f(x) = f(x ⊕ η), where the noise η is i.i.d. on every coordinate. If f depends on many coordinates, the noise will "add up": f(x ⊕ η) will be almost uncorrelated with f(x).

  Example: the majority function maj : {±1}^n → {±1}, maj(x_1, ..., x_n) = sign(Σ_i x_i). If the noise η is high enough, sign(Σ_i x_i) will be almost independent of sign(Σ_i (x_i + η_i)).
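A small sketch of this intuition (mine, not from the talk): at a fixed per-coordinate noise rate, compare the agreement probability Pr[f(x) = f(x ⊕ η)] for majority taken over 1, 3, or all coordinates. The more coordinates the function depends on, the more the noise hurts. Parameter choices are arbitrary.

import random

# Agreement probability Pr[f(x) = f(x XOR eta)] under i.i.d. Bernoulli(p) noise.
def agreement(f, n, p, trials=50000):
    agree = 0
    for _ in range(trials):
        x = [random.getrandbits(1) for _ in range(n)]
        y = [xi ^ int(random.random() < p) for xi in x]
        agree += (f(x) == f(y))
    return agree / trials

def majority_of_first(k):
    # majority vote over the first k coordinates only (k odd, so no ties)
    return lambda x: int(2 * sum(x[:k]) > k)

n, p = 201, 0.15
for k in [1, 3, 201]:
    print(f"majority of first {k:3d} coordinates:", agreement(majority_of_first(k), n, p))
# k = 1 is a dictator (agreement ~ 0.85); as k grows the agreement drops toward
# ~ 0.74, the large-n limit at this noise rate.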

  35-38. Our k-query large-alphabet dictator test
  For f : [R]^n → [R], with f balanced: all pre-images f^{−1}(i) have the same size.
  ◮ Pick z ∼ [R]^n uniformly.
  ◮ Pick k i.i.d. noise vectors η_1, ..., η_k ∈ [R]^n, such that each coordinate of η_j is 0 w.p. ρ, and uniform in [R] otherwise.
  ◮ Accept iff f(z + η_1) = f(z + η_2) = · · · = f(z + η_k), where + is coordinate-wise addition mod R.
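A minimal sketch of the test (mine, not from the talk), assuming "+" means coordinate-wise addition mod R. For comparison it runs the test on a dictator and on plurality; the latter is just an illustrative "far from dictator" function, not the extremal case from the analysis. The parameters ρ, R, n, k are arbitrary choices.

import random

# The k-query test, sketched: pick z uniform in [R]^n, pick k i.i.d. noise vectors
# whose coordinates are 0 w.p. rho and uniform in [R] otherwise, and accept iff
# f takes the same value on all k noisy points z + eta_j (mod R).
def k_query_accept_prob(f, n, R, k, rho, trials=20000):
    accepted = 0
    for _ in range(trials):
        z = [random.randrange(R) for _ in range(n)]
        seen = set()
        for _ in range(k):
            eta = [0 if random.random() < rho else random.randrange(R) for _ in range(n)]
            seen.add(f([(zi + ei) % R for zi, ei in zip(z, eta)]))
        accepted += (len(seen) == 1)
    return accepted / trials

R, n, k, rho = 4, 12, 3, 0.9
dictator = lambda x: x[0]
plurality = lambda x: max(range(R), key=lambda a: x.count(a))

print("dictator :", k_query_accept_prob(dictator, n, R, k, rho))   # >= rho^k: accepts whenever
                                                                   # all k noises agree on its coordinate
print("plurality:", k_query_accept_prob(plurality, n, R, k, rho))  # noticeably lower: the noise is
                                                                   # spread over many coordinates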
