
Near-Optimal Pseudorandom Generators for Constant-Depth Read-Once Formulas
Dean Doron¹ (UT Austin → Stanford), Pooya Hatami² (UT Austin → Ohio State), William M. Hoza³ (UT Austin)
BIRS Workshop 19w5088, July 8, 2019
¹ Supported by NSF Grant


  1. Forbes-Kelley pseudorandom restriction
  ◮ A distribution D over {0,1}^n is ε-biased if it fools parities: S ≠ ∅ ⇒ |E_D[(−1)^{Σ_{i∈S} D_i}]| ≤ ε
  ◮ Let D, D′ be independent small-bias strings
  ◮ Let X = Res(D, D′) (seed length Õ(log n))
  ◮ Theorem [Forbes, Kelley ’18]: For any O(1)-width ROBP f, E_{X,U}[f|_X(U)] ≈ E_U[f(U)]
  ◮ In words, X preserves the expectation of f
  ◮ (Proof involves clever Fourier analysis, building on [RSV13, HLV18, CHRT18])
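As a toy illustration of the two definitions above (our own minimal encoding, not the talk's construction — the names `res` and `bias` are ours), here is the Res operator and a brute-force bias computation; the uniform distribution is 0-biased:

```python
from itertools import product

STAR = "*"  # the "alive" symbol of a restriction

def res(y, z):
    """Res(y, z): position i stays alive (STAR) if y_i = 1, else is fixed to z_i."""
    return tuple(STAR if yi == 1 else zi for yi, zi in zip(y, z))

def bias(support, S):
    """|E_D[(-1)^{sum_{i in S} D_i}]| for D uniform over `support`."""
    total = sum((-1) ** sum(x[i] for i in S) for x in support)
    return abs(total) / len(support)

# The uniform distribution over {0,1}^4 fools every nonempty parity exactly:
cube = list(product([0, 1], repeat=4))
assert all(bias(cube, S) == 0 for S in [(0,), (1, 3), (0, 1, 2, 3)])

print(res((1, 0, 0, 1), (1, 1, 0, 0)))  # → ('*', 1, 0, '*')
```

A small-bias distribution relaxes the exact-zero condition to ≤ ε while using far fewer random bits than the full cube.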

  9. Forbes-Kelley pseudorandom generator
  ◮ So [FK18] can assign values to half the inputs using Õ(log n) truly random bits
  ◮ After restricting, f|_X is another ROBP
  ◮ So we can apply another pseudorandom restriction
  ◮ Let X^{∘t} denote the composition of t independent copies of X
  ◮ Let t = O(log n)
  ◮ With high probability, X^{∘t} ∈ {0,1}^n (no ⋆)
  ◮ Expectation preserved at every step, so total error is low: E_{X^{∘t}}[f(X^{∘t})] ≈ E_U[f(U)]
  ◮ Total cost: Õ(log² n) truly random bits
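To see why t = O(log n) rounds eliminate all ⋆s, here is a sketch that substitutes truly uniform bits for the small-bias strings (so it illustrates only the star-elimination count, not the pseudorandomness); each round fixes each live position independently with probability 1/2:

```python
import math
import random

STAR = "*"

def res(y, z):
    # Res(y, z)_i = star if y_i = 1, else z_i
    return [STAR if yi == 1 else zi for yi, zi in zip(y, z)]

def compose(r1, r2):
    # a later restriction only touches the positions r1 left alive
    return [a if a is not STAR else b for a, b in zip(r1, r2)]

def random_restriction(n, rng):
    y = [rng.randrange(2) for _ in range(n)]
    z = [rng.randrange(2) for _ in range(n)]
    return res(y, z)

rng = random.Random(0)
n = 1000
t = 4 * math.ceil(math.log2(n))  # union bound: Pr[some star survives] <= n * 2^{-t}
x = random_restriction(n, rng)
for _ in range(t - 1):
    x = compose(x, random_restriction(n, rng))
assert STAR not in x  # with t = 4 log n, failure probability is about n^{-3}
```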

  19. Improved PRGs via simplification [GMRTV12]
  ◮ Step 1: Apply pseudorandom restriction X ∈ {0,1,⋆}^n
  ◮ Design X to preserve expectation
  ◮ Design X so that X^{∘t} also simplifies the formula, for t ≪ log n
  [Figure: a depth-3 read-once ∧/∨ formula over x1, …, x13 shrinking step by step as the restriction fixes variables, until only a single small ∧ of literals remains]
  ◮ Step 2: Fool the restricted formula, taking advantage of its simplicity
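The simplification step above can be made concrete: restricting a read-once formula yields another read-once formula, with constant children absorbed or dropped. A minimal sketch on a nested-tuple encoding of our own (not the talk's):

```python
STAR = "*"

# A read-once formula as nested tuples: ("AND", kids), ("OR", kids), or ("VAR", i, negated)
def restrict(f, x):
    """Simplify f under a restriction x in {0, 1, STAR}^n; returns 0, 1, or a formula."""
    kind = f[0]
    if kind == "VAR":
        _, i, neg = f
        return f if x[i] == STAR else (x[i] ^ 1 if neg else x[i])
    kids = [restrict(g, x) for g in f[1]]
    absorbing = 0 if kind == "AND" else 1   # a 0 kills an AND, a 1 kills an OR
    if absorbing in kids:
        return absorbing
    kids = [g for g in kids if not isinstance(g, int)]  # drop identity constants
    if not kids:
        return 1 - absorbing
    if len(kids) == 1:
        return kids[0]
    return (kind, kids)

f = ("AND", [("OR", [("VAR", 0, False), ("VAR", 1, True)]),
             ("OR", [("VAR", 2, False), ("VAR", 3, False)])])
print(restrict(f, [0, STAR, 1, STAR]))  # → ('VAR', 1, True)
```

Here x2 = 1 kills the second OR, and x0 = 0 drops out of the first OR, leaving just the literal ¬x1.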

  24. Our pseudorandom restriction
  ◮ Assume by recursion: PRG for depth d with seed length Õ(log n)
  ◮ Let’s sample X ∈ {0,1,⋆}^n for depth d + 1
  1. Recursively sample G_d, G′_d ∈ {0,1}^n
  2. Sample D, D′ ∈ {0,1}^n with small bias
  3. X = Res(G_d ⊕ D, G′_d ⊕ D′)
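The three sampling steps above can be sketched structurally; the two samplers below are hypothetical placeholders (uniform bits stand in for both the recursively obtained depth-d PRG and the small-bias generator), so only the shape of the construction is shown:

```python
import random

STAR = "*"

def xor(a, b):
    return [ai ^ bi for ai, bi in zip(a, b)]

def res(y, z):
    return [STAR if yi == 1 else zi for yi, zi in zip(y, z)]

# Placeholder samplers (assumptions): in the actual construction these would be
# the recursive PRG for depth d and a small-bias generator.
def sample_prg_for_depth(n, d, rng):
    return [rng.randrange(2) for _ in range(n)]

def sample_small_bias(n, rng):
    return [rng.randrange(2) for _ in range(n)]

def sample_restriction(n, d, rng):
    # X = Res(G_d xor D, G'_d xor D')
    g1, g2 = sample_prg_for_depth(n, d, rng), sample_prg_for_depth(n, d, rng)
    d1, d2 = sample_small_bias(n, rng), sample_small_bias(n, rng)
    return res(xor(g1, d1), xor(g2, d2))

x = sample_restriction(8, d=3, rng=random.Random(1))
assert all(v in (0, 1, STAR) for v in x)
```

XORing the PRG output with a small-bias string is what lets the analysis use both properties at once: Forbes-Kelley for preserving expectation, and the recursive PRG for derandomizing the collapse argument.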

  27. Preserving expectation
  ◮ Claim: For any depth-(d+1) read-once AC⁰ formula f, E_{X,U}[f|_X(U)] ≈ E_U[f(U)]
  ◮ Proof: Read-once AC⁰ can be simulated by constant-width ROBPs [CSV15]
  ◮ So we can simply apply the Forbes-Kelley result to X = Res(G_d ⊕ D, G′_d ⊕ D′)

  30. Simplification
  ◮ ∆(f) := the maximum fan-in of any gate other than the root
  ◮ Main Lemma: With high probability over X^{∘t}, ∆(f|_{X^{∘t}}) ≤ polylog n, where t = O((log log n)²)
  ◮ Actually, we only prove this statement “up to sandwiching”
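The quantity ∆(f) is easy to compute on the nested-tuple formula encoding used earlier (an encoding of our own, for illustration only):

```python
def fan_in_bound(f, is_root=True):
    """Delta(f): the maximum fan-in over all gates of f other than the root."""
    if f[0] == "VAR":
        return 0
    kids = f[1]
    best = max(fan_in_bound(g, is_root=False) for g in kids)
    if not is_root:
        best = max(best, len(kids))   # the root's own fan-in is excluded
    return best

f = ("AND", [("OR", [("VAR", 0, False), ("VAR", 1, False), ("VAR", 2, False)]),
             ("OR", [("VAR", 3, False), ("VAR", 4, False)])])
print(fan_in_bound(f))  # → 3
```

Excluding the root matters: the top gate may legitimately have huge fan-in, and the PRG only needs the gates below it to become narrow.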

  33. ∆ ↦ polylog n: Proof outline
  ◮ Chen, Steinke, Vadhan ’15: Read-once AC⁰ simplifies under truly random restrictions
  ◮ Testing for simplification is itself a read-once AC⁰ problem
  ◮ So we can derandomize the [CSV15] analysis using X = Res(G_d ⊕ D, G′_d ⊕ D′)

  37. Collapse under truly random restrictions
  ◮ Assume f is a biased read-once AC⁰ formula: E[f] ≤ ρ or E[f] ≥ 1 − ρ
  ◮ Let R = Res(U, U′) (a truly random restriction)
  ◮ Theorem [CSV ’15]: Pr_{R^{∘s}}[f|_{R^{∘s}} nonconstant] ≤ ρ + 1/n^{100}, where s = O(log log n)
  ◮ (Proof uses Fourier analysis)

  39. NAND formulas
  [Figure: the example depth-3 ∧/∨ formula over x1, …, x13 from before, redrawn with every gate replaced by a NAND gate]

  43. Collapse under truly random restrictions (continued)
  ◮ Corollary: If E[f] ≥ 1 − ρ, then Pr_{R^{∘s}}[f|_{R^{∘s}} ≢ 1] ≤ 2ρ + 1/n^{100}
  ◮ (The extra ρ accounts for the event f|_{R^{∘s}} ≡ 0, which has probability at most Pr_U[f(U) = 0] ≤ ρ)
  ◮ Let F be a set of formulas on disjoint variable sets
  ◮ Assume ∀f ∈ F, E[f] ≥ 1 − ρ
  ◮ Corollary: Pr_{R^{∘s}}[∀f ∈ F, f|_{R^{∘s}} ≢ 1] ≤ (2ρ + 1/n^{100})^{|F|}
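The product form of the second corollary follows because a truly random restriction acts independently on disjoint variable sets, so the |F| events multiply:

```latex
\Pr_{R^{\circ s}}\bigl[\forall f \in \mathcal{F},\; f|_{R^{\circ s}} \not\equiv 1\bigr]
  \;=\; \prod_{f \in \mathcal{F}} \Pr_{R^{\circ s}}\bigl[f|_{R^{\circ s}} \not\equiv 1\bigr]
  \;\le\; \Bigl(2\rho + \tfrac{1}{n^{100}}\Bigr)^{|\mathcal{F}|}.
```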

  46. Derandomizing collapse
  ◮ Let F be a set of depth-(d−1) formulas on disjoint variables
  ◮ Computational problem: Given y, z ∈ {0,1}^n, decide whether ∃f ∈ F, f|_{Res(y,z)} ≡ 1
  ◮ Lemma: This can be decided in depth-d read-once AC⁰

  49. Deciding whether ∃f ∈ F, f|_{Res(y,z)} ≡ 1
  ◮ NAND(a, b, c) ≡ 1 ⟺ (a ≡ 0) ∨ (b ≡ 0) ∨ (c ≡ 0)
  ◮ NAND(a′, b′, c′) ≡ 0 ⟺ (a′ ≡ 1) ∧ (b′ ≡ 1) ∧ (c′ ≡ 1)

  51. Deciding whether ∃f ∈ F, f|_{Res(y,z)} ≡ 1 (continued)
  ◮ At the bottom, we get one additional layer:
    (Res(y,z)_i ≡ b) ⟺ (y_i = 0 ∧ z_i = b)
    (¬Res(y,z)_i ≡ b) ⟺ (y_i = 0 ∧ z_i = 1 − b)
  ◮ At the top: “∃f ∈ F” is one more ∨ gate (merge with top ∨ gates)
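The recursive rules on the last few slides translate directly into code. A sketch on a NAND-tree encoding of our own (valid for read-once formulas, where children read disjoint variables, so NAND ≡ 1 exactly when some child ≡ 0):

```python
STAR = "*"

def is_const(f, b, x):
    """Is the NAND formula f, restricted by x in {0,1,STAR}^n, identically b?"""
    if f[0] == "VAR":
        _, i, neg = f
        if x[i] == STAR:
            return False          # a live literal is never constant
        return (x[i] ^ 1 if neg else x[i]) == b
    kids = f[1]
    if b == 1:
        # read-once: NAND(children) ≡ 1 iff some child ≡ 0
        return any(is_const(g, 0, x) for g in kids)
    # NAND(children) ≡ 0 iff every child ≡ 1
    return all(is_const(g, 1, x) for g in kids)

def exists_const_one(formulas, y, z):
    # the "∃f ∈ F" at the top is one more OR over the formulas
    x = [STAR if yi == 1 else zi for yi, zi in zip(y, z)]
    return any(is_const(f, 1, x) for f in formulas)

f = ("NAND", [("VAR", 0, False), ("VAR", 1, True)])
print(exists_const_one([f], [0, 1], [0, 0]))  # x0 fixed to 0, so f ≡ 1 → True
```

Unfolding the recursion gives exactly the alternating ∨/∧ layers from the slides, with the y_i = 0 ∧ z_i = b tests at the leaves; because the formulas in F are read-once on disjoint variables, the whole decision circuit is itself read-once.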

  56. Collapse under pseudorandom restrictions
  ◮ Let F be a set of depth-(d−1) formulas on disjoint variables
  ◮ Assume ∀f ∈ F, E[f] ≥ 1 − ρ
  ◮ X = Res(G_d ⊕ D, G′_d ⊕ D′)
  ◮ G_d, G′_d fool depth d, so Pr_X[∀f ∈ F, f|_X ≢ 1] ≈ Pr_R[∀f ∈ F, f|_R ≢ 1]
  ◮ Hybrid argument: Pr_{X^{∘s}}[∀f ∈ F, f|_{X^{∘s}} ≢ 1] ≈ Pr_{R^{∘s}}[∀f ∈ F, f|_{R^{∘s}} ≢ 1] ≤ (2ρ + 1/n^{100})^{|F|}

  60. ∆ ↦ √∆ · polylog n
  ◮ So far: X^{∘s} causes any biased depth-(d−1) formula to collapse
  ◮ What about unbiased depth-(d+1) formulas?
  ◮ Assume that for every gate g in f, E[¬g] ≥ 1/poly(n)
  ◮ Lemma: With high probability over X^{∘s}, ∆(f|_{X^{∘s}}) ≤ √∆(f) · polylog n

  63. Illustration: ∆ ↦ √∆ · polylog n
  [Figure: a NAND-gate tree of total depth d + 1; each subtree below the top layer is likely to collapse if it is biased, so each gate is likely to have few remaining children]

  64. Proof that ∆ ↦ √∆ · polylog n
  ◮ (This analysis follows [GMRTV12, CSV15])
