Forbes-Kelley pseudorandom restriction
◮ A distribution D over {0,1}^n is ε-biased if it fools parities: for every S ≠ ∅, |E_D[(−1)^{Σ_{i∈S} D_i}]| ≤ ε
◮ Let D, D′ be independent small-bias strings
◮ Let X = Res(D, D′) (seed length Õ(log n))
◮ Theorem [Forbes, Kelley ’18]: For any O(1)-width ROBP f, E_{X,U}[f|_X(U)] ≈ E_U[f(U)]
◮ In words, X preserves the expectation of f
◮ (Proof involves clever Fourier analysis, building on [RSV13, HLV18, CHRT18])
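As a concrete aid, here is a minimal Python sketch of the two objects on this slide: the restriction operator Res and the ε-bias of a distribution, both by brute force on tiny inputs. The convention that y_i = 0 fixes coordinate i to z_i while y_i = 1 leaves it free is an assumption (it matches the bottom-layer gadget later in the talk); the names `res`, `fill`, and `bias` are illustrative, not from the source.

```python
from fractions import Fraction

STAR = "*"

def res(y, z):
    # Assumed convention for Res(y, z): coordinate i is fixed to z[i]
    # when y[i] == 0, and left free (*) when y[i] == 1.
    return [z[i] if y[i] == 0 else STAR for i in range(len(y))]

def fill(x, u):
    # Fill the free coordinates of a restriction x with bits from u,
    # as in the expression f|_X(U).
    it = iter(u)
    return [next(it) if b == STAR else b for b in x]

def bias(dist, n):
    # bias(D) = max over nonempty S of |E_D[(-1)^{sum_{i in S} D_i}]|,
    # computed by brute force; dist maps n-bit integers to probabilities.
    worst = Fraction(0)
    for mask in range(1, 2 ** n):
        e = sum(p * (-1) ** bin(x & mask).count("1") for x, p in dist.items())
        worst = max(worst, abs(e))
    return worst

# Sanity checks of the definition: the uniform distribution is 0-biased,
# while a point mass is maximally biased.
uniform = {x: Fraction(1, 16) for x in range(16)}
point = {0: Fraction(1)}
print(bias(uniform, 4), bias(point, 4))  # → 0 1
```

The brute-force `bias` is exponential in n, of course; the point of small-bias distributions is that they can be sampled with O(log(n/ε)) truly random bits.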
Forbes-Kelley pseudorandom generator
◮ So [FK18] can assign values to half the inputs using Õ(log n) truly random bits
◮ After restricting, f|_X is another ROBP
◮ So we can apply another pseudorandom restriction
◮ Let X^∘t denote the composition of t independent copies of X
◮ Let t = O(log n)
◮ With high probability, X^∘t ∈ {0,1}^n (no ⋆)
◮ Expectation preserved at every step, so total error is low: E_{X^∘t}[f(X^∘t)] ≈ E_U[f(U)]
◮ Total cost: Õ(log² n) truly random bits
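The reason t = O(log n) rounds suffice to eliminate every ⋆ can be seen in a small simulation. The sketch below composes restrictions built from truly random strings rather than the small-bias strings of the actual construction (an assumption made purely for illustration); each round fixes each surviving coordinate independently with probability 1/2, so all n coordinates are fixed after about log₂(n) + O(1) rounds.

```python
import math
import random

random.seed(1)
STAR = "*"

def res(y, z):
    # Assumed convention: y[i] == 0 fixes coordinate i to z[i]; y[i] == 1 leaves it free.
    return [z[i] if y[i] == 0 else STAR for i in range(len(y))]

def compose(outer, inner):
    # Composition of restrictions: coordinates fixed by the earlier
    # restriction (inner) stay fixed; coordinates it left free are
    # handled by the next one (outer).
    return [a if a != STAR else b for a, b in zip(inner, outer)]

n = 64
x = [STAR] * n  # start with everything free
rounds = 0
while STAR in x:
    y = [random.randint(0, 1) for _ in range(n)]
    z = [random.randint(0, 1) for _ in range(n)]
    x = compose(res(y, z), x)
    rounds += 1

# rounds concentrates around log2(n) + O(1), matching t = O(log n).
print(rounds, math.ceil(math.log2(n)))
```

Each of the t rounds spends a fresh Õ(log n)-bit seed, which is where the Õ(log² n) total cost comes from.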
Improved PRGs via simplification [GMRTV12]
◮ Step 1: Apply pseudorandom restriction X ∈ {0,1,⋆}^n
◮ Design X to preserve expectation
◮ Design X so that X^∘t also simplifies the formula, for t ≪ log n
◮ Step 2: Fool the restricted formula, taking advantage of its simplicity
[Figure: a depth-3 read-once formula (an ∧ of ∨s of ∧s over the literals x₁, …, x₁₃) shown progressively simplifying as the restriction fixes variables, until only a single ∧ gate remains]
Our pseudorandom restriction
◮ Assume by recursion: PRG for depth d with seed length Õ(log n)
◮ Let’s sample X ∈ {0,1,⋆}^n for depth d + 1
1. Recursively sample G_d, G′_d ∈ {0,1}^n
2. Sample D, D′ ∈ {0,1}^n with small bias
3. X = Res(G_d ⊕ D, G′_d ⊕ D′)
Preserving expectation
◮ Claim: For any depth-(d+1) read-once AC⁰ formula f, E_{X,U}[f|_X(U)] ≈ E_U[f(U)]
◮ Proof: Read-once AC⁰ can be simulated by constant-width ROBPs [CSV15]
◮ So we can simply apply the Forbes-Kelley result to X = Res(G_d ⊕ D, G′_d ⊕ D′)
Simplification
◮ ∆(f) ≝ maximum fan-in of any gate other than the root
◮ Main Lemma: With high probability over X^∘t, ∆(f|_{X^∘t}) ≤ polylog n, where t = O((log log n)²)
◮ Actually we only prove this statement “up to sandwiching”
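To make ∆ and "simplification under restriction" concrete, here is a toy Python sketch. The tuple encoding of read-once NAND formulas and the names `simplify`, `delta` are hypothetical illustrations, not the paper's machinery: fixing a variable can force a gate to a constant, which then disappears into its parent and shrinks fan-ins.

```python
STAR = "*"

def simplify(f, x):
    # f is a read-once NAND tree: ("x", i) is the leaf x_i and
    # ("nand", g1, ..., gk) is a NAND gate; x in {0,1,*}^n is a restriction.
    # Returns the constant 0 or 1, or a simplified tree.
    if f == 0 or f == 1:
        return f
    if f[0] == "x":
        return f if x[f[1]] == STAR else x[f[1]]
    kids = [simplify(g, x) for g in f[1:]]
    if any(k == 0 for k in kids):
        return 1                         # a 0 input forces NAND to 1
    kids = [k for k in kids if k != 1]   # NAND(1, g, ...) = NAND(g, ...)
    return ("nand", *kids) if kids else 0  # NAND of all-1 inputs is 0

def delta(f):
    # Delta(f): maximum fan-in of any gate other than the root.
    if f in (0, 1) or f[0] == "x":
        return 0
    return max(_fanin(g) for g in f[1:])

def _fanin(f):
    # Maximum fan-in over all gates in the subtree rooted at f.
    if f in (0, 1) or f[0] == "x":
        return 0
    return max([len(f) - 1] + [_fanin(g) for g in f[1:]])

f = ("nand",
     ("nand", ("x", 0), ("x", 1), ("x", 2)),
     ("nand", ("x", 3), ("x", 4)))
g = simplify(f, [0, STAR, STAR, STAR, STAR])  # fix x_0 = 0
print(delta(f), delta(g))  # → 3 2
```

Here fixing x₀ = 0 forces the first inner NAND to 1, which is absorbed by the root, dropping ∆ from 3 to 2; the Main Lemma drives this effect down to polylog n fan-in for the whole formula.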
∆ ↦ polylog n: Proof outline
◮ Chen, Steinke, Vadhan ’15: Read-once AC⁰ simplifies under truly random restrictions
◮ Testing for simplification is another read-once AC⁰ problem
◮ So we can derandomize the [CSV15] analysis for X = Res(G_d ⊕ D, G′_d ⊕ D′)
Collapse under truly random restrictions
◮ Assume f is a biased read-once AC⁰ formula: E[f] ≤ ρ or E[f] ≥ 1 − ρ
◮ Let R = Res(U, U′) (truly random restriction)
◮ Theorem [CSV ’15]: Pr_{R^∘s}[f|_{R^∘s} nonconstant] ≤ ρ + 1/n¹⁰⁰, where s = O(log log n)
◮ (Proof uses Fourier analysis)
NAND formulas
[Figure: the same read-once formula over x₁, …, x₁₃, drawn first with alternating ∧/∨ gates and then with every gate replaced by a NAND gate]
Collapse under truly random restrictions (continued)
◮ Corollary: If E[f] ≥ 1 − ρ, then Pr_{R^∘s}[f|_{R^∘s} ≢ 1] ≤ 2ρ + 1/n¹⁰⁰
◮ Let F be a set of formulas on disjoint variable sets
◮ Assume ∀f ∈ F, E[f] ≥ 1 − ρ
◮ Corollary: Pr_{R^∘s}[∀f ∈ F, f|_{R^∘s} ≢ 1] ≤ (2ρ + 1/n¹⁰⁰)^{|F|}
Derandomizing collapse
◮ Let F be a set of depth-(d−1) formulas on disjoint variables
◮ Computational problem: Given y, z ∈ {0,1}^n, decide whether ∃f ∈ F, f|_{Res(y,z)} ≡ 1
◮ Lemma: This can be decided in depth-d read-once AC⁰
Deciding whether ∃f ∈ F, f|_{Res(y,z)} ≡ 1
◮ NAND(a, b, c) ≡ 1 ⟺ (a ≡ 0) ∨ (b ≡ 0) ∨ (c ≡ 0)
◮ NAND(a′, b′, c′) ≡ 0 ⟺ (a′ ≡ 1) ∧ (b′ ≡ 1) ∧ (c′ ≡ 1)
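The two gate-level equivalences above are easy to verify exhaustively; a few lines of Python confirm them over all 8 input triples.

```python
from itertools import product

def nand(*inputs):
    # NAND(g_1, ..., g_k) = NOT(g_1 AND ... AND g_k)
    return 0 if all(inputs) else 1

for a, b, c in product((0, 1), repeat=3):
    # NAND(a, b, c) = 1  <=>  a = 0 or b = 0 or c = 0
    assert (nand(a, b, c) == 1) == (a == 0 or b == 0 or c == 0)
    # NAND(a, b, c) = 0  <=>  a = 1 and b = 1 and c = 1
    assert (nand(a, b, c) == 0) == (a == 1 and b == 1 and c == 1)
print("both equivalences hold")
```

These equivalences are what turn the question "is this NAND gate forced to a constant?" into an ∨ (respectively ∧) over the same questions about its children, keeping the tester read-once.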
Deciding whether ∃f ∈ F, f|_{Res(y,z)} ≡ 1 (continued)
◮ At the bottom, we get one additional layer:
(Res(y,z)_i ≡ b) ⟺ (y_i = 0 ∧ z_i = b)
(¬Res(y,z)_i ≡ b) ⟺ (y_i = 0 ∧ z_i = 1 − b)
◮ At the top: “∃f ∈ F” is one more ∨ gate (merge it with the top ∨ gates)
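The bottom-layer equivalences can likewise be checked exhaustively; the sketch below assumes the convention that y_i = 0 fixes coordinate i to z_i and y_i = 1 leaves it free (a ⋆ coordinate is a live variable, hence not identically any constant).

```python
STAR = "*"

def res_i(y_i, z_i):
    # Assumed convention: y_i = 0 fixes the coordinate to z_i;
    # y_i = 1 leaves it free (a star).
    return z_i if y_i == 0 else STAR

for y_i in (0, 1):
    for z_i in (0, 1):
        for b in (0, 1):
            r = res_i(y_i, z_i)
            # Res(y,z)_i is the constant b  <=>  y_i = 0 and z_i = b
            assert (r == b) == (y_i == 0 and z_i == b)
            # NOT Res(y,z)_i is the constant b  <=>  y_i = 0 and z_i = 1 - b
            assert ((r != STAR) and (1 - r == b)) == (y_i == 0 and z_i == 1 - b)
print("gadget equivalences hold")
```

Each equivalence is a 2-input ∧ over bits of y and z, which is the single extra layer the tester needs at the bottom.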
Collapse under pseudorandom restrictions
◮ Let F be a set of depth-(d−1) formulas on disjoint variables
◮ Assume ∀f ∈ F, E[f] ≥ 1 − ρ
◮ X = Res(G_d ⊕ D, G′_d ⊕ D′)
◮ G_d, G′_d fool depth d, so Pr_X[∀f ∈ F, f|_X ≢ 1] ≈ Pr_R[∀f ∈ F, f|_R ≢ 1]
◮ Hybrid argument: Pr_{X^∘s}[∀f ∈ F, f|_{X^∘s} ≢ 1] ≈ Pr_{R^∘s}[∀f ∈ F, f|_{R^∘s} ≢ 1] ≤ (2ρ + 1/n¹⁰⁰)^{|F|}
∆ ↦ √∆ · polylog n
◮ So far: X^∘s causes any biased depth-(d−1) formula to collapse
◮ What about unbiased depth-(d+1) formulas?
◮ Assume that for every gate g in f, E[¬g] ≥ 1/poly(n)
◮ Lemma: With high probability over X^∘s, ∆(f|_{X^∘s}) ≤ √∆(f) · polylog n
Illustration: ∆ ↦ √∆ · polylog n
[Figure: a NAND formula of total depth d + 1; the depth-(d−1) subtrees that are biased are likely to collapse, so each gate below the root is likely to have few remaining children]
Proof that ∆ ↦ √∆ · polylog n
◮ (This analysis follows [GMRTV12, CSV15])