Forbes-Kelley pseudorandom restriction

◮ A distribution D over {0,1}^n is ε-biased if it fools parities: for every S ≠ ∅,
  |E_D[⊕_{i∈S} D_i] − 1/2| ≤ ε
◮ Let D, D′ be independent small-bias strings
◮ Let X = Res(D, D′) (seed length Õ(log n))
◮ Theorem [Forbes, Kelley '18]: For any O(1)-width ROBP f,
  E_{X,U}[f|_X(U)] ≈ E_U[f(U)]
◮ In words, X preserves the expectation of f
◮ (Proof involves clever Fourier analysis, building on [RSV13, HLV18, CHRT18])
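For small n, the ε-bias condition can be checked by brute force over all nonempty S. A minimal sketch (the encoding of strings as integer bitmasks and the function name are illustrative, not from the talk):

```python
def bias(support, n):
    """Maximum |Pr[⊕_{i∈S} D_i = 1] − 1/2| over all nonempty S ⊆ [n],
    for D uniform over the given support of n-bit strings (as integers)."""
    worst = 0.0
    for mask in range(1, 2 ** n):  # each nonempty S encoded as a bitmask
        # probability that the parity of the coordinates in S equals 1
        p = sum(bin(x & mask).count("1") % 2 for x in support) / len(support)
        worst = max(worst, abs(p - 0.5))
    return worst

# The uniform distribution over {0,1}^3 is 0-biased:
print(bias(list(range(8)), 3))  # 0.0
# A point mass is maximally biased:
print(bias([0], 3))  # 0.5
```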
Forbes-Kelley pseudorandom generator

◮ So [FK18] can assign values to half the inputs using Õ(log n) truly random bits
◮ After restricting, f|_X is another ROBP
◮ So we can apply another pseudorandom restriction
◮ Let X^{∘t} denote the composition of t independent copies of X
◮ Let t = O(log n)
◮ With high probability, X^{∘t} ∈ {0,1}^n (no ⋆)
◮ Expectation is preserved at every step, so the total error is low:
  E_{X^{∘t}}[f(X^{∘t})] ≈ E_U[f(U)]
◮ Total cost: Õ(log² n) truly random bits
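The restriction operator and its composition can be sketched concretely. Here partial assignments are lists with `None` standing for ⋆, using the convention implicit in the later "Deciding whether f|_{Res(y,z)} ≡ b" slide (coordinate i is fixed to z_i when y_i = 0 and left ⋆ when y_i = 1); the names are illustrative:

```python
STAR = None  # unassigned coordinate

def res(y, z):
    """Res(y, z): coordinate i is fixed to z[i] when y[i] == 0,
    and left alive (⋆) when y[i] == 1."""
    return [STAR if y[i] == 1 else z[i] for i in range(len(y))]

def compose(x1, x2):
    """x1 ∘ x2: coordinates fixed by x1 stay fixed;
    the ⋆ positions of x1 are filled in from x2."""
    return [a if a is not STAR else b for a, b in zip(x1, x2)]

x1 = res([1, 0, 1, 0], [1, 1, 0, 0])   # [⋆, 1, ⋆, 0]
x2 = res([0, 1, 0, 1], [0, 0, 1, 1])   # [0, ⋆, 1, ⋆]
print(compose(x1, x2))  # [0, 1, 1, 0] — no ⋆ remains
```

Composing enough independent copies makes every coordinate fixed with high probability, which is the "X^{∘t} ∈ {0,1}^n" bullet above.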
Improved PRGs via simplification [GMRTV12]

◮ Step 1: Apply a pseudorandom restriction X ∈ {0,1,⋆}^n
◮ Design X to preserve expectation
◮ Design X so that X^{∘t} also simplifies the formula, for t ≪ log n
◮ [Figure: a read-once formula over x_1, …, x_13 collapsing to a much smaller formula after the restriction]
◮ Step 2: Fool the restricted formula, taking advantage of its simplicity
Our pseudorandom restriction

◮ Assume by recursion: a PRG for depth d with seed length Õ(log n)
◮ Let's sample X ∈ {0,1,⋆}^n for depth d + 1:
  1. Recursively sample G_d, G′_d ∈ {0,1}^n
  2. Sample D, D′ ∈ {0,1}^n with small bias
  3. X = Res(G_d ⊕ D, G′_d ⊕ D′)
Preserving expectation

◮ Claim: For any depth-(d + 1) read-once AC⁰ formula f,
  E_{X,U}[f|_X(U)] ≈ E_U[f(U)]
◮ Proof: Read-once AC⁰ can be simulated by constant-width ROBPs [CSV15]
◮ So we can simply apply the Forbes-Kelley result to X = Res(G_d ⊕ D, G′_d ⊕ D′)
Simplification

◮ ∆(f) := the maximum fan-in of any gate other than the root
◮ Main Lemma: With high probability over X^{∘t}, ∆(f|_{X^{∘t}}) ≤ polylog n, where t = O((log log n)²)
◮ Actually, we only prove this statement "up to sandwiching"
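The measure ∆ is just a tree traversal that skips the root. A minimal sketch, with formulas encoded as nested tuples (an illustrative encoding, not from the talk):

```python
def delta(f, is_root=True):
    """∆(f): maximum fan-in over all gates of f other than the root.
    Formulas: ('AND'|'OR', [children]) or ('VAR', i)."""
    gate, payload = f
    if gate == 'VAR':
        return 0
    best = 0 if is_root else len(payload)  # root fan-in is excluded
    for child in payload:
        best = max(best, delta(child, is_root=False))
    return best

# Root AND of three fan-in-2 ORs: ∆ = 2 regardless of the root's fan-in
f = ('AND', [('OR', [('VAR', 1), ('VAR', 2)]),
             ('OR', [('VAR', 3), ('VAR', 4)]),
             ('OR', [('VAR', 5), ('VAR', 6)])])
print(delta(f))  # 2
```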
Simplification under truly random restrictions

◮ Let f be a read-once AC⁰ formula
◮ Let R = Res(U, U′) (a truly random restriction)
◮ Chen, Steinke, Vadhan '15 ⇒ w.h.p. over R^{∘t}, ∆(f|_{R^{∘t}}) ≤ polylog n
◮ (In fact, the simplification they show is more severe)
◮ Again, these statements hold "up to sandwiching." The proof uses Fourier analysis
Derandomizing simplification

◮ Let f be a depth-(d − 1) read-once AC⁰ formula
◮ Let b ∈ {0,1}
◮ Computational problem: Given y, z ∈ {0,1}^n, decide whether f|_{Res(y,z)} ≡ b
◮ Lemma: This can be decided in depth-d read-once AC⁰
Deciding whether f|_{Res(y,z)} ≡ b

◮ (a ∧ b ∧ c) ≡ 0 ⇐⇒ (a ≡ 0) ∨ (b ≡ 0) ∨ (c ≡ 0)
◮ (a′ ∨ b′ ∨ c′) ≡ 0 ⇐⇒ (a′ ≡ 0) ∧ (b′ ≡ 0) ∧ (c′ ≡ 0)
Deciding whether f|_{Res(y,z)} ≡ b (continued)

◮ At the bottom, we get one additional layer:
  (Res(y,z)_i ≡ b) ⇐⇒ (y_i = 0 ∧ z_i = b)
  (¬Res(y,z)_i ≡ b) ⇐⇒ (y_i = 0 ∧ z_i = 1 − b)
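The recursion on these two slides can be sketched directly. The tuple encoding and function name are illustrative; note that the ∧/∨ rules are only sound because a read-once formula's children read disjoint variables:

```python
def is_const(f, y, z, b):
    """Decide whether f|_{Res(y,z)} ≡ b, where coordinate i is fixed
    to z[i] when y[i] == 0 and left alive (⋆) when y[i] == 1.
    Formulas: ('AND'|'OR', [children]), ('VAR', i), ('NOT', ('VAR', i))."""
    gate, payload = f
    if gate == 'VAR':
        i = payload
        return y[i] == 0 and z[i] == b       # (y_i = 0) ∧ (z_i = b)
    if gate == 'NOT':
        i = payload[1]
        return y[i] == 0 and z[i] == 1 - b   # (y_i = 0) ∧ (z_i = 1 − b)
    if gate == 'AND':
        # AND ≡ 0 iff some child ≡ 0; AND ≡ 1 iff every child ≡ 1
        kids = (is_const(g, y, z, b) for g in payload)
        return any(kids) if b == 0 else all(kids)
    if gate == 'OR':
        # OR ≡ 0 iff every child ≡ 0; OR ≡ 1 iff some child ≡ 1
        kids = (is_const(g, y, z, b) for g in payload)
        return all(kids) if b == 0 else any(kids)

f = ('AND', [('VAR', 0), ('NOT', ('VAR', 1))])
# x_0 fixed to 1 and x_1 fixed to 1, so f ≡ 1 ∧ ¬1 ≡ 0
print(is_const(f, [0, 0], [1, 1], 0))  # True
```

Since each case is a single ∧ or ∨ over the children's subproblems, unwinding the recursion yields the depth-d read-once AC⁰ formula claimed in the Lemma.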
Collapse under pseudorandom restrictions

◮ Let f be a depth-(d − 1) read-once AC⁰ formula
◮ Let b ∈ {0,1}
◮ X = Res(G_d ⊕ D, G′_d ⊕ D′)
◮ G_d, G′_d fool depth d, so Pr_X[f|_X ≡ b] ≈ Pr_R[f|_R ≡ b]
◮ Hybrid argument: Pr_{X^{∘t}}[f|_{X^{∘t}} ≡ b] ≈ Pr_{R^{∘t}}[f|_{R^{∘t}} ≡ b]
Bridging the gap from d − 1 to d + 1

◮ So far: depth-(d − 1) formulas collapse with about the right probability
◮ We were supposed to show that depth-(d + 1) formulas simplify w.r.t. ∆ w.h.p.
Idea of proof that ∆ ↦ polylog n

◮ Consider a formula of total depth d + 1
◮ [Figure: the bottom-level gates collapsing corresponds to the gates one level up having few children]
∆ = polylog n

◮ To recap: after t = O((log log n)²) restrictions, ∆ = polylog n
◮ Total cost so far: Õ(log n) truly random bits
Final step: MRT PRG

◮ Theorem (Meka, Reingold, Tal '19): There is an explicit PRG with seed length Õ(log(n/ε)) that fools functions of the form
  f = ⊕_{i=1}^m f_i,
  where f_1, …, f_m are on disjoint variables and each f_i can be computed by an ROBP of width O(1) and length polylog n
◮ (Proof uses the GMRTV approach, building on [GY14, CHRT18, Vio09])
◮ In our case,
  f = ∧_{i=1}^m f_i = 2^{−m} Σ_{S⊆[m]} (−1)^{|S|} ∏_{i∈S} (−1)^{f_i}
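The expansion ∧_{i=1}^m f_i = 2^{−m} Σ_{S⊆[m]} (−1)^{|S|} ∏_{i∈S} (−1)^{f_i}, which reduces fooling the AND to fooling XORs of the f_i, can be sanity-checked by brute force. A minimal sketch (function name is illustrative):

```python
from itertools import product

def and_via_signs(bits):
    """Evaluate ∧_i f_i via 2^{-m} Σ_{S⊆[m]} (−1)^{|S|} ∏_{i∈S} (−1)^{f_i}."""
    m = len(bits)
    total = 0
    for S in product([0, 1], repeat=m):  # S as an indicator vector
        prod = 1
        for i in range(m):
            if S[i]:
                prod *= (-1) ** bits[i]  # (−1)^{f_i}, i.e. the XOR over S in ±1 form
        total += (-1) ** sum(S) * prod
    return total // 2 ** m  # always an exact multiple of 2^m

# agrees with the plain AND on every 4-bit input
assert all(and_via_signs(b) == min(b) for b in product([0, 1], repeat=4))
print("identity verified")
```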
Directions for further research

◮ [Figure: the landscape of known PRGs, ordered by generality — arbitrary-order poly(n)- and O(1)-width ROBPs and read-once formulas (open); read-once AC⁰[⊕] (open); read-once AC⁰ (this work); 3-ROBPs [MRT19]; read-once CNFs [DETT10]; read-once polynomials [LV17]; regular O(1)-ROBPs [BRRY14]; permutation O(1)-ROBPs [CHHL18]; 2-ROBPs [SZ95]; parities and conjunctions [NN93]]
Read-once AC⁰[⊕]

◮ [Figure: a read-once formula over x_1, …, x_13 with ∧, ∨, and ⊕ gates]