Current constructions of two-source extractors

[Figure: x1 ~ X1 over {0,1}^n defines a table whose D rows are nmE(x1, 1), nmE(x1, 2), …, nmE(x1, D); x2 ~ X2 over {0,1}^n selects a subset of the rows (e.g., rows 1, 2, 3, 7); applying a resilient function f to the selected rows yields an output ≈ U1.]
Resilient functions

The sampled table is close to being uniform and t-wise independent in the good rows. We need f to be resilient: Say we have D' players, an ε-fraction of whom are malicious, while the rest are uniform and t-wise independent. The honest players draw their random bits first; afterwards, the malicious players choose their bits as they wish. With high probability, the outcome has small bias: the malicious players cannot substantially bias the outcome.
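To make the resilience requirement concrete, here is a toy simulation (an illustration only, not part of the construction; `bias_with_one_adversary` is a made-up helper). It measures how often a single malicious player who sees the honest bits can force the outcome: parity fails completely, while majority is rarely pivotal.

```python
import random

def bias_with_one_adversary(f, n_players, trials=2000, rng=random.Random(1)):
    """Estimate how often a single adversary (player 0) controls f's output:
    the adversary sees the honest bits and picks its own bit freely."""
    forced = 0
    for _ in range(trials):
        honest = [rng.randint(0, 1) for _ in range(n_players - 1)]
        outcomes = {f([b] + honest) for b in (0, 1)}
        if outcomes == {0, 1}:   # adversary is pivotal: it decides the output
            forced += 1
    return forced / trials

majority = lambda bits: int(sum(bits) * 2 > len(bits))
parity = lambda bits: sum(bits) % 2

print(bias_with_one_adversary(parity, 101))    # 1.0: parity is never resilient
print(bias_with_one_adversary(majority, 101))  # small: rarely pivotal
```

Here 101 honest-plus-adversary players suffice to see the gap: the adversary always flips the parity, but is pivotal for majority only when the honest bits split evenly.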
The bottleneck

A corollary of [KKL88]: even one malicious player can bias the output with probability at least log D'/D'. So we cannot hope for an error smaller than roughly 1/D', where D' is the size of our sampled table. Thus, to achieve error ε, the running time is at least 1/ε.
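A quick back-of-the-envelope sketch of why this bottleneck hurts (the helper `min_table_size` and the chosen targets are illustrative assumptions, not from the construction): the achievable error with a table of D' rows is at least about log2(D')/D', so the table, and hence the running time, blows up super-polynomially once we ask for error 2^(-sqrt(n)).

```python
import math

def min_table_size(eps):
    """Smallest power-of-two table size D' with log2(D')/D' <= eps,
    i.e., the KKL-style lower bound on the error is below the target."""
    d = 2
    while math.log2(d) / d > eps:
        d *= 2
    return d

for n in (64, 256, 1024):
    eps = 2.0 ** (-math.isqrt(n))    # target error 2^(-sqrt(n))
    print(n, min_table_size(eps))    # required table size grows very fast
```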
Today's talk
- Two-source extractors and the low-error challenge.
- Seeded and non-malleable extractors.
- Current constructions of two-source extractors via non-malleable extractors, and where they fail to achieve small error.
- Constructing low-error two-source extractors given "good" non-malleable extractors.
Getting a small error

We should abandon resilient functions if we want a small error. In current constructions, we need the sampled set to contain many good rows. Instead of trying to sample and then employ t-wise independence in the good rows, let's just try to hit a good row, a weaker sampling guarantee. We hit with a disperser.
Dispersers

Γ: {0,1}^n × [D] → {0,1}^m is a (K, K')-disperser if for every set A of cardinality at least K, Γ maps A to a set of cardinality greater than K', i.e., |Γ(A, [D])| > K'.

We are interested in the case where K' is small compared to 2^m. That is, we want to avoid small bad sets.

[RT]: When K' is not too large, say K' = εM, the lower bound on the degree is D = Ω( log(N/K) / log(1/ε) ).

[Figure: bipartite graph from {0,1}^n = [N] to {0,1}^m = [M]; a set A with |A| ≥ K on the left maps to a set B = Γ(A, [D]) of size > K' on the right.]
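For tiny parameters the disperser property can be verified by brute force. The checker below and the toy map `gamma` are made up for illustration; real dispersers live at exponentially larger scales.

```python
from itertools import combinations

def is_disperser(gamma, N, D, K, K_prime):
    """Brute-force check: gamma is a (K, K')-disperser if every set A of
    size >= K satisfies |gamma(A, [D])| > K'. Since the image only grows
    with A, it suffices to check sets of size exactly K."""
    for A in combinations(range(N), K):
        image = {gamma(x, i) for x in A for i in range(D)}
        if len(image) <= K_prime:
            return False
    return True

# Toy map over N=8 inputs, degree D=2, outputs in [4]; chosen arbitrarily.
gamma = lambda x, i: (x + 3 * i) % 4
print(is_disperser(gamma, N=8, D=2, K=2, K_prime=1))
```

A constant map, by contrast, sends every set to a single output and fails the check.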
Explicit disperser

Quite amazingly, when K = N^𝜀 for a constant 𝜀 < 1 (alternatively, for entropy k = 𝜀n), there exist explicit constructions that achieve this bound [BKSSW05, Raz05, Zuckerman06]. The key ingredient in Zuckerman's beautiful construction: a points-lines incidence graph. It also gives (sub-optimal) results for lower k, where 𝜀 is sub-constant.
Our reduction

We are given a source X1 over {0,1}^n1 with min-entropy k1 and a source X2 over {0,1}^n2 with min-entropy k2.

Ingredients:
nmE: {0,1}^n1 × [D] → {0,1}^m, a t-non-malleable extractor with error ε.
Γ: {0,1}^n2 × [t+1] → [D], an (εK2, εD)-disperser.

On input x1, x2, output ⊕_{i∈[t+1]} nmE(x1, Γ(x2, i)).
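The reduction itself is only a few lines once nmE and Γ are given. Below is a sketch of the wiring; `toy_nmE` and `toy_Gamma` are arbitrary placeholder functions for illustration, not actual non-malleable extractors or dispersers.

```python
from functools import reduce

def two_source_ext(x1, x2, nmE, Gamma, t):
    """The reduction: XOR the t+1 table entries selected by the disperser,
    i.e., output XOR_{i in [t+1]} nmE(x1, Gamma(x2, i))."""
    return reduce(lambda a, b: a ^ b,
                  (nmE(x1, Gamma(x2, i)) for i in range(t + 1)))

# Toy stand-ins (assumed for illustration only):
M = 1 << 8                                   # m = 8 output bits
toy_nmE = lambda x1, seed: (x1 * 1_000_003 + seed * 777_767) % M
toy_Gamma = lambda x2, i: (x2 + i * i) % 64  # D = 64 seeds

print(two_source_ext(5, 9, toy_nmE, toy_Gamma, t=3))
```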
Our reduction

[Figure: x1 ~ X1 over [N1] defines a table whose D rows are nmE(x1, 1), nmE(x1, 2), …, nmE(x1, D); x2 ~ X2 over [N2] is fed to Γ, which selects t+1 rows (e.g., rows 1, 2, 3, 7); XORing the selected rows yields an output ≈ U1. No resilient functions here!]
Correctness overview

The source X1 defines a set of good and bad seeds for the n.m. extractor. Let G be the set of good seeds, of density at least 1-ε. Γ is an (εK2, εD)-disperser, so the number of elements x2 for which Γ(x2, [t+1]) contains only bad seeds is at most εK2. Thus, with probability at least 1 - εK2/K2 = 1-ε, the input x2 samples t+1 seeds of nmE, at least one of which, y, is good.
Correctness overview

In such a case, nmE(X1, y) is ε-close to uniform, even conditioned on t arbitrary outputs! This is since: For every y ∈ G and any y1, …, yt ∈ {0,1}^d \ {y}, it holds that (nmE(X1, y), nmE(X1, y1), …, nmE(X1, yt)) is ε-close to (U, nmE(X1, y1), …, nmE(X1, yt)). Hence, the parity of the sampled random variables is also close to uniform, and the overall error is 2ε.

(Recall: on input x1, x2, we output ⊕_{i∈[t+1]} nmE(x1, Γ(x2, i)).)
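The last step uses a simple fact: XORing in one block that is uniform and independent of the others yields a uniform output, no matter how the remaining blocks are correlated with each other. A tiny exhaustive check (toy parameters; the correlated blocks `b1`, `b2` are arbitrary made-up functions of shared randomness r):

```python
from itertools import product

M = 4                             # 2^m values for m = 2
b1 = lambda r: r                  # arbitrarily correlated "adversarial" blocks
b2 = lambda r: (3 * r + 1) % M
counts = {v: 0 for v in range(M)}
for y, r in product(range(M), range(M)):  # y: the uniform, independent block
    counts[y ^ b1(r) ^ b2(r)] += 1
print(counts)                     # every output value occurs equally often
```

For each fixed r, the map y ↦ y ^ b1(r) ^ b2(r) is a bijection, so the output distribution is exactly uniform.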
Our reduction So, if the n.m. extractor can support small error (and existing constructions can), we get a two-source construction with small error.
Our reduction

The parity is not resilient… What happened here? We proposed a different approach: Instead of sampling D' rows from the table and applying a resilient function, we pick a drastically smaller sample set, of size t+1. Instead of requiring that the number of malicious players be small, we have the weaker requirement that not all of the players in our sample set are malicious.
But does it work?

Or rather, when does it work? We have no option but to look closer at the parameters. A potential circularity hazard: The degree of Γ should be at most t+1, but the degree of Γ also depends on the seed length of the n.m. extractor, which in turn depends on t…
Our result