Pseudorandom functions in almost constant depth from low-noise LPN


  1. Pseudorandom functions in almost constant depth from low-noise LPN
  Yu Yu, Cryptologic Research Center
  Joint work with John Steinberger

  2. Outline
  • Introduction to LPN
     Decisional and computational LPN
     Asymptotic hardness of LPN
     Related work
     (Randomized) PRFs and PRGs
  • The road map
     Overview of the LPN-based randomized PRG in AC0(mod 2)
     Bernoulli noise extractor in AC0(mod 2)
     Bernoulli-like noise sampler in AC0
     Randomized PRG → randomized PRF
  • Conclusion and open problems

  3. Learning Parity with Noise (LPN)
  • Challenger: A ←$ {0,1}^{q×n}, x ←$ {0,1}^n, e ← Ber_μ^q, y := A·x + e (mod 2)
  • Ber_μ: the Bernoulli distribution of noise rate 0 < μ < 1/2, i.e., Pr[Ber_μ = 1] = μ and Pr[Ber_μ = 0] = 1 − μ; Ber_μ^q is the q-fold product of Ber_μ
    (The slide illustrates this with a random 0/1 matrix A multiplied by the secret x, plus a sparse noise vector e, yielding y mod 2.)
  • Search LPN: given (A, y), find x
  • Decisional LPN: distinguish (A, y) from (A, U_q)
  • [Blum et al. 94, Katz & Shin 06]: the two versions are (polynomially) equivalent
  • In fact: one can use x ← Ber_μ^n instead of x ←$ {0,1}^n
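
To make the setup concrete, here is a minimal sketch (not from the talk) of how a challenger could sample an LPN instance, using NumPy; parameter names follow the notation above, and μ = n^{−1/2} is just an illustrative low-noise rate:

```python
import numpy as np

def sample_lpn(n, q, mu, rng):
    """Sample an LPN instance (A, y): public matrix A, noisy labels y = A.x + e."""
    A = rng.integers(0, 2, size=(q, n))    # A <-$ {0,1}^{q x n}
    x = rng.integers(0, 2, size=n)         # secret x <-$ {0,1}^n
    e = (rng.random(q) < mu).astype(int)   # noise e <- Ber_mu^q
    y = (A @ x + e) % 2                    # y := A.x + e (mod 2)
    return A, y, x, e

rng = np.random.default_rng(1)
n = 128
A, y, x, e = sample_lpn(n, q=2 * n, mu=n ** -0.5, rng=rng)  # low noise: mu = n^-c, c = 1/2
```

Search LPN asks to recover x from (A, y); decisional LPN only asks to distinguish (A, y) from (A, uniform).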

  4. Hardness of LPN (t = attacker's running time, q = number of samples)
  • Worst-case hardness: LPN (decoding a random linear code) is NP-hard.
  • Average-case hardness:
     constant noise μ = O(1): BKW (Blum, Kalai, Wasserman): t = q = 2^{O(n/log n)}; Lyubashevsky's tradeoff: t = 2^{O(n/loglog n)}, q = n^{1+ε}
     low noise μ = n^{−c} (for constant 0 < c < 1): t = 2^{O(n^{1−c})}, q = n + O(1)
  • Quantum resistance

  5. Related Work
  • Public-key cryptography from LPN
     CPA-secure PKE from low-noise LPN [Alekhnovich 03]
     CCA-secure PKE from low-noise LPN [Döttling et al. 12, Kiltz et al. 14]
     CCA-secure PKE from constant-noise LPN [Yu & Zhang, Crypto 16]
  • Symmetric cryptography from LPN
     Pseudorandom generators [Blum et al. 93, Applebaum et al. 09]
     Authentication schemes [Hopper & Blum 01, Juels et al. 05, …] [Kiltz et al. 11, Dodis et al. 12, Lyubashevsky & Masny 13, Cash et al. 16]
     Perfectly binding string commitment schemes [Jain et al. 12]
     Pseudorandom functions from (low-noise) LPN? ← This work

  6. Main results
  • Low-noise LPN implies:
     polynomial-stretch pseudorandom generators (PRGs) in AC0(mod 2), i.e., polynomial-size, O(1)-depth circuits with unbounded fan-in ∧, ∨, ⊕ gates
     pseudorandom functions (PRFs) in AC0(mod 2) of almost constant depth, i.e., polynomial-size, ω(1)-depth circuits with unbounded fan-in ∧, ∨, ⊕ gates
    [Razborov & Rudich 94]: good PRFs do NOT exist in (constant-depth) AC0(mod 2)
  • More about the PRGs/PRFs:
     weak seed/key of sublinear entropy & security ≈ LPN on a linear-size secret
     uniform seed/key of size λ & security up to 2^{O(λ/log λ)}
  • Technical tools:
     Bernoulli noise extractor in AC0(mod 2): Rényi entropy source → Bernoulli distribution
     Bernoulli-like noise sampler in AC0: uniform randomness → Bernoulli-like distribution
     a security-preserving and depth-preserving domain extender for PRFs

  7. (Randomized) PRGs, PRFs and LPN
  • G_a: {0,1}^n × {0,1}^m → {0,1}^l (n < l) is a randomized PRG if (G_a(U_n), a) ~_c (U_l, a), where a is the public coin
  • F_{k,a} (key k, public coin a) is a randomized PRF if for every PPT distinguisher D:
    |Pr[D^{F_{k,a}}(a) = 1] − Pr[D^{R}(a) = 1]| = negl(n),
    where R is a random function with the same domain and range
  • Can we obtain (randomized) PRGs and weak PRFs from LPN? Try eliminating the noise (like LWR from LWE): for a deterministic map L(·) applied blockwise to the inner products ⟨a_1, x⟩, …, ⟨a_q, x⟩, set G_a(x) = L(a·x) and F_x(a) = L(a·x)
    [Akavia et al. 14]: this may not work! (See the sketch below.) Our approach instead: convert an entropy source into Bernoulli noise
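
For intuition only, a sketch of the noise-elimination candidate mentioned above; `L` is an arbitrary deterministic compression map supplied by the caller (a stand-in, since the slide leaves it abstract), and, as the slide notes, [Akavia et al. 14] suggests such candidates may be insecure:

```python
import numpy as np

def noise_free_candidate(A, x, L, block=8):
    """Noise-free candidate a la LWR: apply a deterministic map L to
    blocks of the q inner products <a_i, x> mod 2, instead of adding
    Bernoulli noise. Assumes q is a multiple of `block`.
    NOT known to be secure [Akavia et al. 14]."""
    bits = (A @ x) % 2                # the q noiseless inner products
    blocks = bits.reshape(-1, block)  # group them for the deterministic map L
    return np.concatenate([np.atleast_1d(L(b)) for b in blocks])
```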

  8. Overview: LPN-based randomized PRG
  • Input: a (possibly weak) seed x and a public coin a
  • Noise sampling: convert (almost all the entropy of) x into Bernoulli-like noise (s, e), where s = (s_1, …, s_n) and e = (e_1, …, e_q)
  • Output: G_a(x) = a·s + e (with a interpreted as a q×n matrix)
  • Theorem: Assume the decisional LPN is (q = (1+O(1))·n, t, ε)-hard on a secret of size n with noise rate μ = n^{−c} (0 < c < 1). Then G_a is a (t − poly(n), O(ε))-hard randomized PRG in AC0(mod 2) on
     a weak seed x of entropy O(n^{1−c}·log n), or
     a uniform seed x of size O(n^{1−c}·log n)
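
The template in code form, as a minimal sketch: `seed_to_noise` is a placeholder for the Bernoulli noise extractor/sampler of the next two slides, and the public coin a is represented directly as the q×n matrix it encodes:

```python
import numpy as np

def randomized_prg(a_matrix, seed_to_noise, x):
    """G_a(x) = a.s + e (mod 2): convert the seed x into Bernoulli-like
    (s, e), then output an LPN-style sample under the public coin a."""
    s, e = seed_to_noise(x)          # almost all of x's entropy goes into (s, e)
    return (a_matrix @ s + e) % 2    # pseudorandom under decisional LPN
```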

  9. Bernoulli Noise Extractor
  • Sampling Ber_μ for μ = 2^{−j}: output AND(x_1, …, x_j) = x_1 ∧ x_2 ∧ ⋯ ∧ x_j
  • For μ = n^{−c} (i.e., j = c·log n), the Shannon entropy is H(Ber_μ) ≈ μ·log(1/μ), so:
     naively: λ random bits → λ/j = O(λ/log n) Bernoulli bits
     in theory: λ random bits → λ/H(Ber_μ) ≈ O(λ·n^c/log n) Bernoulli bits
     [Applebaum et al. 09]: x retains a lot of entropy even given the noise sampled from it
  • The proposal: e_i := AND(h_i(x)), where h_1, h_2, …, h_q are 2-wise independent hash functions (randomized by the public coin a)
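
A sketch of the proposed extractor, assuming the standard fact that random affine maps over GF(2) are 2-wise independent; the random matrices and shifts below play the role of the hash functions h_i drawn from the public coin:

```python
import numpy as np

def bernoulli_extract(x, q, j, rng):
    """e_i := AND of the j output bits of h_i(x), for 2-wise independent
    hashes h_i(x) = M_i.x + b_i over GF(2). Each e_i is Ber_{2^-j}-like
    provided the source x has enough Renyi entropy."""
    e = np.empty(q, dtype=int)
    for i in range(q):
        M = rng.integers(0, 2, size=(j, len(x)))  # random matrix part of h_i
        b = rng.integers(0, 2, size=j)            # random shift part of h_i
        e[i] = int(((M @ x + b) % 2).all())       # AND of the j hash output bits
    return e
```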

  10. Bernoulli Noise Extractor (cont'd)
  • The extractor is in AC0(mod 2)
  • Theorem (informal): Let h_1, …, h_q be 2-wise independent hash functions. For any source x of Rényi entropy λ and any constant 0 < Δ ≤ 1:
    SD( (a, (e_1, …, e_q)), (a, Ber_μ^q) ) < 2^{((1+Δ)·q·H(Ber_μ) − λ)/2} + 2^{−Δ²·μ·q/3}
  • Parameters: μ = n^{−c}; set q = Ω(n) and λ = (1+2Δ)·q·H(Ber_μ) = Ω(n^{1−c}·log n)
  • PRG's stretch: output length / input length = (q−n)/λ = n^{Ω(1)}
  • Proof: Cauchy–Schwarz + 2-wise independence (like the crooked LHL [Dodis & Smith 05]) + flattening Shannon entropy [HILL99] + Chernoff
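
To get a feel for the two error terms, a quick back-of-the-envelope evaluation of the (informal) bound; the concrete choices q = 4n, c = 1/2, Δ = 1/2 are mine, purely for illustration:

```python
import math

def extractor_bound(n, c, delta):
    """Evaluate both terms of the statistical-distance bound for
    mu = n^-c, q = 4n, lambda = (1 + 2*delta) * q * H(Ber_mu)."""
    mu = n ** -c
    q = 4 * n
    h = -mu * math.log2(mu) - (1 - mu) * math.log2(1 - mu)  # H(Ber_mu)
    lam = (1 + 2 * delta) * q * h                           # required Renyi entropy
    lhl_term = 2 ** (((1 + delta) * q * h - lam) / 2)       # Cauchy-Schwarz/LHL term
    chernoff_term = 2 ** (-(delta ** 2) * mu * q / 3)       # Chernoff term
    return lam, lhl_term, chernoff_term

print(extractor_bound(n=1024, c=0.5, delta=0.5))  # lambda ~ 1.6e3 bits; both terms < 1e-3
```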

  11. An alternative: Bernoulli noise sampler
  • Uses uniform randomness (rather than a weak random source), and works in AC0 (rather than AC0(mod 2))
  • The idea: take the bitwise OR of 2μq independent random Hamming-weight-1 strings of length q, e.g.:
    0001000000000000000
    0000000100000000000
    ⋮   (2μq strings in total)
    0000000000000001000
    bitwise OR = 0001000100000001000
  • This needs 2μq·log q uniform random bits; denote the resulting distribution by ψ_μ^q
  • Asymptotically optimal: for μ = n^{−c} and q = poly(n), 2μq·log q = O(H(Ber_μ^q))
  • PRG: G_a(x) = a·s + e, sampling (s, e) ← ψ_μ^{n+q} from the uniform seed x
  • Theorem: G_a is a randomized PRG of seed length O(n^{1−c}·log n) with security comparable to the underlying standard LPN with secret size n.
    Proof: (1) computational LPN → computational ψ_μ^{n+q}-LPN; (2) computational ψ_μ^{n+q}-LPN → decisional ψ_μ^{n+q}-LPN, via the sample-preserving reduction of [Applebaum et al. 07]
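
A sketch of the ψ_μ^q sampler; note that the OR of weight-1 strings may have colliding 1-positions, which is exactly why the output is only "Bernoulli-like":

```python
import numpy as np

def psi_sample(q, mu, rng):
    """psi_mu^q: bitwise OR of 2*mu*q independent Hamming-weight-1
    strings of length q. Consumes about 2*mu*q*log2(q) uniform bits
    (log2(q) bits to pick the 1-position of each string)."""
    m = int(2 * mu * q)                # number of weight-1 strings
    out = np.zeros(q, dtype=int)
    ones = rng.integers(0, q, size=m)  # position of the 1 in each string
    out[ones] = 1                      # bitwise OR of all the strings
    return out
```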

  12. Randomized PRGs to PRFs
  • Given a randomized PRG G_a: {0,1}^n × {0,1}^m → {0,1}^{n²} in AC0(mod 2), how do we construct a PRF in AC0(mod 2)?
  ① A PRF of input size ω(1)·log n: an n-ary GGM tree of depth d = ω(1) (see the code sketch after this slide). Split the n² output bits of G_a into n blocks of n bits each, writing G_a^{(i)} for the block selected by index i ∈ {0,1}^{log n} (so the blocks are G_a^{(0⋯00)}, G_a^{(0⋯01)}, …, G_a^{(1⋯11)}), and define
    F_{k,a}(x_1 ⋯ x_{d·log n}) := G_a^{(x_{(d−1)·log n+1} ⋯ x_{d·log n})}( ⋯ G_a^{(x_{log n+1} ⋯ x_{2·log n})}( G_a^{(x_1 ⋯ x_{log n})}(k) ) ⋯ )
  ② Domain extension from {0,1}^{ω(1)·log n} to {0,1}^n (with security & depth preserved), via a generalized Levin's trick:
    F′_{k,a}(x) := F_{k_1,a}(h_1(x)) ⊕ F_{k_2,a}(h_2(x)) ⊕ ⋯ ⊕ F_{k_l,a}(h_l(x))
    with universal hash functions h_1, …, h_l: {0,1}^n → {0,1}^{ω(1)·log n} and key k := (k_1, h_1, …, k_l, h_l)
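
A sketch of step ①, the n-ary GGM tree; `G` stands for the randomized PRG with its public coin already fixed, mapping n bits to n² bits, and n is assumed to be a power of two so that each group of log n input bits selects one block:

```python
def ggm_prf(G, k, x_bits, n):
    """n-ary GGM tree of depth d = len(x_bits) / log2(n): each level uses
    the next log2(n) input bits to pick one n-bit block of G(state)."""
    logn = n.bit_length() - 1                # log2(n), n a power of two
    state = k                                # root of the tree: the key
    for level in range(0, len(x_bits), logn):
        i = int("".join(map(str, x_bits[level:level + logn])), 2)
        state = G(state)[i * n:(i + 1) * n]  # descend to the i-th child
    return state
```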

  13. Randomized PRGs to PRFs (cont'd)
  • Theorem [Generalized Levin's trick]: For random functions R_1, …, R_l: {0,1}^{ω(1)·log n} → {0,1}^n and universal hash functions h_1, …, h_l: {0,1}^n → {0,1}^{ω(1)·log n}, let
    R′(x) := R_1(h_1(x)) ⊕ R_2(h_2(x)) ⊕ ⋯ ⊕ R_l(h_l(x)).
    Then R′ is (q/n^{ω(1)})^l-indistinguishable from a random function {0,1}^n → {0,1}^n, for any (computationally unbounded) adversary making up to q oracle queries.
  • See also [Bellare et al. 99], [Maurer 02], [Döttling & Schröder 15], [Gazi & Tessaro 15]
  • Our proof uses Patarin's H-coefficient technique
  • Security is preserved for q = n^{ω(1)} and l = O(n/log n)
  • Theorem [The PRF]: Assume the decisional LPN is (q = (1+O(1))·n, t, ε)-hard on a secret of size n with noise rate μ = n^{−c} (0 < c < 1). Then for any depth d = ω(1) there exists a (q = n^{ω(1)}, t − poly(q, n), O(d·q·ε))-hard randomized PRF F′_{k,a} in AC0(mod 2) of depth ω(1), on any weak key k of entropy O(n^{1−c}·log n).
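
And a sketch of step ②, the generalized Levin's trick, over Python ints used as n-bit strings; `Fs` are the independently keyed copies of the short-input PRF and `hs` the universal hash functions taken from the key:

```python
def levin_extend(Fs, hs, x):
    """F'(x) = F_{k_1}(h_1(x)) ^ ... ^ F_{k_l}(h_l(x)): each universal
    hash h_i maps the n-bit input x down to the omega(1)*log n - bit
    domain of the short-input PRF, and the l outputs are XORed."""
    out = 0
    for F, h in zip(Fs, hs):
        out ^= F(h(x))  # one independently keyed PRF copy per hash
    return out
```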
