
Lattice-Based Cryptography: Trapdoors, Discrete Gaussians, and Applications
Chris Peikert, Georgia Institute of Technology, crypt@b-it 2013

Agenda
  1 Strong trapdoors for lattices
  2 Discrete Gaussians, sampling, and preimage sampling


  1. Gaußians
     ◮ The 1-dim Gaussian function (the pdf of the normal distribution with std dev 1/√(2π)):
         ρ(x) := exp(−π · x^2).   Also define ρ_s(x) := ρ(x/s) = exp(−π · (x/s)^2).
     ◮ Sum of Gaussians centered at the integer lattice points:
         f_s(c) = Σ_{z ∈ Z} ρ_s(c − z) = ρ_s(c + Z).
     ◮ Fact: ρ_s(c + Z) ∈ [1 ± ε] · s for all c ∈ R, where ε ≤ 2 · exp(−π s^2).
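The fact above is easy to check numerically. The following sketch (my own toy check, not from the slides) truncates the sum over Z to integers within 10s of c, which captures essentially all of the mass, and compares the deviation |ρ_s(c + Z)/s − 1| against the bound ε for a few shifts c.

```python
import math

def rho(x, s=1.0):
    """The Gaussian function rho_s(x) = exp(-pi * (x/s)^2)."""
    return math.exp(-math.pi * (x / s) ** 2)

def rho_coset_1d(c, s=1.0, tail=10):
    """Approximate rho_s(c + Z) by summing over integers z with |c - z| <= tail*s."""
    radius = int(math.ceil(tail * s))
    center = round(c)
    return sum(rho(c - z, s) for z in range(center - radius, center + radius + 1))

s = 2.0
eps = 2 * math.exp(-math.pi * s ** 2)      # the slide's bound on the error
for c in (0.0, 0.25, 0.5, 0.77):
    total = rho_coset_1d(c, s)
    print(f"c={c:.2f}  rho_s(c+Z)={total:.9f}  |total/s - 1|={abs(total / s - 1):.2e}  (eps ~ {eps:.2e})")
```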

  2. n-dimensional Gaussians
     ◮ The n-dim Gaussian: ρ(x) := exp(−π · ‖x‖^2) = ρ(x_1) · · · ρ(x_n). Clearly, it is rotationally invariant.
     ◮ Fact: Suppose L has a basis B with M = max_i ‖b̃_i‖ (the Gram–Schmidt vectors of B). Then
         ρ_s(c + L) ∈ [1 ± ε] · s^n for all c ∈ R^n, where ε ≤ 2n · exp(−π (s/M)^2).
       So s ≈ M √(log n) suffices for near-uniformity.
     (Figure: a 2-dim basis b_1, b_2 and its Gram–Schmidt orthogonalization b̃_1 = b_1, b̃_2.)
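A small 2-dimensional sanity check of this fact (the basis B and all parameters below are toy choices of mine): enumerate lattice points B·k in a large box, sum the Gaussian mass of a few different cosets c + L, and observe that the masses agree up to a tiny relative error once s is comfortably above M.

```python
import itertools, math
import numpy as np

def gram_schmidt_lengths(B):
    """Lengths of the Gram-Schmidt orthogonalization of the columns of B."""
    _, R = np.linalg.qr(B)
    return np.abs(np.diag(R))

def rho_coset(B, c, s, box=25):
    """Approximate rho_s(c + L(B)) by summing over lattice points B*k with |k_i| <= box."""
    total = 0.0
    for k in itertools.product(range(-box, box + 1), repeat=B.shape[1]):
        v = c + B @ np.array(k, dtype=float)
        total += math.exp(-math.pi * float(v @ v) / s ** 2)
    return total

B = np.array([[3.0, 1.0],
              [0.0, 2.0]])                   # toy 2-dim basis (my own choice)
M = gram_schmidt_lengths(B).max()            # M = max_i ||b~_i||
s = 3.0 * M                                  # comfortably above M * sqrt(log n) for n = 2

masses = [rho_coset(B, np.array(c), s) for c in [(0.0, 0.0), (1.3, 0.4), (0.5, 1.9)]]
print("coset masses:", masses)
print("relative spread:", (max(masses) - min(masses)) / min(masses))
```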

  3. Discrete Gaussians
     ◮ Define the discrete Gaussian distribution over a coset c + L as
         D_{c+L,s}(x) = ρ_s(x) / ρ_s(c + L)   for all x ∈ c + L.
     ◮ Consider the following experiment:
         1 Choose x ∈ Z^n from D_{Z^n,s}.
         2 Reveal the coset x + L (e.g., as x̄ = x mod B for some basis B).
       Immediate facts:
         1 Every coset c + L is equally∗ likely: we get the uniform distribution over Z^n / L.
         2 Given that x ∈ c + L, it has conditional distribution D_{c+L,s}.
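A sketch of this experiment with L = q·Z^n, so cosets are simply residues mod q (the naive rejection sampler for D_{Z,s} and the toy parameters are my own choices): the empirical coset frequencies come out essentially uniform, as fact 1 predicts.

```python
import math, random
from collections import Counter

def sample_dgauss_z(s, c=0.0, tail=8):
    """Rejection-sample the discrete Gaussian D_{Z,s,c} (naive sketch, not optimized)."""
    lo, hi = int(math.floor(c - tail * s)), int(math.ceil(c + tail * s))
    while True:
        z = random.randint(lo, hi)
        if random.random() <= math.exp(-math.pi * ((z - c) / s) ** 2):
            return z

n, q, s = 2, 3, 10.0                     # toy parameters; take L = q * Z^n
counts = Counter()
trials = 20000
for _ in range(trials):
    x = tuple(sample_dgauss_z(s) for _ in range(n))   # 1. choose x from D_{Z^n, s}
    coset = tuple(xi % q for xi in x)                 # 2. reveal x + L, here as x mod q
    counts[coset] += 1

print("empirical coset frequencies (uniform would be 1/9 ~ 0.111):")
for coset in sorted(counts):
    print(coset, round(counts[coset] / trials, 4))
```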

  4. Preimage Sampleable TDF: Evaluation
     ◮ A 'hard' description of L specifies f. Concretely: an SIS matrix A defines f_A.
     ◮ f(x) = x mod L for Gaussian x ← D_{Z^m,s}. Concretely: f_A(x) = Ax = u ∈ Z_q^n.
     ◮ Inverting f_A ⇔ decoding a uniform syndrome u ⇔ solving SIS.
     ◮ Given u, the conditional distribution of x is the discrete Gaussian D_{L⊥_u(A),s}.
     (Figure: f maps a Gaussian-distributed point x to its coset; the picture shows a 2-dim lattice containing (q, 0) and (0, q), the origin O, and x.)
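A minimal sketch of evaluating f_A (toy parameters of my own, and the same naive 1-dim sampler as above): choose a uniformly random A, draw a Gaussian preimage x ← D_{Z^m,s}, and output the syndrome u = A·x mod q.

```python
import math, random
import numpy as np

def sample_dgauss_z(s, c=0.0, tail=8):
    """Naive rejection sampler for the discrete Gaussian D_{Z,s,c} (sketch)."""
    lo, hi = int(math.floor(c - tail * s)), int(math.ceil(c + tail * s))
    while True:
        z = random.randint(lo, hi)
        if random.random() <= math.exp(-math.pi * ((z - c) / s) ** 2):
            return z

n, m, q, s = 4, 16, 97, 4.0                           # toy SIS parameters, far below real sizes
rng = np.random.default_rng(0)
A = rng.integers(0, q, size=(n, m), dtype=np.int64)   # uniformly random SIS matrix A

x = np.array([sample_dgauss_z(s) for _ in range(m)])  # Gaussian preimage x <- D_{Z^m, s}
u = (A @ x) % q                                       # evaluate f_A(x) = A x = u in Z_q^n
print("x =", x)
print("u = f_A(x) =", u)
```

Inverting f_A means recovering such a short preimage of a uniform syndrome u from A alone, which is the SIS problem; with a trapdoor one can instead sample a preimage directly, as in the next entry.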

  5. Preimage Sampling (f^{-1}): Method #1
     ◮ Sample D_{L⊥_u(A),s} given any short enough basis S: max_i ‖s̃_i‖ ≤ s.
       ⋆ Unlike [GGH'96], the output leaks nothing about S! (the bound s is public)
     ◮ "Nearest-plane" algorithm with randomized rounding [Klein'00, GPV'08].
     (Figure: the basis vectors s_1, s_2, the coset L⊥_u(A), the origin O, and a sampled point x.)
     ◮ Proof idea: ρ_s((c + L) ∩ plane) depends only on dist(0, plane); there is essentially no dependence on the shift within the plane.
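A sketch of the randomized nearest-plane sampler of [Klein'00, GPV'08] (my own minimal implementation: the 1-dimensional step uses a naive rejection sampler, and real implementations must handle precision and side channels far more carefully). Given a basis S, it returns a point of L(S) distributed roughly as a discrete Gaussian of parameter s centered at a target t, provided s ≥ max_i ‖s̃_i‖ up to a small factor.

```python
import math, random
import numpy as np

def sample_dgauss_z(s, c=0.0, tail=8):
    """Naive rejection sampler for the discrete Gaussian D_{Z,s,c} (sketch)."""
    lo, hi = int(math.floor(c - tail * s)), int(math.ceil(c + tail * s))
    while True:
        z = random.randint(lo, hi)
        if random.random() <= math.exp(-math.pi * ((z - c) / s) ** 2):
            return z

def nearest_plane_sample(S, s, t):
    """
    Randomized nearest-plane sampling [Klein'00, GPV'08] (sketch).
    S has the basis vectors as columns; returns a point of L(S) distributed
    close to the discrete Gaussian of parameter s centered at t, assuming
    s is at least max_i ||s~_i|| (times a small log factor).
    """
    S = np.asarray(S, dtype=float)
    Q, R = np.linalg.qr(S)                 # S = Q R; s~_i = Q[:, i] * R[i, i]
    c = np.array(t, dtype=float)
    v = np.zeros(S.shape[0])
    for i in reversed(range(S.shape[1])):
        gs_len = abs(R[i, i])                                  # ||s~_i||
        center = (Q[:, i] @ c) * np.sign(R[i, i]) / gs_len     # coefficient of c along s~_i
        z = sample_dgauss_z(s / gs_len, center)                # randomized rounding
        c = c - z * S[:, i]
        v = v + z * S[:, i]
    return v

# Toy usage: sample lattice points near a target.
S = np.array([[5.0, 1.0],
              [1.0, 4.0]])
s = 3.0 * np.abs(np.diag(np.linalg.qr(S)[1])).max()    # s >= max_i ||s~_i||
t = np.array([0.3, -1.7])
print([nearest_plane_sample(S, s, t) for _ in range(4)])
```

To sample a preimage from D_{L⊥_u(A),s} as in the TDF, one picks any x0 with A·x0 = u (mod q), runs the sampler on the lattice L⊥(A) (for which S is a short basis) with target x0, and outputs x = x0 − v, which lies in the correct coset and has the desired Gaussian shape.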

  6. Identity-Based Encryption
     ◮ Proposed by [Shamir'84]: could this exist?
     (Figure: a key authority holding mpk (and msk) issues secret keys sk_Alice, sk_Bobbi, ... to users Alice, Bobbi, Carol; a sender encrypts as Enc(mpk, "Alice", msg), using only the identity "Alice".)

  7. Fast-Forward 17 Years...
     1 [BonehFranklin'01, ...]: first IBE construction, using "new math" (elliptic curves with bilinear pairings)
     2 [Cocks'01, BGH'07]: quadratic residuosity mod N = pq [GM'82]
     3 [GPV'08]: lattices!

  8. Recall: 'Dual' LWE Cryptosystem
     ◮ Key generation: x ← Gauss; the public key is u = Ax = f_A(x).
     ◮ Encryption of a bit, using an LWE secret s and errors e, e':
         b^t = s^t A + e^t   (ciphertext 'preamble')
         b' = s^t u + e' + bit · q/2   ('payload')
     ◮ Decryption: b' − b^t x ≈ bit · q/2.
     ◮ The eavesdropper's view is (A, u, b, b').
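A toy, end-to-end sketch of the scheme following the slide's equations (all parameters are far too small for security, and the rounded continuous Gaussian below is a simplification of my own standing in for a proper discrete Gaussian error).

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, q = 8, 64, 4093            # toy parameters (assumptions, far too small for security)
s_key, s_err = 4.0, 2.0          # Gaussian widths for the key and the LWE errors

def dgauss_vec(size, s):
    """Rounded continuous Gaussian as a stand-in for a discrete Gaussian (simplification)."""
    return np.rint(rng.normal(0, s / np.sqrt(2 * np.pi), size)).astype(np.int64)

# Key generation: secret x <- Gauss, public key u = A x (mod q)
A = rng.integers(0, q, size=(n, m), dtype=np.int64)
x = dgauss_vec(m, s_key)
u = (A @ x) % q

def encrypt(bit):
    s_vec = rng.integers(0, q, size=n, dtype=np.int64)        # LWE secret s
    e = dgauss_vec(m, s_err)
    e_prime = int(dgauss_vec(1, s_err)[0])
    b = (s_vec @ A + e) % q                                    # preamble  b^t = s^t A + e^t
    b_prime = (int(s_vec @ u) + e_prime + bit * (q // 2)) % q  # payload   b' = s^t u + e' + bit*q/2
    return b, b_prime

def decrypt(b, b_prime):
    d = (b_prime - int(b @ x)) % q                             # b' - b^t x ~ bit * q/2
    return 1 if q // 4 < d < 3 * q // 4 else 0

for bit in (0, 1, 1, 0):
    b, bp = encrypt(bit)
    print(bit, "->", decrypt(b, bp))
```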

  9. ID-Based Encryption
     ◮ Master public key: mpk = A. The 'identity' public key is u = H("Alice").
     ◮ User secret key: x ← f_A^{-1}(u), sampled by the authority using its trapdoor.
     ◮ Ciphertext, using an LWE secret s and errors e, e':
         b = s^t A + e^t   (ciphertext preamble)
         b' = s^t u + e' + bit · q/2   ('payload')
     ◮ Decryption: b' − b^t x ≈ bit · q/2.
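The only new ingredient relative to the dual scheme is deriving u from the identity rather than from a known preimage. Below is an illustrative hash-to-syndrome map of my own (a stand-in for the scheme's random-oracle H, not a specified construction); the authority would then use its trapdoor to sample sk = x ← f_A^{-1}(u) with the nearest-plane sampler, and encryption and decryption proceed exactly as in the dual-LWE sketch with this u in place of A·x.

```python
import hashlib
import numpy as np

def hash_to_syndrome(identity: str, n: int, q: int) -> np.ndarray:
    """Map an identity string to u in Z_q^n (illustrative random-oracle stand-in)."""
    coords = []
    counter = 0
    while len(coords) < n:
        digest = hashlib.sha256(f"{identity}|{counter}".encode()).digest()
        for i in range(0, len(digest), 4):
            coords.append(int.from_bytes(digest[i:i + 4], "big") % q)
        counter += 1
    return np.array(coords[:n], dtype=np.int64)

u_alice = hash_to_syndrome("Alice", n=8, q=4093)
print("u = H('Alice') =", u_alice)
# The key authority, holding a trapdoor for A, would now sample x <- f_A^{-1}(u_alice)
# (e.g., via the randomized nearest-plane sampler); ciphertexts for "Alice" are formed
# exactly as in the dual-LWE sketch, with u_alice in place of A x.
```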

  10. Tomorrow...
     ◮ Generating trapdoors (A with a short basis, or something equivalent)
     ◮ Removing the random oracle from signatures & IBE
     ◮ More surprising applications

     Selected bibliography for this talk:
       MR'04   D. Micciancio and O. Regev, "Worst-Case to Average-Case Reductions Based on Gaussian Measures," FOCS 2004 / SICOMP 2007.
       GPV'08  C. Gentry, C. Peikert, and V. Vaikuntanathan, "Trapdoors for Hard Lattices and New Cryptographic Constructions," STOC 2008.
       P'10    C. Peikert, "An Efficient and Parallel Gaussian Sampler for Lattices," CRYPTO 2010.

  11. Bonus Material: A Better Discrete Gaussian Sampling Algorithm

  12. Performance of the Nearest-Plane Sampling Algorithm? Good News, and Bad News...
     ✔ Tight: std dev s ≈ max_i ‖s̃_i‖ = the maximum distance between adjacent planes
     ✗ Not efficient: runtime Ω(n^3), high-precision arithmetic
     ✗ Inherently sequential: n adaptive iterations
     ✗ No efficiency improvement in the ring setting [NTRU'98, M'02, ...]
     A Different Sampling Algorithm [P'10]
     ◮ Simple & efficient: n^2 online adds and mults (mod q). Even better: Õ(n) time in the ring setting.
     ◮ Fully parallel: n^2 / P operations on any P ≤ n^2 processors.
     ◮ High quality: the same∗ Gaussian std dev as the nearest-plane algorithm. (∗ in cryptographic applications)

  13. A First Attempt
     ◮ [Babai'86] "round-off": c ↦ S · frac(S^{-1} · c). (Fast & parallel!)
     ◮ Deterministic round-off is insecure [NR'06]... but what about randomized rounding?
     (Figure: the basis vectors s_1, s_2, the coset L + c, and the origin O.)
     ◮ Non-spherical discrete Gaussian: the output x has covariance Σ := E_x[x · x^t] ≈ S · S^t.
       Covariance can be measured, and it leaks S! (up to rotation)
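A sketch illustrating the leak (the basis S, the rounding parameter r, and the naive 1-dimensional sampler are my own toy choices): randomize the round-off coefficients with discrete Gaussians, collect many outputs, and compare their empirical second moment with S·S^t. Under the normalization ρ(x) = exp(−π x^2), the proportionality constant works out to about r^2/(2π).

```python
import math, random
import numpy as np

def sample_dgauss_z(s, c=0.0, tail=8):
    """Naive rejection sampler for the discrete Gaussian D_{Z,s,c} (sketch)."""
    lo, hi = int(math.floor(c - tail * s)), int(math.ceil(c + tail * s))
    while True:
        z = random.randint(lo, hi)
        if random.random() <= math.exp(-math.pi * ((z - c) / s) ** 2):
            return z

S = np.array([[7.0, 1.0],
              [2.0, 3.0]])          # the secret short basis (toy choice)
S_inv = np.linalg.inv(S)
r = 4.0                             # per-coordinate rounding parameter

def randomized_roundoff(c):
    """Randomized Babai round-off: round each coordinate of S^{-1} c with D_{Z,r,.}."""
    t = S_inv @ c
    z = np.array([sample_dgauss_z(r, ti) for ti in t])
    return c - S @ z                # a point of the coset c + L(S)

rng = np.random.default_rng(2)
samples = np.array([randomized_roundoff(rng.uniform(-50, 50, size=2)) for _ in range(4000)])
emp_cov = samples.T @ samples / len(samples)
print("empirical second moment:\n", emp_cov)
print("(r^2 / 2pi) * S S^t:\n", (r ** 2 / (2 * math.pi)) * S @ S.T)
```

The two printed matrices agree up to sampling noise, so an observer who sees many outputs can estimate S·S^t, and hence recover S up to rotation.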

  14. Inspiration: Some Facts About Gaussians
     1 Continuous Gaussian ↔ positive definite covariance matrix Σ.
       (Positive definite means: u^t Σ u > 0 for all unit u.)
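A minimal illustration of fact 1 (a standard construction, included here as my own example): for any positive definite Σ, one obtains a continuous Gaussian with covariance Σ by applying a square root of Σ, for example its Cholesky factor, to a standard Gaussian.

```python
import numpy as np

rng = np.random.default_rng(3)
Sigma = np.array([[4.0, 1.2],
                  [1.2, 2.0]])              # positive definite covariance (toy choice)
C = np.linalg.cholesky(Sigma)               # Sigma = C C^t

g = rng.normal(size=(100000, 2))            # standard Gaussian samples
y = g @ C.T                                 # each row has covariance ~ Sigma
print("empirical covariance:\n", y.T @ y / len(y))

# Positive definiteness in action: u^t Sigma u > 0 for any unit vector u.
u = rng.normal(size=2); u /= np.linalg.norm(u)
print("u^t Sigma u =", float(u @ Sigma @ u))
```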
