Solving SVP and CVP in 2^n Time Using Discrete Gaussian Sampling
Divesh Aggarwal, National University of Singapore (NUS)
Daniel Dadush, Centrum Wiskunde & Informatica (CWI)
Oded Regev, New York University (NYU)
Noah Stephens-Davidowitz, New York University (NYU)
Lattices
A lattice ℒ ⊂ ℝ^n is the set of all integer combinations of some basis B = (b_1, …, b_n). ℒ(B) denotes the lattice generated by B.
Act I: The Shortest Vector Problem
Shortest Vector Problem (SVP)
Given: a lattice basis B ∈ ℝ^{n×n}.
Goal: compute a shortest non-zero vector in ℒ(B).
λ₁(ℒ) denotes the length of a shortest non-zero vector.
Algorithms for SVP

  Algorithm                                         Time               Space
  [Kan86, HS07, MW15] (enumeration)                 n^{O(n)}           poly(n)
  [AKS01] (sieving)                                 2^{O(n)}           2^{O(n)}
  [NV08, PS09, MV10a, …]                            2^{2.465n+o(n)}    2^{1.233n+o(n)}
  [MV10b] (Voronoi cell; deterministic; also CVP)   2^{2n+o(n)}        2^{n+o(n)}
  [ADRS15]                                          2^{n+o(n)}         2^{n+o(n)}
Our Algorithm
Gaussian Distribution
[Figure: animation slides building up the continuous Gaussian density.]

Discrete Gaussian Distribution
[Figure: animation slides building up the discrete Gaussian D_{ℒ,s}, which assigns each lattice point y a probability proportional to e^{−π‖y‖²/s²}.]
Discrete Gaussian Distribution
For a small parameter s, the samples concentrate on short vectors — including a shortest vector! If we can obtain "enough" samples from the discrete Gaussian with the "right" (small) parameter, then we can solve SVP.
Discrete Gaussian Distribution
We need at most 1.38^n vectors with s ≈ λ₁(ℒ)/√n [KL78]. (This uses bounds on the kissing number.)
D_{ℒ,s} is very well-studied for large parameters, s ≳ η(ℒ), i.e., above the "smoothing parameter" of the lattice. [Kle00, GPV08] show how to sample in this regime in polynomial time.
(Previously, one could not do much better below smoothing, even in exponential time.)
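For intuition, sampling the one-dimensional discrete Gaussian D_{ℤ,s} is easy by direct enumeration. This is a toy sketch (the function name is ours), not the [Kle00, GPV08] algorithm, which works over arbitrary lattice bases:

```python
import math
import random

def sample_dgauss_Z(s, tail=12, rng=random):
    """Sample from D_{Z,s}: Pr[k] proportional to exp(-pi*k^2/s^2).
    Enumerates integers within tail*s of 0; the truncated mass is negligible."""
    radius = int(tail * s) + 1
    support = range(-radius, radius + 1)
    weights = [math.exp(-math.pi * k * k / s ** 2) for k in support]
    r = rng.random() * sum(weights)
    for k, w in zip(support, weights):
        r -= w
        if r <= 0:
            return k
    return radius  # unreachable up to floating-point rounding
```

Over ℤ^n the coordinates are independent, so D_{ℤ^n,s} is just n independent copies of this; general lattices require Klein's randomized nearest-plane algorithm.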
Discrete Gaussian Distribution
Easy: large s [Kle00, GPV08]. Hard: small s — our goal.
Can we use samples from the easy regime (large s) to get samples from the hard regime (small s)?
Discrete Gaussian Distribution
[Figure: the average (X₁ + X₂)/2 of two discrete Gaussian samples need not be a lattice point.]
Converting Gaussian Vectors
What if we condition on the result being in the lattice? Progress! Unfortunately, this requires us to throw out a lot of vectors: we keep only about one out of every 2^n vectors each time we do this, leading to a very slow algorithm!
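A minimal sketch of this lossy conditioning step over the toy lattice ℒ = ℤ^n (the helper name is ours): a pair of samples is kept only when its average is again a lattice point.

```python
def naive_combine(samples):
    """Average disjoint pairs of lattice vectors (tuples of ints over Z^n),
    keeping a pair only when the average is itself in Z^n, i.e. only when
    the two vectors agree coordinate-wise mod 2. Over a general lattice,
    up to ~2^n pairs can be discarded for each pair kept."""
    out = []
    for y1, y2 in zip(samples[::2], samples[1::2]):
        if all((a - b) % 2 == 0 for a, b in zip(y1, y2)):
            out.append(tuple((a + b) // 2 for a, b in zip(y1, y2)))
    return out
```

For example, `naive_combine([(0, 0), (2, 4), (1, 0), (0, 1)])` keeps the first pair (average (1, 2)) and discards the second, whose average (1/2, 1/2) is not in ℤ².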
Converting Gaussian Vectors
[Figure: averaging two discrete Gaussian samples, (X₁ + X₂)/2.]
Converting Gaussian Vectors What about the average of two discrete Gaussian vectors conditioned on the result being in the lattice?
Converting Gaussian Vectors
When do we have (y₁ + y₂)/2 ∈ ℒ? We have (y₁ + y₂)/2 ∈ ℒ if and only if y₁ and y₂ are in the same coset of 2ℒ. (Note that there are 2^n cosets.)
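This equivalence is easy to verify exhaustively for ℒ = ℤ² on a small box (a sanity-check sketch; the helper names are ours):

```python
import itertools

def avg_in_lattice(y1, y2):
    # (y1 + y2)/2 lies in Z^n iff every coordinate sum is even
    return all((a + b) % 2 == 0 for a, b in zip(y1, y2))

def same_coset_mod_2L(y1, y2):
    # y1 = y2 (mod 2Z^n) iff the coordinates agree mod 2
    return all((a - b) % 2 == 0 for a, b in zip(y1, y2))

box = list(itertools.product(range(-2, 3), repeat=2))
assert all(avg_in_lattice(y1, y2) == same_coset_mod_2L(y1, y2)
           for y1 in box for y2 in box)
```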
Converting Gaussian Vectors
Define the sublattice ℒ ⊕ ℒ = {(y₁, y₂) : y₁ ≡ y₂ (mod 2ℒ)} ⊂ ℒ × ℒ, and the linear map avg(y₁, y₂) = ((y₁ + y₂)/2, (y₁ − y₂)/2). Then avg(ℒ ⊕ ℒ) = ℒ × ℒ.
If we sample y₁, y₂ ~ D_{ℒ,s} and condition on y₁ ≡ y₂ (mod 2ℒ), i.e., (y₁, y₂) ~ D_{ℒ⊕ℒ,s}, then avg(y₁, y₂) ~ D_{ℒ×ℒ, s/√2}. In particular, the average (y₁ + y₂)/2 is distributed as D_{ℒ, s/√2}. Progress!
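One can also check numerically that avg maps ℒ ⊕ ℒ into ℒ × ℒ and is invertible (y₁ = u + v, y₂ = u − v); a sketch for ℒ = ℤ²:

```python
import itertools

def avg(y1, y2):
    # the linear map (y1, y2) -> ((y1+y2)/2, (y1-y2)/2)
    u = tuple((a + b) // 2 for a, b in zip(y1, y2))
    v = tuple((a - b) // 2 for a, b in zip(y1, y2))
    return u, v

box = list(itertools.product(range(-2, 3), repeat=2))
for y1 in box:
    for y2 in box:
        if all((a - b) % 2 == 0 for a, b in zip(y1, y2)):  # (y1,y2) in L+L
            u, v = avg(y1, y2)
            # the image lies in Z^2 x Z^2, and avg is invertible
            assert y1 == tuple(a + b for a, b in zip(u, v))
            assert y2 == tuple(a - b for a, b in zip(u, v))
```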
Stitching a Discrete Gaussian Together
For y₁, y₂ ~ D_{ℒ,s}:
Pr[(y₁ + y₂)/2 = y | (y₁ + y₂)/2 ∈ ℒ] ∝ Σ_{c ∈ ℒ (mod 2ℒ)} Pr[D_{ℒ,s} ∈ c]² · Pr_{X₁,X₂ ~ D_{2ℒ+c,s}}[(X₁ + X₂)/2 = y]

Generating a single D_{ℒ, s/√2} sample:
1. Sample a coset c ∈ ℒ (mod 2ℒ) with probability ∝ Pr[D_{ℒ,s} ∈ c]².
2. Output (X₁ + X₂)/2, where X₁, X₂ ~ D_{2ℒ+c,s}.
Discrete Gaussian Combiner
Input: X₁, …, X_M, iid D_{ℒ,s} samples (M ≈ 2^n).
1. "Bucket" the samples according to their coset (mod 2ℒ).
2. Repeat many times:
   1. Sample a coset c with probability ∝ Pr[D_{ℒ,s} ∈ c]².
   2. Output (X_i + X_j)/2, for X_i, X_j ∈ c.
   3. Remove X_i, X_j from the list.
Problem: we don't have access to this coset distribution!
Rejection Sampling: Achieving Pr ∝ Pr[D_{ℒ,s} ∈ c]²
First pass: Sample c ~ D_{ℒ,s} (mod 2ℒ). Accept c with probability Pr[D_{ℒ,s} ∈ c]; otherwise reject.
Implementation: Sample X₁ ~ D_{ℒ,s} and let c be X₁ (mod 2ℒ). Sample X₂ ~ D_{ℒ,s}. Output c if X₁ ≡ X₂ (mod 2ℒ).
This is the same as the trivial strategy!
Rejection Sampling: Achieving Pr ∝ Pr[D_{ℒ,s} ∈ c]²
Second try: Sample c ~ D_{ℒ,s} (mod 2ℒ). Accept c with probability Pr[D_{ℒ,s} ∈ c] / p_max; otherwise reject, where p_max = max_{c ∈ ℒ (mod 2ℒ)} Pr[D_{ℒ,s} ∈ c].
Implementation: ???
Discrete Gaussian Combiner
Input: X₁, …, X_M, iid D_{ℒ,s} samples (M ≈ 2^n).
Use the first M/6 samples to estimate p_max, and record the number of samples landing in each of the 2^n buckets (cosets mod 2ℒ), splitting each bucket's samples into groups of ≈ 1/p_max samples (≈ M·p_max/3 groups in total).
Discrete Gaussian Combiner
Input: X₁, …, X_M, iid D_{ℒ,s} samples (M ≈ 2^n).
1. Compute p_max and the bucket counts (previous slide).
2. For X ranging over the last M/6 samples:
   1. Let c = X (mod 2ℒ).
   2. Find the first unused bucket count N_c for coset c.
   3. With probability proportional to N_c (chosen so that coset c is accepted with probability ≈ Pr[D_{ℒ,s} ∈ c] / p_max), output (X + X_j)/2, where X_j is any sample contributing to N_c.
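Putting the pieces together, here is a toy sketch of the estimate-then-reject combiner over ℒ = ℤ^n (the function name and the exact acceptance rule are our simplification of the slide's rule; bucket counts drift slightly as partners are consumed):

```python
import random
from collections import defaultdict

def rejection_combine(samples, seed=0):
    """Sketch of the combiner with rejection sampling over the toy lattice Z^n.
    The first half of the samples estimates the coset probabilities; each
    remaining sample X with coset c is then accepted with probability
    N_c / N_max (roughly Pr[D in c] / p_max) and paired with a stored sample
    from the same coset, so accepted cosets appear with probability roughly
    proportional to Pr[D in c]^2 / p_max."""
    rng = random.Random(seed)
    half = len(samples) // 2
    buckets = defaultdict(list)
    for y in samples[:half]:
        buckets[tuple(a % 2 for a in y)].append(y)
    n_max = max(len(v) for v in buckets.values())
    out = []
    for y in samples[half:]:
        c = tuple(a % 2 for a in y)
        if buckets[c] and rng.random() < len(buckets[c]) / n_max:
            partner = buckets[c].pop()
            out.append(tuple((a + b) // 2 for a, b in zip(y, partner)))
    return out
```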
How Many Vectors Do We Get?
M ≈ # input vectors.
# output vectors ≈ M · Σ_c Pr[D_{ℒ,s} ∈ c]² / max_c Pr[D_{ℒ,s} ∈ c].
Worst-case bound: the collision probability Σ_c Pr[D_{ℒ,s} ∈ c]² is at least 1/|support| = 1/2^n. The number of vectors may drop to M/2^{n/2} after a single step!
How Many Vectors Do We Get?
ρ_s(ℒ) := Σ_{y∈ℒ} e^{−π‖y‖²/s²} denotes the Gaussian mass of ℒ.
max_c ρ_s(2ℒ + c) = ρ_s(2ℒ), so p_max = ρ_s(2ℒ) / ρ_s(ℒ).
How Many Vectors Do We Get?
Recall that we only need 1.38^n samples to solve SVP!
ρ_s(ℒ) ≤ 2^{n/2} · ρ_{s/√2}(ℒ)
Setting M ≈ 2^n gives enough samples at the end.
Key Estimates
Poisson summation formula: for a "nice" function f,
Σ_{y∈ℒ} f(y + u) = (1/det(ℒ)) · Σ_{w∈ℒ*} f̂(w) e^{2πi⟨w,u⟩}.
Plug in f(x) = e^{−π‖x‖²/s²}:
ρ_s(ℒ + u) = (s^n / det(ℒ)) · Σ_{w∈ℒ*} e^{−π s²‖w‖²} e^{2πi⟨w,u⟩};
in particular, ρ_s(ℒ) = (s^n / det(ℒ)) · ρ_{1/s}(ℒ*).
Key Estimates
ρ_s(ℒ + u) = (s^n / det(ℒ)) · Σ_{w∈ℒ*} e^{−π s²‖w‖²} e^{2πi⟨w,u⟩};  ρ_s(ℒ) = (s^n / det(ℒ)) · ρ_{1/s}(ℒ*).
Corollary 1: max_u ρ_s(ℒ + u) = ρ_s(ℒ).
Corollary 2: ρ_{βs}(ℒ) ≤ β^n · ρ_s(ℒ) for β ≥ 1.
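For ℒ = ℤ (where det(ℤ) = 1 and ℤ* = ℤ), the identity specializes to ρ_s(ℤ) = s · ρ_{1/s}(ℤ), which is easy to check numerically, along with Corollary 2 for n = 1 (a sanity-check sketch; the function name is ours):

```python
import math

def rho_Z(s, radius=80):
    """Truncated Gaussian mass of Z: sum over k of exp(-pi*k^2/s^2).
    The tail beyond `radius` is far below floating-point precision."""
    return sum(math.exp(-math.pi * k * k / s ** 2)
               for k in range(-radius, radius + 1))

s = 1.7
assert abs(rho_Z(s) - s * rho_Z(1 / s)) < 1e-9   # Poisson summation for Z
beta = 2.0
assert rho_Z(beta * s) <= beta * rho_Z(s)        # Corollary 2 with n = 1
```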
Final Algorithm
SVPSolver(ℒ):
1. Use GPV to get ≈ 2^n samples from D_{ℒ,s} with s ≫ λ₁(ℒ).
2. Run the ("squaring") discrete Gaussian combiner on the result repeatedly.
3. This yields ≈ 2^{n/2} samples from D_{ℒ,s} with s ≈ λ₁(ℒ)/√n.
4. We can then simply output a shortest non-zero vector from our samples.
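An end-to-end toy run of this pipeline for ℒ = ℤ² (with our own simplified stand-ins: enumeration-based sampling instead of GPV, and greedy in-bucket pairing instead of the rejection-sampling combiner, which skews the coset weights slightly but preserves the overall picture):

```python
import math
import random
from collections import defaultdict

def sample_Z(s, rng):
    """Sample D_{Z,s} by enumerating a truncated support."""
    radius = int(12 * s) + 1
    ks = range(-radius, radius + 1)
    ws = [math.exp(-math.pi * k * k / s ** 2) for k in ks]
    r = rng.random() * sum(ws)
    for k, w in zip(ks, ws):
        r -= w
        if r <= 0:
            return k
    return radius

def combine_step(samples):
    """Greedily pair samples within each coset mod 2Z^2 and average them."""
    buckets = defaultdict(list)
    for y in samples:
        buckets[tuple(a % 2 for a in y)].append(y)
    return [tuple((a + b) // 2 for a, b in zip(y1, y2))
            for vs in buckets.values() for y1, y2 in zip(vs[::2], vs[1::2])]

rng = random.Random(42)
samples = [(sample_Z(4.0, rng), sample_Z(4.0, rng)) for _ in range(1024)]
for _ in range(3):           # each step shrinks the parameter by sqrt(2)
    samples = combine_step(samples)
shortest = min((y for y in samples if y != (0, 0)),
               key=lambda y: y[0] ** 2 + y[1] ** 2)
print(shortest)              # a shortest non-zero vector of Z^2 has length 1
```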
Act II: The Closest Vector Problem
Closest Vector Problem (CVP)
Given: a lattice basis B ∈ ℝ^{n×n} and a target t ∈ ℝ^n.
Goal: compute y ∈ ℒ(B) minimizing ‖t − y‖.
dist(t, ℒ) denotes this minimum distance.
Closest Vector Problem (CVP)
CVP seems to be the harder problem: there is a dimension-preserving reduction from SVP to CVP [GMSS99].
Algorithms for CVP

  Algorithm                                 Time           Exact CVP?    Deterministic?
  [Kan86, HS07, MW15] (enumeration)         n^{O(n)}       Yes           Yes
  [AKS02, BN09, HPS11, …] (sieving)         2^{O(n)}       Approximate   No
  [MV10b] (Voronoi cell)                    2^{2n+o(n)}    Yes           Yes
  [ADRS15] (discrete Gaussian)              2^{n+o(n)}     Approximate   No
  [ADS15]                                   2^{n+o(n)}     Yes           No
Disclaimer
The algorithm is quite complicated, so the following is an over-simplified, high-level sketch.
The Discrete Gaussian Distribution
[Figure: animation slides illustrating the discrete Gaussian distribution used by the CVP algorithm.]