Combinatorial methods for solving LWE


  1. Combinatorial methods for solving LWE. Thomas Johansson, Dept. of Electrical and Information Technology, Lund University. Last part to appear at Asiacrypt 2018. London, September 2017.

  2. Outline: 1 Introduction to LWE (The LWE Problem, Motivation); 2 Background and reformulating LWE; 3 The BKW Algorithm; 4 Coded-BKW (Lattice Codes, Coded-BKW, Results and Complexity); 5 A New Algorithm: Coded-BKW with sieving; 6 Conclusions.

  4. Learning with Errors (LWE). There is a secret vector s in Z_q^n. We may query an oracle (who knows s). The LWE oracle with parameters (n, q, X): 1. Uniformly picks r from Z_q^n. 2. Picks a 'noise' e ← X. 3. Outputs the pair (r, v = ⟨r, s⟩ + e) as a sample.

  5. Learning with Errors (LWE). There is a secret vector s in Z_q^n. We may query an oracle (who knows s). The LWE oracle with parameters (n, q, X): 1. Uniformly picks r from Z_q^n. 2. Picks a 'noise' e ← X. 3. Outputs the pair (r, v = ⟨r, s⟩ + e) as a sample. The search problem (informal): Find s after collecting enough samples.

  6. Learning with Errors (LWE). There is a secret vector s in Z_q^n. We may query an oracle (who knows s). The LWE oracle with parameters (n, q, X): 1. Uniformly picks r from Z_q^n. 2. Picks a 'noise' e ← X. 3. Outputs the pair (r, v = ⟨r, s⟩ + e) as a sample. The search problem (informal): Find s after collecting enough samples. Error distribution X_{αq}: Discrete Gaussian over Z_q with mean 0 and standard deviation σ = αq.
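
As a concrete illustration of the oracle just described, here is a minimal Python sketch (not from the talk); the parameter values are toy choices, and a rounded continuous Gaussian stands in for the discrete Gaussian X with σ = αq.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, sigma = 5, 13, 1.0                      # toy parameters (assumed)
s = rng.integers(0, q, size=n)                # secret vector s in Z_q^n

def lwe_sample():
    r = rng.integers(0, q, size=n)            # 1. uniformly pick r from Z_q^n
    e = int(np.rint(rng.normal(0, sigma)))    # 2. pick 'noise' e <- X (rounded Gaussian stand-in)
    v = (int(r @ s) + e) % q                  # 3. v = <r, s> + e  (mod q)
    return r, v                               # the sample (r, v)

print(lwe_sample())
```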

  7. Example: Z_13 = {−6, −5, ..., 0, ..., 5, 6}, n = 5, e_i small:
     3s_1 + 5s_2 + 2s_3 − 4s_4 − s_5 + e_1 = 6
     4s_1 − s_2 + 3s_3 − 4s_4 + 3s_5 + e_2 = 9
     −2s_1 + 2s_2 + 2s_3 + 3s_4 − 3s_5 + e_3 = 0
     s_1 + 0s_2 − 4s_3 − 4s_4 + s_5 + e_4 = −1
     0s_1 − 5s_2 + 2s_3 − 2s_4 + s_5 + e_5 = −5
     −3s_1 + s_2 + 2s_3 − s_4 − 4s_5 + e_6 = 2
     2s_1 − s_2 + 3s_3 − s_4 + 3s_5 + e_7 = 5

  8. Different LWE problems. The search problem (informal): Find s after collecting enough samples. The distinguishing problem (informal): After collecting enough samples, determine whether they come from an LWE oracle or whether they are purely random samples. The two have similar complexity. General strategy: guess some small part of s, rewrite, and use a distinguisher to decide whether the guess is correct.

  9. Error distribution? Z_q = {−(q−1)/2, ..., −1, 0, 1, ..., (q−1)/2}. Error distribution X_{αq}: Discrete Gaussian over Z_q with mean 0 and standard deviation σ = αq. Other distributions are possible...

  10. Secret vector distribution? Z_q = {−(q−1)/2, ..., −1, 0, 1, ..., (q−1)/2}. Standard case: s in Z_q^n, uniformly distributed. A simple transformation allows us to assume s in X_{αq}^n. Binary LWE: s in {0, 1}^n, or similar choices like s in {0, −1, 1}^n. Other distributions are possible...

  11. Related problems. Ring-LWE: s(x) in Z_q[x]/φ(x), uniformly distributed, where φ(x) is a degree-n (cyclotomic) polynomial. The oracle selects a random r(x) and outputs (r(x), v(x) = r(x)s(x) + e(x)) as a sample, where e(x) is a polynomial of degree n − 1 with coefficients from X_{αq}. A solver/distinguisher for LWE also solves Ring-LWE. LPN: LWE when q = 2 (Hamming metric).
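
A hedged sketch of the Ring-LWE oracle above, taking φ(x) = x^n + 1 (a common cyclotomic choice when n is a power of two); the parameters are toy values, the helper names are made up for the sketch, and the rounded Gaussian again stands in for X_{αq}.

```python
import numpy as np

rng = np.random.default_rng(1)
n, q, sigma = 8, 97, 2.0                      # toy parameters (assumed)
s_poly = rng.integers(0, q, size=n)           # uniform secret s(x), degree < n

def poly_mul_mod(a, b):
    """Multiply a(x)*b(x) in Z_q[x]/(x^n + 1): x^(n+k) reduces to -x^k."""
    full = np.convolve(a, b)                  # plain product, degree up to 2n-2
    res = full[:n].copy()
    res[:len(full) - n] -= full[n:]           # fold the high part back with a sign flip
    return res % q

def rlwe_sample():
    r = rng.integers(0, q, size=n)                          # random r(x)
    e = np.rint(rng.normal(0, sigma, size=n)).astype(int)   # e(x) with small coefficients
    v = (poly_mul_mod(r, s_poly) + e) % q                   # v(x) = r(x)*s(x) + e(x)
    return r, v

print(rlwe_sample())
```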

  12. Motivation
     - LWE and its greatness:
       - Known to be as hard as worst-case lattice problems.
       - Efficient cryptographic primitives.
       - Extremely versatile, e.g., Fully Homomorphic Encryption (FHE) schemes.
       - Post-quantum cryptography.
     - Complexity of solving LWE?
       - Especially for practical security: how do we choose the smallest parameters for a given security level (e.g., 80-bit security)?

  13. Solving Algorithms. Mainly three types:
     1. Reduce to some lattice problem: the Short Integer Solution (SIS) problem or the Bounded Distance Decoding (BDD) problem.
     2. Arora-Ge [AroraGe11]: performs asymptotically well for very small noise.
     3. BKW (combinatorial); an unbounded number of samples is assumed to be available.

  14. Outline: 1 Introduction to LWE (The LWE Problem, Motivation); 2 Background and reformulating LWE; 3 The BKW Algorithm; 4 Coded-BKW (Lattice Codes, Coded-BKW, Results and Complexity); 5 A New Algorithm: Coded-BKW with sieving; 6 Conclusions.

  15. Background. In n-dimensional Euclidean space R^n, the intuitive notion of the length of a vector x = (x_1, x_2, ..., x_n) is captured by the L_2-norm ‖x‖ = √(x_1^2 + · · · + x_n^2). The Euclidean distance between two vectors x and y in R^n is defined as ‖x − y‖.

  16. Discrete Gaussian. The discrete Gaussian distribution over Z with mean 0 and variance σ^2, denoted D_{Z,σ}, is the probability distribution obtained by assigning a probability proportional to exp(−x^2 / 2σ^2) to each x ∈ Z. The X distribution with standard deviation σ (also denoted X_σ; we omit σ if there is no ambiguity) is the distribution on Z_q obtained by folding D_{Z,σ} mod q, i.e., accumulating the value of the probability mass function over all integers in each residue class mod q. Similarly, we can define the discrete Gaussian over Z^n with variance σ^2, denoted D_{Z^n,σ}, as the product distribution of n independent copies of D_{Z,σ}.
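
The folding construction can be written out directly; the helper below is an illustrative sketch (not code from the talk, and the function name is made up) that builds the pmf of X_σ on Z_q by accumulating the mass of a tail-truncated D_{Z,σ} over residue classes mod q.

```python
import numpy as np

def folded_gaussian(q, sigma, tail=12.0):
    """Pmf of X_sigma on Z_q: fold a (tail-truncated) D_{Z,sigma} mod q."""
    bound = int(np.ceil(tail * sigma)) + q
    xs = np.arange(-bound, bound + 1)
    mass = np.exp(-xs.astype(float) ** 2 / (2 * sigma ** 2))
    mass /= mass.sum()                        # normalised, truncated D_{Z,sigma}
    pmf = np.zeros(q)
    np.add.at(pmf, xs % q, mass)              # accumulate mass in each residue class mod q
    return pmf                                # pmf[k] = Pr[X_sigma = k], k in {0, ..., q-1}

print(folded_gaussian(q=13, sigma=1.0).round(3))
```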

  17. Distinguishing between two distributions. The number of samples required to distinguish between the uniform distribution on Z_q and X_σ: a good approximation using the divergence (relative entropy, Kullback-Leibler divergence) is Δ(X_σ ‖ U) ≈ e^(−π (σ√(2π)/q)^2). In particular, we need about M ≈ ε · e^(π (σ√(2π)/q)^2) samples for a distinguishing advantage of ε.
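
To get a feeling for the numbers, the snippet below simply evaluates the two approximations above; q, σ and the advantage ε are assumed toy values, not parameters from the talk.

```python
import numpy as np

q, sigma, eps = 1601, 800.0, 0.5                            # assumed toy values
expo = np.pi * (sigma * np.sqrt(2 * np.pi) / q) ** 2
delta = np.exp(-expo)                                       # Delta(X_sigma || U) approximation
M = eps * np.exp(expo)                                      # samples for advantage eps
print(f"Delta ~ {delta:.3e}, M ~ {M:.0f}")
```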

  18. Number of samples. For secret vector recovery, we assume that for a right guess the observed symbol is X_σ-distributed; otherwise, it is uniformly random. We want to distinguish the secret from Q candidates. Following the theory from linear cryptanalysis, the number M of required samples to test is about M = O(ln(Q) / Δ(X_σ ‖ U)), where Δ(X_σ ‖ U) is the divergence between X_σ and the uniform distribution U on Z_q.
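
The same kind of back-of-the-envelope estimate works for the ln(Q)/Δ rule; again the parameters below are assumed toy values.

```python
import numpy as np

q, sigma = 1601, 800.0                                      # assumed toy values
Q = q ** 2                                                  # e.g. guessing two positions of s
delta = np.exp(-np.pi * (sigma * np.sqrt(2 * np.pi) / q) ** 2)
M = np.log(Q) / delta                                       # M = O(ln(Q) / Delta)
print(f"Q = {Q}, M ~ {M:.0f} samples")
```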

  19. The LWE problem again. We ask for m samples from the LWE distribution L_{s,X} and the response is (r_1, z_1), (r_2, z_2), ..., (r_m, z_m), where r_i ∈ Z_q^n, z_i ∈ Z_q. Introduce z = (z_1, z_2, ..., z_m) and y = (y_1, y_2, ..., y_m) = sR. We can then write R = (r_1^T  r_2^T  · · ·  r_m^T) and z = sR + e, where z_i = y_i + e_i = ⟨s, r_i⟩ + e_i and e_i ← X is the noise. A decoding problem: the matrix R serves as the generator matrix for a linear code over Z_q and z is the received word. Finding the codeword y = sR such that the Euclidean distance ‖y − z‖ is minimal will give the secret vector s.
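
The decoding view can be exercised on a tiny instance: the sketch below (assumed toy parameters) builds R and z from LWE samples and brute-forces all candidate secrets, picking the codeword sR closest to z in Euclidean distance over centered representatives. With small noise, the true secret is typically the closest codeword.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
n, m, q, sigma = 3, 12, 11, 1.0                             # assumed toy parameters

s = rng.integers(0, q, size=n)
R = rng.integers(0, q, size=(n, m))                         # columns are the r_i^T
e = np.rint(rng.normal(0, sigma, size=m)).astype(int)
z = (s @ R + e) % q                                         # received word z = sR + e

def centered(x):
    """Centered representatives of Z_q, so the Euclidean distance makes sense."""
    x = x % q
    return np.where(x > q // 2, x - q, x)

best = min(product(range(q), repeat=n),
           key=lambda cand: np.linalg.norm(centered(np.array(cand) @ R - z)))
print("secret:", s, " closest codeword's message:", np.array(best))
```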

  20. Transform s to be Gaussian. If s is drawn from the uniform distribution, there is a simple transformation that can be applied: through Gaussian elimination we may transform R into systematic form. Assume that the first n columns are linearly independent and form the matrix R_0. Define D = R_0^(−1). With the change of variables ŝ = sD^(−1) − (z_1, z_2, ..., z_n) we get an equivalent problem described by R̂ = (I, r̂_{n+1}^T, r̂_{n+2}^T, ..., r̂_m^T), where R̂ = DR. We compute ẑ = z − (z_1, z_2, ..., z_n) R̂ = (0, ẑ_{n+1}, ẑ_{n+2}, ..., ẑ_m). After this initial step, each entry in the secret vector ŝ is distributed according to X.
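
Here is a sketch of this secret-error switch on a toy instance (assumed parameters and helper names; q is taken prime so that Z_q is a field, and the resampling loop handles the unlucky case where R_0 is singular). The new secret ŝ is exactly −(e_1, ..., e_n) mod q, hence X-distributed.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, q, sigma = 3, 10, 11, 1.0                             # assumed toy parameters (q prime)

def inv_mod(M, q):
    """Gauss-Jordan inverse of M over Z_q (q prime); raises ValueError if singular."""
    k = M.shape[0]
    A = np.concatenate([M % q, np.eye(k, dtype=int)], axis=1)
    for col in range(k):
        piv = [r for r in range(col, k) if A[r, col] % q != 0]
        if not piv:
            raise ValueError("matrix is singular mod q")
        A[[col, piv[0]]] = A[[piv[0], col]]                 # move a usable pivot row up
        A[col] = (A[col] * pow(int(A[col, col]), -1, q)) % q
        for r in range(k):
            if r != col:
                A[r] = (A[r] - A[r, col] * A[col]) % q
    return A[:, k:]

s = rng.integers(0, q, size=n)                              # uniform secret
while True:                                                  # resample until R_0 is invertible
    R = rng.integers(0, q, size=(n, m))
    try:
        D = inv_mod(R[:, :n], q)                            # D = R_0^(-1)
        break
    except ValueError:
        pass
e = np.rint(rng.normal(0, sigma, size=m)).astype(int)
z = (s @ R + e) % q

R_hat = (D @ R) % q                                         # R_hat = D R = (I, r_hat_{n+1}^T, ...)
z_hat = (z - z[:n] @ R_hat) % q                             # z_hat = z - (z_1,...,z_n) R_hat; first n entries are 0
s_hat = (s @ R[:, :n] - z[:n]) % q                          # s_hat = s R_0 - (z_1,...,z_n)
assert np.array_equal(s_hat, (-e[:n]) % q)                  # the new secret is -(e_1,...,e_n), i.e. X-distributed
print("s_hat (centered):", np.where(s_hat > q // 2, s_hat - q, s_hat))
```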

  21. Rewriting. Recall that we have the LWE samples in the form z = sR + e. We write this as (s, e) H_0 = z (1), where H_0 = [R ; I] denotes the matrix R stacked on top of an identity matrix I. The unknown (s, e) on the left-hand side has all i.i.d. entries of the same size, while H_0 and z are known quantities.
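
A quick numerical check of the rewriting (again with assumed toy parameters): stack R on top of an identity matrix and verify that (s, e) H_0 = z.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, q = 3, 8, 11                                          # assumed toy parameters

s = rng.integers(0, q, size=n)
R = rng.integers(0, q, size=(n, m))
e = rng.integers(-1, 2, size=m)                             # small noise stand-in for X
z = (s @ R + e) % q

H0 = np.concatenate([R, np.eye(m, dtype=int)])              # H_0 = [R ; I], shape (n+m, m)
assert np.array_equal((np.concatenate([s, e]) @ H0) % q, z) # (s, e) H_0 = z  (mod q)
print("check passed")
```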

  22. Lattice-based algorithms for solving LWE

  23. Outline: 1 Introduction to LWE (The LWE Problem, Motivation); 2 Background and reformulating LWE; 3 The BKW Algorithm; 4 Coded-BKW (Lattice Codes, Coded-BKW, Results and Complexity); 5 A New Algorithm: Coded-BKW with sieving; 6 Conclusions.
