Learning Strikes Again: the Case of the DRS Signature Scheme


  1. Learning Strikes Again: the Case of the DRS Signature Scheme. Yang Yu (Tsinghua University), Léo Ducas (Centrum Wiskunde & Informatica). Asiacrypt 2018, Brisbane, Australia.

  2-6. This is a cryptanalysis work... Target: DRS, a NIST lattice-based signature proposal. Techniques: learning & lattice. Statistical learning ⇒ secret key information leaks; lattice techniques ⇒ better use of the leaks. They claim that Parameter Set-I offers at least 128 bits of security. We show that it actually offers at most 80 bits of security!

  7. Outline: 1. Background; 2. DRS signature; 3. Learning secret key coefficients; 4. Exploiting the leaks.

  9-11. Lattice. Definition: a lattice L is a discrete subgroup of R^m. A lattice is generated by a basis G = (g_1, ..., g_n) ∈ R^(n×m), e.g. L = { xG | x ∈ Z^n }. L has infinitely many bases: G is good, B is bad. [Figure: the same 2-dimensional lattice drawn with a good basis (g_1, g_2) and a bad basis (b_1, b_2).]
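As a concrete illustration (a minimal numpy sketch with an example basis of my own choosing, not taken from the slides): every integer coordinate vector x gives a lattice point xG, and multiplying G by a unimodular matrix yields another basis of the same lattice.

    import numpy as np

    G = np.array([[2, 1],
                  [1, 3]])        # a short, "good" basis of a 2-dimensional lattice
    x = np.array([3, -2])         # integer coordinates
    v = x @ G                     # the lattice point xG = [4, -3]

    U = np.array([[1, 0],
                  [5, 1]])        # unimodular (det = 1), so B = U G spans the same L
    B = U @ G                     # a longer, "bad" basis of the same lattice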

  12-13. Finding Close Vectors. Each basis defines a parallelepiped P. [Figure: the plane tiled by translates of P around lattice points v, with a target point m.] Babai's round-off algorithm outputs v ∈ L such that v − m ∈ P.
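A sketch of Babai's round-off (the function name is mine; assumes numpy and a square, invertible basis): express m in basis coordinates, round each coordinate to the nearest integer, and map back. The error v − m = (round(c) − c)·B always lies in the parallelepiped P(B), which is why its shape depends on the basis used.

    import numpy as np

    def babai_round_off(m, B):
        """Return v in L(B) with v - m inside the parallelepiped P(B)."""
        c = m @ np.linalg.inv(B)      # real coordinates of m with respect to B
        return np.round(c) @ B        # round to the nearest integer coordinates

    B = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    m = np.array([7.3, -4.1])
    v = babai_round_off(m, B)         # v - m = (round(c) - c) @ B lies in P(B)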

  14. GGH & NTRUSign Schemes. Public key: P; secret key: S. Sign: (1) hash the message to a random vector m; (2) round m (using S) to v ∈ L. Verify: (1) check v ∈ L (using P); (2) check v is close to m.
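A toy version of that flow (function names are mine; reuses the babai_round_off sketch above): the signer rounds with the good secret basis S, while the verifier only needs the bad public basis P to check membership and closeness.

    import numpy as np

    def sign(m, S):
        # Rounding with the good (secret) basis gives a lattice point close to m.
        return babai_round_off(m, S)

    def verify(v, m, P, bound):
        c = v @ np.linalg.inv(P)                 # v is in L iff c is integral
        in_lattice = np.allclose(c, np.round(c))
        return in_lattice and np.linalg.norm(v - m) <= bound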

  15-16. GGH & NTRUSign are insecure! v − m ∈ P(S) ⇒ (v, m) leaks some information about S. GGH and NTRUSign were broken by "learning the parallelepiped" [NR06]. Some countermeasures were also broken by a similar attack [DN12].

  17-18. Countermeasures. Let us focus on the hash-then-sign approach! Provably secure method [GPV08]: rounding based on Gaussian sampling; v − m is independent of S. Heuristic method [PSW08]: rounding based on CVP w.r.t. the ℓ∞-norm; the support of v − m is independent of S. DRS [PSDS17] is an instantiation, submitted to NIST.

  20-22. DRS = Diagonal-dominant Reduction Signature. Parameters (n, D, b, N_b, N_1): n, the dimension; D, the diagonal coefficient; b, the magnitude of the large coefficients (i.e. ±b); N_b, the number of large coefficients per row vector; N_1, the number of small coefficients (i.e. ±1) per row vector. The secret key is S = D·I_n + E, where E is an "absolute circulant" matrix: every row carries the same off-diagonal magnitudes, rotated, with fresh signs.
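A rough sketch of that key shape (my own toy construction, not the official DRS key generation; the parameter values at the end are illustrative, Set-I-like numbers rather than the official set):

    import numpy as np

    def drs_like_secret(n, D, b, Nb, N1, rng):
        """Toy secret key: S = D*I_n + E, with |E| an (absolute) circulant."""
        pattern = np.zeros(n - 1, dtype=np.int64)
        pattern[:Nb] = b                       # Nb large off-diagonal magnitudes
        pattern[Nb:Nb + N1] = 1                # N1 small off-diagonal magnitudes
        rng.shuffle(pattern)                   # one magnitude pattern, shared by all rows
        S = np.zeros((n, n), dtype=np.int64)
        for i in range(n):
            row = np.roll(np.concatenate(([0], pattern)), i)   # rotate pattern by i
            S[i] = rng.choice([-1, 1], size=n) * row           # fresh signs per row
            S[i, i] = D                                        # dominant diagonal
        return S

    # Illustrative values: diagonal dominance holds since 16*28 + 432 = 880 < D = 912.
    S = drs_like_secret(n=912, D=912, b=28, Nb=16, N1=432,
                        rng=np.random.default_rng(0))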

  23-24. Message reduction algorithm.
  Input: a message m ∈ Z^n, the secret matrix S
  Output: a reduced message w such that w − m ∈ L and ‖w‖_∞ < D
      1: w ← m, i ← 0
      2: repeat
      3:     w ← w − ⌊w_i / D⌋ · s_i
      4:     i ← (i + 1) mod n
      5: until ‖w‖_∞ < D
      6: return w
  Intuition: use s_i to reduce w_i. w_i decreases a lot, while for j ≠ i, w_j increases only a bit. ‖w‖_1 is reduced ⇒ the reduction always terminates!
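A minimal executable transcription of this loop (numpy assumed; I round the quotient toward zero so that q = 0 once |w_i| < D, which matches the diagonal-dominance termination argument — the official specification may differ in such rounding details):

    import numpy as np

    def drs_reduce(m, S, D):
        """Reduce m modulo L(S) until every coefficient has absolute value < D."""
        n = len(m)
        w = np.array(m, dtype=np.int64)
        i = 0
        while np.max(np.abs(w)) >= D:
            q = (abs(int(w[i])) // D) * np.sign(w[i])  # quotient, rounded toward zero
            w = w - q * S[i]                           # subtract a multiple of row s_i
            i = (i + 1) % n
        return w

The pair (m, w) is then a valid message/signature pair: w − m is a lattice vector and ‖w‖_∞ < D.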

  25-27. Resistance to the NR attack. The support of w is (−D, D)^n. [Figure: the DRS domain (−D, D)^n versus the parallelepiped P(S).] The support is "zero-knowledge", but maybe the distribution is not!

  29. Intuition. [Figure: three scatter plots of (w_i, w_j) over (−D, D)^2, for S_{i,j} = −b, S_{i,j} = 0 and S_{i,j} = b.]
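This picture can be reproduced numerically; a sketch on toy parameters, reusing the hypothetical drs_like_secret and drs_reduce from the sketches above: the empirical moment E[w_i w_j] over many signatures should show a bias that depends on S_{i,j}, which is exactly the leak.

    import numpy as np

    rng = np.random.default_rng(1)
    n, D, b = 64, 64, 8                           # toy sizes: 2*8 + 20 = 36 < D
    S = drs_like_secret(n, D, b, Nb=2, N1=20, rng=rng)

    sigs = []
    for _ in range(20000):
        m = rng.integers(-2**16, 2**16, size=n)   # stand-in for hashed messages
        sigs.append(drs_reduce(m, S, D))
    W = np.array(sigs) / D                        # normalize coefficients into (-1, 1)

    i, j = 0, 5
    print(S[i, j], np.mean(W[:, i] * W[:, j]))    # empirical bias tracks S_{i,j}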

  30-34. Figure out the model. Can we devise a formula S_{i,j} ≈ f(W_{i,j})? Seems complicated: there is a cascading phenomenon (a reduction triggers another one) and other parasite correlations. ⇒ Search for the best linear fit f? The search space for all linear f is too large! ⇒ Choose some features {f_ℓ} and search in span({f_ℓ}), i.e. f = Σ_ℓ x_ℓ f_ℓ.

  35. Training: feature selection. Lower-degree moments: f_1(W) = E(w_i · w_j); f_2(W) = E(w_i · |w_i|^(1/2) · w_j); f_3(W) = E(w_i · |w_i| · w_j). [Figure: three scatter plots over (−1, 1)^2, one per feature.]
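Continuing the toy experiment above (W and S as before; variable names are mine), a sketch of how these features can be estimated for all pairs (i, j) at once and combined by least squares — in the real attack the coefficients x_ℓ would be trained on keys the attacker generates itself, then applied to the victim's signatures:

    import numpy as np

    N = len(W)                                   # W: N x n matrix of signatures
    A = np.abs(W)
    f1 = W.T @ W / N                             # f1(W)_{i,j} = E(w_i w_j)
    f2 = (W * np.sqrt(A)).T @ W / N              # E(w_i |w_i|^(1/2) w_j)
    f3 = (W * A).T @ W / N                       # E(w_i |w_i| w_j)

    # Fit S_{i,j} ~ x1*f1 + x2*f2 + x3*f3 over the off-diagonal entries.
    mask = ~np.eye(n, dtype=bool)
    F = np.stack([f1[mask], f2[mask], f3[mask]], axis=1)
    x, *_ = np.linalg.lstsq(F, S[mask], rcond=None)
    S_guess = x[0] * f1 + x[1] * f2 + x[2] * f3  # predicted key coefficients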
