  1. Two grumpy giants and a baby. D. J. Bernstein, University of Illinois at Chicago; Tanja Lange, Technische Universiteit Eindhoven.

  2. Discrete-logarithm problems. Fix a prime ℓ. Input: a generator g of a group of order ℓ; an element h of the same group. Output: an integer k ∈ Z/ℓ such that h = g^k, where the group is written multiplicatively. "k = log_g h". How difficult is the computation of k?

  3. Dependence on the group. Group Z/ℓ under addition, represented in the usual way: DLP is very easy. Divide h by g modulo ℓ; time exp(O(log log ℓ)). Order-ℓ subgroup of (Z/p)*, assuming prime p = 2ℓ + 1: DLP is not so easy. Best known attacks: "index calculus" methods; time exp((log ℓ)^(1/3 + o(1))).
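To make "divide h by g modulo ℓ" concrete, here is a minimal Python sketch of the additive case (the prime and the numbers are our illustrative choices, not from the slides):

    # DLP in Z/ell under addition: h = k*g mod ell, so k = h/g mod ell.
    ell = 1000003                 # an illustrative prime
    g, k = 42, 777777             # "generator" and secret exponent
    h = k * g % ell               # the additive analogue of h = g^k
    assert h * pow(g, -1, ell) % ell == k   # division recovers k immediately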

  4. Order-ℓ subgroup of (Z/p)* for much larger p: DLP is much more difficult. Best known attacks: "generic" attacks, the focus of this talk. Time exp((1/2 + o(1))·log ℓ). Order-ℓ subgroup of a properly chosen elliptic-curve group: DLP is again difficult. Best known attacks: "negating" variants of generic attacks. (See Schwabe talk, last CWG.)

  5. Real-world importance. Apple, "iOS Security", 2012.05: "Some files may need to be written while the device is locked. A good example of this is a mail attachment downloading in the background. This behavior is achieved by using asymmetric elliptic curve cryptography (ECDH over Curve25519)." Also used for "iCloud Backup". More examples: DNSCrypt; elliptic-curve signatures in German electronic passports.

  6. Generic algorithms. Will focus on algorithms that work for every group of order ℓ. Allowed operations: neutral element 1; multiplication a, b ↦ ab. Will measure algorithm cost by counting the number of multiplications. Success probability: average over groups and over algorithm randomness.

  7. Each group element computed by the algorithm is trivially expressed as h^x·g^y for known (x, y) ∈ (Z/ℓ)². 1 = h^x·g^y for (x, y) = (0, 0). g = h^x·g^y for (x, y) = (0, 1). h = h^x·g^y for (x, y) = (1, 0). If the algorithm multiplies h^{x_1}·g^{y_1} by h^{x_2}·g^{y_2}, then it obtains h^x·g^y where (x, y) = (x_1, y_1) + (x_2, y_2).

  8. Slopes. If h^{x_1}·g^{y_1} = h^{x_2}·g^{y_2} and (x_1, y_1) ≠ (x_2, y_2), then log_g h is the negative of the slope (y_2 − y_1)/(x_2 − x_1). (Impossible to have x_1 = x_2: if x_1 = x_2 then g^{y_1} = g^{y_2}, so y_1 = y_2, contradiction.) The algorithm immediately recognizes collisions of group elements by putting each (h^x·g^y, x, y) into, e.g., a red-black tree. (Low memory? Parallel? Distributed? Not in this talk.)
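As a hedged Python fragment (names ours; a dict stands in for the red-black tree), recognizing a collision and turning its slope into the discrete log looks like this:

    def log_from_collision(x1, y1, x2, y2, ell):
        # h^x1 * g^y1 == h^x2 * g^y2 implies
        # log_g h = -(y2 - y1)/(x2 - x1) mod ell.
        return -(y2 - y1) * pow(x2 - x1, -1, ell) % ell

    def record(table, elt, x, y, ell):
        # table maps each group element to its known (x, y) pair.
        if elt in table and table[elt] != (x, y):
            x2, y2 = table[elt]
            return log_from_collision(x, y, x2, y2, ell)
        table[elt] = (x, y)
        return None               # no useful collision yet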

  9. Baby-step-giant-step (1971 Shanks). Choose n ≥ 1, typically n ≈ √ℓ. Points (x, y): n + 1 "baby steps" (0,0), (0,1), (0,2), …, (0,n); n + 1 "giant steps" (1,0), (1,n), (1,2n), …, (1,n²). Can use more giant steps. Stop when log_g h is found.
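A runnable sketch of BSGS in this (x, y) bookkeeping, reusing `record` from above; the tiny demo group (p = 2ℓ + 1 = 23, g = 4 of order ℓ = 11) is our choice, not the speakers':

    from math import isqrt

    def bsgs(g, h, ell, p):
        # Find k with h = g^k in the order-ell subgroup of (Z/p)*.
        n = isqrt(ell) + 1        # n is roughly sqrt(ell)
        table, e = {}, 1
        for i in range(n + 1):    # baby steps: g^i carries the pair (0, i)
            record(table, e, 0, i, ell)
            e = e * g % p
        gn, e = pow(g, n, p), h % p
        for j in range(n + 1):    # giant steps: h*g^(j*n) carries (1, j*n)
            k = record(table, e, 1, j * n, ell)
            if k is not None:
                return k
            e = e * gn % p
        return None

    ell, p, g = 11, 23, 4         # demo: p = 2*ell + 1; 4 = 2^2 has order 11 mod 23
    assert bsgs(g, pow(g, 7, p), ell, p) == 7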

  10. Performance of BSGS. Slope jn − i from (0, i) to (1, jn). Covers slopes −n, …, −1, 0, 1, 2, 3, …, n², using 2n − 1 multiplications. Finds all discrete logarithms if ℓ ≤ n² + n + 1. Worst case with n ≈ √ℓ: (2 + o(1))·√ℓ multiplications. (In fact always < 2√ℓ.) Average case with n ≈ √ℓ: (1.5 + o(1))·√ℓ multiplications.

  11-12. Interleaving (2000 Pollard). Improve average case to (4/3 + o(1))·√ℓ multiplications: (0,0), (1,0), (0,1), (1,n), (0,2), (1,2n), (0,3), (1,3n), …, (0,n), (1,n²). The constant 4/3 arises as ∫_0^1 (2x)² dx. Oops: have to start with (0,n) as a step towards (1,n). But this costs only O(log ℓ).
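A sketch of the interleaved schedule, with the same helpers as above and the same illustrative group:

    from math import isqrt

    def bsgs_interleaved(g, h, ell, p):
        # Alternate baby and giant steps so collisions tend to come sooner.
        n = isqrt(ell) + 1
        gn = pow(g, n, p)         # the O(log ell) start-up cost noted above
        table, baby, giant = {}, 1, h % p
        for i in range(n + 1):
            for elt, x, y in ((baby, 0, i), (giant, 1, i * n)):
                k = record(table, elt, x, y, ell)
                if k is not None:
                    return k
            baby, giant = baby * g % p, giant * gn % p
        return None

    assert bsgs_interleaved(4, pow(4, 7, 23), 11, 23) == 7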

  13-14. Random self-reductions. Defender slows down BSGS by choosing discrete logs that are found as late as possible. Attacker compensates by applying a "worst-case-to-average-case reduction": compute log_g h as log_g(h·g^r) − r for uniform random r ∈ Z/ℓ. Negligible extra cost.
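The attacker's reduction in a few lines of Python (reusing the bsgs sketch above; names ours):

    import random

    def dlog_randomized(g, h, ell, p):
        # Worst-case-to-average-case: solve for h * g^r, then subtract r.
        r = random.randrange(ell)
        k = bsgs(g, h * pow(g, r, p) % p, ell, p)
        return None if k is None else (k - r) % ell

    assert dlog_randomized(4, pow(4, 7, 23), 11, 23) == 7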

  15-17. Is BSGS optimal? After m multiplications have m + 3 points in (Z/ℓ)². Can hope for (m + 3)(m + 2)/2 different slopes in Z/ℓ. 1994 Nechaev, 1997 Shoup: proof that generic algorithms have success probability O(m²/ℓ). The proof actually gives ≤ ((m + 3)(m + 2)/2 + 1)/ℓ. BSGS: at best ≈ m²/4 slopes, taking n ≈ m/2. A factor of 2 away from the bound.

  18. The rho method (1978 Pollard, r = 3, "mixed"; many subsequent variants). Initial computation: r uniform random "steps" (s_1, t_1), …, (s_r, t_r) ∈ (Z/ℓ)². O(r·log ℓ) multiplications; negligible if r is small. The "walk": starting from (x_i, y_i) ∈ (Z/ℓ)², compute (x_{i+1}, y_{i+1}) = (x_i, y_i) + (s_j, t_j), where j ∈ {1, …, r} is a hash of h^{x_i}·g^{y_i}.
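A minimal additive-walk rho in the same framework. Our simplifications: Python's built-in hash() selects the step, and a table of all visited points replaces the low-memory cycle-finding or distinguished-point tricks a real implementation would use.

    import random

    def rho(g, h, ell, p, r=8, seed=1):
        rnd = random.Random(seed)
        steps = []
        for _ in range(r):        # initial computation: r uniform random steps
            s, t = rnd.randrange(ell), rnd.randrange(ell)
            steps.append((s, t, pow(h, s, p) * pow(g, t, p) % p))
        x, y = rnd.randrange(ell), rnd.randrange(ell)
        e = pow(h, x, p) * pow(g, y, p) % p
        seen = {}                 # group element -> (x, y)
        while True:
            if e in seen:
                x2, y2 = seen[e]
                if (x2 - x) % ell == 0:
                    return None   # degenerate collision; caller retries
                return -(y2 - y) * pow(x2 - x, -1, ell) % ell
            seen[e] = (x, y)
            s, t, se = steps[hash(e) % r]   # j = hash of h^x * g^y
            x, y, e = (x + s) % ell, (y + t) % ell, e * se % p

    ell, p, g = 1013, 2027, 4     # p = 2*ell + 1; g = 2^2 generates the order-ell subgroup
    h, k, seed = pow(g, 600, p), None, 0
    while k is None:              # retry on the rare degenerate collision
        seed += 1
        k = rho(g, h, ell, p, seed=seed)
    assert k == 600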

  19-20. Performance of rho. Model the walk as truly random. Using m multiplications: ≈ m points (x_i, y_i); ≈ m²/2 pairs of points; slope λ is missed with chance ≈ (1 − 1/ℓ)^(m²/2) ≈ exp(−m²/(2ℓ)). Average # multiplications ≈ Σ_{m≥0} exp(−m²/(2ℓ)) ≈ ∫_0^∞ exp(−m²/(2ℓ)) dm = √(π/4)·√(2ℓ) = (1.25…)·√ℓ. Better than (4/3 + o(1))·√ℓ. Don't ask about the worst case.
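For the record, the integral behind the 1.25… constant, written out in LaTeX (the substitution m = √(2ℓ)·u is ours; the last steps use the standard Gaussian integral):

    \int_0^\infty e^{-m^2/(2\ell)}\,dm
      = \sqrt{2\ell}\int_0^\infty e^{-u^2}\,du
      = \sqrt{2\ell}\cdot\frac{\sqrt{\pi}}{2}
      = \sqrt{\pi/4}\,\sqrt{2\ell}
      = \sqrt{\pi/2}\,\sqrt{\ell}
      \approx 1.2533\,\sqrt{\ell}.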

  21. Anti-collisions. Bad news: the walk is worse than random. Very often have (x_{i+1}, y_{i+1}) = (x_i, y_i) + (s_j, t_j) followed later by (x_{k+1}, y_{k+1}) = (x_k, y_k) + (s_j, t_j). The slope from (x_{k+1}, y_{k+1}) to (x_{i+1}, y_{i+1}) is not new: it is the same as the slope from (x_k, y_k) to (x_i, y_i). A repeated slope is an "anti-collision".

  22. m²/2 was too optimistic. About (1/r)·m²/2 pairs use the same step, so there are only (1 − 1/r)·m²/2 chances. This replacement model ⇒ (√(π/2)/√(1 − 1/r) + o(1))·√ℓ. Can derive √(1 − 1/r) from the more complicated 1981 Brent–Pollard heuristic. 1998 Blackburn–Murphy: explicit √(1 − 1/r). 2009 Bernstein–Lange: simplified heuristic; generalized to √(1 − Σ_j p_j²), where p_j is the probability of choosing step j.

  23. Higher-degree anti-collisions. Actually, rho is even worse! Often have (x_{i+1}, y_{i+1}) = (x_i, y_i) + (s_j, t_j) and (x_{i+2}, y_{i+2}) = (x_{i+1}, y_{i+1}) + (s_h, t_h), followed later by (x_{k+1}, y_{k+1}) = (x_k, y_k) + (s_h, t_h) and (x_{k+2}, y_{k+2}) = (x_{k+1}, y_{k+1}) + (s_j, t_j), so the slope from (x_{k+2}, y_{k+2}) to (x_{i+2}, y_{i+2}) is not new. "Degree-2 local anti-collisions": 1/√(1 − 1/r − 1/r² + 1/r³). See the paper for more.
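A quick numeric check (ours) of how the replacement-model constant from slide 22 and the degree-2 constant above compare to the fully random √(π/2) ≈ 1.2533 as r grows:

    from math import pi, sqrt

    base = sqrt(pi / 2)                                  # fully random walk
    for r in (3, 8, 20, 100):
        repl = base / sqrt(1 - 1/r)                      # replacement model
        deg2 = base / sqrt(1 - 1/r - 1/r**2 + 1/r**3)    # degree-2 anti-collisions
        print(f"r = {r:3}: {repl:.4f} sqrt(l), {deg2:.4f} sqrt(l)")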

  24. Is rho optimal? Allow r to grow slowly with ℓ. (Not quickly: remember the cost of the initial computation.) Then √(1 − 1/r) → 1 and 1/√(1 − 1/r − 1/r² + 1/r³) → 1. Experimental evidence ⇒ average (√(π/2) + o(1))·√ℓ. But there are still many global anti-collisions: slopes appearing repeatedly.
