Code-based Cryptography. Christiane Peters, Technical University of Denmark. ECC 2011, September 20, 2011.


  1. Code-based Cryptography
     Christiane Peters, Technical University of Denmark
     ECC 2011, September 20, 2011

  2. Code-based cryptography: 1. Background 2. The McEliece cryptosystem 3. Information-set-decoding attacks 4. Designs: Wild McEliece 5. Announcements

  3. 1. Background 2. The McEliece cryptosystem 3. Information-set-decoding attacks 4. Designs: Wild McEliece 5. Announcements

  4. Coding Theory
     • An encoder transforms a message word into a codeword by adding redundancy.
     • Goal: protect against errors in a noisy channel.
       [sender → encoder → channel → decoder → receiver]
     • The decoder uses a decoding algorithm to correct errors which might have occurred during transmission.

  5. Error-correcting linear codes
     • A linear code C of length n and dimension k is a k-dimensional subspace of F_q^n.
     • A generator matrix for C is a k × n matrix G such that C = { mG : m ∈ F_q^k }.
     • The matrix G corresponds to a map F_q^k → F_q^n sending a message m of length k to a length-n codeword in F_q^n.

  6. Generator matrix of a linear code
     The rows of the matrix

         G = [ 1 0 0 0 0 1 1 ]
             [ 0 1 0 0 1 1 0 ]
             [ 0 0 1 0 1 0 1 ]
             [ 0 0 0 1 1 1 1 ]

     generate a linear code of length n = 7 and dimension k = 4 over F_2.
     Example of a codeword: c = (0011)·G = (0011010).
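The encoding map above can be sketched in a few lines of Python (an illustration, not from the slides): the message bits select rows of G, and all arithmetic is mod 2.

```python
# The example [7,4] generator matrix over F_2 from the slide.
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 1, 0],
    [0, 0, 1, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(m, G):
    """c = m.G over F_2: xor together the rows of G selected by m."""
    n = len(G[0])
    return [sum(m[i] & G[i][j] for i in range(len(G))) % 2 for j in range(n)]

c = encode([0, 0, 1, 1], G)   # -> [0, 0, 1, 1, 0, 1, 0], i.e. (0011010)
```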

  7. Hamming distance
     • The Hamming distance between two words in F_q^n is the number of coordinates where they differ.
     • The Hamming weight of a word is the number of non-zero coordinates.
     • The minimum distance of a linear code C is the smallest Hamming weight of a non-zero codeword in C.
     The example code is in fact the (7, 4, 3) binary Hamming code, which has minimum distance 3. The example codeword c = (0011010) has weight 3 and thus attains the minimum.
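For the small example code these definitions can be checked by brute force; a short illustrative script (not part of the slides) enumerates all 2^4 messages to find the minimum distance:

```python
from itertools import product

def hamming_weight(v):
    return sum(1 for x in v if x)

def hamming_distance(u, v):
    return sum(1 for a, b in zip(u, v) if a != b)

# For a linear code the minimum distance equals the minimum weight of a
# non-zero codeword; the [7,4] example is small enough to enumerate.
G = [[1,0,0,0,0,1,1], [0,1,0,0,1,1,0], [0,0,1,0,1,0,1], [0,0,0,1,1,1,1]]

def encode(m):
    return [sum(m[i] & G[i][j] for i in range(4)) % 2 for j in range(7)]

d = min(hamming_weight(encode(m))
        for m in product([0, 1], repeat=4) if any(m))   # d == 3
```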

  8. Decoding problem
     Classical decoding problem: find the closest codeword c ∈ C to a given y ∈ F_q^n, assuming that there is a unique closest codeword.
     There are lots of code families with fast decoding algorithms:
     • e.g., Hamming codes, BCH codes, Reed–Solomon codes, Goppa codes/alternant codes, Gabidulin codes, Reed–Muller codes, algebraic-geometric codes, etc.

  9. Generic decoding is hard
     The picture changes for a binary linear code with no obvious structure:
     • Berlekamp, McEliece, van Tilborg (1978) showed that the general decoding problem for linear codes over F_2 is NP-complete.
     • About 2^((0.5 + o(1)) n / log_2(n)) binary operations are required for a code of length n and dimension ≈ 0.5n.

  10. Parity-check matrix of a linear code
     • Recall that a linear code C is generated by some matrix G.
     • Switch perspective and look at the corresponding parity-check matrix H, which satisfies H·G^T = 0.
     • In particular, H·c^T = 0 for all codewords c.
     • Use Gaussian elimination to compute the (n − k) × n kernel matrix H from a given G.
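For the systematic example matrix from slide 6, G = [I_k | A], a parity-check matrix can be written down directly as H = [A^T | I_{n−k}] instead of running a full Gaussian elimination. A sketch (illustration only, not from the slides):

```python
# Parity-check matrix from a systematic generator matrix G = [I_k | A]:
# over F_2 the choice H = [A^T | I_{n-k}] satisfies H G^T = 0.
G = [[1,0,0,0,0,1,1],
     [0,1,0,0,1,1,0],
     [0,0,1,0,1,0,1],
     [0,0,0,1,1,1,1]]
k, n = 4, 7

A = [row[k:] for row in G]                       # the non-identity part
H = [[A[i][j] for i in range(k)] + [int(t == j) for t in range(n - k)]
     for j in range(n - k)]

# sanity check: H G^T = 0 over F_2
ok = all(sum(H[r][c] & G[m][c] for c in range(n)) % 2 == 0
         for r in range(n - k) for m in range(k))
```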

  11. Syndrome decoding
     • The decoder gets an input y ∈ F_q^n and tries to determine an error vector e of a given weight w such that c = y − e is a codeword.
     Syndrome formulation of the problem:
     • Given y, compute the syndrome s = H·y^T = H·(c + e)^T = H·e^T.
     • The tricky part is to find a weight-w word e such that s = H·e^T.
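A sketch of syndrome decoding for the running (7, 4, 3) Hamming example, where w = 1: the syndrome of a single error equals the column of H at the error position, so matching s against the columns locates the flipped bit. (Illustration only; real decoders for Goppa codes correct w errors.)

```python
H = [[0,1,1,1,1,0,0],            # a parity-check matrix of the [7,4,3] code
     [1,1,0,1,0,1,0],
     [1,0,1,1,0,0,1]]

def syndrome(H, y):
    """s = H y^T over F_2."""
    return [sum(row[j] & y[j] for j in range(len(y))) % 2 for row in H]

def correct_one_error(H, y):
    """For a single flipped bit, s equals the column of H at the error
    position; matching s against the columns locates the error."""
    s = syndrome(H, y)
    if not any(s):
        return y                      # y is already a codeword
    for j in range(len(y)):
        if all(H[i][j] == s[i] for i in range(len(H))):
            return y[:j] + [y[j] ^ 1] + y[j + 1:]
    raise ValueError("not decodable with a single error")

c = [0, 0, 1, 1, 0, 1, 0]             # the codeword from slide 6
y = c[:]; y[5] ^= 1                   # transmit with one bit flipped
decoded = correct_one_error(H, y)     # recovers c
```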

  12. 1. Background 2. The McEliece cryptosystem 3. Information-set-decoding attacks 4. Designs: Wild McEliece 5. Announcements

  13. Assumptions
     • This talk looks at "text-book" versions of the cryptosystems.
     • Plaintexts are not randomized.
     • There exist CCA2-secure conversions of code-based cryptosystems which should be used when implementing the systems.

  14. Code-based cryptography
     • McEliece proposed a public-key cryptosystem based on error-correcting codes in 1978.
     • The secret key is a linear error-correcting code with an efficient decoding algorithm.
     • The public key is a transformation of the secret code which is hard to decode.

  15. Encryption
     • Given public system parameters n, k, w.
     • The public key is a random-looking k × n matrix G with entries in F_q.
     • Encrypt a message m ∈ F_q^k as mG + e, where e ∈ F_q^n is a random error vector of weight w.
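A minimal sketch of this encryption step in Python (illustration only; the "public key" in the demo is just the small example generator matrix standing in for a real random-looking G):

```python
import random

def mceliece_encrypt(m, G_pub, w, rng):
    """y = m G_pub + e over F_2, where e is a uniformly random
    weight-w error vector (w distinct positions are flipped)."""
    k, n = len(G_pub), len(G_pub[0])
    y = [sum(m[i] & G_pub[i][j] for i in range(k)) % 2 for j in range(n)]
    for j in rng.sample(range(n), w):
        y[j] ^= 1
    return y

# demo: the small example matrix stands in for a real public key
G_pub = [[1,0,0,0,0,1,1],[0,1,0,0,1,1,0],[0,0,1,0,1,0,1],[0,0,0,1,1,1,1]]
rng = random.Random(0)
m = [1, 0, 1, 0]
y = mceliece_encrypt(m, G_pub, 1, rng)
c = [sum(m[i] & G_pub[i][j] for i in range(4)) % 2 for j in range(7)]
```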

  16. Secret key
     The public key G has a hidden Goppa-code structure allowing fast decoding: G = S·G′·P, where
     • G′ is the generator matrix of a Goppa code Γ of length n, dimension k, and error-correcting capability w;
     • S is a random k × k invertible matrix; and
     • P is a random n × n permutation matrix.
     The triple (G′, S, P) forms the secret key.
     Note: detecting this structure, i.e., finding G′ given G, seems even more difficult than attacking a random G.

  17. Decryption
     The legitimate receiver knows S, G′, and P with G = S·G′·P, and a decoding algorithm for Γ.
     To decrypt y = mG + e:
     1. Compute y·P^(−1) = m·S·G′ + e·P^(−1); note that e·P^(−1) still has weight w.
     2. Apply the decoding algorithm of Γ to find m·S·G′, which is a codeword in Γ, and from it obtain m.
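Putting the pieces together, the whole scheme can be sketched end to end in Python. This toy substitutes the (7, 4, 3) Hamming code for the secret Goppa code (so w = 1) and uses the single-error syndrome decoder; all names and parameters are illustrative, not from the slides:

```python
import random

# Toy end-to-end McEliece over F_2: the [7,4,3] Hamming code plays the
# role of the secret Goppa code, so the system corrects w = 1 error.
k, n = 4, 7
G_sec = [[1,0,0,0,0,1,1],[0,1,0,0,1,1,0],[0,0,1,0,1,0,1],[0,0,0,1,1,1,1]]
H_sec = [[0,1,1,1,1,0,0],[1,1,0,1,0,1,0],[1,0,1,1,0,0,1]]

def mat_mul(A, B):
    return [[sum(a & b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

def invert(M):
    """Invert a square matrix over F_2 by Gaussian elimination
    on [M | I]; returns None if M is singular."""
    m = len(M)
    A = [row[:] + [int(i == j) for j in range(m)] for i, row in enumerate(M)]
    for col in range(m):
        piv = next((r for r in range(col, m) if A[r][col]), None)
        if piv is None:
            return None
        A[col], A[piv] = A[piv], A[col]
        for r in range(m):
            if r != col and A[r][col]:
                A[r] = [x ^ y for x, y in zip(A[r], A[col])]
    return [row[m:] for row in A]

def decode(z):
    """Single-error syndrome decoder for the secret Hamming code."""
    s = [sum(row[j] & z[j] for j in range(n)) % 2 for row in H_sec]
    if any(s):
        j = next(j for j in range(n)
                 if all(H_sec[i][j] == s[i] for i in range(len(H_sec))))
        z = z[:j] + [z[j] ^ 1] + z[j + 1:]
    return z

def keygen(rng):
    while True:                                  # random invertible S
        S = [[rng.randrange(2) for _ in range(k)] for _ in range(k)]
        S_inv = invert(S)
        if S_inv is not None:
            break
    sigma = list(range(n)); rng.shuffle(sigma)   # the permutation P
    SG = mat_mul(S, G_sec)
    G_pub = [[row[sigma[j]] for j in range(n)] for row in SG]   # S G' P
    return G_pub, (S_inv, sigma)

def encrypt(m, G_pub, rng):
    y = [sum(m[i] & G_pub[i][j] for i in range(k)) % 2 for j in range(n)]
    y[rng.randrange(n)] ^= 1                     # add a weight-1 error
    return y

def decrypt(y, secret):
    S_inv, sigma = secret
    z = [0] * n
    for j in range(n):
        z[sigma[j]] = y[j]                       # step 1: undo P
    c = decode(z)                                # step 2: decode to m S G'
    mS = c[:k]                                   # G_sec is systematic
    return mat_mul([mS], S_inv)[0]               # recover m

rng = random.Random(2)
G_pub, secret = keygen(rng)
m = [1, 0, 1, 1]
recovered = decrypt(encrypt(m, G_pub, rng), secret)   # == m
```

Representing P as a permutation list `sigma` (instead of an n × n matrix) keeps the inverse permutation trivial to apply in step 1.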

  18. 1. Background 2. The McEliece cryptosystem 3. Information-set-decoding attacks 4. Designs: Wild McEliece 5. Announcements

  19. Generic attack
     Disclaimer: for simplicity, focus on codes over F_2 in the following.
     The attacker tries to build a decoder which gets as input
     • the parity-check matrix H (computed from the public matrix G),
     • the ciphertext y ∈ F_2^n, and
     • the public error weight w.
     The algorithm tries to determine an error vector e of weight w such that s = H·y^T = H·e^T.
     The best known generic decoders rely on information-set decoding.

  20. Problem
     [Figure: an (n − k) × n binary matrix H with columns c_1, c_2, c_3, ..., c_n and the syndrome s = c_2 ⊕ c_3 ⊕ c_18 ⊕ c_20 ⊕ c_24 ⊕ ···]
     Given an (n − k) × n matrix H and a syndrome s, the goal is to find w columns of H with xor s.

  21. Row randomization
     [Figure: the matrix H with syndrome s = c_2 ⊕ c_3 ⊕ c_18 ⊕ c_20 ⊕ c_24 ⊕ ···]
     One can arbitrarily permute the rows without changing the problem.
     Goal: find w columns of H with xor s.

  22. Row randomization
     [Figure: H with its rows permuted; the solution s = c_2 ⊕ c_3 ⊕ c_18 ⊕ c_20 ⊕ c_24 ⊕ ··· is unchanged.]
     One can arbitrarily permute the rows without changing the problem.
     Goal: find w columns of H with xor s.

  23. Column normalization
     [Figure: the matrix H with syndrome s = c_2 ⊕ c_3 ⊕ c_18 ⊕ c_20 ⊕ c_24 ⊕ ···]
     One can arbitrarily permute the columns without changing the problem.
     Goal: find w columns of H with xor s.

  24. Column normalization
     [Figure: H after a column permutation; the solution becomes s = c_1 ⊕ c_3 ⊕ c_18 ⊕ c_20 ⊕ c_24 ⊕ ···]
     One can arbitrarily permute the columns without changing the problem.
     Goal: find w columns of H with xor s.

  25. Information-set decoding
     [Figure: H brought to systematic form, with the identity matrix in the first n − k columns; the syndrome shown is s = c_3 ⊕ c_7 ⊕ c_28 ⊕ c_30 ⊕ c_37 ⊕ ···]
     One can add one row to another (applying the same operation to s) without changing the problem; this is how the identity part is built.
     Goal: find w columns which xor to s.

  26. Basic information-set decoding
     1962 Prange:
     • Perhaps the xor involves none of the last k columns.
     • If so, one immediately sees that s is constructed from w columns of H.
     • If not, re-randomize and restart.
     1988 Lee–Brickell:
     • It is more likely that the xor involves exactly 2 of the last k columns.
     • Check for each pair (i, j) with n − k < i < j ≤ n whether s ⊕ c_i ⊕ c_j has weight w − 2.
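The two steps above can be sketched together in the syndrome formulation (illustrative Python; the instance, parameters, and function names are invented for the demo, and the Prange step here accepts any weight ≤ w):

```python
import random

def reduce_to_identity(H, s):
    """Row-reduce so the first n-k columns become the identity, applying
    the same row operations to s.  Returns None if those columns are
    singular (the caller then re-randomizes)."""
    H = [row[:] for row in H]; s = s[:]
    r = len(H)
    for col in range(r):
        piv = next((i for i in range(col, r) if H[i][col]), None)
        if piv is None:
            return None
        H[col], H[piv] = H[piv], H[col]
        s[col], s[piv] = s[piv], s[col]
        for i in range(r):
            if i != col and H[i][col]:
                H[i] = [a ^ b for a, b in zip(H[i], H[col])]
                s[i] ^= s[col]
    return H, s

def lee_brickell(H, s, w, seed=1):
    """Find e with H e^T = s of weight <= w (Prange, p = 0) or
    exactly w with 2 columns among the last k (Lee-Brickell, p = 2)."""
    rng = random.Random(seed)
    r, n = len(H), len(H[0])
    while True:
        perm = list(range(n)); rng.shuffle(perm)       # randomize columns
        red = reduce_to_identity([[row[j] for j in perm] for row in H], s)
        if red is None:
            continue
        Hr, sr = red
        if sum(sr) <= w:               # Prange: error only in pivot part
            e = [0] * n
            for i in range(r):
                e[perm[i]] = sr[i]
            return e
        cols = [[Hr[i][j] for i in range(r)] for j in range(n)]
        for a in range(r, n):          # Lee-Brickell: 2 of the last k
            for b in range(a + 1, n):
                t = [sr[i] ^ cols[a][i] ^ cols[b][i] for i in range(r)]
                if sum(t) == w - 2:
                    e = [0] * n
                    for i in range(r):
                        e[perm[i]] = t[i]
                    e[perm[a]] = e[perm[b]] = 1
                    return e

# demo: plant a weight-3 error in a random instance and recover a solution
rng = random.Random(7)
r, n, w = 8, 16, 3
H = [[rng.randrange(2) for _ in range(n)] for _ in range(r)]
e_true = [0] * n
for j in rng.sample(range(n), w):
    e_true[j] = 1
s = [sum(H[i][j] & e_true[j] for j in range(n)) % 2 for i in range(r)]
e_found = lee_brickell(H, s, w)
```

The found vector need not equal the planted one; any weight-≤ w preimage of s is a valid decoding.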

  27. Lee–Brickell
     [Figure: the error split into w − 2 columns among the first n − k positions and 2 columns, c_i and c_j, among the last k positions.]
     Check for each pair (i, j) with n − k < i < j ≤ n whether s ⊕ c_i ⊕ c_j has weight w − 2.

  28. Improvements
     1989 Leon, 1989 Krouk:
     • Check for each pair (i, j) whether s ⊕ c_i ⊕ c_j has weight w − 2 and its first ℓ bits are all zero.
     • Fast to test.
     1989 Stern:
     • Collision decoding: a square-root improvement. Find collisions between the first ℓ bits of s ⊕ c_i and the first ℓ bits of c_j.
     • For each collision, check whether s ⊕ c_i ⊕ c_j has weight w − 2.

  29. Collision decoding
     [Figure: the error split into 0 columns on the ℓ collision positions, w − 2 columns on the remaining n − k − ℓ positions of the identity part, and 2 columns among the last k.]
     Check for collisions on ℓ bits of s ⊕ c_i and c_j.

  30. Collision decoding
     [Figure: as before, but with w − 2p columns in the identity part and 2p columns among the last k: c_i1, ..., c_ip and c_j1, ..., c_jp.]
     Check for collisions on ℓ bits of s ⊕ c_i1 ⊕ ··· ⊕ c_ip and c_j1 ⊕ ··· ⊕ c_jp.
