ECEN 5022 Cryptography: Introduction to Information Theory
Peter Mathys, University of Colorado



  1. Title slide: ECEN 5022 Cryptography, Introduction to Information Theory. Peter Mathys, University of Colorado, Spring 2008. Outline: Entropy, Mutual Information, Perfect Cryptosystem.

  2. Entropy

◮ Definition: The entropy $H(X)$ of a discrete RV $X$ with alphabet $\mathcal{A}$ is defined as
$$H(X) = -\sum_{x \in \mathcal{A}} p_X(x) \log p_X(x).$$
$H(X)$ is bounded as $0 \le H(X) \le \log|\mathcal{A}|$. If $\log_2$ is used, then the units of $H(X)$ are bits.

◮ Definition: The joint entropy $H(X,Y)$ of discrete RVs $X$ and $Y$ with alphabets $\mathcal{A}$ and $\mathcal{B}$ is defined as
$$H(X,Y) = -\sum_{x \in \mathcal{A}} \sum_{y \in \mathcal{B}} p_{X,Y}(x,y) \log p_{X,Y}(x,y).$$
Bounds: $0 \le H(X,Y) \le H(X) + H(Y) \le \log|\mathcal{A}| + \log|\mathcal{B}|$.
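A quick numerical check makes the definitions concrete. Below is a minimal Python sketch (the joint distribution and variable names are illustrative, not from the slides) that computes $H(X)$, $H(Y)$, and $H(X,Y)$ and confirms the joint entropy bound:

```python
import math

def H(probs):
    """Shannon entropy in bits; terms with p = 0 contribute 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Joint distribution p_{X,Y}(x,y) on a 2x2 alphabet (illustrative values).
p_xy = {('a', 0): 0.4, ('a', 1): 0.1,
        ('b', 0): 0.2, ('b', 1): 0.3}

# Marginals p_X and p_Y.
p_x, p_y = {}, {}
for (x, y), p in p_xy.items():
    p_x[x] = p_x.get(x, 0) + p
    p_y[y] = p_y.get(y, 0) + p

H_X, H_Y, H_XY = H(p_x.values()), H(p_y.values()), H(p_xy.values())
print(H_X, H_Y, H_XY)              # 1.0, ~0.971, ~1.846
assert H_XY <= H_X + H_Y + 1e-12   # H(X,Y) <= H(X) + H(Y)
```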

  3. Binary Entropy Function

[Figure: plot of the binary entropy function $H(p) = -p \log_2 p - (1-p) \log_2(1-p)$, with $H(p)$ in bits on the vertical axis (0 to 1) versus $p$ on the horizontal axis (0 to 1); the curve rises from 0 at $p = 0$ to a maximum of 1 bit at $p = 0.5$ and falls back to 0 at $p = 1$.]

◮ Definition: The binary entropy function is defined as $H(p) = -p \log p - (1-p) \log(1-p)$. If $\log_2$ is used, the result is in bits.
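For reference, a short sketch of $H(p)$ in Python (base-2 logs, so the result is in bits; the endpoint convention $H(0) = H(1) = 0$ follows from $0 \log 0 = 0$):

```python
import math

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.5))    # 1.0 bit, the maximum
print(binary_entropy(0.11))   # ~0.5 bits
```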

  4. Conditional Entropy

◮ Definition: The conditional entropy $H(Y|X)$ of discrete RV $Y$ given discrete RV $X$ (with alphabets $\mathcal{B}$ and $\mathcal{A}$) is defined as
$$H(Y|X) = \sum_{x \in \mathcal{A}} p_X(x)\, H(Y|X=x) = -\sum_{x \in \mathcal{A}} p_X(x) \sum_{y \in \mathcal{B}} p_{Y|X}(y|x) \log p_{Y|X}(y|x) = -\sum_{x \in \mathcal{A}} \sum_{y \in \mathcal{B}} p_{X,Y}(x,y) \log p_{Y|X}(y|x).$$
Bounds: $0 \le H(Y|X) \le H(Y) \le \log|\mathcal{B}|$.

◮ $H(Y|X) = H(Y)$ (and $H(X|Y) = H(X)$) iff $X$ and $Y$ are statistically independent.

◮ Chain rule: $H(X,Y) = H(X) + H(Y|X) = H(Y) + H(X|Y)$. This follows from Bayes' rule. (A numerical check is sketched below.)
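A minimal sketch, reusing the illustrative joint distribution from the entropy example, that computes $H(Y|X)$ and verifies the chain rule numerically:

```python
import math

def H(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Same illustrative joint distribution as in the entropy sketch.
p_xy = {('a', 0): 0.4, ('a', 1): 0.1, ('b', 0): 0.2, ('b', 1): 0.3}
p_x = {'a': 0.5, 'b': 0.5}   # marginal of X

# H(Y|X) = -sum_{x,y} p(x,y) * log2 p(y|x), with p(y|x) = p(x,y)/p(x).
H_Y_given_X = -sum(p * math.log2(p / p_x[x])
                   for (x, y), p in p_xy.items() if p > 0)

# Chain rule: H(X,Y) = H(X) + H(Y|X).
assert abs(H(p_xy.values()) - (H(p_x.values()) + H_Y_given_X)) < 1e-12
```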

  5. Mutual Information

◮ Definition: The mutual information $I(X;Y)$ between two discrete RVs $X$ and $Y$ with alphabets $\mathcal{A}$ and $\mathcal{B}$ is defined as
$$I(X;Y) = \sum_{x \in \mathcal{A}} \sum_{y \in \mathcal{B}} p_{X,Y}(x,y) \log \frac{p_{X,Y}(x,y)}{p_X(x)\, p_Y(y)} = H(X) - H(X|Y) = H(Y) - H(Y|X) = H(X) + H(Y) - H(X,Y).$$
Bounds: $0 \le I(X;Y) \le \min\{\log|\mathcal{A}|, \log|\mathcal{B}|\}$ (since $I(X;Y) \le H(X)$ and $I(X;Y) \le H(Y)$).
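Continuing the same toy distribution, $I(X;Y)$ computed directly from the defining sum matches the entropy identity $H(X) + H(Y) - H(X,Y)$:

```python
import math

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

p_xy = {('a', 0): 0.4, ('a', 1): 0.1, ('b', 0): 0.2, ('b', 1): 0.3}
p_x = {'a': 0.5, 'b': 0.5}
p_y = {0: 0.6, 1: 0.4}

# Direct definition of I(X;Y).
I_direct = sum(p * math.log2(p / (p_x[x] * p_y[y]))
               for (x, y), p in p_xy.items() if p > 0)
# Identity: I(X;Y) = H(X) + H(Y) - H(X,Y).
I_from_H = H(p_x.values()) + H(p_y.values()) - H(p_xy.values())
assert abs(I_direct - I_from_H) < 1e-12
print(I_direct)   # ~0.125 bits
```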

  6. Conditional Mutual Information

◮ Definition: The conditional mutual information $I(X;Y|Z)$ between two discrete RVs $X$ and $Y$ with alphabets $\mathcal{A}$ and $\mathcal{B}$, given discrete RV $Z$ with alphabet $\mathcal{C}$, is defined as
$$I(X;Y|Z) = \sum_{x \in \mathcal{A}} \sum_{y \in \mathcal{B}} \sum_{z \in \mathcal{C}} p_{X,Y,Z}(x,y,z) \log \frac{p_{X,Y|Z}(x,y|z)}{p_{X|Z}(x|z)\, p_{Y|Z}(y|z)} = H(X|Z) - H(X|Y,Z) = H(Y|Z) - H(Y|X,Z) = H(X|Z) + H(Y|Z) - H(X,Y|Z).$$
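The same pattern extends to three variables. A sketch that evaluates $I(X;Y|Z)$ from the defining sum, using the identity $p_{X,Y|Z}/(p_{X|Z}\, p_{Y|Z}) = p_{X,Y,Z}\, p_Z/(p_{X,Z}\, p_{Y,Z})$ (the joint distribution below is an illustrative choice):

```python
import math

# Joint p_{X,Y,Z} on binary alphabets (illustrative values).
p_xyz = {(0,0,0): 0.2, (0,1,0): 0.05, (1,0,0): 0.05, (1,1,0): 0.2,
         (0,0,1): 0.1, (0,1,1): 0.15, (1,0,1): 0.15, (1,1,1): 0.1}

def marginal(keep):
    """Marginalize p_xyz onto the coordinate indices in `keep`."""
    out = {}
    for xyz, p in p_xyz.items():
        key = tuple(xyz[i] for i in keep)
        out[key] = out.get(key, 0) + p
    return out

p_z, p_xz, p_yz = marginal([2]), marginal([0, 2]), marginal([1, 2])

I_XY_given_Z = sum(
    p * math.log2(p * p_z[(z,)] / (p_xz[(x, z)] * p_yz[(y, z)]))
    for (x, y, z), p in p_xyz.items() if p > 0)
print(I_XY_given_Z)   # conditional mutual information in bits
```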

  7. Perfect Cryptosystem

[Diagram: plaintext $M$ is encrypted under key $K$ to produce ciphertext $C = E(K, M)$.]

◮ A perfect cryptosystem should have the following properties:
$$I(M;C) = 0, \qquad I(M;C|K) = H(M).$$

◮ From the first statement, $I(M;C) = H(M) - H(M|C) = 0$ and thus $H(M|C) = H(M)$, i.e., $M$ and $C$ must be statistically independent.

◮ From the second statement, $I(M;C|K) = H(M|K) - H(M|C,K) = H(M|K) = H(M)$ (the second equality follows since $H(M|C,K) = 0$), and thus $M$ and $K$ must be statistically independent.
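Both conditions can be checked exhaustively for a tiny one-time-pad-style system: $C = (M + K) \bmod 2$ with a uniform key independent of the message. A sketch (the message distribution is an illustrative choice):

```python
import math

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

p_m = {0: 0.7, 1: 0.3}   # plaintext distribution (illustrative)
p_k = {0: 0.5, 1: 0.5}   # uniform key, independent of M

# C = (M + K) mod 2; build the joint distribution of (M, C).
p_mc, p_c = {}, {}
for m, pm in p_m.items():
    for k, pk in p_k.items():
        c = (m + k) % 2
        p_mc[(m, c)] = p_mc.get((m, c), 0) + pm * pk
        p_c[c] = p_c.get(c, 0) + pm * pk

# I(M;C) = H(M) + H(C) - H(M,C) should be zero.
I_MC = H(p_m.values()) + H(p_c.values()) - H(p_mc.values())
print(I_MC)   # 0.0 up to floating-point rounding
```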

  8. Key Equivocation

◮ Let the discrete RVs $M$, $K$, $C$ denote the plaintext, key, and ciphertext, respectively, of a cryptosystem.

◮ Definition: $H(K|C)$ is called the key equivocation.

◮ Theorem: If $M$ and $K$ are statistically independent, then $H(K|C) = H(K) + H(M) - H(C)$.

◮ Proof: Express $H(M,K,C)$ in two ways, as
$$H(M,K,C) = H(M,K) + \underbrace{H(C|M,K)}_{=0} = H(M,K) = H(M) + \underbrace{H(K|M)}_{=H(K)} = H(M) + H(K),$$
and as
$$H(M,K,C) = H(K,C) + \underbrace{H(M|K,C)}_{=0} = H(K,C).$$
(Here $H(C|M,K) = 0$ because the ciphertext is determined by the plaintext and key, $H(M|K,C) = 0$ because decryption recovers the plaintext from the key and ciphertext, and $H(K|M) = H(K)$ by the independence assumption.) Therefore
$$H(K|C) = H(K,C) - H(C) = H(M,K,C) - H(C) = H(M) + H(K) - H(C).$$
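The theorem can be sanity-checked on the same toy one-time pad as above: with $H(M) \approx 0.881$, $H(K) = 1$, and $H(C) = 1$ bit, the key equivocation comes out to $H(K|C) \approx 0.881$ bits. A sketch:

```python
import math

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Same toy one-time pad: C = (M + K) mod 2, K uniform and independent of M.
p_m = {0: 0.7, 1: 0.3}
p_k = {0: 0.5, 1: 0.5}

p_kc, p_c = {}, {}
for m, pm in p_m.items():
    for k, pk in p_k.items():
        c = (m + k) % 2
        p_kc[(k, c)] = p_kc.get((k, c), 0) + pm * pk
        p_c[c] = p_c.get(c, 0) + pm * pk

H_K_given_C = H(p_kc.values()) - H(p_c.values())   # H(K|C) = H(K,C) - H(C)
# Theorem: H(K|C) = H(K) + H(M) - H(C).
expected = H(p_k.values()) + H(p_m.values()) - H(p_c.values())
assert abs(H_K_given_C - expected) < 1e-12
print(H_K_given_C)   # ~0.881 bits
```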

  9. Entropy of Language

◮ Plaintext RV: $M^n = (M_1, M_2, \ldots, M_n)$.

◮ Ciphertext RV: $C^n = (C_1, C_2, \ldots, C_n)$.

◮ Question: How much ciphertext is needed to determine the key uniquely?

◮ Definition: The rate or entropy of a language $L$ is defined as
$$H_L = \lim_{n \to \infty} \frac{H(M^n)}{n}.$$
For English, $1.0 \le H_L \le 1.5$ bits per letter.

◮ Definition: The redundancy of a language $L$ with alphabet $\mathcal{M}$ is defined as
$$R_L = 1 - \frac{H_L}{\log|\mathcal{M}|}.$$
English has redundancy $\approx 0.75$.
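As a rough consistency check (taking $H_L \approx 1.25$ bits/letter, the midpoint of the quoted range, as an assumption):
$$R_L = 1 - \frac{H_L}{\log_2 26} \approx 1 - \frac{1.25}{4.70} \approx 0.73,$$
close to the quoted $\approx 0.75$, which corresponds to $H_L \approx 1.17$ bits/letter.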

  10. Spurious Keys

◮ Definition: The set of possible keys $k$ for encryption function $E_k$, given some ciphertext $c \in \mathcal{C}^n$ of length $n$, is
$$K(c) = \{ k \in \mathcal{K} \mid \exists\, m \in \mathcal{M}^n \text{ with } \Pr(m) > 0 \text{ and } c = E_k(m) \},$$
i.e., "the set of keys that yield plausible plaintexts given ciphertext $c$."

◮ Definition: Only one member of $K(c)$ is correct; the rest are called spurious keys, and their number is denoted $S_n$.

◮ The expected number of spurious keys, averaged over all ciphertexts $c \in \mathcal{C}^n$ of length $n$, is
$$E[S_n] = \sum_{c \in \mathcal{C}^n} \big( |K(c)| - 1 \big) \Pr(c).$$
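$E[S_n]$ can be evaluated exactly for a tiny cipher by enumerating $K(c)$. A sketch for a shift cipher over a 3-letter alphabet with an i.i.d. plaintext letter model (all parameter values are illustrative):

```python
import itertools

A = 3                                 # alphabet {0, 1, 2}
n = 4                                 # ciphertext length
p_letter = {0: 0.7, 1: 0.3, 2: 0.0}   # letter 2 never occurs in plaintext

def pr_m(m):
    """i.i.d. letter model for plaintext probability."""
    p = 1.0
    for s in m:
        p *= p_letter[s]
    return p

def encrypt(m, k):
    """Shift cipher: add key k to each letter mod A."""
    return tuple((s + k) % A for s in m)

pr_c = {}     # Pr(c), averaged over messages and equiprobable keys
keys_c = {}   # K(c): keys that map some plausible plaintext to c
for m in itertools.product(range(A), repeat=n):
    pm = pr_m(m)
    if pm == 0:
        continue
    for k in range(A):
        c = encrypt(m, k)
        pr_c[c] = pr_c.get(c, 0) + pm / A
        keys_c.setdefault(c, set()).add(k)

E_Sn = sum((len(keys_c[c]) - 1) * p for c, p in pr_c.items())
print(E_Sn)   # expected number of spurious keys for this toy cipher
```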

  11. Unicity Distance

◮ Theorem: If $|\mathcal{C}| = |\mathcal{M}|$ and keys are chosen equiprobably from the keyspace $\mathcal{K}$, then, given a ciphertext of sufficiently large length $n$, the expected number of spurious keys satisfies
$$E[S_n] \ge \frac{|\mathcal{K}|}{|\mathcal{M}|^{n R_L}} - 1,$$
where $|\mathcal{K}|$ is the size of the keyspace, $|\mathcal{M}|$ is the size of the plaintext alphabet, and $R_L$ is the redundancy of the plaintext language.

◮ Setting $E[S_n] = 0$ yields $n R_L \log|\mathcal{M}| \ge \log|\mathcal{K}|$ and thus
$$n \ge n_0 = \frac{\log|\mathcal{K}|}{R_L \log|\mathcal{M}|}.$$
The quantity $n_0$ is called the unicity distance of a cryptosystem.
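Spelling out the algebra of the last step: requiring the lower bound on $E[S_n]$ to be zero or less,
$$\frac{|\mathcal{K}|}{|\mathcal{M}|^{n R_L}} - 1 \le 0 \;\Longleftrightarrow\; |\mathcal{K}| \le |\mathcal{M}|^{n R_L} \;\Longleftrightarrow\; \log|\mathcal{K}| \le n R_L \log|\mathcal{M}| \;\Longleftrightarrow\; n \ge \frac{\log|\mathcal{K}|}{R_L \log|\mathcal{M}|} = n_0.$$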

  12. Examples

◮ For all examples, $|\mathcal{M}| = 26 \Rightarrow \log_2|\mathcal{M}| = 4.70$, and $R_L = 0.75$.

◮ Shift (Caesar) cipher: $|\mathcal{K}| = 26 \Rightarrow \log_2|\mathcal{K}| = 4.70$.
Unicity distance: $n_0 = \dfrac{4.70}{0.75 \times 4.70} = 1.333$.

◮ Simple substitution cipher: $|\mathcal{K}| = 26! \Rightarrow \log_2|\mathcal{K}| = 88.38$.
Unicity distance: $n_0 = \dfrac{88.38}{0.75 \times 4.70} = 25.07$.

◮ One-time pad: $|\mathcal{K}| = 26^n \Rightarrow \log_2|\mathcal{K}| = 4.70\,n$.
Unicity distance: $n_0 = \dfrac{4.70\,n}{0.75 \times 4.70} = 1.333\,n$. The only solution to $n \ge n_0 = 1.333\,n$ is $n = 0$.
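The three numbers are easy to reproduce; a minimal sketch (agreeing with the slide up to rounding):

```python
import math

def unicity(log2_K, M=26, R_L=0.75):
    """n0 = log2|K| / (R_L * log2|M|)."""
    return log2_K / (R_L * math.log2(M))

print(unicity(math.log2(26)))                  # shift cipher: ~1.33 letters
print(unicity(math.log2(math.factorial(26))))  # substitution: ~25.1 letters
# One-time pad: log2|K| = n*log2(26), so n0 = 1.333*n, and n >= n0 forces n = 0.
```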
