Secret Key Agreement: General Capacity and Second-Order Asymptotics
Masahito Hayashi, Himanshu Tyagi, Shun Watanabe
Two-party secret key agreement (Maurer '93, Ahlswede-Csiszár '93)

[Figure: terminals observing $X$ and $Y$ exchange public communication $F$ and compute keys $K_x$ and $K_y$.]

A random variable $K$ constitutes an $(\epsilon, \delta)$-SK if:

- $P(K_x = K_y = K) \ge 1 - \epsilon$ (recoverability)
- $\frac{1}{2} \| P_{KF} - P_{\mathrm{unif}} \times P_F \|_1 \le \delta$ (security)

What is the maximum length $S(X, Y)$ of an SK that can be generated?
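The security condition is just a total variation distance between the actual key-transcript pair and an ideal key that is uniform and independent of the transcript. A toy numeric check of the definition (the joint distribution below is hypothetical, chosen only for illustration):

```python
import itertools

def tv(P, Q):
    """Total variation distance (1/2) * sum |P - Q| over a shared support."""
    return 0.5 * sum(abs(P[v] - Q[v]) for v in P)

# Toy check of the security condition for a 1-bit key K and 1-bit transcript F.
# P_KF: hypothetical joint distribution, slightly biased toward K = 0.
P_KF = {(0, 0): 0.26, (0, 1): 0.26, (1, 0): 0.24, (1, 1): 0.24}
P_F = {f: sum(P_KF[k, f] for k in (0, 1)) for f in (0, 1)}
ideal = {(k, f): 0.5 * P_F[f] for k, f in itertools.product((0, 1), (0, 1))}
print(tv(P_KF, ideal))  # 0.02, so K meets the security condition with delta = 0.02
```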
Where do we stand?

- Maurer '93, Ahlswede-Csiszár '93: $S(X^n, Y^n) = n I(X \wedge Y) + o(n)$ (secret key capacity), via Fano's inequality
- Csiszár-Narayan '04: secret key capacity for multiple terminals, via Fano's inequality
- Renner-Wolf '03, '05: single-shot bounds on $S(X, Y)$, via a potential function method

Typical construction: $X$ sends a compressed version of itself to $Y$, and the key $K$ is extracted from the shared $X$ using a 2-universal hash family (one standard instantiation is sketched below).

Converse??
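The construction needs only some 2-universal family. A common instantiation, shown here as my own illustration rather than code from the talk, is the family of random linear maps over GF(2): for any fixed pair of distinct inputs, a uniformly random binary matrix collides with probability exactly $2^{-k}$.

```python
import secrets

def random_linear_hash(n_bits, k_bits):
    """Draw a hash from the 2-universal family of linear maps GF(2)^n -> GF(2)^k.

    For any fixed x != x', a uniformly random binary matrix A satisfies
    P(Ax = Ax') = 2^-k_bits, which is exactly 2-universality.
    """
    rows = [secrets.randbits(n_bits) for _ in range(k_bits)]  # rows of A

    def h(x: int) -> int:
        out = 0
        for r in rows:
            out = (out << 1) | (bin(x & r).count("1") & 1)  # <row, x> over GF(2)
        return out

    return h

h = random_linear_hash(n_bits=16, k_bits=4)
print(h(0b1010_1010_1010_1010))  # a 4-bit digest of a 16-bit observation
```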
Converse: conditional independence testing bound

The source of our rekindled excitement about this problem:

Theorem (Tyagi-Watanabe 2014). Given $\epsilon, \delta \ge 0$ with $\epsilon + \delta < 1$ and $0 < \eta < 1 - \epsilon - \delta$, it holds that
$$S_{\epsilon, \delta}(X, Y) \le -\log \beta_{\epsilon + \delta + \eta}\left( P_{XY}, P_X P_Y \right) + 2 \log(1/\eta),$$
where
$$\beta_\epsilon(P, Q) \triangleq \inf_{T \,:\, P[T] \ge 1 - \epsilon} Q[T], \qquad P[T] = \sum_v P(v) T(0 \mid v), \quad Q[T] = \sum_v Q(v) T(0 \mid v).$$

In the spirit of the meta-converse of Polyanskiy, Poor, and Verdú.
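For finite alphabets, $\beta_\epsilon(P, Q)$ can be evaluated exactly via the Neyman-Pearson lemma: accept outcomes in decreasing order of the likelihood ratio $P/Q$, randomizing at the boundary so that $P[T]$ is exactly $1 - \epsilon$. A minimal sketch under that reading (my code, not the authors'):

```python
def beta(eps, P, Q):
    """Smallest Q[T] over randomized tests T with P[T] >= 1 - eps (Neyman-Pearson).

    P, Q: lists of probabilities over the same finite alphabet.
    """
    order = sorted(range(len(P)),
                   key=lambda v: P[v] / Q[v] if Q[v] > 0 else float("inf"),
                   reverse=True)
    need, q = 1.0 - eps, 0.0
    for v in order:
        if P[v] >= need:  # accept v only with probability need / P[v]
            q += Q[v] * (need / P[v]) if P[v] > 0 else 0.0
            return q
        need -= P[v]      # accept v fully and keep going
        q += Q[v]
    return q

# Correlated pair on {0,1}^2 tested against the product of its marginals:
Pxy = [0.4, 0.1, 0.1, 0.4]       # P_XY
PxPy = [0.25, 0.25, 0.25, 0.25]  # P_X x P_Y
print(beta(0.1, Pxy, PxPy))      # -> 0.75 (up to float rounding)
```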
Single-shot achievability?

Recall the two steps of SK agreement:

Step 1 (aka information reconciliation). Slepian-Wolf code to send $X$ to $Y$.

Step 2 (aka randomness extraction or privacy amplification). A "random function" $K$ extracts uniform random bits from $X$ as $K(X)$.

Example. For $(X, Y) \equiv (X^n, Y^n)$:

- Rate of communication in step 1 $= H(X \mid Y) = H(X) - I(X \wedge Y)$
- Rate of randomness extraction in step 2 $= H(X)$

The difference is the secret key capacity (a numeric instance follows below).

Are we done? Not quite. Let's take a careful look.
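To make the two rates concrete, here is the standard binary example (my numbers): $X$ uniform on $\{0, 1\}$ and $Y$ obtained by flipping $X$ with probability $0.1$.

```python
from math import log2

def h2(p):
    """Binary entropy in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

p = 0.1                    # crossover probability: Y = X xor Bern(p)
H_X = 1.0                  # X uniform on {0, 1}
H_X_given_Y = h2(p)        # step 1: Slepian-Wolf rate H(X|Y)
I_XY = H_X - H_X_given_Y   # step 2 extracts H(X); the difference is the key rate

print(H_X_given_Y, I_XY)   # ~0.469 bits revealed, ~0.531 secret bits per symbol
```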
Step 1: Slepian-Wolf theorem (Miyake-Kanaya '95, Han '03)

Lemma (Slepian-Wolf coding). There exists a code $(e, d)$ of size $M$, with encoder $e : \mathcal{X} \to \{1, \ldots, M\}$ and decoder $d : \{1, \ldots, M\} \times \mathcal{Y} \to \mathcal{X}$, such that
$$P_{XY}\left( \{ (x, y) \mid x \ne d(e(x), y) \} \right) \le P_{XY}\left( \{ (x, y) \mid -\log P_{X|Y}(x \mid y) \ge \log M - \gamma \} \right) + 2^{-\gamma}.$$

Note that $-\log P_{X|Y} = -\log P_X - \log(P_{Y|X} / P_Y)$; compare with $H(X \mid Y) = H(X) - I(X \wedge Y)$. The second term is a proxy for the mutual information.

The communication rate needed is approximately equal to
(a large-probability upper bound on $-\log P_X$) $- \log(P_{Y|X} / P_Y)$.
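Choosing $\log M$ in the lemma amounts to evaluating the tail of the conditional information spectrum. For the i.i.d. binary example above this tail is a binomial sum; the sketch below (my illustration, not from the talk) evaluates the lemma's error bound for a given rate and $\gamma$.

```python
from math import comb, log2

# X^n i.i.d. uniform bits, Y_i = X_i xor Bern(p): -log P_{X|Y}(X_i|Y_i) equals
# -log(1-p) w.p. 1-p and -log(p) w.p. p, so the spectrum tail is a binomial tail.
def spectrum_tail(n, p, threshold):
    """P( -log P_{X^n|Y^n}(X^n | Y^n) >= threshold )."""
    tail = 0.0
    for k in range(n + 1):  # k = number of positions where X_i != Y_i
        val = -k * log2(p) - (n - k) * log2(1 - p)
        if val >= threshold:
            tail += comb(n, k) * p**k * (1 - p) ** (n - k)
    return tail

n, p, gamma = 200, 0.1, 10
log_M = 0.7 * n + gamma  # communication rate 0.7 > H(X|Y) ~ 0.469
print(spectrum_tail(n, p, log_M - gamma) + 2**-gamma)  # the lemma's error bound
```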
Step 2: Leftover hash lemma

Lesson from step 1: the communication rate is approximately
(a large-probability upper bound on $-\log P_X$) $- \log(P_{Y|X} / P_Y)$.

Recall that the min-entropy of $X$ is given by
$$H_{\min}(P_X) = -\log \max_x P_X(x).$$

Lemma (Leftover hash; Impagliazzo et al. '89, Bennett et al. '95, Renner-Wolf '05). There exists a function $K$ of $X$ taking values in $\mathcal{K}$ such that
$$\| P_{KZ} - P_{\mathrm{unif}} P_Z \| \le \sqrt{ |\mathcal{K}| |\mathcal{Z}| \, 2^{-H_{\min}(P_X)} }.$$

Randomness can be extracted at a rate approximately equal to (a large-probability lower bound on $-\log P_X$). Replacing the information spectrum $-\log P_X(X)$ by the worst-case $H_{\min}$ incurs a loss in SK rate.
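Reading the lemma backwards gives a key-length calculation: to guarantee security $\delta$ against side information $Z$, one can extract roughly $H_{\min} - \log |\mathcal{Z}| - 2 \log(1/\delta)$ bits. A minimal sketch (the function name and numbers are mine):

```python
from math import floor, log2, sqrt

def extractable_key_bits(h_min, log_Z, delta):
    """Largest log|K| with sqrt(|K| |Z| 2^-Hmin) <= delta (leftover hash lemma)."""
    # log|K| + log|Z| - Hmin <= 2 log(delta)
    #   <=>  log|K| <= Hmin - log|Z| - 2 log(1/delta)
    return max(0, floor(h_min - log_Z - 2 * log2(1 / delta)))

# A source with min-entropy 1000 bits, 100 bits visible to the eavesdropper,
# and target security delta = 2^-40 leaves about 820 extractable bits:
k = extractable_key_bits(h_min=1000, log_Z=100, delta=2**-40)
print(k, sqrt(2**k * 2**100 * 2**-1000) <= 2**-40)  # 820 True
```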
Spectrum slicing

[Figure: a slice of the spectrum of $-\log P_X(X)$, ranging from $\lambda_{\min}$ to $\lambda_{\max}$, with slices of width $\Delta$.]

Slice the spectrum of $X$ into $L$ bins of length $\Delta$ and send the bin number to $Y$.
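A literal rendering of the slicing step (my sketch, assuming $\lambda_{\max} = \lambda_{\min} + L\Delta$): the bin index of $-\log P_X(x)$ is what gets communicated, and realizations outside the sliced range feed the error term of the theorem below.

```python
from math import floor, log2

def slice_index(p_x_of_x, lam_min, delta, L):
    """Bin number of -log P_X(x) in the sliced spectrum [lam_min, lam_min + L*delta).

    Returns None when the realization falls outside the sliced range;
    that event contributes to the error term in the achievability theorem.
    """
    val = -log2(p_x_of_x)
    j = floor((val - lam_min) / delta)
    return j if 0 <= j < L else None

# X lands in slice j; sending j (log L bits) lets Y decode within one slice,
# at the price of the 2 log L loss in the extracted key length.
print(slice_index(p_x_of_x=2**-12.3, lam_min=10, delta=0.5, L=8))  # -> 4
```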
Single-shot achievability

Theorem. For every $\gamma > 0$ and $0 \le \lambda \le \lambda_{\min}$, there exists an $(\epsilon, \delta)$-SK $K$ taking values in $\mathcal{K}$ with
$$\epsilon \le P\left( \log \frac{P_{XY}(X, Y)}{P_X(X) P_Y(Y)} \le \lambda + \gamma + \Delta \right) + P\left( -\log P_X(X) \notin (\lambda_{\min}, \lambda_{\max}) \right) + \frac{1}{L},$$
$$\delta \le \frac{1}{2} \sqrt{ |\mathcal{K}| \, 2^{-(\lambda - 2 \log L)} }.$$
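Plugging numbers into the $\delta$ bound shows the trade-off hidden in $\lambda - 2 \log L$ (my arithmetic, for illustration only): giving up $2t$ bits of key length below $\lambda - 2 \log L$ buys a security level of $2^{-(t+1)}$.

```python
from math import log2, sqrt

lam, L, t = 500, 16, 40
log_K = lam - 2 * log2(L) - 2 * t  # shorten the key 2t bits below lambda - 2 log L
delta_bound = 0.5 * sqrt(2**log_K * 2 ** -(lam - 2 * log2(L)))
print(log_K, delta_bound)          # 412.0 bits of key, delta <= 2^-41
```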
Secret key capacity for general sources

Consider a sequence of sources $(X_n, Y_n)$. The SK capacity $C$ is defined as
$$C \triangleq \sup_{(\epsilon_n, \delta_n)} \liminf_{n \to \infty} \frac{1}{n} S_{\epsilon_n, \delta_n}(X_n, Y_n),$$
where the sup is over all $\epsilon_n, \delta_n \ge 0$ such that $\lim_{n \to \infty} \epsilon_n + \delta_n = 0$.

The inf-mutual information rate $\underline{I}(\mathbf{X} \wedge \mathbf{Y})$ is defined as
$$\underline{I}(\mathbf{X} \wedge \mathbf{Y}) \triangleq \sup\left\{ \alpha \;\middle|\; \lim_{n \to \infty} P(Z_n < \alpha) = 0 \right\},$$
where
$$Z_n = \frac{1}{n} \log \frac{P_{X_n Y_n}(X_n, Y_n)}{P_{X_n}(X_n) P_{Y_n}(Y_n)}.$$
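For an i.i.d. source, $Z_n$ is an empirical average of i.i.d. information densities, so it concentrates at the mutual information and the inf-rate reduces to the usual $I(X \wedge Y)$. A quick sanity check by simulation (my code, using the binary example from earlier):

```python
import random
from math import log2

# Doubly symmetric binary source: X ~ Bern(1/2), Y = X xor Bern(p).
p, n = 0.1, 100_000
random.seed(0)

def density(x, y):
    """Per-symbol information density log P_XY(x,y) / (P_X(x) P_Y(y))."""
    p_y_given_x = 1 - p if x == y else p
    return log2(p_y_given_x / 0.5)  # P_Y(y) = 1/2 by symmetry

z_n = 0.0
for _ in range(n):
    x = random.getrandbits(1)
    y = x ^ (random.random() < p)
    z_n += density(x, y)
print(z_n / n)  # ~ I(X ^ Y) = 1 - h(0.1) ≈ 0.531 bits
```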