  1. Secure Information Exchange for Omniscience. Chung Chan (CityU). Joint work with Navin Kashyap (IISc), Praneeth Kumar Vippathalla (IISc) and Qiaoqiao Zhou (CUHK). Audio slides: https://www.cs.cityu.edu.hk/~ccha23/isit2020

  2. Secure information exchange: formulation. The nodes in $V$ observe $Z_V^n = (Z_V^{(1)}, Z_V^{(2)}, \dots, Z_V^{(n)})$, i.i.d. as $Z_V$, and engage in an interactive public discussion $F = F_V$. Each node $i \in V$ has public info. $F_i$, private info. $X_i$, target info. $Y_i$ and censored info. $Z_i$, with
  $\frac{1}{n} H(F_i) \to r_i$ (discussion rate),
  $\frac{1}{n} I(F \wedge Y_i^n \mid Z_i^n) \to u_i$ (utility),
  $\frac{1}{n} I(F \wedge X_i^n \mid Z_i^n) \to \ell_i$ (leakage).
  Characterize, given the source $P_{X_V, Y_V, Z_V}$, the region $R := \text{closure}\{(u_V, \ell_V, r_V) \text{ achievable by some } F\}$.

  3. Related problems. By restricting the source model, the problem reduces to:
  Private information extraction [Asoodeh et al 19]: $V = \{1, 2\}$, $Z_1 = (X_2, Y_2)$, and $X_1$, $Y_1$, $Z_2$ are null.
  Information bottleneck [Tishby et al 99]: $V = \{1, 2\}$, and $X_1$, $Y_1$, $X_2$, $Z_2$ are null.

  4. Secure omniscience: a special scenario of secure information exchange. A wiretapper node $w$ has private info. $X_w = Z_U$ and censored info. $Z_w$, so its leakage is $\frac{1}{n} I(F \wedge X_w^n \mid Z_w^n) \to \ell_w$; it has no target info. Each user $i \in U$ observes $Z_i$ and discusses at an unlimited rate $r_i$; the active users $i \in A$ demand omniscience, $u_i = H(Z_U \mid Z_i)$, while the helpers in $U \setminus A$ have no target info.
  Characterize $R_L := \inf\{\ell_w \mid (u_V, \ell_V, r_V) \in R,\ u_i = H(Z_U \mid Z_i)\ \forall i \in A\}$.

  5. Example. With uniformly random and independent bits $X_a$, $X_b$, $X_c$, let $A = U := \{1, 2, 3\}$ and
  $Z_w := (X_a + X_b, X_b + X_c)$, $Z_1 := (X_a, X_b)$, $Z_2 := (X_b, X_c)$, $Z_3 := X_a + X_b + X_c$.
  The discussion $F_1 = X_a^n + X_b^n$, $F_2 = X_b^n + X_c^n$ attains omniscience for all users, and since $F = Z_w^n$,
  $\ell_w = \limsup_{n \to \infty} \frac{1}{n} I(F \wedge Z_U^n \mid Z_w^n) = 0$, so $R_L = 0$.
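The slide-5 scheme can be checked exhaustively. The following is an illustrative sketch (not the authors' code), assuming the reconstruction $F = (X_a + X_b, X_b + X_c) = Z_w$ over GF(2):

```python
from itertools import product

# Brute-force check: with independent uniform bits X_a, X_b, X_c, the
# discussion F = (F_1, F_2) = (X_a + X_b, X_b + X_c) equals Z_w, so nothing
# is leaked beyond what the wiretapper already has, yet every user attains
# omniscience of (X_a, X_b, X_c).
def check_scheme():
    for xa, xb, xc in product((0, 1), repeat=3):
        f1, f2 = xa ^ xb, xb ^ xc      # F = Z_w, so I(F ∧ Z_U | Z_w) = 0
        # user 1 observes Z_1 = (X_a, X_b) and recovers X_c from F_2
        assert (xa, xb, xb ^ f2) == (xa, xb, xc)
        # user 2 observes Z_2 = (X_b, X_c) and recovers X_a from F_1
        assert (f1 ^ xb, xb, xc) == (xa, xb, xc)
        # user 3 observes Z_3 = X_a + X_b + X_c and peels off both sums
        s = xa ^ xb ^ xc
        ra = s ^ f2                    # X_a = Z_3 + (X_b + X_c)
        rb = ra ^ f1                   # X_b = X_a + (X_a + X_b)
        rc = f2 ^ rb                   # X_c = (X_b + X_c) + X_b
        assert (ra, rb, rc) == (xa, xb, xc)
    return True

print(check_scheme())  # True
```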

  6. Communication for Omniscience [Csiszar and Narayan 04]. Same setup as secure omniscience (slide 4), but the quantity of interest is the minimum total discussion rate:
  $R_{CO} := \inf\{\sum_{i \in U} r_i \mid (u_V, \ell_V, r_V) \in R,\ u_i = H(Z_U \mid Z_i)\ \forall i \in A\}$.

  7. Minimum leakage vs minimum discussion rate.
  Proposition: $R_L = R_{CO}$ if $Z_w$ is null, since then $\ell_w = \limsup_{n \to \infty} \frac{1}{n} I(F \wedge Z_U^n \mid Z_w^n) = \limsup_{n \to \infty} \frac{1}{n} I(F \wedge Z_U^n) = \limsup_{n \to \infty} \frac{1}{n} H(F)$.
  Proposition: $R_L$ and $R_{CO}$ are not simultaneously achievable in general. From [Csiszar and Narayan 04], with $A := \{1, 2\} \subseteq U := \{1, 2, 3\}$, $Z_w := (X_a + X_b, X_b + X_c)$, $Z_1 := (X_a, X_b)$, $Z_2 := (X_b, X_c)$ and $Z_3 := X_a + X_b + X_c$,
  $R_{CO} = \min\{\sum_{i=1}^{3} r_i \mid r_1 + r_2 \ge 0,\ r_1 + r_3 \ge 1,\ r_2 + r_3 \ge 1\} = 1$,
  solved uniquely by $(r_1, r_2, r_3) = (0, 0, 1)$. The $R_{CO}$-achieving scheme $F = F_3 = Z_3^n$ has leakage $\ell_w = H(Z_3 \mid Z_w) = 1 > 0 = R_L$.
  Claim: any scheme with $(r_1, r_2, r_3) = (0, 0, 1)$ cannot have $\ell_w = 0 = R_L$.
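The one-bit leakage of the $R_{CO}$-achieving scheme can be verified by counting. A minimal sketch (not the authors' code): since $F = X_a^n + X_b^n + X_c^n$ is a function of the source, the per-symbol leakage $\frac{1}{n} I(F \wedge Z_U^n \mid Z_w^n)$ reduces to $H(Z_3 \mid Z_w)$, computed over the 8 equally likely bit triples.

```python
from itertools import product
from collections import Counter
from math import log2

# Conditional entropy H(A | B) in bits, for a list of equally likely
# (a, b) samples; used here to evaluate H(F | Z_w) for F = X_a + X_b + X_c.
def cond_entropy(pairs):
    n, joint, marg = len(pairs), Counter(pairs), Counter(b for _, b in pairs)
    return -sum(c / n * log2(c / marg[b]) for (_, b), c in joint.items())

samples = [(xa ^ xb ^ xc, (xa ^ xb, xb ^ xc))     # (F, Z_w) per realization
           for xa, xb, xc in product((0, 1), repeat=3)]
print(cond_entropy(samples))   # 1.0 = ell_w of this scheme, versus R_L = 0
```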

  8. Main results: lower bound on minimum leakage.
  Definition (Multiterminal secret key agreement [Csiszar and Narayan 04]):
  $C_S := \sup_{K, F} \limsup_{n \to \infty} \frac{1}{n} H(K)$ subject to
  $\limsup_{n \to \infty} \frac{1}{n} H(K \mid Z_i^n, F) = 0\ \forall i \in A$ (recoverability), and
  $\limsup_{n \to \infty} \frac{1}{n} I(K \wedge F, Z_w^n) = 0$ (secrecy).
  Theorem 1: For the secure omniscience scenario with $|A| \ge 2$,
  $R_L \ge H(Z_U \mid Z_w) - C_S \ge R_{CO}(Z_U \mid W) - I(Z_w \wedge Z_U \mid W)$
  for any random variable $W$ satisfying the Markov condition $I(W \wedge Z_w \mid Z_U) = 0$.

  9. Proof idea.
  Theorem 1: For the secure omniscience scenario with $|A| \ge 2$,
  $R_L \ge H(Z_U \mid Z_w) - C_S \ge R_{CO}(Z_U \mid W) - I(Z_w \wedge Z_U \mid W)$
  for any random variable $W$ satisfying the Markov condition $I(W \wedge Z_w \mid Z_U) = 0$.
  For the first lower bound, similar to an argument in [Csiszar and Narayan 04],
  $C_S \ge H(Z_U \mid Z_w) - R_L$
  by privacy amplification after secure omniscience. The second lower bound follows from
  $C_S \le H(Z_U \mid W) - R_{CO}(Z_U \mid W)$,
  an upper bound on $C_S$ in [Csiszar and Narayan 04].

  10. Example. With $A = U := \{1, 2, 3, 4\}$ and
  $Z_w := X_a + X_b + X_c$, $Z_1 := X_a$, $Z_2 := (X_a, X_b)$, $Z_3 := (X_b, X_c)$, $Z_4 := X_c$:
  since user 1 is active, $C_S \le H(Z_1) = 1$, so by Theorem 1,
  $R_L \ge H(Z_U \mid Z_w) - C_S = H(X_a, X_b, X_c \mid X_a + X_b + X_c) - C_S = 2 - 1 = 1$.
  Is the lower bound achievable?
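The wiretapper's missing entropy $H(Z_U \mid Z_w) = 2$ in this example can also be checked by counting. A minimal sketch (not the authors' code) over the 8 equally likely realizations of $(X_a, X_b, X_c)$:

```python
from itertools import product
from collections import Counter
from math import log2

# Conditional entropy H(A | B) in bits, for equally likely (a, b) samples.
def cond_entropy(pairs):
    n, joint, marg = len(pairs), Counter(pairs), Counter(b for _, b in pairs)
    return -sum(c / n * log2(c / marg[b]) for (_, b), c in joint.items())

# (Z_U, Z_w): the wiretapper sees only the parity X_a + X_b + X_c, which
# leaves 2 of the 3 source bits undetermined, so Theorem 1 gives
# R_L >= 2 - C_S >= 2 - 1 = 1.
samples = [((xa, xb, xc), xa ^ xb ^ xc)
           for xa, xb, xc in product((0, 1), repeat=3)]
print(cond_entropy(samples))   # 2.0
```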

  11. Main results: upper bound on minimum leakage.
  Theorem 2: For the secure omniscience scenario,
  $R_L \le \frac{1}{m} [R_{CO}(Z_U^m \mid F') + I(Z_U^m \wedge F' \mid Z_w^m)] \le R_{CO}$
  where $F'$ is a public discussion for block length $m \ge 1$.
  (Figure: the sources $Z_w^m, Z_1^m, \dots, Z_U^m$ over one block of length $m$, with discussion $F'$.)

  12. Proof idea. Concatenate: run the discussion $F'$ on one block of length $m$, then attain omniscience of the extended source $(Z_i^m, F')$, $i \in U$, over many such blocks with a further discussion $F''$ of rates $r_i''$. Omniscience is possible with $\sum_{i \in U} r_i'' = R_{CO}(Z_U^m \mid F')$, and the leakage is at most the rate of $F''$ plus the leakage of $F'$:
  $\ell_w \le \frac{1}{m} [\sum_{i \in U} r_i'' + I(Z_U^m \wedge F' \mid Z_w^m)]$.
  (Figure: $F'$ over one length-$m$ block, concatenated with $F''$ over many blocks.)

  13. Example (continued). For the source of slide 10 ($A = U := \{1, 2, 3, 4\}$, $Z_w := X_a + X_b + X_c$, $Z_1 := X_a$, $Z_2 := (X_a, X_b)$, $Z_3 := (X_b, X_c)$, $Z_4 := X_c$), $R_L \ge 1$ by Theorem 1 and $R_L \le 1$ by Theorem 2: $m = 1$ does not work, but $m = 2$ works. With $X_a^m := \begin{bmatrix} X_a^{(1)} \\ X_a^{(2)} \end{bmatrix}$, $X_b^m := \begin{bmatrix} X_b^{(1)} \\ X_b^{(2)} \end{bmatrix}$, try $M := \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}$ and the discussion
  $F_2' = X_a^m + M X_b^m$, $F_3' = X_c^m + (M + I) X_b^m$.
  N.b., $F'$ already achieves omniscience, so $R_{CO}(Z_U^m \mid F') = 0$.
  Do the upper and lower bounds match in general?
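The $m = 2$ scheme can be verified exhaustively. In the sketch below (not the authors' code), the matrix $M$ is reconstructed under the assumption that both $M$ and $M + I$ must be invertible over GF(2), which is exactly why $m = 1$ fails: no $1 \times 1$ matrix has that property. Any such $M$ works; the slide's own matrix is only partially legible.

```python
from itertools import product
from collections import Counter, defaultdict
from math import log2

M   = ((1, 1), (1, 0))                       # assumed; det M = 1 over GF(2)
MpI = ((0, 1), (1, 1))                       # M + I; det (M + I) = 1 too

def mul(A, v):                               # 2x2 matrix-vector product mod 2
    return tuple((A[i][0] & v[0]) ^ (A[i][1] & v[1]) for i in range(2))

def add(u, v):                               # vector sum mod 2
    return tuple(a ^ b for a, b in zip(u, v))

views = defaultdict(set)                     # (user, Z_i^m, F') -> candidate sources
fw = []                                      # (F', Z_w^m) samples for the leakage
for bits in product((0, 1), repeat=6):
    xa, xb, xc = bits[0:2], bits[2:4], bits[4:6]
    F = (add(xa, mul(M, xb)), add(xc, mul(MpI, xb)))   # (F'_2, F'_3)
    zw = add(add(xa, xb), xc)                          # Z_w^m = X_a + X_b + X_c
    fw.append((F, zw))
    for user, z in {1: xa, 2: (xa, xb), 3: (xb, xc), 4: xc}.items():
        views[(user, z, F)].add((xa, xb, xc))

# omniscience: every user's view (own source plus F') pins down the source
assert all(len(s) == 1 for s in views.values())

# leakage: H(F' | Z_w^m) / m = 1 bit per symbol, matching R_L = 1
n, joint, marg = len(fw), Counter(fw), Counter(z for _, z in fw)
h = -sum(c / n * log2(c / marg[z]) for (_, z), c in joint.items())
print(h / 2)   # 1.0
```

Note that $F_2' + F_3' = X_a^m + X_b^m + X_c^m = Z_w^m$ since $M + (M + I) = I$, which is what caps the leakage at one bit per symbol.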

  14. Main results: tightness of the upper and lower bounds.
  Theorem 3: For any finite linear source with two users, i.e., $A = U = \{1, 2\}$,
  $R_L = H(Z_1, Z_2 \mid Z_w) - I(Z_1 \wedge Z_2 \mid G)$
  where $G$ is the maximum common function of $Z_w$ and $Z_1$, i.e., the unique solution to
  $J_{GK}(Z_w \wedge Z_1) := \max_{G : H(G \mid Z_w) = H(G \mid Z_1) = 0} H(G)$.
  Proof of $\le$: in Theorem 2, choose $m = 1$ and $F'$ to align with $Z_w$.
  Proof of $\ge$: in Theorem 1, choose $W = G$ and substitute $C_S = I(Z_1 \wedge Z_2 \mid G)$.

  15. Bounds do not match in general.
  Proposition: the lower bound on $R_L$ in Theorem 1 is loose. With $A := \{1, 2\} \subseteq U := \{1, 2, 3\}$, $Z_w := X_a + X_b$, $Z_2 = Z_1 := X_a$ and $Z_3 := X_b$: since user 1 is active, $C_S \le H(Z_1) = 1$, so Theorem 1 only gives
  $R_L \ge H(Z_U \mid Z_w) - C_S = 1 - 1 = 0$.
  Claim: the minimum leakage is $R_L \ge 1$. Optimal scheme: $F = F_3 = X_b^n$, with $\ell_w = 1$.
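Why the bound degenerates here can be seen by counting. A minimal sketch (not the authors' code): $X_a$, shared by both active users, is independent of $Z_w = X_a + X_b$, so it already serves as a one-bit secret key and the bound collapses to zero; yet helper 3 must still reveal $X_b$, which costs a full bit given $Z_w$.

```python
from itertools import product
from collections import Counter
from math import log2

# Conditional entropy H(A | B) in bits, for equally likely (a, b) samples.
def cond_entropy(pairs):
    n, joint, marg = len(pairs), Counter(pairs), Counter(b for _, b in pairs)
    return -sum(c / n * log2(c / marg[b]) for (_, b), c in joint.items())

bits = list(product((0, 1), repeat=2))        # all (X_a, X_b) realizations
# the shared bit X_a gives the wiretapper nothing: H(X_a | Z_w) = H(X_a) = 1
print(cond_entropy([(xa, xa ^ xb) for xa, xb in bits]))   # 1.0
# but the discussion F = X_b leaks I(F ∧ Z_U | Z_w) = H(X_b | Z_w) = 1
print(cond_entropy([(xb, xa ^ xb) for xa, xb in bits]))   # 1.0
```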

  16. Extensions and challenges.
  Tightness for hypergraphical sources can be proved.
  A more explicit characterization of $R_L$ using graph entropy is possible.
  The lower bound can be improved for the counter-example.
  $R_L$ remains unknown for finite linear sources in general.
  $R_L$ for secure linear function computation, extending [Tyagi et al. 11].

  17. References
  I. Csiszár and P. Narayan, “Secrecy capacities for multiple terminals,” IEEE Transactions on Information Theory, vol. 50, no. 12, pp. 3047–3061, Dec. 2004.
  A. Gohari and V. Anantharam, “Information-theoretic key agreement of multiple terminals—Part I,” IEEE Transactions on Information Theory, vol. 56, no. 8, pp. 3973–3996, Aug. 2010.
  S. Asoodeh, M. Diaz, F. Alajaji, and T. Linder, “Estimation efficiency under privacy constraints,” IEEE Transactions on Information Theory, vol. 65, no. 3, pp. 1512–1534, March 2019.
  N. Tishby, F. C. Pereira, and W. Bialek, “The information bottleneck method,” in Thirty-Seventh Annual Allerton Conference on Communication, Control, and Computing, Sep. 1999.
  H. Tyagi, P. Narayan, and P. Gupta, “When is a function securely computable?” IEEE Transactions on Information Theory, vol. 57, no. 10, pp. 6337–6350, Oct. 2011.
