

  1. Tight bounds for communication-assisted agreement distillation
Jaikumar Radhakrishnan, Tata Institute of Fundamental Research, Mumbai
Joint work with Venkat Guruswami, Carnegie Mellon University

  2. Agreement distillation
Alice's input: X ∈ {0,1}^N; Alice's output: f_A(X) ∈ {0,1}^k.
Bob's input: Y ∈ {0,1}^N; Bob's output: f_B(Y) ∈ {0,1}^k.
(X, Y) ∼ BSC(ε): Pr[X_i ≠ Y_i] = ε independently for each coordinate i.
Goal: f_A(X) is uniformly distributed in {0,1}^k, and Pr[f_A(X) = f_B(Y)] is close to 1.

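To make the model concrete, here is a minimal sketch (ours, in Python/NumPy, not from the slides) of sampling a pair (X, Y) ∼ BSC(ε): X is uniform over {0,1}^N and Y flips each bit of X independently with probability ε.

```python
import numpy as np

def sample_bsc_pair(n, eps, seed=None):
    """Sample (X, Y) ~ BSC(eps): X uniform in {0,1}^n, each Y_i != X_i with prob. eps."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=n)        # X uniform over {0,1}^n
    noise = rng.random(n) < eps           # independent bit flips with probability eps
    y = (x + noise) % 2                   # Y_i = X_i XOR flip_i
    return x, y

x, y = sample_bsc_pair(1000, 0.1, seed=0)
print((x != y).mean())   # empirical flip rate, close to eps = 0.1
```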

  4. Naive protocol: no communication
Alice's output: f_A(X) = X_1 X_2 ... X_k. Bob's output: f_B(Y) = Y_1 Y_2 ... Y_k.
Success probability: Pr[f_A(X) = f_B(Y)] = (1 − ε)^k ≈ exp(−εk).

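A quick Monte Carlo check of the naive protocol (a sketch we added, not from the talk): both parties output their first k bits, and the empirical agreement rate is compared with the exact value (1 − ε)^k.

```python
import numpy as np

def naive_agreement_rate(n, k, eps, trials=20000, seed=0):
    """Estimate Pr[X_1..X_k = Y_1..Y_k] when (X, Y) ~ BSC(eps)."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=(trials, n))
    y = (x + (rng.random((trials, n)) < eps)) % 2     # flip each bit with probability eps
    agree = np.all(x[:, :k] == y[:, :k], axis=1)      # first-k-bits outputs match
    return agree.mean()

eps, k = 0.1, 20
print(naive_agreement_rate(n=100, k=k, eps=eps))   # empirical estimate
print((1 - eps) ** k)                              # exact value, about 0.12
```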

  6. Can we do better?
Yes, a little better (Bogdanov & Mossel 2011): Alice and Bob can agree with probability at least ≈ 2^{−(ε/(1−ε)) k}.
But no better (Bogdanov & Mossel 2011): Alice and Bob can agree with probability at most ≈ 2^{−(ε/(1−ε)) k}.

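For a concrete sense of "a little better", here is a small comparison we added of the exponents per output bit: the naive protocol succeeds with probability 2^{−k·log2(1/(1−ε))}, while the Bogdanov–Mossel bound is 2^{−(ε/(1−ε)) k}, and the latter exponent is smaller for the values of ε below.

```python
import math

# Exponent per output bit: success probability is 2^{-exponent * k}.
for eps in [0.05, 0.1, 0.25]:
    naive = math.log2(1 / (1 - eps))   # naive protocol: (1 - eps)^k
    bm = eps / (1 - eps)               # Bogdanov-Mossel: 2^{-(eps/(1-eps)) k}
    print(f"eps={eps}: naive exponent {naive:.3f}, BM exponent {bm:.3f}")
```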

  9. How much can communication help?
Alice's input: X ∈ {0,1}^N; Bob's input: Y ∈ {0,1}^N. Alice sends Bob a message M.
Alice's output: f_A(X) ∈ {0,1}^k; Bob's output: f_B(Y, M) ∈ {0,1}^k.
How many bits must Alice send Bob to ensure agreement with constant probability?
What is the trade-off between communication and probability of agreement?



  14. The trade-off
Definition: C_{BSC(ε)}(k, η) is the minimum number of bits Alice transmits to Bob in a protocol where g_A(X) is uniformly distributed in {0,1}^k and Pr[g_A(X) = g_B(Y, M)] ≥ η.
Write the probability of agreement as 2^{−γk} and the communication as ck, and set B := 4ε(1−ε).
[Plot: the trade-off curve c = B(1 − γ) − 2√(B(1 − B) γ), agreement exponent γ against communication rate c; it meets γ = ε/(1 − ε) at c = 0 and γ = 0 at c = B.]
BM '10: if c = 0, then γ = ε/(1 − ε).
This work: if c = 4ε(1 − ε), then γ → 0.
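A quick numerical check (our sketch; it assumes the closed form c = B(1 − γ) − 2√(B(1 − B) γ) reconstructed above): at γ = ε/(1 − ε) it gives c = 0, matching BM '10, and at γ = 0 it gives c = B = 4ε(1 − ε), matching this work.

```python
import math

def comm_rate(gamma, eps):
    """Communication rate c on the trade-off curve, per the formula above."""
    B = 4 * eps * (1 - eps)
    return B * (1 - gamma) - 2 * math.sqrt(B * (1 - B) * gamma)

eps = 0.1
print(comm_rate(eps / (1 - eps), eps))   # ~0: no communication at gamma = eps/(1-eps)
print(comm_rate(0.0, eps))               # = 4*eps*(1-eps) = B: full rate gives gamma -> 0
```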

  15. Related work
Communication complexity: Canonne, Guruswami, Meka, and Sudan (2015) used capacity-achieving codes to ensure agreement with high probability using (h(ε) + o(1)) k bits of communication.
Information theory: the case k = 1 is the subject of a recent conjecture of Courtade and Kumar (2014): the function g_A(X) = X_1 maximizes I[g_A(X) : Y]. Chandar and Tchamkerten (2014) showed that the corresponding conjecture is false for large k.

  16. The protocol of Bogdanov and Mossel
View {+1, −1}^N as points on an N-dimensional sphere.
Pick 2^k well-separated vectors, labelled by {0,1}^k.
Alice: f_A(X) = label of the vector closest to X.
Bob: f_B(Y) = label of the vector closest to Y.
Proof idea: the projections along the various directions are Gaussian and approximately independent.

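A rough simulation of this style of protocol (our own sketch, not the authors' construction; random ±1 centers stand in for the well-separated vectors): each party outputs the label of the center with the largest inner product with its input.

```python
import numpy as np

def bm_style_agreement(n, k, eps, trials=2000, seed=0):
    """Agreement rate when both parties output the nearest of 2^k random ±1 centers."""
    rng = np.random.default_rng(seed)
    centers = rng.choice([-1, 1], size=(2**k, n))       # 2^k random direction vectors
    x = rng.choice([-1, 1], size=(trials, n))           # X uniform over {+1,-1}^n
    flips = np.where(rng.random((trials, n)) < eps, -1, 1)
    y = x * flips                                       # Y: each coordinate flipped w.p. eps
    fa = np.argmax(x @ centers.T, axis=1)               # Alice: label of closest center
    fb = np.argmax(y @ centers.T, axis=1)               # Bob: label of closest center
    return np.mean(fa == fb)

print(bm_style_agreement(n=500, k=8, eps=0.1))
print((1 - 0.1) ** 8)   # naive-protocol baseline for comparison
```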

  22. Alice's view
The ambient space is {+1, −1}^N. The space is partitioned into disks.
When X falls in a disk, Alice reports the label of its center.
Each disk has volume ≈ 2^{−k} (as a fraction of the whole space).

  23. Bob's view
The disks are bigger and overlap.
Bogdanov and Mossel '10: about a 2^{−(ε/(1−ε)) k} fraction of the volume is covered by only one disk.


  25. Alice's message
The space is partitioned into disks, and the disks are colored using 2^c colors.
When X falls in a disk, Alice reports the label of its center.
Alice sends Bob the color of that disk (c bits).


  28. Bob's view
Again, the disks are bigger and overlap.
But most points are covered by only one disk of a given color, so Bob uniquely identifies the disk (and its center).
How many colors must Alice use?

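To illustrate the coloring idea, here is a rough sketch under our own modeling choices (random centers and a random coloring, not the paper's construction): Alice sends the color of her chosen center, and Bob outputs the closest center of that color. With c_bits = 0 this reduces to the no-communication protocol above; as the number of colors grows, competing centers rarely share Alice's color, so agreement improves.

```python
import numpy as np

def colored_agreement(n, k, c_bits, eps, trials=2000, seed=0):
    """Agreement rate when Alice also sends the (random) color of her chosen center."""
    rng = np.random.default_rng(seed)
    centers = rng.choice([-1, 1], size=(2**k, n))
    colors = rng.integers(0, 2**c_bits, size=2**k)      # random coloring of the 2^k centers
    x = rng.choice([-1, 1], size=(trials, n))
    y = x * np.where(rng.random((trials, n)) < eps, -1, 1)
    fa = np.argmax(x @ centers.T, axis=1)                # Alice's center (and its label)
    scores = y @ centers.T
    # Bob restricts attention to centers whose color matches Alice's message.
    match = colors[None, :] == colors[fa][:, None]
    fb = np.argmax(np.where(match, scores, -np.inf), axis=1)
    return np.mean(fa == fb)

for c_bits in [0, 2, 4, 6]:
    print(c_bits, colored_agreement(n=500, k=10, c_bits=c_bits, eps=0.2))
```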
