

  1. Trading Information Complexity for Error
     Yaqiao Li, joint work with Yuval Dagan, Yuval Filmus, Hamed Hatami
     School of Computer Science, McGill University
     July 8, 2017

  2. Main results
     How much information can one save by allowing an error ε?
     Showed a separation between two concepts in information complexity.
     Determined the communication complexity of computing the disjointness function with error ε.

  3. Information complexity
     An extension of Shannon's information theory towards studying communication complexity.
     (Photo: Claude Shannon, 1916-2001.)

  4. Communication complexity
     Alice receives an input X ∈ {0,1}^n, Bob receives an input Y ∈ {0,1}^n.
     They want to compute f(X, Y), where f: {0,1}^n × {0,1}^n → {0,1}, collaboratively using a protocol.
     A protocol π is an algorithm that defines what Alice and Bob do in order to compute f(X, Y).
     CC(π) := the number of bits of communication sent in π.
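
A minimal sketch in Python of what a protocol and its communication cost look like for the naive "exchange inputs" strategy on single-bit inputs (my own illustration, not part of the talk; the function names are made up):

    # Naive protocol for a single-bit function f: Alice and Bob simply
    # exchange their inputs, so each side can evaluate f locally.
    def naive_protocol(x, y, f):
        transcript = [x, y]      # Alice -> Bob: x (1 bit); Bob -> Alice: y (1 bit)
        output = f(x, y)         # both parties can now compute f
        cc = len(transcript)     # CC(pi) = number of bits sent = 2
        return output, cc

    def AND(x, y):
        return x & y

    print(naive_protocol(1, 0, AND))   # (0, 2)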

  5. Information complexity
     Same setting; assume the inputs (X, Y) ∼ µ, so that information (entropy) can be measured.
     How much information do the players have to reveal about their inputs?
     Information cost of a protocol π: IC_µ(π) = the information about Y that Alice learns + the information about X that Bob learns.
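
In standard information-theoretic notation (not spelled out on this slide), if Π denotes the protocol transcript and (X, Y) ∼ µ, this definition reads IC_µ(π) = I(Π; Y | X) + I(Π; X | Y): the first term is what Alice, who already holds X, learns about Y, and the second is what Bob, who already holds Y, learns about X.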

  6. Information cost: Example 1
     AND: {0,1} × {0,1} → {0,1}.
     Product distribution µ: Pr[X = 1] = 1/2, Pr[Y = 1] = 1/2.
     Naive protocol π: Alice sends her input X to Bob; Bob sends his input Y to Alice.
     Alice learns Bob's input: learned information = H(Y) = 1.
     Bob learns Alice's input: learned information = H(X) = 1.
     Information cost of π: IC_µ(π) = 1 + 1 = 2 = CC(π).
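
To make the slide's arithmetic concrete, here is a small numerical check (my own sketch, not from the talk): under this distribution the naive protocol reveals X to Bob and Y to Alice, so IC_µ(π) = H(X) + H(Y).

    from math import log2

    def H(p):
        # binary entropy in bits
        return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

    ic_naive = H(0.5) + H(0.5)   # what Alice learns + what Bob learns
    print(ic_naive)              # 2.0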

  7. Information cost: Example 2
     AND: {0,1} × {0,1} → {0,1}.
     Product distribution µ: Pr[X = 1] = 1/2, Pr[Y = 1] = 1/2.
     Better protocol τ: Alice sends her input X to Bob; Bob computes and outputs AND(X, Y).
     Note: CC(τ) = 2 = CC(π).
     Why is the protocol τ better than π?

  8. Information cost: Example 2, continued
     Better protocol τ: Alice sends her input X to Bob; Bob computes and outputs AND(X, Y).
     Information cost IC_µ(τ) = ?
     Bob always learns X: learned information = H(X) = 1.
     When X = 1, Alice learns Y; when X = 0, Alice learns nothing.
     Alice learns information = (1/2) H(Y) = 0.5.
     Information cost of τ: IC_µ(τ) = 1 + 0.5 = 1.5 < 2 = IC_µ(π), so τ is better!
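
The same kind of check for τ (again my own sketch): Bob's term is H(X), while Alice learns Y only on the X = 1 branch, which µ assigns probability 1/2; when X = 0, Bob's output AND(X, Y) = 0 tells her nothing about Y.

    from math import log2

    def H(p):
        # binary entropy in bits
        return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

    ic_tau = H(0.5) + 0.5 * H(0.5)   # Bob's term + Pr[X=1] * Alice's term
    print(ic_tau)                    # 1.5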

  9. Information complexity of a function f
     Definition: IC_µ(f, 0) := inf_π IC_µ(π), where the infimum is over protocols π that compute f with no error.
     Example: we saw that for µ the uniform distribution, IC_µ(AND, 0) ≤ IC_µ(τ) = 1.5.
     Is 1.5 optimal?

 10. Information complexity of AND
     Theorem (BGPW'13): Let µ be the uniform distribution; then IC_µ(AND, 0) ≈ 1.36...
     Theorem (BGPW'13): max_µ IC_µ(AND, 0) ≈ 1.49...

 11. Information complexity: Example 3
     XOR: {0,1} × {0,1} → {0,1}.
     Product distribution µ: Pr[X = 1] = 1/2, Pr[Y = 1] = 1/2.
