

  1. Implicit Communication for Control. Gireeja Ranade, presenting work of Pulkit Grover, Anant Sahai, and Se Yong Park. Slides courtesy of Pulkit Grover.

  2. Decentralized control: the "observer-controller" problem. [Figure: an abstraction in which controllers C_w1 through C_w6 surround an evolving plant, each observing and acting on it.]

  3. Decentralized control: a simple example. [Figure: controllers around a system with state x; one controller (C_1) is "weak," meaning its actions are costly, and another (C_2) is "blurry," meaning it observes the state through noise. The two must coordinate.]

  4. Coordination via explicit communication. [Figure: the same system, with an external channel connecting C_1 and C_2.] An external channel separates estimation from control, reducing the coordination problem to point-to-point communication [Shannon '48].

  5. Coordination via implicit communication? [Figure: a message passes from C_1 to C_2 through the plant itself.] In implicit communication: 1. the channel is the system itself; 2. the messages are endogenously generated.

  6. Implicit communication: an example. [Figure: the decentralized system, with the state acting as an implicit channel from C_1 to C_2.] Block diagram: x_0 ∼ N(0, σ_0²); C_1 observes x_0 and applies u_1, so x_1 = x_0 + u_1; C_2 observes y = x_1 + z with z ∼ N(0, 1) and applies u_2, so x_2 = x_1 − u_2. Objective: min k² E[u_1²] + E[x_2²] [Witsenhausen '68].

  7. A toy implicit communication problem: Witsenhausen's counterexample. The same block diagram: x_0 ∼ N(0, σ_0²), x_1 = x_0 + u_1, y = x_1 + z with z ∼ N(0, 1), x_2 = x_1 − u_2; minimize k² E[u_1²] + E[x_2²] [Witsenhausen '68]. Implicit communication interpretation: x_0 is the source, u_1 is the implicit message, and x_1 → x_1 + z is the implicit channel. The first controller acts as an encoder E with power constraint E[u_1²] ≤ P; the second acts as a decoder D whose cost is MMSE = E[(x_1 − x̂_1)²].
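To make the trade-off concrete, here is a minimal Monte Carlo sketch in Python (our illustration, not code from the talk). It compares the best linear first stage against a simple scalar quantization strategy in a small-k, large-σ_0 regime; the parameter values and the grid spacing `delta` are hand-picked assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
k2, sigma0 = 0.01, 5.0        # small k, large sigma_0: the regime where nonlinear wins
x0 = rng.normal(0.0, sigma0, n)
z = rng.normal(0.0, 1.0, n)

# Best linear strategy: u1 = (lam - 1) * x0 gives x1 = lam * x0, and the second
# stage applies the MMSE estimate of x1 from y = x1 + z; the expected cost of
# this pair has the closed form below, so we just line-search over lam.
lams = np.linspace(0.0, 1.0, 1001)
lin = k2 * (lams - 1) ** 2 * sigma0**2 \
    + lams**2 * sigma0**2 / (lams**2 * sigma0**2 + 1.0)
print("best linear cost :", lin.min())

# Quantization strategy: C1 snaps x0 onto a coarse grid, implicitly signaling
# the grid point through the state; C2 decodes the nearest grid point from
# y = x1 + z and sets u2 = x1_hat, so E[x2^2] = E[(x1 - x1_hat)^2].
delta = 8.0                   # hand-tuned spacing, wide enough to beat the noise
x1 = delta * np.round(x0 / delta)
u1 = x1 - x0
x1_hat = delta * np.round((x1 + z) / delta)
print("quantization cost:", k2 * np.mean(u1**2) + np.mean((x1 - x1_hat)**2))
```

In this regime the quantization strategy's cost lands well below the best linear cost, previewing the next slide's point that nonlinear strategies can win.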

  8. A brief history of Witsenhausen's counterexample. The problem is linear, quadratic, and Gaussian, yet nonlinear strategies can outperform linear ones [Witsenhausen '68], and can do so by an unbounded factor [Mitter, Sahai '99]. Finding the optimal strategy is NP-hard [Papadimitriou, Tsitsiklis '84]. Semi-exhaustive search techniques: [Baglietto, Parisini, Zoppoli '01], [Lee, Lau, Ho '01], [Li, Marden, Shamma '09].

  9. Understanding Witsenhausen's counterexample: a deterministic abstraction [Avestimehr, Diggavi, Tse '08], [Grover, Sahai '09]. Under the implicit communication interpretation (encoder E with E[u_1²] ≤ P, decoder D with MMSE = E[(x_1 − x̂_1)²]), the abstraction models x_0 and x_1 by their binary expansions. [Figure: bit levels b_1 through b_5 of the state; the noise z corrupts only the low-order bits that reach the decoder D.]

  10. Strategies for the deterministic abstraction. [Figure: bit levels b_1 through b_5; e.g. x_0 = 0.01011 becomes x_1 = 0.01 after the low bits are cleared.] The first-stage input u_1 forces the bits of x_0 below the noise level to zero, so the decoder can read x_1 off exactly: quantization!
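The bit-zeroing picture can be checked directly. Below is a toy Python sketch (ours, not the talk's): it prints the binary digits of a sample x_0, then shows that quantizing to the grid of spacing 2^level clears exactly the trailing digits, which are the ones the deterministic model hands over to the noise.

```python
def bits(x, lo, hi):
    """Binary digits of x >= 0, from significance 2^hi down to 2^lo."""
    return [int(x / 2**k) % 2 for k in range(hi, lo - 1, -1)]

x0 = 22.625                          # 10110.101 in binary
level = 1                            # clear every digit below significance 2^level
x1 = int(x0 / 2**level) * 2**level   # the quantized state produced by u1
print(bits(x0, -3, 4))               # [1, 0, 1, 1, 0, 1, 0, 1]
print(bits(x1, -3, 4))               # [1, 0, 1, 1, 0, 0, 0, 0]: low bits forced to 0
print("u1 =", x1 - x0)               # the (small) input that did the forcing
```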

  11. Asymptotically infinite-length extension of Witsenhausen's counterexample. Vector version: x_0^m ∼ N(0, σ_0² I) and z^m ∼ N(0, I); the objective becomes min (k²/m) E[‖u_1^m‖²] + (1/m) E[‖x_1^m − x̂_1^m‖²]. Hope: laws of large numbers can be used to simplify.

  12. A strategy: vector quantization. With x_0^m ∼ N(0, σ_0² I) and z^m ∼ N(0, I), the encoder quantizes x_0^m to the nearest point of a codebook, and the decoder maps y back to that point. [Figure: the noise sphere around x_1^m must fit inside the quantization cell for decoding to succeed.] Asymptotic upper bound: C̄ = k² P + MMSE, and as m → ∞ the decoding error vanishes, giving C̄ = k² P + 0.

  13. Ratio of upper and lower bounds. For the vector problem, the ratio of the upper bound to the lower bound is less than 4.13 uniformly over the parameters [Grover, Sahai '08]. [Plot: the ratio as a surface over log_10(σ_0) and log_10(k).]

  14. ...with a dirty-paper coding strategy, the ratio of the upper bound to the lower bound drops below 2. [Plot: the ratio surface over log_10(σ_0) and log_10(k), now peaking under 2.]

  15. Conjectured optimal strategy: dirty-paper coding [Baglietto, Parisini, Zoppoli '01], [Lee, Lau, Ho '01]. [Figure: the codebook is centered at α x_0 rather than at x_0, so u_1 moves the state to a codeword on the sphere of radius √(m (P + α² σ_0²)).]

  16. Finite vector lengths

  17. Quantization upper bounds. [Figure: quantization cells for x_0 with packing radius r_p and covering radius r_c; define ζ = r_c / r_p.]

  18. Lower bound. Using the earlier lower bound [Grover, Sahai '08]:

Theorem. C̄_min ≥ inf_{P ≥ 0} k² P + ((√κ(P) − √P)^+)², where κ(P) = σ_0² / ((σ_0 + √P)² + 1).

[Plot: the ratio of the scalar upper bound (quantization/linear) to the vector lower bound over log_10(σ_0) and log_10(k); the ratio diverges to infinity.] We need tighter lower bounds for tiny blocklengths [Shannon '59], [Shannon, Gallager, Berlekamp '67], [Pinsker '67], [Blahut '74], [Sahai '06], [Sahai, Grover '07], [Polyanskiy, Poor, Verdú '08], and we need bounds that work in a distortion setting!
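The theorem is easy to evaluate numerically. Here is a short sketch (ours; the function name, grid range, and resolution are arbitrary choices) that grid-searches the infimum over P:

```python
import numpy as np

def lower_bound(k2, sigma0, pmax=100.0, num=100_001):
    """inf over P >= 0 of k^2 P + ((sqrt(kappa(P)) - sqrt(P))^+)^2,
    with kappa(P) = sigma0^2 / ((sigma0 + sqrt(P))^2 + 1), by grid search."""
    P = np.linspace(0.0, pmax, num)
    kappa = sigma0**2 / ((sigma0 + np.sqrt(P)) ** 2 + 1.0)
    plus = np.maximum(np.sqrt(kappa) - np.sqrt(P), 0.0)  # the (.)^+ clamp
    return (k2 * P + plus**2).min()

print(lower_bound(k2=0.01, sigma0=5.0))
```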

  19. A large-deviation technique to tighten the lower bound. [Figure: lower bound vs. reality; the noise sphere around x_1^m.] Noise can behave atypically, and atypical noise behaviors are typical under a different distribution. Re-deriving the bound under noise variance σ_G² gives C̄_min ≥ inf_{P ≥ 0} k² P + ((√κ(P) − √P)^+)² with κ(P) = σ_0² σ_G² / ((σ_0 + √P)² + σ_G²).
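In code, the tightening is a one-line swap of κ (our sketch again; σ_G is then a free parameter one can also optimize over):

```python
import numpy as np

def kappa_G(P, sigma0, sigma_G):
    """Slide 19's tightened kappa; sigma_G = 1 recovers the earlier kappa."""
    return sigma0**2 * sigma_G**2 / ((sigma0 + np.sqrt(P)) ** 2 + sigma_G**2)
```

Substituting `kappa_G` for `kappa` in the `lower_bound` sketch above and minimizing over both P and σ_G ≥ 1 yields the tightened curve.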

  20. "Sphere-packing" extension of the lower bound. Scalar setting: z ∼ N(0, 1), power P = E[u_1²], distortion MMSE = E[(x_1 − x̂_1)²]. [Plot: scalar lower bounds on MMSE versus the power P, for σ_G = 1, 1.25, 2.1, 3.0, and 3.9.]

  21. Summary. Implicit communication promises substantial gains and can be understood using information theory. Deterministic abstractions yield useful insights. Large-deviation techniques are needed to obtain finite-length results.

  22. The finite-length lower bound. For σ_G² ≥ 1 and L > 0,

J̄_min(m, k², σ_0²) ≥ inf_{P ≥ 0} k² P + η(P, σ_0², σ_G², L),

where

η(P, σ_0², σ_G², L) = c_m(L) σ_G^m exp(−mL²(σ_G² − 1) / (2σ_G²)) · ((√κ_2(P, σ_0², σ_G², L) − √P)^+)²,

κ_2(P, σ_0², σ_G², L) := σ_0² σ_G² c_m(L)^(2/m) e^(1 − d_m(L)) / ((σ_0 + √P)² + d_m(L) σ_G²),

c_m(L) := 1 / Pr(‖Z^m‖² ≤ mL²) = (1 − ψ(m, L√m))^(−1),

d_m(L) := Pr(‖Z^{m+2}‖² ≤ mL²) / Pr(‖Z^m‖² ≤ mL²) = (1 − ψ(m+2, L√m)) / (1 − ψ(m, L√m)).
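The probabilities behind c_m(L) and d_m(L) are chi-square CDF values, so the whole bound can be evaluated numerically. The sketch below is our transcription of the formula as reconstructed above, so it inherits that reconstruction; the grid search over P and the sample parameters are our choices.

```python
import numpy as np
from scipy.stats import chi2

def finite_length_bound(m, k2, sigma0, sigma_G, L, pmax=100.0, num=100_001):
    """Evaluate inf over P >= 0 of k^2 P + eta(P, sigma0^2, sigma_G^2, L)."""
    # For Z ~ N(0, I_m), Pr(||Z^m||^2 <= m L^2) is a chi-square CDF value.
    p_m = chi2.cdf(m * L**2, df=m)
    p_m2 = chi2.cdf(m * L**2, df=m + 2)
    c_m = 1.0 / p_m                  # c_m(L)
    d_m = p_m2 / p_m                 # d_m(L)
    P = np.linspace(0.0, pmax, num)
    kappa2 = (sigma0**2 * sigma_G**2 * c_m ** (2.0 / m) * np.exp(1.0 - d_m)
              / ((sigma0 + np.sqrt(P)) ** 2 + d_m * sigma_G**2))
    prefac = c_m * sigma_G**m * np.exp(-m * L**2 * (sigma_G**2 - 1.0)
                                       / (2.0 * sigma_G**2))
    eta = prefac * np.maximum(np.sqrt(kappa2) - np.sqrt(P), 0.0) ** 2
    return (k2 * P + eta).min()

print(finite_length_bound(m=2, k2=0.01, sigma0=5.0, sigma_G=1.5, L=2.0))
```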
