Analyzing Large Communication Networks


  1. Analyzing Large Communication Networks. Shirin Jalali, joint work with Michelle Effros and Tracey Ho. Dec. 2015

  2. The gap. Fundamental questions: i. What is the best achievable performance? ii. How should we communicate over such networks? There is a huge gap between the networks we can analyze theoretically and practical networks. [Figure: a single transmitter-receiver link next to a visualization of the various routes through a portion of the Internet, from "The Opte Project".]

  3. This talk. Bridge the gap ⇒ develop generic network-analysis tools and techniques. Contributions:
     • Noisy wireline networks: separation of source-network coding and channel coding is optimal.
     • Wireless networks: find outer- and inner-bounding noiseless networks.
     • Noiseless wireline networks: the HNS algorithm.

  4. Noisy wired networks

  5. General wireline network. Example: the Internet. Each user:
     • sends data
     • receives data from other users
     Users observe dependent information.

  6. Wireline network. Represented by a directed graph:
     • nodes = users and relays
     • directed edges = point-to-point noisy channels, X → Y governed by p(y|x)
     Node a:
     • observes a random process U^(a); the sources are dependent
     • reconstructs a subset of the processes observed by other nodes, with lossy or lossless reconstructions
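A minimal sketch of this model in Python follows; the class, the BSC choice, and the tiny two-edge topology are our illustrative assumptions, not from the talk. Each directed edge carries an independent discrete memoryless channel.

```python
# Hypothetical sketch: a wireline network as a directed graph whose edges are
# point-to-point noisy channels p(y|x).
import numpy as np

class Channel:
    """A discrete memoryless channel, stored as a row-stochastic matrix p[x, y] = p(y|x)."""
    def __init__(self, p_y_given_x):
        self.p = np.asarray(p_y_given_x)

    def transmit(self, x, rng):
        # Draw Y from p(y | X = x).
        return int(rng.choice(self.p.shape[1], p=self.p[x]))

eps = 0.1
bsc = Channel([[1 - eps, eps], [eps, 1 - eps]])  # binary symmetric channel
edges = {("a", "b"): bsc, ("b", "c"): bsc}       # nodes a, b, c; edges a -> b -> c

rng = np.random.default_rng(0)
y_at_b = edges[("a", "b")].transmit(0, rng)      # node b's noisy view of a's input
```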

  7. Node operations: encoding. Node a observes U^(a),L. Encoding at node a: for t = 1, 2, ..., n, map U^(a),L and the signals received up to time t − 1 to the inputs of its outgoing channels:
     X_{j,t} = f_{j,t}( U^(a),L, Y_1^{t−1}, Y_2^{t−1} )
     [Figure: node a with incoming channel outputs Y_1^{t−1}, Y_2^{t−1} and outgoing channel inputs X_{1,t}, X_{2,t}, X_{3,t}.]
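As a hedged sketch of this causal rule (the function names and signatures below are ours, not the talk's), a node's encoder at time t may use only its source block and the channel outputs received strictly before t:

```python
# Sketch of X_{j,t} = f_{j,t}(U^(a),L, Y^{t-1}): the time-t channel inputs
# depend only on the source block and strictly past received signals.
from typing import Callable, List, Sequence

def run_encoder(source_block: Sequence[int],
                n: int,
                num_out: int,
                f: Callable[[int, int, Sequence[int], List[List[int]]], int],
                receive: Callable[[int], List[int]]) -> List[List[int]]:
    received: List[List[int]] = []   # Y^{t-1}: everything heard before time t
    sent: List[List[int]] = []
    for t in range(1, n + 1):
        x_t = [f(j, t, source_block, received) for j in range(num_out)]
        sent.append(x_t)             # inputs to the outgoing channels at time t
        received.append(receive(t))  # Y_t arrives only after X_t is fixed
    return sent
```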

  8. Node operations: decoding. Decoding at node a: at time t = n, node a maps U^(a),L and its received signals Y_1^n, Y_2^n to the reconstruction blocks. Û^(c→a),L denotes the reconstruction, at node a, of the data observed at node c.

  9. Performance measure.
     1. Rate (joint source-channel-network coding): κ = L / n, where L is the source blocklength and n is the channel blocklength.
     2. Reconstruction quality. U^(a),L is the block observed by node a, and Û^(a→c),L is the reconstruction, at node c, of the data observed at node a.
        i. Block-error probability (lossless reconstruction): P( U^(a),L ≠ Û^(a→c),L ) → 0
        ii. Expected average distortion (lossy reconstruction): E[ d( U^(a),L, Û^(a→c),L ) ] → D(a, c)

  10. Separation of source-network coding and channel coding. Does separation hurt the performance? A noisy channel X → Y governed by p(y|x), with capacity C = max_{p(x)} I(X; Y), is replaced by a bit-pipe of capacity C, which carries ⌊nC⌋ bits error-free over n channel uses.
      Theorem (SJ, Effros 2015). Separation of source-network coding and channel coding is optimal in a wireline network with dependent sources.
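The capacity C = max_{p(x)} I(X; Y) can be computed numerically with the standard Blahut-Arimoto algorithm; a sketch (ours, not from the talk) for a binary symmetric channel:

```python
# Blahut-Arimoto: numerically maximize I(X;Y) over the input distribution p(x).
import numpy as np

def blahut_arimoto(W, iters=200):
    """Capacity (in bits) of a DMC with transition matrix W[x, y] = p(y|x)."""
    nx = W.shape[0]
    p = np.full(nx, 1.0 / nx)                # start from the uniform input
    for _ in range(iters):
        q = p @ W                            # induced output distribution p(y)
        # d[x] = D( W(.|x) || q ), the information density of input letter x
        d = np.array([np.sum(W[x] * np.log2(W[x] / q + 1e-300)) for x in range(nx)])
        p = p * np.exp2(d)                   # exponential re-weighting step
        p /= p.sum()
    q = p @ W
    return float(np.sum(p[:, None] * W * np.log2(W / q + 1e-300)))  # I(X;Y) at the fixed point

eps = 0.1
W = np.array([[1 - eps, eps], [eps, 1 - eps]])
print(blahut_arimoto(W))                     # ≈ 1 - h(0.1) ≈ 0.531 bits per use
```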


  11. Separation: wireline networks. Single-source multicast: [Borade 2002], [Song, Yeung, Cai 2006]. Independent sources with lossless reconstructions: [Hassibi, Shadbakht 2007], [Koetter, Effros, Medard 2009].

      multi-source | demands   | dependent sources | lossless | lossy | continuous channels
      no           | multicast | no                | yes      | no    | no    [Borade 2002], [Song et al. 2006]
      yes          | arbitrary | no                | yes      | no    | yes   [Hassibi et al. 2007], [Koetter et al. 2009]

  12. Results.
      1. Separation of source-network coding and channel coding in wireline networks, with lossy and lossless reconstructions.
      2. Equivalence of zero-distortion and lossless reconstruction in general memoryless networks.

      multi-source | demands   | dependent sources | lossless | lossy | continuous channels
      no           | multicast | no                | yes      | no    | no    [Borade 2002], [Song et al. 2006]
      yes          | arbitrary | no                | yes      | no    | yes   [Koetter et al. 2009]
      yes          | arbitrary | yes               | yes      | yes   | yes   [SJ et al. 2015]

  13. Lossy reconstructions: proof idea. Challenge: the optimal rate-distortion region is not known! Approach: show that any performance achievable on the original network is achievable on the network of bit-pipes, and vice versa. Main ingredients:
      • stacked networks
      • channel simulation

  14. Stacked network. Notation:
      • rate κ = L / n (L = source blocklength, n = channel blocklength)
      • N : the original network
      Definitions:
      • D(κ, N) : the set of distortions achievable on N
      • N̄ : the m-fold stacked version of N, consisting of m copies of the original network [Koetter et al. 2009]
      [Figure: the stacked network; layer i of the stack carries the source blocks U^(a) and U^(b) with indices (i − 1)L + 1, ..., iL.]
      Theorem (SJ, Effros 2015). D(κ, N) = D(κ, N̄)

  15. D(κ, N_b) =? D(κ, N). N = the original network; N_b = the corresponding network of bit-pipes. By the stacking theorem, D(κ, N) = D(κ, N̄) and D(κ, N_b) = D(κ, N̄_b), so it is enough to show that D(κ, N̄) = D(κ, N̄_b):
      i. D(κ, N̄_b) ⊂ D(κ, N̄) : easy (channel coding across the layers)
      ii. D(κ, N̄) ⊂ D(κ, N̄_b)

  16. Proof of D(κ, N̄) ⊂ D(κ, N̄_b). Consider a noisy channel X_t → Y_t in N and its m copies X_{t,i} → Y_{t,i}, i = 1, ..., m, in N̄. For t = 1, ..., n, define the empirical joint distribution across the layers:
      p̂_[X_t^m, Y_t^m](x, y) = |{ i : (X_{t,i}, Y_{t,i}) = (x, y) }| / m
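A small numerical companion to this definition (our toy example): the empirical joint distribution of (X, Y) across m layers, here with i.i.d. uniform inputs through a BSC(0.1):

```python
# Empirical joint distribution of channel input/output pairs across m layers.
import numpy as np

def empirical_joint(x, y, nx, ny):
    """p_hat[x, y] = |{i : (X_i, Y_i) = (x, y)}| / m for alphabet sizes nx, ny."""
    m = len(x)
    p_hat = np.zeros((nx, ny))
    np.add.at(p_hat, (x, y), 1.0 / m)   # accumulate each layer's (x, y) pair
    return p_hat

rng = np.random.default_rng(1)
m, eps = 10_000, 0.1
x = rng.integers(0, 2, m)                       # i.i.d. uniform inputs across layers
y = np.where(rng.random(m) < eps, 1 - x, x)     # BSC(0.1) acting layer by layer
print(empirical_joint(x, y, 2, 2))              # ≈ [[0.45, 0.05], [0.05, 0.45]]
```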


  17. Proof of D(κ, N̄) ⊂ D(κ, N̄_b), continued. In the original network:
      E[ d(U^L, Û^L) ] = Σ_{x,y} E[ d(U^L, Û^L) | (X_t, Y_t) = (x, y) ] P( (X_t, Y_t) = (x, y) )
      Applying the same code across the layers of the m-fold stacked network:
      E[ d(U^{mL}, Û^{mL}) ] = Σ_{x,y} E[ d(U^L, Û^L) | (X_t, Y_t) = (x, y) ] E[ p̂_[X_t^m, Y_t^m](x, y) ]
      Goal: E[ p̂_[X_t^m, Y_t^m](x, y) ] ≈ p_t(x) p(y|x)
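A hedged Monte Carlo check of the first decomposition, with a toy uncoded scheme of our own choosing (Hamming distortion over a BSC(0.1)); both sides come out equal to eps:

```python
# Verify E[d(U, U_hat)] = sum_{x,y} E[d | (X,Y)=(x,y)] P((X,Y)=(x,y)) numerically.
import numpy as np

rng = np.random.default_rng(2)
m, eps = 1_000_000, 0.1
u = rng.integers(0, 2, m)                        # source bits
x = u                                            # uncoded transmission (toy choice)
y = np.where(rng.random(m) < eps, 1 - x, x)      # BSC(eps)
u_hat = y                                        # reconstruction
d = (u != u_hat).astype(float)                   # Hamming distortion

lhs = d.mean()                                   # E[d(U, U_hat)]
rhs = sum(d[(x == a) & (y == b)].mean() * ((x == a) & (y == b)).mean()
          for a in (0, 1) for b in (0, 1))       # sum of conditional terms
print(lhs, rhs)                                  # both ≈ eps = 0.1
```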


  18. Channel simulation. Given a DMC p_{Y|X}(y|x) with i.i.d. input X ∼ p_X(x), simulate the channel with an encoder and decoder connected by mR bits:
      X^m → Enc. → mR bits → Dec. → Y^m
      such that ‖ p_{X,Y} − p̂_[X^m, Y^m] ‖_TV → 0 almost surely as m → ∞.
      If R > I(X; Y), such a family of codes exists. Since R = C = max_{p(x)} I(X; Y), such a code always exists.
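The rate-limited simulation code itself is a nontrivial construction; the sketch below (ours, not from the talk) only illustrates the convergence criterion, using the true channel in place of the encoder/decoder pair:

```python
# The target of channel simulation: total-variation distance between the true
# joint p_{X,Y} and the empirical p_hat_[X^m, Y^m] should vanish as m grows.
import numpy as np

def tv_distance(p, q):
    return 0.5 * np.abs(p - q).sum()

rng = np.random.default_rng(3)
eps, p0 = 0.1, 0.5
target = np.array([[p0 * (1 - eps), p0 * eps],
                   [(1 - p0) * eps, (1 - p0) * (1 - eps)]])  # p(x) p(y|x)

for m in (100, 10_000, 1_000_000):
    x = rng.integers(0, 2, m)                    # i.i.d. input X ~ Bernoulli(1/2)
    y = np.where(rng.random(m) < eps, 1 - x, x)  # pass X^m through the true DMC
    p_hat = np.zeros((2, 2)); np.add.at(p_hat, (x, y), 1.0 / m)
    print(m, tv_distance(target, p_hat))         # shrinks toward 0 as m grows
```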



  19. Results. So far we have proved separation of lossy source-network coding and channel coding.

      multi-source | demands   | correlated sources | lossless | lossy | continuous channels
      no           | multicast | no                 | yes      | no    | no    [Borade 2002], [Song et al. 2006]
      yes          | arbitrary | no                 | yes      | no    | yes   [Koetter et al. 2009]
      yes          | arbitrary | yes                | no       | yes   | no    [SJ et al. 2010]

  20. Lossless vs. D = 0. A family of lossless codes is also zero-distortion. Lossless reconstruction means P( U^L ≠ Û^L ) → 0, so for a bounded distortion measure:
      E[ d(U^L, Û^L) ] ≤ d_max P( U^L ≠ Û^L ) → 0
      But a family of zero-distortion codes is not necessarily lossless:
      E[ d(U^L, Û^L) ] → 0 only implies |{ i : U_i ≠ Û_i }| / L → 0.
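A numerical illustration of the asymmetry (example ours): flipping exactly one symbol per block drives the per-symbol distortion to zero while the block-error probability stays at 1:

```python
# Zero-distortion without lossless reconstruction: corrupt one symbol per block.
import numpy as np

rng = np.random.default_rng(4)
for L in (10, 100, 1000):
    u = rng.integers(0, 2, L)
    u_hat = u.copy()
    u_hat[rng.integers(L)] ^= 1                       # flip a single symbol
    distortion = (u != u_hat).mean()                  # = 1/L, vanishes as L grows
    block_error = int(not np.array_equal(u, u_hat))   # always 1: the block is wrong
    print(L, distortion, block_error)
```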

  21. Lossless vs. D = 0: point-to-point network.
      [Figure: U^L → Enc. → LR bits → Dec. → Û^L]
      Lossless reconstruction: R ≥ H(U)
      Lossy reconstruction: R(D) = min_{p(û|u) : E[d(U, Û)] ≤ D} I(U; Û)
      At D = 0:
      R(0) = min_{p(û|u) : E[d(U, Û)] = 0} I(U; Û) = I(U; U) = H(U)
      ⇒ the minimum required rates for lossless reconstruction and for D = 0 coincide.
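A worked example (ours, not from the talk): for a Bernoulli(p) source under Hamming distortion, R(D) = h(p) − h(D) for 0 ≤ D ≤ min(p, 1 − p), so R(0) = h(p) = H(U), matching the coincidence above:

```python
# Rate-distortion function of a Bernoulli(p) source with Hamming distortion.
import numpy as np

def h(q):
    """Binary entropy in bits, with h(0) = h(1) = 0."""
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return float(-q * np.log2(q) - (1 - q) * np.log2(1 - q))

p = 0.3
for D in (0.0, 0.05, 0.1, 0.2):
    print(D, h(p) - h(D))   # R(D); at D = 0 this is H(U) = h(0.3) ≈ 0.881 bits
```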

  22. Lossless vs. D = 0: multi-user networks. An explicit characterization of the rate region is unknown for general multi-user networks. [Gu et al. 2010] proved the equivalence of zero-distortion and lossless reconstruction in error-free wireline networks: R(D)|_{D=0} = R_L

  23. Lossless vs. D = 0: multi-user networks, continued. Consider a general memoryless network (wired or wireless) with inputs X_1, ..., X_m and outputs Y_1, ..., Y_m governed by P( Y_1, ..., Y_m | X_1, ..., X_m ).
      Theorem (SJ, Effros 2015). If H( U_s | U_{S \ s} ) > 0 for every s ∈ S, then achievability of zero-distortion is equivalent to achievability of lossless reconstruction.
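A hedged helper (ours) for checking the theorem's hypothesis on a toy pair of dependent binary sources: H(U_1 | U_2) computed from the joint pmf.

```python
# Conditional entropy from a joint pmf table; positive H(U1|U2) means U1 has
# residual randomness given U2, i.e., the theorem's hypothesis holds for U1.
import numpy as np

def conditional_entropy(joint):
    """H(U1 | U2) in bits from a joint pmf table joint[u1, u2]."""
    p2 = joint.sum(axis=0)                 # marginal of U2
    cond = joint / p2                      # p(u1 | u2), column-wise
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(joint > 0, -joint * np.log2(cond), 0.0)
    return float(terms.sum())

# Doubly symmetric binary source with crossover 0.1: H(U1 | U2) = h(0.1) > 0.
joint = 0.5 * np.array([[0.9, 0.1], [0.1, 0.9]])
print(conditional_entropy(joint))          # ≈ 0.469 bits
```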

  24. Recap. Wireline networks: we proved that noisy point-to-point channels can be replaced by error-free bit-pipes:
      X → Y governed by p(y|x) ≡ bit-pipe of capacity C = max_{p(x)} I(X; Y)
      What about wireless networks?
