Towards an Algebraic Network Information Theory


Bobak Nazer, Boston University
Charles River Information Theory Day, April 28, 2014

Network Information Theory. Goal: Roughly speaking, for a given network, determine necessary and sufficient conditions on the rates for reliable communication.



Random Binning
• Codebook 1: Independently and uniformly assign each source sequence $\mathbf{s}_1$ a label in $\{1, 2, \ldots, 2^{nR_1}\}$.
• Codebook 2: Independently and uniformly assign each source sequence $\mathbf{s}_2$ a label in $\{1, 2, \ldots, 2^{nR_2}\}$.
• Decoder: Look for a jointly typical pair $(\hat{\mathbf{s}}_1, \hat{\mathbf{s}}_2)$ within the received bin.
• Union bound:
$$\mathbb{P}\bigl\{\exists\ \text{jointly typical } (\hat{\mathbf{s}}_1, \hat{\mathbf{s}}_2) \neq (\mathbf{s}_1, \mathbf{s}_2) \text{ in bin } (\ell_1, \ell_2)\bigr\} \;\le\; \sum_{\text{jointly typical } (\tilde{\mathbf{s}}_1, \tilde{\mathbf{s}}_2)} 2^{-n(R_1 + R_2)} \;\le\; 2^{n(H(S_1, S_2) + \epsilon)}\, 2^{-n(R_1 + R_2)}$$
• Need $R_1 + R_2 > H(S_1, S_2)$.
• Similarly, $R_1 > H(S_1 \mid S_2)$ and $R_2 > H(S_2 \mid S_1)$.

Slepian-Wolf Problem: Binning Illustration
[Figure: the grid of bin pairs, $1, 2, \ldots, 2^{nR_1}$ for source 1 versus $1, 2, \ldots, 2^{nR_2}$ for source 2.]




Random Linear Binning
• Assume we have chosen an injective mapping from the source alphabets to $\mathbb{F}_p$.
• Codebook 1: Generate a matrix $G_1$ with i.i.d. uniform entries drawn from $\mathbb{F}_p$. Each sequence $\mathbf{s}_1$ is binned via matrix multiplication, $\mathbf{w}_1 = G_1 \mathbf{s}_1$.
• Codebook 2: Generate a matrix $G_2$ with i.i.d. uniform entries drawn from $\mathbb{F}_p$. Each sequence $\mathbf{s}_2$ is binned via matrix multiplication, $\mathbf{w}_2 = G_2 \mathbf{s}_2$.
• Bin assignments are uniform and pairwise independent (except for $\mathbf{s}_\ell = \mathbf{0}$).
• Can apply the same union-bound analysis as random binning (a small sketch in code follows below).
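As a concrete illustration of the binning step, here is a minimal Python sketch; the field size, dimensions, and matrices are arbitrary toy choices, not parameters from the talk.

```python
import numpy as np

p = 5        # field size (prime), chosen only for illustration
n = 8        # source blocklength
m = 3        # number of bin symbols, so the binning rate is (m/n) * log2(p)

rng = np.random.default_rng(0)

# Random linear binning matrix with i.i.d. uniform entries from F_p
G = rng.integers(0, p, size=(m, n))

def bin_index(s):
    """Bin a length-n source sequence s (entries in F_p) via w = G s over F_p."""
    return (G @ s) % p

s1 = rng.integers(0, p, size=n)
s2 = rng.integers(0, p, size=n)
print("bin of s1:", bin_index(s1))
print("bin of s2:", bin_index(s2))

# Linearity: the bin of (s1 + s2) is the componentwise mod-p sum of the bins.
assert np.array_equal(bin_index((s1 + s2) % p), (bin_index(s1) + bin_index(s2)) % p)
```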


Slepian-Wolf Rate Region
Slepian-Wolf Theorem: Reliable compression is possible if and only if
$$R_1 \ge H(S_1 \mid S_2), \qquad R_2 \ge H(S_2 \mid S_1), \qquad R_1 + R_2 \ge H(S_1, S_2).$$
Random linear binning is as good as random i.i.d. binning.
Example (doubly symmetric binary source): $S_1 \sim \mathrm{Bern}(1/2)$, $U \sim \mathrm{Bern}(\theta)$, $S_2 = S_1 \oplus U$, so that $H(S_1 \mid S_2) = H(S_2 \mid S_1) = h_B(\theta)$ and $H(S_1, S_2) = 1 + h_B(\theta)$.
[Figure: the S-W rate region in the $(R_1, R_2)$ plane, with corner points at $h_B(\theta)$ on each axis and sum-rate boundary $R_1 + R_2 = 1 + h_B(\theta)$.]
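To make the example concrete, a minimal Python sketch that evaluates the binary entropy $h_B(\theta)$ and the resulting Slepian-Wolf bounds, assuming an illustrative crossover probability $\theta = 0.1$:

```python
import numpy as np

def h_binary(theta):
    """Binary entropy h_B(theta) in bits."""
    if theta in (0.0, 1.0):
        return 0.0
    return -theta * np.log2(theta) - (1 - theta) * np.log2(1 - theta)

theta = 0.1  # assumed crossover probability for the doubly symmetric binary source
hb = h_binary(theta)

# Slepian-Wolf constraints for S1 ~ Bern(1/2), S2 = S1 xor Bern(theta)
print(f"R1 >= H(S1|S2)      = {hb:.3f} bits")
print(f"R2 >= H(S2|S1)      = {hb:.3f} bits")
print(f"R1 + R2 >= H(S1,S2) = {1 + hb:.3f} bits")
```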

Körner-Marton Problem
• Binary sources: $\mathbf{s}_1$ is i.i.d. Bernoulli($1/2$); $\mathbf{s}_2$ is $\mathbf{s}_1$ corrupted by Bernoulli($\theta$) noise.
• Encoder 1 operates at rate $R_1$, encoder 2 at rate $R_2$, and the decoder wants the modulo-2 sum $\mathbf{u} = \mathbf{s}_1 \oplus \mathbf{s}_2$.
• Rate region: the set of rates $(R_1, R_2)$ such that there exist encoders and decoders with vanishing probability of error, $\mathbb{P}\{\hat{\mathbf{u}} \neq \mathbf{u}\} \to 0$ as $n \to \infty$.
• Are any rate savings possible over sending $\mathbf{s}_1$ and $\mathbf{s}_2$ in their entirety?

Random Binning
• Sending $\mathbf{s}_1$ and $\mathbf{s}_2$ with random binning requires $R_1 + R_2 > 1 + h_B(\theta)$.
• What happens if we use rates such that $R_1 + R_2 < 1 + h_B(\theta)$?
• There will be exponentially many pairs $(\mathbf{s}_1, \mathbf{s}_2)$ in each bin!
• This would be fine if all pairs in a bin had the same sum $\mathbf{s}_1 \oplus \mathbf{s}_2$, but the probability of that event goes to zero exponentially fast.

Körner-Marton Problem: Random Binning Illustration
[Figure: the grid of bin pairs, $1, 2, \ldots, 2^{nR_1}$ for source 1 versus $1, 2, \ldots, 2^{nR_2}$ for source 2.]


Linear Binning
• Use the same random matrix $G$ for linear binning at each encoder: $\mathbf{w}_1 = G\mathbf{s}_1$, $\mathbf{w}_2 = G\mathbf{s}_2$.
• Idea from Körner-Marton '79: the decoder adds up the bins,
$$\mathbf{w}_1 \oplus \mathbf{w}_2 = G\mathbf{s}_1 \oplus G\mathbf{s}_2 = G(\mathbf{s}_1 \oplus \mathbf{s}_2) = G\mathbf{u}.$$
• $G$ is good for compressing $\mathbf{u}$ if $R > H(U) = h_B(\theta)$.
Körner-Marton Theorem: Reliable compression of the sum is possible if and only if $R_1 \ge h_B(\theta)$ and $R_2 \ge h_B(\theta)$.
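A minimal sketch of the same-matrix trick, assuming toy blocklengths and $\theta = 0.1$: both encoders bin with the same matrix over $\mathbb{F}_2$, and the XOR of the two bins equals the bin of the modulo-2 sum.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 16, 8          # source blocklength and bin length (toy sizes)
theta = 0.1           # crossover probability

# The SAME random binary matrix G is used by both encoders
G = rng.integers(0, 2, size=(m, n))

s1 = rng.integers(0, 2, size=n)             # s1 ~ Bern(1/2)
u = (rng.random(n) < theta).astype(int)     # noise ~ Bern(theta)
s2 = (s1 + u) % 2                           # s2 = s1 xor u

w1 = (G @ s1) % 2                           # bin of s1
w2 = (G @ s2) % 2                           # bin of s2

# The decoder only needs the XOR of the bins: it equals G applied to the sum u.
assert np.array_equal((w1 + w2) % 2, (G @ u) % 2)
print("w1 xor w2 equals G(s1 xor s2)")
```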

Körner-Marton Problem: Linear Binning Illustration
[Figure: the same grid of bin pairs, now with both encoders using the same linear binning matrix.]


Körner-Marton Rate Region
[Figure: the Körner-Marton region $\{R_1 \ge h_B(\theta),\ R_2 \ge h_B(\theta)\}$ compared with the Slepian-Wolf region in the $(R_1, R_2)$ plane.]
Linear codes can improve performance! (for distributed computation of dependent sources)


(Algebraic) Network Source Coding
• Krithivasan-Pradhan '09: Nested lattice coding framework for distributed Gaussian source coding.
• Krithivasan-Pradhan '11: Nested group coding framework for distributed source coding of discrete memoryless sources.
• These rate regions can be shown to sometimes outperform the Berger-Tung region (the best known performance via i.i.d. ensembles).
• Now let's take a look at an algebraic framework for network channel coding.

Compute-and-Forward
Goal: Convert noisy Gaussian networks into noiseless finite field ones.


[Figure: $K$ transmitters with messages $\mathbf{w}_\ell \in \mathbb{F}_p^k$ encode to channel inputs $\mathbf{x}_\ell \in \mathbb{R}^n$; the receiver sees $\mathbf{y}$ through channel gains $h_\ell$ plus noise $\mathbf{z}$, decodes linear combinations $\hat{\mathbf{u}}_\ell$ over $\mathbb{F}_p^k$, and can then solve for the messages $\hat{\mathbf{w}}_\ell$.]
• Which linear combinations can be sent over a given channel?
• Where can this help us?


Compute-and-Forward: Problem Statement
• Messages are finite field vectors, $\mathbf{w}_\ell \in \mathbb{F}_p^k$.
• Real-valued inputs and outputs, $\mathbf{x}_\ell, \mathbf{y} \in \mathbb{R}^n$.
• Power constraint: $\frac{1}{n}\,\mathbb{E}\|\mathbf{x}_\ell\|^2 \le P$.
• Gaussian noise, $\mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$.
• Desired linear combinations: $\mathbf{u}_m = \sum_{\ell=1}^{L} q_{m\ell}\, \mathbf{w}_\ell$.
• Equal rates: $R = \frac{k}{n} \log_2 p$.
• The decoder wants $M$ linear combinations of the messages with vanishing probability of error, $\lim_{n \to \infty} \mathbb{P}\bigl\{\bigcup_m \{\hat{\mathbf{u}}_m \neq \mathbf{u}_m\}\bigr\} = 0$.
• The receiver can use its channel state information (CSI) to match the linear combination coefficients $q_{m\ell} \in \mathbb{F}_p$ to the channel coefficients $h_\ell \in \mathbb{R}$. Transmitters do not require CSI.
• What rates are achievable as a function of $h_\ell$ and $q_{m\ell}$?


Computation Rate
• Want to characterize achievable rates as a function of $h_\ell$ and $q_{m\ell}$.
• Easier to think about integer rather than finite field coefficients.
• The linear combination with integer coefficient vector $\mathbf{a}_m = [a_{m1}\ a_{m2}\ \cdots\ a_{mL}]^T \in \mathbb{Z}^L$ corresponds to $\mathbf{u}_m = \sum_{\ell=1}^{L} q_{m\ell}\, \mathbf{w}_\ell$ where $q_{m\ell} = [a_{m\ell}] \bmod p$ (where we assume an implicit mapping between $\mathbb{F}_p$ and $\mathbb{Z}_p$).
• Key Definition: The computation rate region described by $R_{\mathrm{comp}}(\mathbf{h}, \mathbf{a})$ is achievable if, for any $\epsilon > 0$ and $n, p$ large enough, a receiver can decode any linear combinations with integer coefficient vectors $\mathbf{a}_1, \ldots, \mathbf{a}_M \in \mathbb{Z}^L$ for which the message rate $R$ satisfies $R < \min_m R_{\mathrm{comp}}(\mathbf{h}, \mathbf{a}_m)$.

Compute-and-Forward: Achievable Rates
Theorem (Nazer-Gastpar '11). The computation rate region described by
$$R_{\mathrm{comp}}(\mathbf{h}, \mathbf{a}) = \max_{\alpha \in \mathbb{R}} \frac{1}{2} \log^{+}\!\left(\frac{P}{\alpha^2 + P\,\|\alpha \mathbf{h} - \mathbf{a}\|^2}\right)$$
is achievable.

Equivalently, optimizing over $\alpha$ gives the closed form
$$R_{\mathrm{comp}}(\mathbf{h}, \mathbf{a}) = \frac{1}{2} \log^{+}\!\left(\frac{P}{\mathbf{a}^T \bigl(P^{-1}\mathbf{I} + \mathbf{h}\mathbf{h}^T\bigr)^{-1} \mathbf{a}}\right).$$
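The two expressions describe the same rate. A quick numerical check in Python, with the power level, channel gains, and coefficient vector chosen arbitrarily for illustration:

```python
import numpy as np

def r_comp_alpha(h, a, P, alphas=np.linspace(-5, 5, 200001)):
    """Computation rate via a brute-force search over the scaling alpha."""
    denom = alphas**2 + P * np.sum((np.outer(alphas, h) - a)**2, axis=1)
    return max(0.0, 0.5 * np.log2(P / denom.min()))

def r_comp_closed(h, a, P):
    """Closed form: R = 1/2 log+( P / a^T (P^{-1} I + h h^T)^{-1} a )."""
    M = np.linalg.inv(np.eye(len(h)) / P + np.outer(h, h))
    return max(0.0, 0.5 * np.log2(P / (a @ M @ a)))

P = 10.0
h = np.array([1.4, 2.1])   # example channel gains
a = np.array([2.0, 3.0])   # example integer coefficient vector

print(r_comp_alpha(h, a, P))   # the grid search and the closed form
print(r_comp_closed(h, a, P))  # agree up to the grid resolution
```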

[Figure: the compute-and-forward block diagram, mapping messages in $\mathbb{F}_p^k$ through the Gaussian channel in $\mathbb{R}^n$ back to decoded linear combinations in $\mathbb{F}_p^k$.]
The desired combinations $\mathbf{q}_1, \ldots, \mathbf{q}_M$ can be decoded if $R < \min_m R_{\mathrm{comp}}(\mathbf{h}, \mathbf{a}_m)$ for some $\mathbf{a}_m \in \mathbb{Z}^L$ satisfying $[\mathbf{a}_m] \bmod p = \mathbf{q}_m$.


Special Cases:
• Perfect match: $R_{\mathrm{comp}}(\mathbf{a}, \mathbf{a}) = \frac{1}{2}\log^{+}\!\left(\frac{1}{\|\mathbf{a}\|^2} + P\right)$
• Decode a message: $R_{\mathrm{comp}}\bigl(\mathbf{h}, [\,\underbrace{0 \cdots 0}_{m-1\ \text{zeros}}\ 1\ 0 \cdots 0\,]^T\bigr) = \frac{1}{2}\log\!\left(1 + \frac{h_m^2 P}{1 + P \sum_{\ell \neq m} h_\ell^2}\right)$
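A brief numerical sanity check of both special cases against the closed form; the helper is restated so the snippet is self-contained, and the example values are arbitrary:

```python
import numpy as np

def r_comp_closed(h, a, P):
    """Closed-form computation rate from the theorem above."""
    M = np.linalg.inv(np.eye(len(h)) / P + np.outer(h, h))
    return max(0.0, 0.5 * np.log2(P / (a @ M @ a)))

P = 10.0

# Perfect match: h = a reduces to 1/2 log(1/||a||^2 + P)
a = np.array([2.0, 3.0])
assert np.isclose(r_comp_closed(a, a, P), 0.5 * np.log2(1 / (a @ a) + P))

# Decode a message: a = e_m reduces to the treat-interference-as-noise rate
h = np.array([1.4, 2.1, -0.7])
m = 1                                    # decode the second message (0-indexed)
e = np.zeros(3); e[m] = 1.0
sinr = h[m]**2 * P / (1 + P * (h @ h - h[m]**2))
assert np.isclose(r_comp_closed(h, e, P), 0.5 * np.log2(1 + sinr))
print("special cases match the closed form")
```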


Compute-and-Forward: Effective Noise
$$\mathbf{y} = \sum_{\ell=1}^{L} h_\ell \mathbf{x}_\ell + \mathbf{z} = \underbrace{\sum_{\ell=1}^{L} a_\ell \mathbf{x}_\ell}_{\text{decode to } \sum_{\ell} q_\ell \mathbf{w}_\ell} + \underbrace{\sum_{\ell=1}^{L} (h_\ell - a_\ell)\, \mathbf{x}_\ell + \mathbf{z}}_{\text{effective noise}}$$
Desired Codebook:
• Closed under integer linear combinations $\Longrightarrow$ lattice codebook.
• Independent effective noise $\Longrightarrow$ dithering.
• Isomorphic to $\mathbb{F}_p^k$ $\Longrightarrow$ nested lattice codebook.
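This decomposition suggests scaling the output before decoding. A short derivation, following the standard MMSE argument, of how the optimal scale factor connects to the rate formula above:
$$\alpha \mathbf{y} = \sum_{\ell} a_\ell \mathbf{x}_\ell + \underbrace{\sum_{\ell} (\alpha h_\ell - a_\ell)\, \mathbf{x}_\ell + \alpha \mathbf{z}}_{\text{effective noise}}$$
With per-symbol power $P$ and unit-variance noise, the effective noise variance is
$$N_{\mathrm{eff}}(\alpha) = \alpha^2 + P\,\|\alpha \mathbf{h} - \mathbf{a}\|^2 = \alpha^2 \bigl(1 + P\|\mathbf{h}\|^2\bigr) - 2\alpha P\, \mathbf{h}^T \mathbf{a} + P\|\mathbf{a}\|^2,$$
which is minimized by
$$\alpha^\star = \frac{P\, \mathbf{h}^T \mathbf{a}}{1 + P\|\mathbf{h}\|^2}, \qquad N_{\mathrm{eff}}(\alpha^\star) = P\|\mathbf{a}\|^2 - \frac{P^2 (\mathbf{h}^T \mathbf{a})^2}{1 + P\|\mathbf{h}\|^2} = \mathbf{a}^T \bigl(P^{-1}\mathbf{I} + \mathbf{h}\mathbf{h}^T\bigr)^{-1} \mathbf{a},$$
recovering $R_{\mathrm{comp}}(\mathbf{h}, \mathbf{a}) = \frac{1}{2}\log^{+}\bigl(P / N_{\mathrm{eff}}(\alpha^\star)\bigr)$.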


Nested Lattices
• A lattice is a discrete subgroup of $\mathbb{R}^n$.
• Nearest-neighbor quantizer: $Q_\Lambda(\mathbf{x}) = \arg\min_{\boldsymbol{\lambda} \in \Lambda} \|\mathbf{x} - \boldsymbol{\lambda}\|^2$.
• Two lattices $\Lambda$ and $\Lambda_{\mathrm{FINE}}$ are nested if $\Lambda \subset \Lambda_{\mathrm{FINE}}$.
• The quantization error serves as a modulo operation: $[\mathbf{x}] \bmod \Lambda = \mathbf{x} - Q_\Lambda(\mathbf{x})$.
• Distributive law: $\bigl[\mathbf{x}_1 + a\,[\mathbf{x}_2] \bmod \Lambda\bigr] \bmod \Lambda = [\mathbf{x}_1 + a\, \mathbf{x}_2] \bmod \Lambda$ for all $a \in \mathbb{Z}$.
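A one-dimensional sketch of the quantizer, the modulo operation, and the distributive law, assuming the scalar coarse lattice $\Lambda = 4\mathbb{Z}$ (any spacing would do):

```python
import numpy as np

def quantize(x, spacing):
    """Nearest-neighbor quantizer onto the scalar lattice spacing * Z."""
    return spacing * np.round(x / spacing)

def mod_lattice(x, spacing):
    """[x] mod Lambda = x - Q_Lambda(x), i.e., the quantization error."""
    return x - quantize(x, spacing)

coarse = 4.0   # coarse lattice 4Z; the fine lattice Z contains it, so they are nested

rng = np.random.default_rng(2)
x1, x2 = rng.uniform(-20, 20, size=2)
a = 3          # any integer coefficient

# Distributive law: reducing x2 mod the coarse lattice first changes nothing,
# because a * Q_Lambda(x2) is itself a coarse lattice point.
lhs = mod_lattice(x1 + a * mod_lattice(x2, coarse), coarse)
rhs = mod_lattice(x1 + a * x2, coarse)
assert np.isclose(lhs, rhs)
print("distributive law holds:", lhs, "==", rhs)
```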


Nested Lattice Codes
• Nested lattice code: formed by taking all elements of $\Lambda_{\mathrm{FINE}}$ that lie in the fundamental Voronoi region of $\Lambda$.
• The fine lattice $\Lambda_{\mathrm{FINE}}$ protects against noise.
• The coarse lattice $\Lambda$ enforces the power constraint (its Voronoi region plays the role of the ball $\mathcal{B}(\mathbf{0}, \sqrt{nP})$).
• Existence of good nested lattice codes: Loeliger '97, Forney-Trott-Chung '00, Erez-Litsyn-Zamir '05, Ordentlich-Erez '12.
• Erez-Zamir '04: Nested lattice codes can achieve the point-to-point Gaussian capacity.
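To make the isomorphism to $\mathbb{F}_p^k$ concrete, a small Construction-A style sketch, assuming toy parameters and ignoring dithering and shaping: the fine lattice is a linear code over $\mathbb{F}_p$ plus $p\mathbb{Z}^n$, the coarse lattice is $p\mathbb{Z}^n$, and integer combinations of codewords reduce, mod the coarse lattice, to the codeword of the corresponding $\mathbb{F}_p$ combination of the messages.

```python
import numpy as np

p, n, k = 5, 4, 2
rng = np.random.default_rng(3)
G = rng.integers(0, p, size=(n, k))     # generator matrix of a linear code over F_p

def encode(w):
    """Map a message w in F_p^k to a fine-lattice point inside [0, p)^n."""
    return (G @ w) % p                  # coset representative of the Construction-A lattice

w1 = rng.integers(0, p, size=k)
w2 = rng.integers(0, p, size=k)
t1, t2 = encode(w1), encode(w2)

a1, a2 = 2, 3                           # integer combining coefficients
# Integer combination of lattice codewords, reduced mod the coarse lattice pZ^n ...
combo = (a1 * t1 + a2 * t2) % p
# ... equals the codeword of the F_p linear combination of the messages.
assert np.array_equal(combo, encode((a1 * w1 + a2 * w2) % p))
print("integer combinations of codewords map to F_p combinations of messages")
```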

Compute-and-Forward: Illustration
All users employ the same nested lattice code. For a two-user example with channel gains $\mathbf{h} = [1.4\ \ 2.1]$ and desired integer coefficients $\mathbf{a}_m = [2\ \ 3]$:
• Choose message vectors over the finite field, $\mathbf{w}_\ell \in \mathbb{F}_p^k$.
• Map each $\mathbf{w}_\ell$ to a lattice point $\mathbf{t}_\ell = \phi(\mathbf{w}_\ell)$.
• Transmit the lattice points over the channel.
• The lattice codewords are scaled by the channel coefficients and added together plus noise.
• There is an extra noise penalty for non-integer channel coefficients: effective noise $1 + P\|\mathbf{h} - \mathbf{a}_m\|^2$.
• Scale the output by $\alpha$ to reduce the non-integer noise penalty: effective noise $\alpha^2 + P\|\alpha \mathbf{h} - \mathbf{a}_m\|^2$.
• Decode to the closest lattice point.
• Recover the integer linear combination mod $\Lambda$.
• Map back to a linear combination of the messages, $\sum_{\ell=1}^{L} q_{m\ell}\, \mathbf{w}_\ell$.
(A toy end-to-end version of these steps follows below.)
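A minimal one-dimensional walk-through of these steps, assuming no dithering or shaping, scalar "lattices" $\mathbb{Z} \supset p\mathbb{Z}$, and a channel that happens to be proportional to the integer coefficients; a real implementation would use high-dimensional nested lattice codes and the MMSE scaling.

```python
import numpy as np

p = 11                      # field size; messages live in {0, ..., p-1}
h = np.array([1.4, 2.1])    # channel gains (as in the illustration)
a = np.array([2.0, 3.0])    # desired integer coefficients
noise_std = 0.05            # small noise so the toy decoder succeeds

rng = np.random.default_rng(4)
w = rng.integers(0, p, size=2)      # the two users' messages
x = w.astype(float)                 # toy encoding: fine lattice Z, coarse lattice pZ

y = h @ x + noise_std * rng.standard_normal()   # Gaussian MAC output

# Here h is proportional to a, so alpha = 10/7 removes the mismatch exactly;
# in general one would use the MMSE alpha = P h.a / (1 + P ||h||^2).
alpha = 10 / 7
s = alpha * y                        # approximately 2*x1 + 3*x2 plus small noise

u_hat = int(np.round(s)) % p         # decode to the nearest fine lattice point, reduce mod coarse
u = int(a @ w) % p                   # the true linear combination over Z_p

print("decoded:", u_hat, " true:", u, " success:", u_hat == u)
```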

Random i.i.d. codes are not good for computation
Each user's codebook has $2^{nR}$ codewords, but there are up to $2^{2nR}$ possible sums of codewords, so the set of sums has no low-rate structure for the receiver to exploit.
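A quick empirical contrast, with toy parameters chosen for illustration: for a random i.i.d. binary codebook, the XOR of two codewords is almost never another codeword, while a linear codebook is closed under XOR by construction.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 20, 8                       # blocklength and message length (toy sizes)

# Random i.i.d. codebook: 2^k codewords drawn uniformly from {0,1}^n
iid_code = rng.integers(0, 2, size=(2**k, n))
iid_set = {tuple(c) for c in iid_code}

# Linear codebook: all F_2 combinations of the rows of a random generator matrix
G = rng.integers(0, 2, size=(k, n))
msgs = (np.arange(2**k)[:, None] >> np.arange(k)) & 1     # all k-bit messages
lin_code = (msgs @ G) % 2
lin_set = {tuple(c) for c in lin_code}

def frac_sums_in_code(code, code_set, trials=2000):
    """Fraction of random codeword pairs whose XOR is again a codeword."""
    idx = rng.integers(0, len(code), size=(trials, 2))
    sums = (code[idx[:, 0]] + code[idx[:, 1]]) % 2
    return np.mean([tuple(s) in code_set for s in sums])

print("i.i.d. codebook:", frac_sums_in_code(iid_code, iid_set))   # close to 0
print("linear codebook:", frac_sums_in_code(lin_code, lin_set))   # exactly 1.0
```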



(Algebraic) Network Channel Coding
• Compute-and-forward is a useful setting for developing algebraic multi-user coding techniques.
• Ordentlich-Erez-Nazer '13: In a $K$-user Gaussian multiple-access channel, the sum of the $K$ best computation rates is exactly equal to the multiple-access sum capacity. Algebraic successive cancellation gives this an operational meaning.
• Upcoming work on a compute-and-forward framework for discrete memoryless networks.
• Let's take a look at an application of compute-and-forward to interference alignment.

Interference-Free Capacity
[Figure: a single transmitter-receiver pair.]

Time Division
[Figure: transmitter-receiver pairs $1$ through $K$ taking turns using the channel.]
