
Integer-Forcing for Channels, Sources and ADCs
Or Ordentlich, Tel Aviv University

Motivating Example: $y = x_1 + 2 x_2 + z$


  1. Some Comments: For uncoded PAM transmission, IF is (almost) the same as lattice-reduction-aided MIMO equalization (Yao & Wornell '02).

  2. Some Comments: For uncoded PAM transmission, IF is (almost) the same as lattice-reduction-aided MIMO equalization (Yao & Wornell '02). Linear/lattice codes are closed under integer-valued linear combinations → can use them instead of PAM. Theorem (Nazer-Gastpar '11 IT): If all users transmit from the same capacity-achieving nested-lattice code with rate $R$, then $v = a_1 x_1 + a_2 x_2$ can be decoded with a vanishing error probability if $R < R_{\mathrm{comp}}(\mathbf{a}) \triangleq \frac{1}{2}\log\big(\mathrm{SNR}_{\mathrm{eff}}(\mathbf{a})\big)$.
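To make the computation rate concrete, here is a minimal numerical sketch in Python/NumPy. It assumes the standard compute-and-forward form of the effective SNR, $\mathrm{SNR}_{\mathrm{eff}}(\mathbf{a}) = 1/\big(\mathbf{a}^T(I + \mathrm{SNR}\,\mathbf{h}\mathbf{h}^T)^{-1}\mathbf{a}\big)$, which the slide does not spell out; the SNR value and function names are illustrative.

```python
import numpy as np

def snr_eff(h, a, snr):
    """Effective SNR of the integer combination a^T x over y = h^T x + z.
    Assumes the standard compute-and-forward expression
    SNR_eff(a) = 1 / (a^T (I + SNR h h^T)^{-1} a)."""
    h, a = np.asarray(h, float), np.asarray(a, float)
    M = np.eye(len(h)) + snr * np.outer(h, h)
    return 1.0 / float(a @ np.linalg.solve(M, a))

def r_comp(h, a, snr):
    """Computation rate R_comp(a) = 1/2 log2+(SNR_eff(a)) in bits per channel use."""
    return max(0.0, 0.5 * np.log2(snr_eff(h, a, snr)))

# Example: y = x1 + 2 x2 + z at an (assumed) SNR of 20 dB
snr = 10 ** (20 / 10)
print(r_comp([1, 2], [1, 2], snr))  # combination matched to the channel: ~3.3 bits
print(r_comp([1, 2], [1, 1], snr))  # mismatched combination: much lower rate
```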

  3. Some Comments: If the receiver can decode a full-rank set of integer-linear combinations, it can decode all codewords. Theorem (Zhan et al., ISIT 2010): For two linearly independent integer vectors $\mathbf{a}_1$ and $\mathbf{a}_2$, the IF receiver can decode both messages if both users transmitted from the same lattice code with rate $R < \min\left\{\frac{1}{2}\log\big(\mathrm{SNR}_{\mathrm{eff}}(\mathbf{a}_1)\big),\ \frac{1}{2}\log\big(\mathrm{SNR}_{\mathrm{eff}}(\mathbf{a}_2)\big)\right\}$.
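Building on the sketch above, the IF rate of this theorem can be found by brute force over small integer vectors; the SNR and helper names below are assumptions, not taken from the slides.

```python
import itertools
import numpy as np

def r_comp(h, a, snr):
    """Computation rate for integer vector a (same convention as the sketch above)."""
    M = np.eye(len(h)) + snr * np.outer(h, h)
    return max(0.0, 0.5 * np.log2(1.0 / float(a @ np.linalg.solve(M, a))))

def if_symmetric_rate(h, snr, amax):
    """Best pair (a1, a2) of linearly independent integer vectors and the resulting
    IF rate min(R_comp(a1), R_comp(a2)).  Any a with ||a||^2 >= 1 + SNR*||h||^2 has
    zero rate, so amax >= sqrt(1 + SNR*||h||^2) makes the brute force exhaustive."""
    cands = [np.array(a, float) for a in
             itertools.product(range(-amax, amax + 1), repeat=2) if any(a)]
    cands.sort(key=lambda a: r_comp(h, a, snr), reverse=True)
    a1 = cands[0]
    a2 = next(a for a in cands if abs(np.linalg.det(np.stack([a1, a]))) > 1e-9)
    return min(r_comp(h, a1, snr), r_comp(h, a2, snr)), a1, a2

snr = 10 ** (20 / 10)                       # assumed SNR of 20 dB
h = np.array([1.0, np.sqrt(2.0)])
print(if_symmetric_rate(h, snr, amax=20))   # 20 > sqrt(1 + 100*3) ~ 17.4
```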

  4. Applications for Channel Coding Problems. Joint work with Uri Erez (TAU) and Bobak Nazer (BU).

  5. Gaussian MAC with Nested Linear Codes: Consider the MAC channel $y = h_1 x_1 + h_2 x_2 + z$, where $\|x_i\|^2 \le n\,\mathrm{SNR}$ and $z$ is AWGN with zero mean and unit variance.

  6. Gaussian MAC with Nested Linear Codes: Consider the MAC channel $y = h_1 x_1 + h_2 x_2 + z$, where $\|x_i\|^2 \le n\,\mathrm{SNR}$ and $z$ is AWGN with zero mean and unit variance. The capacity region is achieved using i.i.d. Gaussian codebooks.

  7. Gaussian MAC with Nested Linear Codes: Consider the MAC channel $y = h_1 x_1 + h_2 x_2 + z$, where $\|x_i\|^2 \le n\,\mathrm{SNR}$ and $z$ is AWGN with zero mean and unit variance. The capacity region is achieved using i.i.d. Gaussian codebooks. Strange question: What is the maximal achievable symmetric rate $R$ if both users transmit from the same linear/lattice code?

  8. Gaussian MAC with Nested Linear Codes: Consider the MAC channel $y = h_1 x_1 + h_2 x_2 + z$, where $\|x_i\|^2 \le n\,\mathrm{SNR}$ and $z$ is AWGN with zero mean and unit variance. The capacity region is achieved using i.i.d. Gaussian codebooks. Strange question: What is the maximal achievable symmetric rate $R$ if both users transmit from the same linear/lattice code? Strange question, strange answer: $R$ depends on the “rationality” of the channel gains.

  9. Gaussian MAC with Nested Linear Codes. [Figure: example with channel gains 1 and 2.]

  10. Gaussian MAC with Nested Linear Codes. [Figure: example with channel gains 1 and $\sqrt{2}$.]

  11. Gaussian MAC with Nested Linear Codes - IF Lower Bound: Analyzing a random ensemble of linear codes with ML decoding is very complicated, but... IF is applicable when all users transmit from the same linear code → the IF rate is a lower bound on $R$.

  12. Gaussian MAC with Nested Linear Codes - IF Lower Bound: Analyzing a random ensemble of linear codes with ML decoding is very complicated, but... IF is applicable when all users transmit from the same linear code → the IF rate is a lower bound on $R$. But how does the IF rate behave as a function of the channel gains?

  13. Gaussian MAC with Nested Linear Codes - IF Lower Bound: Consider the channel $y = 1\cdot x_1 + g\, x_2 + z$ at SNR = 40 dB.

  14. Gaussian MAC with Nested Linear Codes - IF Lower Bound: Consider the channel $y = 1\cdot x_1 + g\, x_2 + z$ at SNR = 40 dB. The IF receiver has to decode two linearly independent equations with integer coefficients.

  15. Gaussian MAC with Nested Linear Codes - IF Lower Bound: Consider the channel $y = 1\cdot x_1 + g\, x_2 + z$ at SNR = 40 dB. [Plot: normalized computation rate of the best coefficient vector $\mathbf{a}_1$ as a function of $g \in [1, 2]$.]

  16. Gaussian MAC with Nested Linear Codes - IF Lower Bound: Consider the channel $y = 1\cdot x_1 + g\, x_2 + z$ at SNR = 40 dB. [Plot: normalized computation rates of the best and second-best coefficient vectors $\mathbf{a}_1$ and $\mathbf{a}_2$ as a function of $g \in [1, 2]$.] The red curve (second equation) is a lower bound on $R$.

  17. Gaussian MAC with Nested Linear Codes - IF Lower Bound: Consider the channel $y = 1\cdot x_1 + g\, x_2 + z$ at SNR = 40 dB. [Plot: normalized computation rates of the two equations and their sum, as a function of $g \in [1, 2]$.]
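Curves of this kind can be approximated numerically. The sketch below (an assumed setup, not the slides' own script) computes the two largest computation rates over linearly independent integer vectors for $y = x_1 + g\,x_2 + z$ at 40 dB; the slides' normalization is not reproduced (raw rates in bits are plotted), and the coefficient search radius is capped for speed, so points very close to rational $g$ may be slightly underestimated.

```python
import itertools
import numpy as np
import matplotlib.pyplot as plt

def two_comp_rates(h, snr, amax=16):
    """Two largest computation rates over linearly independent integer vectors,
    by brute force over coefficients in [-amax, amax] (a cap, not exhaustive at 40 dB)."""
    Minv = np.linalg.inv(np.eye(2) + snr * np.outer(h, h))
    rc = lambda a: max(0.0, 0.5 * np.log2(1.0 / float(a @ Minv @ a)))
    cands = sorted((np.array(a, float) for a in
                    itertools.product(range(-amax, amax + 1), repeat=2) if any(a)),
                   key=rc, reverse=True)
    a1 = cands[0]
    a2 = next(a for a in cands if abs(np.linalg.det(np.stack([a1, a]))) > 1e-9)
    return rc(a1), rc(a2)

snr = 10 ** (40 / 10)
gs = np.linspace(1.0, 2.0, 300)
r1, r2 = zip(*(two_comp_rates(np.array([1.0, g]), snr) for g in gs))
plt.plot(gs, r1, label="first equation")
plt.plot(gs, r2, label="second equation (lower bound on R)")
plt.xlabel("g"); plt.ylabel("computation rate [bits]"); plt.legend(); plt.show()
```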

  18. Gaussian MAC with Nested Linear Codes - IF Rate Region: Using Minkowski's second theorem we get the following result. Theorem: The sum of optimal computation rates is lower bounded by $R_{\mathrm{comp},1} + R_{\mathrm{comp},2} \ge \frac{1}{2}\log\big(1 + \|\mathbf{h}\|^2\,\mathrm{SNR}\big) - 1$ bits.

  19. Gaussian MAC with Nested Linear Codes - IF Rate Region: Using Minkowski's second theorem we get the following result. Theorem: The sum of optimal computation rates is lower bounded by $R_{\mathrm{comp},1} + R_{\mathrm{comp},2} \ge \frac{1}{2}\log\big(1 + \|\mathbf{h}\|^2\,\mathrm{SNR}\big) - 1$ bits. With nested lattice codes, each computation rate can be associated with an individual rate for one of the users (not trivial).

  20. Gaussian MAC with Nested Linear Codes - IF Rate Region: Using Minkowski's second theorem we get the following result. Theorem: The sum of optimal computation rates is lower bounded by $R_{\mathrm{comp},1} + R_{\mathrm{comp},2} \ge \frac{1}{2}\log\big(1 + \|\mathbf{h}\|^2\,\mathrm{SNR}\big) - 1$ bits. With nested lattice codes, each computation rate can be associated with an individual rate for one of the users (not trivial). The sum rate with nested lattice codes is at most 1 bit smaller than the MAC sum capacity.

  21. Gaussian MAC with Nested Linear Codes - IF Rate Region: Using Minkowski's second theorem we get the following result. Theorem: The sum of optimal computation rates is lower bounded by $R_{\mathrm{comp},1} + R_{\mathrm{comp},2} \ge \frac{1}{2}\log\big(1 + \|\mathbf{h}\|^2\,\mathrm{SNR}\big) - 1$ bits. With nested lattice codes, each computation rate can be associated with an individual rate for one of the users (not trivial). The sum rate with nested lattice codes is at most 1 bit smaller than the MAC sum capacity. Theorem: With successive IF, the sum of the 2 optimal computation rates equals the sum capacity of the MAC.
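A quick numerical check of the first theorem, under assumed parameters: the sum of the two optimal computation rates is compared with $\frac{1}{2}\log(1 + \|\mathbf{h}\|^2\,\mathrm{SNR}) - 1$ at a few random gains. An SNR of 20 dB is used so that a search radius of 42 provably contains all optimal coefficient vectors (any $\mathbf{a}$ with $\|\mathbf{a}\|^2 \ge 1 + \mathrm{SNR}\,\|\mathbf{h}\|^2$ has zero computation rate).

```python
import itertools
import numpy as np

def two_comp_rates(h, snr, amax):
    """Two largest computation rates over linearly independent integer vectors."""
    Minv = np.linalg.inv(np.eye(2) + snr * np.outer(h, h))
    rc = lambda a: max(0.0, 0.5 * np.log2(1.0 / float(a @ Minv @ a)))
    cands = sorted((np.array(a, float) for a in
                    itertools.product(range(-amax, amax + 1), repeat=2) if any(a)),
                   key=rc, reverse=True)
    a1 = cands[0]
    a2 = next(a for a in cands if abs(np.linalg.det(np.stack([a1, a]))) > 1e-9)
    return rc(a1), rc(a2)

rng = np.random.default_rng(0)
snr = 10 ** (20 / 10)
for g in rng.uniform(0.5, 4.0, size=5):
    h = np.array([1.0, g])
    r1, r2 = two_comp_rates(h, snr, amax=42)   # 42 > sqrt(1 + 100 * 17)
    bound = 0.5 * np.log2(1 + (h @ h) * snr) - 1
    print(f"g={g:.3f}: R_comp,1 + R_comp,2 = {r1 + r2:.3f} >= {bound:.3f}")
```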

  22. Gaussian MAC with Nested Linear Codes - IF Rate Region: Gaussian two-user MAC $y = x_1 + \sqrt{2}\, x_2 + z$ at SNR = 15 dB. [Plot: IF achievable rate region in the $(R_1, R_2)$ plane.]

  23. Application - Symmetric Gaussian K-User IC. [Block diagram: $K$ encoder/decoder pairs $(\mathrm{E}_k, \mathrm{D}_k)$; each direct link has gain 1 and each cross link has gain $g$.] $y_k = x_k + g \sum_{m \ne k} x_m + z_k$.

  24. Application - Symmetric Gaussian K-User IC. [Block diagram: $K$ encoder/decoder pairs $(\mathrm{E}_k, \mathrm{D}_k)$; each direct link has gain 1 and each cross link has gain $g$.] $y_k = x_k + g \sum_{m \ne k} x_m + z_k$. What is the symmetric capacity $C_{\mathrm{sym}}$ of this channel?

  25. Application - Symmetric Gaussian K-User IC: If all users transmit from the same nested lattice codebook we get $x_{\mathrm{int},k} = \sum_{m \ne k} x_m \in \Lambda$.

  26. Application - Symmetric Gaussian K-User IC: If all users transmit from the same nested lattice codebook we get $x_{\mathrm{int},k} = \sum_{m \ne k} x_m \in \Lambda$. Same lattice ⇒ interference alignment.

  27. Application - Symmetric Gaussian K-User IC: If all users transmit from the same nested lattice codebook we get $x_{\mathrm{int},k} = \sum_{m \ne k} x_m \in \Lambda$. Same lattice ⇒ interference alignment. Each receiver sees an effective 2-user MAC: $y_k = x_k + g\, x_{\mathrm{int},k} + z_k$, with $x_k, x_{\mathrm{int},k} \in \Lambda$.

  28. Application - Symmetric Gaussian K-User IC: If all users transmit from the same nested lattice codebook we get $x_{\mathrm{int},k} = \sum_{m \ne k} x_m \in \Lambda$. Same lattice ⇒ interference alignment. Each receiver sees an effective 2-user MAC: $y_k = x_k + g\, x_{\mathrm{int},k} + z_k$, with $x_k, x_{\mathrm{int},k} \in \Lambda$. Can apply our MAC results and obtain a lower bound on $C_{\mathrm{sym}}$.

  29. Approx. Symmetric Capacity: Strong Interference Regime. [Plot: 3-user IC at SNR = 30 dB; symmetric rate (bits/channel use) vs. $g$ on a logarithmic scale from 1 to 10.]

  30. Approx. Symmetric Capacity: Strong Interference Regime. [Plot: 3-user IC at SNR = 30 dB; symmetric rate (bits/channel use) vs. $g$ on a logarithmic scale from 1 to 10.] Question: For $\gamma > 0$ bits, what is the fraction of channel gains $g$ for which upper bound − lower bound $> \gamma$ bits?

  31. Outage Set. [Plot: 3-user IC at SNR = 30 dB; symmetric rate vs. $g$ in the strong interference regime.]

  32. Outage Set. [Plot: 3-user IC at SNR = 30 dB; symmetric rate vs. $g$ with the outage threshold $\gamma = 0.25$ bits marked.]

  33. Outage Set: 48% outage for $\gamma = 0.25$ bits.

  34. Outage Set: 22% outage for $\gamma = 0.5$ bits.

  35. Outage Set: 11% outage for $\gamma = 0.75$ bits.

  36. Approx. Symmetric Capacity: Strong Interference Regime. Theorem (inner bound for the strong interference regime): The fraction of channel gains for which upper bound − lower bound $> \frac{3+\gamma}{2}$ bits is smaller than $2^{-\gamma}$.

  37. Approx. Symmetric Capacity: Strong Interference Regime. Theorem (inner bound for the strong interference regime): The fraction of channel gains for which upper bound − lower bound $> \frac{3+\gamma}{2}$ bits is smaller than $2^{-\gamma}$. Etkin and E. Ordentlich '09: The DoF of the symmetric Gaussian K-user IC is discontinuous at rational values of $g$.

  38. Approx. Symmetric Capacity: Strong Interference Regime. Theorem (inner bound for the strong interference regime): The fraction of channel gains for which upper bound − lower bound $> \frac{3+\gamma}{2}$ bits is smaller than $2^{-\gamma}$. Etkin and E. Ordentlich '09: The DoF of the symmetric Gaussian K-user IC is discontinuous at rational values of $g$. It appears that the notches in the achievable rate region are inherent to the problem.

  39. Approx. Symmetric Capacity: Strong Interference Regime. Theorem (inner bound for the strong interference regime): The fraction of channel gains for which upper bound − lower bound $> \frac{3+\gamma}{2}$ bits is smaller than $2^{-\gamma}$. Etkin and E. Ordentlich '09: The DoF of the symmetric Gaussian K-user IC is discontinuous at rational values of $g$. It appears that the notches in the achievable rate region are inherent to the problem. What about weak interference ($|g| < 1$)? Can use a lattice Han-Kobayashi scheme with IF decoding.

  40. The Symmetric Gaussian K-User IC: New Inner Bounds. [Plot: symmetric rate vs. $g$ on a logarithmic scale ($10^{-1}$ to $10^{1}$) at SNR = 20 dB.]

  41. The Symmetric Gaussian K-User IC: New Inner Bounds. [Plot: symmetric rate vs. $g$ on a logarithmic scale ($10^{-2}$ to $10^{2}$) at SNR = 35 dB.]

  42. The Symmetric Gaussian K-User IC: New Inner Bounds. [Plot: symmetric rate vs. $g$ on a logarithmic scale ($10^{-2}$ to $10^{2}$) at SNR = 50 dB.]

  43. The Symmetric Gaussian K-User IC: New Inner Bounds. [Plot: symmetric rate vs. $g$ on a logarithmic scale ($10^{-2}$ to $10^{2}$) at SNR = 65 dB.]

  44. Applications for Source Coding Problems. Joint work with Uri Erez (TAU).

  45. Distributed Lossy Compression. [Block diagram: encoders $\mathrm{E}_1,\dots,\mathrm{E}_K$ observe $x_1,\dots,x_K$ and send messages at rates $R_1,\dots,R_K$ to a central decoder $\mathrm{D}$, which outputs the reconstructions $(\hat{x}_1, d_1),\dots,(\hat{x}_K, d_K)$.]

  46. Distributed Lossy Compression. [Block diagram: encoders $\mathrm{E}_1,\dots,\mathrm{E}_K$ observe $x_1,\dots,x_K$ and send messages at rates $R_1,\dots,R_K$ to a central decoder $\mathrm{D}$, which outputs the reconstructions $(\hat{x}_1, d_1),\dots,(\hat{x}_K, d_K)$.] The fundamental limits have been studied: inner and outer bounds exist, and they even sometimes agree.

  47. Distributed Lossy Compression. [Block diagram: encoders $\mathrm{E}_1,\dots,\mathrm{E}_K$ observe $x_1,\dots,x_K$ and send messages at rates $R_1,\dots,R_K$ to a central decoder $\mathrm{D}$, which outputs the reconstructions $(\hat{x}_1, d_1),\dots,(\hat{x}_K, d_K)$.] The fundamental limits have been studied: inner and outer bounds exist, and they even sometimes agree. However, in some applications the encoders/decoder are required to be extremely simple.

  48. Distributed Lossy Compression. [Block diagram: encoders $\mathrm{E}_1,\dots,\mathrm{E}_K$ observe $x_1,\dots,x_K$ and send messages at rates $R_1,\dots,R_K$ to a central decoder $\mathrm{D}$, which outputs the reconstructions $(\hat{x}_1, d_1),\dots,(\hat{x}_K, d_K)$.] We restrict attention to: one-shot compression (block length is 1); a jointly Gaussian source $\mathbf{x} \sim \mathcal{N}(\mathbf{0}, K_{xx})$; symmetric rates and distortions, $R_1 = \cdots = R_K = R$ and $d_1 = \cdots = d_K = d$; and the MSE distortion measure $E(x_k - \hat{x}_k)^2 \le d$.

  49. Integer-Forcing Source Coding: Basic Idea. Rather than solving the problem [block diagram: each encoder sends at rate $R$ and the decoder outputs $\hat{x}_1,\dots,\hat{x}_K$]...

  50. Integer-Forcing Source Coding: Basic Idea. First solve [block diagram: each encoder sends at rate $R$ and the decoder outputs estimates of the integer combinations $\sum_{m=1}^K a_{1m} x_m,\dots,\sum_{m=1}^K a_{Km} x_m$].

  51. Integer-Forcing Source Coding: Basic Idea. First solve [block diagram: each encoder sends at rate $R$ and the decoder outputs estimates of the integer combinations $\sum_{m=1}^K a_{1m} x_m,\dots,\sum_{m=1}^K a_{Km} x_m$]. This can be done with low complexity if all coefficients are integers, and $R$ is proportional to $\max_k \mathrm{Var}\big(\sum_{m=1}^K a_{km} x_m\big)$.

  52. Integer-Forcing Source Coding: Basic Idea. First solve [block diagram: each encoder sends at rate $R$ and the decoder outputs estimates of the integer combinations $\sum_{m=1}^K a_{1m} x_m,\dots,\sum_{m=1}^K a_{Km} x_m$]. This can be done with low complexity if all coefficients are integers, and $R$ is proportional to $\max_k \mathrm{Var}\big(\sum_{m=1}^K a_{km} x_m\big)$. If $x_1,\dots,x_K$ are sufficiently correlated, we can find $K$ linearly independent integer-valued coefficient vectors with small variance.

  53. Integer-Forcing Source Coding: Basic Idea. First solve [block diagram: each encoder sends at rate $R$ and the decoder outputs estimates of the integer combinations $\sum_{m=1}^K a_{1m} x_m,\dots,\sum_{m=1}^K a_{Km} x_m$]. This can be done with low complexity if all coefficients are integers, and $R$ is proportional to $\max_k \mathrm{Var}\big(\sum_{m=1}^K a_{km} x_m\big)$. If $x_1,\dots,x_K$ are sufficiently correlated, we can find $K$ linearly independent integer-valued coefficient vectors with small variance. Invert the equations to get $\hat{x}_1,\dots,\hat{x}_K$.
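A sketch of the "find $K$ linearly independent small-variance integer vectors" step for a toy covariance matrix. A practical implementation would use lattice reduction such as LLL; here a greedy brute-force search over small coefficients is used, and the correlation level, target distortion and dimension are assumptions made only for illustration.

```python
import itertools
import numpy as np

def small_variance_integer_matrix(Kxx, d, amax=4):
    """Greedily pick K linearly independent integer vectors a_m with small
    a_m^T (Kxx + d I) a_m = Var(a_m^T x) + d*||a_m||^2, the quantity that will
    drive the IF rate.  Brute force over coefficients in [-amax, amax]."""
    K = Kxx.shape[0]
    M = Kxx + d * np.eye(K)
    cands = sorted((np.array(a) for a in
                    itertools.product(range(-amax, amax + 1), repeat=K) if any(a)),
                   key=lambda a: float(a @ M @ a))
    rows = []
    for a in cands:
        if np.linalg.matrix_rank(np.array(rows + [a])) == len(rows) + 1:
            rows.append(a)
        if len(rows) == K:
            break
    return np.array(rows)

# Example: three highly correlated unit-variance sources (rho = 0.95 assumed)
rho, d = 0.95, 0.01
Kxx = np.full((3, 3), rho) + (1 - rho) * np.eye(3)
A = small_variance_integer_matrix(Kxx, d)
print(A)
print([float(a @ (Kxx + d * np.eye(3)) @ a) for a in A])  # the largest entry drives the rate
```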

  54. Integer-Forcing Source Coding: Each signal can be written as $x_k = x_k^* + M_k \Delta$, where $x_k^* \in \big[-\frac{\Delta}{2}, \frac{\Delta}{2}\big)$ and $M_k \in \mathbb{Z}$. [Figure: the real line with marks at $-3\Delta, -2\Delta, -\Delta, 0, \Delta, 2\Delta, 3\Delta$.]

  55. Integer-Forcing Source Coding: Each signal can be written as $x_k = x_k^* + M_k \Delta$, where $x_k^* \in \big[-\frac{\Delta}{2}, \frac{\Delta}{2}\big)$ and $M_k \in \mathbb{Z}$. [Figure: the real line with marks at $-3\Delta, -2\Delta, -\Delta, 0, \Delta, 2\Delta, 3\Delta$.] Simple modulo property: For any set of integers $a_1,\dots,a_K$, $\big[\sum_{k=1}^K a_k x_k^*\big]^* = \big[\sum_{k=1}^K a_k x_k\big]^*$.
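The modulo property is easy to sanity-check numerically; the sketch below uses a $[-\Delta/2, \Delta/2)$ reduction and arbitrary (assumed) signals and integer coefficients.

```python
import numpy as np

def mod_star(v, delta):
    """Reduce v to the interval [-delta/2, delta/2): the [.]* operation."""
    return v - delta * np.floor(v / delta + 0.5)

rng = np.random.default_rng(1)
delta = 1.0
x = rng.normal(size=4)                 # arbitrary real signals
a = np.array([2, -3, 1, 5])            # arbitrary integer coefficients
lhs = mod_star(np.sum(a * mod_star(x, delta)), delta)
rhs = mod_star(np.sum(a * x), delta)
print(np.isclose(lhs, rhs))            # True: [.]* commutes with integer combinations
```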

  56. Integer-Forcing Source Coding: $\big[\sum_{k=1}^K a_k x_k^*\big]^* = \big[\sum_{k=1}^K a_k x_k\big]^*$. If we know that $\sum_{k=1}^K a_k x_k \in \big[-\frac{\Delta}{2}, \frac{\Delta}{2}\big)$, then $\big[\sum_{k=1}^K a_k x_k^*\big]^* = \sum_{k=1}^K a_k x_k$.

  57. Integer-Forcing Source Coding: $\big[\sum_{k=1}^K a_k x_k^*\big]^* = \big[\sum_{k=1}^K a_k x_k\big]^*$. If we know that $\sum_{k=1}^K a_k x_k \in \big[-\frac{\Delta}{2}, \frac{\Delta}{2}\big)$, then $\big[\sum_{k=1}^K a_k x_k^*\big]^* = \sum_{k=1}^K a_k x_k$. It suffices to compress $x_1^*,\dots,x_K^*$ for estimating $\sum_{k=1}^K a_k x_k$.

  58. Integer-Forcing Source Coding: $\big[\sum_{k=1}^K a_k x_k^*\big]^* = \big[\sum_{k=1}^K a_k x_k\big]^*$. If we know that $\sum_{k=1}^K a_k x_k \in \big[-\frac{\Delta}{2}, \frac{\Delta}{2}\big)$, then $\big[\sum_{k=1}^K a_k x_k^*\big]^* = \sum_{k=1}^K a_k x_k$. It suffices to compress $x_1^*,\dots,x_K^*$ for estimating $\sum_{k=1}^K a_k x_k$. If $\mathbf{x}$ is a Gaussian vector, $\mathbf{a}$ can be chosen to minimize the variance of $\mathbf{a}^T \mathbf{x}$. This maximizes the probability that $\sum_{k=1}^K a_k x_k \in \big[-\frac{\Delta}{2}, \frac{\Delta}{2}\big)$.

  59. Integer-Forcing Source Coding: $\big[\sum_{k=1}^K a_k x_k^*\big]^* = \big[\sum_{k=1}^K a_k x_k\big]^*$. If we know that $\sum_{k=1}^K a_k x_k \in \big[-\frac{\Delta}{2}, \frac{\Delta}{2}\big)$, then $\big[\sum_{k=1}^K a_k x_k^*\big]^* = \sum_{k=1}^K a_k x_k$. It suffices to compress $x_1^*,\dots,x_K^*$ for estimating $\sum_{k=1}^K a_k x_k$. If $\mathbf{x}$ is a Gaussian vector, $\mathbf{a}$ can be chosen to minimize the variance of $\mathbf{a}^T \mathbf{x}$. This maximizes the probability that $\sum_{k=1}^K a_k x_k \in \big[-\frac{\Delta}{2}, \frac{\Delta}{2}\big)$. After estimating $K$ linearly independent combinations, one can estimate each of the individual signals.

  60. Integer-Forcing Source Coding. Integer-forcing encoder: Quantizes $x_k$ to the nearest point in the lattice $\Lambda_f = \sqrt{12 d}\,\mathbb{Z}$ and sends the index of this point modulo the lattice $\Lambda = 2^R \sqrt{12 d}\,\mathbb{Z}$ → The equivalent signal is $\tilde{x}_k = [x_k + d_k]^*$, where $d_k \sim \mathrm{Unif}\big(-\frac{\sqrt{12 d}}{2}, \frac{\sqrt{12 d}}{2}\big)$ and $[\cdot]^*$ is taken with respect to $\Delta = 2^R \sqrt{12 d}$.

  61. Integer-Forcing Source Coding. Integer-forcing encoder: Quantizes $x_k$ to the nearest point in the lattice $\Lambda_f = \sqrt{12 d}\,\mathbb{Z}$ and sends the index of this point modulo the lattice $\Lambda = 2^R \sqrt{12 d}\,\mathbb{Z}$ → The equivalent signal is $\tilde{x}_k = [x_k + d_k]^*$, where $d_k \sim \mathrm{Unif}\big(-\frac{\sqrt{12 d}}{2}, \frac{\sqrt{12 d}}{2}\big)$ and $[\cdot]^*$ is taken with respect to $\Delta = 2^R \sqrt{12 d}$. [Figure: $x_k$ shown on the real line with marks at $-3\Delta,\dots,3\Delta$.]

  62. Integer-Forcing Source Coding. Integer-forcing encoder: Quantizes $x_k$ to the nearest point in the lattice $\Lambda_f = \sqrt{12 d}\,\mathbb{Z}$ and sends the index of this point modulo the lattice $\Lambda = 2^R \sqrt{12 d}\,\mathbb{Z}$ → The equivalent signal is $\tilde{x}_k = [x_k + d_k]^*$, where $d_k \sim \mathrm{Unif}\big(-\frac{\sqrt{12 d}}{2}, \frac{\sqrt{12 d}}{2}\big)$ and $[\cdot]^*$ is taken with respect to $\Delta = 2^R \sqrt{12 d}$. [Figure: additionally shows the quantization point $Q_{\Lambda_f}(x_k)$.]

  63. Integer-Forcing Source Coding. Integer-forcing encoder: Quantizes $x_k$ to the nearest point in the lattice $\Lambda_f = \sqrt{12 d}\,\mathbb{Z}$ and sends the index of this point modulo the lattice $\Lambda = 2^R \sqrt{12 d}\,\mathbb{Z}$ → The equivalent signal is $\tilde{x}_k = [x_k + d_k]^*$, where $d_k \sim \mathrm{Unif}\big(-\frac{\sqrt{12 d}}{2}, \frac{\sqrt{12 d}}{2}\big)$ and $[\cdot]^*$ is taken with respect to $\Delta = 2^R \sqrt{12 d}$. [Figure: additionally shows the equivalent signal $\tilde{x}_k$.]

  64. Integer-Forcing Source Coding. Integer-forcing encoder: Quantizes $x_k$ to the nearest point in the lattice $\Lambda_f = \sqrt{12 d}\,\mathbb{Z}$ and sends the index of this point modulo the lattice $\Lambda = 2^R \sqrt{12 d}\,\mathbb{Z}$ → The equivalent signal is $\tilde{x}_k = [x_k + d_k]^*$, where $d_k \sim \mathrm{Unif}\big(-\frac{\sqrt{12 d}}{2}, \frac{\sqrt{12 d}}{2}\big)$ and $[\cdot]^*$ is taken with respect to $\Delta = 2^R \sqrt{12 d}$. Modulo = one-dimensional binning.

  65. Integer-Forcing Source Coding. Integer-forcing decoder: Computes $K$ estimates of the integer linear combinations $\mathbf{a}_m^T \mathbf{x}$: $\big[\sum_{k=1}^K a_{mk} \tilde{x}_k\big]^* = \big[\sum_{k=1}^K a_{mk}(x_k + d_k)\big]^*$, $m = 1,\dots,K$.

  66. Integer-Forcing Source Coding. Integer-forcing decoder: Computes $K$ estimates of the integer linear combinations $\mathbf{a}_m^T \mathbf{x}$: $\big[\sum_{k=1}^K a_{mk} \tilde{x}_k\big]^* = \big[\sum_{k=1}^K a_{mk}(x_k + d_k)\big]^*$, $m = 1,\dots,K$. If $\big[\sum_{k=1}^K a_{mk}(x_k + d_k)\big]^* = \sum_{k=1}^K a_{mk}(x_k + d_k)$ for $m = 1,\dots,K$, the decoder can invert the equations and get $x_k + d_k$ for $k = 1,\dots,K$.
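A self-contained end-to-end sketch of the encoder/decoder pair just described: dithered scalar quantization onto $\sqrt{12 d}\,\mathbb{Z}$, indices sent modulo $2^R \sqrt{12 d}\,\mathbb{Z}$, modulo decoding of the integer combinations, and inversion. The numeric parameters, the choice of $A$, and the assumption that the dithers are shared with the decoder are illustrative; the overload event analyzed on the next slides is not handled.

```python
import numpy as np

def mod_star(v, delta):
    """Reduce to [-delta/2, delta/2): the [.]* operation w.r.t. the coarse lattice."""
    return v - delta * np.floor(v / delta + 0.5)

def if_source_code(x, A, R, d, rng):
    """One-shot integer-forcing source coding: x is a length-K source realization,
    A a K x K integer matrix with linearly independent rows, R the rate per encoder
    in bits, d the target MSE.  Returns the decoder's estimates x + quantization noise."""
    K = len(x)
    s = np.sqrt(12 * d)                    # fine-lattice step: quantization MSE ~ d
    delta = (2 ** R) * s                   # coarse-lattice spacing

    # Encoders: dithered quantization, transmit the fine-lattice index modulo 2^R
    u = rng.uniform(-s / 2, s / 2, size=K)           # dithers (known to the decoder)
    q = np.round((x + u) / s)                        # nearest fine-lattice point index
    t = np.mod(q, 2 ** R)                            # transmitted indices, R bits each

    # Decoder: rebuild the equivalent signals x_tilde_k = [x_k + d_k]^*
    x_tilde = mod_star(t * s - u, delta)

    # Decode the integer combinations modulo the coarse lattice, then invert
    combos = mod_star(A @ x_tilde, delta)            # = A (x + noise) if no overload
    return np.linalg.solve(A, combos)

# Example (assumed parameters): two highly correlated sources, hand-picked A
rng = np.random.default_rng(0)
rho, d, R = 0.99, 1e-3, 7
Kxx = np.array([[1.0, rho], [rho, 1.0]])
A = np.array([[1.0, -1.0], [1.0, 0.0]])
x = rng.multivariate_normal(np.zeros(2), Kxx)
print(x, if_source_code(x, A, R, d, rng))            # estimates are close to x
```

With these parameters the no-overload condition holds with very high probability, so the printed estimates coincide with $x_k + d_k$.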

  67. Integer-Forcing Source Coding - Error Probability: The decoder succeeds if $\max_{m=1,\dots,K}\big|\sum_{k=1}^K a_{mk}(x_k + d_k)\big| < \frac{\Delta}{2}$; otherwise an error is declared.

  68. Integer-Forcing Source Coding - Error Probability: The decoder succeeds if $\max_{m=1,\dots,K}\big|\sum_{k=1}^K a_{mk}(x_k + d_k)\big| < \frac{\Delta}{2}$; otherwise an error is declared. Let $w_m = \sum_{k=1}^K a_{mk}(x_k + d_k)$. Then $E(w_m) = 0$ and $\sigma^2_{w_m} = \mathbf{a}_m^T (K_{xx} + d\, I)\,\mathbf{a}_m$, and Chernoff's bound gives $\Pr(|w_m| > \tau) \le 2\, e^{-\frac{\tau^2}{2\sigma^2_{w_m}}}$.

  69. Integer-Forcing Source Coding - Error Probability: The decoder succeeds if $\max_{m=1,\dots,K}\big|\sum_{k=1}^K a_{mk}(x_k + d_k)\big| < \frac{\Delta}{2}$; otherwise an error is declared. Let $w_m = \sum_{k=1}^K a_{mk}(x_k + d_k)$. Then $E(w_m) = 0$ and $\sigma^2_{w_m} = \mathbf{a}_m^T (K_{xx} + d\, I)\,\mathbf{a}_m$, and Chernoff's bound gives $\Pr(|w_m| > \tau) \le 2\, e^{-\frac{\tau^2}{2\sigma^2_{w_m}}}$. Substituting $\Delta = 2^R \sqrt{12 d}$ and applying the union bound gives $P_e \le 2K \exp\Big\{-\frac{3}{2}\, 2^{\,2\big(R - \frac{1}{2}\log\frac{\max_{m=1,\dots,K}\mathbf{a}_m^T (K_{xx} + d I)\,\mathbf{a}_m}{d}\big)}\Big\}$.

  70. Integer-Forcing Source Coding - Error Probability: $P_e \le 2K \exp\Big\{-\frac{3}{2}\, 2^{\,2\big(R - \frac{1}{2}\log\frac{\max_{m=1,\dots,K}\mathbf{a}_m^T (K_{xx} + d I)\,\mathbf{a}_m}{d}\big)}\Big\}$.

  71. Integer-Forcing Source Coding - Error Probability: $P_e \le 2K \exp\Big\{-\frac{3}{2}\, 2^{\,2\big(R - \frac{1}{2}\log\frac{\max_{m=1,\dots,K}\mathbf{a}_m^T (K_{xx} + d I)\,\mathbf{a}_m}{d}\big)}\Big\}$. Let $R_{\mathrm{IF}}(A, d) \triangleq \frac{1}{2}\log\Big(\max_{m=1,\dots,K}\mathbf{a}_m^T \big(I + \frac{1}{d} K_{xx}\big)\mathbf{a}_m\Big)$.

  72. Integer-Forcing Source Coding - Error Probability: $P_e \le 2K \exp\Big\{-\frac{3}{2}\, 2^{\,2\big(R - \frac{1}{2}\log\frac{\max_{m=1,\dots,K}\mathbf{a}_m^T (K_{xx} + d I)\,\mathbf{a}_m}{d}\big)}\Big\}$. Let $R_{\mathrm{IF}}(A, d) \triangleq \frac{1}{2}\log\Big(\max_{m=1,\dots,K}\mathbf{a}_m^T \big(I + \frac{1}{d} K_{xx}\big)\mathbf{a}_m\Big)$. Theorem: Let $R = R_{\mathrm{IF}}(A, d) + \delta$ be such that $2^R$ is a positive integer. IF source coding produces estimates with average MSE distortion $d$ for all $x_1,\dots,x_K$ with probability greater than $1 - 2K \exp\big\{-\frac{3}{2}\, 2^{2\delta}\big\}$.

  73. Integer-Forcing Source Coding - Error Probability: $P_e \le 2K \exp\Big\{-\frac{3}{2}\, 2^{\,2\big(R - \frac{1}{2}\log\frac{\max_{m=1,\dots,K}\mathbf{a}_m^T (K_{xx} + d I)\,\mathbf{a}_m}{d}\big)}\Big\}$. Let $R_{\mathrm{IF}}(A, d) \triangleq \frac{1}{2}\log\Big(\max_{m=1,\dots,K}\mathbf{a}_m^T \big(I + \frac{1}{d} K_{xx}\big)\mathbf{a}_m\Big)$. Theorem: Let $R = R_{\mathrm{IF}}(A, d) + \delta$ be such that $2^R$ is a positive integer. IF source coding produces estimates with average MSE distortion $d$ for all $x_1,\dots,x_K$ with probability greater than $1 - 2K \exp\big\{-\frac{3}{2}\, 2^{2\delta}\big\}$. But... is $R_{\mathrm{IF}}(A, d)$ any good?

  74. Integer-Forcing Source Coding - Example: Jointly Gaussian vector $\mathbf{x} \sim \mathcal{N}(\mathbf{0}, K_{xx})$ with $K_{xx} = I + \mathrm{SNR}\cdot H H^T$, where all entries of $H$ are i.i.d. $\mathcal{N}(0, 1)$.
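A sketch of how $R_{\mathrm{IF}}(A, d)$ could be evaluated for this example. The slide does not give the dimensions of $H$, the SNR, or the target distortion, so a square $K \times K$ matrix $H$ with $K = 4$, $\mathrm{SNR} = 10$ and $d = 1$ are assumed; $A$ is chosen by the same greedy small-coefficient search sketched earlier and compared against the naive choice $A = I$.

```python
import itertools
import numpy as np

def r_if(A, Kxx, d):
    """R_IF(A, d) = 1/2 log2( max_m a_m^T (I + Kxx / d) a_m ), in bits per sample."""
    M = np.eye(Kxx.shape[0]) + Kxx / d
    return 0.5 * np.log2(max(float(a @ M @ a) for a in A))

def greedy_integer_matrix(Kxx, d, amax=3):
    """K linearly independent integer vectors with small a^T (Kxx + d I) a (brute force)."""
    K = Kxx.shape[0]
    M = Kxx + d * np.eye(K)
    cands = sorted((np.array(a) for a in
                    itertools.product(range(-amax, amax + 1), repeat=K) if any(a)),
                   key=lambda a: float(a @ M @ a))
    rows = []
    for a in cands:
        if np.linalg.matrix_rank(np.array(rows + [a])) == len(rows) + 1:
            rows.append(a)
        if len(rows) == K:
            break
    return np.array(rows)

rng = np.random.default_rng(2)
K, snr, d = 4, 10.0, 1.0                      # assumed parameters
H = rng.normal(size=(K, K))                   # dimensions of H assumed square
Kxx = np.eye(K) + snr * H @ H.T
A = greedy_integer_matrix(Kxx, d)
print("R_IF with searched A:", r_if(A, Kxx, d))
print("R_IF with A = I:     ", r_if(np.eye(K, dtype=int), Kxx, d))
```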
