Compute-Forward Multiple Access
Jingge Zhu, University of Melbourne
Joint work with Erixhen Sula, Sung Hoon Lim, Adriano Pastore, Michael Gastpar
1 / 33
Overview
◮ Overview: Compute-Forward Multiple Access (CFMA)
◮ Theory
◮ A Practical Implementation
2 / 33
Overview: Compute-Forward Multiple Access (CFMA) 3 / 33
Capacity of the two-user Gaussian Multiple Access Channel
◮ Channel model: y = h_1 x_1 + h_2 x_2 + z
◮ Power constraint: E[‖x_k‖^2] ≤ nP
MAC capacity region:
    R_1 < I(X_1; Y | X_2)
    R_2 < I(X_2; Y | X_1)
    R_1 + R_2 < I(X_1, X_2; Y)
4 / 33
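As a quick numerical illustration (not on the original slide), with Gaussian inputs and unit noise variance these three bounds evaluate to 1/2 log(1 + h_1^2 P), 1/2 log(1 + h_2^2 P) and 1/2 log(1 + h_1^2 P + h_2^2 P); the sketch below simply evaluates them.

```python
import numpy as np

def gaussian_mac_region(h1, h2, P):
    """Rate bounds (bits/channel use) of the two-user Gaussian MAC,
    assuming Gaussian inputs and unit-variance noise."""
    R1_max = 0.5 * np.log2(1 + h1**2 * P)                # I(X1; Y | X2)
    R2_max = 0.5 * np.log2(1 + h2**2 * P)                # I(X2; Y | X1)
    Rsum_max = 0.5 * np.log2(1 + (h1**2 + h2**2) * P)    # I(X1, X2; Y)
    return R1_max, R2_max, Rsum_max

print(gaussian_mac_region(h1=1.0, h2=1.0, P=10.0))
```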
Compute-Forward Multiple Access (CFMA)
To achieve capacity:
◮ “Multi-user decoding”, e.g.
  ◮ maximum likelihood decoder
  ◮ joint typicality decoder
◮ “Single-user decoding”, e.g.
  ◮ successive cancellation decoding + time-sharing
  ◮ rate-splitting [Rimoldi-Urbanke ’96]
Features of CFMA:
◮ capacity achieving
◮ “single-user decoder”
◮ no time-sharing or rate-splitting needed
5 / 33
CFMA
◮ first decode x_1 + x_2 using y
◮ then decode x_1 (or x_2) using y and x_1 + x_2
◮ solve for x_1, x_2 (equivalently w_1, w_2)
6 / 33
Theory 7 / 33
The Compute-and-Forward Problem [Nazer-Gastpar ’09]
◮ x_1, x_2 are drawn from nested lattice codes, as in [Nazer-Gastpar]
◮ the decoder aims to decode the sum x_1 + x_2 ∈ R^n
8 / 33
Nested lattices
Two lattices Λ and Λ′ are nested if Λ′ ⊆ Λ
9 / 33
Nested lattice codes [Erez-Zamir ’04]
◮ The fine lattice Λ protects against noise.
◮ The coarse lattice Λ′ enforces the power constraint.
10 / 33
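To make the fine/coarse roles concrete, here is a minimal one-dimensional sketch (my own illustration, not from the slides): take the fine lattice Λ = Z and the coarse lattice Λ′ = 4Z, so Λ′ ⊆ Λ. The codebook is the set of fine-lattice points inside one fundamental cell of the coarse lattice, and reduction modulo Λ′ is what keeps the transmitted points bounded in power.

```python
import numpy as np

# Hypothetical 1-D example: fine lattice Z, coarse lattice 4Z (so the coarse
# lattice is a sublattice of the fine one).
fine_step, coarse_step = 1.0, 4.0

# Codebook = fine-lattice points inside one fundamental cell of the coarse lattice.
codebook = np.arange(0.0, coarse_step, fine_step)   # {0, 1, 2, 3}

def mod_coarse(x):
    """Reduce modulo the coarse lattice: the result stays in [0, coarse_step),
    which is how the coarse lattice bounds the transmit power."""
    return np.mod(x, coarse_step)

# The modulo-sum of two codewords is again a codeword (closure under addition):
c1, c2 = codebook[3], codebook[2]
print(mod_coarse(c1 + c2))   # 1.0, still in the codebook
```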
Transmission (figure from M. Gastpar and B. Nazer) 11 / 33
Lattice decoding ◮ Quantize y with respect to the fine lattice 12 / 33
Lattice decoding
◮ Quantize y with respect to the fine lattice
◮ Thanks to the algebraic structure (lattices are closed under addition), decoding complexity does not depend on the number of users k
◮ Single-user decoder!
13 / 33
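As a toy illustration of this point (assumptions mine, not from the slides): for the integer lattice Z^n, lattice decoding is just component-wise rounding, and exactly the same quantizer is used whether the target is a single codeword or a sum of codewords.

```python
import numpy as np

def lattice_decode_Zn(y):
    """Nearest-point quantizer for the integer lattice Z^n: component-wise
    rounding. Stands in for quantization onto the fine lattice."""
    return np.round(y)

rng = np.random.default_rng(0)
x1 = rng.integers(0, 4, size=8).astype(float)   # toy 'codewords' on the integer lattice
x2 = rng.integers(0, 4, size=8).astype(float)
y = x1 + x2 + 0.1 * rng.standard_normal(8)      # noisy observation of the sum

print(np.array_equal(lattice_decode_Zn(y), x1 + x2))   # True for small noise
```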
Compute-and-forward [Nazer-Gastpar ’09]
◮ Theorem: The sum a_1 x_1 + a_2 x_2 can be decoded reliably if the rate R of the codebooks satisfies
    R < 1/2 log(1 + P‖h‖^2) − 1/2 log(‖a‖^2 + P(‖h‖^2 ‖a‖^2 − (h^T a)^2))
  where h := [h_1, h_2], a := [a_1, a_2] ∈ N^2.
◮ Not enough for CFMA: no flexibility in terms of rates
14 / 33
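A small numerical check of this rate expression (an illustration of mine, assuming unit noise variance; rates in bits per channel use):

```python
import numpy as np

def computation_rate(h, a, P):
    """Compute-and-forward rate bound for decoding a1*x1 + a2*x2
    (unit-variance noise assumed; clipped at zero)."""
    h, a = np.asarray(h, float), np.asarray(a, float)
    num = 1 + P * (h @ h)
    den = a @ a + P * ((h @ h) * (a @ a) - (h @ a) ** 2)
    return max(0.0, 0.5 * np.log2(num / den))

# Decoding the plain sum x1 + x2 over a symmetric channel:
print(computation_rate(h=[1.0, 1.0], a=[1, 1], P=10.0))   # ~1.70 bits
```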
General compute-and-forward [Zhu-Gastpar ’14]
◮ Theorem: The sum a_1 x_1 + a_2 x_2 can be decoded reliably if the rate R_i of codebook i satisfies
    R_i < 1/2 log(β_i^2 (1 + P‖h‖^2)) − 1/2 log(‖ã‖^2 + P(‖h‖^2 ‖ã‖^2 − (h^T ã)^2))
  where ã := [β_1 a_1, β_2 a_2], for any β_1, β_2 ∈ R
◮ The β parameters are crucial for adjusting the individual rates
15 / 33
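Continuing the numerical sketch (again my own illustration with unit noise variance), the scaled bound just replaces a by ã = (β_1 a_1, β_2 a_2) and adds a per-user β_i term, which is what trades rate between the two users:

```python
import numpy as np

def general_cf_rate(h, a, beta, P, i):
    """Rate bound for user i in the beta-scaled compute-and-forward scheme
    (illustrative sketch; unit-variance noise assumed)."""
    h, a, beta = (np.asarray(v, float) for v in (h, a, beta))
    a_tilde = beta * a
    num = beta[i] ** 2 * (1 + P * (h @ h))
    den = a_tilde @ a_tilde + P * ((h @ h) * (a_tilde @ a_tilde) - (h @ a_tilde) ** 2)
    return max(0.0, 0.5 * np.log2(num / den))

# Sweeping beta_1 trades rate between the users for the same coefficients a = [1, 1]:
for b1 in (0.8, 1.0, 1.25):
    r1 = general_cf_rate([1, 1], [1, 1], [b1, 1.0], 10.0, 0)
    r2 = general_cf_rate([1, 1], [1, 1], [b1, 1.0], 10.0, 1)
    print(f"beta1 = {b1}: R1 < {r1:.2f}, R2 < {r2:.2f}")
```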
Compute-Forward Multiple Access (CFMA) [Zhu-Gastpar ’17]
Theorem: Define
    A := h_1 h_2 P / √(1 + h_1^2 P + h_2^2 P).
If A ≥ 1, every rate pair on the dominant face can be achieved via CFMA.
◮ (SNR ≥ 1 + √2 if h_1 = h_2)
◮ Each rate pair corresponds to some appropriately chosen β
◮ 2 × lattice decoding
16 / 33
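A quick numerical check of this threshold (my own illustration): for h_1 = h_2 = 1 the condition A ≥ 1 is met exactly from P = 1 + √2 ≈ 2.414 onwards, matching the SNR condition on the slide.

```python
import numpy as np

def cfma_condition(h1, h2, P):
    """The quantity A from the CFMA theorem; A >= 1 means every point on the
    dominant face is achievable by the two-step scheme."""
    return h1 * h2 * P / np.sqrt(1 + h1**2 * P + h2**2 * P)

for P in (2.0, 1 + np.sqrt(2), 3.0):
    print(f"P = {P:.3f}, A = {cfma_condition(1.0, 1.0, P):.3f}")   # A crosses 1 at P = 1 + sqrt(2)
```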
A Practical Implementation 17 / 33
Practical CFMA?
◮ If we directly apply the above scheme...
  ◮ nested lattice codes do not seem very practical
  ◮ lattice decoding (quantization in high-dimensional space) is hard to implement
◮ Different codes, same spirit
  ◮ off-the-shelf codes (e.g. binary LDPC codes)
  ◮ efficient decoding algorithms (e.g. the sum-product algorithm)
18 / 33
A practical implementation of CFMA
◮ C_1, C_2 are binary LDPC codes with C_2 ⊆ C_1
◮ user 1 sends u_1 ∈ C_1, user 2 sends u_2 ∈ C_2
◮ Decode the modulo sum s = u_1 ⊕ u_2.
◮ Decode u_1 using the modulo sum as side information.
19 / 33
Nested LDPC codes
◮ How to construct a nested code C_1 from C_2 such that C_2 ⊆ C_1?
◮ Replacing two rows h_i^T, h_j^T of the parity-check matrix of C_2 by the single new row (h_i ⊕ h_j)^T, we obtain a new code C_1.
◮ (figure: parity-check matrices of C_2 and C_1)
◮ Rate 1/2 → 5/8
◮ C_2 ⊆ C_1!
20 / 33
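A minimal sketch of this row-merging construction (the toy parity-check matrix below is my own, not the one from the figure):

```python
import numpy as np

def merge_rows(H, i, j):
    """Replace rows i and j of a binary parity-check matrix by their XOR.
    Every word satisfying both old checks satisfies the merged check, so the
    code can only grow: C(H) ⊆ C(H_new)."""
    keep = [r for r in range(H.shape[0]) if r not in (i, j)]
    new_row = (H[i] ^ H[j])[None, :]
    return np.vstack([H[keep], new_row])

# Arbitrary rate-1/2 toy code: 4 checks, block length 8.
H2 = np.array([[1, 1, 0, 1, 0, 0, 1, 0],
               [0, 1, 1, 0, 1, 0, 0, 1],
               [1, 0, 1, 1, 0, 1, 0, 0],
               [0, 0, 1, 0, 1, 1, 1, 1]], dtype=np.uint8)
H1 = merge_rows(H2, 0, 1)          # 3 checks -> rate 5/8 for n = 8

# Brute-force check of the nesting: every codeword of C2 also satisfies H1.
words = ((np.arange(256)[:, None] >> np.arange(8)) & 1).astype(np.uint8)
C2 = words[((words @ H2.T) % 2 == 0).all(axis=1)]
print((((C2 @ H1.T) % 2) == 0).all())   # True
```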
CFMA with binary LDPC codes
◮ Decode s:
    ŝ_i = argmax_{s_i ∈ {0,1}} p(s_i | y)
◮ Decode u_1:
    û_{1,i} = argmax_{u_{1,i} ∈ {0,1}} p(u_{1,i} | y, s)
21 / 33
First Step: Decode the Modulo Sum
◮ s = u_1 ⊕ u_2 ∈ C̃, where C̃ is the code with the larger rate among C_1, C_2 (here C̃ = C_1, since C_2 ⊆ C_1).
◮ So decoding s is essentially the same as decoding a single codeword!
◮ Bit-wise MAP estimation for the modulo sum:
    ŝ_i = argmax_{s_i ∈ {0,1}} Σ_{∼s_i} [ Π_{j=1}^n p(y_j | s_j) ] 𝟙{H̃ s = 0}
  where the sum Σ_{∼s_i} runs over all bits of s except s_i, and H̃ is the parity-check matrix of C̃.
◮ This expression should remind you of the sum-product algorithm...
22 / 33
Second Step: Decode One of the Codewords
◮ Bit-wise MAP estimation for one of the codewords:
    û_{1,i} = argmax_{u_{1,i} ∈ {0,1}} p(u_{1,i} | y, s)
            = argmax_{u_{1,i} ∈ {0,1}} Σ_{∼u_{1,i}} [ Π_{j=1}^n p(y_j | u_{1,j}, s_j) ] 𝟙{H_1 u_1 = 0}.
◮ Same form as in the first step. Again, the sum-product algorithm applies.
23 / 33
Implementation by Standard SPA
◮ Both decoding steps can be implemented efficiently by a standard SPA, only with modified initial log-likelihood ratios (LLRs):
    LLR_1 := log [ p(y_i | s_i = 0) / p(y_i | s_i = 1) ]
           = log cosh(√P (h_1 + h_2) y_i) − log cosh(√P (h_1 − h_2) y_i) − 2 P h_1 h_2
    LLR_2 := log [ p(y_i | u_{1,i} = 0, s_i) / p(y_i | u_{1,i} = 1, s_i) ]
           = −2 y_i √P (h_1 + h_2)   for s_i = 0
           = −2 y_i √P (h_1 − h_2)   for s_i = 1
Algorithm: CFMA with binary LDPC codes
1. ŝ = SPA(H̃, LLR_1)
2. û_1 = SPA(H_1, LLR_2)
24 / 33
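A sketch of the decoder front end in this form (my own illustration: `spa_decode` stands for any off-the-shelf LLR-domain sum-product decoder, and the BPSK mapping 0 → −√P, 1 → +√P with unit noise variance is an assumption of mine, so the signs may need adapting to another mapping):

```python
import numpy as np

def llr_sum(y, h1, h2, P):
    """Initial LLRs for decoding the modulo sum s (first SPA pass)."""
    a, b = np.sqrt(P) * (h1 + h2), np.sqrt(P) * (h1 - h2)
    # log cosh written via logaddexp for numerical stability at large arguments
    logcosh = lambda t: np.logaddexp(t, -t) - np.log(2.0)
    return logcosh(a * y) - logcosh(b * y) - 2 * P * h1 * h2

def llr_user1(y, s_hat, h1, h2, P):
    """Initial LLRs for decoding u1 given the decoded sum (second SPA pass)."""
    a, b = np.sqrt(P) * (h1 + h2), np.sqrt(P) * (h1 - h2)
    return np.where(s_hat == 0, -2 * y * a, -2 * y * b)

# Two-step CFMA decoding with a generic SPA decoder (hypothetical function):
#   s_hat  = spa_decode(H_tilde, llr_sum(y, h1, h2, P))
#   u1_hat = spa_decode(H1, llr_user1(y, s_hat, h1, h2, P))
#   u2_hat = u1_hat ^ s_hat      # the second codeword follows from the sum
```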
Simulations: binary LDPC codes 25 / 33
Simulations: binary LDPC codes 26 / 33
Extension to higher rates
◮ How to support higher rates with off-the-shelf binary codes?
◮ CFMA with multilevel binary codes:
  ◮ Each u_i^(ℓ) is a codeword of a binary LDPC code; the levels are combined into a higher-order modulation.
  ◮ For each level ℓ, decode the sum s^(ℓ) and then u_1^(ℓ).
27 / 33
CFMA with multilevel binary codes ◮ Punchline: only single-user SPA is used! 28 / 33
Simulations: multilevel LDPC codes 29 / 33
Simulations: multilevel LDPC codes 30 / 33
CFMA summary
◮ “Theoretical” CFMA with nested lattice codes
  ◮ capacity achieving
  ◮ “single-user decoder” (lattice decoding)
  ◮ no time-sharing or rate-splitting needed
◮ “Practical” CFMA with binary LDPC codes
  ◮ “single-user decoder” (SPA)
  ◮ no time-sharing or rate-splitting needed
  ◮ see [Lim et al. ’18] for a performance characterization
◮ K-user case: complexity grows linearly (instead of exponentially) with K.
31 / 33
Take-home message
Two key components of CFMA:
◮ codes with algebraic structure (so that decoding the sum has the same complexity as decoding one codeword)
◮ an efficient decoding algorithm for the “base code”
We have shown:
◮ nested lattice codes + lattice decoding
◮ binary LDPC codes + sum-product algorithm
but the same methodology readily applies to, for example:
◮ convolutional/Turbo codes + the Viterbi algorithm
◮ polar codes + Arıkan’s successive cancellation decoder
◮ ...
Could be a candidate for NOMA technology?
32 / 33
References
◮ B. Nazer and M. Gastpar, “Compute-and-Forward: Harnessing Interference Through Structured Codes,” IEEE Trans. Inf. Theory, 2011.
◮ J. Zhu and M. Gastpar, “Gaussian Multiple Access via Compute-and-Forward,” IEEE Trans. Inf. Theory, 2017.
◮ E. Sula, J. Zhu, A. Pastore, S. H. Lim, and M. Gastpar, “Compute-Forward Multiple Access (CFMA): Practical Implementations,” IEEE Trans. Commun., 2018.
◮ S. H. Lim, C. Feng, A. Pastore, B. Nazer, and M. Gastpar, “A Joint Typicality Approach to Compute-Forward,” IEEE Trans. Inf. Theory, 2018.
33 / 33