Modulated Sparse Regression Codes, Kuan Hsieh and Ramji Venkataramanan - PowerPoint PPT Presentation


1. Modulated Sparse Regression Codes. Kuan Hsieh and Ramji Venkataramanan, University of Cambridge, UK. ISIT, June 2020.

2. Complex AWGN channel communication. $m$ data bits enter the Encoder, which outputs $x_1, \dots, x_n$; the channel adds noise $w_1, \dots, w_n$, i.i.d. $\sim \mathcal{CN}(0, \sigma^2)$; the Decoder observes $y = x + w$ and outputs the estimated data bits.
Rate: $R = \frac{m}{n}$. Power constraint: $\frac{1}{n} \sum_{i=1}^{n} |x_i|^2 \le P$. Channel capacity: $\mathcal{C} = \log\big(1 + \frac{P}{\sigma^2}\big)$.
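
As a quick numerical illustration of the rate/capacity comparison (a minimal sketch; the base-2 logarithm and the example snr are assumptions for illustration):

```python
import numpy as np

def awgn_capacity(P, sigma2):
    """Complex AWGN capacity, log2(1 + P/sigma^2), in bits per channel use."""
    return np.log2(1 + P / sigma2)

# e.g. at snr = P/sigma^2 = 15: C = log2(16) = 4 bits per channel use,
# so any achievable rate must satisfy R = m/n < 4
print(awgn_capacity(P=15.0, sigma2=1.0))  # 4.0
```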

3. Sparse regression codes (SPARCs) [Joseph and Barron '12].
Encoding: the codeword is $x = A\beta$, where $A$ is a design matrix with independent Gaussian entries and $\beta$ is a sparse message vector that encodes the data bits; $x = [x_1, \dots, x_n]^\top$.
Decoding: given $y = A\beta + w$, estimate $\hat{\beta}$.

4. SPARC encoding, $x = A\beta$. The message vector $\beta = [\, \dots, 0, 1, 0, \dots \mid 1, 0, 0, \dots \mid \dots \mid \dots, 0, 1, 0, \dots \,]^\top$ consists of $L$ sections (Section 1, Section 2, ..., Section $L$) of $M$ entries each; in each section, $\log_2 M$ bits determine the location of the single nonzero entry. $A$ has $n$ rows.
Rate: $R = \frac{L \log M}{n}$.
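
A minimal encoding sketch in Python (my own illustration, not the authors' code: the unit nonzero values and the i.i.d. $\mathcal{CN}(0, 1/L)$ design, which gives unit codeword power, are simplifying assumptions):

```python
import numpy as np

def sparc_encode(bits, L, M, n, rng):
    """Encode L*log2(M) bits into a SPARC codeword x = A @ beta."""
    logM = int(np.log2(M))
    assert len(bits) == L * logM
    beta = np.zeros(L * M, dtype=complex)
    for l in range(L):
        chunk = bits[l * logM:(l + 1) * logM]
        loc = int("".join(map(str, chunk)), 2)      # log2(M) bits -> nonzero location
        beta[l * M + loc] = 1.0                     # one nonzero (value 1) per section
    # i.i.d. complex Gaussian design, variance 1/L per entry => E|x_i|^2 = 1
    A = (rng.standard_normal((n, L * M))
         + 1j * rng.standard_normal((n, L * M))) / np.sqrt(2 * L)
    return A, beta, A @ beta

rng = np.random.default_rng(0)
L, M, n = 8, 16, 32                                 # rate R = L*log2(M)/n = 1 bit/dim
bits = rng.integers(0, 2, L * 4)
A, beta, x = sparc_encode(bits, L, M, n, rng)
```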

5. SPARC decoding. Given $y = A\beta + w$, estimate $\hat{\beta}$.
Section Error Rate (SER): $\frac{1}{L} \sum_{\ell=1}^{L} \mathbb{1}\{\hat{\beta}_\ell \ne \beta_\ell\}$.

6. Previous results on (unmodulated) SPARCs.
- Maximum likelihood decoding [Joseph and Barron '12]
- Matrix designs + efficient decoding:
  - Power allocation: Adaptive Successive Hard-thresholding [Joseph and Barron '14]; Adaptive Successive Soft-thresholding [Cho and Barron '13]; Approximate Message Passing [Rush, Greig and Venkataramanan '17]
  - Spatial coupling: Approximate Message Passing [Barbier et al. '14-'19] [Barbier and Krzakala '17] [Rush, Hsieh and Venkataramanan '18, '19, '20]

7. Spatial coupling. The design matrix $A$ (of size $n \times LM$) is partitioned into column blocks $c = 1, \dots, C$, with $\beta$ partitioned into corresponding blocks $\beta_c$. [Felstrom and Zigangirov '99] [Kudekar and Pfister '10] [Barbier, Schülke and Krzakala '13, '15]

8. Spatial coupling via a base matrix. An $R \times C$ base matrix $W$ specifies the variance profile of $A$: $A_{ij} \sim \mathcal{CN}\big(0, \frac{1}{L} W_{r(i), c(j)}\big)$, where $r(i)$ and $c(j)$ index the row and column blocks containing entry $(i, j)$. [Thorpe '03] [Mitchell, Lentmaier, and Costello '15] [Liang, Ma and Ping '17]
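
A sketch of sampling a spatially coupled design matrix from a given base matrix $W$ (assuming, for simplicity, that $n$ is divisible by $R$ and $L$ by $C$, so all blocks have equal size):

```python
import numpy as np

def sample_sc_design(W, n, L, M, rng):
    """Sample A with A_ij ~ CN(0, W[r(i), c(j)] / L) for an R x C base matrix W."""
    R, C = W.shape
    nr = n // R               # rows per row block
    Lc = (L // C) * M         # columns per column block
    # expand W to the full n x LM variance profile, then scale Gaussian noise
    std = np.sqrt(np.repeat(np.repeat(W, nr, axis=0), Lc, axis=1) / (2 * L))
    return std * (rng.standard_normal((n, L * M))
                  + 1j * rng.standard_normal((n, L * M)))
```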

9. Modulated SPARC encoding, $x = A\beta$. The message vector $\beta = [\, \dots, 0, a_1, 0, \dots \mid a_2, 0, 0, \dots \mid \dots \mid \dots, 0, a_L, 0, \dots \,]^\top$ has $M$ entries per section: $\log_2 M$ bits determine the location of the nonzero entry, and $\log_2 K$ bits determine its value $a_\ell \in \{c_1, \dots, c_K\}$, a $K$-ary Phase Shift Keying (PSK) constellation (e.g. 8-PSK: $c_1, \dots, c_8$ equally spaced on the unit circle of the complex plane).
Rate: $R = \frac{L \log(KM)}{n}$.
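
Extending the earlier encoding sketch to the modulated case (a sketch; the natural binary bit-to-symbol map is an arbitrary assumption):

```python
import numpy as np

def psk_modulated_beta(bits, L, M, K):
    """Per section: log2(M) bits choose the location, log2(K) bits choose the PSK phase."""
    logM, logK = int(np.log2(M)), int(np.log2(K))
    constellation = np.exp(2j * np.pi * np.arange(K) / K)   # K-PSK symbols c_1, ..., c_K
    per_section = logM + logK
    assert len(bits) == L * per_section
    beta = np.zeros(L * M, dtype=complex)
    for l in range(L):
        chunk = bits[l * per_section:(l + 1) * per_section]
        loc = int("".join(map(str, chunk[:logM])), 2)
        sym = int("".join(map(str, chunk[logM:])), 2)
        beta[l * M + loc] = constellation[sym]
    return beta
```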

10. AMP decoding. Given $y = A\beta + w$, initialise $\hat{\beta}^0$ to the all-zero vector. For $t = 0, 1, 2, \dots$:
$$z^t = y - A\hat{\beta}^t + \upsilon^t \odot z^{t-1},$$
$$\hat{\beta}^{t+1} = \eta\big(\hat{\beta}^t + (S^t \odot A)^* z^t,\ \tau^t\big),$$
where $\tau^t$ is the effective noise variance and the input $\hat{\beta}^t + (S^t \odot A)^* z^t \approx \beta + $ Gaussian noise. The denoiser is the Bayes-optimal estimator
$$\eta_j(s, \tau) = \mathbb{E}\big[\beta_j \mid \beta + \sqrt{\tau}\, u = s\big],$$
with $u$ a standard normal random vector. State evolution predicts $\|\hat{\beta}^t - \beta\|^2$.
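
A toy end-to-end simulation of this decoder for an i.i.d. (uncoupled) design, in which $\upsilon^t$ and $S^t$ reduce to scalars. The scalar Onsager coefficient and the empirical estimate $\tau^t = \|z^t\|^2/n$ are standard AMP heuristics, not taken from the paper, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
L, M, K, n, sigma2 = 64, 32, 4, 600, 0.25     # R = L*log2(K*M)/n ~ 0.75 bits/dim
N = L * M
b0 = np.sqrt(n / L)                           # nonzero magnitude => unit codeword power
constellation = np.exp(2j * np.pi * np.arange(K) / K)

# Random modulated-SPARC instance: one PSK-valued nonzero per section of M entries.
locs, syms = rng.integers(0, M, L), rng.integers(0, K, L)
beta = np.zeros(N, dtype=complex)
beta[np.arange(L) * M + locs] = b0 * constellation[syms]
A = (rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))) / np.sqrt(2 * n)
y = A @ beta + np.sqrt(sigma2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

def eta(s, tau):
    """Bayes-optimal denoiser: sectionwise posterior mean given s = beta + sqrt(tau)*u."""
    # log-posterior of (location j, symbol k), up to a constant: 2*b0*Re(conj(c_k) s_j)/tau
    logp = 2 * b0 * np.real(np.conj(constellation)[None, None, :]
                            * s.reshape(L, M)[:, :, None]) / tau
    logp -= logp.max(axis=(1, 2), keepdims=True)            # numerical stabilisation
    p = np.exp(logp)
    p /= p.sum(axis=(1, 2), keepdims=True)
    return b0 * (p * constellation[None, None, :]).sum(axis=2).reshape(N)

beta_hat = np.zeros(N, dtype=complex)
z = np.zeros(n, dtype=complex)
tau = None
for t in range(25):
    # scalar Onsager correction (plays the role of upsilon^t for an i.i.d. design)
    ons = z * (1.0 - np.sum(np.abs(beta_hat) ** 2) / n) / tau if tau is not None else 0.0
    z = y - A @ beta_hat + ons
    tau = np.sum(np.abs(z) ** 2) / n                        # effective noise variance
    s = beta_hat + A.conj().T @ z                           # ~ beta + Gaussian noise
    beta_hat = eta(s, tau)

# Hard decisions (posterior argmax per section) and section error rate
scores = np.real(np.conj(constellation)[None, None, :] * s.reshape(L, M)[:, :, None])
loc_hat, sym_hat = np.unravel_index(scores.reshape(L, M * K).argmax(axis=1), (M, K))
print("SER:", np.mean((loc_hat != locs) | (sym_hat != syms)))
```

With these toy parameters the rate is well below capacity, so the printed SER is typically 0.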

11. State evolution for K-PSK modulated SPARCs. For large $L$ and $n$, $\psi_c^t \approx \frac{\|\hat{\beta}_c^t - \beta_c\|^2}{L/C}$ for each column block $c = 1, \dots, C$ (e.g. for an $R \times C$ base matrix $W$). The recursion: initialise $\psi_c^0 = 1$ for $c = 1, \dots, C$. For $t = 0, 1, 2, \dots$:
$$\phi_r^t = \sigma^2 + \frac{1}{C} \sum_{c=1}^{C} W_{rc}\, \psi_c^t,$$
$$\tau_c^t = \frac{R/2}{\log(KM)} \Big[ \frac{1}{R} \sum_{r=1}^{R} \frac{W_{rc}}{\phi_r^t} \Big]^{-1},$$
$$\psi_c^{t+1} = \mathrm{mmse}_\beta(\tau_c^t)$$
(in the prefactor $R$ denotes the rate; the sums run over the $R$ rows of $W$).
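
The function $\mathrm{mmse}_\beta(\tau)$ is an expectation over a single section and can be estimated by Monte Carlo; a sketch (unit-magnitude nonzero, placed at position 0 by symmetry; normalization assumptions as in the earlier snippets):

```python
import numpy as np

def mmse_beta(tau, M, K, n_mc=2000, rng=None):
    """Monte Carlo estimate of E || eta(beta + sqrt(tau)*u, tau) - beta ||^2
    for one section: a unit-magnitude K-PSK nonzero and u ~ CN(0, I_M)."""
    rng = rng or np.random.default_rng(0)
    c = np.exp(2j * np.pi * np.arange(K) / K)
    err = 0.0
    for _ in range(n_mc):
        beta = np.zeros(M, dtype=complex)
        beta[0] = c[rng.integers(K)]
        s = beta + np.sqrt(tau / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
        logp = 2 * np.real(np.conj(c)[None, :] * s[:, None]) / tau   # (M, K) log-posterior
        logp -= logp.max()
        p = np.exp(logp)
        p /= p.sum()
        eta = (p * c[None, :]).sum(axis=1)                           # posterior mean
        err += np.sum(np.abs(eta - beta) ** 2)
    return err / n_mc
```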

12. Main result. For $\delta \in (0, \frac{1}{2})$ and $\nu_c^t = \frac{1}{\tau_c^t \log(KM)}$,
$$\psi_c^{t+1} \le \begin{cases} \dfrac{K\,(KM)^{-\alpha_1 \delta^2}}{\delta \sqrt{\log(KM)}} & \text{if } \nu_c^t > 2 + \delta, \\[1ex] 1 + \dfrac{K\,(KM)^{-\alpha_2 \nu_c^t}}{\sqrt{\nu_c^t \log(KM)}} & \text{otherwise}. \end{cases}$$
For fixed $K$, as $M \to \infty$ the first bound tends to 0 and the second tends to 1.

13. Asymptotic SE for K-PSK modulated SPARCs. For fixed $K$, as $M \to \infty$ the state evolution simplifies to: initialise $\psi_c^0 = 1$ for $c = 1, \dots, C$. For $t = 0, 1, 2, \dots$:
$$\phi_r^t = \sigma^2 + \frac{1}{C} \sum_{c=1}^{C} W_{rc}\, \psi_c^t, \qquad \psi_c^{t+1} = \mathbb{1}\Big\{ \frac{1}{R} \sum_{r=1}^{R} \frac{W_{rc}}{\phi_r^t} \le R \Big\}.$$
The limit does not depend on $K$.
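
This limiting recursion is easy to run directly; a sketch, using the $(\omega, \Lambda)$ band-diagonal base matrix described on the next slide (the normalization $W_{rc} = R/\omega$ on the nonzero band is my assumption, chosen so each column averages to unit power):

```python
import numpy as np

def base_matrix(omega, Lam, P=1.0):
    """(omega, Lam) base matrix: R = Lam + omega - 1 rows, Lam columns,
    with omega nonzero entries P*R/omega in each column (assumed normalization)."""
    R = Lam + omega - 1
    W = np.zeros((R, Lam))
    for c in range(Lam):
        W[c:c + omega, c] = P * R / omega
    return W

def asymptotic_se(W, sigma2, rate, max_iter=200):
    """Iterate the asymptotic state evolution; returns the fixed point psi (0 = decodable)."""
    Rr, C = W.shape
    psi = np.ones(C)
    for _ in range(max_iter):
        phi = sigma2 + (W @ psi) / C                 # phi_r = sigma^2 + (1/C) sum_c W_rc psi_c
        snr_eff = (W / phi[:, None]).mean(axis=0)    # (1/R) sum_r W_rc / phi_r
        psi_new = (snr_eff <= rate).astype(float)    # indicator update
        if np.array_equal(psi_new, psi):
            break
        psi = psi_new
    return psi

# e.g. omega = 6, Lam = 32 as in the simulations; psi converges to all zeros
W = base_matrix(6, 32)
print(asymptotic_se(W, sigma2=0.25, rate=1.0))
```

Running this shows the decoding wave characteristic of spatial coupling: the edge blocks decode first, then $\psi_c = 0$ propagates inward one block per iteration.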

14. Theorem for K-PSK modulated SPARCs. Consider a $K$-PSK modulated complex SPARC constructed with an $(\omega, \Lambda)$ base matrix $W$ (with $C = \Lambda$ columns and $R = \Lambda + \omega - 1$ rows), with $\omega > \omega^*$ and rate satisfying $R < \tilde{\mathcal{C}} := \mathcal{C} / \big(1 + \frac{\omega - 1}{\Lambda}\big)$. As $n \to \infty$, the SER of the AMP decoder after $T$ iterations converges to 0 almost surely, where
$$T \propto \frac{\Lambda}{2\omega(\tilde{\mathcal{C}} - R)}.$$
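
To make the rate loss concrete (a quick check; the capacity value is arbitrary, and $\omega = 6$, $\Lambda = 32$ are the values used in the simulations below):

```python
def effective_capacity(C, omega, Lam):
    """C-tilde = C / (1 + (omega - 1)/Lam): capacity reduced by the
    spatial-coupling rate loss factor Lam / (Lam + omega - 1)."""
    return C / (1 + (omega - 1) / Lam)

# omega = 6, Lam = 32: loss factor 32/37, about a 13.5% reduction
print(effective_capacity(C=2.0, omega=6, Lam=32))  # 1.7297...
```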

15. Steps of proof.
1. The error rate of AMP is accurately predicted by state evolution for large code lengths, by extending results in [Rush, Hsieh and Venkataramanan '20].
2. For any $R < \mathcal{C}$, state evolution predicts vanishing error probability in the large system limit:
   A. The asymptotic state evolution is the same for any $K$ (shown in this work).
   B. The asymptotic state evolution analysis carries over from unmodulated ($K = 1$) SPARCs (shown in [Rush, Hsieh and Venkataramanan '18, '20]).

16. Simulation results. [Plots: bit error rate and codeword error rate.] Modulated SPARC: $R = \frac{L \log(KM)}{n} = 1.6$ bits/dim., $n \approx 2000$, $L = 960$, $\omega = 6$, $\Lambda = 32$. Baseline coded modulation: (6480, 16200) LDPC code from the DVB-S2 standard + 256-QAM.
