  1. Simplified Implementation of the MAP Decoder Shouvik Ganguly ECE 259B Final Project Presentation

  2. Introduction: MAP Decoder
     ◮ $\hat{u}_k = \arg\max_{i \in \{0,1\}} \Pr[u_k = i \mid R_1^N]$
     ◮ LAPPR: $\Lambda_k = \log \frac{\Pr[u_k = 1 \mid R_1^N]}{\Pr[u_k = 0 \mid R_1^N]} = \log \frac{\Pr[u_k = 1, R_1^N]}{\Pr[u_k = 0, R_1^N]}$
     ◮ MAP rule: $\hat{u}_k = \begin{cases} 1, & \Lambda_k \ge 0 \\ 0, & \text{otherwise} \end{cases}$
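As a minimal illustration (not part of the slides), the MAP rule amounts to thresholding the LAPPR at zero. A Python sketch, assuming the $\Lambda_k$ values have already been computed; the values shown are purely illustrative:

```python
import numpy as np

def map_decisions(lappr):
    """Hard decisions from LAPPRs: u_hat_k = 1 if Lambda_k >= 0, else 0."""
    return (np.asarray(lappr, dtype=float) >= 0).astype(int)

# Illustrative LAPPR values only
print(map_decisions([2.3, -0.7, 0.0, -4.1]))  # -> [1 0 1 0]
```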

  3. BCJR Algorithm [2], [3]
     ◮ $\Lambda_k = \log \frac{\sum_{m' \in S} \sum_{m \in S} \alpha_{k-1}(m')\, \gamma^1_k(m', m)\, \beta_k(m)}{\sum_{m' \in S} \sum_{m \in S} \alpha_{k-1}(m')\, \gamma^0_k(m', m)\, \beta_k(m)}$
     where
     ◮ $S$ = set of states of the trellis, $S_k$ = state of the trellis after the $k$-th input
     ◮ $\alpha_k(m) = \Pr[S_k = m, R_1^k]$
     ◮ $\beta_k(m) = \Pr[R_{k+1}^N \mid S_k = m]$
     ◮ $\gamma^i_k(m', m) = \Pr[u_k = i, S_k = m, R_k \mid S_{k-1} = m']$

  4. BCJR Algorithm: Forward and Backward Recursions
     ◮ $\alpha_k(m) = \sum_{m' \in S} \sum_{i=0}^{1} \alpha_{k-1}(m')\, \gamma^i_k(m', m)$
     ◮ $\beta_k(m) = \sum_{m' \in S} \sum_{i=0}^{1} \beta_{k+1}(m')\, \gamma^i_{k+1}(m, m')$
     ◮ $\gamma^i_k(m', m) = \Pr[u_k = i] \cdot \Pr[S_k = m \mid S_{k-1} = m', u_k = i] \cdot \Pr[R_k \mid S_{k-1} = m', u_k = i, S_k = m]$
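These recursions translate almost directly into code. The following probability-domain sketch assumes the branch metrics $\gamma^i_k(m', m)$ are supplied as a NumPy array `gamma[k, i, m', m]` and that the trellis starts and ends in known states; the array layout is an illustrative choice, and per-step normalization (see slide 9) is omitted for brevity:

```python
import numpy as np

def bcjr_lappr(gamma, start_state=0, end_state=0):
    """Probability-domain BCJR (slides 3-4) given precomputed branch metrics.

    gamma[k, i, mp, m] = Pr[u_{k+1} = i, S_{k+1} = m, R_{k+1} | S_k = mp].
    Returns Lambda[k], the LAPPR of each input bit.
    """
    N, _, S, _ = gamma.shape
    alpha = np.zeros((N + 1, S)); alpha[0, start_state] = 1.0
    beta = np.zeros((N + 1, S));  beta[N, end_state] = 1.0

    # Forward: alpha_k(m) = sum_{m', i} alpha_{k-1}(m') * gamma^i_k(m', m)
    for k in range(1, N + 1):
        alpha[k] = (alpha[k - 1][:, None] * gamma[k - 1].sum(axis=0)).sum(axis=0)

    # Backward: beta_k(m) = sum_{m', i} beta_{k+1}(m') * gamma^i_{k+1}(m, m')
    for k in range(N - 1, -1, -1):
        beta[k] = (gamma[k].sum(axis=0) * beta[k + 1][None, :]).sum(axis=1)

    # LAPPR: ratio of the sums over branches labelled u_k = 1 vs. u_k = 0
    lam = np.empty(N)
    for k in range(N):
        num = (alpha[k][:, None] * gamma[k, 1] * beta[k + 1][None, :]).sum()
        den = (alpha[k][:, None] * gamma[k, 0] * beta[k + 1][None, :]).sum()
        lam[k] = np.log(num / den)
    return lam
```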

  5. BCJR Algorithm: Computation and Memory Requirements
     ◮ $O(|S|^2)$ multiplications and additions to compute the metrics for each $k$
     ◮ $O(|S|^2)$ multiplications and additions to compute the LAPPR for each $k$
     ◮ The forward metric must be stored for every $k$
     Problematic for large block lengths and codes with larger memory.

  6. A New 'Maximum' Function
     ◮ $\max^*(x, y) \triangleq \log(e^x + e^y) = \max(x, y) + \log(1 + e^{-|y - x|})$
     ◮ Key insight: $\max^*$ can be approximated by $\max$
     ◮ $\max^*(x, y, z) \triangleq \max^*(\max^*(x, y), z) = \log(e^x + e^y + e^z)$, and so on.
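A small sketch of $\max^*$ and its chained form; the approximation used in the following slides simply drops the correction term:

```python
import numpy as np

def max_star(x, y):
    """max*(x, y) = log(e^x + e^y) = max(x, y) + log(1 + e^(-|y - x|))."""
    return max(x, y) + np.log1p(np.exp(-abs(y - x)))

def max_star_n(values):
    """Chain max* pairwise: max*(x, y, z) = max*(max*(x, y), z), and so on."""
    acc = values[0]
    for v in values[1:]:
        acc = max_star(acc, v)
    return acc

# Cross-check against log-sum-exp, and against the plain max approximation
print(max_star(1.0, 2.0), np.logaddexp(1.0, 2.0), max(1.0, 2.0))
```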

  7. BCJR Algorithm: Simplification of Computation [1]
     ◮ Define $\tilde{a}_k(m) \triangleq \log \alpha_k(m)$, $\tilde{b}_k(m) \triangleq \log \beta_k(m)$, $\tilde{c}_{i,k}(m', m) \triangleq \log \gamma^i_k(m', m)$
     ◮ $\Lambda_k \approx \max_{m, m' \in S} [\tilde{c}_{1,k}(m', m) + \tilde{a}_{k-1}(m') + \tilde{b}_k(m)] - \max_{m, m' \in S} [\tilde{c}_{0,k}(m', m) + \tilde{a}_{k-1}(m') + \tilde{b}_k(m)]$
     ◮ $\tilde{a}_k(m) \approx \max_{m' \in S,\, i \in \{0,1\}} [\tilde{a}_{k-1}(m') + \tilde{c}_{i,k}(m', m)]$
     ◮ $\tilde{b}_j(m) \approx \max_{m' \in S,\, i \in \{0,1\}} [\tilde{b}_{j+1}(m') + \tilde{c}_{i,j+1}(m, m')]$

  8. BCJR Algorithm: Simplification of Computation
     ◮ $\tilde{c}_{i,k}(m', m) = \log \Pr[u_k = i] + \log \Pr[S_k = m \mid S_{k-1} = m', u_k = i] + \log \Pr[R_k \mid S_{k-1} = m', u_k = i, S_k = m]$
     ◮ Initializations:
       $\tilde{a}_0(m) = \begin{cases} 0, & m = s_0 \\ -\infty, & \text{otherwise} \end{cases}$
       $\tilde{b}_N(m) = \begin{cases} 0, & m = s_N \\ -\infty, & \text{otherwise} \end{cases}$
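Slides 7 and 8 together give the max-log-MAP recursions. The sketch below assumes the log-domain branch metrics are supplied as an array `c[k, i, m', m]` and uses a large negative constant as a finite stand-in for $-\infty$; both are illustrative choices, not prescribed by the slides:

```python
import numpy as np

NEG_INF = -1e30  # finite stand-in for -infinity

def max_log_map(c, start_state=0, end_state=0):
    """Max-log-MAP: log-domain metrics with max in place of max* (slides 7-8).

    c[k, i, mp, m] = log gamma^i_{k+1}(mp, m), k = 0..N-1.
    """
    N, _, S, _ = c.shape
    a = np.full((N + 1, S), NEG_INF); a[0, start_state] = 0.0  # a_0 initialization
    b = np.full((N + 1, S), NEG_INF); b[N, end_state] = 0.0    # b_N initialization

    # Forward: a_k(m) ~ max_{m', i} [ a_{k-1}(m') + c_{i,k}(m', m) ]
    for k in range(1, N + 1):
        a[k] = (a[k - 1][None, :, None] + c[k - 1]).max(axis=(0, 1))

    # Backward: b_j(m) ~ max_{m', i} [ b_{j+1}(m') + c_{i,j+1}(m, m') ]
    for k in range(N - 1, -1, -1):
        b[k] = (c[k] + b[k + 1][None, None, :]).max(axis=(0, 2))

    # Dual maxima: the LAPPR is the difference of the two maxima over all branches
    lam = np.empty(N)
    for k in range(N):
        m1 = (a[k][:, None] + c[k, 1] + b[k + 1][None, :]).max()
        m0 = (a[k][:, None] + c[k, 0] + b[k + 1][None, :]).max()
        lam[k] = m1 - m0
    return lam
```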

  9. BCJR Algorithm: Simplification of Computation
     ◮ The forward and backward metrics can be computed similarly to the Viterbi algorithm (VA)
     ◮ The problem of normalization at each step is solved
     Memory requirements remain a problem.

  10. BCJR Algorithm: Reducing Memory Requirements [1]
      ◮ Status so far:
        - Forward metric stored for all stages up to N
        - Backward metric stored for one stage at a time for the 'dual-maxima' processor
        - Decoded vector output only after time N (after the whole input is seen)
      ◮ A larger block length increases the memory requirement

  11. BCJR Algorithm: Reducing Memory Requirements
      ◮ Key idea: the behavior of the VA is nearly independent of its initial conditions beyond a few constraint lengths
      ◮ Use two backward decoders in tandem
      ◮ L = 'learning period'; received symbols are delayed by 2L

  12. BCJR Algorithm: Reducing Memory Requirements
      ◮ The forward decoder starts at branch 0 at time 2L
      ◮ The forward decoder stores every branch metric for each time
      ◮ Time 2L: the first backward decoder starts backwards from branch 2L, storing only the most recent metric, until branch L
      ◮ Time 3L: the first backward decoder meets the computed forward metric at branch L

  13. BCJR Algorithm: Reducing Memory Requirements
      ◮ Time 3L to time 4L: the first backward decoder moves down to branch 0 and the dual-maxima processor outputs soft decisions for the first L branches
      ◮ Time 3L: the second backward decoder starts backwards from branch 3L, storing only the most recent metric, until branch 2L
      ◮ Time 4L: the second backward decoder meets the computed forward metric at branch 2L

  14. BCJR Algorithm: Reducing Memory Requirements
      ◮ Time 4L to time 5L: the second backward decoder moves down to branch L and the dual-maxima processor outputs soft decisions for branches L through 2L
      ◮ The two backward processors hop forward 4L branches every time 2L sets of backward state metrics have been generated
      ◮ The dual-maxima processor is time-shared between them

  15. BCJR Algorithm: Reducing Memory Requirements
      ◮ State metrics for only 2L branches are stored by the first decoder
      ◮ Soft decisions for the first 2L branches are generated at time 5L
      ◮ Four times the complexity of a simple VA for the same convolutional code
      A schematic sketch of this scheduling is given below.
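The scheduling described on slides 11–15 can be summarized as a timeline. The sketch below is purely schematic: it prints which decoder touches which branches at which multiples of L, without any metric computations; the number of windows and the value of L are arbitrary choices for the example:

```python
def sliding_window_schedule(n_windows, L):
    """Print the two-backward-decoder timeline of slides 11-15 (times in units of L)."""
    events = [(2, "forward decoder starts at branch 0, storing every branch metric")]
    for j in range(n_windows):
        t = 2 + j                       # backward decoder (j % 2 + 1) starts at time (2+j)L
        top, mid, bot = (j + 2) * L, (j + 1) * L, j * L
        dec = f"backward decoder {j % 2 + 1}"
        events.append((t, f"{dec} runs from branch {top} down to {mid} (learning period)"))
        events.append((t + 1, f"{dec} continues from branch {mid} down to {bot}; "
                              f"dual-maxima processor outputs branches {bot}..{mid}"))
    for t, what in sorted(events):
        print(f"time {t}L: {what}")

sliding_window_schedule(n_windows=4, L=32)
```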

  16. Schematic Representation Figure: Scheduling [1]

  17. Results: block length 10,000

  18. References I
      [1] A. Viterbi, "An Intuitive Justification and a Simplified Implementation of the MAP Decoder for Convolutional Codes," IEEE Journal on Selected Areas in Communications, vol. 16, no. 2, pp. 261–264, Feb. 1998.
      [2] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Transactions on Information Theory, vol. 20, no. 2, pp. 284–287, 1974.

  19. References II
      [3] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes," Proc. IEEE International Conference on Communications, vol. 2, pp. 1064–1070, May 1993.
