Optimal Receiver for the AWGN Channel


  1. Optimal Receiver for the AWGN Channel. Saravanan Vijayakumaran, sarva@ee.iitb.ac.in, Department of Electrical Engineering, Indian Institute of Technology Bombay. August 31, 2012.

  2. Additive White Gaussian Noise Channel
     y(t) = s(t) + n(t)
     s(t): transmitted signal
     y(t): received signal
     n(t): white Gaussian noise with PSD S_n(f) = N_0/2 = σ² and autocorrelation R_n(τ) = σ² δ(τ)

  3. M-ary Signaling in AWGN Channel
     • One of M continuous-time signals s_1(t), ..., s_M(t) is sent
     • The received signal is the transmitted signal corrupted by AWGN
     • M hypotheses with prior probabilities π_i, i = 1, ..., M:
         H_1: y(t) = s_1(t) + n(t)
         H_2: y(t) = s_2(t) + n(t)
         ⋮
         H_M: y(t) = s_M(t) + n(t)
     • Random variables are easier to handle than random processes
     • We derive an equivalent M-ary hypothesis testing problem involving only random variables

  4. White Gaussian Noise through Correlators
     • Consider the output of a correlator with WGN input
         Z = ∫_{−∞}^{∞} n(t) u(t) dt = ⟨n, u⟩
       where u(t) is a deterministic finite-energy signal
     • Z is a Gaussian random variable
     • The mean of Z is
         E[Z] = ∫_{−∞}^{∞} E[n(t)] u(t) dt = 0
     • The variance of Z is
         var[Z] = σ² ‖u‖²
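The mean and variance claims above can be checked with a quick Monte-Carlo sketch. The discrete-time approximation uses noise samples of variance σ²/Δt so that the Riemann sum approximates the correlator integral; the signal u(t), step size, and noise level below are illustrative choices, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1e-3
t = np.arange(0, 1, dt)
u = np.sin(2 * np.pi * 5 * t)   # an arbitrary deterministic finite-energy signal
sigma2 = 2.0

# Discrete-time WGN: samples ~ N(0, sigma2/dt), so the Riemann sum
# Z = sum_k n_k u_k dt approximates Z = ∫ n(t) u(t) dt
trials = 20000
n = rng.normal(0.0, np.sqrt(sigma2 / dt), size=(trials, t.size))
Z = (n * u).sum(axis=1) * dt

energy = (u ** 2).sum() * dt     # ‖u‖², here ≈ 0.5
print(Z.mean())                  # ≈ 0, matching E[Z] = 0
print(Z.var(), sigma2 * energy)  # sample variance ≈ σ²‖u‖²
```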

  5. White Gaussian Noise through Correlators
     Proposition. Let u_1(t) and u_2(t) be linearly independent finite-energy signals and let n(t) be WGN with PSD S_n(f) = σ². Then ⟨n, u_1⟩ and ⟨n, u_2⟩ are jointly Gaussian with covariance
         cov(⟨n, u_1⟩, ⟨n, u_2⟩) = σ² ⟨u_1, u_2⟩.
     Proof. To prove that ⟨n, u_1⟩ and ⟨n, u_2⟩ are jointly Gaussian, consider a non-trivial linear combination a⟨n, u_1⟩ + b⟨n, u_2⟩:
         a⟨n, u_1⟩ + b⟨n, u_2⟩ = ∫ n(t) [a u_1(t) + b u_2(t)] dt
     which is Gaussian, being the correlator output for the finite-energy signal a u_1(t) + b u_2(t).

  6. White Gaussian Noise through Correlators
     Proof (continued).
         cov(⟨n, u_1⟩, ⟨n, u_2⟩) = E[⟨n, u_1⟩⟨n, u_2⟩]
             = E[∫ n(t) u_1(t) dt ∫ n(s) u_2(s) ds]
             = ∫∫ u_1(t) u_2(s) E[n(t) n(s)] dt ds
             = ∫∫ u_1(t) u_2(s) σ² δ(t − s) dt ds
             = σ² ∫ u_1(t) u_2(t) dt
             = σ² ⟨u_1, u_2⟩
     If u_1(t) and u_2(t) are orthogonal, ⟨n, u_1⟩ and ⟨n, u_2⟩ are independent.
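The covariance identity also checks out numerically; again a sketch with arbitrary test signals, where u_2 is deliberately chosen non-orthogonal to u_1 so the covariance is nonzero:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 1e-3
t = np.arange(0, 1, dt)
u1 = np.sin(2 * np.pi * 3 * t)
u2 = np.sin(2 * np.pi * 3 * t) + np.cos(2 * np.pi * 7 * t)  # ⟨u1, u2⟩ = 0.5
sigma2 = 1.0

# Correlate many independent WGN realizations against both signals
trials = 20000
n = rng.normal(0.0, np.sqrt(sigma2 / dt), size=(trials, t.size))
Z1 = (n * u1).sum(axis=1) * dt   # ⟨n, u1⟩, one value per trial
Z2 = (n * u2).sum(axis=1) * dt   # ⟨n, u2⟩, one value per trial

inner = (u1 * u2).sum() * dt
cov_hat = np.cov(Z1, Z2)[0, 1]
print(cov_hat, sigma2 * inner)   # sample covariance ≈ σ²⟨u1, u2⟩
```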

  7. Restriction to Signal Space is Optimal
     Theorem. For the M-ary hypothesis testing problem given by
         H_1: y(t) = s_1(t) + n(t)
         ⋮
         H_M: y(t) = s_M(t) + n(t)
     there is no loss in detection performance by using the optimal decision rule for the following M-ary hypothesis testing problem:
         H_1: Y = s_1 + N
         ⋮
         H_M: Y = s_M + N
     where Y, s_i and N are the projections of y(t), s_i(t) and n(t) respectively onto the signal space spanned by {s_i(t)}.

  8. Projections onto Signal Space
     • Consider an orthonormal basis {ψ_i | i = 1, ..., K} for the space spanned by {s_i(t) | i = 1, ..., M}
     • Projection of s_i(t) onto the signal space is
         s_i = [⟨s_i, ψ_1⟩ ··· ⟨s_i, ψ_K⟩]^T
     • Projection of n(t) onto the signal space is
         N = [⟨n, ψ_1⟩ ··· ⟨n, ψ_K⟩]^T
     • Projection of y(t) onto the signal space is
         Y = [⟨y, ψ_1⟩ ··· ⟨y, ψ_K⟩]^T
     • Component of y(t) orthogonal to the signal space is
         y⊥(t) = y(t) − Σ_{i=1}^{K} ⟨y, ψ_i⟩ ψ_i(t)
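Given sampled waveforms, an orthonormal basis and the vector representations can be computed by Gram-Schmidt; a sketch with a made-up three-signal set that spans a two-dimensional space (the signals are illustrative, not the slide's):

```python
import numpy as np

dt = 1e-2
t = np.arange(0, 1, dt)
# Illustrative signal set: two half-interval pulses and their sum
signals = np.stack([np.where(t < 0.5, 1.0, 0.0),
                    np.where(t >= 0.5, 1.0, 0.0),
                    np.ones_like(t)])

# Gram-Schmidt on the sampled waveforms, inner product ⟨f, g⟩ ≈ Σ f g Δt
basis = []
for s in signals:
    r = s - sum((v * s).sum() * dt * v for v in basis)
    norm = np.sqrt((r * r).sum() * dt)
    if norm > 1e-9:              # skip signals already in the span
        basis.append(r / norm)
B = np.stack(basis)              # K x len(t); rows are ψ_1, ..., ψ_K

# s_i = [⟨s_i, ψ_1⟩ ... ⟨s_i, ψ_K⟩]^T, computed for all i at once
vectors = signals @ B.T * dt
print(B.shape[0])                # K = 2: the third signal adds no dimension
print(vectors)
```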

  9. Proof of Theorem
     y(t) is equivalent to (Y, y⊥(t)). We will show that y⊥(t) is an irrelevant statistic. Under hypothesis H_i,
         y⊥(t) = y(t) − Σ_{j=1}^{K} ⟨y, ψ_j⟩ ψ_j(t)
               = s_i(t) + n(t) − Σ_{j=1}^{K} ⟨s_i + n, ψ_j⟩ ψ_j(t)
               = n(t) − Σ_{j=1}^{K} ⟨n, ψ_j⟩ ψ_j(t)   (since s_i(t) lies in the signal space)
               = n⊥(t)
     where n⊥(t) is the component of n(t) orthogonal to the signal space. n⊥(t) is independent of which s_i(t) was transmitted.

  10. Proof of Theorem
      To prove y⊥(t) is irrelevant, it is enough to show that n⊥(t) is independent of Y. For a given t and k,
          cov(n⊥(t), N_k) = E[n⊥(t) N_k]
              = E[(n(t) − Σ_{j=1}^{K} N_j ψ_j(t)) N_k]
              = E[n(t) N_k] − Σ_{j=1}^{K} E[N_j N_k] ψ_j(t)
              = σ² ψ_k(t) − σ² ψ_k(t) = 0

  11. M-ary Signaling in AWGN Channel
      • M hypotheses with prior probabilities π_i, i = 1, ..., M
          H_1: Y = s_1 + N
          ⋮
          H_M: Y = s_M + N
        where
          Y = [⟨y, ψ_1⟩ ··· ⟨y, ψ_K⟩]^T
          s_i = [⟨s_i, ψ_1⟩ ··· ⟨s_i, ψ_K⟩]^T
          N = [⟨n, ψ_1⟩ ··· ⟨n, ψ_K⟩]^T
      • N ∼ N(m, C) with m = 0 and C = σ²I, since
          cov(⟨n, ψ_i⟩, ⟨n, ψ_j⟩) = σ² ⟨ψ_i, ψ_j⟩
        and the ψ_i are orthonormal.

  12. Optimal Receiver for the AWGN Channel
      Theorem (MPE Decision Rule). The MPE decision rule for M-ary signaling in the AWGN channel is given by
          δ_MPE(y) = argmin_{1 ≤ i ≤ M} ‖y − s_i‖² − 2σ² log π_i
                   = argmax_{1 ≤ i ≤ M} ⟨y, s_i⟩ − ‖s_i‖²/2 + σ² log π_i
      Proof.
          δ_MPE(y) = argmax_{1 ≤ i ≤ M} π_i p_i(y)
                   = argmax_{1 ≤ i ≤ M} π_i exp(−‖y − s_i‖² / 2σ²)
      Taking logarithms, expanding ‖y − s_i‖², and dropping terms independent of i gives the two stated forms.
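In vector form the MPE rule is a one-line computation; a minimal sketch, where the function name and the 2-D example constellation are hypothetical, not from the slides:

```python
import numpy as np

def mpe_decision(y, S, priors, sigma2):
    """MPE rule: argmax_i ⟨y, s_i⟩ − ‖s_i‖²/2 + σ² log π_i.

    y: received vector; S: M x K matrix whose rows are the s_i;
    priors: the π_i; sigma2: noise variance per dimension.
    """
    S = np.asarray(S, dtype=float)
    metric = S @ y - 0.5 * (S ** 2).sum(axis=1) + sigma2 * np.log(priors)
    return int(np.argmax(metric))   # 0-based index of the decided hypothesis

# Hypothetical example: two orthogonal unit-energy signals, unequal priors
S = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([0.45, 0.55])
print(mpe_decision(y, S, priors=[0.5, 0.5], sigma2=1.0))  # nearest signal wins
print(mpe_decision(y, S, priors=[0.9, 0.1], sigma2=1.0))  # prior tips it to s_1
```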

  13. Vector Representation of Real Signal Waveforms
      [Block diagram: s_i(t) is fed to a bank of K correlators, one per basis function ψ_1(t), ..., ψ_K(t); the outputs s_{i,1}, ..., s_{i,K} form the vector s_i.]

  14. Vector Representation of the Real Received Signal
      [Block diagram: y(t) is fed to the same bank of K correlators against ψ_1(t), ..., ψ_K(t); the outputs y_1, ..., y_K form the vector y.]

  15. MPE Decision Rule
      [Block diagram: M parallel branches each compute the inner product y^T s_i and add the bias term −‖s_i‖²/2 + σ² log π_i; the arg max over the M branch outputs gives the decision.]

  16. MPE Decision Rule Example
      [Figure: waveforms of the four signals s_1(t), s_2(t), s_3(t), s_4(t) and the received signal y(t), each plotted on the interval [0, 3].]
      Let π_1 = π_2 = 1/3, π_3 = π_4 = 1/6, σ² = 1, and log 2 = 0.69.

  17. ML Receiver for the AWGN Channel
      Theorem (ML Decision Rule). The ML decision rule for M-ary signaling in the AWGN channel is given by
          δ_ML(y) = argmin_{1 ≤ i ≤ M} ‖y − s_i‖²
                  = argmax_{1 ≤ i ≤ M} ⟨y, s_i⟩ − ‖s_i‖²/2
      Proof.
          δ_ML(y) = argmax_{1 ≤ i ≤ M} p_i(y)
                  = argmax_{1 ≤ i ≤ M} exp(−‖y − s_i‖² / 2σ²)
      Taking logarithms, expanding ‖y − s_i‖², and dropping terms independent of i gives the two stated forms.
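The two equivalent forms of the ML rule can be implemented and cross-checked directly; a sketch with a made-up 4-point constellation (function name and signal set are illustrative):

```python
import numpy as np

def ml_decision(y, S):
    """ML rule: argmin_i ‖y − s_i‖², equivalently argmax_i ⟨y, s_i⟩ − ‖s_i‖²/2."""
    S = np.asarray(S, dtype=float)
    d2 = ((S - y) ** 2).sum(axis=1)            # squared distances ‖y − s_i‖²
    corr = S @ y - 0.5 * (S ** 2).sum(axis=1)  # correlator form
    assert np.argmin(d2) == np.argmax(corr)    # the two forms agree (no ties here)
    return int(np.argmin(d2))

# Hypothetical QPSK-like constellation
S = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
print(ml_decision(np.array([0.8, -1.2]), S))   # closest point is (1, −1)
```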

  18. ML Decision Rule
      [Block diagram: correlator implementation; each branch computes y^T s_i plus the bias −‖s_i‖²/2, and the arg max over the M branches gives the decision.]

  19. ML Decision Rule
      [Block diagram: minimum-distance implementation; each branch computes ‖y − s_i‖², and the arg min over the M branches gives the decision.]

  20. ML Decision Rule Example
      [Figure: waveforms of the four signals s_1(t), s_2(t), s_3(t), s_4(t) and the received signal y(t), each plotted on the interval [0, 3].]

  21. Continuous-Time Versions of Optimal Decision Rules
      • Discrete-time decision rules
          δ_MPE(y) = argmax_{1 ≤ i ≤ M} ⟨y, s_i⟩ − ‖s_i‖²/2 + σ² log π_i
          δ_ML(y) = argmax_{1 ≤ i ≤ M} ⟨y, s_i⟩ − ‖s_i‖²/2
      • Continuous-time decision rules
          δ_MPE(y) = argmax_{1 ≤ i ≤ M} ⟨y, s_i⟩ − ‖s_i‖²/2 + σ² log π_i
          δ_ML(y) = argmax_{1 ≤ i ≤ M} ⟨y, s_i⟩ − ‖s_i‖²/2
        where the inner products and norms are now between the waveforms y(t) and s_i(t)

  22. ML Decision Rule for Antipodal Signaling
      [Figure: s_1(t) is a rectangular pulse of amplitude A on [0, T]; s_2(t) = −s_1(t).]
          δ_ML(y) = argmax_{1 ≤ i ≤ 2} ⟨y, s_i⟩ − ‖s_i‖²/2 = argmax_{1 ≤ i ≤ 2} ⟨y, s_i⟩   (since ‖s_1‖ = ‖s_2‖)
          δ_ML(y) = 1 ⟺ ⟨y, s_1⟩ ≥ ⟨y, s_2⟩ ⟺ ⟨y, s_1⟩ ≥ 0
          ⟨y, s_1⟩ = ∫_0^T y(τ) s_1(τ) dτ = (y ⋆ s_MF)(T)
      where s_MF(t) = s_1(T − t) is the matched filter.
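The equivalence ⟨y, s_1⟩ = (y ⋆ s_MF)(T) can be verified in discrete time; a sketch for the rectangular pulse above, with amplitude, duration, and noise level chosen arbitrarily:

```python
import numpy as np

# Rectangular antipodal pulse as on the slide: s_1(t) = A on [0, T], s_2 = −s_1
dt, T, A = 1e-3, 1.0, 2.0
t = np.arange(0, T, dt)
s1 = A * np.ones_like(t)
s_mf = s1[::-1]                  # matched filter s_MF(t) = s_1(T − t)

rng = np.random.default_rng(2)
sigma2 = 0.5
y = -s1 + rng.normal(0.0, np.sqrt(sigma2 / dt), t.size)  # s_2 transmitted

corr = (y * s1).sum() * dt                     # ⟨y, s_1⟩ by direct correlation
conv = np.convolve(y, s_mf)[t.size - 1] * dt   # (y ⋆ s_MF) sampled at t = T
assert np.isclose(corr, conv)                  # the two computations agree

decision = 1 if corr >= 0 else 2               # δ_ML: decide s_1 iff ⟨y, s_1⟩ ≥ 0
print(decision)
```

For this constant pulse the matched filter is symmetric, so the reversal has no visible effect; for an asymmetric pulse the `s1[::-1]` step is what makes convolution at t = T equal the correlation.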

  23. Optimal Receiver for Passband Signals
      Consider M-ary passband signaling over the AWGN channel
          H_i: y_p(t) = s_{i,p}(t) + n_p(t), i = 1, ..., M
      where
          y_p(t): real passband received signal
          s_{i,p}(t): real passband signals
          n_p(t): real passband Gaussian noise with PSD N_0/2
      [Figure: passband Gaussian noise PSD, flat at N_0/2 in bands around ±f_c.]

  24. White Gaussian Noise is an Idealization
      [Figure: WGN PSD flat at N_0/2 over all frequencies, which implies infinite power. It is an ideal model of passband Gaussian noise, whose PSD is flat at N_0/2 only in bands around ±f_c.]

  25. Detection using Complex Baseband Representation
      • M-ary passband signaling over the AWGN channel
          H_i: y_p(t) = s_{i,p}(t) + n_p(t), i = 1, ..., M
        where
          y_p(t): real passband received signal
          s_{i,p}(t): real passband signals
          n_p(t): real passband Gaussian noise with PSD N_0/2
      • The equivalent problem in complex baseband is
          H_i: y(t) = s_i(t) + n(t), i = 1, ..., M
        where
          y(t): complex envelope of y_p(t)
          s_i(t): complex envelope of s_{i,p}(t)
          n(t): complex envelope of n_p(t)
