Optimal Receiver for the AWGN Channel
Saravanan Vijayakumaran
sarva@ee.iitb.ac.in
Department of Electrical Engineering
Indian Institute of Technology Bombay
September 23, 2013
Additive White Gaussian Noise Channel

The channel adds white Gaussian noise to the transmitted signal:
$y(t) = s(t) + n(t)$
where
• $s(t)$ : transmitted signal
• $y(t)$ : received signal
• $n(t)$ : white Gaussian noise with PSD $S_n(f) = \frac{N_0}{2} = \sigma^2$ and autocorrelation $R_n(\tau) = \sigma^2 \delta(\tau)$
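The following NumPy sketch (not from the slides) illustrates the channel model in discrete time. All parameters (dt, sigma2, the rectangular example waveform) are made up for illustration; white noise with PSD $\sigma^2$ is approximated by i.i.d. samples of variance $\sigma^2/\Delta t$, so that correlations of $n(t)$ against unit-energy waveforms have variance $\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 1e-3                                        # time grid spacing (illustrative)
t = np.arange(0, 1, dt)                          # one-second observation window
sigma2 = 0.25                                    # noise level N0/2 (illustrative)

s = np.where(t < 0.5, 1.0, -1.0)                 # example transmitted waveform s(t)
n = rng.normal(0, np.sqrt(sigma2 / dt), t.size)  # approximate white Gaussian noise
y = s + n                                        # received waveform y(t) = s(t) + n(t)
```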
M-ary Signaling in AWGN Channel

• One of $M$ continuous-time signals $s_1(t), \ldots, s_M(t)$ is sent
• The received signal is the transmitted signal corrupted by AWGN
• $M$ hypotheses with prior probabilities $\pi_i$, $i = 1, \ldots, M$:
  $H_1 : y(t) = s_1(t) + n(t)$
  $H_2 : y(t) = s_2(t) + n(t)$
  $\vdots$
  $H_M : y(t) = s_M(t) + n(t)$
• Random variables are easier to handle than random processes
• We derive an equivalent $M$-ary hypothesis testing problem involving only random vectors
Restriction to Signal Space is Optimal

Theorem
For the M-ary hypothesis testing problem given by
  $H_1 : y(t) = s_1(t) + n(t)$
  $\vdots$
  $H_M : y(t) = s_M(t) + n(t)$
there is no loss in detection performance by using the optimal decision rule for the following M-ary hypothesis testing problem
  $H_1 : Y = s_1 + N$
  $\vdots$
  $H_M : Y = s_M + N$
where $Y$, $s_i$ and $N$ are the projections of $y(t)$, $s_i(t)$ and $n(t)$ respectively onto the signal space spanned by $\{s_i(t)\}$.
Projection of Signals onto Signal Space

• Consider an orthonormal basis $\{\psi_i(t) \mid i = 1, \ldots, K\}$ for the space spanned by $\{s_i(t) \mid i = 1, \ldots, M\}$
• Projection of $s_i(t)$ onto the signal space is
  $s_i = \begin{bmatrix} \langle s_i, \psi_1 \rangle & \cdots & \langle s_i, \psi_K \rangle \end{bmatrix}^T$
• [Block diagram: $s_i(t)$ is correlated with each basis function $\psi_k(t)$ to produce the components $s_{i,1}, \ldots, s_{i,K}$ of the vector $s_i$]
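As a rough numerical illustration (not part of the slides), the sketch below projects signals onto a hypothetical orthonormal basis of two non-overlapping unit-energy rectangular pulses by approximating the inner products $\langle s_i, \psi_k \rangle$ with Riemann sums. The basis and signals are invented for this example.

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 2, dt)

# Hypothetical orthonormal basis: two non-overlapping unit-energy rectangular pulses.
psi1 = np.where(t < 1, 1.0, 0.0)
psi2 = np.where(t >= 1, 1.0, 0.0)
basis = [psi1, psi2]

def project(x, basis, dt):
    """Vector of inner products of the waveform x(t) with each basis function."""
    return np.array([np.sum(x * psi) * dt for psi in basis])

# Two example signals lying in the span of {psi1, psi2}.
s1 = 2 * psi1 - psi2
s2 = psi1 + psi2
print(project(s1, basis, dt))   # approximately [ 2. -1.]
print(project(s2, basis, dt))   # approximately [ 1.  1.]
```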
Projection of Observed Signal onto Signal Space

• Projection of $y(t)$ onto the signal space is
  $Y = \begin{bmatrix} \langle y, \psi_1 \rangle & \cdots & \langle y, \psi_K \rangle \end{bmatrix}^T$
• [Block diagram: $y(t)$ is correlated with each basis function $\psi_k(t)$ to produce the components of the vector $Y$]
Projection of Noise onto Signal Space

• Projection of $n(t)$ onto the signal space is
  $N = \begin{bmatrix} \langle n, \psi_1 \rangle & \cdots & \langle n, \psi_K \rangle \end{bmatrix}^T$
• [Block diagram: $n(t)$ is correlated with each basis function $\psi_k(t)$ to produce the components $N_1, \ldots, N_K$ of the vector $N$]
Proof of Theorem

• $Y = \begin{bmatrix} \langle y, \psi_1 \rangle & \cdots & \langle y, \psi_K \rangle \end{bmatrix}^T$
• Component of $y(t)$ orthogonal to the signal space is
  $y^\perp(t) = y(t) - \sum_{i=1}^{K} \langle y, \psi_i \rangle \psi_i(t)$
• $y(t)$ is equivalent to $(Y, y^\perp(t))$
• We claim that $y^\perp(t)$ is an irrelevant statistic. Under hypothesis $H_i$,
  $y^\perp(t) = y(t) - \sum_{j=1}^{K} \langle y, \psi_j \rangle \psi_j(t)$
  $= s_i(t) + n(t) - \sum_{j=1}^{K} \langle s_i + n, \psi_j \rangle \psi_j(t)$
  $= n(t) - \sum_{j=1}^{K} \langle n, \psi_j \rangle \psi_j(t) = n^\perp(t)$
  where the last step uses the fact that $s_i(t)$ lies in the signal space, so $\sum_{j=1}^{K} \langle s_i, \psi_j \rangle \psi_j(t) = s_i(t)$, and $n^\perp(t)$ is the component of $n(t)$ orthogonal to the signal space.
• $n^\perp(t)$ is independent of which $s_i(t)$ was transmitted, which makes $y^\perp(t)$ an irrelevant statistic.
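A quick numerical check of the key step (my addition, reusing the hypothetical basis from the earlier sketch): for the same noise realization, the out-of-span residual of $y(t)$ is identical no matter which signal was transmitted, because $s_i(t)$ lies entirely in the span.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, sigma2 = 1e-3, 0.5
t = np.arange(0, 2, dt)
psi1 = np.where(t < 1, 1.0, 0.0)       # hypothetical orthonormal basis as before
psi2 = np.where(t >= 1, 1.0, 0.0)
basis = [psi1, psi2]

def out_of_span(x):
    """Component of x(t) orthogonal to the span of the basis."""
    proj = sum((np.sum(x * psi) * dt) * psi for psi in basis)
    return x - proj

n = rng.normal(0, np.sqrt(sigma2 / dt), t.size)   # one noise realization
s1 = 2 * psi1 - psi2                              # two signals in the span
s2 = psi1 + psi2

r1 = out_of_span(s1 + n)         # residual when s1 is sent
r2 = out_of_span(s2 + n)         # residual when s2 is sent
print(np.max(np.abs(r1 - r2)))   # numerically ~0: the residual does not depend on s_i
```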
M-ary Signaling in AWGN Channel

• $M$ hypotheses with prior probabilities $\pi_i$, $i = 1, \ldots, M$:
  $H_1 : Y = s_1 + N$
  $\vdots$
  $H_M : Y = s_M + N$
  where
  $Y = \begin{bmatrix} \langle y, \psi_1 \rangle & \cdots & \langle y, \psi_K \rangle \end{bmatrix}^T$
  $s_i = \begin{bmatrix} \langle s_i, \psi_1 \rangle & \cdots & \langle s_i, \psi_K \rangle \end{bmatrix}^T$
  $N = \begin{bmatrix} \langle n, \psi_1 \rangle & \cdots & \langle n, \psi_K \rangle \end{bmatrix}^T$
• $N \sim \mathcal{N}(m, C)$ where $m = 0$ and $C = \sigma^2 I$, since
  $\mathrm{cov}(\langle n, \psi_i \rangle, \langle n, \psi_j \rangle) = \sigma^2 \langle \psi_i, \psi_j \rangle = \sigma^2 \delta_{ij}$
  by the orthonormality of the basis.
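A Monte Carlo sanity check (a sketch under the same discrete white-noise approximation used earlier, not part of the slides): the noise projections $\langle n, \psi_k \rangle$ should have sample covariance close to $\sigma^2 I$.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, sigma2, trials = 1e-3, 0.5, 10000
t = np.arange(0, 2, dt)
psi1 = np.where(t < 1, 1.0, 0.0)       # hypothetical orthonormal basis as before
psi2 = np.where(t >= 1, 1.0, 0.0)

N = np.empty((trials, 2))
for k in range(trials):
    n = rng.normal(0, np.sqrt(sigma2 / dt), t.size)        # approximate white noise
    N[k] = [np.sum(n * psi1) * dt, np.sum(n * psi2) * dt]  # noise projections

print(np.cov(N, rowvar=False))   # close to sigma2 * I = [[0.5, 0], [0, 0.5]]
```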
Optimal Receiver for the AWGN Channel

Theorem (MPE Decision Rule)
The MPE decision rule for M-ary signaling in the AWGN channel is given by
  $\delta_{\mathrm{MPE}}(y) = \operatorname*{argmin}_{1 \le i \le M} \; \|y - s_i\|^2 - 2\sigma^2 \log \pi_i$
  $= \operatorname*{argmax}_{1 \le i \le M} \; \langle y, s_i \rangle - \frac{\|s_i\|^2}{2} + \sigma^2 \log \pi_i$

Proof
  $\delta_{\mathrm{MPE}}(y) = \operatorname*{argmax}_{1 \le i \le M} \; \pi_i p_i(y)$
  $= \operatorname*{argmax}_{1 \le i \le M} \; \pi_i \exp\left( \frac{-\|y - s_i\|^2}{2\sigma^2} \right)$
Taking the logarithm, scaling by $2\sigma^2$, and dropping terms that do not depend on $i$ (the Gaussian normalization constant and $\|y\|^2$) gives the two stated forms.
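A minimal sketch of the rule on signal-space vectors (not from the slides). The constellation, priors, and noise variance below are invented for illustration; the function simply evaluates the argmax form of the theorem.

```python
import numpy as np

def mpe_decision(y, signals, priors, sigma2):
    """argmax_i  <y, s_i> - ||s_i||^2 / 2 + sigma^2 * log(pi_i)  (MPE rule)."""
    metrics = [y @ s - 0.5 * (s @ s) + sigma2 * np.log(p)
               for s, p in zip(signals, priors)]
    return int(np.argmax(metrics))          # zero-based index of the decision

# Illustrative 2-dimensional constellation with unequal priors.
signals = [np.array([2.0, 0.0]), np.array([0.0, 2.0]),
           np.array([-2.0, 0.0]), np.array([0.0, -2.0])]
priors = [1/3, 1/3, 1/6, 1/6]
sigma2 = 1.0

y = np.array([0.9, 1.1])                          # a received vector near s_2
print(mpe_decision(y, signals, priors, sigma2))   # prints 1, i.e. hypothesis H_2
```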
MPE Decision Rule

[Block diagram: the received vector $y$ is correlated with each signal $s_i$, the bias $-\frac{\|s_i\|^2}{2} + \sigma^2 \log \pi_i$ is added to each correlator output, and the index with the largest result is chosen (argmax).]
Continuous-Time Version of MPE Rule

• Discrete-time version, with the vector inner product $\langle y, s_i \rangle = y^T s_i$:
  $\delta_{\mathrm{MPE}}(y) = \operatorname*{argmax}_{1 \le i \le M} \; \langle y, s_i \rangle - \frac{\|s_i\|^2}{2} + \sigma^2 \log \pi_i$
• Continuous-time version, with the waveform inner product $\langle y, s_i \rangle = \int y(t) s_i(t)\, dt$:
  $\delta_{\mathrm{MPE}}(y) = \operatorname*{argmax}_{1 \le i \le M} \; \langle y, s_i \rangle - \frac{\|s_i\|^2}{2} + \sigma^2 \log \pi_i$
• The rule has the same form in both cases; only the meaning of the inner product and norm changes. A numerical check is sketched below.
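The sketch below (an illustration, reusing the hypothetical basis from earlier) checks that for signals in the span, the continuous-time correlation $\int y(t) s_i(t)\, dt$ agrees with the inner product of the projected vectors, which is why the two versions of the rule coincide.

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 2, dt)
psi1 = np.where(t < 1, 1.0, 0.0)            # hypothetical orthonormal basis
psi2 = np.where(t >= 1, 1.0, 0.0)
basis = [psi1, psi2]

def project(x):
    return np.array([np.sum(x * psi) * dt for psi in basis])

s = 2 * psi1 - psi2                         # an example signal in the span
y = 1.5 * psi1 + 0.5 * psi2                 # an example noise-free observation

corr_ct = np.sum(y * s) * dt                # continuous-time correlation
corr_vec = project(y) @ project(s)          # inner product of the projections
print(corr_ct, corr_vec)                    # both approximately 2.5
```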
MPE Decision Rule Example

[Figure: example waveforms $s_1(t), s_2(t), s_3(t), s_4(t)$ and a received signal $y(t)$, plotted over $0 \le t \le 3$.]

Let $\pi_1 = \pi_2 = \frac{1}{3}$, $\pi_3 = \pi_4 = \frac{1}{6}$, $\sigma^2 = 1$, and $\log 2 = 0.69$.
ML Receiver for the AWGN Channel

Theorem (ML Decision Rule)
The ML decision rule for M-ary signaling in the AWGN channel is given by
  $\delta_{\mathrm{ML}}(y) = \operatorname*{argmin}_{1 \le i \le M} \; \|y - s_i\|^2$
  $= \operatorname*{argmax}_{1 \le i \le M} \; \langle y, s_i \rangle - \frac{\|s_i\|^2}{2}$

Proof
  $\delta_{\mathrm{ML}}(y) = \operatorname*{argmax}_{1 \le i \le M} \; p_i(y)$
  $= \operatorname*{argmax}_{1 \le i \le M} \; \exp\left( \frac{-\|y - s_i\|^2}{2\sigma^2} \right)$
Maximizing the exponential is equivalent to minimizing $\|y - s_i\|^2$; expanding the square and dropping the $i$-independent term $\|y\|^2$ gives the correlation form.
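A sketch of the ML rule in its two equivalent forms, minimum distance and correlation with an energy correction, on an invented constellation; both functions should return the same index.

```python
import numpy as np

def ml_min_distance(y, signals):
    """argmin_i ||y - s_i||^2."""
    return int(np.argmin([np.sum((y - s) ** 2) for s in signals]))

def ml_correlation(y, signals):
    """argmax_i <y, s_i> - ||s_i||^2 / 2."""
    return int(np.argmax([y @ s - 0.5 * (s @ s) for s in signals]))

# Illustrative QPSK-like constellation.
signals = [np.array([1.0, 1.0]), np.array([1.0, -1.0]),
           np.array([-1.0, 1.0]), np.array([-1.0, -1.0])]
y = np.array([0.2, -0.7])

print(ml_min_distance(y, signals), ml_correlation(y, signals))   # both print 1
```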
ML Decision Rule

[Block diagram: the received vector $y$ is correlated with each signal $s_i$, the bias $-\frac{\|s_i\|^2}{2}$ is added to each correlator output, and the index with the largest result is chosen (argmax).]
ML Decision Rule

[Block diagram: each signal $s_i$ is subtracted from the received vector $y$, the squared norm $\|y - s_i\|^2$ is computed, and the index with the smallest result is chosen (argmin).]
Continuous-Time Version of ML Rule

• Discrete-time version, with the vector inner product $\langle y, s_i \rangle = y^T s_i$:
  $\delta_{\mathrm{ML}}(y) = \operatorname*{argmax}_{1 \le i \le M} \; \langle y, s_i \rangle - \frac{\|s_i\|^2}{2}$
• Continuous-time version, with the waveform inner product $\langle y, s_i \rangle = \int y(t) s_i(t)\, dt$:
  $\delta_{\mathrm{ML}}(y) = \operatorname*{argmax}_{1 \le i \le M} \; \langle y, s_i \rangle - \frac{\|s_i\|^2}{2}$
ML Decision Rule Example

[Figure: example waveforms $s_1(t), s_2(t), s_3(t), s_4(t)$ and a received signal $y(t)$, plotted over $0 \le t \le 3$.]
ML Decision Rule for Antipodal Signaling

[Figure: $s_1(t) = A$ and $s_2(t) = -A$ for $0 \le t \le T$.]

Since $\|s_1\| = \|s_2\|$, the energy terms cancel and
  $\delta_{\mathrm{ML}}(y) = \operatorname*{argmax}_{1 \le i \le 2} \; \langle y, s_i \rangle - \frac{\|s_i\|^2}{2} = \operatorname*{argmax}_{1 \le i \le 2} \; \langle y, s_i \rangle$
Because $s_2(t) = -s_1(t)$,
  $\delta_{\mathrm{ML}}(y) = 1 \iff \langle y, s_1 \rangle \ge \langle y, s_2 \rangle \iff \langle y, s_1 \rangle \ge 0$
The decision statistic can be computed by a matched filter:
  $\langle y, s_1 \rangle = \int_0^T y(\tau) s_1(\tau)\, d\tau = (y \star s_{\mathrm{MF}})(T)$
where $s_{\mathrm{MF}}(t) = s_1(T - t)$ is the matched filter.
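A discrete-time sketch of the antipodal receiver (parameters invented for illustration): the correlator statistic $\langle y, s_1 \rangle$ and the matched-filter output sampled at $t = T$ coincide, and the sign of either decides between the two hypotheses.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T, A, sigma2 = 1e-3, 1.0, 1.0, 0.1
t = np.arange(0, T, dt)

s1 = A * np.ones(t.size)                  # s_1(t) = A on [0, T]
s2 = -s1                                  # s_2(t) = -A on [0, T]

y = s2 + rng.normal(0, np.sqrt(sigma2 / dt), t.size)   # transmit s_2, add noise

# Correlator form of the decision statistic <y, s_1>.
stat_corr = np.sum(y * s1) * dt

# Matched-filter form: (y * s_MF)(T) with s_MF(t) = s_1(T - t).
s_mf = s1[::-1]
stat_mf = np.convolve(y, s_mf)[t.size - 1] * dt         # sample the output at t = T

decision = 1 if stat_corr >= 0 else 2     # decide H_1 iff <y, s_1> >= 0
print(stat_corr, stat_mf, decision)       # the two statistics coincide
```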
Thanks for your attention