Performance of ML Receiver for Binary Signaling
Saravanan Vijayakumaran
sarva@ee.iitb.ac.in
Department of Electrical Engineering
Indian Institute of Technology Bombay
October 7, 2013
Real AWGN Channel
M-ary Signaling in AWGN Channel
• One of M continuous-time signals s_1(t), ..., s_M(t) is transmitted
• The received signal is the transmitted signal corrupted by real AWGN
• M hypotheses with prior probabilities π_i, i = 1, ..., M:
  H_1 : y(t) = s_1(t) + n(t)
  H_2 : y(t) = s_2(t) + n(t)
  ⋮
  H_M : y(t) = s_M(t) + n(t)
• If the prior probabilities are equal, the ML decision rule is optimal
• The ML decision rule is
  $$\delta_{ML}(y) = \arg\min_{1 \le i \le M} \|y - s_i\|^2 = \arg\max_{1 \le i \le M} \left( \langle y, s_i \rangle - \frac{\|s_i\|^2}{2} \right)$$
• We want to study the performance of the ML decision rule
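As a concrete illustration, here is a minimal Python sketch of the ML decision rule for sampled signals. It assumes the continuous-time signals have been discretized into sample vectors, so the inner product becomes a dot product; the function name `ml_decide` and the variable names are placeholders, not part of the original slides.

```python
import numpy as np

def ml_decide(y, signals):
    """ML decision for M-ary signaling in real AWGN.

    y       : received signal samples (length-N vector)
    signals : list of M candidate signal sample vectors s_1, ..., s_M
    Returns the index maximizing <y, s_i> - ||s_i||^2 / 2,
    which equals the index minimizing ||y - s_i||^2.
    """
    stats = [np.dot(y, s) - 0.5 * np.dot(s, s) for s in signals]
    return int(np.argmax(stats))
```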
ML Decision Rule for Binary Signaling
• Consider the special case of binary signaling
  H_0 : y(t) = s_0(t) + n(t)
  H_1 : y(t) = s_1(t) + n(t)
• The ML decision rule decides H_0 is true if
  $$\langle y, s_0 \rangle - \frac{\|s_0\|^2}{2} > \langle y, s_1 \rangle - \frac{\|s_1\|^2}{2}$$
• The ML decision rule decides H_1 is true if
  $$\langle y, s_0 \rangle - \frac{\|s_0\|^2}{2} \le \langle y, s_1 \rangle - \frac{\|s_1\|^2}{2}$$
• The ML decision rule can be written compactly as
  $$\langle y, s_0 - s_1 \rangle \underset{H_1}{\overset{H_0}{\gtrless}} \frac{\|s_0\|^2 - \|s_1\|^2}{2}$$
• The distribution of ⟨y, s_0 − s_1⟩ is required to evaluate decision rule performance
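A hedged sketch of the binary ML rule in the same sampled-signal setting; `ml_decide_binary`, `s0`, and `s1` are hypothetical names, and ties are resolved in favour of H_1 to match the strict inequality above.

```python
import numpy as np

def ml_decide_binary(y, s0, s1):
    """Binary ML decision: compare <y, s0 - s1> against (||s0||^2 - ||s1||^2)/2.

    Returns 0 if H0 is decided, 1 if H1 is decided.
    """
    Z = np.dot(y, s0 - s1)                               # correlator statistic
    threshold = 0.5 * (np.dot(s0, s0) - np.dot(s1, s1))  # energy-difference threshold
    return 0 if Z > threshold else 1
```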
Performance of ML Decision Rule for Binary Signaling
• Let Z = ⟨y, s_0 − s_1⟩
• Z is a Gaussian random variable
  $$Z = \langle y, s_0 - s_1 \rangle = \langle s_i, s_0 - s_1 \rangle + \langle n, s_0 - s_1 \rangle$$
• The mean and variance of Z under H_0 are
  $$E[Z \mid H_0] = \|s_0\|^2 - \langle s_0, s_1 \rangle$$
  $$\operatorname{var}[Z \mid H_0] = \sigma^2 \|s_0 - s_1\|^2$$
  where σ² is the PSD of n(t)
• Probability of error under H_0 is
  $$P_{e|0} = \Pr\left[ Z \le \frac{\|s_0\|^2 - \|s_1\|^2}{2} \,\Big|\, H_0 \right] = Q\left( \frac{\|s_0 - s_1\|}{2\sigma} \right)$$
Performance of ML Decision Rule for Binary Signaling
• The mean and variance of Z under H_1 are
  $$E[Z \mid H_1] = \langle s_1, s_0 \rangle - \|s_1\|^2$$
  $$\operatorname{var}[Z \mid H_1] = \sigma^2 \|s_0 - s_1\|^2$$
• Probability of error under H_1 is
  $$P_{e|1} = \Pr\left[ Z > \frac{\|s_0\|^2 - \|s_1\|^2}{2} \,\Big|\, H_1 \right] = Q\left( \frac{\|s_0 - s_1\|}{2\sigma} \right)$$
• The average probability of error is
  $$P_e = \frac{P_{e|0} + P_{e|1}}{2} = Q\left( \frac{\|s_0 - s_1\|}{2\sigma} \right)$$
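The error probability formula can be evaluated numerically. The sketch below assumes sampled signal vectors and implements the Gaussian Q function via math.erfc; the helper names `qfunc` and `error_probability` are illustrative only.

```python
import math
import numpy as np

def qfunc(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def error_probability(s0, s1, sigma2):
    """P_e = Q(||s0 - s1|| / (2*sigma)) for the binary ML rule,
    where sigma2 plays the role of the noise PSD per dimension."""
    d = np.linalg.norm(np.asarray(s0) - np.asarray(s1))
    return qfunc(d / (2.0 * math.sqrt(sigma2)))
```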
Different Types of Binary Signaling
• Let
  $$E_b = \frac{1}{2}\left( \|s_0\|^2 + \|s_1\|^2 \right)$$
• For antipodal signaling, s_1(t) = −s_0(t), so
  $$E_b = \|s_0\|^2 = \|s_1\|^2 \quad \text{and} \quad \|s_0 - s_1\| = 2\|s_0\| = 2\|s_1\| = 2\sqrt{E_b}$$
  $$P_e = Q\left( \frac{\sqrt{E_b}}{\sigma} \right) = Q\left( \sqrt{\frac{2 E_b}{N_0}} \right) \quad \text{where } \sigma^2 = \frac{N_0}{2}$$
• For on-off keying, s_1(t) = s(t) and s_0(t) = 0, and
  $$P_e = Q\left( \sqrt{\frac{E_b}{N_0}} \right)$$
• For orthogonal signaling, s_0(t) and s_1(t) are orthogonal (⟨s_0, s_1⟩ = 0) and
  $$P_e = Q\left( \sqrt{\frac{E_b}{N_0}} \right)$$
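A small sketch that evaluates the three closed-form expressions above at one example operating point (10 dB, chosen arbitrarily for illustration):

```python
import math

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pe_antipodal(ebn0):    # Q(sqrt(2 Eb/N0))
    return qfunc(math.sqrt(2.0 * ebn0))

def pe_ook(ebn0):          # Q(sqrt(Eb/N0))
    return qfunc(math.sqrt(ebn0))

def pe_orthogonal(ebn0):   # Q(sqrt(Eb/N0)), same as on-off keying
    return qfunc(math.sqrt(ebn0))

ebn0_db = 10.0                        # example Eb/N0 in dB
ebn0 = 10.0 ** (ebn0_db / 10.0)
print(pe_antipodal(ebn0), pe_ook(ebn0), pe_orthogonal(ebn0))
```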
Performance Comparison of Antipodal and Orthogonal Signaling
[Figure: P_e versus E_b/N_0 (dB), 0 to 20 dB, on a logarithmic scale from 10^{-9} to 10^{-1}, with one curve for orthogonal signaling and one for antipodal signaling]
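The curves in this figure can be regenerated from the closed-form expressions on the previous slide. A minimal matplotlib sketch, assuming the same axis ranges as the original plot:

```python
import math
import numpy as np
import matplotlib.pyplot as plt

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

ebn0_db = np.linspace(0, 20, 200)
ebn0 = 10.0 ** (ebn0_db / 10.0)
pe_antipodal = [qfunc(math.sqrt(2.0 * g)) for g in ebn0]   # Q(sqrt(2 Eb/N0))
pe_orthogonal = [qfunc(math.sqrt(g)) for g in ebn0]        # Q(sqrt(Eb/N0))

plt.semilogy(ebn0_db, pe_orthogonal, label="Orthogonal")
plt.semilogy(ebn0_db, pe_antipodal, label="Antipodal")
plt.xlabel(r"$E_b/N_0$ (dB)")
plt.ylabel(r"$P_e$")
plt.ylim(1e-9, 1e-1)
plt.legend()
plt.grid(True, which="both")
plt.show()
```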
Optimal Choice of Signal Pair
• For any s_0(t) and s_1(t), the probability of error of the ML decision rule is
  $$P_e = Q\left( \frac{\|s_0 - s_1\|}{2\sigma} \right)$$
• How to choose s_0(t) and s_1(t) to minimize P_e?
• If E_b is not fixed, the problem is ill-defined
• For a given E_b, we have ‖s_0 − s_1‖² = 2E_b(1 − ρ), so
  $$P_e = Q\left( \frac{\|s_0 - s_1\|}{2\sigma} \right) = Q\left( \sqrt{\frac{E_b (1 - \rho)}{N_0}} \right)$$
  where $\rho = \frac{\langle s_0, s_1 \rangle}{E_b}$, −1 ≤ ρ ≤ 1
• ρ = −1 for antipodal signaling, s_0(t) = −s_1(t)
• Any pair of antipodal signals is the optimal choice
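A quick numerical check of the dependence on ρ, assuming E_b/N_0 = 10 dB as an arbitrary example value:

```python
import math

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

ebn0 = 10.0 ** (10.0 / 10.0)   # Eb/N0 = 10 dB, example value
for rho in (-1.0, -0.5, 0.0, 0.5, 1.0):
    pe = qfunc(math.sqrt(ebn0 * (1.0 - rho)))
    print(f"rho = {rho:+.1f}  P_e = {pe:.2e}")
# P_e decreases monotonically as rho decreases, so rho = -1 (antipodal) is best.
```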
Complex AWGN Channel
ML Rule for Complex Baseband Binary Signaling
• Consider binary signaling in the complex AWGN channel
  H_0 : y(t) = s_0(t) + n(t)
  H_1 : y(t) = s_1(t) + n(t)
  where
  y(t)   Complex envelope of the received signal
  s_i(t) Complex envelope of the transmitted signal under H_i
  n(t)   Complex white Gaussian noise with PSD N_0 = 2σ²
• n(t) = n_c(t) + j n_s(t) where n_c(t) and n_s(t) are independent WGN, each with PSD σ²
• The ML decision rule is
  $$\operatorname{Re}(\langle y, s_0 \rangle) - \frac{\|s_0\|^2}{2} \underset{H_1}{\overset{H_0}{\gtrless}} \operatorname{Re}(\langle y, s_1 \rangle) - \frac{\|s_1\|^2}{2}$$
  $$\operatorname{Re}(\langle y, s_0 - s_1 \rangle) \underset{H_1}{\overset{H_0}{\gtrless}} \frac{\|s_0\|^2 - \|s_1\|^2}{2}$$
• The distribution of Re(⟨y, s_0 − s_1⟩) is required to evaluate decision rule performance
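A minimal sketch of the complex baseband ML rule for sampled complex envelopes. It assumes the inner product ⟨u, v⟩ = Σ u[k] v*[k], so np.vdot(d, y) computes ⟨y, d⟩; the function and variable names are placeholders.

```python
import numpy as np

def ml_decide_complex(y, s0, s1):
    """Binary ML decision for complex baseband samples.

    Compares Re(<y, s0 - s1>) against (||s0||^2 - ||s1||^2) / 2.
    """
    d = s0 - s1
    Z = np.real(np.vdot(d, y))          # np.vdot(d, y) = sum(conj(d) * y) = <y, d>
    threshold = 0.5 * (np.vdot(s0, s0).real - np.vdot(s1, s1).real)
    return 0 if Z > threshold else 1
```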
Performance of ML Rule for Complex Baseband Binary Signaling
• Let Z = Re(⟨y, s_0 − s_1⟩)
• Z is a Gaussian random variable
  $$\begin{aligned}
  Z = \operatorname{Re}(\langle y, s_0 - s_1 \rangle) &= \langle y_c, s_{0,c} - s_{1,c} \rangle + \langle y_s, s_{0,s} - s_{1,s} \rangle \\
  &= \langle s_{i,c} + n_c, s_{0,c} - s_{1,c} \rangle + \langle s_{i,s} + n_s, s_{0,s} - s_{1,s} \rangle \\
  &= \langle s_{i,c}, s_{0,c} - s_{1,c} \rangle + \langle n_c, s_{0,c} - s_{1,c} \rangle + \langle s_{i,s}, s_{0,s} - s_{1,s} \rangle + \langle n_s, s_{0,s} - s_{1,s} \rangle
  \end{aligned}$$
• The mean and variance of Z under H_0 are
  $$E[Z \mid H_0] = \|s_{0,c}\|^2 + \|s_{0,s}\|^2 - \langle s_{0,c}, s_{1,c} \rangle - \langle s_{0,s}, s_{1,s} \rangle = \|s_0\|^2 - \operatorname{Re}(\langle s_0, s_1 \rangle)$$
  $$\operatorname{var}[Z \mid H_0] = \sigma^2 \|s_{0,c} - s_{1,c}\|^2 + \sigma^2 \|s_{0,s} - s_{1,s}\|^2 = \sigma^2 \|s_0 - s_1\|^2$$
• Probability of error under H_0 is
  $$P_{e|0} = \Pr\left[ Z \le \frac{\|s_0\|^2 - \|s_1\|^2}{2} \,\Big|\, H_0 \right] = Q\left( \frac{\|s_0 - s_1\|}{2\sigma} \right)$$
Performance of ML Rule for Complex Baseband Binary Signaling
• The mean and variance of Z under H_1 are
  $$E[Z \mid H_1] = \langle s_{1,c}, s_{0,c} \rangle + \langle s_{1,s}, s_{0,s} \rangle - \|s_{1,c}\|^2 - \|s_{1,s}\|^2 = \operatorname{Re}(\langle s_1, s_0 \rangle) - \|s_1\|^2$$
  $$\operatorname{var}[Z \mid H_1] = \sigma^2 \|s_{0,c} - s_{1,c}\|^2 + \sigma^2 \|s_{0,s} - s_{1,s}\|^2 = \sigma^2 \|s_0 - s_1\|^2$$
• Probability of error under H_1 is
  $$P_{e|1} = \Pr\left[ Z > \frac{\|s_0\|^2 - \|s_1\|^2}{2} \,\Big|\, H_1 \right] = Q\left( \frac{\|s_0 - s_1\|}{2\sigma} \right)$$
• The average probability of error is
  $$P_e = \frac{P_{e|0} + P_{e|1}}{2} = Q\left( \frac{\|s_0 - s_1\|}{2\sigma} \right)$$
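A Monte Carlo sketch that checks the analytical P_e against simulation for one example complex baseband signal pair. The waveform samples, noise level, and trial count are arbitrary choices for illustration, and the per-sample noise variance σ² stands in for the noise PSD of the continuous-time model.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Example complex baseband signal pair (placeholder waveform samples)
s0 = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
s1 = -s0                    # antipodal pair
sigma2 = 2.0                # per-dimension noise variance (so N0 = 2*sigma2)

d = s0 - s1
threshold = 0.5 * (np.vdot(s0, s0).real - np.vdot(s1, s1).real)

num_trials = 200_000
errors = 0
for _ in range(num_trials):
    i = rng.integers(0, 2)                       # equally likely hypotheses
    s = s0 if i == 0 else s1
    n = math.sqrt(sigma2) * (rng.standard_normal(len(s)) +
                             1j * rng.standard_normal(len(s)))
    y = s + n
    Z = np.real(np.vdot(d, y))                   # Re(<y, s0 - s1>)
    decision = 0 if Z > threshold else 1
    errors += (decision != i)

pe_sim = errors / num_trials
pe_theory = qfunc(np.linalg.norm(d) / (2.0 * math.sqrt(sigma2)))
print(f"simulated P_e = {pe_sim:.4f}, theoretical P_e = {pe_theory:.4f}")
```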
Thanks for your attention