Part II. Fading and Diversity


  1. Part II. Fading and Diversity: Impact of Fading in Detection; Time Diversity; Antenna Diversity; Frequency Diversity

  2. Simplest Model: Single-Tap Rayleigh Fading
Flat fading: single-tap Rayleigh fading
  V = Hu + Z,   H ∼ CN(0, 1),   Z ∼ CN(0, N_0)
Detection: decide û = a_θ̂ from V, where the transmitted symbol is u = a_θ with a_θ ∈ A ≜ {a_1, ..., a_M}.
The detector (Rx) may or may not know the channel coefficients:
‣ Coherent detection: Rx knows the realization of H.
‣ Noncoherent detection: Rx does not know the realization of H.

  3. Coherent Detection of BPSK
V = Hu + Z,   H ∼ CN(0, 1),   Z ∼ CN(0, N_0),   detection Θ̂ = φ(V, H)
BPSK: u ∈ {±√E_s}, with a_0 = +√E_s and a_1 = −√E_s.
Likelihood function: f_{V,H|Θ}(v, h | θ) = f_{V|H,Θ}(v | h, θ) f_H(h) ∝ f_{V|H,Θ}(v | h, θ)
The detection problem is equivalent to binary detection in Ṽ = u + Z̃, where Ṽ ≜ V/h and Z̃ ≜ Z/h ∼ CN(0, N_0/|h|²).
Probability of error conditioned on the realization H = h:
  P_e(φ_ML; H = h) = Q( 2√E_s / (2√(N_0/(2|h|²))) ) = Q( √(2|h|² E_s / N_0) )
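To make the conditional error probability concrete, here is a minimal Monte Carlo sketch (illustrative, not from the slides; all names and parameter values are placeholders). It checks P_e(φ_ML; H = h) = Q(√(2|h|² SNR)) for one fixed fading draw, with SNR ≜ E_s/N_0.

```python
# Illustrative sketch: Monte Carlo check of P_e(phi_ML; H = h) = Q(sqrt(2|h|^2 SNR))
# for coherent BPSK over one fixed fading realization h (all names are placeholders).
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
snr = 2.0                                          # SNR = Es/N0 (linear)
Es, N0 = 1.0, 1.0 / snr
h = (rng.normal() + 1j * rng.normal()) / sqrt(2)   # one draw of H ~ CN(0,1)

n = 500_000
u = rng.choice([+1.0, -1.0], size=n) * sqrt(Es)    # BPSK symbols
z = (rng.normal(size=n) + 1j * rng.normal(size=n)) * sqrt(N0 / 2)
v = h * u + z                                      # V = Hu + Z
u_hat = np.where((v / h).real >= 0, +sqrt(Es), -sqrt(Es))   # coherent ML detector

def Q(x):                                          # Gaussian tail function
    return 0.5 * erfc(x / sqrt(2))

print("simulated:", np.mean(u_hat * u < 0))
print("formula  :", Q(sqrt(2 * abs(h) ** 2 * snr)))
```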

  4. Probability of error (with SNR ≜ E_s/N_0):
  P_e(φ_ML; H = h) = Q( √(2|h|² E_s / N_0) )
  P_e(φ_ML) = E_{H ∼ CN(0,1)}[ P_e(φ_ML; H) ]
            = E_{H ∼ CN(0,1)}[ Q( √(2|H|² E_s / N_0) ) ]
            ≤ E_{|H|² ∼ Exp(1)}[ ½ exp(−|H|² SNR) ]
            = ∫₀^∞ ½ e^{−t·SNR} e^{−t} dt = 1 / (2(1 + SNR))
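A small numerical sketch (illustrative, not from the slides) that averages the conditional error probability over |H|² ∼ Exp(1) and compares the result with the bound 1/(2(1+SNR)) above:

```python
# Illustrative sketch: average the conditional BPSK error probability over
# |H|^2 ~ Exp(1) and compare with the bound 1/(2(1+SNR)) derived on this slide.
import numpy as np
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2))

rng = np.random.default_rng(1)
g = rng.exponential(1.0, size=1_000_000)          # |H|^2 for H ~ CN(0,1)
for snr_db in (0, 10, 20):
    snr = 10 ** (snr_db / 10)
    pe_avg = np.mean(Q(np.sqrt(2 * g * snr)))     # E[Q(sqrt(2|H|^2 SNR))]
    bound = 1 / (2 * (1 + snr))
    print(f"SNR = {snr_db:2d} dB   Pe ~ {pe_avg:.3e}   bound = {bound:.3e}")
```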

  5. Impact of Fading
• Let us explore the impact of fading by comparing the performance of coherent BPSK over AWGN and over single-tap Rayleigh fading.
• The average received SNRs are the same: E_{H ∼ CN(0,1)}[ |H|² SNR ] = SNR
• AWGN: the probability of error decays exponentially fast:
  P_e(φ_ML) = Q( √(2 SNR) ) ≤ ½ exp(−SNR), i.e., P_e decays like e^{−SNR}
• Rayleigh fading: the probability of error decays much more slowly:
  P_e(φ_ML) = E_{H ∼ CN(0,1)}[ Q( √(2|H|² SNR) ) ] ≤ ½ · 1/(1 + SNR), i.e., P_e decays like SNR^{−1}
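A quick table (illustrative sketch, not from the slides) contrasting the two decays at matched average received SNR:

```python
# Illustrative sketch: exponential decay over AWGN vs. ~1/SNR decay of the
# Rayleigh-fading bound, at matched average received SNR.
from math import erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

for snr_db in (0, 10, 20, 30):
    snr = 10 ** (snr_db / 10)
    print(f"{snr_db:2d} dB   AWGN: {Q(sqrt(2 * snr)):.2e}   Rayleigh bound: {1 / (2 * (1 + snr)):.2e}")
```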

  6. Availability of channel state information (CSI) at Rx only changes the intercept, but not the slope.
[Plot: P_e vs. SNR (dB), from 10⁻¹⁶ to 1 over −20 to 40 dB, for BPSK over AWGN, non-coherent orthogonal signaling, and coherent BPSK over fading; annotated gaps of 3 dB and 15 dB.]

  7. Coherent Detection of General QAM
Probability of error for M-ary QAM, M = 2^{2ℓ}:
  P_e(φ_ML; H = h) ≤ 4 Q( √(|h|² d_min² / (2N_0)) ) = 4 Q( √( (3/(M−1)) |h|² SNR ) )
  P_e(φ_ML) ≤ E_{H ∼ CN(0,1)}[ 4 Q( √( (3/(M−1)) |H|² SNR ) ) ]
           ≤ E_{|H|² ∼ Exp(1)}[ 2 exp( −(3/(2(M−1))) |H|² SNR ) ]
           = 2 / (1 + (3/(2(M−1))) SNR) ≈ (4(M−1)/3) SNR^{−1}
Using a general constellation does not change the order of performance (the "slope" on the log P_e vs. log SNR plot); different constellations only change the intercept.
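A tiny numeric illustration (not from the slides) of the same point: the constellation size M moves the intercept of the bound, while the SNR^{−1} slope is unchanged.

```python
# Illustrative sketch: the M-QAM bound 2/(1 + 3 SNR/(2(M-1))) and its high-SNR
# approximation 4(M-1)/(3 SNR); M shifts the intercept, the slope stays -1.
for M in (4, 16, 64):
    for snr_db in (20, 30):
        snr = 10 ** (snr_db / 10)
        bound = 2 / (1 + 3 * snr / (2 * (M - 1)))
        approx = 4 * (M - 1) / (3 * snr)
        print(f"M = {M:2d}  SNR = {snr_db} dB  bound = {bound:.2e}  approx = {approx:.2e}")
```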

  8. Deep Fade: the Typical Error Event
• In a Rayleigh fading channel, regardless of constellation size and detection method (coherent/non-coherent), P_e ∼ SNR^{−1}.
• This is in sharp contrast to AWGN: P_e ∼ exp(−c·SNR).
• Why? Let's take a deeper look at the BPSK case: P_e(φ_ML; H = h) = Q( √(2|h|² SNR) )
‣ If the channel is good, |h|² SNR ≫ 1 ⟹ the error probability is ∼ exp(−c·SNR)
‣ If the channel is bad, |h|² SNR < 1 ⟹ the error probability is Θ(1)
• Conditioning the error event E on the channel quality:
  P_e ≡ P{E} = P{|H|² > SNR^{−1}} P{E | |H|² > SNR^{−1}} + P{|H|² < SNR^{−1}} P{E | |H|² < SNR^{−1}}
      ≈ P{|H|² < SNR^{−1}} = 1 − e^{−SNR^{−1}} ≈ SNR^{−1}
• Deep fade event: {|H|² < SNR^{−1}}
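A one-liner check (illustrative, not from the slides) that the deep-fade probability indeed tracks SNR^{−1}:

```python
# Illustrative sketch: the deep-fade probability P{|H|^2 < 1/SNR} = 1 - exp(-1/SNR)
# tracks 1/SNR, matching the observed P_e ~ SNR^-1 order.
from math import exp

for snr_db in (10, 20, 30):
    snr = 10 ** (snr_db / 10)
    print(f"{snr_db} dB   P(deep fade) = {1 - exp(-1 / snr):.2e}   1/SNR = {1 / snr:.2e}")
```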

  9. Diversity
Deep fade event: {|H|² < SNR^{−1}},   V = Hu + Z
• Reception relies on only a single "look" at the fading state H.
• If H is in deep fade ⟹ big trouble (low reliability).
• Increasing the number of "looks" ⟺ increasing diversity.
‣ If one look is in deep fade, the other looks can compensate!
• With L independent looks, the probability of deep fade becomes
  ∏_{ℓ=1}^{L} P{look ℓ in deep fade} ≈ SNR^{−L}
• Find independent "looks" over time, space, and frequency to increase diversity!
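A small sketch (illustrative, not from the slides) of the product above: the chance that all L independent looks are simultaneously in deep fade behaves like SNR^{−L}.

```python
# Illustrative sketch: with L independent looks, all of them must be in deep fade,
# and that probability behaves like SNR^-L.
from math import exp

snr = 10 ** (20 / 10)                    # 20 dB (example value)
p1 = 1 - exp(-1 / snr)                   # one look in deep fade
for L in (1, 2, 3, 4):
    print(f"L = {L}   P(all L looks in deep fade) = {p1 ** L:.2e}   SNR^-L = {snr ** -L:.2e}")
```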

  10. Time Diversity
• The channel varies over time, at the scale of the coherence time T_c.
• Interleaving:
‣ Channels within a coherence time are highly correlated.
‣ Realizations separated by several T_c's are roughly independent.
‣ Diversity is obtained if we spread the codeword across multiple coherence-time periods.
• Architectures:
‣ Bit-level interleaver: interleave before modulation.
‣ Symbol-level interleaver: interleave after modulation.
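As a concrete (and entirely illustrative) example of the idea, here is a minimal symbol-level block interleaver; the depth/width values are made up and would be chosen from T_c and the codeword length in practice.

```python
# Illustrative sketch of a symbol-level block interleaver (parameters are made up):
# symbols are written into a depth-by-width array row by row and read out column
# by column, so adjacent codeword symbols end up several coherence times apart.
import numpy as np

def interleave(symbols, depth, width):
    return np.asarray(symbols).reshape(depth, width).T.reshape(-1)

def deinterleave(symbols, depth, width):
    return np.asarray(symbols).reshape(width, depth).T.reshape(-1)

x = np.arange(12)                        # 3 codewords of 4 modulated symbols each
y = interleave(x, depth=3, width=4)      # transmission order
print(y)                                 # [0 4 8 1 5 9 2 6 10 3 7 11]
print(np.array_equal(deinterleave(y, depth=3, width=4), x))   # True
```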

  11. Symbol-level interleaving: info bits → ECC encoder → Modulator → Interleaver → Equivalent Discrete-Time Complex Baseband Channel → De-interleaver → De-modulator → ECC decoder → decoded info bits
Bit-level interleaving: info bits → ECC encoder → Interleaver → Modulator → Equivalent Discrete-Time Complex Baseband Channel → De-modulator → De-interleaver → ECC decoder → decoded info bits

  12. [Illustration: |H[ℓ]| across ℓ with L = 4 and codewords x_0, x_1, x_2, x_3. Without interleaving, a codeword stays within one coherence period, so all of its symbols can be bad at once; with interleaving, each codeword is spread over H[1], ..., H[4], so only one of its symbols sees the bad realization.]

  13. Repetition Coding + Interleaving
• Equivalent vector channel
‣ Channel model: V[ℓ] = H[ℓ] u[ℓ] + Z[ℓ],  Z[ℓ] ∼ CN(0, N_0),  ℓ = 1, ..., L
‣ (Sufficient) interleaving ⟹ {H[ℓ]}_{ℓ=1}^{L} i.i.d. CN(0, 1)
‣ Repetition coding ⟹ u[ℓ] = u, ℓ = 1, ..., L
‣ Equivalent vector channel: V = H u + Z, with V ≜ [V[1] ⋯ V[L]]⊺, H ≜ [H[1] ⋯ H[L]]⊺, Z ≜ [Z[1] ⋯ Z[L]]⊺
• Probability of error analysis for BPSK:
‣ Conditioned on H = h:  P_e(φ_ML; H = h) = Q( √(2‖h‖² SNR) )
‣ Average probability of error:
  P_e(φ_ML) = E_H[ Q( √(2‖H‖² SNR) ) ] ≤ E_H[ ½ exp(−‖H‖² SNR) ]
            = ½ ∏_{ℓ=1}^{L} E_{H_ℓ}[ exp(−|H_ℓ|² SNR) ] = ½ (1 + SNR)^{−L}, which decays like SNR^{−L}
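A Monte Carlo sketch of this scheme (illustrative, not from the slides; sample sizes and SNR points are arbitrary) showing the error rate falling roughly as SNR^{−L}:

```python
# Illustrative sketch: Monte Carlo of BPSK repeated over L interleaved Rayleigh
# branches with coherent (matched-filter) combining; the error rate falls roughly
# as SNR^-L. Estimates at the largest L/SNR are noisy at this sample size.
import numpy as np

rng = np.random.default_rng(2)

def sim_pe(snr, L, n=500_000):
    N0 = 1.0 / snr
    u = rng.choice([+1.0, -1.0], size=n)                               # BPSK, Es = 1
    H = (rng.normal(size=(n, L)) + 1j * rng.normal(size=(n, L))) / np.sqrt(2)
    Z = (rng.normal(size=(n, L)) + 1j * rng.normal(size=(n, L))) * np.sqrt(N0 / 2)
    V = H * u[:, None] + Z                                             # V[l] = H[l] u + Z[l]
    stat = np.sum(np.conj(H) * V, axis=1).real                         # coherent combining
    return np.mean(np.sign(stat) != u)

for L in (1, 2, 3):
    print(f"L = {L}:", [f"{sim_pe(10 ** (s / 10), L):.1e}" for s in (6, 9, 12)])
```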

  14. Probability of Deep Fade
• Deep fade event: {‖H‖² < SNR^{−1}}
‣ The "equivalent squared channel" ‖H‖² is the sum of L i.i.d. Exp(1) random variables:
  f_{‖H‖²}(x) = (1/(L−1)!) x^{L−1} e^{−x},  x ≥ 0
‣ This is a chi-squared distribution with 2L degrees of freedom (up to a factor of 2: 2‖H‖² ∼ χ²_{2L}).
• Probability of deep fade:
  P{‖H‖² < SNR^{−1}} = ∫₀^{SNR^{−1}} (1/(L−1)!) x^{L−1} e^{−x} dx
‣ Approximation at high SNR (e^{−x} ≈ 1 over the integration range):
  P{‖H‖² < SNR^{−1}} ≈ ∫₀^{SNR^{−1}} (1/(L−1)!) x^{L−1} dx = (1/L!) SNR^{−L}
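A short numeric check of the approximation (illustrative, not from the slides; the SNR point is arbitrary):

```python
# Illustrative sketch: exact P{||H||^2 < 1/SNR} (the Gamma(L,1) CDF, i.e. a
# regularized incomplete gamma function) vs. the high-SNR approximation SNR^-L / L!.
from math import factorial
from scipy.special import gammainc       # regularized lower incomplete gamma

snr = 10 ** (15 / 10)                    # 15 dB
for L in (1, 2, 3, 4):
    exact = gammainc(L, 1 / snr)
    approx = snr ** (-L) / factorial(L)
    print(f"L = {L}   exact = {exact:.2e}   approx = {approx:.2e}")
```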

  15. [Plot: distribution of ‖H‖² (chi-squared with 2L degrees of freedom) for L = 1, ..., 5, illustrating P{‖H‖² < SNR^{−1}} ≈ (1/L!) SNR^{−L}: the probability mass near zero shrinks rapidly as L grows.]

  16. Diversity Order: 1 → L
• Without coding and interleaving: P_e ∼ SNR^{−1}
• With coding and interleaving: P_e ∼ SNR^{−L}
• Diversity order:  d ≜ lim_{SNR→∞} (−log P_e) / (log SNR)
[Plot: P_e vs. SNR (dB) for L = 1, ..., 5; the slope steepens from −1 to −L as the diversity order increases.]

  17. Time-Diversity Code
• Full diversity order:
‣ There are L independent looks in total (interleave over L coherence-time intervals).
‣ A scheme achieves full diversity if its diversity order is L.
• Repetition coding:
‣ achieves full diversity order
‣ suffers a loss in transmission rate
• Is it possible to achieve full diversity order without compromising the transmission rate?
• The answer is yes, with a time-diversity code.

  18. Sending 2 BPSK Symbols over L = 2 "Looks"
• Consider sending 2 independent BPSK symbols (u[1], u[2]) over two (interleaved) time slots (L = 2).
‣ Diversity order = 1, because each BPSK symbol has only one "look".
[Constellation: the four points (±√E_s, ±√E_s) in the (u[1], u[2]) plane.]

  19. Rotation Code for L = 2
• How about rotating the equivalent constellation?
  x = r_θ u,   r_θ = [cos θ, −sin θ; sin θ, cos θ]
• Each codeword comprises 2 linear combinations of the 2 original symbols ⟹ each information symbol gets 2 independent looks!
[Constellation: the rotated codewords x_00, x_01, x_10, x_11 in the (x[1], x[2]) plane.]
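A minimal construction of the rotated codewords (illustrative, not from the slides; the angle is an arbitrary example, not a recommended choice):

```python
# Illustrative sketch: the rotated codewords x = r_theta u for the four BPSK pairs
# u in {+-1}^2 (Es = 1); theta is an arbitrary example angle.
import numpy as np

theta = np.pi / 8
r = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
for u in ((+1, +1), (+1, -1), (-1, +1), (-1, -1)):
    x = r @ np.array(u, dtype=float)
    print(u, "->", np.round(x, 3))       # each component mixes both info symbols
```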

  20. Performance Analysis of Rotation Code
Equivalent vector (2-dim) channel:
  V = [H[1], 0; 0, H[2]] x + Z = x̃ + Z
Union bound via the pairwise probabilities of error:
  P_e(φ_ML; H = h) ≤ P{x_00 → x_01 | H = h} + P{x_00 → x_11 | H = h} + P{x_00 → x_10 | H = h}
Pairwise error probability:
  P{x_00 → x_10 | H = h} = Q( ‖x̃_00 − x̃_10‖ / √(2N_0) ) = Q( √( (|h[1]|² |d_1|² + |h[2]|² |d_2|²) SNR / 2 ) )
with ‖x̃_00 − x̃_10‖² = E_s (|h[1]|² |d_1|² + |h[2]|² |d_2|²),  d_1 = 2 cos θ,  d_2 = 2 sin θ.

  21. Pairwise error probability, averaged over the fading:
  P{x_00 → x_10} ≤ E_{H[1],H[2]}[ ½ exp( −¼ (|H[1]|² |d_1|² + |H[2]|² |d_2|²) SNR ) ]
                = ½ · 1/(1 + ¼|d_1|² SNR) · 1/(1 + ¼|d_2|² SNR)
                ≈ (8 / |d_1 d_2|²) SNR^{−2} = (8 / δ_{00→10}) SNR^{−2}
Squared product distance:
  δ_{00→10} ≜ |d_1 d_2|² = 4 sin²(2θ),   with d_1 = 2 cos θ, d_2 = 2 sin θ.
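Since the high-SNR pairwise bound scales as 1/δ, a natural follow-up is to see how the minimum squared product distance over all codeword pairs varies with θ. The sketch below (illustrative, not from the slides) computes it numerically from the rotation-code construction and reports the maximizing angle on a grid:

```python
# Illustrative sketch: minimum squared product distance of the rotation code as a
# function of theta, computed numerically from all codeword pairs; maximizing it
# tightens the high-SNR union bound.
import numpy as np

U = np.array([(+1, +1), (+1, -1), (-1, +1), (-1, -1)], dtype=float)   # BPSK pairs, Es = 1

def min_sq_product_distance(theta):
    r = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    X = U @ r.T                                    # rotated codewords
    vals = []
    for i in range(4):
        for j in range(i + 1, 4):
            d = X[i] - X[j]
            vals.append((d[0] * d[1]) ** 2)        # |d1 d2|^2 for this pair
    return min(vals)

thetas = np.linspace(0.0, np.pi / 2, 1801)
vals = [min_sq_product_distance(t) for t in thetas]
best = thetas[int(np.argmax(vals))]
print(f"best theta ~ {np.degrees(best):.1f} deg, min |d1 d2|^2 ~ {max(vals):.3f}")
```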
