  1. Iterative Timing Recovery
     John R. Barry
     School of Electrical and Computer Engineering, Georgia Tech
     Atlanta, Georgia, U.S.A.
     barry@ece.gatech.edu

  2. Outline
     Timing Recovery Tutorial
     • Problem statement
     • TED: M&M, LMS, S-curves
     • PLL
     Iterative Timing Recovery
     • Motivation (powerful FEC)
     • 3-way strategy
     • Per-survivor strategy
     • Performance comparison

  3. The Timing Recovery Problem
     The receiver expects the k-th pulse to arrive at time kT.
     Instead, the k-th pulse arrives at time kT + τ_k.
     Notation: τ_k is the timing offset of the k-th pulse.
     [Figure: pulse train with nominal arrival times 0, T, 2T, 3T, 4T, 5T; the offset τ_4 is marked.]

  4. Sampling
     The best sampling times are {kT + τ_k}.
     [Figure: r(t) → analog-to-digital sampler at times kT + τ̂_k → detector; a TIMING RECOVERY block estimates {τ_k}.]

  5. Timing Offset Models
     • CONSTANT: τ_k = τ_0
     • FREQUENCY OFFSET: τ_{k+1} = τ_k + ΔT
     • RANDOM WALK: τ_{k+1} = τ_k + N(0, σ_w²)
     • RANDOM WALK + FREQUENCY OFFSET: τ_{k+1} = τ_k + N(ΔT, σ_w²)
     [Figure: sample trajectories of τ_k versus time for each model.]
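
These models are straightforward to simulate. Below is a minimal Python sketch (not from the slides; the function name, seed, and the σ_w and ΔT values are illustrative) that generates a τ_k trajectory for each model:

```python
import numpy as np

def simulate_offsets(n, model, tau0=0.0, delta_T=0.0, sigma_w=0.0, seed=0):
    """Generate timing offsets tau[0..n-1] under one of the four models."""
    rng = np.random.default_rng(seed)
    tau = np.empty(n)
    tau[0] = tau0
    for k in range(n - 1):
        if model == "constant":                  # tau_{k+1} = tau_k
            step = 0.0
        elif model == "frequency offset":        # tau_{k+1} = tau_k + delta_T
            step = delta_T
        elif model == "random walk":             # tau_{k+1} = tau_k + N(0, sigma_w^2)
            step = rng.normal(0.0, sigma_w)
        elif model == "random walk + frequency offset":
            step = rng.normal(delta_T, sigma_w)  # tau_{k+1} = tau_k + N(delta_T, sigma_w^2)
        else:
            raise ValueError(f"unknown model: {model}")
        tau[k + 1] = tau[k] + step
    return tau

T = 1.0
tau = simulate_offsets(1000, "random walk", sigma_w=0.005 * T)  # sigma_w/T = 0.5%, as in later slides
```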

  6. The PR4 Model and Notation
     r(t) = Σ_k d_k g(t − kT − τ_k) + AWGN,
     where a_k ∈ {±1}, d_k = a_k − a_{k−2} ∈ {0, ±2} is the 3-level "PR4" symbol (the 1 − D² channel), and g(t) is a sinc pulse.
     [Figure: block diagram a_k → 1 − D² → d_k → pulse shaping with timing offsets τ and AWGN → r(t); the sinc pulse g(t) is shown over −2T to 4T.]
     Define:
     • d̂_k = receiver's estimate of d_k
     • τ_k = timing offset
     • τ̂_k = receiver's estimate of τ_k
     • ε_k = τ_k − τ̂_k = estimation error, with standard deviation σ_ε
     • ε̂_k = receiver's estimate of ε_k
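
A minimal sketch of this signal model (the helper name pr4_signal, the oversampling factor, and the noise level are ours, not the slides'):

```python
import numpy as np

def pr4_signal(n_sym, tau, sigma, oversample=8, T=1.0, seed=1):
    """r(t) = sum_k d_k g(t - kT - tau_k) + AWGN, with g(t) = sinc(t/T)."""
    rng = np.random.default_rng(seed)
    a = rng.choice([-1, 1], size=n_sym + 2)           # binary inputs a_k
    d = a[2:] - a[:-2]                                # d_k = a_k - a_{k-2} in {0, +/-2}
    t = np.arange(0, n_sym * T, T / oversample)
    r = np.zeros_like(t)
    for k in range(n_sym):                            # superpose the offset sinc pulses
        r += d[k] * np.sinc((t - k * T - tau[k]) / T)
    r += rng.normal(0.0, sigma, size=t.shape)         # AWGN
    return t, r, d
```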

  7. ML Estimate: Trained, Constant Offset
     The ML estimate minimizes
     J(τ | a) = ∫_{−∞}^{∞} [ r(t) − Σ_i d_i g(t − iT − τ) ]² dt.
     Exhaustive search: try all values of τ and pick the one that best represents r(t) in the MMSE sense.
     [Figure: the cost J versus τ over −2T to 2T, with the minimizer τ̂ marked.]
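
The exhaustive search is easy to sketch: evaluate J(τ | a) on a grid of candidate offsets and keep the minimizer. The grid range and resolution below are assumptions, and pr4_signal is the helper from the previous sketch:

```python
import numpy as np

def ml_search(t, r, d, T=1.0, grid=None):
    """Minimize J(tau | a) = integral of (r(t) - sum_i d_i g(t - iT - tau))^2 dt."""
    grid = grid if grid is not None else np.linspace(-0.5 * T, 0.5 * T, 101)
    def J(tau):
        s = sum(d[i] * np.sinc((t - i * T - tau) / T) for i in range(len(d)))
        return np.sum((r - s) ** 2)          # Riemann sum stands in for the integral
    return min(grid, key=J)

tau_true = 0.2
t, r, d = pr4_signal(50, np.full(50, tau_true), sigma=0.3)
print(ml_search(t, r, d))                    # close to 0.2 at reasonable SNR
```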

  8. ANIMATION 1

  9. Achieves the Cramer-Rao Bound
     r(kT + τ̂) = Σ_i d_i g(kT − iT + τ̂ − τ) + n_k = s_k(ε) + n_k,
     where ε = τ − τ̂ is the estimation error.
     The Cramer-Rao bound on the variance of the estimation error:
     σ_ε² ≥ σ² / ( N · E[ (∂s_k(ε)/∂ε)² ] ) = 3σ²T² / (2π²N).
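
As a sanity check on the constant: for a sinc pulse and i.i.d. PR4 symbols, the expectation in the denominator factors as E[d²] · Σ_n g′(nT)², with g′(nT) = (−1)ⁿ/(nT) for n ≠ 0. The short sketch below (ours, not the slides') confirms numerically that this equals 2π²/(3T²), which gives the 3σ²T²/(2π²N) bound:

```python
import numpy as np

T = 1.0
n = np.arange(1, 100000)
sum_gprime_sq = 2.0 * np.sum(1.0 / (n * T) ** 2)   # two-sided sum over n != 0
E_d_sq = 2.0                                       # d in {0, +/-2} w.p. {1/2, 1/4, 1/4}
print(E_d_sq * sum_gprime_sq)                      # ~6.5797
print(2 * np.pi ** 2 / (3 * T ** 2))               # 2*pi^2/3 ~ 6.5797
```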

 10. Implementation
     Gradient search: τ̂_{i+1} = τ̂_i − μ · ∂J(τ | a)/∂τ |_{τ = τ̂_i}
     Direct calculation of the gradient:
     (1/2) ∂J(τ | a)/∂τ = Σ_i d_i ∫_{−∞}^{∞} r(t) g′(t − iT − τ) dt = Σ_i d_i r_i′.
     Remarks:
     • Susceptible to local minima ⇒ initialize carefully.
     • Block processing.
     • Requires training.
     [Figure: the cost J versus τ over −2T to 2T, showing descent toward τ̂.]
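
A minimal sketch of the gradient search (the step size, iteration count, and derivative-of-sinc helper are our assumptions; it reuses pr4_signal from above):

```python
import numpy as np

def dsinc(x):
    """Derivative of sinc(x) = sin(pi x)/(pi x)."""
    x = np.where(x == 0.0, 1e-12, x)      # avoid 0/0; the derivative at 0 is 0 anyway
    return (np.pi * x * np.cos(np.pi * x) - np.sin(np.pi * x)) / (np.pi * x ** 2)

def gradient_search(t, r, d, tau0=0.0, mu=1e-4, iters=200, T=1.0):
    dt = t[1] - t[0]
    tau = tau0
    for _ in range(iters):
        # (1/2) dJ/dtau = sum_i d_i * integral of r(t) g'(t - iT - tau) dt
        grad = sum(d[i] * np.sum(r * dsinc((t - i * T - tau) / T)) * dt / T
                   for i in range(len(d)))
        tau -= mu * grad                  # descend the cost J
    return tau
```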

 11. Conventional Timing Recovery
     After each sample:
     Step 1. Estimate the residual error ε̂_k, using a timing-error detector (TED).
     Step 2. Update τ̂_k, using a phase-locked loop (PLL).
     [Figure: r(t) sampled at kT + τ̂_k → r_k → Viterbi detector → d̂_k ∈ {0, ±2}; the TED (driven by training symbols d_k or by decisions) produces ε̂_k, and the PLL update closes the loop.]

 12. LMS Timing Recovery
     MMSE cost function: E[ (r_k − d_k)² ],
     where r_k = r(kT + τ̂_k) is the k-th sample and d_k is what we want it to be.
     LMS approach: τ̂_{k+1} = τ̂_k + μ ε̂_k,
     where ε̂_k = −(∂/∂τ̂) (r_k − d_k)² |_{τ̂ = τ̂_k}.
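
In update form, each new sample nudges τ̂ along the negative stochastic gradient. In the sketch below, r_deriv_k stands for the receiver's estimate of ∂r_k/∂τ̂ (in practice obtained from a derivative filter); that name and the step size are our assumptions:

```python
def lms_step(tau_hat, r_k, d_k, r_deriv_k, mu=0.01):
    """One LMS timing update: tau_hat <- tau_hat + mu * eps_hat."""
    eps_hat = -(r_k - d_k) * r_deriv_k   # -(1/2) d/dtau (r_k - d_k)^2; the factor 2 folds into mu
    return tau_hat + mu * eps_hat
```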

 13. LMS TED
     But:
     (1/2) (∂/∂τ̂) (r_k − d_k)² = (r_k − d_k) Σ_i d_i (∂/∂τ̂) g(kT − iT + τ̂ − τ)
                               = (r_k − d_k) Σ_i d_i g′(kT − iT + τ̂ − τ)
                               = (r_k − d_k) Σ_i d_i p_{k−i}(ε_k),
     where p_n(ε) = (∂/∂τ̂) g(nT − ε).

 14. From LMS to Mueller & Müller
     ε̂_k ∝≈ (r_k − d_k)(d_{k−1} − d_{k+1}) + smaller terms
          = r_k d_{k−1} − r_k d_{k+1} − d_k d_{k−1} + d_k d_{k+1}.
     The last two terms are independent of τ. Delay the second term by one sample and eliminate the last two:
     ε̂_k ∝ r_k d_{k−1} − r_{k−1} d_k  ⇒  the Mueller & Müller (M&M) TED.
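
Putting the pieces together, here is a minimal end-to-end sketch of a trained M&M TED driving a first-order PLL over a random-walk offset (all parameter values are illustrative, and the waveform model reuses the sinc-pulse PR4 setup from above):

```python
import numpy as np

def mm_pll(n_sym=2000, alpha=0.02, sigma=0.3, sigma_w=0.005, T=1.0, seed=2):
    rng = np.random.default_rng(seed)
    a = rng.choice([-1, 1], size=n_sym + 2)
    d = a[2:] - a[:-2]                                    # PR4 symbols
    tau = np.cumsum(rng.normal(0.0, sigma_w, n_sym))      # random-walk offsets
    k_all = np.arange(n_sym)
    def sample(t):                                        # noisy r(t) at one instant
        return np.dot(d, np.sinc((t - k_all * T - tau) / T)) + rng.normal(0.0, sigma)
    tau_hat = np.zeros(n_sym + 1)
    r_prev = sample(0.0 + tau_hat[0])
    for k in range(1, n_sym):
        r_k = sample(k * T + tau_hat[k])
        eps_hat = r_k * d[k - 1] - r_prev * d[k]          # trained M&M TED
        tau_hat[k + 1] = tau_hat[k] + alpha * eps_hat     # 1st-order PLL update
        r_prev = r_k
    return tau, tau_hat[:n_sym]

tau, tau_hat = mm_pll()
print("RMS jitter / T:", np.sqrt(np.mean((tau - tau_hat) ** 2)))
```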

 15. S-Curves
     [Figure: average TED output E[ε̂ | ε] versus timing error ε, over −0.5T to 0.5T. The trained M&M and LMS curves coincide and follow the ideal line ε̂ = ε near the origin. SNR = 2/σ² = 10 dB.]

 16. LMS is Noisier
     [Figure: the same S-curves with one-standard-deviation bands; the trained LMS band is wider than the trained M&M band. SNR = 2/σ² = 10 dB.]

 17. An Interpretation of M&M
     Consider the complex signal r(t) + j·r(t − T).
     [Figure: its noiseless trajectory in the complex plane, spanning −2 to 2 on each axis.]
     • It passes through {0, ±2, ±2 ± 2j, ±2j} at times {kT + τ}.
     • More often than not, it does so in a counterclockwise direction.

 18. ANIMATION 2

 19. Sampling Late by 20%
     Let R_k = r_k + j·r_{k−1} and D_k = d_k + j·d_{k−1}.
     The angle θ between R_k and D_k predicts the timing error:
     θ ≈ sin θ = Im{ R_k* D_k / (|R_k| |D_k|) } ∝ r_k d_{k−1} − r_{k−1} d_k  ⇒  M&M.

 20. Decision-Directed TED
     Replace training symbols by decisions {d̂_k}  ⇒  ε̂_k ∝ r_k d̂_{k−1} − r_{k−1} d̂_k.
     Instantaneous decisions:
     • Hard: round r_k to the nearest symbol.
     • Soft: d̂_k = E[d_k | r_k] = 2 sinh(2r_k/σ²) / (e^{2/σ²} + cosh(2r_k/σ²)).
     [Figure: the soft-decision function d̂ versus r at SNRs of 5 dB, 10 dB, and ∞ dB.]
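
A minimal sketch of the two instantaneous decision devices for the 3-level PR4 alphabet {0, ±2} (priors 1/2, 1/4, 1/4); the function names and test values are ours:

```python
import numpy as np

def hard_decision(r):
    """Round to the nearest symbol in {0, +/-2} (thresholds at +/-1)."""
    return np.sign(r) * 2.0 * (np.abs(r) > 1.0)

def soft_decision(r, sigma2):
    """d_hat = E[d | r] = 2 sinh(2r/sigma^2) / (exp(2/sigma^2) + cosh(2r/sigma^2))."""
    return 2 * np.sinh(2 * r / sigma2) / (np.exp(2 / sigma2) + np.cosh(2 * r / sigma2))

sigma2 = 2 / 10 ** (10 / 10)         # SNR = 2/sigma^2 = 10 dB
print(soft_decision(1.5, sigma2))    # between 0 and 2, reflecting uncertainty
```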

 21. [Figure: average TED output E[ε̂ | ε] versus timing error ε for the trained M&M/LMS, hard M&M, and soft M&M TEDs. SNR = 2/σ² = 10 dB.]

 22. [Figure: average TED output E[ε̂ | ε] versus timing error ε for the hard and soft decision-directed TEDs. SNR = 2/σ² = 10 dB.]

 23. Reliability versus Delay
     Three places to tap decisions for the TED:
     A. instantaneous decisions on the raw sample r_k = r(kT + τ̂_k);
     B. decisions at the equalizer output;
     C. decisions d̃_{k−D} from the trellis decoder, available only after a decoding delay of D symbols (the TED then pairs them with the delayed samples r_{k−D}).
     PLL update: τ̂_{k+1} = τ̂_k + α ε̂.
     Inherent trade-off: reliability versus delay.
     • Getting more reliable decisions requires more decoding delay D.
     • Delay decreases agility to timing variations.
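
A minimal sketch (ours, not the slides') of a first-order PLL driven by a TED output that arrives D samples late, modeling this trade-off:

```python
from collections import deque

def pll_step_delayed(tau_hat, fifo, eps_hat_k, alpha, D):
    """One loop update using the D-sample-delayed TED output (0 until available)."""
    fifo.append(eps_hat_k)                              # fresh TED output enters the line
    eps_delayed = fifo.popleft() if len(fifo) > D else 0.0
    return tau_hat + alpha * eps_delayed

fifo, tau_hat = deque(), 0.0
for eps in [0.10, 0.05, -0.02, 0.00, 0.01]:             # example TED outputs
    tau_hat = pll_step_delayed(tau_hat, fifo, eps, alpha=0.02, D=2)
```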

 24. Decision Delay Degrades Performance
     [Figure: RMS timing jitter σ_ε/T (%) versus PLL gain α for decision delays D = 0, 5, 10, and 20; jitter grows with D. Trained M&M, 1st-order PLL, E_b/N_0 = 8 dB, random walk with σ_w/T = 0.5%.]

 25. The Instantaneous-vs-Reliable Trade-Off
     [Figure: timing jitter σ_ε/T versus SNR for three schemes: delay-100 with perfect decisions, delay-20 with perfect decisions, and instantaneous but unreliable decisions. Reliability becomes more important at low SNR.]
     Parameters: 1st-order M&M PLL; random walk with σ_w/T = 0.5%; averaged over 40,000 bits; α optimized for SNR = 10 dB:
     • delay 100: α_opt = 0.006
     • delay 20: α_opt = 0.017
     • delay 0: α_opt = 0.046

 26. Linearized Analysis
     Assume ε̂_k = ε_k + independent noise = τ_k − τ̂_k + n_k.
     ⇒ The 1st-order PLL, τ̂_{k+1} = τ̂_k + α(τ_k − τ̂_k + n_k), is a linear system:
     τ̂ is (τ + noise) passed through the first-order low-pass filter αz⁻¹ / (1 − (1−α)z⁻¹).
     Example: a random walk is white noise w_k filtered by 1/(1 − z⁻¹); propagating w_k and n_k through the loop gives the error ε_k ⇒ derive the optimal α that minimizes σ_ε².
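
Carrying the slide's transfer functions through (our algebra, worth double-checking): ε = [z/(z − 1 + α)] w − [α/(z − 1 + α)] n, so for white w_k and n_k the error variance is σ_ε² = (σ_w² + α²σ_n²)/(α(2 − α)). The sketch below minimizes it numerically over α; the σ_w and σ_n values are illustrative:

```python
import numpy as np

def jitter_var(alpha, sigma_w, sigma_n):
    """sigma_eps^2 for a 1st-order loop, random-walk input, white TED noise."""
    return (sigma_w ** 2 + alpha ** 2 * sigma_n ** 2) / (alpha * (2 - alpha))

alphas = np.linspace(1e-3, 1.0, 1000)
sigma_w, sigma_n = 0.005, 0.05                 # e.g., sigma_w/T = 0.5%
var = jitter_var(alphas, sigma_w, sigma_n)
print("optimal alpha ~", alphas[np.argmin(var)])
```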

 27. The PLL Update
     1st-order PLL: τ̂_{k+1} = τ̂_k + α ε̂_k
     • Already introduced via LMS.
     • Easily motivated intuitively:
       ➢ if ε̂_k is accurate, α = 1 corrects in one step;
       ➢ a smaller α attenuates noise at the cost of slower response.
     2nd-order PLL: τ̂_{k+1} = τ̂_k + α ε̂_k + β Σ_{n=−∞}^{k} ε̂_n
     • Accumulates the TED output to anticipate trends (P+I control).
     • The closed-loop system is a second-order LPF.
     • Faster response; zero steady-state error under a frequency offset.
     [Figure: loop responses versus time.]
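
A minimal sketch of the two loop updates (the gains α and β are assumed values; the running sum is the integral term of P+I control):

```python
def pll_first_order(tau_hat, eps_hat, alpha):
    """tau_hat <- tau_hat + alpha * eps_hat."""
    return tau_hat + alpha * eps_hat

def pll_second_order(tau_hat, eps_hat, acc, alpha, beta):
    """Proportional-plus-integral update; returns (new tau_hat, new accumulator)."""
    acc += eps_hat                                   # running sum of TED outputs
    return tau_hat + alpha * eps_hat + beta * acc, acc
```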

 28. Equivalent Views of the PLL
     Analysis view: sample at {kT + τ̂_k}, where
     • τ̂_{k+1} = τ̂_k + α ε̂_k + β Σ_{n=−∞}^{k} ε̂_n,
     • ε̂_k is the estimate of the timing error at time k.
     Implementation view:
     [Figure: A/D → phase detector (TED) → loop filter α + β/(1 − z⁻¹) → VCO; the accumulation is implicit in the VCO.]

 29. Iterative Timing Recovery
     Motivation
     • Powerful codes ⇒ low SNR ⇒ timing recovery is difficult.
     • The traditional PLL approach ignores the presence of the code.
     Key Questions
     • How can timing recovery exploit the code?
     • What performance gains can be expected?
     • Is it practical?
