

  1. Adaptive Filters Hidayatullah Ahsan Department of Electrical and Computer Engineering, Boise State University April 12, 2010 H. Ahsan (ECE BSU) Adaptive Filters April 12, 2010 1 / 17

  2. Motivation
     Let $d$ be a scalar-valued random variable (desired output signal) with
     $E[d] = 0$, $E[d^2] = \sigma_d^2$, and realization $\{d(i) : i = 0, 1, 2, \ldots\}$.
     Let $u \in \mathbb{R}^M$ (or $\mathbb{C}^M$) be a random vector (input signal) with
     $E[u] = 0$, $R_u = E[u^* u] > 0$, $R_{du} = E[d u^*]$, and realization
     $\{u_i : i = 0, 1, 2, \ldots\}$.
     Problem: we want to solve
         $\min_\omega E[(d - u\omega)^2]$   (1)
     where $\omega$ is the weight vector.

  3. Solution
     The optimal solution of (1) is $\omega^o = R_u^{-1} R_{du}$, which can be approximated
     by the following steepest-descent recursion with constant step size $\mu > 0$:
         $\omega_i = \omega_{i-1} + \mu [R_{du} - R_u \omega_{i-1}]$,  $\omega_{-1}$ = initial guess.
     Remark: $R_u$ and $R_{du}$ must be known, and fixed.
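As a minimal sketch of this recursion (the synthetic covariances and all variable names below are illustrative, not from the slides), one can verify numerically that steepest descent converges to $R_u^{-1} R_{du}$ when $R_u$ and $R_{du}$ are known:

```python
import numpy as np

# Steepest descent toward the optimal solution w_o = R_u^{-1} R_du,
# assuming R_u and R_du are known and fixed (synthetic data for illustration).
M = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((M, M))
R_u = A @ A.T + M * np.eye(M)          # a positive-definite input covariance
w_o = rng.standard_normal(M)           # the optimal weight vector
R_du = R_u @ w_o                       # cross-covariance consistent with w_o

mu = 0.01                              # constant step size
w = np.zeros(M)                        # initial guess w_{-1}
for _ in range(5000):
    w = w + mu * (R_du - R_u @ w)      # w_i = w_{i-1} + mu [R_du - R_u w_{i-1}]

print(np.allclose(w, w_o, atol=1e-6))
```

For convergence the step size must satisfy $\mu < 2/\lambda_{\max}(R_u)$; the value above is well inside that range for this synthetic $R_u$.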

  4. Adaptive Filters
     "Smart systems":
     - Learning: learns the statistics of the signal.
     - Tracking: adjusts its behavior to signal variations.
     Practical reasons for using adaptive filters:
     - Lack of statistical information (mean, variance, auto-correlation, cross-correlation, etc.)
     - Variation in the statistics of the signal
     - Signal with noise moving randomly in a known/unknown bandwidth over time
     Types of adaptive filters:
     - Least Mean Square (LMS) filters
     - Normalized LMS filters
     - Non-canonical LMS filters
     - Recursive Least Square (RLS) filters
     - QR-RLS filters

  5. Least Mean Square (LMS) Filters
     Development using the instantaneous approximation: at time index $i$, approximate
     $R_u = E[u^* u]$ by $\widehat{R}_u = u_i^* u_i$ and $R_{du} = E[d u^*]$ by
     $\widehat{R}_{du} = d(i) u_i^*$.
     The corresponding steepest-descent iteration is
         $\omega_i = \omega_{i-1} + \mu u_i^* [d(i) - u_i \omega_{i-1}]$,  $\omega_{-1}$ = initial guess,
     where $\mu > 0$ is a constant step size.
     Remarks:
     - Also known as the Widrow-Hoff algorithm.
     - Commonly used because of its simplicity.
     - $\mu$ is chosen to be $2^{-m}$ for $m \in \mathbb{N}$.
     Computational cost:
     - Complex-valued signal: $8M + 2$ real multiplications, $8M$ real additions.
     - Real-valued signal: $2M + 1$ real multiplications, $2M$ real additions.
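The LMS update above can be sketched in a few lines of NumPy. This is an illustrative system-identification setup (the unknown system `w_o`, the input stream, and the noise-free desired signal are assumptions for the demo, not from the slides):

```python
import numpy as np

# LMS identification of an unknown length-M FIR system w_o.
rng = np.random.default_rng(1)
M = 4
w_o = rng.standard_normal(M)           # unknown system to identify
N = 20000
x = rng.standard_normal(N)             # input stream
mu = 2 ** -6                           # step size chosen as 2^-m

w = np.zeros(M)                        # initial guess w_{-1}
for i in range(M - 1, N):
    u = x[i - M + 1:i + 1][::-1]       # regressor u_i (most recent sample first)
    d = u @ w_o                        # desired output d(i) (noise-free here)
    e = d - u @ w                      # a priori error d(i) - u_i w_{i-1}
    w = w + mu * e * u                 # LMS update: w_i = w_{i-1} + mu u_i^* e(i)

print(np.allclose(w, w_o, atol=1e-6))
```

With a noise-free desired signal the weights converge to `w_o`; with observation noise they instead hover around it with a misadjustment proportional to $\mu$.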

  6. Least Mean Square (LMS) Filters: An Illustration
     [Figure: An Illustration for Least Mean Square Filter — the input signal u is
     filtered by the weights $\omega$, and the output is compared against the
     desired output signal d (with interference) to form the error.]

  7. Least Mean Square (LMS) Filters: An Application (1/3)
     [Block diagram: a uniform random number source drives a channel model
     producing d(k); the signals x(k) and d(k) feed an LMS adaptive filter whose
     error e(k) is displayed on a WaveScope.]

  8. Least Mean Square (LMS) Filters: An Application (2/3)
     [Block diagram: a 4-tap LMS adaptive filter implementation — a delay line
     ($z^{-1}$) on x(k), four tap multipliers and adders forming the output y(k),
     the error e(k) = d(k) - y(k), and four weight accumulators updated through
     multipliers scaled by the step-size constant 0.0400390625.]

  9. Least Mean Square (LMS) Filters: An Application (Error) (3/3)
     [Plot: the error signal e(k), with amplitude between roughly -1 and 0.4,
     over a time axis from 0 to $1 \times 10^{-4}$ s (time offset 0).]

  10. Normalized Least Mean Square (NLMS) Filters
      Solution to (1) using the regularized Newton recursion
          $\omega_i = \omega_{i-1} + \mu(i) [\varepsilon(i) I + R_u]^{-1} [R_{du} - R_u \omega_{i-1}]$,  $\omega_{-1}$ = initial guess,
      where $\mu(i) > 0$ is the step size and $\varepsilon(i)$ is the regularization factor.
      With $\mu(i) = \mu > 0$ and $\varepsilon(i) = \varepsilon$ fixed for all $i$, using the
      instantaneous approximation,
          $\omega_i = \omega_{i-1} + \mu [\varepsilon I + u_i^* u_i]^{-1} u_i^* [d(i) - u_i \omega_{i-1}]$
              $= \omega_{i-1} + \dfrac{\mu}{\varepsilon + \|u_i\|^2} u_i^* [d(i) - u_i \omega_{i-1}]$.
      Computational cost:
      - Complex-valued signal: $10M + 2$ real multiplications, $10M$ real additions, and one real division.
      - Real-valued signal: $3M + 1$ real multiplications, $3M$ real additions, and one real division.
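The normalized update differs from plain LMS only in the data-dependent step size $\mu/(\varepsilon + \|u_i\|^2)$. A minimal sketch, with an assumed system-identification setup (all names below are illustrative):

```python
import numpy as np

# NLMS update: w_i = w_{i-1} + mu/(eps + ||u_i||^2) * u_i^* [d(i) - u_i w_{i-1}]
rng = np.random.default_rng(2)
M = 4
w_o = rng.standard_normal(M)           # unknown system (illustrative)
mu, eps = 0.5, 1e-6                    # step size and regularization factor

w = np.zeros(M)                        # initial guess w_{-1}
for i in range(10000):
    u = rng.standard_normal(M)         # regressor u_i
    d = u @ w_o                        # desired output d(i) (noise-free)
    e = d - u @ w                      # a priori error
    w = w + (mu / (eps + u @ u)) * e * u

print(np.allclose(w, w_o, atol=1e-6))
```

Because the step is normalized by $\|u_i\|^2$, convergence behavior is insensitive to the input signal's power, at the cost of the extra division noted in the slide.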

  11. Other LMS-Type Techniques
      Power normalization: replace $\dfrac{\mu}{\varepsilon + \|u_i\|^2}$ with
      $\dfrac{\mu / M}{\varepsilon / M + \|u_i\|^2 / M}$, where $M$ is the order of the filter.
      Definition: non-blind algorithms are so called because they employ a reference
      sequence $\{d(i) : i = 0, 1, 2, \ldots\}$.
          Non-Blind Algorithms    Blind Algorithms
          Leaky LMS Algorithm     CMA1-2, NCMA Algorithm
          LMF Algorithm           CMA2-2 Algorithm
          LMMN Algorithm          RCA Algorithm
                                  MMA Algorithm

  12. Non-Canonical Least Mean Square (LMS) Filters
      [Block diagram: a non-canonical 4-tap LMS filter implementation — tap
      multipliers and adders forming y(k), delay elements ($z^{-1}$), the error
      e(k) = d(k) - y(k), and weight accumulators updated through multipliers
      scaled by the step-size constant 0.0400390625.]

  13. Recursive Least Square (RLS) Filters
      Solution to (1) using the regularized Newton recursion
          $\omega_i = \omega_{i-1} + \mu(i) [\varepsilon(i) I + R_u]^{-1} [R_{du} - R_u \omega_{i-1}]$,  $\omega_{-1}$ = initial guess,
      where $\mu(i) > 0$ is the step size and $\varepsilon(i)$ is the regularization factor.
      Approximate $R_u$ by $\widehat{R}_u = \frac{1}{i+1} \sum_{j=0}^{i} \lambda^{i-j} u_j^* u_j$,
      i.e., by an exponential average of previous regressors.
      If $\lambda = 1$, all regressors have equal weight. If $0 \le \lambda < 1$, recent
      regressors are more relevant and remote regressors are forgotten. Generally
      $\lambda$ is chosen so that $0 \le \lambda < 1$; therefore RLS has a memory, or
      forgetting, property.
      Assume $\mu(i) = \frac{1}{i+1}$ and $\varepsilon(i) = \frac{\lambda^{i+1} \varepsilon}{i+1}$ for all $i$.
      Then $\varepsilon(i) \to 0$ as $i \to \infty$, i.e., as time increases the
      regularization factor disappears.

  14. Recursive Least Square (RLS) Filters
      Development using the instantaneous approximation:
          $\omega_i = \omega_{i-1} + \left[\lambda^{i+1} \varepsilon I + \sum_{j=0}^{i} \lambda^{i-j} u_j^* u_j\right]^{-1} u_i^* [d(i) - u_i \omega_{i-1}]$.
      Define
          $\Phi_i = \lambda^{i+1} \varepsilon I + \sum_{j=0}^{i} \lambda^{i-j} u_j^* u_j$;
      then
          $\Phi_i = \lambda \Phi_{i-1} + u_i^* u_i$,  $\Phi_{-1} = \varepsilon I$.
      The matrix inversion formula for $P_i = \Phi_i^{-1}$ gives
          $P_i = \lambda^{-1} \left[ P_{i-1} - \dfrac{\lambda^{-1} P_{i-1} u_i^* u_i P_{i-1}}{1 + \lambda^{-1} u_i P_{i-1} u_i^*} \right]$,  $P_{-1} = \varepsilon^{-1} I$.
      With this simplification we obtain the RLS algorithm
          $\omega_i = \omega_{i-1} + P_i u_i^* [d(i) - u_i \omega_{i-1}]$,  $i = 0, 1, 2, \ldots$
      Computational Cost
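The $\Phi_i$ and $P_i$ recursions above translate directly into code. A sketch under an assumed noise-free identification setup (the system `w_o` and parameter values are illustrative):

```python
import numpy as np

# RLS: propagate P_i = Phi_i^{-1} with the matrix inversion formula, then
# update w_i = w_{i-1} + P_i u_i^* [d(i) - u_i w_{i-1}].
rng = np.random.default_rng(3)
M = 4
w_o = rng.standard_normal(M)           # unknown system (illustrative)
lam, eps = 0.995, 1e-2                 # forgetting factor, regularization

w = np.zeros(M)                        # initial guess w_{-1}
P = np.eye(M) / eps                    # P_{-1} = eps^{-1} I
for i in range(2000):
    u = rng.standard_normal((1, M))    # row regressor u_i
    d = float(u @ w_o)                 # desired output d(i) (noise-free)
    Pu = P @ u.T                       # P_{i-1} u_i^*
    # P_i = lam^{-1} [P_{i-1} - P_{i-1} u_i^* u_i P_{i-1} / (lam + u_i P_{i-1} u_i^*)]
    P = (P - (Pu @ (u @ P)) / (lam + float(u @ Pu))) / lam
    w = w + (P @ u.T).ravel() * (d - float(u @ w))

print(np.allclose(w, w_o, atol=1e-6))
```

Each iteration costs $O(M^2)$ operations, versus $O(M)$ for LMS; the payoff is a much faster convergence rate.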

  15. Least-Squares Problem
      Replace $E\left[|d - u\omega|^2\right]$ by $\frac{1}{N} \sum_{i=0}^{N-1} |d(i) - u_i \omega|^2$;
      then problem (1) is modified to
          $\min_\omega \sum_{i=0}^{N-1} |d(i) - u_i \omega|^2 = \min_\omega \|y - H\omega\|^2$   (2)
      where
          $y = \mathrm{col}\{d(0), d(1), \ldots, d(N-1)\}$ and
          $H = \mathrm{col}\{u_0, u_1, \ldots, u_{N-1}\}$ (the rows of $H$ are the regressors $u_i$).
      Weighted least-squares: let $W$ be a weighting matrix; then (2) can be modified to
          $\min_\omega (y - H\omega)^* W (y - H\omega)$.
      Regularized least-squares: let $\Pi > 0$ be a regularization matrix; then (2) can be
      modified to
          $\min_\omega \left[ \omega^* \Pi \omega + \|y - H\omega\|^2 \right]$.
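Problem (2) and its regularized variant have direct closed-form solutions; a short sketch with synthetic data (the data and names are assumptions for illustration — the regularized solution uses the normal-equations form $(\Pi + H^* H)^{-1} H^* y$):

```python
import numpy as np

# Solve min_w ||y - H w||^2 and the regularized variant
# min_w [w* Pi w + ||y - H w||^2] on synthetic data.
rng = np.random.default_rng(4)
N, M = 50, 4
H = rng.standard_normal((N, M))        # rows are the regressors u_i
w_true = rng.standard_normal(M)
y = H @ w_true                         # d(0), ..., d(N-1), noise-free

# Unregularized least squares.
w_ls, *_ = np.linalg.lstsq(H, y, rcond=None)

# Regularized least squares: (Pi + H^T H)^{-1} H^T y with a small Pi > 0.
Pi = 1e-8 * np.eye(M)
w_reg = np.linalg.solve(Pi + H.T @ H, H.T @ y)

print(np.allclose(w_ls, w_true), np.allclose(w_reg, w_true))
```

The regularization matrix $\Pi$ biases the solution slightly toward zero but keeps the problem well-posed even when $H$ is rank deficient.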

  16. Not Presented
      - Weighted, regularized, and weighted-and-regularized least-squares algorithms
      - Array methods for adaptive filters
      - Givens rotation
      - CORDIC cells
      - QR-recursive least squares algorithm

  17. References
      - Dr. Rafla's notes for ECE 635
      - A. H. Sayed, Adaptive Filters
