Optimal and Adaptive Filtering
Murat Üney (M.Uney@ed.ac.uk)
Institute for Digital Communications (IDCOM)
26/06/2017
Table of Contents
1 Optimal Filtering
    Optimal filter design
    Application examples
    Optimal solution: Wiener-Hopf equations
    Example: Wiener equaliser
2 Adaptive filtering
    Introduction
    Recursive Least Squares Adaptation
    Least Mean Square Algorithm
    Applications
3 Optimal signal detection
    Application examples and optimal hypothesis testing
    Additive white and coloured noise
4 Summary
Optimal filter design

Figure 1: Optimal filtering scenario. The observation sequence is passed through a linear time-invariant system to produce the estimation sequence.

$y(n)$: observation related to a stationary signal of interest $x(n)$.
$h(n)$: the impulse response of an LTI estimator.
$\hat{x}(n)$: estimate of $x(n)$, given by
$$\hat{x}(n) = h(n) * y(n) = \sum_{i=-\infty}^{\infty} h(i)\, y(n-i).$$
Optimal filter design

Find $h(n)$ with the best error performance:
$$e(n) = x(n) - \hat{x}(n) = x(n) - h(n) * y(n).$$
The error performance is measured by the mean squared error (MSE)
$$\xi = E\left[\big(e(n)\big)^2\right].$$
Optimal filter design

The MSE is a function of $h(n)$, i.e., of the coefficient sequence
$$h = [\,\cdots, h(-2), h(-1), h(0), h(1), h(2), \cdots\,],$$
$$\xi(h) = E\left[\big(e(n)\big)^2\right] = E\left[\big(x(n) - h(n) * y(n)\big)^2\right].$$
Thus, the optimal filtering problem is
$$h_{\mathrm{opt}} = \arg\min_{h}\ \xi(h).$$
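As a minimal numerical sketch of this objective (assuming a Python/NumPy setting and a simple signal model $y(n) = x(n) + v(n)$ that is not specified on this slide), the MSE $\xi(h)$ can be estimated empirically for any candidate FIR filter and compared across candidates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed signal model for illustration only: x(n) is an AR(1) signal
# observed in additive white noise, y(n) = x(n) + v(n).
N = 10_000
x = np.zeros(N)
for n in range(1, N):
    x[n] = 0.95 * x[n - 1] + 0.3 * rng.standard_normal()
y = x + 0.5 * rng.standard_normal(N)

def empirical_mse(h, x, y):
    """Sample estimate of xi(h) = E[(x(n) - (h * y)(n))^2] for an FIR h."""
    x_hat = np.convolve(y, h, mode="same")   # finite-length approximation of h(n) * y(n)
    return np.mean((x - x_hat) ** 2)

# The MSE is a quadratic function of the filter taps; compare two candidates.
print(empirical_mse(np.array([1.0]), x, y))   # identity filter: MSE is roughly the noise power
print(empirical_mse(np.ones(5) / 5, x, y))    # 5-tap moving average: lower MSE for this model
```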
Application examples

1) Prediction, interpolation and smoothing of signals
  - d = 1 (prediction): e.g., prediction for anti-aircraft fire control.
  - d = -1/2 (interpolation) and d = -1 (smoothing): e.g., signal denoising applications, estimation of missing data points.
Application examples

2) System identification

Figure 2: System identification using a training sequence t(n) from an ergodic and stationary ensemble.

  - Example: echo cancellation in full-duplex data transmission. [Figure: modem on a two-wire line; the transmitted signal $y(n)$ drives a filter that produces a synthesised echo $\hat{x}(n)$, which is subtracted from the received signal plus echo $x(n)$ to leave $e(n)$ for the receiver; the echo path arises in the hybrid transformer.]
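A hedged sketch of the block diagram in Figure 2 (the training sequence, the unknown system h_true and the noise level below are all illustrative assumptions): feed a known training sequence t(n) through the unknown system, observe its noisy output, and fit an M-tap FIR model by least squares, the finite-length empirical counterpart of the optimal filter.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(1)

# Unknown system to be identified (an assumption made for this illustration).
h_true = np.array([0.6, 0.3, -0.2, 0.1])

# White training sequence t(n) and the noisy output of the unknown system.
N = 5_000
t = rng.standard_normal(N)
d = np.convolve(t, h_true)[:N] + 0.05 * rng.standard_normal(N)

# Least-squares FIR fit: rows of T are [t(n), t(n-1), ..., t(n-M+1)].
M = 6
T = toeplitz(t, np.r_[t[0], np.zeros(M - 1)])      # N x M convolution (data) matrix
h_hat, *_ = np.linalg.lstsq(T, d, rcond=None)

print(np.round(h_hat, 3))   # first four taps should be close to h_true, the rest near 0
```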
Application examples

3) Inverse system identification

Figure 3: Inverse system identification using x(n) as a training sequence.

  - Example: channel equalisation in digital communication systems.
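A sketch of the same idea for inverse system identification (the channel, equaliser length and decision delay below are illustrative assumptions, not values from the slides): train an FIR equaliser so that, applied to the channel output, it reproduces a delayed copy of the known training symbols.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(2)

# Assumed dispersive channel and noise level, for illustration only.
channel = np.array([1.0, 0.5, 0.25])
N, M, delay = 5_000, 11, 5     # training length, equaliser taps, decision delay

x = rng.choice([-1.0, 1.0], size=N)                   # known training symbols
y = np.convolve(x, channel)[:N] + 0.1 * rng.standard_normal(N)

# Fit the equaliser so that (h_eq * y)(n) approximates x(n - delay).
Y = toeplitz(y, np.r_[y[0], np.zeros(M - 1)])         # N x M data matrix
target = np.concatenate([np.zeros(delay), x[:N - delay]])   # x delayed by `delay` samples
h_eq, *_ = np.linalg.lstsq(Y, target, rcond=None)

decisions = np.sign(Y @ h_eq)
print("training symbol error rate:", np.mean(decisions[delay:] != target[delay:]))
```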
Optimal solution: Normal equations

Consider the MSE $\xi(h) = E\left[\big(e(n)\big)^2\right]$.
The optimal filter satisfies $\nabla \xi(h)\big|_{h_{\mathrm{opt}}} = 0$. Equivalently, for all $j = \ldots, -2, -1, 0, 1, 2, \ldots$,
$$\frac{\partial \xi}{\partial h(j)}
= E\left[ 2\, e(n)\, \frac{\partial e(n)}{\partial h(j)} \right]
= E\left[ 2\, e(n)\, \frac{\partial}{\partial h(j)}\Big( x(n) - \sum_{i=-\infty}^{\infty} h(i)\, y(n-i) \Big) \right]
= E\left[ 2\, e(n)\, \frac{\partial}{\partial h(j)}\big( -h(j)\, y(n-j) \big) \right]
= -2\, E\left[ e(n)\, y(n-j) \right].$$
Hence, the optimal filter solves the "normal equations"
$$E\left[ e(n)\, y(n-j) \right] = 0, \qquad j = \ldots, -2, -1, 0, 1, 2, \ldots$$
Optimal solution: Wiener-Hopf equations

The error of $h_{\mathrm{opt}}$ is orthogonal to its observations, i.e., for all $j \in \mathbb{Z}$
$$E\left[ e_{\mathrm{opt}}(n)\, y(n-j) \right] = 0,$$
which is known as "the principle of orthogonality".

Furthermore,
$$E\left[ e_{\mathrm{opt}}(n)\, y(n-j) \right]
= E\left[ \Big( x(n) - \sum_{i=-\infty}^{\infty} h_{\mathrm{opt}}(i)\, y(n-i) \Big)\, y(n-j) \right]
= E\left[ x(n)\, y(n-j) \right] - \sum_{i=-\infty}^{\infty} h_{\mathrm{opt}}(i)\, E\left[ y(n-i)\, y(n-j) \right] = 0.$$

Result (Wiener-Hopf equations)
$$\sum_{i=-\infty}^{\infty} h_{\mathrm{opt}}(i)\, r_{yy}(i-j) = r_{xy}(j)$$
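The infinite two-sided sum cannot be solved directly from data, but truncating $h$ to $M$ causal taps turns the Wiener-Hopf equations into an $M \times M$ Toeplitz linear system in the sample correlations. A minimal sketch (the function name and the use of biased correlation estimates are my own choices):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def fir_wiener(x, y, M):
    """Solve the Wiener-Hopf equations for an M-tap causal FIR filter.

    With h truncated to taps 0..M-1 the equations become
        sum_i h(i) r_yy(i - j) = r_xy(j),   j = 0, ..., M-1,
    i.e. R_yy h = r_xy with R_yy symmetric Toeplitz.  Ensemble correlations
    are replaced by (biased) sample estimates from the data records x, y.
    """
    N = len(y)
    r_yy = np.array([y[k:] @ y[:N - k] / N for k in range(M)])   # r_yy(0), ..., r_yy(M-1)
    r_xy = np.array([x[k:] @ y[:N - k] / N for k in range(M)])   # r_xy(0), ..., r_xy(M-1)
    return solve_toeplitz((r_yy, r_yy), r_xy)    # Levinson-type Toeplitz solver
```

With the noisy AR(1) data from the earlier sketch, for instance, `fir_wiener(x, y, 8)` gives an 8-tap approximation of the causal FIR Wiener filter for that signal model.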
The Wiener filter

The Wiener-Hopf equations can be solved indirectly, in the complex spectral ($z$-) domain:
$$h_{\mathrm{opt}}(n) * r_{yy}(n) = r_{xy}(n) \quad \leftrightarrow \quad H_{\mathrm{opt}}(z)\, P_{yy}(z) = P_{xy}(z).$$

Result (The Wiener filter)
$$H_{\mathrm{opt}}(z) = \frac{P_{xy}(z)}{P_{yy}(z)}$$

The optimal filter has an infinite impulse response (IIR) and, in general, is non-causal.
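A hedged sketch of this result evaluated on the unit circle, $z = e^{j\omega}$, using Welch-type spectral estimates from finite data (the helper name is mine, and the conjugation convention of scipy.signal.csd should be checked against the definition of $r_{xy}$ used here):

```python
import numpy as np
from scipy.signal import csd, welch

def wiener_frequency_response(x, y, nperseg=256):
    """Unconstrained Wiener filter H_opt(e^{jw}) = P_xy / P_yy on the unit circle,
    with the true spectra replaced by Welch estimates from the records x, y."""
    f, P_yy = welch(y, nperseg=nperseg)      # estimate of P_yy(e^{jw})
    _, P_xy = csd(x, y, nperseg=nperseg)     # estimate of the cross-spectrum P_xy(e^{jw})
    return f, P_xy / P_yy

# The result is the frequency response of the generally non-causal, IIR optimal
# filter; it can be inspected (e.g. plotted), but it is not directly realisable.
```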
Causal Wiener filter

We project the unconstrained solution $H_{\mathrm{opt}}(z)$ onto the set of causal and stable IIR filters by a two-step procedure:

1. Factorise $P_{yy}(z)$ into a causal (right-sided) part $Q_{yy}(z)$ and an anti-causal (left-sided) part $Q^{*}_{yy}(1/z^{*})$, i.e.,
$$P_{yy}(z) = \sigma_y^2\, Q_{yy}(z)\, Q^{*}_{yy}(1/z^{*}).$$
2. Select the causal (right-sided) part of $P_{xy}(z) / Q^{*}_{yy}(1/z^{*})$.

Result (Causal Wiener filter)
$$H^{+}_{\mathrm{opt}}(z) = \frac{1}{\sigma_y^2\, Q_{yy}(z)} \left[ \frac{P_{xy}(z)}{Q^{*}_{yy}(1/z^{*})} \right]_{+}$$
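As a small illustration of the causal-part operation $[\,\cdot\,]_{+}$ (the rational function below is an assumed example, not one taken from these slides): for $|a| < 1$ and $|b| < 1$ the two-sided inverse transform is
$$\frac{1}{(1-az^{-1})(1-bz)} \;\longleftrightarrow\; \frac{1}{1-ab}\Big( a^{n}u(n) + b^{-n}u(-n-1) \Big),$$
so retaining only the right-sided ($n \ge 0$) part gives
$$\left[\frac{1}{(1-az^{-1})(1-bz)}\right]_{+} = \frac{1}{1-ab}\cdot\frac{1}{1-az^{-1}}.$$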