Short Overview on Blind Equalization

Philippe Ciblat, Télécom ParisTech
Outline

1. Introduction
   − General problem
   − Problem classification
   − Considered problem
2. Statistical framework
3. High-Order Statistics (HOS) algorithms
   − Constant Modulus Algorithm (CMA)
   − Adaptive versions
4. Second-Order Statistics (SOS) algorithms
   − Covariance matching algorithm
   − (Deterministic) Maximum-Likelihood algorithm
   − Some sub-optimal algorithms
5. Other types of algorithms
6. Numerical illustrations: optical-fiber communications use-case
Part 1: Introduction
General model

Unknown signal mixture with additive noise:

    y(n) = fct(s(n)) + w(n)        (1)

with
− y(n): observation vector at time-index n
− w(n): zero-mean white Gaussian noise

Goal: recover the multivariate input s(n) given
− only a set of observations y(n)
− a statistical model for the noise

Blind techniques: fct is unknown, and no deterministic knowledge of s(n) is available to help estimate it.
Problem classification

s(n) belongs to a discrete set: equalization
− Military applications: passive listening
− Civilian applications: no training sequence
  ◦ Goal 1: remove the header and increase the data rate (be careful: the raw data rate stays the same)
  ◦ Goal 2: track very fast variations of the wireless channel (be careful: the set of observations is small)

s(n) belongs to an uncountable set: source separation
− Audio (cocktail party)
− Hyperspectral imaging
− Cosmology (Cosmic Microwave Background map with Planck data)
Problem classification (cont'd)

In the context of Blind Source Separation (BSS):

Instantaneous mixture:
    y(n) = H s(n) + w(n)
with an unknown matrix H

Convolutive mixture:
    y(n) = ∑_{ℓ=0}^{L} H(ℓ) s(n−ℓ) + w(n)
with an unknown set of matrices H(ℓ)

Nonlinear mixture: fct is not linear

BSS field
− Vast community, mainly working on the instantaneous case
− Goal: recover s(n) up to scale and permutation operators
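A small numerical illustration of the BSS ambiguity (my own sketch, not from the slides): with an instantaneous mixture, H s(n) is indistinguishable from (H (DP)^{-1}) (DP s(n)), so the sources can only be recovered up to scaling and permutation. The matrix sizes, permutation, and scaling below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(6)
K, N = 3, 5                                 # 3 sources, 5 time samples (toy sizes)

H = rng.standard_normal((K, K))             # unknown mixing matrix (assumed full rank)
S = rng.standard_normal((K, N))             # source signals s(n), stacked as columns

P = np.eye(K)[[2, 0, 1]]                    # a permutation matrix
D = np.diag([2.0, -0.5, 3.0])               # a diagonal scaling matrix

S_alt = D @ P @ S                           # permuted and rescaled sources
H_alt = H @ np.linalg.inv(D @ P)            # compensating mixing matrix

# Same observations from two different (mixing, sources) pairs:
print(np.allclose(H @ S, H_alt @ S_alt))    # True -> ambiguity up to scale and permutation
```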
Considered Problem

Go back to equalization (done in a blind manner)

Unlike BSS, the sources are strongly structured:
− discrete set (often a lattice, i.e., a Z-module)
− discrete set with specific properties: constant modulus if PSK
− man-made source (can even be modified to help the blind equalization step)

Classification problem rather than a regression problem

First questions
− Do we have an input/output model given by Eq. (1)?
− If yes, what is the shape of the mixture given by fct?
Signal model

Single-user context
Single-antenna context
Multipath propagation channel

Equivalent discrete-time channel model (by sampling the EM wave at the symbol rate):

    y(n) = ∑_ℓ h(ℓ) s(n−ℓ) + w(n),  ∀ n = 0, ..., N−1   ⇔   y = Hs + w

where H is a band-Toeplitz matrix and N is the frame size
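A minimal numerical sketch of this discrete-time model (not from the talk): it builds the band-Toeplitz convolution matrix H for an assumed 3-tap channel, draws QPSK symbols, and checks that Hs + w matches the convolution form. The channel taps, frame size, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200                                   # frame size (assumption)
h = np.array([1.0, 0.5 - 0.3j, 0.2j])     # illustrative 3-tap channel
L = len(h) - 1

# QPSK symbols s(n) and the band-Toeplitz channel matrix H
s = (rng.integers(0, 2, N) * 2 - 1 + 1j * (rng.integers(0, 2, N) * 2 - 1)) / np.sqrt(2)
H = np.zeros((N, N), dtype=complex)
for ell in range(L + 1):
    H += h[ell] * np.eye(N, k=-ell)       # shifted lower diagonals -> banded Toeplitz

sigma_w = 0.1
w = sigma_w * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
y = H @ s + w                             # y = Hs + w

# Sanity check: matrix form equals the sum y(n) = sum_l h(l) s(n-l) + w(n)
y_conv = np.convolve(s, h)[:N] + w
print(np.allclose(y, y_conv))             # True
```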
Signal model (cont'd)

Sampling at the symbol rate leads to
− no information loss on the symbol sequence
− but information loss on the electro-magnetic wave, and probably on the channel impulse response (our goal here)

Go back to the "true" received signal:

    y(t) = ∑_n s(n) h(t − nT_s) + w(t),  ∀ t ∈ ℝ

with
− s(n): symbol sequence
− w(t): white Gaussian noise
− h(t): filter coming from the channel and the transmitter

    occupied band = [−(1+ρ)/(2T_s), (1+ρ)/(2T_s)]

with the roll-off factor ρ ∈ (0, 1]
Signal framework

Shannon-Nyquist sampling theorem ⇒ T = T_s/2

Scalar framework: no filtering anymore

    ỹ(n) = y(nT) = ∑_k s(k) h(nT_s/2 − kT_s) + w̃(n)

Vector framework: SIMO filtering

    y_1(n) = y(nT_s)         = h_1 ⋆ s(n) + w_1(n)
    y_2(n) = y(nT_s + T_s/2) = h_2 ⋆ s(n) + w_2(n)

with h_1(n) = h(nT_s) and h_2(n) = h(nT_s + T_s/2)

[Block diagram: s(n) → h(t) → sampling at t = nT_s and t = nT_s + T_s/2 → y_1(n), y_2(n)]
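A small sketch of the polyphase idea (my own illustration, with an assumed channel already sampled at T_s/2): sampling at T_s/2 and de-interleaving the even and odd samples yields two symbol-rate channels h_1, h_2 driven by the same symbol stream.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed channel impulse response sampled at T_s/2 (illustrative taps)
h_half = np.array([0.2, 1.0, 0.7, -0.4, 0.15, 0.05])

N = 1000
s = rng.choice([-1.0, 1.0], N)               # BPSK symbols at rate 1/T_s

# Place one symbol every T_s on the T_s/2 grid, then filter (noiseless for clarity)
s_up = np.zeros(2 * N)
s_up[::2] = s
y_half = np.convolve(s_up, h_half)[:2 * N]   # received signal at rate 2/T_s

# De-interleave: even samples -> y_1, odd samples -> y_2 (SIMO model)
y1, y2 = y_half[::2], y_half[1::2]
h1, h2 = h_half[::2], h_half[1::2]           # h_1(n) = h(nT_s), h_2(n) = h(nT_s + T_s/2)

print(np.allclose(y1, np.convolve(s, h1)[:N]))   # True: y_1 = h_1 * s
print(np.allclose(y2, np.convolve(s, h2)[:N]))   # True: y_2 = h_2 * s
```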
Problems to be solved

Goals: estimate
1. Scalar case: h_1 given y_1(n) only and h_2 given y_2(n) only, i.e., each working with the symbol-rate model y = Hs + w of the "Signal model" slide
2. Vector case: h = [h_1, h_2]^T given y(n) = [y_1(n), y_2(n)]^T jointly

Glossary:
− without training sequence: Non-Data-Aided (NDA), or blind/unsupervised
− with training sequence: Data-Aided (DA), or supervised
− with decision feedback: Decision-Directed (DD)
Part 2: Statistical framework
Available data statistics

Only {y(n)}_{n=0}^{N−1} is available to estimate H

What is an algorithm here?
− a function depending only on {y(n)}_{n=0}^{N−1} ...
− ... i.e., a statistic of the random process y(n):

    Θ({y(n)}_{n=0}^{N−1})

Choice of Θ:
− a P-th order polynomial: moments of the random process
  Question: which orders are relevant? → listen to the talk
− a Deep Neural Network (DNN)
  Question: how to compute the weights? → see Slide 37
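As an illustration of the moment-based choice of Θ (my own sketch, not from the slides): empirical absolute moments of the observations up to order P, here P = 4, computed from a toy QPSK-through-channel signal. The channel taps and noise level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy observations: QPSK symbols through an assumed 2-tap channel plus noise
N = 5000
s = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
y = np.convolve(s, [1.0, 0.4 - 0.2j])[:N]
y = y + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

def theta_moments(y, P=4):
    """Empirical absolute moments E[|y(n)|^p], p = 1..P: one possible statistic Theta."""
    return np.array([np.mean(np.abs(y) ** p) for p in range(1, P + 1)])

print(theta_moments(y))   # a P-dimensional statistic of {y(n)}
```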
A not-so-toy example

    y(n) = Hs(n) + w(n)

with
− y(n): a vector of length L
− H: an L × L square full-rank matrix
− s(n), w(n): i.i.d. circularly-symmetric zero-mean Gaussian vectors with variances σ_s² and σ_w² respectively

Results
− y(n) is Gaussian with zero mean and correlation matrix
      R(H) = σ_s² HH^H + σ_w² Id_L
− R(H) = R(HU) for any unitary matrix U
− Principal Component Analysis (PCA) is a deadlock
− s(n) has to be non-Gaussian ⇒ Independent Component Analysis (ICA)
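A quick numerical check of the rotation ambiguity (my own sketch, with a randomly drawn H and unitary U): the correlation matrices R(H) and R(HU) coincide, so second-order statistics cannot distinguish H from HU.

```python
import numpy as np

rng = np.random.default_rng(3)
L, sigma_s2, sigma_w2 = 4, 1.0, 0.1     # illustrative sizes and variances

# Random full-rank mixing matrix H and a random unitary U (QR of a Gaussian matrix)
H = rng.standard_normal((L, L)) + 1j * rng.standard_normal((L, L))
U, _ = np.linalg.qr(rng.standard_normal((L, L)) + 1j * rng.standard_normal((L, L)))

def corr(H):
    """Theoretical correlation matrix R(H) = sigma_s^2 H H^H + sigma_w^2 Id_L."""
    return sigma_s2 * H @ H.conj().T + sigma_w2 * np.eye(L)

print(np.allclose(corr(H), corr(H @ U)))   # True: PCA cannot tell H from HU
```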
Scalar case

Go back to blind equalization:

    y(n) = h ⋆ s(n) + w(n)

As y(n) is stationary, the second-order information lies in

    S(e^{2iπf}) = ∑_m r(m) e^{−2iπfm} = σ_s² |h(e^{2iπf})|² + σ_w²

with
− r(m) = E[y(n+m) y(n)]
− h(z) = ∑_ℓ h(ℓ) z^{−ℓ}, with z = e^{2iπf}

Results
Lack of information on the channel impulse response, except if
− h(z) is minimum-phase (h(z) ≠ 0 if |z| > 1)
− the signal is non-stationary
− the signal is non-Gaussian (by resorting to high-order statistics): OK for PAM, PSK, QAM sources
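A sketch of why second-order statistics are phase-blind (my own example with two assumed 2-tap channels): h_min(z) = 1 + 0.5 z^{−1} (zero inside the unit circle) and h_max(z) = 0.5 + z^{−1} (zero outside) share the same |h(e^{2iπf})|², so the filtered sequences have the same autocorrelation and cannot be told apart from second-order information alone.

```python
import numpy as np

rng = np.random.default_rng(4)

h_min = np.array([1.0, 0.5])   # zero at z = -0.5 (minimum phase)
h_max = np.array([0.5, 1.0])   # zero at z = -2   (maximum phase)

# Same magnitude response |h(e^{2i*pi*f})|^2 on a frequency grid
f = np.linspace(0, 1, 256, endpoint=False)
Hmin = h_min[0] + h_min[1] * np.exp(-2j * np.pi * f)
Hmax = h_max[0] + h_max[1] * np.exp(-2j * np.pi * f)
print(np.allclose(np.abs(Hmin) ** 2, np.abs(Hmax) ** 2))   # True

# Empirical autocorrelations r(m) of the two filtered i.i.d. BPSK sequences match
s = rng.choice([-1.0, 1.0], 200000)
def autocorr(y, m_max=3):
    return np.array([np.mean(y[m:] * y[:len(y) - m]) for m in range(m_max + 1)])
r_min = autocorr(np.convolve(s, h_min, mode='valid'))
r_max = autocorr(np.convolve(s, h_max, mode='valid'))
print(np.round(r_min, 2), np.round(r_max, 2))   # both ~ [1.25, 0.5, 0, 0]
```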
Scalar case: the pavement of the HOS road

Let X = [X_1, ..., X_N] be a real-valued random vector of length N.

Characteristic function of the first kind (MGF)

    Ψ_X : ω ↦ E[e^{iω^T X}] = ∫ p_X(x) e^{iω^T x} dx

Moments (of order s) ∝ coefficients of the s-th order term of the Taylor series expansion of Ψ_X
Example: N = 2; second order means E[X_1²], E[X_2²], and E[X_1 X_2]

Characteristic function of the second kind (CGF)

    Φ_X : ω ↦ log(Ψ_X(ω))

Cumulants (of order s) ∝ coefficients of the s-th order term of the Taylor series expansion of Φ_X
Useful properties

Why cumulants? Let X and Y be independent vectors:
    Ψ_{[X,Y]}(ω) = Ψ_X(ω_1) · Ψ_Y(ω_2)  but  Φ_{[X,Y]}(ω) = Φ_X(ω_1) + Φ_Y(ω_2)

− Let X = [X_1, ..., X_N] and Y = [Y_1, ..., Y_N] be independent vectors:
      cum_s(X_{i_1} + Y_{i_1}, ..., X_{i_s} + Y_{i_s}) = cum_s(X_{i_1}, ..., X_{i_s}) + cum_s(Y_{i_1}, ..., Y_{i_s})
− X = [X_1, ..., X_N] with at least two independent components:
      cum_N(X_1, ..., X_N) = 0
− X = [X_1, ..., X_N] Gaussian vector:
      cum_s(X_{i_1}, ..., X_{i_s}) = 0 if s ≥ 3

Remarks
− No HOS information for a Gaussian vector
− "Distance" to the Gaussian distribution ⇒ (normalized) kurtosis

    κ_x = cum_4(x, x, x, x) / (E[|x|²])²
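A numerical illustration of the kurtosis as a "distance to Gaussianity" (my own sketch). For zero-mean complex signals I use the conjugate-pattern convention cum_4(x, x*, x, x*) = E[|x|⁴] − 2(E[|x|²])² − |E[x²]|², a common choice in blind equalization; this is an assumption and may differ from the slide's exact convention. With it, the normalized kurtosis is ≈ 0 for circular complex Gaussian samples and −1 for QPSK symbols.

```python
import numpy as np

rng = np.random.default_rng(5)
M = 1_000_000

def kurtosis_c(x):
    """Normalized kurtosis with the conjugate pattern cum4(x, x*, x, x*)
    (assumed convention for zero-mean complex signals, see text)."""
    p2 = np.mean(np.abs(x) ** 2)
    c4 = np.mean(np.abs(x) ** 4) - 2 * p2 ** 2 - np.abs(np.mean(x ** 2)) ** 2
    return c4 / p2 ** 2

# Circularly-symmetric complex Gaussian samples -> kurtosis ~ 0 (no HOS information)
g = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
# QPSK symbols (constant modulus) -> kurtosis = -1
q = (rng.choice([-1, 1], M) + 1j * rng.choice([-1, 1], M)) / np.sqrt(2)

print(round(kurtosis_c(g), 2))   # ~ 0.0
print(round(kurtosis_c(q), 2))   # -1.0
```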