Eigenspace estimation for source localization using large random matrices



  1. Eigenspace estimation for source localization using large random matrices. Pascal Vallet (1), joint work with Philippe Loubaton (1) and Xavier Mestre (2). (1) LabInfo IGM (CNRS-UMR 8049) / Université Paris-Est. (2) Centre Tecnologic de Telecomunicacions de Catalunya (CTTC), Barcelona.

  2. Table of Contents: 1. Introduction; 2. Random matrix theory results; 3. Consistent estimation of eigenspace; 4. Numerical evaluations.

  3. We assume that $K$ source signals are received by an antenna array of $M$ elements, with $K < M$. At time $n$, we receive

$$y_n = A s_n + v_n,$$

where $A = [a(\theta_1), \ldots, a(\theta_K)]$ is the $M \times K$ "steering vectors" matrix, with $a(\theta_1), \ldots, a(\theta_K)$ linearly independent; $s_n = [s_{1,n}, \ldots, s_{K,n}]^T$ is the vector of non-observable transmitted signals, assumed deterministic; and $v_n$ is Gaussian white noise (zero mean, covariance $\sigma^2 I_M$). The parameters of interest $\theta_1, \ldots, \theta_K$ of the $K$ sources can be frequencies, directions of arrival (DoA), and so on.

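To make the model concrete, here is a minimal NumPy sketch of one snapshot $y_n = A s_n + v_n$. The uniform linear array with half-wavelength spacing is an assumption (the slides do not fix an array geometry), and $M$, $K$, the angles, and the source amplitudes are illustrative values.

```python
import numpy as np

def steering_vector(theta, M):
    # ULA steering vector with half-wavelength spacing (assumed geometry):
    # a(theta)[m] = exp(1j * pi * m * sin(theta)), m = 0, ..., M-1.
    m = np.arange(M)
    return np.exp(1j * np.pi * m * np.sin(theta))

M, K = 20, 2                        # M antennas, K sources, K < M
thetas = np.array([-0.2, 0.35])     # illustrative DoAs (radians)
A = np.column_stack([steering_vector(t, M) for t in thetas])   # M x K

sigma2 = 0.1                        # noise variance sigma^2
s_n = np.array([1.0, 0.8])          # deterministic source vector at time n
rng = np.random.default_rng(0)
v_n = np.sqrt(sigma2 / 2) * (rng.standard_normal(M)
                             + 1j * rng.standard_normal(M))
y_n = A @ s_n + v_n                 # one observed snapshot
```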

  4. We collect $N$ observations of the previous model, stacked in $Y_N = [y_1, \ldots, y_N]$, so that $Y_N = A S_N + V_N$, with $S_N$ and $V_N$ built in the same way as $Y_N$. The goal is to infer the angles $\theta_1, \ldots, \theta_K$ from $Y_N$. There are essentially two common methods: Maximum Likelihood (ML) estimation and the subspace method.

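Continuing the sketch above (reusing A, M, K, sigma2, and rng), the $N$ snapshots can be stacked column-wise to form $Y_N = A S_N + V_N$; the deterministic source matrix $S_N$ is drawn once here purely for illustration.

```python
# Continues the previous sketch (reuses A, M, K, sigma2, rng).
N = 40                                        # number of snapshots

# Deterministic K x N source matrix; drawn once, then treated as fixed.
S_N = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))

V_N = np.sqrt(sigma2 / 2) * (rng.standard_normal((M, N))
                             + 1j * rng.standard_normal((M, N)))
Y_N = A @ S_N + V_N                           # M x N observation matrix
```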

  5. The ML estimator is given by

$$\hat{\omega} = \operatorname*{argmin}_{\omega} \, \frac{1}{N} \operatorname{Tr} \left[ \left( I_M - A(\omega) \left( A(\omega)^* A(\omega) \right)^{-1} A(\omega)^* \right) Y_N Y_N^* \right],$$

where $A(\omega)$ is the matrix $A$ in which $[\theta_1, \ldots, \theta_K]$ has been replaced by the variable $\omega = [\omega_1, \ldots, \omega_K]$. This estimator is consistent when $M, N \to \infty$; however, it requires a $K$-dimensional optimization. The subspace method provides an alternative requiring only a one-dimensional search.

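As a sketch of this criterion in the simplest case $K = 1$, where the search is already one-dimensional, the code below evaluates the trace cost on a grid. It reuses steering_vector and Y_N from the sketches above; the grid bounds are an assumption.

```python
def ml_cost(omega, Y_N):
    # (1/N) * Tr[(I_M - P(omega)) Y_N Y_N^*], with P(omega) the orthogonal
    # projector onto span{a(omega)}; for K = 1, A(omega) = a(omega).
    M, N = Y_N.shape
    a = steering_vector(omega, M)[:, None]            # M x 1
    P = (a @ a.conj().T) / np.real(a.conj().T @ a)    # rank-one projector
    R = Y_N @ Y_N.conj().T / N                        # sample covariance
    return np.real(np.trace((np.eye(M) - P) @ R))

# One-dimensional grid search; for K > 1 the same criterion must be
# minimized jointly over omega = (omega_1, ..., omega_K).
grid = np.linspace(-np.pi / 2, np.pi / 2, 1000)
omega_hat = grid[np.argmin([ml_cost(w, Y_N) for w in grid])]
```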

  6. Assuming $S_N$ has full rank $K$, the matrix $\frac{1}{N} A S_N S_N^* A^*$ has $K$ non-null eigenvalues:

$$0 = \lambda_{1,N} = \ldots = \lambda_{M-K,N} < \lambda_{M-K+1,N} < \ldots < \lambda_{M,N}.$$

We denote by $\Pi_N$ the projector onto the eigenspace associated with the eigenvalue $0$. Since $\operatorname{span}\{a(\theta_1), \ldots, a(\theta_K)\}$ is the eigenspace associated with the non-null eigenvalues $\lambda_{M-K+1,N}, \ldots, \lambda_{M,N}$, the $(\theta_k)_{k=1,\ldots,K}$ can be determined.

MUSIC algorithm: the angles $\theta_1, \ldots, \theta_K$ are the (unique) solutions of the equation $\eta(\theta) := a(\theta)^* \Pi_N a(\theta) = 0$.

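A minimal sketch of the corresponding localization function: the slide defines $\eta(\theta)$ with the exact noise projector $\Pi_N$, which is unknown in practice, so the conventional estimate below replaces it with the projector built from the eigenvectors of the sample covariance associated with its $M - K$ smallest eigenvalues. It reuses steering_vector and Y_N from the sketches above; the scan grid is an assumption.

```python
def music_eta(Y_N, K, grid):
    # eta_hat(theta) = a(theta)^* Pi_hat a(theta), with Pi_hat the projector
    # onto the sample noise subspace; the K deepest minima over the grid
    # give the DoA estimates.
    M, N = Y_N.shape
    R = Y_N @ Y_N.conj().T / N              # sample covariance
    w, V = np.linalg.eigh(R)                # eigenvalues in ascending order
    E = V[:, :M - K]                        # noise-subspace eigenvectors
    Pi_hat = E @ E.conj().T
    return np.array([np.real(steering_vector(t, M).conj()
                             @ Pi_hat @ steering_vector(t, M))
                     for t in grid])

grid = np.linspace(-np.pi / 2, np.pi / 2, 2000)
eta = music_eta(Y_N, K, grid)               # minima lie near the true DoAs
```

In the classical regime ($M$ fixed, $N \to \infty$) this sample projector is a consistent estimate of $\Pi_N$; when $M$ and $N$ grow at the same rate, the large random matrix regime considered in this talk, it is not, which motivates the improved subspace estimates developed in the following sections.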
