  1. ML Estimation of Signal Parameters
     Saravanan Vijayakumaran
     sarva@ee.iitb.ac.in
     Department of Electrical Engineering
     Indian Institute of Technology Bombay

  2. ML Estimation Requires Conditional Densities
     • ML estimation involves maximizing the conditional density with respect to the unknown parameters
       $\hat{\theta}_{ML}(y) = \operatorname*{argmax}_{\theta}\, p(y \mid \theta)$
     • Example: $Y \sim N(\theta, \sigma^2)$ where $\theta$ is unknown and $\sigma^2$ is known
       $p(y \mid \theta) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(y-\theta)^2}{2\sigma^2}}$
     • Suppose the observation is the realization of a random process
       $y(t) = A e^{j\theta} s(t - \tau) + n(t)$
     • What is the conditional density of $y(t)$ given $A$, $\theta$ and $\tau$?
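The scalar Gaussian example above can be checked numerically. The sketch below is my own illustration (not from the slides): it grid-searches $p(y \mid \theta)$ over candidate values of $\theta$; the analytical maximizer is $\hat{\theta}_{ML}(y) = y$.

```python
# Illustration (assumed setup): maximize p(y | theta) for Y ~ N(theta, sigma^2)
# by brute-force grid search. The closed-form answer is theta_hat = y.
import numpy as np

def ml_estimate(y, sigma=1.0, grid=None):
    """Grid-search argmax_theta p(y | theta)."""
    if grid is None:
        grid = np.linspace(y - 5 * sigma, y + 5 * sigma, 10001)
    # Gaussian density of y for each candidate mean theta in the grid
    density = np.exp(-(y - grid) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
    return grid[np.argmax(density)]

print(ml_estimate(2.3))  # close to the observation y = 2.3
```

The grid search is only for illustration; for this model the estimator is available in closed form.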

  3. Maximizing Likelihood Ratio for ML Estimation
     • Consider $Y \sim N(\theta, \sigma^2)$ where $\theta$ is unknown and $\sigma^2$ is known
       $p(y \mid \theta) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(y-\theta)^2}{2\sigma^2}}$
     • Let $q(y)$ be the density of a Gaussian with distribution $N(0, \sigma^2)$
       $q(y) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{y^2}{2\sigma^2}}$
     • The ML estimate of $\theta$ is obtained as
       $\hat{\theta}_{ML}(y) = \operatorname*{argmax}_{\theta}\, p(y \mid \theta) = \operatorname*{argmax}_{\theta}\, \frac{p(y \mid \theta)}{q(y)} = \operatorname*{argmax}_{\theta}\, L(y \mid \theta)$
       where $L(y \mid \theta) = \frac{p(y \mid \theta)}{q(y)}$ is called the likelihood ratio
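Since $q(y)$ does not depend on $\theta$, dividing by it cannot change the maximizer. A quick numerical check of this (my own sketch, with arbitrary values of $y$ and $\sigma$): for the Gaussian example the common factor $\frac{1}{\sqrt{2\pi\sigma^2}}$ cancels, leaving $L(y \mid \theta) = \exp\!\big(\frac{2y\theta - \theta^2}{2\sigma^2}\big)$, whose argmax is again $\theta = y$.

```python
# Sketch (assumed values): maximizing the likelihood ratio L(y|theta)
# gives the same estimate as maximizing p(y|theta) directly.
import numpy as np

def likelihood_ratio(y, theta, sigma=1.0):
    # L(y|theta) = p(y|theta)/q(y) = exp((2*y*theta - theta^2) / (2*sigma^2))
    # after the common Gaussian normalization constant cancels
    return np.exp((2 * y * theta - theta ** 2) / (2 * sigma ** 2))

y = 2.3
grid = np.linspace(-5.0, 10.0, 15001)
theta_hat = grid[np.argmax(likelihood_ratio(y, grid))]
print(theta_hat)  # again close to y = 2.3
```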

  4. Likelihood Ratio and Hypothesis Testing
     • The likelihood ratio $L(y \mid \theta)$ is the ML decision statistic for the following binary hypothesis testing problem
       $H_1 : Y \sim N(\theta, \sigma^2)$
       $H_0 : Y \sim N(0, \sigma^2)$
     • $H_0$ is a dummy hypothesis; it gives no advantage in the case of random vectors
     • But it makes calculation of the ML estimator easy for random processes

  5. Likelihood Ratio of a Signal in AWGN
     • Let $H_s(\theta)$ be the hypothesis corresponding to the following received signal
       $H_s(\theta) : y(t) = s_\theta(t) + n(t)$
       where $\theta$ can be a vector parameter
     • Define a noise-only dummy hypothesis $H_0$
       $H_0 : y(t) = n(t)$
     • Define $Z$ and $y^\perp(t)$ as follows
       $Z = \langle y, s_\theta \rangle$
       $y^\perp(t) = y(t) - \frac{\langle y, s_\theta \rangle}{\| s_\theta \|^2}\, s_\theta(t)$
     • $Z$ and $y^\perp(t)$ completely characterize $y(t)$

  6. Likelihood Ratio of a Signal in AWGN
     • Under both hypotheses $y^\perp(t)$ is equal to $n^\perp(t)$ where
       $n^\perp(t) = n(t) - \frac{\langle n, s_\theta \rangle}{\| s_\theta \|^2}\, s_\theta(t)$
     • $n^\perp(t)$ has the same distribution under both hypotheses
     • $n^\perp(t)$ is irrelevant for this binary hypothesis testing problem
     • The likelihood ratio of $y(t)$ equals the likelihood ratio of $Z$ under the following hypothesis testing problem
       $H_s(\theta) : Z \sim N(\| s_\theta \|^2, \sigma^2 \| s_\theta \|^2)$
       $H_0(\theta) : Z \sim N(0, \sigma^2 \| s_\theta \|^2)$
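The two distributions of $Z$ can be sanity-checked by simulation. Below is a discrete-time stand-in of my own construction (an arbitrary signal vector for $s_\theta$ and i.i.d. Gaussian noise samples of variance $\sigma^2$, so that $\langle \cdot,\cdot \rangle$ is the ordinary dot product): $Z$ should have mean $\| s_\theta \|^2$ under $H_s(\theta)$, mean $0$ under $H_0$, and variance $\sigma^2 \| s_\theta \|^2$ under both.

```python
# Simulation sketch (assumed discrete-time model): empirically verify the
# mean and variance of Z = <y, s_theta> under H_s(theta) and H_0.
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.7
s = np.array([1.0, -2.0, 0.5, 3.0])   # an arbitrary stand-in for s_theta
energy = float(s @ s)                 # ||s_theta||^2

n_trials = 200_000
noise = rng.normal(0.0, sigma, size=(n_trials, len(s)))
z_h1 = (s + noise) @ s                # Z under H_s(theta): signal plus noise
z_h0 = noise @ s                      # Z under H_0: noise only

print(z_h1.mean(), energy)            # empirical mean vs ||s_theta||^2
print(z_h0.mean())                    # empirical mean vs 0
print(z_h1.var(), sigma**2 * energy)  # empirical variance vs sigma^2 ||s_theta||^2
```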

  7. Likelihood Ratio of Signals in AWGN
     • The likelihood ratio of signals in real AWGN is
       $L(y \mid s_\theta) = \exp\left[ \frac{1}{\sigma^2} \left( \langle y, s_\theta \rangle - \frac{\| s_\theta \|^2}{2} \right) \right]$
     • The likelihood ratio of signals in complex AWGN is
       $L(y \mid s_\theta) = \exp\left[ \frac{1}{\sigma^2} \left( \operatorname{Re}\big( \langle y, s_\theta \rangle \big) - \frac{\| s_\theta \|^2}{2} \right) \right]$
     • Maximizing these likelihood ratios as functions of $\theta$ results in the ML estimator
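As a worked sketch of the last bullet (discrete time, real AWGN; the pulse shape, noise level, and delay grid are my own choices, not from the slides), take $\theta = \tau$ to be an unknown delay and maximize $\frac{1}{\sigma^2}\big(\langle y, s_\theta \rangle - \frac{\| s_\theta \|^2}{2}\big)$ over candidate delays. Since every shift has the same energy here, this reduces to picking the delay whose correlation with $y$ is largest.

```python
# Sketch (assumed discrete-time setup): ML delay estimation by maximizing
# the log likelihood ratio (1/sigma^2)(<y, s_d> - ||s_d||^2 / 2) over shifts d.
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.3
pulse = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
N = 64
true_delay = 20

def shifted(delay):
    # s_theta for theta = delay: the pulse placed at the given offset
    s = np.zeros(N)
    s[delay:delay + len(pulse)] = pulse
    return s

y = shifted(true_delay) + rng.normal(0.0, sigma, N)  # received signal in AWGN

delays = range(N - len(pulse) + 1)
log_L = [(y @ shifted(d) - shifted(d) @ shifted(d) / 2) / sigma**2 for d in delays]
delay_hat = int(np.argmax(log_L))
print(delay_hat)  # recovers true_delay at this noise level
```

Because $\| s_\theta \|^2$ is constant across shifts in this example, the energy term does not affect the argmax; it matters when candidate signals have different energies.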

  8. References
     • Section 4.2, Fundamentals of Digital Communication, Upamanyu Madhow, Cambridge University Press, 2008
