

1. Finite Alphabet Estimation
Graham C. Goodwin
Day 5: Lecture 3, 17th September 2004
International Summer School, Grenoble, France
Centre for Complex Dynamic Systems and Control

2. 1. Introduction
We have seen in earlier lectures that constrained estimation problems can be formulated in a similar fashion to constrained control problems using the idea of receding horizon optimisation. This remains true when the decision variables must, in addition, satisfy finite alphabet constraints.

3. Finite alphabet estimation problems arise in many applications, for example:
- estimation of transmitted signals in digital communication systems, where the signals are known to belong to a finite alphabet (say ±1);
- state estimation problems where a disturbance is known to take only a finite set of values (for example, either “on” or “off”).

4. To fix ideas, we refer to the specific problem of estimating a signal, drawn from a given finite alphabet, that has been transmitted over a noisy dispersive communication channel. This problem, commonly referred to as channel equalisation, can be formulated as a fixed-delay maximum likelihood detection problem. The resultant detector estimates each symbol based upon the entire sequence received up to a point in time and hence constitutes, in principle, a growing memory structure.

5. In order to address this problem, various simplified detectors of fixed memory and complexity have been proposed. The simplest such scheme is the decision feedback equaliser [DFE], which is a symbol-by-symbol detector. Recall the development in Day 1: Lecture 1.

6. 2. Maximum Likelihood Detection Utilising an A Priori State Estimate
Consider a linear channel (which may include a whitening matched filter and any other pre-filter) with scalar input $u_k$ drawn from a finite alphabet $\mathbb{U}$. The channel output $y_k$ is scalar and is assumed to be perturbed by zero-mean additive white Gaussian noise $n_k$ of variance $r$, denoted by $n_k \sim \mathcal{N}(0, r)$.

7. This is described by the state space model
$$x_{k+1} = A x_k + B u_k, \qquad y_k = C x_k + D u_k + n_k, \tag{1}$$
where $x_k \in \mathbb{R}^n$. The above model may equivalently be expressed in transfer function form as
$$y_k = H(\rho) u_k + n_k, \qquad H(\rho) = D + C(\rho I - A)^{-1} B = h_0 + \sum_{i=1}^{\infty} h_i \rho^{-i},$$
where
$$h_0 = D, \qquad h_i = C A^{i-1} B, \quad i = 1, 2, \ldots \tag{2}$$
Footnote 1: $\rho$ denotes the forward shift operator, $\rho v_k = v_{k+1}$, where $\{v_k\}$ is any sequence.
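As an aside, the impulse response coefficients in (2) are straightforward to compute numerically. The sketch below is a minimal illustration in Python/NumPy; the particular matrices $A, B, C, D$ are hypothetical placeholders, not taken from the lecture.

```python
import numpy as np

# Hypothetical second-order channel realisation (A, B, C, D);
# the numbers are placeholders for illustration only.
A = np.array([[0.5, 1.0],
              [0.0, 0.3]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.2]])

def impulse_coeffs(A, B, C, D, n_terms):
    """Coefficients of (2): h_0 = D and h_i = C A^{i-1} B for i >= 1."""
    h = [D.item()]
    A_pow = np.eye(A.shape[0])          # holds A^{i-1}, starting from A^0
    for _ in range(1, n_terms):
        h.append((C @ A_pow @ B).item())
        A_pow = A_pow @ A
    return np.array(h)

print(impulse_coeffs(A, B, C, D, 5))    # h_0 .. h_4
```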

8. We incorporate an a priori state estimate into the problem formulation. We fix integers $L_1 \ge 0$, $L_2 \ge 1$ and suppose, for the moment, that
$$x_{k-L_1} \sim \mathcal{N}(z_{k-L_1}, P), \tag{3}$$
that is, $z_{k-L_1}$ is a given a priori estimate for $x_{k-L_1}$, which has a Gaussian distribution. The matrix $P^{-1}$ reflects the degree of belief in this a priori state estimate. Absence of prior knowledge of $x_{k-L_1}$ can be accommodated by using $P^{-1} = 0$, and decision feedback is achieved by taking $P = 0$, which effectively locks $x_{k-L_1}$ at $z_{k-L_1}$.

9. We define the vectors
$$\mathbf{u}_k \triangleq \begin{bmatrix} u_{k-L_1} & u_{k-L_1+1} & \cdots & u_{k+L_2-1} \end{bmatrix}^T, \qquad \mathbf{y}_k \triangleq \begin{bmatrix} y_{k-L_1} & y_{k-L_1+1} & \cdots & y_{k+L_2-1} \end{bmatrix}^T.$$
The vector $\mathbf{y}_k$ gathers time samples of the channel output and $\mathbf{u}_k$ contains channel inputs, which are the decision variables of the estimation problem considered here.
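In code, assembling these windows is simply a slice of the corresponding stream. A trivial sketch (0-indexed; the function name is illustrative):

```python
import numpy as np

def window(stream, k, L1, L2):
    """Collect [v_{k-L1}, ..., v_{k+L2-1}] from a 0-indexed sequence."""
    return np.asarray(stream[k - L1 : k + L2])

# e.g. with L1 = 2, L2 = 3 the window around k = 5 spans indices 3..7
print(window(list(range(10)), k=5, L1=2, L2=3))   # [3 4 5 6 7]
```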

10. The maximum a posteriori [MAP] sequence detector, which at time $t = k$ provides an estimate of $\mathbf{u}_k$ and $x_{k-L_1}$ based upon the received data contained in $\mathbf{y}_k$, maximises the probability density function²
$$p\!\left( \begin{bmatrix} \mathbf{u}_k \\ x_{k-L_1} \end{bmatrix} \,\middle|\, \mathbf{y}_k \right) = \frac{ p\!\left( \mathbf{y}_k \,\middle|\, \begin{bmatrix} \mathbf{u}_k \\ x_{k-L_1} \end{bmatrix} \right) p\!\left( \begin{bmatrix} \mathbf{u}_k \\ x_{k-L_1} \end{bmatrix} \right) }{ p(\mathbf{y}_k) }. \tag{4}$$
Footnote 2: For ease of notation, in what follows we will denote all (conditional) probability density functions by $p$. The specific function referred to will be clear from the context.

11. Note that only the numerator of the above expression influences the maximisation. Assuming that $\mathbf{u}_k$ and $x_{k-L_1}$ are independent (which holds, for example, if $u_k$ is white), it follows that
$$p\!\left( \begin{bmatrix} \mathbf{u}_k \\ x_{k-L_1} \end{bmatrix} \right) = p(\mathbf{u}_k)\, p(x_{k-L_1}).$$
Hence, if all finite alphabet-constrained symbol sequences $\mathbf{u}_k$ are equally likely (an assumption that we make in what follows), then the MAP detector that maximises (4) is equivalent to the following maximum likelihood sequence detector:
$$\begin{bmatrix} \hat{\mathbf{u}}_k \\ \hat{x}_{k-L_1} \end{bmatrix} \triangleq \arg\max_{\mathbf{u}_k,\, x_{k-L_1}} p\!\left( \mathbf{y}_k \,\middle|\, \begin{bmatrix} \mathbf{u}_k \\ x_{k-L_1} \end{bmatrix} \right) p(x_{k-L_1}). \tag{5}$$

12. In the above,
$$\hat{\mathbf{u}}_k \triangleq \begin{bmatrix} \hat{u}_{k-L_1} & \hat{u}_{k-L_1+1} & \cdots & \hat{u}_k & \cdots & \hat{u}_{k+L_2-1} \end{bmatrix}^T, \tag{6}$$
and $\mathbf{u}_k$ needs to satisfy the constraint
$$\mathbf{u}_k \in \mathbb{U}^N, \qquad \mathbb{U}^N \triangleq \mathbb{U} \times \cdots \times \mathbb{U}, \quad N \triangleq L_1 + L_2, \tag{7}$$
in accordance with the restriction $u_k \in \mathbb{U}$.

13. Our working assumption is that the initial channel state $x_{k-L_1}$ has the Gaussian probability density function
$$p(x_{k-L_1}) = \frac{1}{(2\pi)^{n/2} (\det P)^{1/2}} \exp\!\left( -\frac{\| x_{k-L_1} - z_{k-L_1} \|^2_{P^{-1}}}{2} \right). \tag{8}$$
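The weighted-norm notation $\|v\|_M^2 \triangleq v^T M v$ used in (8), and again in (9) and (11) below, translates directly into code; a one-line helper (hypothetical name):

```python
import numpy as np

def wnorm2(v, M):
    """Weighted squared norm ||v||_M^2 = v^T M v."""
    v = np.asarray(v, dtype=float)
    return float(v @ M @ v)
```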

14. We rewrite the channel model at time instants $t = k-L_1, k-L_1+1, \ldots, k+L_2-1$ in block form as
$$\mathbf{y}_k = \Psi \mathbf{u}_k + \Gamma x_{k-L_1} + \mathbf{n}_k.$$
Here,
$$\mathbf{n}_k \triangleq \begin{bmatrix} n_{k-L_1} \\ n_{k-L_1+1} \\ \vdots \\ n_{k+L_2-1} \end{bmatrix}, \qquad \Gamma \triangleq \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{N-1} \end{bmatrix}, \qquad \Psi \triangleq \begin{bmatrix} h_0 & 0 & \cdots & 0 \\ h_1 & h_0 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ h_{N-1} & \cdots & h_1 & h_0 \end{bmatrix}.$$
The columns of $\Psi$ contain truncated impulse responses of the model.
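These block matrices are easy to assemble from (2). The sketch below continues the earlier NumPy example (it reuses impulse_coeffs and the placeholder $A, B, C, D$; the function name is illustrative):

```python
import numpy as np

def block_matrices(A, B, C, D, N):
    """Psi: N x N lower-triangular Toeplitz of h_0..h_{N-1};
    Gamma: N x n stack of C A^j, j = 0..N-1 (slide 14)."""
    h = impulse_coeffs(A, B, C, D, N)
    Psi = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            Psi[i, j] = h[i - j]
    Gamma = np.vstack([C @ np.linalg.matrix_power(A, j) for j in range(N)])
    return Psi, Gamma
```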

15. Since the noise $\mathbf{n}_k$ is assumed Gaussian with variance $r$, it follows that
$$p\!\left( \mathbf{y}_k \,\middle|\, \begin{bmatrix} \mathbf{u}_k \\ x_{k-L_1} \end{bmatrix} \right) = \frac{1}{(2\pi)^{N/2} (\det R)^{1/2}} \exp\!\left( -\frac{\| \mathbf{y}_k - \Psi \mathbf{u}_k - \Gamma x_{k-L_1} \|^2_{R^{-1}}}{2} \right), \tag{9}$$
where the matrix $R \triangleq \operatorname{diag}\{r, \ldots, r\} \in \mathbb{R}^{N \times N}$.

16. Applying the natural logarithm to the product in (5), one obtains the sequence detector
$$\begin{bmatrix} \hat{\mathbf{u}}_k \\ \hat{x}_{k-L_1} \end{bmatrix} = \arg\min_{\mathbf{u}_k,\, x_{k-L_1}} V(\mathbf{u}_k, x_{k-L_1}), \tag{10}$$
where the objective function $V$ is defined as
$$V(\mathbf{u}_k, x_{k-L_1}) \triangleq \| x_{k-L_1} - z_{k-L_1} \|^2_{P^{-1}} + \| \mathbf{y}_k - \Psi \mathbf{u}_k - \Gamma x_{k-L_1} \|^2_{R^{-1}} = \| x_{k-L_1} - z_{k-L_1} \|^2_{P^{-1}} + r^{-1} \sum_{j=k-L_1}^{k+L_2-1} ( y_j - C \check{x}_j - D u_j )^2. \tag{11}$$
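For a small horizon, the minimisation in (10) can be carried out by brute force: for each fixed $\mathbf{u}_k$, $V$ is quadratic in $x_{k-L_1}$, so the state is obtained from a least-squares step while the symbols are enumerated over $\mathbb{U}^N$. A minimal sketch, assuming $P > 0$ (the function name ml_detect and its interface are illustrative, not the lecture's algorithm):

```python
import itertools
import numpy as np

def ml_detect(y, Psi, Gamma, z, P, r, alphabet=(-1.0, 1.0)):
    """Brute-force minimisation of V in (10)-(11): enumerate u over U^N;
    for each candidate u, the minimising x_{k-L1} is a least-squares solve."""
    N = Psi.shape[0]
    P_inv = np.linalg.inv(P)                 # assumes P > 0 here
    H = P_inv + Gamma.T @ Gamma / r          # Hessian of V in x (same for all u)
    best_V, best_u, best_x = np.inf, None, None
    for u in itertools.product(alphabet, repeat=N):
        u = np.array(u)
        e = y - Psi @ u                      # output residual before the state term
        x = np.linalg.solve(H, P_inv @ z + Gamma.T @ e / r)
        V = (x - z) @ P_inv @ (x - z) + (e - Gamma @ x) @ (e - Gamma @ x) / r
        if V < best_V:
            best_V, best_u, best_x = V, u, x
    return best_V, best_u, best_x
```

The enumeration grows as $|\mathbb{U}|^N$, which is precisely why the simplified fixed-memory detectors of slide 5 and the moving horizon scheme below are of interest.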

17. The vectors $\check{x}_j$ denote predictions of the channel states $x_j$. They satisfy
$$\check{x}_{j+1} = A \check{x}_j + B u_j, \quad j = k-L_1, \ldots, k+L_2-1, \qquad \check{x}_{k-L_1} = x_{k-L_1}. \tag{12}$$
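As a quick consistency check on (12) and the block form of slide 14, one can propagate the predictions through the model and compare outputs (noise-free case; this continues the earlier sketches):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
x = rng.standard_normal(2)                 # plays the role of x_{k-L1}
u = rng.choice([-1.0, 1.0], size=N)        # symbols from the alphabet {±1}
Psi, Gamma = block_matrices(A, B, C, D, N)

y = np.empty(N)
x_check = x.copy()                         # \check{x}_{k-L1} = x_{k-L1}
for j in range(N):
    y[j] = (C @ x_check).item() + D.item() * u[j]
    x_check = A @ x_check + (B * u[j]).ravel()   # recursion (12)

assert np.allclose(y, Psi @ u + Gamma @ x)       # matches slide 14
```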

18. Remark (Notation). Since $\hat{\mathbf{u}}_k$ and $\hat{x}_{k-L_1}$ are calculated using data up to time $t = k + L_2 - 1$, they could perhaps be more insightfully denoted as $\hat{\mathbf{u}}_{k|k+L_2-1}$ and $\hat{x}_{k-L_1|k+L_2-1}$, respectively. However, in order to keep the notation simple, we will here avoid double indexing, in anticipation that the context will always allow for correct interpretation.

19. As a consequence of considering the joint probability density function, the objective function includes a term which allows one to obtain an a posteriori state estimate $\hat{x}_{k-L_1}$ which differs from the a priori estimate $z_{k-L_1}$, as permitted by the confidence matrix $P^{-1}$.

20. 3. Information Propagation
Having set up the fixed horizon estimator as the finite alphabet optimiser, we next show how this information can be utilised as part of a moving horizon scheme.

21. Minimisation of the objective function $V$ yields the entire optimising sequence $\hat{\mathbf{u}}_k$. However, following our usual procedure, we will utilise a moving horizon approach in which only the present value³
$$\hat{u}_k \triangleq \begin{bmatrix} 0_{L_1} & 1 & 0_{L_2-1} \end{bmatrix} \hat{\mathbf{u}}_k \tag{13}$$
will be delivered at the output of the detector.
Footnote 3: The row vector $0_m \in \mathbb{R}^{1 \times m}$ contains only zeros.
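Putting the pieces together, a toy run of the fixed-horizon detector with the moving horizon output (13) might look as follows; all numbers are hypothetical and the helpers come from the earlier sketches:

```python
import numpy as np

rng = np.random.default_rng(1)
L1, L2 = 2, 3
N = L1 + L2
r = 0.05

Psi, Gamma = block_matrices(A, B, C, D, N)

x0 = np.zeros(2)                           # true initial state x_{k-L1}
u_true = rng.choice([-1.0, 1.0], size=N)
y = Psi @ u_true + Gamma @ x0 + np.sqrt(r) * rng.standard_normal(N)

z = np.zeros(2)                            # a priori estimate z_{k-L1}
P = np.eye(2)                              # prior covariance, as in (3)
_, u_hat, x_hat = ml_detect(y, Psi, Gamma, z, P, r)

u_k = u_hat[L1]                            # eq. (13): deliver only the present value
print(u_true, u_hat, u_k)
```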
