19. Series Representation of Stochastic Processes
  1. 19. Series Representation of Stochastic Processes

Given information about a stochastic process X(t) in 0 ≤ t ≤ T, can this continuous information be represented in terms of a countable set of random variables whose relative importance decreases under some arrangement?

To appreciate this question it is best to start with the notion of a mean-square periodic process. A stochastic process X(t) is said to be mean square (M.S.) periodic if for some T > 0

    E[|X(t+T) − X(t)|²] = 0  for all t,                              (19-1)

i.e., X(t) = X(t+T) with probability 1 for all t.

Suppose X(t) is a W.S.S. process. Then

    X(t) is mean-square periodic  ⇔  R(τ) is periodic in the ordinary sense,

where R(τ) = E[X(t) X*(t+τ)].

(⇒) Proof: Suppose X(t) is M.S. periodic. Then

PILLAI

  2.
    E[|X(t+T) − X(t)|²] = 0.                                         (19-2)

But from Schwarz' inequality

    |E[X(t₁){X(t₂+T) − X(t₂)}*]|²  ≤  E[|X(t₁)|²] · E[|X(t₂+T) − X(t₂)|²] = 0.

Thus the left side equals zero,

    E[X(t₁){X(t₂+T) − X(t₂)}*] = 0

or

    E[X(t₁) X*(t₂+T)] = E[X(t₁) X*(t₂)]  ⇒  R(t₂ − t₁ + T) = R(t₂ − t₁)

    ⇒  R(τ+T) = R(τ)  for any τ,

i.e., R(τ) is periodic with period T.                                (19-3)

(⇐) Suppose R(τ) is periodic. Then

    E[|X(t+T) − X(t)|²] = 2R(0) − R(T) − R(−T) = 0,

since R(±T) = R(0), i.e., X(t) is mean square periodic.
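As a quick numerical sketch of this equivalence (our illustration, not from the slides, assuming NumPy), take the random-phase sinusoid X(t) = cos(ω₀t + θ) with θ uniform on [0, 2π): it is W.S.S. with R(τ) = 0.5 cos(ω₀τ), which is periodic with period T = 2π/ω₀, and the mean-square difference over one period vanishes as the argument above predicts.

```python
import numpy as np

# Random-phase sinusoid X(t) = cos(w0*t + theta), theta ~ uniform[0, 2*pi):
# W.S.S. with R(tau) = 0.5*cos(w0*tau), periodic with period T = 2*pi/w0,
# so E[|X(t+T) - X(t)|^2] = 2R(0) - R(T) - R(-T) = 0.
rng = np.random.default_rng(0)
w0 = 2.0
T = 2 * np.pi / w0

theta = rng.uniform(0, 2 * np.pi, size=100_000)  # one phase per realization
t = 0.7                                          # any fixed time instant
X_t = np.cos(w0 * t + theta)
X_tT = np.cos(w0 * (t + T) + theta)

ms_diff = np.mean(np.abs(X_tT - X_t) ** 2)  # Monte-Carlo E[|X(t+T)-X(t)|^2]
print(ms_diff)                              # ~ 0 (roundoff only)

R = lambda tau: 0.5 * np.cos(w0 * tau)      # closed-form autocorrelation
print(2 * R(0) - R(T) - R(-T))              # 0, since R(+T) = R(-T) = R(0)
```

Both quantities vanish (up to floating-point roundoff), matching (19-2) and the converse direction of the proof.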

  3. Thus if X(t) is mean square periodic, then R(τ) is periodic, and we let

    R(τ) = Σ_{n=−∞}^{+∞} γₙ e^{jnω₀τ},   ω₀ = 2π/T,                  (19-4)

represent its Fourier series expansion. Here

    γₙ = (1/T) ∫₀ᵀ R(τ) e^{−jnω₀τ} dτ.                               (19-5)

In a similar manner define

    cₖ = (1/T) ∫₀ᵀ X(t) e^{jkω₀t} dt.                                (19-6)

Notice that the cₖ, k = −∞ → +∞, are random variables, and

    E[cₖ cₘ*] = (1/T²) E[ ∫₀ᵀ X(t₁) e^{jkω₀t₁} dt₁ ∫₀ᵀ X*(t₂) e^{−jmω₀t₂} dt₂ ]

              = (1/T²) ∫₀ᵀ ∫₀ᵀ R(t₂ − t₁) e^{jkω₀t₁} e^{−jmω₀t₂} dt₁ dt₂

              = (1/T) ∫₀ᵀ { (1/T) ∫₀ᵀ R(t₂ − t₁) e^{−jmω₀(t₂ − t₁)} dt₂ } e^{−j(m−k)ω₀t₁} dt₁,

where the inner integral (over one full period, with τ = t₂ − t₁) equals γₘ.

  4.
    E[cₖ cₘ*] = γₘ { (1/T) ∫₀ᵀ e^{−j(m−k)ω₀t₁} dt₁ } = γₘ δ_{m,k}
              = { γₘ,  k = m;   0,  k ≠ m },                         (19-7)

i.e., {cₙ}, n = −∞ → +∞, form a sequence of uncorrelated random variables. Further, consider the partial sum

    X̃_N(t) = Σ_{k=−N}^{N} cₖ e^{−jkω₀t}.                             (19-8)

We shall show that X̃_N(t) → X(t) in the mean square sense as N → ∞, i.e.,

    E[|X(t) − X̃_N(t)|²] → 0  as N → ∞.                               (19-9)

Proof:

    E[|X(t) − X̃_N(t)|²] = E[|X(t)|²] − 2Re(E[X̃_N(t) X*(t)]) + E[|X̃_N(t)|²].   (19-10)

But

  5.
    E[|X(t)|²] = R(0) = Σ_{k=−∞}^{+∞} γₖ,

and

    E[X̃_N(t) X*(t)] = Σ_{k=−N}^{N} E[cₖ e^{−jkω₀t} X*(t)]

                    = Σ_{k=−N}^{N} (1/T) ∫₀ᵀ E[X(α) X*(t)] e^{−jkω₀(t−α)} dα

                    = Σ_{k=−N}^{N} (1/T) ∫₀ᵀ R(t−α) e^{−jkω₀(t−α)} dα = Σ_{k=−N}^{N} γₖ.   (19-12)

Similarly

    E[|X̃_N(t)|²] = Σₖ Σₘ E[cₖ cₘ*] e^{−j(k−m)ω₀t} = Σ_{k=−N}^{N} γₖ.

    ⇒  E[|X(t) − X̃_N(t)|²] = Σ_{k=−∞}^{+∞} γₖ − 2 Σ_{k=−N}^{N} γₖ + Σ_{k=−N}^{N} γₖ
                            = Σ_{|k|>N} γₖ → 0  as N → ∞,             (19-13)

i.e.,

    X(t) ∼ Σ_{k=−∞}^{+∞} cₖ e^{−jkω₀t},   −∞ < t < +∞.               (19-14)
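The uncorrelatedness in (19-7) can be checked by Monte Carlo (a sketch of ours, not from the slides), again with the random-phase sinusoid X(t) = cos(ω₀t + θ) as the test process: here only γ₁ = γ₋₁ = 1/4 are nonzero, so |c₁|² should average to 1/4, c₁ and c₋₁ should be uncorrelated, and c₂ should vanish.

```python
import numpy as np

# Numerical check of (19-6)-(19-7) for X(t) = cos(w0*t + theta):
# c_k = (1/T) * integral of X(t) e^{jk w0 t} over one period should give
# uncorrelated coefficients with E[c_k c_m*] = gamma_m * delta_{k,m},
# where gamma_{+1} = gamma_{-1} = 1/4 and all other gamma_k = 0.
rng = np.random.default_rng(1)
w0 = 1.0
T = 2 * np.pi / w0
t = np.linspace(0.0, T, 800, endpoint=False)   # Riemann grid, one period
dt = T / t.size

theta = rng.uniform(0, 2 * np.pi, size=4000)[:, None]  # one theta per path
X = np.cos(w0 * t[None, :] + theta)                    # (realizations, time)

def c(k):
    # c_k = (1/T) * sum of X(t) e^{jk w0 t} dt, one value per realization
    return (X * np.exp(1j * k * w0 * t)).sum(axis=1) * dt / T

c1, cm1, c2 = c(1), c(-1), c(2)
print(np.mean(np.abs(c1) ** 2))          # ~ gamma_1 = 0.25
print(abs(np.mean(c1 * np.conj(cm1))))   # ~ 0: c_1, c_{-1} uncorrelated
print(np.mean(np.abs(c2) ** 2))          # ~ 0: gamma_2 = 0
```

Per realization c₁ = 0.5 e^{−jθ} and c₋₁ = 0.5 e^{jθ}, so E[c₁c₋₁*] = 0.25 E[e^{−2jθ}] ≈ 0 while E[|c₁|²] = 0.25 exactly, in agreement with (19-7).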

  6. Thus mean square periodic processes can be represented in the form of a series as in (19-14). The stochastic information is contained in the random variables cₖ, k = −∞ → +∞. Further, these random variables are uncorrelated (E{cₖ cₘ*} = γₖ δ_{k,m}) and their variances γₖ → 0 as k → ∞. This follows by noticing that from (19-14)

    R(0) = E[|X(t)|²] = Σ_{k=−∞}^{+∞} γₖ = P < ∞.

Thus if the power P of the stochastic process is finite, then the positive series Σ_{k=−∞}^{+∞} γₖ converges, and hence γₖ → 0 as k → ∞. This implies that the random variables in (19-14) are of relatively less importance as k → ∞, and a finite approximation of the series in (19-14) is indeed meaningful.

The following natural question then arises: What about a general stochastic process that is not mean square periodic? Can it be represented in a similar series fashion as in (19-14), if not on the whole interval −∞ < t < ∞, say on a finite support 0 ≤ t ≤ T? Suppose that it is indeed possible to do so for any arbitrary process X(t) in terms of a certain sequence of orthonormal functions.
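The power identity above is easy to verify numerically (our example, not from the slides): for R(τ) = 0.5 cos(ω₀τ), computing γₙ by numerically integrating (19-5) gives γ₊₁ = γ₋₁ = 1/4 and zero elsewhere, and the γₙ sum back to R(0) = P = 1/2.

```python
import numpy as np

# gamma_n = (1/T) * integral of R(tau) e^{-jn w0 tau} over one period,
# for R(tau) = 0.5*cos(w0*tau); only gamma_{+1} = gamma_{-1} = 0.25
# survive, and their sum recovers the power P = R(0) = 0.5.
w0 = 1.0
T = 2 * np.pi / w0
tau = np.linspace(0.0, T, 4000, endpoint=False)  # Riemann grid, one period
dtau = T / tau.size
R = 0.5 * np.cos(w0 * tau)

gamma = np.array([(R * np.exp(-1j * n * w0 * tau)).sum() * dtau / T
                  for n in range(-5, 6)])        # n = -5 ... 5
print(gamma.real.round(6))   # only the n = +/-1 entries are 0.25
P = gamma.real.sum()
print(P)                     # ~ R(0) = 0.5
```

Since all but finitely many γₙ vanish here, the "finite approximation is meaningful" point is immediate; for a general mean-square periodic process the γₙ merely decay, which is exactly what finiteness of P guarantees.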

  7. i.e.,

    X̃(t) = Σ_{k=1}^{∞} cₖ φₖ(t),                                      (19-15)

where

    cₖ ≜ ∫₀ᵀ X(t) φₖ*(t) dt,                                          (19-16)

    ∫₀ᵀ φₖ(t) φₙ*(t) dt = δ_{k,n},                                     (19-17)

and, in the mean square sense, X̃(t) ∼ X(t) in 0 ≤ t ≤ T.

Further, as before, we would like the cₖs to be uncorrelated random variables. If that should be the case, then we must have

    E[cₖ cₘ*] = λₘ δ_{k,m}.                                            (19-18)

Now

    E[cₖ cₘ*] = E[ ∫₀ᵀ X(t₁) φₖ*(t₁) dt₁ ∫₀ᵀ X*(t₂) φₘ(t₂) dt₂ ]

              = ∫₀ᵀ ∫₀ᵀ φₖ*(t₁) E{X(t₁) X*(t₂)} φₘ(t₂) dt₂ dt₁

              = ∫₀ᵀ φₖ*(t₁) { ∫₀ᵀ R_XX(t₁,t₂) φₘ(t₂) dt₂ } dt₁          (19-19)

  8. and

    λₘ δ_{k,m} = λₘ ∫₀ᵀ φₖ*(t₁) φₘ(t₁) dt₁.                             (19-20)

Substituting (19-19) and (19-20) into (19-18), we get

    ∫₀ᵀ φₖ*(t₁) { ∫₀ᵀ R_XX(t₁,t₂) φₘ(t₂) dt₂ − λₘ φₘ(t₁) } dt₁ = 0.     (19-21)

Since (19-21) should be true for every φₖ(t₁), k = 1 → ∞, we must have

    ∫₀ᵀ R_XX(t₁,t₂) φₘ(t₂) dt₂ − λₘ φₘ(t₁) ≡ 0,

or

    ∫₀ᵀ R_XX(t₁,t₂) φₘ(t₂) dt₂ = λₘ φₘ(t₁),  0 < t₁ < T,  m = 1 → ∞.    (19-22)

i.e., the desired uncorrelatedness condition in (19-18) gets translated into the integral equation in (19-22), which is known as the Karhunen-Loeve (K-L) integral equation. The functions {φₖ(t)}_{k=1}^∞ are not arbitrary: they must be obtained by solving the integral equation in (19-22). They are known as the eigenfunctions of the autocorrelation function R_XX(t₁,t₂).
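In practice (19-22) is often solved numerically; the discretization below is a sketch of ours, not a method given in the slides. Sampling [0, T] on an n-point grid turns the integral equation into the matrix eigenproblem (R Δt) φ = λ φ. As a test kernel we use the Wiener-process autocorrelation R(t₁,t₂) = min(t₁,t₂) on [0, 1], whose exact K-L eigenvalues are λₘ = 1/((m − 1/2)π)².

```python
import numpy as np

# Discretize the K-L integral equation (19-22): the integral of
# R(t1, t2) * phi_m(t2) dt2 = lam_m * phi_m(t1) becomes, on an n-point
# grid with spacing dt, the symmetric eigenproblem (R * dt) v = lam v.
n = 800
t = (np.arange(n) + 0.5) / n        # midpoint grid on [0, 1]
dt = 1.0 / n

R = np.minimum.outer(t, t)          # Wiener kernel R(t1, t2) = min(t1, t2)
lam, phi = np.linalg.eigh(R * dt)   # Hermitian kernel: use eigh
lam, phi = lam[::-1], phi[:, ::-1]  # sort eigenvalues in decreasing order

lam_exact = 1.0 / (((np.arange(1, 6) - 0.5) * np.pi) ** 2)
print(lam[:5])       # numerical eigenvalues
print(lam_exact)     # exact eigenvalues 1/((m - 1/2) pi)^2: close agreement

phi = phi / np.sqrt(dt)             # rescale so the phi_k are orthonormal
print(np.sum(phi[:, 0] * phi[:, 1]) * dt)   # ~ 0, as required by (19-17)
```

The rescaling by 1/√dt converts the Euclidean orthonormality of the matrix eigenvectors into the integral orthonormality (19-17) of the eigenfunctions.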

  9. Similarly the set {λₖ}_{k=1}^∞ represents the eigenvalues of the autocorrelation function. From (19-18), the eigenvalues λₖ represent the variances of the uncorrelated random variables cₖ, k = 1 → ∞. This also follows from Mercer's theorem, which allows the representation

    R_XX(t₁,t₂) = Σ_{k=1}^{∞} μₖ ψₖ(t₁) ψₖ*(t₂),  0 < t₁, t₂ < T,       (19-23)

where

    ∫₀ᵀ ψₖ(t) ψₘ*(t) dt = δ_{k,m}.

Here ψₖ(t) and μₖ, k = 1 → ∞, are known as the eigenfunctions and eigenvalues of R_XX(t₁,t₂) respectively. A direct substitution of (19-23) into (19-22) and simplification shows that

    φₖ(t) = ψₖ(t),  λₖ = μₖ,  k = 1 → ∞.                               (19-24)

Returning to (19-15), once again the partial sum

    X̃_N(t) = Σ_{k=1}^{N} cₖ φₖ(t) → X(t),  0 ≤ t ≤ T,  as N → ∞,       (19-25)
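A simulation sketch (our illustration, not from the slides) ties (19-16), (19-18) and the partial sum (19-25) together for standard Brownian motion on [0, 1], whose K-L solution of (19-22) is known in closed form: φₖ(t) = √2 sin((k − 1/2)πt) with λₖ = 1/((k − 1/2)π)².

```python
import numpy as np

# Project simulated Brownian paths onto the exact K-L eigenfunctions
# phi_k(t) = sqrt(2) sin((k - 1/2) pi t): the coefficients c_k should be
# uncorrelated with variance lam_k = 1/((k - 1/2) pi)^2, as in (19-18),
# and the partial sum (19-25) should converge to W(t) in mean square.
rng = np.random.default_rng(3)
n, paths = 500, 5_000
dt = 1.0 / n
t = np.arange(1, n + 1) * dt

# Brownian paths as cumulative sums of N(0, dt) increments, shape (paths, n)
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(paths, n)), axis=1)

K = 50
k = np.arange(1, K + 1)
phi = np.sqrt(2) * np.sin((k[:, None] - 0.5) * np.pi * t)   # (K, n)
lam = 1.0 / (((k - 0.5) * np.pi) ** 2)

c = W @ phi.T * dt                   # c_k per path, via (19-16)
print(np.var(c[:, 0]), lam[0])       # sample variance of c_1 ~ lam_1
print(np.mean(c[:, 0] * c[:, 1]))    # ~ 0: coefficients uncorrelated

mses = []
for N in (5, 20, 50):
    X_N = c[:, :N] @ phi[:N, :]              # partial sum (19-25)
    mses.append(np.mean((W - X_N) ** 2))     # M.S. error over paths and t
print(mses)                                  # shrinks as N grows
```

The mean-square error left after N terms is Σ_{k>N} λₖ ≈ 1/(π²N), so the few largest eigenvalues carry almost all of the power: precisely the "decreasing relative importance" that motivated the series representation.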
