  1. University of Patras — Some New Developments in Sequential Analysis
     • Extension of Optimality of Well-Known Stopping Times
     • Extension of Wald's First Identity to Markov Processes
     George V. Moustakides, Dept. of Computer Engineering and Informatics, University of Patras, Greece. E-mail: moustaki@cti.gr

  2. Extension of Optimality of Well-Known Stopping Times
     We observe sequentially $\xi_1, \xi_2, \ldots, \xi_n, \ldots$ with corresponding filtration $\{\mathcal{F}_n\}$, and are given conditional probability measures $\{P_n(\xi_n \mid \mathcal{F}_{n-1})\}$ and $\{Q_n(\xi_n \mid \mathcal{F}_{n-1})\}$ with $Q_n(\xi_n \mid \mathcal{F}_{n-1}) \ll P_n(\xi_n \mid \mathcal{F}_{n-1})$.
     Hypothesis testing:
       $H_0$: $\{\xi_n\}$ distributed according to $\{P_n(\xi_n \mid \mathcal{F}_{n-1})\}$
       $H_1$: $\{\xi_n\}$ distributed according to $\{Q_n(\xi_n \mid \mathcal{F}_{n-1})\}$
     Decide between $H_0$ and $H_1$ using a stopping time $N$ and a decision rule $d_N$.

  3. Disruption (change detection):
       $\{\xi_n\}_{n=1}^{m-1}$ distributed according to $\{P_n(\xi_n \mid \mathcal{F}_{n-1})\}$
       $\{\xi_n\}_{n=m}^{\infty}$ distributed according to $\{Q_n(\xi_n \mid \mathcal{F}_{n-1})\}$
     Detect the unknown disruption time $m$ with a stopping time $N$.
     Optimum schemes are known for $\{\xi_n\}$ i.i.d., i.e. $P_n(\xi_n \mid \mathcal{F}_{n-1}) = P(\xi_n)$, $Q_n(\xi_n \mid \mathcal{F}_{n-1}) = Q(\xi_n)$, with likelihood ratio $l_n = \dfrac{dQ(\xi_n)}{dP(\xi_n)}$:
       Hypothesis testing: SPRT
       Disruption: CUSUM; with a geometric prior on $m$: Shiryayev–Roberts
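In the i.i.d. case the CUSUM rule admits a one-line recursion on the log-likelihood ratios. The following minimal sketch is not from the slides; the Gaussian pre- and post-change distributions, the change time m, and the threshold h are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def cusum_stopping_time(xi, log_lr, h):
    """Run the CUSUM recursion S_n = max(S_{n-1} + log l_n, 0) and
    stop the first time S_n exceeds the threshold h."""
    s = 0.0
    for n, x in enumerate(xi, start=1):
        s = max(s + log_lr(x), 0.0)
        if s >= h:
            return n
    return None  # no alarm within the available sample

# Illustrative data: P = N(0,1) before the change at m, Q = N(1,1) after.
m, h = 200, 5.0
xi = np.concatenate([rng.normal(0.0, 1.0, m - 1), rng.normal(1.0, 1.0, 2000)])
log_lr = lambda x: x - 0.5            # log dQ/dP for N(1,1) vs N(0,1)
N = cusum_stopping_time(xi, log_lr, h)
print(N)                              # alarm time, typically shortly after m
```

With a post-change mean shift of one standard deviation, the statistic drifts upward at about 0.5 per sample after m, so the alarm fires soon after the change.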

  4. All the optimality proofs need $\{l_n\}$ to be i.i.d., not $\{\xi_n\}$.
     Given $\{P_n(\xi_n \mid \mathcal{F}_{n-1})\}$ and $\{Q_n(\xi_n \mid \mathcal{F}_{n-1})\}$, let
       $l_n = \dfrac{dQ_n(\xi_n \mid \mathcal{F}_{n-1})}{dP_n(\xi_n \mid \mathcal{F}_{n-1})}$
     If, for all $n$, $P_n\{l_n \le x \mid \mathcal{F}_{n-1}\} = F_0(x)$, then
       $Q_n\{l_n \le x \mid \mathcal{F}_{n-1}\} = F_1(x) = \int_0^x z\, dF_0(z)$
     and $\{l_n\}$ is i.i.d. under both measures induced by the two sequences of conditional measures.

  5. Examples — Finite-State Markov Chains
     Two states:
       $P = \begin{pmatrix} p & 1-p \\ 1-p & p \end{pmatrix}, \qquad Q = \begin{pmatrix} q & 1-q \\ 1-q & q \end{pmatrix}$
     The matrix of likelihood ratios is
       $L = \begin{pmatrix} q/p & (1-q)/(1-p) \\ (1-q)/(1-p) & q/p \end{pmatrix}$
     so that, whatever the current state,
       $P\!\left(l_n = \tfrac{q}{p} \,\middle|\, \mathcal{F}_{n-1}\right) = p, \qquad P\!\left(l_n = \tfrac{1-q}{1-p} \,\middle|\, \mathcal{F}_{n-1}\right) = 1-p$
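A quick simulation illustrates the criterion of slide 4 for this two-state example: under $H_0$ the likelihood ratio $l_n$ takes the value $q/p$ with probability $p$ regardless of the current state, so $\{l_n\}$ is i.i.d. The numerical values of p and q below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 0.3, 0.6
P = np.array([[p, 1 - p], [1 - p, p]])   # transition matrix under H0
Q = np.array([[q, 1 - q], [1 - q, q]])   # transition matrix under H1

# Simulate the chain under H0 and record the likelihood ratio
# l_n = Q[x_{n-1}, x_n] / P[x_{n-1}, x_n] of each transition.
x, lrs = 0, []
for _ in range(50000):
    x_next = rng.choice(2, p=P[x])
    lrs.append(Q[x, x_next] / P[x, x_next])
    x = x_next

lrs = np.array(lrs)
freq = np.mean(np.isclose(lrs, q / p))
print(freq)                              # ≈ p, independent of the state visited
```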

  6. Generalization: let $\vec{p} = [p_1\; p_2 \cdots p_s]$ and $\vec{q} = [q_1\; q_2 \cdots q_s]$ with $\sum_i p_i = \sum_i q_i = 1$, $p_i, q_i \ge 0$, and let $T_i$, $i = 1, \ldots, s$, be permutation matrices. Take
       $P = \begin{pmatrix} \vec{p}\,T_1 \\ \vec{p}\,T_2 \\ \vdots \\ \vec{p}\,T_s \end{pmatrix}, \qquad Q = \begin{pmatrix} \vec{q}\,T_1 \\ \vec{q}\,T_2 \\ \vdots \\ \vec{q}\,T_s \end{pmatrix}$
     Cyclic case:
       $P = \begin{pmatrix} p_1 & p_2 & p_3 & 0 & \cdots & 0 \\ 0 & p_1 & p_2 & p_3 & \cdots & 0 \\ \vdots & & \ddots & & & \vdots \\ p_2 & p_3 & 0 & 0 & \cdots & p_1 \end{pmatrix}$
     The $T_i$ can be time-varying.
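For the cyclic case, the permutation matrices $T_i$ are cyclic shifts, so each row of $P$ is a rotation of the probability vector. A small sketch with an illustrative $\vec{p}$ (s = 3, so no zero entries appear):

```python
import numpy as np

s = 3
p = np.array([0.2, 0.5, 0.3])            # hypothetical probability vector
# With T_i the cyclic shift by i positions, row i of P is p T_i = roll(p, i).
P = np.array([np.roll(p, i) for i in range(s)])
print(P)
# Every row is a permutation of p, so P is automatically stochastic:
print(np.allclose(P.sum(axis=1), 1.0))   # True
```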

  7. AR Processes
       $H_0$: $\xi_n = w_n$, with $w_n$ i.i.d. uniform on $[-1, 1]$
       $H_1$: $\xi_n = \alpha \xi_{n-1} + w_n$, with $w_n$ i.i.d. with density $f_1(w)$ supported on $[-(1-\alpha), (1-\alpha)]$
     Then
       $P_n(l_n \le x \mid \mathcal{F}_{n-1}) = 0.5\, \nu\{\xi_n : 2 f_1(\xi_n - \alpha \xi_{n-1}) \le x\} = 0.5\, \nu\{w : 2 f_1(w) \le x\}$
     which does not depend on $\xi_{n-1}$ ($\nu$ denotes Lebesgue measure).
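A Monte Carlo check of this pivotality: under $H_0$ the conditional CDF of $l_n = 2 f_1(\xi_n - \alpha\xi_{n-1})$ comes out the same for very different past values $\xi_{n-1}$. The triangular choice of $f_1$ below is purely illustrative (any density supported on $[-(1-\alpha), 1-\alpha]$ works).

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 0.5

def f1(w):
    # Hypothetical H1 noise density: triangular on [-(1-alpha), 1-alpha].
    a = 1 - alpha
    return np.where(np.abs(w) <= a, (a - np.abs(w)) / a**2, 0.0)

def lr_cdf_given_past(xi_prev, x, n=200000):
    # Under H0, xi_n is uniform on [-1, 1] and independent of the past;
    # the likelihood ratio is l_n = 2 f1(xi_n - alpha * xi_prev).
    xi_n = rng.uniform(-1, 1, n)
    return np.mean(2 * f1(xi_n - alpha * xi_prev) <= x)

# The conditional CDF of l_n is the same for different past values:
print(lr_cdf_given_past(-0.9, 1.0), lr_cdf_given_past(0.7, 1.0))
```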

  8. Random Walk on a Circle
       $H_0$: $\{\xi_n\}$ i.i.d. uniform on the unit circle
       $H_1$: $\xi_n = g(\xi_{n-1} + w_n)$, with $w_n$ i.i.d. with density $f_1(w)$, where $g(\xi) = \xi - 2k\pi$ for $2k\pi \le \xi < 2(k+1)\pi$
     The transition density under $H_1$ is
       $h(\xi_n \mid \xi_{n-1}) = \sum_{k=-\infty}^{\infty} f_1(\xi_n - \xi_{n-1} + 2k\pi)$
     therefore
       $l_n = 2\pi \sum_{k=-\infty}^{\infty} f_1(\xi_n - \xi_{n-1} + 2k\pi)$
       $P_n(l_n \le x \mid \mathcal{F}_{n-1}) = (2\pi)^{-1}\, \nu\{w : 2\pi \sum_{k=-\infty}^{\infty} f_1(w + 2k\pi) \le x\}$
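The same pivotality check works here: under $H_0$, $\xi_n$ is uniform and independent of the past, so the conditional CDF of $l_n = 2\pi\, h(\xi_n - \xi_{n-1})$ does not depend on $\xi_{n-1}$. The wrapped-Gaussian choice of $f_1$ and the truncation of the infinite sum below are illustrative, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(6)
TWO_PI = 2 * np.pi

def h(u, sigma=0.8, kmax=20):
    # Wrapped transition density sum_k f1(u + 2k*pi), with the
    # hypothetical choice f1 = N(0, sigma^2); the sum is truncated at kmax.
    k = np.arange(-kmax, kmax + 1)
    z = np.asarray(u)[..., None] + TWO_PI * k
    return np.exp(-z**2 / (2 * sigma**2)).sum(-1) / (sigma * np.sqrt(TWO_PI))

def cdf_lr_given_past(xi_prev, x, n=200000):
    xi_n = rng.uniform(0, TWO_PI, n)      # under H0, uniform and independent
    lr = TWO_PI * h(xi_n - xi_prev)       # l_n = 2*pi * h(xi_n - xi_{n-1})
    return np.mean(lr <= x)

# The conditional CDF is the same for different past positions on the circle:
print(cdf_lr_given_past(0.5, 1.0), cdf_lr_given_past(4.0, 1.0))
```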

  9. Extension of Wald's First Identity to Markov Processes
     Let $X_1, X_2, \ldots$ be i.i.d. and $S_n = \sum_{k=1}^{n} X_k$.
     Simplest form: if $E[|X_1|] < \infty$ and $N$ is a stopping time with $E[N] < \infty$, then
       $E[S_N] = E\!\left[\sum_{n=1}^{N} X_n\right] = E[X_1]\, E[N]$
     If $E[X_1] = 0$ then $E[S_N] = 0$. Generalizations consider $E[X_1] = 0$ and relax $E[N] < \infty$: if $E[|X_1|^{\alpha}] < \infty$ and $E[N^{1/\alpha}] < \infty$, $1 \le \alpha \le 2$, then $E[S_N] = 0$.
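The simplest form of the identity is easy to confirm by simulation. The example below is hypothetical: $X_n$ i.i.d. $N(0.2, 1)$ and $N$ the first exit time of $S_n$ from $(-3, 3)$, which is a stopping time with $E[N] < \infty$.

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo check of E[S_N] = E[X_1] E[N] for an illustrative example:
# X_n i.i.d. N(0.2, 1) and N = first n with |S_n| >= 3.
def run():
    s, n = 0.0, 0
    while abs(s) < 3:
        s += rng.normal(0.2, 1.0)
        n += 1
    return s, n

samples = [run() for _ in range(20000)]
ES_N = np.mean([s for s, _ in samples])
EN = np.mean([n for _, n in samples])
print(ES_N, 0.2 * EN)   # the two sides of the identity, equal up to MC error
```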

  10. The Markov Case
      Let $\{\xi_n\}$ be a homogeneous Markov process and $\theta(\xi)$ a scalar nonlinearity. Consider $X_n = \theta(\xi_n)$ and $S_n = \sum_{k=1}^{n} \theta(\xi_k)$:
        $E\!\left[\sum_{n=1}^{N} \theta(\xi_n)\right] = \;?$
      A first result:
        $E[S_N] = \mu'(0)\, E[N] - E[r'(\xi_N, 0)] + E[r'(\xi_0, 0)]$
      where $\mu(s)$ and $r(\xi, s)$ solve, for the operator $x(\xi) \mapsto y(\xi) = E[e^{s\theta(\xi_1)}\, x(\xi_1) \mid \xi_0 = \xi]$, the eigenvalue problem
        $e^{\mu(s)}\, r(\xi, s) = E[e^{s\theta(\xi_1)}\, r(\xi_1, s) \mid \xi_0 = \xi]$

  11. Proposed Extension:
        $E[S_N] = \lim_{n \to \infty} E[\theta(\xi_n)]\; E[N] + E[\omega(\xi_0)] - E[\omega(\xi_N)]$
      where $\omega(\xi)$ satisfies a Poisson Integral Equation that has a closed-form solution in several interesting cases.
      Requirements:
        1. Existence of an invariant measure $\pi$.
        2. Class of functions $\theta(\xi)$: $E_{\pi}[|\theta(\xi)|] < \infty$.
        3. Type of ergodicity: $E[\theta(\xi_n)] \to E_{\pi}[\theta(\xi)]$.
      Background: Meyn & Tweedie, Markov Chains and Stochastic Stability.

  12. Theorem (Meyn and Tweedie): Let $\{\xi_n\}$ be irreducible and aperiodic; then the following two conditions are equivalent:
        i) There exist a function $V(\xi) \ge 1$, a proper set $C$ and constants $0 \le \lambda < 1$, $b < \infty$ such that the following Drift Condition is satisfied:
             $E[V(\xi_1) \mid \xi_0 = \xi] \le \lambda V(\xi) + b\, \mathbb{1}_C(\xi)$
           $V(\xi)$ is called a Drift Function.
        ii) There exist a probability measure $\pi$, a function $V(\xi) \ge 1$ and constants $0 \le \rho < 1$, $R < \infty$ such that
             $\sup_{|g| \le V} \left| E[g(\xi_n) \mid \xi_0 = \xi] - \pi g \right| \le \rho^n R\, V(\xi)$

  13. Denote $P^n g = E[g(\xi_n) \mid \xi_0 = \xi]$. The drift condition can then be written as
        $PV \le \lambda V + b\, \mathbb{1}_C$
      Define the space $L^{\infty}_V$ of all measurable functions $g(\xi)$ such that
        $\sup_{\xi} \dfrac{|g(\xi)|}{V(\xi)} < \infty$
      and define on $L^{\infty}_V$ the norm
        $\|g\|_V = \sup_{\xi} \dfrac{|g(\xi)|}{V(\xi)}$
      Then $L^{\infty}_V$ is Banach. Furthermore, for $g \in L^{\infty}_V$ we have, due to the theorem of Meyn and Tweedie,
        $|P^n g - \pi g| \le \rho^n R\, \|g\|_V\, V(\xi)$
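For a finite chain, $V$ can be taken constant and the geometric bound $|P^n g - \pi g| \le \rho^n R \|g\|_V V(\xi)$ can be observed directly, with $\rho$ the second-largest eigenvalue modulus of $P$. The chain and test function below are illustrative numbers.

```python
import numpy as np

# A small irreducible, aperiodic chain (illustrative numbers).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi /= pi.sum()                                  # invariant measure, pi^t P = pi^t

g = np.array([1.0, -2.0, 0.5])                  # illustrative test function
rho = np.sort(np.abs(evals))[-2]                # second-largest eigenvalue modulus
errs = [np.max(np.abs(np.linalg.matrix_power(P, n) @ g - pi @ g))
        for n in range(1, 11)]
# max_xi |P^n g - pi g| decays geometrically at rate rho:
print(errs[0], errs[-1], rho)
```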

  14. Lemma: Let $\theta(\xi) \in L^{\infty}_V$ and consider the Poisson Integral Equation with respect to the unknown $\omega(\xi)$:
        $P\omega = \omega - (P\theta - \pi\theta), \qquad \pi\omega = 0$
      Then the unique solution in $L^{\infty}_V$ is
        $\omega = \sum_{n=1}^{\infty} (P^n \theta - \pi\theta)$
      Theorem: Let $E[V(\xi_0)] < \infty$; then for any $\theta(\xi) \in L^{\infty}_V$ we have
        $E[S_N] = E\!\left[\sum_{n=1}^{N} \theta(\xi_n)\right] = (\pi\theta)\, E[N] + E[\omega(\xi_0)] - E[\omega(\xi_N)]$
        $\lim_{E[N] \to \infty} \dfrac{E[\omega(\xi_0)] - E[\omega(\xi_N)]}{E[N]} = 0$

  15. Examples — Finite-State Markov Chains
      Let $\xi_n$ have $K$ states and let $P$ denote the transition probability matrix. $P$ has a unit eigenvalue; if this eigenvalue is simple and all other eigenvalues have magnitude strictly less than unity, then the chain is irreducible and aperiodic and an invariant measure $\pi$ exists, namely the left eigenvector of $P$ corresponding to the unit eigenvalue, i.e. $\pi^t P = \pi^t$ and $[1 \cdots 1]\, \pi = 1$.

  16. Any function $\theta(\xi)$ can be regarded as a vector $\theta$ of length $K$, and its expectation under the invariant measure is simply $\pi^t \theta$. The Poisson Equation and its constraint take here the form
        $(P - I)\, \omega = -(P - J\pi^t)\, \theta, \qquad \pi^t \omega = 0$
      where $I$ is the identity matrix and $J = [1 \cdots 1]^t$. If the null space of $P$ is nontrivial then we can find vectors $\theta$ with corresponding $\omega = 0$.
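In the finite-state case the Poisson Equation is just a singular linear system, solvable by stacking the constraint $\pi^t \omega = 0$ under $P - I$. A minimal sketch, with the chain and $\theta$ chosen as illustrative numbers:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])               # hypothetical 3-state chain
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi /= pi.sum()                                # invariant measure

theta = np.array([1.0, -1.0, 2.0])            # illustrative theta
K = len(theta)
J = np.ones((K, 1))
rhs = -(P - J @ pi[None, :]) @ theta

# (P - I) is singular; append the constraint pi^t omega = 0 and solve the
# stacked system by least squares (the system is consistent, so the
# solution is exact).
A = np.vstack([P - np.eye(K), pi[None, :]])
b = np.append(rhs, 0.0)
omega, *_ = np.linalg.lstsq(A, b, rcond=None)

# Verify the Poisson Equation: P omega = omega - (P theta - pi theta)
print(np.allclose(P @ omega, omega - (P @ theta - pi @ theta)))  # True
```

The right-hand side is orthogonal to $\pi$ (since $\pi^t P = \pi^t$ and $\pi^t J = 1$), which is exactly why the singular system is consistent.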

  17. Finite Dependence
      Consider $\{\zeta_n\}_{n=-m+1}^{\infty}$ i.i.d. with probability measure $\mu$ and define $\xi_n = (\zeta_n, \zeta_{n-1}, \ldots, \zeta_{n-m+1})$. For simplicity take $m = 2$, i.e. $\xi_n = (\zeta_n, \zeta_{n-1})$, and suppose we are interested in $\theta(\zeta_n, \zeta_{n-1})$. The invariant measure exists and equals $\pi = \mu \times \mu$. Furthermore, one can show that the process is irreducible and aperiodic; in fact $P^n = \pi$ for $n \ge 2$. This means the solution to the Poisson Equation is
        $\omega = \sum_{n=1}^{\infty} (P^n \theta - \pi\theta) = P\theta - \pi\theta$
      or
        $\omega(\zeta) = E[\theta(\zeta_1, \zeta_0) \mid \zeta_0 = \zeta] - E[\theta(\zeta_1, \zeta_0)]$

  18. The generalized Wald identity then takes the form
        $E\!\left[\sum_{n=1}^{N} \theta(\zeta_n, \zeta_{n-1})\right] = E[\theta(\zeta_1, \zeta_0)]\, E[N] + E[\omega(\zeta_0)] - E[\omega(\zeta_N)]$
      where
        $\omega(\zeta) = E[\theta(\zeta_1, \zeta_0) \mid \zeta_0 = \zeta] - E[\theta(\zeta_1, \zeta_0)]$
      Finding functions $\theta(\xi)$ for which $\omega(\xi) = 0$ is easy: let $g(\zeta_1, \zeta_0)$ be such that $\pi|g| < \infty$; then taking
        $\theta(\zeta_1, \zeta_0) = g(\zeta_1, \zeta_0) - E[g(\zeta_1, \zeta_0) \mid \zeta_0] + c$
      yields $\omega(\zeta) = 0$.
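The identity can be checked numerically for a simple 2-dependent example. The choices below are illustrative: $\zeta_n$ i.i.d. Uniform(0,1) and $\theta(\zeta_1, \zeta_0) = \zeta_1 \zeta_0$, so $\pi\theta = 1/4$ and the closed-form corrector is $\omega(\zeta) = \zeta/2 - 1/4$; the stopping time is a first-passage time of the running sum.

```python
import numpy as np

rng = np.random.default_rng(4)

# theta(z1, z0) = z1 * z0 with zeta_n i.i.d. Uniform(0,1):
# pi*theta = E[z]^2 = 0.25 and omega(z) = 0.5*z - 0.25.
def run(thresh=5.0):
    z_prev = rng.uniform()
    z0 = z_prev
    s, n = 0.0, 0
    while s < thresh:
        z = rng.uniform()
        s += z * z_prev
        z_prev = z
        n += 1
    return s, n, z0, z_prev   # S_N, N, zeta_0, zeta_N

runs = [run() for _ in range(20000)]
ES_N = np.mean([r[0] for r in runs])
EN = np.mean([r[1] for r in runs])
corr = np.mean([0.5 * r[2] - 0.25 for r in runs]) - \
       np.mean([0.5 * r[3] - 0.25 for r in runs])
print(ES_N, 0.25 * EN + corr)   # the two sides of the identity agree
```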

  19. AR Processes
      We consider the scalar case
        $\xi_n = \alpha \xi_{n-1} + w_n, \qquad \{w_n\}\ \text{i.i.d.}, \quad |\alpha| < 1$
      Lemma: If $w_n$ has an everywhere positive density, then $\{\xi_n\}$ is irreducible and aperiodic.
        1. If $E[|w_1|^p] < \infty$ then $V(\xi) = 1 + |\xi|^p$ is a drift function.
        2. If for some $c > 0$ we have $E[e^{c|w_1|^p}] < \infty$ (true for $1 \le p \le 2$ when $w_n$ is Gaussian), then there exists $\delta > 0$ such that $V(\xi) = e^{\delta|\xi|^p}$ is a drift function.
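Case 1 of the lemma can be checked numerically: for Gaussian noise and $p = 2$, $E[V(\xi_1) \mid \xi_0 = \xi] = \alpha^2\xi^2 + \sigma^2 + 1$, which is dominated by $\lambda V(\xi) + b\,\mathbb{1}_C(\xi)$ for suitable constants. The values of $\alpha$, $\lambda$, $b$ and the set $C = [-5, 5]$ below are illustrative choices, verified by Monte Carlo over a grid of initial states.

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, p = 0.7, 2

def V(x):
    return 1 + np.abs(x)**p                 # candidate drift function

# Estimate E[V(xi_1) | xi_0 = x] for xi_1 = alpha*x + w, w ~ N(0,1).
w = rng.normal(size=200000)
def PV(x):
    return np.mean(V(alpha * x + w))

xs = np.linspace(-10, 10, 81)
lam, b, C = 0.8, 3.0, 5.0                   # candidate lambda, b, set C = [-C, C]
drift_ok = all(PV(x) <= lam * V(x) + (b if abs(x) <= C else 0.0) for x in xs)
print(drift_ok)                             # True
```

Here $\alpha^2 = 0.49 < \lambda = 0.8$, so outside $C$ the inequality holds with room to spare, and inside $C$ the constant $b$ absorbs the noise variance.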
