02407 Stochastic Processes
Lecture 6: Discrete-time Markov Chains III

Uffe Høgsbro Thygesen
Informatics and Mathematical Modelling
Technical University of Denmark
2800 Kgs. Lyngby, Denmark
Email: uht@imm.dtu.dk

Outline

■ Recap of the stationary distribution, and the vector ρ(k)
■ Markov chains on finite state spaces: what to learn from the eigenvalues/eigenvectors of P, and how to analyse P numerically
■ Time-reversible Markov chains and "detailed balance"

The vector ρ(k)

Let the chain start with X_0 = k, and stop monitoring the process at

  T_k = min{ n > 0 : X_n = k }.

Let N_i denote the number of visits to state i:

  N_i = #{ n : X_n = i, T_k ≥ n } = Σ_{n=1}^∞ 1(X_n = i, T_k ≥ n)

Let ρ_i(k) = E_k N_i. Taking expectations of the indicator functions,

  ρ_i(k) = Σ_{n=1}^∞ P_k(X_n = i, T_k ≥ n).

ρ(k) for a two-state Markov chain

  P = [ 1−p   p  ]
      [  q   1−q ]    with 0 < p, q ≤ 1

We can write up the distribution of N_2 explicitly:

  P_1(N_2 = k) = 1 − p             for k = 0,
  P_1(N_2 = k) = p (1−q)^(k−1) q   for k > 0.

Note the geometric tails. In particular

  ρ(1) = (1, p/q),   so that ρ(1) = ρ(1) P.

Compare the stationary distribution

  π = (q, p) / (q + p).
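The two-state identities above are easy to check numerically. The sketch below is in Python rather than the course's Matlab/R; the values of p and q are arbitrary choices for illustration, and left_mult is a small helper defined here, not part of the course material.

```python
# Two-state chain: states 1 and 2, P = [[1-p, p], [q, 1-q]].
p, q = 0.3, 0.5  # arbitrary values with 0 < p, q <= 1

P = [[1 - p, p],
     [q, 1 - q]]

def left_mult(v, M):
    """Row vector v times matrix M (helper, not from the course)."""
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

rho1 = [1.0, p / q]                         # rho(1) = (1, p/q)
assert all(abs(a - b) < 1e-12               # rho(1) = rho(1) P
           for a, b in zip(rho1, left_mult(rho1, P)))

pi = [q / (p + q), p / (p + q)]             # stationary distribution
mu1 = sum(rho1)                             # mean recurrence time of state 1
print([r / mu1 for r in rho1])              # agrees with pi
print(pi)
```

Normalising ρ(1) by its sum reproduces π, previewing the lemma on the next slide.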
Lemma 6.4.5: ρ(k) = ρ(k) P

During one cycle, we expect to spend ρ_i(k) time steps at state i. For each of these time steps, the probability that the next time step is at state j is p_ij. Thus

  ρ_j(k) = Σ_i ρ_i(k) p_ij.

The simple symmetric random walk

In this case

  ρ_i(k) = 1   for all i

is a solution of ρ(k) = ρ(k) P.

So: start the walk in k = 0, and let i be arbitrary. The expected number of visits to i before returning to the origin is 1. (We already saw this surprising result in theorem 3.10.8.)

ρ(k) and the mean recurrence time

The time until recurrence must be spent somewhere: T_k = Σ_i N_i. Assuming positive recurrence, we take expectations w.r.t. P_k:

  μ_k = Σ_i ρ_i(k).

This, together with the other properties of ρ(k):

1. ρ(k) = ρ(k) P,
2. ρ_k(k) = 1,

means that we can generate a stationary distribution from ρ(k), whenever Σ_j ρ_j(k) is finite:

  π = ρ(k) / μ_k.

Note that π_k = 1/μ_k.

The stationary distribution

In general the distribution evolves according to the Chapman-Kolmogorov equations (lemma 6.1.8 with n = 1):

  μ^(m+1) = μ^(m) P

(Recall that μ_i^(m) = P(X_m = i).)

A stationary distribution is a constant-in-time solution to this recursion:

  π = π P.

In addition, π must be a distribution, so

  π_i ≥ 0   and   Σ_{i=1}^N π_i = 1.
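The relation μ_k = Σ_i ρ_i(k) can also be seen in simulation. The sketch below is Python rather than the course's Matlab/R; the 3-state chain and the number of cycles are arbitrary choices for illustration. Note that ρ_k(k) = 1 and μ_k = Σ_i ρ_i(k) hold exactly in the estimates, by construction of the counting.

```python
import random

random.seed(1)

# An arbitrary 3-state chain for illustration (rows sum to 1).
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]

def step(i):
    """Draw the next state from row i of P."""
    return random.choices(range(3), weights=P[i])[0]

k = 0
n_cycles = 20_000
visits = [0.0, 0.0, 0.0]
total_time = 0

for _ in range(n_cycles):
    x = k
    while True:              # one excursion from k back to k
        x = step(x)
        visits[x] += 1       # counts the steps n = 1, ..., T_k
        total_time += 1
        if x == k:
            break

rho = [v / n_cycles for v in visits]   # Monte Carlo estimate of rho_i(k)
mu = total_time / n_cycles             # Monte Carlo estimate of mu_k
print(rho, mu)                         # rho[k] is exactly 1; sum(rho) equals mu
```

Dividing the estimated ρ(k) by the estimated μ_k then gives a simulation-based estimate of π.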
Recap: properties of the stationary distribution

1. If the initial state is distributed according to π, then any later state is, too.
2. The mean recurrence time μ_i is 1/π_i.
3. If the chain is irreducible and aperiodic,

     p_ij(n) → π_j   as n → ∞.

4. If the chain is irreducible, the fraction of time spent in state i over a long time interval is π_i (see problem 7.11.32).

How to find the stationary distribution π?

(Simulate for a long time and plot a histogram, provided the chain is ergodic.)

A simple technique is null spaces.

In Matlab:

  pivec = null(P' - eye(length(P)));
  pivec = pivec/sum(pivec);

In R (with MASS):

  pivec <- Null(P - diag(nrow(P)))
  pivec <- pivec/sum(pivec)

Finding π using eigenvectors

The stationarity condition

  π = π P

says that π is a left eigenvector of P, with eigenvalue λ = 1.

In Matlab:

  [V,D] = eig(P');
  pivec = V(:, abs(diag(D) - 1) < 1e-6);
  pivec = real(pivec/sum(pivec));

In R:

  evs <- eigen(t(P))
  pivec <- evs$vectors[, abs(evs$values - 1) < 1e-6]
  pivec <- Re(pivec/sum(pivec))

The Perron-Frobenius theorem 6.6.1

We know that 1 is an eigenvalue of a stochastic matrix, because with 1 = (1, ..., 1)':

  P 1 = 1,   i.e.   Σ_j p_ij = 1.

According to the Perron-Frobenius theorem: if the chain is aperiodic and irreducible, then the eigenvalue 1 is simple (i.e., has multiplicity 1) and all other eigenvalues λ have |λ| < 1.

According to Farkas' theorem (exercise 6.6.2), the left eigenvector can be taken to be a distribution.
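Perron-Frobenius also explains why plain power iteration μ^(m+1) = μ^(m) P finds π: since every other eigenvalue has |λ| < 1, all other eigencomponents decay. A minimal Python sketch (no linear-algebra library; the chain, starting distribution, and tolerances are arbitrary illustration choices, and left_mult is a helper defined here):

```python
# Power iteration for the stationary distribution of an irreducible,
# aperiodic chain: mu(m+1) = mu(m) P converges to pi because all
# eigenvalues other than 1 satisfy |lambda| < 1 (Perron-Frobenius).
P = [[0.9, 0.1, 0.0],
     [0.4, 0.5, 0.1],
     [0.0, 0.3, 0.7]]

def left_mult(v, M):
    """Row vector v times matrix M."""
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

mu = [1.0, 0.0, 0.0]          # any initial distribution works
for _ in range(10_000):
    nxt = left_mult(mu, P)
    if max(abs(a - b) for a, b in zip(mu, nxt)) < 1e-14:
        break
    mu = nxt

pi = mu
print(pi)                      # satisfies pi = pi P with sum(pi) = 1
```

This is the simulation-free counterpart of "run for a long time": iterating the distribution instead of the chain.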
Finding the time to absorption

Consider a chain with one absorbing state (state N), and transition matrix partitioned as

  [ P   p ]
  [ 0   1 ]

where P is the (N−1)×(N−1) block of transitions among the transient states.

Let π be the initial distribution on {1, ..., N−1}, and let e = (1, ..., 1)'.

The probability of not being absorbed by time n is π P^n e. This is the survival function.

The expected time to absorption is Σ_{n=0}^∞ π P^n e = π (I − P)^{−1} e.

Finding the hitting time distribution f_ij(n)

Consider an irreducible chain.

Let T_j be the stopping time min{ n ≥ 1 : X_n = j }.

f_ij(·) is the distribution of T_j, when starting in X_0 = i.

Modify P so that state j is absorbing; this does not change f_ij when i ≠ j.

Iterate μ^(n+1) = μ^(n) P starting with a μ^(0) which has a 1 at i and zeros elsewhere. Then

  P_i(T_j ≤ n) = μ_j^(n)

is the distribution function of T_j, and

  f_ij(n) = P_i(T_j ≤ n) − P_i(T_j ≤ n−1).

The mean hitting time

(as in the gamble for the Jaguar)

Define φ so that φ_i = E_i T_j.

Condition on the first time step:

  E_i T_j = E_i [ E(T_j | X_1) ] = Σ_k p_ik (φ_k + 1)   for i ≠ j,

and when i = j we have φ_j = 0.

Steady-state and detailed balance

The steady-state criterion

  π = π P

says that the net flow away from each state i is zero. This is global balance.

A stronger criterion is that the net exchange between any two states i and j is zero:

  π_i p_ij = π_j p_ji   whenever i ≠ j.
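Both absorption computations can be sketched in a few lines of Python (the transient block and starting state below are arbitrary illustration values, and left_mult is a helper defined here, not from the course):

```python
# Transient block P of a chain with one absorbing state; rows sum to
# less than 1, the deficit being the one-step absorption probability.
P = [[0.6, 0.2],
     [0.3, 0.4]]
pi0 = [1.0, 0.0]   # start in the first transient state

def left_mult(v, M):
    """Row vector v times matrix M."""
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

# Expected absorption time: sum the survival function pi P^n e over n >= 0,
# which equals pi (I - P)^{-1} e.
mu, expected_time = pi0[:], 0.0
for n in range(10_000):
    survival = sum(mu)          # P(not absorbed by time n)
    if survival < 1e-15:
        break
    expected_time += survival
    mu = left_mult(mu, P)
print(expected_time)

# Distribution of the absorption time T: F(n) = 1 - pi P^n e,
# and f(n) = F(n) - F(n-1) as on the slide.
mu, F_prev, f = pi0[:], 0.0, []
for n in range(1, 20):
    mu = left_mult(mu, P)
    F = 1.0 - sum(mu)
    f.append(F - F_prev)
    F_prev = F
print(f[:5])
```

For the hitting-time distribution f_ij of an irreducible chain, the same loop applies after row j of the full matrix is replaced by the unit row that makes j absorbing.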
In matrix-vector form, the mean hitting time equations read

  φ = D (P φ + 1)

where 1 = (1, ..., 1)', and D is a diagonal matrix with ones on the diagonal except a single zero at position (j, j).

The pairwise condition π_i p_ij = π_j p_ji is the criterion of detailed balance. Sum over i to see that it implies global balance.
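The fixed-point form φ = D(Pφ + 1) suggests solving by iteration, and detailed balance is a finite set of pairwise checks. A Python sketch of both (the chains, target state, and tolerances are arbitrary illustration choices, not from the course):

```python
# Mean hitting times phi_i = E_i T_j by iterating phi <- D(P phi + 1);
# the D matrix zeroes row j, which keeps phi_j = 0 fixed.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]
j = 2   # target state

phi = [0.0, 0.0, 0.0]
for _ in range(10_000):
    new = [sum(P[i][k] * (phi[k] + 1) for k in range(3)) for i in range(3)]
    new[j] = 0.0                      # effect of D: row j is zeroed
    if max(abs(a - b) for a, b in zip(phi, new)) < 1e-13:
        phi = new
        break
    phi = new
print(phi)   # phi[j] == 0; phi[i] is the expected number of steps to hit j

# Detailed balance on a birth-death chain: pi_i p_{i,i+1} = pi_{i+1} p_{i+1,i}.
Q = [[0.7, 0.3, 0.0],
     [0.2, 0.5, 0.3],
     [0.0, 0.6, 0.4]]
nu = [1.0]                            # unnormalised reversible measure
for i in range(2):
    nu.append(nu[i] * Q[i][i + 1] / Q[i + 1][i])
pi = [x / sum(nu) for x in nu]
assert all(abs(pi[i] * Q[i][i + 1] - pi[i + 1] * Q[i + 1][i]) < 1e-12
           for i in range(2))
```

The birth-death construction works because flow can only cross a "cut" between neighbouring states, so balancing each pair already balances the whole chain.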