1. Discrete Time Markov Chains

   EECS 126, Fall 2019
   October 15, 2019

2. Agenda

   ◮ Announcements
   ◮ Introduction
     ◮ Recap of Discrete Time Markov Chains
     ◮ n-step Transition Probabilities
   ◮ Classification of States
     ◮ Recurrent and Transient States
     ◮ Decomposition of States
     ◮ General Decomposition of States
     ◮ Periodicity
   ◮ Stationary Distributions
     ◮ Definitions
     ◮ Balance Equations

3. Announcements

   ◮ Homework 7 due tomorrow night (10/16)!
   ◮ Lab self-grades due Monday night (10/21).

4. Recap of Discrete Time Markov Chains

   Figure: example of a Markov chain (diagram not reproduced).

   ◮ The state changes at discrete times.
   ◮ The state $X_n$ belongs to a finite set $S$ (for now).
   ◮ The chain satisfies the Markov property for transitions from state $i \in S$ to state $j \in S$:

     $$P(X_{n+1} = j \mid X_n = i, X_{n-1} = x_{n-1}, \ldots, X_1 = x_1) = P(X_{n+1} = j \mid X_n = i) = p_{ij}$$

     where $p_{ij} \ge 0$ and $\sum_j p_{ij} = 1$.
   ◮ Time homogeneous: the transition probabilities do not depend on the time $n$.

5. Recap of Discrete Time Markov Chains

   The probability transition matrix $P$ contains all the information about transitions between different states:

   $$P = \begin{pmatrix} p_{11} & p_{12} & \cdots & p_{1m} \\ p_{21} & p_{22} & \cdots & p_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ p_{m1} & p_{m2} & \cdots & p_{mm} \end{pmatrix}$$

   Let $\pi(n) = \begin{pmatrix} P(X_n = 1) & \cdots & P(X_n = m) \end{pmatrix}$. Then

   $$\pi(n+1) = \pi(n) P \implies \pi(n) = \pi(0) P^n$$
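As a quick illustration, here is a minimal NumPy sketch of this evolution. The matrix `P` and initial distribution below are made-up values, not taken from the slides:

```python
import numpy as np

# Hypothetical 3-state transition matrix; each row must sum to 1.
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.3, 0.0, 0.7]])

pi0 = np.array([0.3, 0.3, 0.4])  # initial distribution pi(0)

# pi(n) = pi(0) P^n: one matrix power instead of n row-vector products.
pi_10 = pi0 @ np.linalg.matrix_power(P, 10)
print(pi_10)  # distribution of X_10
```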

6. Example

   Figure: a three-state chain. $q_1$ has a self-loop with probability $a$ and moves to $q_2$ with probability $1-a$; $q_2$ has a self-loop with probability $1-b$ and moves to $q_3$ with probability $b$; $q_3$ has a self-loop with probability $1-c$ and moves back to $q_2$ with probability $c$.

   $$\pi_0 = \begin{pmatrix} 0.3 & 0.3 & 0.4 \end{pmatrix}$$

   ◮ Write the probability transition matrix $P$.
   ◮ What is $P(X_0 = q_1, X_1 = q_2, X_3 = q_1)$?
   ◮ What is $P(X_0 = q_1, X_1 = q_1, X_2 = q_2, X_3 = q_3, X_4 = q_3)$?

7. Answers

   (Same chain as on the previous slide.)

   ◮ $$P = \begin{pmatrix} a & 1-a & 0 \\ 0 & 1-b & b \\ 0 & c & 1-c \end{pmatrix}$$
   ◮ $P(X_0 = q_1, X_1 = q_2, X_3 = q_1) = 0$. You cannot reach $q_1$ from $q_2$.
   ◮ Use the Markov property:
     $$P(X_0 = q_1, X_1 = q_1, X_2 = q_2, X_3 = q_3, X_4 = q_3) = P(X_0 = q_1) \cdot P(X_1 = q_1 \mid X_0 = q_1) \cdots P(X_4 = q_3 \mid X_3 = q_3) = 0.3 \cdot a \cdot (1-a) \cdot b \cdot (1-c)$$
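The chain-rule calculation above works for any path. A short sketch, with made-up values for $a$, $b$, $c$ (the slides leave them symbolic):

```python
import numpy as np

# Hypothetical parameter values; the slides leave a, b, c symbolic.
a, b, c = 0.6, 0.4, 0.2

P = np.array([[a, 1 - a, 0.0],
              [0.0, 1 - b, b],
              [0.0, c, 1 - c]])
pi0 = np.array([0.3, 0.3, 0.4])

def path_probability(states, P, pi0):
    """P(X_0 = s_0, ..., X_k = s_k) via the chain rule and Markov property."""
    prob = pi0[states[0]]
    for i, j in zip(states, states[1:]):
        prob *= P[i, j]
    return prob

# States indexed 0, 1, 2 for q1, q2, q3; the path from the second question.
print(path_probability([0, 0, 1, 2, 2], P, pi0))  # 0.3 * a * (1-a) * b * (1-c)
```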

8. n-step Transition Probabilities

   Let $r_{ij}(n) = P(X_n = j \mid X_0 = i)$ denote the probability of being in state $j$ exactly $n$ steps after being in state $i$. The value of $r_{ij}(n)$ can be calculated recursively as

   $$r_{ij}(n) = \sum_{k \in S} r_{ik}(n-1) \, p_{kj}$$

   Observe that $r_{ij}(1) = p_{ij}$. It follows that $r_{ij}(n) = (P^n)_{ij}$, the $(i,j)$ entry of $P^n$.
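A sketch checking that the recursion agrees with the matrix power (the matrix `P` is a made-up example):

```python
import numpy as np

# Hypothetical 2-state transition matrix for illustration.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

def r(n, P):
    """All n-step transition probabilities via the recursion r(n) = r(n-1) P."""
    R = np.eye(len(P))  # r(0) is the identity: in 0 steps you stay put
    for _ in range(n):
        R = R @ P
    return R

n = 5
assert np.allclose(r(n, P), np.linalg.matrix_power(P, n))
print(r(n, P)[0, 1])  # r_{01}(5)
```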

9. Recurrent and Transient States

   ◮ Accessible: state $j$ is accessible (reachable) from state $i$ if $\exists n \in \mathbb{N}$ such that $r_{ij}(n) > 0$.
   ◮ Recurrent: a state $i$ is recurrent if every state reachable from $i$ can itself reach $i$. That is, if $A(i)$ is the set of states reachable from $i$, then $i$ is recurrent if $j \in A(i) \Rightarrow i \in A(j)$.
   ◮ Transient: a state $i$ is transient if it is not recurrent.
   ◮ Classify the states in the Markov chain below as recurrent or transient.

   Figure: $q_1$ has a self-loop with probability $a$ and moves to $q_2$ with probability $1-a$; $q_2$ has a self-loop with probability $0.5$ and moves to $q_3$ with probability $0.5$; $q_3$ moves to $q_2$ with probability $0.7$ and has a self-loop with probability $0.3$.

10. Answer

    (Same chain as on the previous slide.)

    ◮ If $a = 1$: $q_1$, $q_2$, $q_3$ are all recurrent.
    ◮ If $a < 1$: $q_1$ is transient and $q_2$, $q_3$ are recurrent.

11. Decomposition of States

    ◮ Recurrent class: for any recurrent state $i$, the set $A(i)$ of states reachable from $i$ forms a recurrent class. Any Markov chain can be decomposed into one or more recurrent classes.
    ◮ A state in a recurrent class is not reachable from states in any other recurrent class (try to prove this).
    ◮ Transient states are not reachable from a recurrent state. Moreover, from every transient state at least one recurrent state is reachable.
    ◮ Find the recurrent classes in the following Markov chain (the same chain as on slide 9).

12. Answers

    (Same chain as on slide 9.)

    ◮ If $a = 1$: $\{q_1\}$ and $\{q_2, q_3\}$ form two recurrent classes.
    ◮ If $a < 1$: $q_1$ is transient, and $\{q_2, q_3\}$ forms a recurrent class.

13. General Decomposition of States

    A Markov chain is called irreducible if all of its states form a single recurrent class. For any non-irreducible Markov chain, we can identify the recurrent classes using the following process (a code sketch follows this list):

    ◮ Create a directed edge between any two states that have a non-zero transition probability between them.
    ◮ Find the strongly connected components (SCCs) of the resulting graph.
    ◮ Topologically sort the SCCs using the transitions between different components.
    ◮ Each SCC at the bottom of the topological order, i.e. with no outgoing edges, forms a recurrent class. All states in the other components are transient.
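A minimal sketch of this procedure using SciPy's strongly connected components. The transition matrix below is a made-up three-state example (the chain from slide 9 with $a < 1$), not the five-state chain on the next slide:

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

# Hypothetical chain: q1 is transient, {q2, q3} is a recurrent class.
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.7, 0.3]])

# SCCs of the directed graph with an edge i -> j whenever p_ij > 0.
n_comp, labels = connected_components(P > 0, directed=True, connection='strong')

# An SCC is a recurrent class iff no probability mass leaves it.
recurrent_classes = []
for comp in range(n_comp):
    members = np.where(labels == comp)[0]
    others = np.setdiff1d(np.arange(len(P)), members)
    if P[np.ix_(members, others)].sum() == 0:
        recurrent_classes.append(set(members))

print(recurrent_classes)  # [{1, 2}]: {q2, q3} is recurrent, q1 is transient
```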

14. Example

    Figure: a five-state Markov chain on $q_1, \ldots, q_5$ (diagram not reproduced). Apply the procedure above to find its recurrent classes.

15. Periodicity

    Consider an irreducible Markov chain. Define

    $$d(i) := \gcd\{\, n \ge 1 \mid r_{ii}(n) > 0 \,\}$$

    ◮ Remember: $r_{ij}(n) = P(X_{t+n} = j \mid X_t = i)$.
    ◮ Intuition: "all paths back to $i$ take a multiple of $d(i)$ steps."
    ◮ Fact: $d(i)$ is the same for all $i$.
    ◮ Fact: for Markov chains with more than one recurrent class, each class has its own value of $d$.
    ◮ We define a Markov chain as aperiodic if $d(i) = 1$ for all $i$; otherwise, we say it is periodic with period $d$.
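A sketch that estimates $d(i)$ by collecting return times up to a cutoff and taking their gcd. The cutoff and the function name are my own choices, not from the slides:

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, i, max_n=None):
    """d(i) = gcd{ n >= 1 : r_ii(n) > 0 }, checking n up to a cutoff.

    For a finite chain, a cutoff of a few times the number of states
    squared suffices in practice; this is a sketch, not a tight bound.
    """
    m = len(P)
    max_n = max_n or 2 * m * m
    Pn = np.eye(m)
    return_times = []
    for n in range(1, max_n + 1):
        Pn = Pn @ P
        if Pn[i, i] > 1e-12:  # r_ii(n) > 0
            return_times.append(n)
    return reduce(gcd, return_times)

# The periodic example below: 1 -> 2; 2 -> 1 or 3 with prob. 0.5 each; 3 -> 2.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
print(period(P, 0))  # 2: every return to state 1 takes an even number of steps
```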

16. Periodicity Examples

    Are the following Markov chains aperiodic?

    Figure: a three-state chain in which $1 \to 2$ with probability 1; $2 \to 1$ and $2 \to 3$ each with probability 0.5; $3 \to 2$ with probability 1.

    ◮ $d = 2$, so this chain is periodic.

    Figure: the same chain, except that $3$ now has a self-loop with probability 0.01 and moves to $2$ with probability 0.99.

    ◮ $d = 1$, so this chain is aperiodic.
    ◮ Adding a self-loop makes an irreducible Markov chain aperiodic!

17. Periodicity Examples (continued)

    We won't particularly worry about periodicity/aperiodicity for Markov chains with more than one recurrent class.

    Figure: a three-state chain with more than one recurrent class (diagram not reproduced).

18. Stationary Distribution

    If we choose the initial state of the Markov chain according to the distribution $P(X_0 = j) = \pi_0(j)$ for all $j$, and this implies $P(X_n = j) = \pi_0(j)$ for all $j$ and all $n$, then we say that $\pi_0$ is stationary.

    The balance equations are sufficient for stationarity:

    $$\pi_0(j) = \sum_{k=1}^{m} \pi_0(k) \, p_{kj} \quad \forall j$$

    ◮ The balance equations can be written as $\pi_0 = \pi_0 P$. In linear algebra terms, $\pi_0$ is a left eigenvector of $P$ with corresponding eigenvalue $\lambda = 1$.
    ◮ In general, a Markov chain can have more than one stationary distribution.
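Since $\pi_0$ is a left eigenvector of $P$ with eigenvalue 1, one way to find it numerically is an eigendecomposition of $P^\top$. A sketch using the two-state chain from the next slide:

```python
import numpy as np

# The two-state example chain from the next slide.
P = np.array([[0.25, 0.75],
              [0.25, 0.75]])

# pi = pi P  <=>  P.T pi.T = pi.T: pick the eigenvector for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
v = eigvecs[:, np.argmin(np.abs(eigvals - 1.0))].real
pi = v / v.sum()  # normalize so the entries sum to 1

print(pi)                       # [0.25, 0.75]
print(np.allclose(pi @ P, pi))  # True: the balance equations hold
```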

19. Stationary Distribution Example

    Figure: a two-state chain with $P(X_1 = 1 \mid X_0 = 1) = 0.25$, $P(X_1 = 2 \mid X_0 = 1) = 0.75$, $P(X_1 = 1 \mid X_0 = 2) = 0.25$, and $P(X_1 = 2 \mid X_0 = 2) = 0.75$.

    Let's try $\pi_0 = [1, 0]$.

    $$\pi_1(1) = P(X_1 = 1 \mid X_0 = 1)\,\pi_0(1) + P(X_1 = 1 \mid X_0 = 2)\,\pi_0(2) = (0.25)(1) + (0.25)(0) = 0.25$$

    Similarly,

    $$\pi_1(2) = P(X_1 = 2 \mid X_0 = 1)\,\pi_0(1) + P(X_1 = 2 \mid X_0 = 2)\,\pi_0(2) = (0.75)(1) + (0.75)(0) = 0.75$$

    $\pi_1 = [0.25, 0.75] \ne \pi_0$, so $\pi_0 = [1, 0]$ is not stationary.

20. Stationary Distribution Example (continued)

    (Same chain as on the previous slide.) Let's solve for the stationary distribution. Let $\pi_0 = [x, 1-x]$.

    $$x = P(X_1 = 1 \mid X_0 = 1)\,x + P(X_1 = 1 \mid X_0 = 2)\,(1-x) = 0.25x + 0.25(1-x)$$
    $$1-x = P(X_1 = 2 \mid X_0 = 1)\,x + P(X_1 = 2 \mid X_0 = 2)\,(1-x) = 0.75x + 0.75(1-x)$$

    The second equation simplifies to $1 - x = 0.75 = 3x$, so $x = 0.25$. Our stationary distribution is $\pi_0 = [0.25, 0.75]$.
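A quick numerical check of this answer. Note that because both rows of $P$ are identical here, any initial distribution becomes stationary after a single step:

```python
import numpy as np

P = np.array([[0.25, 0.75],
              [0.25, 0.75]])
pi = np.array([0.25, 0.75])

print(np.allclose(pi @ P, pi))   # True: pi is stationary
print(np.array([1.0, 0.0]) @ P)  # [0.25, 0.75]: stationary after one step
```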

21. Probability Flow Interpretation

    Figure: probability mass flowing into state $i$ (terms $\pi(1)p_{1i}, \ldots, \pi(m)p_{mi}$) and out of state $i$ (terms $\pi(i)p_{i1}, \ldots, \pi(i)p_{im}$), with the self-loop term $\pi(i)p_{ii}$ appearing on both sides.

    ◮ For any distribution, probability mass flows in and out of every state at each step.
    ◮ By subtracting $\pi(i)p_{ii}$ from both sides of the balance equation, we have:

    $$\underbrace{\sum_{j \ne i} \pi(j)\, p_{ji}}_{\text{flow in}} = \underbrace{\sum_{j \ne i} \pi(i)\, p_{ij}}_{\text{flow out}} \quad \forall i$$

22. Example Revisited

    Figure: a two-state chain in which state 1 moves to state 2 with probability $a$ and has a self-loop with probability $1-a$; state 2 moves to state 1 with probability $b$ and has a self-loop with probability $1-b$.

    Let $\pi_0 = [x, 1-x]$. Using the flow equation

    $$\sum_{j \ne i} \pi(j)\, p_{ji} = \sum_{j \ne i} \pi(i)\, p_{ij}$$

    at state 2:

    $$xa = (1-x)b \implies x = \frac{b}{a+b}, \qquad 1-x = \frac{a}{a+b}$$
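A quick numerical check of the symbolic answer, with made-up values for $a$ and $b$:

```python
import numpy as np

# Hypothetical values; the slides leave a and b symbolic.
a, b = 0.3, 0.1

P = np.array([[1 - a, a],
              [b, 1 - b]])
pi = np.array([b / (a + b), a / (a + b)])  # the answer from the flow equation

print(np.allclose(pi @ P, pi))  # True: pi satisfies the balance equations
```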

23. The Big Theorem

    ◮ If a Markov chain is finite and irreducible, it has a unique invariant distribution $\pi$, and $\pi(i)$ is the long-run fraction of time that $X_n$ is equal to $i$, almost surely.
    ◮ If the Markov chain is also aperiodic, then the distribution of $X_n$ converges to $\pi$.
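The long-run fraction claim is easy to check by simulation. A sketch with a made-up two-state chain whose stationary distribution is $[0.25, 0.75]$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite, irreducible, aperiodic chain.
P = np.array([[0.7, 0.3],
              [0.1, 0.9]])
pi = np.array([0.25, 0.75])  # its stationary distribution: [b/(a+b), a/(a+b)]

# Simulate and record the fraction of time spent in each state.
n_steps = 200_000
counts = np.zeros(2)
state = 0
for _ in range(n_steps):
    counts[state] += 1
    state = rng.choice(2, p=P[state])

print(counts / n_steps)  # close to [0.25, 0.75], as the Big Theorem predicts
```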

24. References

    D. P. Bertsekas and J. N. Tsitsiklis, Introduction to Probability. Athena Scientific, 2002.
