

  1. Markov Chains
     Gonzalo Mateos
     Dept. of ECE and Goergen Institute for Data Science, University of Rochester
     gmateosb@ece.rochester.edu
     http://www.ece.rochester.edu/~gmateosb/
     September 23, 2020

  2. Markov chains
     ◮ Definition and examples
     ◮ Chapman-Kolmogorov equations
     ◮ Gambler’s ruin problem
     ◮ Queues in communication networks: Transition probabilities
     ◮ Classes of states

  3. Markov chains in discrete time
     ◮ Consider discrete-time index n = 0, 1, 2, ...
     ◮ Time-dependent random state X_n takes values on a countable set
     ◮ In general, states are i = 0, ±1, ±2, ..., i.e., here the state space is Z
     ◮ If X_n = i we say "the process is in state i at time n"
     ◮ Random process is X_N; its history up to n is X^n = [X_n, X_{n-1}, ..., X_0]^T
     ◮ Def: process X_N is a Markov chain (MC) if for all n ≥ 1, i, j, x ∈ Z^n
           P(X_{n+1} = j | X_n = i, X^{n-1} = x) = P(X_{n+1} = j | X_n = i) = P_ij
     ◮ Future depends only on current state X_n (memoryless, Markov property)
       ⇒ Future conditionally independent of the past, given the present
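One way to internalize the memoryless property is to simulate a small chain and check that the empirical next-state frequencies conditioned on (X_n, X_{n-1}) match those conditioned on X_n alone. A minimal Python sketch, not part of the slides; the two-state transition matrix P below is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.8, 0.2],   # illustrative 2-state transition matrix
              [0.3, 0.7]])

# Simulate a long trajectory X_0, X_1, ..., X_N
N = 100_000
X = np.empty(N + 1, dtype=int)
X[0] = 0
for n in range(N):
    X[n + 1] = rng.choice(2, p=P[X[n]])

# Empirical P(X_{n+1} = 1 | X_n = 0) vs. P(X_{n+1} = 1 | X_n = 0, X_{n-1} = k)
cur, prev, nxt = X[1:-1], X[:-2], X[2:]
cond_on_present = nxt[cur == 0].mean()
cond_on_past0 = nxt[(cur == 0) & (prev == 0)].mean()
cond_on_past1 = nxt[(cur == 0) & (prev == 1)].mean()
print(cond_on_present, cond_on_past0, cond_on_past1)   # all approx. P[0, 1] = 0.2
```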

  4. Observations
     ◮ Given X_n, history X^{n-1} irrelevant for future evolution of the process
     ◮ From the Markov property, can show that for arbitrary m > 0
           P(X_{n+m} = j | X_n = i, X^{n-1} = x) = P(X_{n+m} = j | X_n = i)
     ◮ Transition probabilities P_ij are constant (MC is time invariant)
           P(X_{n+1} = j | X_n = i) = P(X_1 = j | X_0 = i) = P_ij
     ◮ Since the P_ij's are probabilities they are non-negative and sum up to 1
           P_ij ≥ 0,   sum_{j=0}^∞ P_ij = 1
       ⇒ Conditional probabilities satisfy the axioms

  5. Matrix representation
     ◮ Group the P_ij in a transition probability "matrix" P
           P = [ P_00  P_01  P_02  ...  P_0j  ...
                 P_10  P_11  P_12  ...  P_1j  ...
                  .     .     .          .
                 P_i0  P_i1  P_i2  ...  P_ij  ...
                  .     .     .          .    ... ]
       ⇒ Not really a matrix if the number of states is infinite
     ◮ Row-wise sums should be equal to one, i.e., sum_{j=0}^∞ P_ij = 1 for all i
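For a finite state space the transition matrix is an ordinary array, and the non-negativity and row-sum conditions are easy to verify numerically. A small sketch assuming NumPy; the helper name and example matrices are illustrative, not from the slides.

```python
import numpy as np

def is_stochastic(P, tol=1e-12):
    """Check that P has non-negative entries and each row sums to one."""
    P = np.asarray(P, dtype=float)
    return bool(np.all(P >= 0) and np.allclose(P.sum(axis=1), 1.0, atol=tol))

print(is_stochastic([[0.8, 0.2], [0.3, 0.7]]))   # True: a valid transition matrix
print(is_stochastic([[0.5, 0.4], [0.3, 0.7]]))   # False: first row sums to 0.9
```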

  6. Graph representation
     ◮ A graph representation or state transition diagram is also used
       [Diagram: states ..., i-1, i, i+1, ... in a line; self-loops with weights P_{i,i};
        right arrows with weights P_{i,i+1} and left arrows with weights P_{i+1,i}]
     ◮ Useful when the number of states is infinite; skip arrows if P_ij = 0
     ◮ Again, the sum of per-state outgoing arrow weights should be one

  7. Example: Happy - Sad
     ◮ I can be happy (X_n = 0) or sad (X_n = 1)
       ⇒ My mood tomorrow is only affected by my mood today
     ◮ Model as a Markov chain with transition probabilities
           P = [ 0.8  0.2
                 0.3  0.7 ]
       [Diagram: states H and S; self-loops 0.8 (H) and 0.7 (S); H → S w.p. 0.2, S → H w.p. 0.3]
     ◮ Inertia ⇒ happy or sad today, likely to stay happy or sad tomorrow
     ◮ But when sad, a little less likely so (P_00 > P_11)
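As a sanity check of the model, one can simulate the Happy-Sad chain and look at the long-run fraction of happy days. A minimal sketch; the simulation code itself is not part of the slides.

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.8, 0.2],   # state 0 = Happy, state 1 = Sad
              [0.3, 0.7]])

days = 100_000
mood = 0                    # start happy
happy_days = 0
for _ in range(days):
    happy_days += (mood == 0)
    mood = rng.choice(2, p=P[mood])   # draw tomorrow's mood from today's row of P

# Fraction of happy days; more time spent happy, consistent with P_00 > P_11
print(happy_days / days)    # approx. 0.6
```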

  8. Example: Happy - Sad with memory
     ◮ Happiness tomorrow affected by today's and yesterday's mood
       ⇒ Not a Markov chain with the previous state space
     ◮ Define double states HH (Happy-Happy), HS (Happy-Sad), SH, SS
     ◮ Only some transitions are possible
       ◮ HH and SH can only become HH or HS
       ◮ HS and SS can only become SH or SS
           P = [ 0.8  0.2  0    0
                 0    0    0.3  0.7
                 0.8  0.2  0    0
                 0    0    0.3  0.7 ]
       [Diagram: states HH, HS, SH, SS connected by the nonzero transition probabilities above]
     ◮ Key: can capture longer time memory via state augmentation
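The state-augmentation bookkeeping generalizes: given any second-order model for tomorrow's mood, one can mechanically build the 4x4 transition matrix on pair states (yesterday, today). A hedged sketch; the dictionary q of second-order probabilities below is made up for illustration (the matrix on the slide is the special case where only today matters).

```python
import numpy as np

H, S = 0, 1
pairs = [(H, H), (H, S), (S, H), (S, S)]   # augmented states: (yesterday, today)

# Illustrative second-order model: P(happy tomorrow | yesterday, today)
q = {(H, H): 0.9, (H, S): 0.4, (S, H): 0.7, (S, S): 0.2}

P = np.zeros((4, 4))
for a, (yday, today) in enumerate(pairs):
    for b, (today_b, tomorrow) in enumerate(pairs):
        if today_b != today:               # consecutive pair states must agree on "today"
            continue
        p_happy = q[(yday, today)]
        P[a, b] = p_happy if tomorrow == H else 1 - p_happy

print(P)   # rows sum to one; zeros mark the impossible transitions
```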

  9. Random (drunkard’s) walk
     ◮ Step to the right w.p. p, to the left w.p. 1 - p
       ⇒ Not so drunk as to stay in the same place
       [Diagram: states ..., i-1, i, i+1, ...; right arrows with weight p, left arrows with weight 1 - p]
     ◮ States are 0, ±1, ±2, ... (state space is Z), infinite number of states
     ◮ Transition probabilities are P_{i,i+1} = p, P_{i,i-1} = 1 - p
     ◮ P_ij = 0 for all other transitions
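A short simulation of the drunkard's walk, one transition at a time, can reproduce sample paths like the ones shown on the next slide. A sketch assuming NumPy; function name and parameter values are illustrative.

```python
import numpy as np

def simulate_walk(p, n_steps, seed=0):
    """Simulate the +/-1 random walk: right w.p. p, left w.p. 1 - p."""
    rng = np.random.default_rng(seed)
    x = 0
    path = [x]
    for _ in range(n_steps):
        x += 1 if rng.random() < p else -1   # P_{i,i+1} = p, P_{i,i-1} = 1 - p
        path.append(x)
    return path

for p in (0.45, 0.50, 0.55):
    print(p, simulate_walk(p, 1000, seed=42)[-1])   # final position drifts with the sign of p - 1/2
```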

  10. Random (drunkard’s) walk (continued)
     ◮ Random walks behave differently if p < 1/2, p = 1/2 or p > 1/2
       [Figure: three sample paths of position (in steps) vs. time over 1000 steps, for p = 0.45, p = 0.50 and p = 0.55]
       ⇒ With p > 1/2, diverges to the right (ր almost surely)
       ⇒ With p < 1/2, diverges to the left (ց almost surely)
       ⇒ With p = 1/2, always comes back to visit the origin (almost surely)
     ◮ Because the number of states is infinite, we can have all states transient
     ◮ Transient states not revisited after some time (more later)

  11. Two dimensional random walk
     ◮ Take a step in a random direction E, W, S or N
       ⇒ E, W, S, N chosen with equal probability
     ◮ States are pairs of coordinates (X_n, Y_n)
     ◮ X_n = 0, ±1, ±2, ... and Y_n = 0, ±1, ±2, ...
     ◮ Transition probs. ≠ 0 only for adjacent points
           East:  P(X_{n+1} = i+1, Y_{n+1} = j   | X_n = i, Y_n = j) = 1/4
           West:  P(X_{n+1} = i-1, Y_{n+1} = j   | X_n = i, Y_n = j) = 1/4
           North: P(X_{n+1} = i,   Y_{n+1} = j+1 | X_n = i, Y_n = j) = 1/4
           South: P(X_{n+1} = i,   Y_{n+1} = j-1 | X_n = i, Y_n = j) = 1/4
       [Figures: two sample trajectories plotted as Latitude (North-South) vs. Longitude (East-West)]
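A corresponding sketch for the two-dimensional walk, picking one of E, W, N, S uniformly at each step; the helper below is an illustrative assumption, not from the slides.

```python
import numpy as np

def walk_2d(n_steps, seed=0):
    """Sample a 2-D lattice walk: each step is E, W, N or S with probability 1/4."""
    rng = np.random.default_rng(seed)
    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])   # E, W, N, S
    steps = moves[rng.integers(0, 4, size=n_steps)]
    return np.vstack(([0, 0], np.cumsum(steps, axis=0)))   # positions (X_n, Y_n), starting at the origin

path = walk_2d(10_000, seed=7)
print(path[-1])   # final (longitude, latitude) coordinates
```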

  12. More about random walks
     ◮ Some random facts of life for equiprobable random walks
     ◮ In one and two dimensions, the probability of returning to the origin is 1
       ⇒ Will almost surely return home
     ◮ In more than two dimensions, the probability of returning to the origin is < 1
       ⇒ In three dimensions the probability of returning to the origin is 0.34
       ⇒ Then 0.19, 0.14, 0.10, 0.08, ... for dimensions 4, 5, 6, 7, ...
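These return probabilities can also be explored empirically, by estimating the fraction of walks that revisit the origin within a finite horizon; a finite horizon only gives a lower bound on the true return probability. A rough Monte Carlo sketch, not from the slides, with arbitrary horizon and trial counts.

```python
import numpy as np

def return_rate(dim, n_steps=10_000, n_trials=2_000, seed=0):
    """Fraction of dim-dimensional walks that revisit the origin within n_steps."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_trials):
        # unit steps: pick an axis and a direction uniformly at random
        axes = rng.integers(0, dim, size=n_steps)
        signs = rng.choice([1, -1], size=n_steps)
        steps = np.zeros((n_steps, dim), dtype=int)
        steps[np.arange(n_steps), axes] = signs
        pos = np.cumsum(steps, axis=0)
        hits += np.any(np.all(pos == 0, axis=1))
    return hits / n_trials

for d in (1, 2, 3):
    print(d, return_rate(d))   # tends to 1 as the horizon grows for d = 1, 2; stays below 1 for d = 3
```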

  13. Another representation of a random walk
     ◮ Consider an i.i.d. sequence of RVs Y_N = Y_1, Y_2, ..., Y_n, ...
     ◮ Y_n takes the value ±1, with P(Y_n = 1) = p, P(Y_n = -1) = 1 - p
     ◮ Define X_0 = 0 and the cumulative sum
           X_n = sum_{k=1}^n Y_k
       ⇒ The process X_N is a random walk (same one we saw earlier)
       ⇒ Y_N are i.i.d. steps (increments) because X_n = X_{n-1} + Y_n
     ◮ Q: Can we formally establish that the random walk is a Markov chain?
     ◮ A: Since X_n = X_{n-1} + Y_n, n ≥ 1, and Y_n is independent of X^{n-1}
           P(X_n = j | X_{n-1} = i, X^{n-2} = x) = P(X_{n-1} + Y_n = j | X_{n-1} = i, X^{n-2} = x)
                                                 = P(Y_1 = j - i) := P_ij
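The cumulative-sum representation translates directly into vectorized code: draw the i.i.d. increments first, then take a running sum. A brief sketch, assuming NumPy.

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 0.5, 1000

Y = rng.choice([1, -1], size=n, p=[p, 1 - p])   # i.i.d. increments Y_1, ..., Y_n
X = np.concatenate(([0], np.cumsum(Y)))          # X_0 = 0, X_n = Y_1 + ... + Y_n

# Same recursion as before: X_n = X_{n-1} + Y_n
assert all(X[k] == X[k - 1] + Y[k - 1] for k in range(1, n + 1))
```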

  14. General result to identify Markov chains
     Theorem: Suppose Y_N = Y_1, Y_2, ..., Y_n, ... are i.i.d. and independent of X_0.
     Consider the random process X_N = X_1, X_2, ..., X_n, ... of the form
           X_n = f(X_{n-1}, Y_n),  n ≥ 1
     Then X_N is a Markov chain with transition probabilities
           P_ij = P(f(i, Y_1) = j)
     ◮ Useful result to identify Markov chains
       ⇒ Often simpler than checking the Markov property
     ◮ Proof is similar to the random walk special case, i.e., f(x, y) = x + y
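The theorem also suggests a generic simulator: supply the update map f and a sampler for the i.i.d. noise Y_n, and the generated process is a Markov chain. A sketch with illustrative function names (not from the slides), recovering the random walk via f(x, y) = x + y.

```python
import numpy as np

def simulate_markov(f, sample_y, x0, n_steps, seed=0):
    """Generate X_n = f(X_{n-1}, Y_n) with Y_n i.i.d. and independent of X_0."""
    rng = np.random.default_rng(seed)
    x = x0
    path = [x]
    for _ in range(n_steps):
        x = f(x, sample_y(rng))
        path.append(x)
    return path

# Random walk as the special case f(x, y) = x + y
p = 0.5
walk = simulate_markov(f=lambda x, y: x + y,
                       sample_y=lambda rng: 1 if rng.random() < p else -1,
                       x0=0, n_steps=1000)
print(walk[-1])
```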

  15. Random walk with boundaries (gambling)
     ◮ As a random walk, but stop moving when X_n = 0 or X_n = J
     ◮ Models a gambler that stops playing when ruined, X_n = 0
     ◮ Or when reaching the target gains X_n = J
       [Diagram: states 0, 1, ..., i-1, i, i+1, ..., J; interior states move right w.p. p and left w.p. 1 - p;
        states 0 and J have self-loops with probability 1]
     ◮ States are 0, 1, ..., J, a finite number of states
     ◮ Transition probabilities are P_{i,i+1} = p, P_{i,i-1} = 1 - p, P_00 = 1, P_JJ = 1
     ◮ P_ij = 0 for all other transitions
     ◮ States 0 and J are called absorbing. Once there, stay there forever
       ⇒ The rest are transient states. Visits stop almost surely
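The absorbing boundaries are easy to add to the simulator: keep stepping until the walk hits 0 or J. A sketch that estimates the ruin probability by Monte Carlo; the helper name and the parameter values are illustrative choices, not from the slides.

```python
import numpy as np

def gambler_is_ruined(start, J, p, rng):
    """Play one game until absorption; return True if the gambler hits 0 before J."""
    x = start
    while 0 < x < J:
        x += 1 if rng.random() < p else -1
    return x == 0

rng = np.random.default_rng(4)
start, J, p, trials = 5, 10, 0.45, 20_000
ruin = sum(gambler_is_ruined(start, J, p, rng) for _ in range(trials)) / trials
print(ruin)   # Monte Carlo estimate of the ruin probability from initial fortune 5
```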

  16. Chapman-Kolmogorov equations
     ◮ Definition and examples
     ◮ Chapman-Kolmogorov equations
     ◮ Gambler’s ruin problem
     ◮ Queues in communication networks: Transition probabilities
     ◮ Classes of states

  17. Multiple-step transition probabilities
     ◮ Q: What can be said about multiple transitions?
     ◮ Ex: Transition probabilities between two time slots
           P^2_ij = P(X_{m+2} = j | X_m = i)
       ⇒ Caution: P^2_ij is just notation, P^2_ij ≠ P_ij × P_ij
     ◮ Ex: Probabilities of X_{m+n} given X_m ⇒ n-step transition probabilities
           P^n_ij = P(X_{m+n} = j | X_m = i)
     ◮ Relation between n-, m-, and (m + n)-step transition probabilities
       ⇒ Write P^{m+n}_ij in terms of P^m_ij and P^n_ij
     ◮ All questions answered by the Chapman-Kolmogorov equations
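For a finite chain the n-step transition probabilities can be read off the n-th matrix power of P, which is what the Chapman-Kolmogorov equations developed next justify. A short numerical check on the Happy-Sad chain, using NumPy's matrix_power; the Monte Carlo comparison is an illustrative add-on, not part of the slides.

```python
import numpy as np

P = np.array([[0.8, 0.2],
              [0.3, 0.7]])

# 2-step transition probabilities: note (P @ P)[i, j] != P[i, j] ** 2 in general
P2 = np.linalg.matrix_power(P, 2)
print(P2[0, 1], P[0, 1] ** 2)    # 0.30 vs 0.04

# Monte Carlo check of P^2_01: start at state 0, take two steps, count visits to state 1
rng = np.random.default_rng(5)
trials = 100_000
count = 0
for _ in range(trials):
    x = 0
    for _ in range(2):
        x = rng.choice(2, p=P[x])
    count += (x == 1)
print(count / trials)            # approx. 0.30
```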
