Markov Chains and Pandemics
Caleb Dedmore and Brad Smith
December 12, 2016
Markov Chain Basics

\[
(\text{Initial Condition Vector}) \cdot (\text{Transition Matrix})^{t} \quad (1)
\]
or, in symbols,
\[
x^{(0)} T^{t} = x^{(t)}
\]

A Markov chain must satisfy two rules to exist:
A) Each trial leads to one of a finite set of outcomes.
B) The outcome of any trial depends at most on the outcome of the immediately preceding trial.

Theorem. For t, the number of trials, our Markov equation (1) gives the predicted outcome from the initial condition.
Initial Condition Vector

The first factor to consider in a Markov chain is the initial condition vector:
\[
\left( x_1^{(0)}, x_2^{(0)}, \ldots, x_i^{(0)} \right)
\]
The values of the vector must add up to the whole of the targeted population, that is, to one.
Figure: A basic transition diagram with three states (1, 2, and 3).
Transition Matrix

\[
\left( x_1^{(0)}, x_2^{(0)}, \ldots, x_i^{(0)} \right)
\begin{pmatrix}
T_{11} & T_{12} & \cdots & T_{1j} \\
T_{21} & T_{22} & \cdots & T_{2j} \\
\vdots & \vdots & \ddots & \vdots \\
T_{i1} & T_{i2} & \cdots & T_{ij}
\end{pmatrix}
= \left( x_1^{(1)}, x_2^{(1)}, \ldots, x_i^{(1)} \right)
\]
Markov Notation

\begin{align*}
T_{11} x_1^{(0)} + T_{21} x_2^{(0)} + \cdots + T_{i1} x_i^{(0)} &= x_1^{(1)} \\
T_{12} x_1^{(0)} + T_{22} x_2^{(0)} + \cdots + T_{i2} x_i^{(0)} &= x_2^{(1)} \\
&\ \ \vdots \\
T_{1j} x_1^{(0)} + T_{2j} x_2^{(0)} + \cdots + T_{ij} x_i^{(0)} &= x_i^{(1)}
\end{align*}
\[
\left( x_1^{(1)}, x_2^{(1)}, \ldots, x_i^{(1)} \right) = x^{(1)}
\]
We can use the same method of calculation to find the next condition vector:
\[
x^{(1)} T = x^{(2)}
\]
Markov Notation

\begin{align*}
T_{11} x_1^{(k-1)} + T_{21} x_2^{(k-1)} + \cdots + T_{i1} x_i^{(k-1)} &= x_1^{(k)} \\
T_{12} x_1^{(k-1)} + T_{22} x_2^{(k-1)} + \cdots + T_{i2} x_i^{(k-1)} &= x_2^{(k)} \\
&\ \ \vdots \\
T_{1j} x_1^{(k-1)} + T_{2j} x_2^{(k-1)} + \cdots + T_{ij} x_i^{(k-1)} &= x_i^{(k)}
\end{align*}
We now have a feasible equation for one-trial increments:
\[
x^{(k)} = x^{(k-1)} T \quad (2)
\]
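The one-step recurrence (2) is straightforward to sketch in code. The sketch below uses NumPy and a made-up two-state transition matrix (the values and the `step` helper are ours, purely for illustration); each row of the matrix sums to one.

```python
import numpy as np

# Hypothetical 2-state transition matrix, rows sum to one
T = np.array([[0.9, 0.1],
              [0.4, 0.6]])

def step(x, T):
    """Advance the condition vector by one trial: x^(k) = x^(k-1) T."""
    return x @ T

x0 = np.array([0.5, 0.5])  # initial condition vector, entries sum to one
x1 = step(x0, T)           # one trial
x2 = step(x1, T)           # two trials
print(x1, x2)
```

Because each row of T sums to one, every condition vector produced by `step` also sums to one.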
Determining Initial Condition Values

So now, for our starting initial condition vector, we have the notation below:
\[
\left( x_1^{(0)}, x_2^{(0)}, 1 - x_1^{(0)} - x_2^{(0)} \right) = \left( x_1^{(0)}, x_2^{(0)}, x_3^{(0)} \right)
\]
Let's place our values as:
\[
x^{(0)} = ( .90, .07, .03 ) \quad (3)
\]
SIR Diagram

Figure: A complete SIR diagram, with transition probabilities S→S = 0.85, S→I = 0.15, I→I = 0.12, I→R = 0.88, and R→R = 1.0.
SIR Matrix

Using the values from our diagram (Figure 2), we can create a transition matrix for calculating change within the population in our scenario. S represents susceptible, I represents infected, and R represents recovered; rows and columns are ordered S, I, R:
\[
T =
\begin{pmatrix}
.85 & .15 & .0 \\
.0 & .12 & .88 \\
.0 & .0 & 1.0
\end{pmatrix} \quad (4)
\]
Beginning Trials

We can now plug in our ICV (3) and SIR matrix (4) for our disease scenario to calculate one trial:
\[
x^{(0)} T = x^{(1)}
\]
\[
( .90, .07, .03 )
\begin{pmatrix}
.85 & .15 & .0 \\
.0 & .12 & .88 \\
.0 & .0 & 1.0
\end{pmatrix}
= x^{(1)}
\]
Beginning Trials

Calculating out our vector-matrix multiplication, we can find our condition vector for one trial, or one week of our scenario:
\begin{align*}
0.85(.90) + 0(.07) + 0(.03) &= 0.765 \\
0.15(.90) + 0.12(.07) + 0(.03) &= 0.1434 \\
0(.90) + 0.88(.07) + 1.0(.03) &= 0.0916
\end{align*}
\[
x^{(1)} = ( 0.765, 0.143, 0.092 )
\]
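The one-trial computation above can be checked numerically; a minimal sketch with NumPy, using the ICV (3) and the SIR matrix (4):

```python
import numpy as np

# SIR transition matrix (4), rows and columns ordered S, I, R
T = np.array([[0.85, 0.15, 0.00],
              [0.00, 0.12, 0.88],
              [0.00, 0.00, 1.00]])

x0 = np.array([0.90, 0.07, 0.03])  # initial condition vector (3)

x1 = x0 @ T  # one trial, i.e. one week of the scenario
print(x1)    # approximately (0.765, 0.1434, 0.0916)
```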
The Second Trial

Using equation (2), we can find our formula for two trials:
\[
x^{(2)} = x^{(2-1)} (\text{Transition Matrix}) = x^{(1)} (\text{Transition Matrix})
\]
\[
( 0.765, 0.143, 0.092 )
\begin{pmatrix}
.85 & .15 & .0 \\
.0 & .12 & .88 \\
.0 & .0 & 1.0
\end{pmatrix}
= ( 0.650, 0.132, 0.218 )
\]
Matrix Power

\[
(\text{Initial Condition Vector}) \cdot (\text{Transition Matrix})^{t}
\]
It is important to note that t is not the symbol T for transpose, but is instead the number of trials. For t = 5:
\[
( .90, .07, .03 )
\begin{pmatrix}
.85 & .15 & .0 \\
.0 & .12 & .88 \\
.0 & .0 & 1.0
\end{pmatrix}^{5}
= ( .399, .082, .519 )
\]
Matrix Power

For t = 10:
\[
( .90, .07, .03 )
\begin{pmatrix}
.85 & .15 & .0 \\
.0 & .12 & .88 \\
.0 & .0 & 1.0
\end{pmatrix}^{10}
= ( .178, .036, .786 )
\]
or, equivalently,
\[
( .90, .07, .03 )
\begin{pmatrix}
0.197 & 0.040 & 0.763 \\
.0 & 6.19 \times 10^{-10} & 1.0 \\
.0 & .0 & 1.0
\end{pmatrix}
= ( .178, .036, .786 )
\]
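The matrix powers above can be reproduced with NumPy's `numpy.linalg.matrix_power`; a sketch using the same T and initial condition vector:

```python
import numpy as np
from numpy.linalg import matrix_power

T = np.array([[0.85, 0.15, 0.00],
              [0.00, 0.12, 0.88],
              [0.00, 0.00, 1.00]])
x0 = np.array([0.90, 0.07, 0.03])

x5 = x0 @ matrix_power(T, 5)    # five trials, approximately (.399, .082, .519)
x10 = x0 @ matrix_power(T, 10)  # ten trials
print(x5, x10)

# The (I, I) entry of T^10 is 0.12**10, about 6.19e-10
print(matrix_power(T, 10)[1, 1])
```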
Steady State Vector

\[
(\text{Initial Condition Vector}) \cdot (\text{Markov Matrix})^{t} = (\text{Initial Condition Vector})
\]
or
\[
M p = p
\]
where M is the transpose of our transition matrix and p is the steady state vector, written as a column.
Steady State Vector

\begin{align*}
M p &= p \\
M p - p &= 0 \\
( M - I ) p &= 0
\end{align*}
Now, simply finding the nullspace gives us the value of p.
Steady State Vector

\[
M - I =
\begin{pmatrix}
.85 & 0 & 0 \\
.15 & .12 & 0 \\
0 & .88 & 1.0
\end{pmatrix}
-
\begin{pmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix}
=
\begin{pmatrix}
-.15 & 0 & 0 \\
.15 & -.88 & 0 \\
0 & .88 & 0
\end{pmatrix}
\]
By row reducing, we can solve for our steady state vector:
\[
p = ( 0, 0, 1 )
\]
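The steady state can also be found numerically. The sketch below takes a different route than the row reduction above: it asks NumPy for the eigenvector of M = Tᵀ with eigenvalue 1 (a steady state is exactly such an eigenvector) and normalizes it to sum to one.

```python
import numpy as np

T = np.array([[0.85, 0.15, 0.00],
              [0.00, 0.12, 0.88],
              [0.00, 0.00, 1.00]])
M = T.T  # column-stochastic form, so M p = p

eigvals, eigvecs = np.linalg.eig(M)
k = int(np.argmin(np.abs(eigvals - 1.0)))  # index of eigenvalue closest to 1
p = np.real(eigvecs[:, k])
p = p / p.sum()  # normalize so the entries sum to one
print(p)         # steady state, approximately (0, 0, 1)
```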
Taking the Limit

\[
\lim_{t \to \infty}
\begin{pmatrix}
.85 & .15 & .0 \\
.0 & .12 & .88 \\
.0 & .0 & 1.0
\end{pmatrix}^{t}
=
\begin{pmatrix}
.0 & .0 & 1.0 \\
.0 & .0 & 1.0 \\
.0 & .0 & 1.0
\end{pmatrix}
\]
so
\[
( .90, .07, .03 )
\begin{pmatrix}
.0 & .0 & 1.0 \\
.0 & .0 & 1.0 \\
.0 & .0 & 1.0
\end{pmatrix}
= ( 0, 0, 1 )
\]
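The limit can be approximated by raising T to a large power; t = 500 is effectively "infinity" for this chain, since 0.85^500 is vanishingly small:

```python
import numpy as np
from numpy.linalg import matrix_power

T = np.array([[0.85, 0.15, 0.00],
              [0.00, 0.12, 0.88],
              [0.00, 0.00, 1.00]])

Tbig = matrix_power(T, 500)  # approximates the limit matrix
print(Tbig)                  # every row is approximately (0, 0, 1)

x_inf = np.array([0.90, 0.07, 0.03]) @ Tbig
print(x_inf)                 # approximately (0, 0, 1)
```

Every row of the limit matrix is the steady state, so the long-run outcome is the same for any initial condition vector.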
SIRD

By adding a new state, D (deceased), we get the transition matrix below, with rows and columns ordered S, I, R, D:
\[
\begin{pmatrix}
.85 & .15 & .0 & .0 \\
.0 & .20 & .70 & .10 \\
.0 & .0 & 1.0 & .0 \\
.0 & .0 & .0 & 1.0
\end{pmatrix}
\]
We can rearrange this transition matrix into an IODA matrix. The form is shown below:
\[
\begin{pmatrix}
I & O \\
D & A
\end{pmatrix}
\]
IODA

By reordering the states so the absorbing states come first, our matrix is now in IODA form, with rows and columns ordered R, D, S, I:
\[
\begin{pmatrix}
1.0 & .0 & .0 & .0 \\
.0 & 1.0 & .0 & .0 \\
.0 & .0 & .85 & .15 \\
.70 & .10 & .0 & .20
\end{pmatrix}
\]
Partitioned into IODA sections:
\[
\left(
\begin{array}{cc|cc}
1.0 & .0 & .0 & .0 \\
.0 & 1.0 & .0 & .0 \\
\hline
.0 & .0 & .85 & .15 \\
.70 & .10 & .0 & .20
\end{array}
\right)
\]
Average

Now, to further manipulate our findings, we will solve $N = (I - A)^{-1}$:
\begin{align*}
(I - A)^{-1}
&= \left(
\begin{pmatrix} 1.0 & .0 \\ .0 & 1.0 \end{pmatrix}
-
\begin{pmatrix} .85 & .15 \\ .0 & .20 \end{pmatrix}
\right)^{-1} \\
&= \begin{pmatrix} .15 & -.15 \\ .0 & .80 \end{pmatrix}^{-1} \\
&= (1/.12) \begin{pmatrix} .80 & .15 \\ .0 & .15 \end{pmatrix} \\
&= \begin{pmatrix} 6.67 & 1.25 \\ 0 & 1.25 \end{pmatrix}
\end{align*}
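The fundamental matrix N can be computed directly with NumPy; a sketch using the transient block A (states S and I) from the SIRD chain:

```python
import numpy as np

# A: transient-to-transient block of the SIRD chain (states S, I)
A = np.array([[0.85, 0.15],
              [0.00, 0.20]])

# Fundamental matrix N = (I - A)^(-1)
N = np.linalg.inv(np.eye(2) - A)
print(N)  # approximately [[6.67, 1.25], [0, 1.25]]
```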
Summing the rows of N gives the expected number of trials before a member of the population enters an absorbing state, either recovered or deceased:
\[
\begin{pmatrix} 6.67 & 1.25 \\ 0 & 1.25 \end{pmatrix}
\begin{pmatrix} 1 \\ 1 \end{pmatrix}
=
\begin{pmatrix} 7.92 \\ 1.25 \end{pmatrix}
\]
Starting from susceptible, a member of the population takes an average of 7.92 trials to be absorbed; starting from infected, 1.25 trials.
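A standard result for absorbing chains is that the row sums of the fundamental matrix N give the expected number of trials before absorption from each transient state; a sketch:

```python
import numpy as np

A = np.array([[0.85, 0.15],   # S -> S, S -> I
              [0.00, 0.20]])  # I -> S, I -> I

N = np.linalg.inv(np.eye(2) - A)   # fundamental matrix
expected_trials = N.sum(axis=1)    # row sums of N
print(expected_trials)             # from S about 7.92 trials, from I 1.25
```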
Sources

David Arnold. Writing Scientific Papers in LaTeX.
David Arnold. The Leslie Matrix.
Rose-Hulman Institute of Technology. Markov Chains. https://www.rose-hulman.edu/~bryan/lottamath/transmat.pdf
Bernadette H. Perham and Arnold E. Perham. Topics in Discrete Mathematics: Markov Chain Theory.