Stochastic Processes
Markov Processes
Hamid R. Rabiee
Overview
o Markov Property
o Markov Chains
  o Definition
  o Stationary Property
o Paths in Markov Chains
o Classification of States
o Steady States in MCs
Markov Property
• A discrete process has the Markov property if, given its value at time t, the value at time t+1 is independent of the values at times before t. That is:
  Pr(X_{t+1} = y_{t+1} | X_t = y_t, X_{t-1} = y_{t-1}, ..., X_1 = y_1) = Pr(X_{t+1} = y_{t+1} | X_t = y_t)
  for all t and all values y_1, ..., y_{t+1}.
Stationary Property
• A Markov process is called stationary if
  Pr(X_{t+1} = v | X_t = u) = Pr(X_1 = v | X_0 = u)  for all t.
• The evolution of a stationary process does not change over time.
• To define the complete joint distribution of a stationary Markov process, it is sufficient to specify Pr(X_1 = v | X_0 = u) and Pr(X_0 = u) for all u and v.
• We will mainly consider stationary Markov processes here.
Example (Coin Tossing Game)
• Consider a single-player game in which at every step a biased coin is tossed and, according to the result, the score is increased or decreased by one point.
• The game ends if the score reaches either 100 (winning) or -100 (losing).
• The score of the player at each step t ≥ 0 is a random variable, and the sequence of scores as the game progresses forms a random process X_0, X_1, ..., X_t.
Example (Coin Tossing Game)
• It is easy to verify that X is a stationary Markov chain: at the end of each step the score depends solely on the current score s' and on the result of tossing the coin (which is independent of time and of previous tosses).
• Stating this mathematically (for s' ∉ {-100, 100}):
  Pr(X_{t+1} = s | X_t = s', X_{t-1} = s_{t-1}, ..., X_0 = 0)
    = p        if s = s' + 1
    = 1 - p    if s = s' - 1
    = 0        otherwise
  which is independent of t and of s_0, ..., s_{t-1}, so it equals
  Pr(X_{t+1} = s | X_t = s') = Pr(X_1 = s | X_0 = s').
• If the value of p were subject to change over time, the process would not be stationary; in the formulation we would have p_t instead of p. A small simulation sketch follows.
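Below is a minimal Python sketch of this game; the function name and the parameter values are illustrative, not from the slides:

```python
import random

def play_game(p, target=100):
    """Simulate the coin-tossing game: +1 with probability p, -1 otherwise.
    Returns the final score: +target for a win, -target for a loss."""
    score = 0
    while abs(score) < target:
        score += 1 if random.random() < p else -1
    return score

# Estimate the winning probability for a slightly biased coin.
games = 1000
wins = sum(play_game(p=0.51) == 100 for _ in range(games))
print("estimated Pr(win) =", wins / games)
```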
Example (Tracking)
• Assume we want to track a target in the XY plane, so we need to model its movement.
• Assuming that the current position and speed of the target describe everything about its future movement, we can consider the 4-dimensional state:
  X_t = (x_t, y_t, ẋ_t, ẏ_t)
• It is common to consider a linear relation between X_t and X_{t-1} with additive Gaussian noise:
  X_t = B_t X_{t-1} + w_t ;  w_t ~ N(0, Σ)
  or equivalently:
  P_{X_t | X_{t-1}}(s_t | s_{t-1}) = N(s_t; B_t s_{t-1}, Σ)
• There exist efficient algorithms for working with these linear-Gaussian models.
Example (Tracking)
• Considering the kinematic relations of a moving object, we can define the linear relation as:
  B_t = [ 1  0  Δt  0
          0  1  0   Δt
          0  0  1   0
          0  0  0   1 ]
  which is independent of t (stationary).
• This approach cannot model acceleration: the speed changes only because of the noise.
• We can model arbitrary speed changes by appropriately setting the 3rd and 4th rows of the matrix at each time step; this is an example of a non-stationary Markov process.
• Another approach is to extend the state to 6 dimensions, adding ẍ_t and ÿ_t. A simulation sketch of the 4-dimensional model follows.
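A minimal NumPy sketch of this constant-velocity model; the time step Δt, the noise covariance Σ, and the initial state are illustrative assumptions:

```python
import numpy as np

dt = 0.1                               # assumed time step
B = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)

Sigma = 0.01 * np.eye(4)               # assumed noise covariance
rng = np.random.default_rng(0)

x = np.array([0.0, 0.0, 1.0, 0.5])     # initial (x, y, x_dot, y_dot)
trajectory = [x]
for _ in range(50):
    # X_t = B X_{t-1} + w_t,  w_t ~ N(0, Sigma)
    x = B @ x + rng.multivariate_normal(np.zeros(4), Sigma)
    trajectory.append(x)
```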
Example (Speech)
• A basic model for a speech signal considers the value at time t to be a linear combination of its d previous values plus additive Gaussian noise:
  X_t = Σ_{i=1}^{d} α_i X_{t-i} + w_t ;  w_t ~ N(0, Σ)
• X_t is not a Markov process.
Example (Speech)
• We can obtain a stationary Markov process by considering the d-dimensional state U_t = (X_t, X_{t-1}, ..., X_{t-d+1})^T:
  U_t = A U_{t-1} + w_t C
  where
  A = [ α_1  α_2  ...  α_d
        1    0    ...  0
        0    1    ...  0
        ...       ⋱
        0    ...  1    0 ] ,   C = (1, 0, ..., 0)^T
• Equivalently:
  P_{U_t | U_{t-1}}(u_t | u_{t-1}) = N((u_t)_1; (A u_{t-1})_1, Σ) · I(∀ 1 ≤ i ≤ d-1 : (u_t)_{i+1} = (u_{t-1})_i)
  where (x)_i is the i-th dimension of vector x and I is the indicator function (used to guarantee consistency).
• Note that X_t is repeated in d consecutive states of U, and there must be no inconsistency between those values. A sketch of this construction follows.
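A small sketch of this state-space construction; the AR(3) coefficients and the unit noise variance are assumptions for illustration:

```python
import numpy as np

def companion(alpha):
    """Companion matrix A of the AR(d) recursion X_t = sum_i alpha_i X_{t-i} + w_t."""
    d = len(alpha)
    A = np.zeros((d, d))
    A[0, :] = alpha                  # first row carries the AR coefficients
    A[1:, :-1] = np.eye(d - 1)       # sub-diagonal of ones shifts the history down
    return A

alpha = [0.5, 0.3, 0.1]              # assumed AR(3) coefficients
A = companion(alpha)
C = np.zeros(len(alpha)); C[0] = 1.0 # noise enters only the first coordinate

rng = np.random.default_rng(0)
u = np.zeros(len(alpha))             # state U_t = (X_t, X_{t-1}, X_{t-2})
signal = []
for _ in range(100):
    u = A @ u + rng.normal(0.0, 1.0) * C   # U_t = A U_{t-1} + w_t C
    signal.append(u[0])                     # (U_t)_1 = X_t
```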
Markov Process Types
• There are two types of Markov processes, based on the domain of the X_t values:
  • Discrete
  • Continuous
• Discrete Markov processes are called "Markov Chains" (MCs).
• In this session we will focus on stationary MCs.
Transition Matrix
• According to the Markov property and the stationary property, at each time step the process moves according to a fixed transition matrix:
  P(X_{t+1} = j | X_t = i) = P_{ij}
• Stochastic matrix: rows sum to 1.
• Doubly stochastic matrix: rows and columns sum to 1. (A checking sketch follows.)
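A small helper sketch for these two checks (function names and the example matrix are illustrative):

```python
import numpy as np

def is_stochastic(P, tol=1e-9):
    """True if P is a valid transition matrix: non-negative rows summing to 1."""
    P = np.asarray(P)
    return bool(np.all(P >= 0) and np.allclose(P.sum(axis=1), 1.0, atol=tol))

def is_doubly_stochastic(P, tol=1e-9):
    """True if both rows and columns of P sum to 1."""
    return is_stochastic(P) and bool(np.allclose(np.asarray(P).sum(axis=0), 1.0, atol=tol))

P = [[0.9, 0.1],
     [0.4, 0.6]]
print(is_stochastic(P))          # True
print(is_doubly_stochastic(P))   # False (columns sum to 1.3 and 0.7)
```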
State Graph
• It is convenient to visualize a stationary Markov chain with a transition diagram:
  • A node represents a possible value of X_t. At each time t, we say the process is in state s if X_t = s.
  • Each edge represents the probability of going from one state to another (we omit edges with zero weight).
• We should also specify the vector of initial probabilities π = (π_1, ..., π_n), where π_i = Pr(X_0 = i).
• So a stationary discrete process can be described as a person walking randomly on a graph, where each step depends only on the state he is currently in. The resulting path is called a "Random Walk".
Example
• The transition diagram of the coin tossing game:
  [Figure: a chain of states -100, -99, -98, ..., 99, 100; each interior state moves up with probability p and down with probability 1-p; states -100 and 100 have self-loops with probability 1.]
• We modeled winning and losing by states that, once entered, are never left.
• Note that if the process were not stationary, we would not be able to visualize it this way: for example, consider the case where p changes over time.
Example (Modeling Weather)
• Each day is sunny or rainy. If a day is rainy, the next day is rainy with probability α (and sunny with probability 1 - α). If a day is sunny, the next day is rainy with probability β (and sunny with probability 1 - β).
  S = {rainy, sunny},  P = [ α  1-α
                             β  1-β ]
  [Figure: two-state transition diagram between R and S, with self-loops α and 1-β and cross edges 1-α and β.]
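A short sketch that samples a week of weather from this chain, with assumed values for α and β:

```python
import numpy as np

alpha, beta = 0.7, 0.3                 # assumed transition probabilities
P = np.array([[alpha, 1 - alpha],      # row 0: rainy -> (rainy, sunny)
              [beta,  1 - beta]])      # row 1: sunny -> (rainy, sunny)
states = ["rainy", "sunny"]

rng = np.random.default_rng(0)
s = 1                                  # start on a sunny day
path = [states[s]]
for _ in range(7):
    s = rng.choice(2, p=P[s])          # next state drawn from row s of P
    path.append(states[s])
print(" -> ".join(path))
```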
Paths
• If the state space is {0, 1} we have:
  Pr(X_2 = 0 | X_0 = 0) = Pr(X_1 = 1 | X_0 = 0) × Pr(X_2 = 0 | X_1 = 1)
                        + Pr(X_1 = 0 | X_0 = 0) × Pr(X_2 = 0 | X_1 = 0)
• Define P_{ij}^{(n)} as the probability that, starting from state i, the process is in state j after n time steps. Then:
  P_{ij}^{(2)} = Σ_k P_{ik} P_{kj}
Paths
• Theorem:
  P_{ij}^{(n)} = Σ_k P_{ik} P_{kj}^{(n-1)}
• To simplify the notation, we define the matrix P^{(n)} whose value at the i-th row and j-th column is P_{ij}^{(n)}.
• Corollary 1: P^{(n)} can be calculated by P^{(n)} = P^n.
• Corollary 2: If the process starts at time 0 with distribution π on the states, then after n steps the distribution is πP^n. (See the numeric sketch below.)
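A quick numeric check of both corollaries, reusing the weather matrix with the same assumed values:

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.3, 0.7]])
pi0 = np.array([0.0, 1.0])             # start in state "sunny" with probability 1

# Corollary 1: the n-step transition matrix is the n-th matrix power of P.
P5 = np.linalg.matrix_power(P, 5)

# Corollary 2: the distribution after n steps is pi P^n.
pi5 = pi0 @ P5
print(P5)
print(pi5)                             # tends to (0.5, 0.5) as n grows
```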
Absorbing Markov Chain
• An absorbing state is one in which the probability that the process remains in that state once it enters is 1 (i.e., P_{ii} = 1).
• A Markov chain is absorbing if it has at least one absorbing state, and if from every state it is possible to reach an absorbing state (not necessarily in one step).
• An absorbing Markov chain will eventually enter one of the absorbing states and never leave it.
• Example: the 100 and -100 states in the coin tossing game.
• Note: after playing long enough, the player will either win or lose (with probability 1).
  [Figure: the transition diagram of the coin tossing game, with absorbing states -100 and 100.]
Absorption Theorem
• In an absorbing Markov chain, the probability that the process will be absorbed is 1.
• Proof: From each non-absorbing state it is possible to reach an absorbing state. Therefore there exist m and p < 1 such that the probability of not being absorbed after m steps is at most p, after 2m steps at most p^2, etc. Since the probability of not being absorbed is monotonically decreasing, we have:
  lim_{n→∞} Pr(not absorbed after n steps) = 0
• A numerical illustration follows.
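A numerical sketch of the theorem on a shrunken version of the coin game (five states with absorbing endpoints; the size and p are illustrative):

```python
import numpy as np

# States 0..4 model scores -2..2; states 0 and 4 are absorbing.
p = 0.5
P = np.zeros((5, 5))
P[0, 0] = P[4, 4] = 1.0                # absorbing states keep all their mass
for i in (1, 2, 3):
    P[i, i + 1] = p                    # win a point
    P[i, i - 1] = 1 - p                # lose a point

pi = np.zeros(5); pi[2] = 1.0          # start in the middle (score 0)
for n in (1, 5, 20, 100):
    dist = pi @ np.linalg.matrix_power(P, n)
    not_absorbed = dist[1:4].sum()     # probability mass still on transient states
    print(n, not_absorbed)             # decreases toward 0, as the theorem claims
```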
Classification of States
• Accessibility: State j is said to be accessible from state i if, starting in i, it is possible that the process will ever enter state j: (P^n)_{ij} > 0 for some n ≥ 0.
• Communication: Two states i and j that are accessible from each other are said to communicate.
• Every state communicates with itself:
  P_{ii}^{(0)} = Pr(X_0 = i | X_0 = i) = 1
• Communication is an equivalence relation: it divides the state space into a number of separate classes within which every pair of states communicate. (Why?)
• The Markov chain is said to be irreducible if it has only one class. (A sketch for computing the classes follows.)
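A small sketch that computes the communicating classes from a transition matrix via transitive closure of the reachability relation (the function name and example matrix are illustrative):

```python
import numpy as np

def communicating_classes(P):
    """Group states into classes where i ~ j iff each is accessible
    from the other, i.e. (P^n)_ij > 0 for some n and vice versa."""
    P = np.asarray(P)
    n = len(P)
    # reach[i, j] = True iff j is accessible from i in zero or more steps
    reach = np.eye(n, dtype=bool) | (P > 0)
    for k in range(n):                       # Floyd-Warshall-style closure
        reach |= reach[:, k:k+1] & reach[k:k+1, :]
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = {j for j in range(n) if reach[i, j] and reach[j, i]}
            classes.append(sorted(cls))
            seen |= cls
    return classes

P = [[0.5, 0.5, 0.0],
     [0.5, 0.5, 0.0],
     [0.3, 0.3, 0.4]]
print(communicating_classes(P))   # [[0, 1], [2]] -> the chain is not irreducible
```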