  1. Markov Chains. CS70 Summer 2016, Lecture 6B. David Dinh, 26 July 2016, UC Berkeley.

  2. Agenda. Quiz is out! Due: Friday at noon. What are Markov chains? State machine and matrix representations. Hitting time.

  3. Motivation. Suppose we flip a coin until we get three heads in a row. How many coin flips should we expect to do? Drunkard on an arbitrary graph (remember HW?). When does the drunkard come home? Try solving directly? Problem: the conditioning gets really messy. We need some way to express state. Solution: Markov chains! (A quick simulation of the coin-flip question follows below.)
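The coin-flip question is easy to sanity-check by simulation before we develop the hitting-time machinery. Below is a minimal Monte Carlo sketch, not from the lecture itself; the function name and trial count are our own choices. For a fair coin the exact expected number of flips is 2 + 4 + 8 = 14, and the estimate should land close to that.

```python
import random

def flips_until_three_heads(p=0.5):
    """Flip a coin with heads-probability p until three heads appear
    in a row; return the total number of flips used."""
    streak, flips = 0, 0
    while streak < 3:
        flips += 1
        if random.random() < p:
            streak += 1
        else:
            streak = 0
    return flips

# Monte Carlo estimate of the expected number of flips.
# For a fair coin the exact answer is 2 + 4 + 8 = 14.
trials = 100_000
print(sum(flips_until_three_heads() for _ in range(trials)) / trials)
```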

  4. Intuition. A finite Markov chain consists of states, transition probabilities between states, and an initial distribution. State: where are you now? Transition probability: from where you are, where do you go next? Initial distribution: how do you start? Markov chains are memoryless: they don't remember anything other than what state they are in.

  5. Formally Speaking... A finite Markov chain consists of:
     • A finite set of states: X = {1, 2, ..., K}.
     • An initial probability distribution π_0 on X: π_0(i) ≥ 0, ∑_i π_0(i) = 1.
     • Transition probabilities P(i, j) for i, j ∈ X: P(i, j) ≥ 0, ∀ i, j; ∑_j P(i, j) = 1, ∀ i.
     The chain {X_n, n ≥ 0} is defined so that:
     • Pr[X_0 = i] = π_0(i), i ∈ X (initial distribution).
     • Pr[X_{n+1} = j | X_0, ..., X_n = i] = P(i, j), i, j ∈ X.
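To make the definition concrete, here is a small sketch, not part of the original slides: the three-state transition matrix and the starting distribution are invented for illustration. It checks the conditions in the definition and computes the distribution of X_n via repeated multiplication by P, i.e. π_n = π_0 P^n, which is the matrix representation mentioned in the agenda.

```python
import numpy as np

# Hypothetical 3-state chain on X = {1, 2, 3} (indexed 0..2 in code).
# Row i of P is the distribution of the next state given we are in i:
# P[i, j] = Pr[X_{n+1} = j | X_n = i].
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

pi0 = np.array([1.0, 0.0, 0.0])  # initial distribution: start in state 0

# Sanity checks straight from the definition.
assert np.all(P >= 0) and np.allclose(P.sum(axis=1), 1.0)
assert np.all(pi0 >= 0) and np.isclose(pi0.sum(), 1.0)

# The distribution of X_n evolves as pi_{n+1} = pi_n P (row vector
# times matrix), so after n steps pi_n = pi_0 P^n.
pi_n = pi0
for _ in range(10):
    pi_n = pi_n @ P
print(pi_n)  # Pr[X_10 = j] for each state j
```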

  6. One Small (Time)step for a State. At each timestep t we are in some state X_t ∈ X (a random variable). Where do we go next? Pr[X_{t+1} = j | X_t = i] = P(i, j). The probability depends on the previous state, but is independent of how we got to the previous state. (It is not independent of the states before the previous state; rather, any dependence on them is captured in the previous state.)
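A one-line sampler makes the memorylessness explicit: to draw X_{t+1} we consult only the current state's row of P, never the earlier history. This is again a hypothetical sketch, reusing the invented matrix from the previous example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reusing the invented transition matrix from the sketch above.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

def step(i, P, rng):
    """Sample X_{t+1} given X_t = i: draw the next state from row i
    of P. Nothing about the earlier history enters -- that is the
    Markov property."""
    return rng.choice(len(P), p=P[i])

x = 0            # X_0 = 0
path = [x]
for _ in range(20):
    x = step(x, P, rng)  # only the current state x is consulted
    path.append(x)
print(path)      # one sampled trajectory X_0, ..., X_20
```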
