Identification of weak lumpability in Markov chains


1. Identification of weak lumpability in Markov chains, with application to Markov partitions. Martin Nilsson Jacobi

2. Projections and the Markov property

Macro level: $\tilde{s}_t \to \tilde{s}_{t+1} \to \tilde{s}_{t+2}$ under $\tilde{P}$.
Micro level: $s_t \to s_{t+1} \to s_{t+2}$ under $P$, with the projection $\pi$ mapping each micro state $s_t$ to a macro state $\tilde{s}_t$.

The projected process is Markovian when $p(\tilde{s}_{t+2} \mid \tilde{s}_{t+1}, \tilde{s}_t)$ is independent of $\tilde{s}_t$.

3. Markov lumping (aggregation)

Aggregation of states 2 and 3 at the micro level into one state at the macro level (rows = aggregates, columns = micro variables/states):
$$\pi = \begin{pmatrix} 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
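
A minimal sketch (not from the slides) of building such an aggregation matrix from a partition of the micro states; the helper name and the 0-indexed partition notation are illustrative assumptions.

```python
# Sketch (not from the slides): build an aggregation matrix pi from a partition
# of the micro states.
import numpy as np

def aggregation_matrix(partition, n):
    """Rows = macro states (aggregates), columns = micro states (0-indexed)."""
    pi = np.zeros((len(partition), n))
    for k, block in enumerate(partition):
        pi[k, block] = 1.0
    return pi

# the aggregation on this slide: {2,3}, {1}, {4} in the slide's 1-indexed notation
print(aggregation_matrix([[1, 2], [0], [3]], 4))
```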

4. The lumped (aggregated) transition matrix, weighted with the stationary distribution $\rho$:
$$\tilde{P}_{KL} = \sum_{i \in L} \sum_{j \in K} \frac{\rho_i}{\sum_{m \in L} \rho_m} \, P_{j \leftarrow i}$$
In matrix form:
$$\tilde{P} = \pi P \pi^+, \qquad \pi^+ = D^{1/2} \left( \pi D^{1/2} \right)^\dagger, \qquad D_{ii} = \rho_i,$$
where $A^\dagger = A^T (A A^T)^{-1}$ is the (Moore-Penrose) pseudoinverse.
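
A minimal NumPy sketch of this construction (not from the slides). The 3-state chain is the example used on slide 10 (with 1/6 and 5/6 written exactly), and aggregating states 2 and 3 is an assumption for illustration.

```python
# Sketch (not from the slides): the lumped matrix P~ = pi P pi^+, with the
# weighted pseudoinverse pi^+ = D^{1/2} (pi D^{1/2})^+ and D = diag(rho).
import numpy as np

P = np.array([[0.25, 0.0, 0.875],      # column-stochastic: x_{t+1} = P x_t
              [0.25, 1/6, 0.125],
              [0.5,  5/6, 0.0  ]])
pi = np.array([[1.0, 0.0, 0.0],        # aggregate micro states 2 and 3
               [0.0, 1.0, 1.0]])

# stationary distribution rho: right eigenvector of P for eigenvalue 1
w, V = np.linalg.eig(P)
rho = np.real(V[:, np.argmin(np.abs(w - 1.0))])
rho /= rho.sum()

D_half = np.diag(np.sqrt(rho))
pi_plus = D_half @ np.linalg.pinv(pi @ D_half)   # weighted pseudoinverse
P_tilde = pi @ P @ pi_plus                       # lumped transition matrix

print(P_tilde)
print(P_tilde.sum(axis=0))                       # columns sum to 1
```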

5. The Markov condition for the projected chain, $p(\tilde{s}_{t+2} \mid \tilde{s}_{t+1}, \tilde{s}_t)$ independent of $\tilde{s}_t$, means that two-step macro transitions can be obtained either by lumping $P^2$ or by iterating $\tilde{P}$:
$$\tilde{P}^2 = \begin{cases} \pi P P \pi^+ \\ \pi P \pi^+ \, \pi P \pi^+ \end{cases}$$
so lumpability requires $\pi P \pi^+ \, \pi P \pi^+ = \pi P P \pi^+$.

6. The requirement $\pi P \pi^+ \, \pi P \pi^+ = \pi P P \pi^+$ is satisfied if either

$\pi P \pi^+ \pi = \pi P$, i.e. $\tilde{P} \pi = \pi P$ (strong lumpability), or
$\pi^+ \pi P \pi^+ = P \pi^+$, i.e. $\pi^+ \tilde{P} = P \pi^+$ (weak lumpability).

Either of these equations is sufficient but not necessary for weak lumpability.
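
A quick numerical check of these equations (a self-contained sketch, not from the slides, repeating the setup from the previous snippet); the 3-state example and the {{1},{2,3}} aggregation are again assumptions for illustration.

```python
# Sketch (not from the slides): test the lumpability requirement and the two
# sufficient conditions for the 3-state example with aggregation {{1},{2,3}}.
import numpy as np

P = np.array([[0.25, 0.0, 0.875],
              [0.25, 1/6, 0.125],
              [0.5,  5/6, 0.0  ]])
pi = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 1.0]])

w, V = np.linalg.eig(P)
rho = np.real(V[:, np.argmin(np.abs(w - 1.0))])
rho /= rho.sum()
D_half = np.diag(np.sqrt(rho))
pi_plus = D_half @ np.linalg.pinv(pi @ D_half)
P_tilde = pi @ P @ pi_plus

print(np.allclose(P_tilde @ P_tilde, pi @ P @ P @ pi_plus))  # lumpability requirement
print(np.allclose(pi_plus @ P_tilde, P @ pi_plus))           # weak condition
print(np.allclose(P_tilde @ pi, pi @ P))                     # strong condition
```

For this example the first two checks pass and the strong condition fails, matching the conclusions on slide 10.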

7. Invariance conditions: interpretation of the conditions.

Strong lumping, $\tilde{P} \pi = \pi P$: the row space of $\pi$ is invariant under $P^T$.
Weak lumping, $\pi^+ \tilde{P} = P \pi^+$: the column space of $\pi^+$ is invariant under $P$.

Invariance typically means eigenvectors.

8. Strong lumping

Dynamics $x_{t+1} = P x_t$, aggregated variables $y = \Pi x$, with e.g.
$$\Pi = \begin{pmatrix} 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
(rows = aggregates, columns = variables/states). The row space of $\Pi$ must be spanned by eigenvectors of $P^T$. A linear combination of the rows has the form $(a\; b\; b\; c)$, so: search for eigenvectors of $P^T$ with constant level structure! #levels = #aggregates = #eigenvectors with that level structure.
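
A sketch (not from the slides) of the inspection step behind this search: compute the eigenvectors of $P^T$ and group their entries into approximately equal levels. The grouping tolerance is an ad hoc choice, and the 3-state matrix from slide 10 is used as the test case.

```python
# Sketch (not from the slides): inspect the level structure of the eigenvectors
# of P^T (left eigenvectors of P).
import numpy as np

def levels(v, tol=1e-8):
    """Group indices of v into blocks of (approximately) equal entries."""
    order = np.argsort(v)
    blocks = [[order[0]]]
    for i in order[1:]:
        if abs(v[i] - v[blocks[-1][-1]]) < tol:
            blocks[-1].append(i)
        else:
            blocks.append([i])
    return blocks

P = np.array([[0.25, 0.0, 0.875],
              [0.25, 1/6, 0.125],
              [0.5,  5/6, 0.0  ]])

w, V = np.linalg.eig(P.T)            # columns of V are eigenvectors of P^T
for lam, u in zip(w, V.T):
    u = np.real_if_close(u)
    if np.iscomplexobj(u):
        continue                     # this simple sketch skips complex pairs
    print(np.round(lam, 4), levels(u))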

9. Weak lumping

The column space of $\pi^+$ must be spanned by eigenvectors of $P$, e.g.
$$\pi^+ = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \frac{\rho_2}{\rho_2 + \rho_3} & 0 \\ 0 & \frac{\rho_3}{\rho_2 + \rho_3} & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
Take right eigenvectors of $P$, say $u$, and look for level structure in the vector $v_i = u_i / \rho_i$. #levels = #aggregates = #eigenvectors with that level structure.
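
A sketch (not from the slides) of this test, applied to the example on the next slide; the normalization used to display the rescaled vectors is an arbitrary choice, and the example happens to have a real spectrum.

```python
# Sketch (not from the slides): right eigenvectors of P divided componentwise by
# the stationary distribution rho; repeated values reveal candidate aggregates.
import numpy as np

P = np.array([[0.25, 0.0, 0.875],
              [0.25, 1/6, 0.125],
              [0.5,  5/6, 0.0  ]])

w, V = np.linalg.eig(P)
rho = np.real(V[:, np.argmin(np.abs(w - 1.0))])
rho /= rho.sum()

for lam, u in zip(w, V.T):
    v = np.real(u) / rho                  # level structure in u_i / rho_i
    print(np.round(lam, 4), np.round(v / np.max(np.abs(v)), 4))
```

Two of the three rescaled vectors are constant on states 2 and 3, which is the {{1},{2,3}} weak lumping reported on the next slide.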

10. Examples

$$P = \begin{pmatrix} 0.25 & 0 & 0.875 \\ 0.25 & 0.166667 & 0.125 \\ 0.5 & 0.833333 & 0 \end{pmatrix}$$

Right eigenvectors of $P$ and the rescaled vectors $u_i/\rho_i$:

u = (-0.721995, -0.309426, -0.618853),  u/ρ ∝ (1., 1., 1.)
u = (-0.801784, 0.267261, 0.534522),    u/ρ ∝ (1.11051, -0.863731, -0.863731)
u = (0.813733, -0.348743, -0.464991),   u/ρ ∝ (-1.12706, 1.12706, 0.751375)

Weak lumping: {{1},{2,3}}.

Left eigenvectors of $P$:

(-0.57735, -0.57735, -0.57735)
(-0.0733017, -0.855186, 0.513112)
(0., -0.894427, 0.447214)

No strong lumping.

11. Markov partitions

The same projection diagram: micro level $s_t \to s_{t+1} \to s_{t+2}$ under $P$, macro level $\tilde{s}_t \to \tilde{s}_{t+1} \to \tilde{s}_{t+2}$ under $\tilde{P}$, with $p(\tilde{s}_{t+2} \mid \tilde{s}_{t+1}, \tilde{s}_t)$ independent of $\tilde{s}_t$.

A Markov partition means weak lumpability weighted with the stationary distribution.

12. Look at the tent map

[Figure: the tent map on the unit interval.]

Partition the interval into n bins and make a transition matrix.
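
A sketch (not from the slides) of one way to build that matrix: sub-sample each bin, apply the tent map, and histogram the images. The sampling resolution is arbitrary; an exact piecewise-linear construction gives the same entries. Rows index the source bin here, matching the layout on the next two slides.

```python
# Sketch (not from the slides): transition matrix of the tent map on n bins,
# estimated by sub-sampling each bin. Rows index the source bin.
import numpy as np

def tent_matrix(n, samples=1000):
    P = np.zeros((n, n))
    for i in range(n):                                  # bin i covers [i/n, (i+1)/n]
        x = (i + (np.arange(samples) + 0.5) / samples) / n
        y = np.where(x < 0.5, 2 * x, 2 - 2 * x)         # tent map
        j = np.minimum((y * n).astype(int), n - 1)      # destination bins
        np.add.at(P, (i, j), 1.0 / samples)
    return P

print(np.round(tent_matrix(10), 3))   # matches the 10-bin matrix on the next slide
print(np.round(tent_matrix(9), 3))    # and the 9-bin matrix after that
```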

13. The transition matrix, 10 bins (rows and columns indexed by bins 1 to 10):

$$P = \begin{pmatrix}
0.5 & 0.5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0.5 & 0.5 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0.5 & 0.5 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0.5 & 0.5 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.5 & 0.5 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.5 & 0.5 \\
0 & 0 & 0 & 0 & 0 & 0 & 0.5 & 0.5 & 0 & 0 \\
0 & 0 & 0 & 0 & 0.5 & 0.5 & 0 & 0 & 0 & 0 \\
0 & 0 & 0.5 & 0.5 & 0 & 0 & 0 & 0 & 0 & 0 \\
0.5 & 0.5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}$$

14. The transition matrix, 9 bins (rows and columns indexed by bins 1 to 9):

$$P = \begin{pmatrix}
0.5 & 0.5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0.5 & 0.5 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0.5 & 0.5 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0.5 & 0.5 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0.5 & 0.5 & 0 \\
0 & 0 & 0 & 0 & 0.5 & 0.5 & 0 & 0 & 0 \\
0 & 0 & 0.5 & 0.5 & 0 & 0 & 0 & 0 & 0 \\
0.5 & 0.5 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}$$

15. Invariance, 10 bins

[Figure: the 1/2-cut and 2/3-cut vectors on the 10-bin chain, before and after applying P.]

16. Invariance, 9 bins

[Figure: the same 1/2-cut and 2/3-cut plots for the 9-bin chain, before and after applying P.]

17. Eigenvalues, problems

1/2-cut, 2/3-cut (10 bins): eigenvalues $\left(1, \tfrac{i}{2}, -\tfrac{i}{2}, 0, 0, 0, 0, 0, 0, 0\right)$.

High degree of degeneracy! Eigenvectors are not unique. Eigenvectors with level structure are hidden as linear combinations of eigenvectors corresponding to degenerate eigenvalues. How do we find them?

18. Possible solution? A linear programming problem.

Idea: the aggregate boundaries are level sets. Maximize $c \cdot s$ subject to the constraint $(P - \lambda \mathbb{1}) s = 0$.

Very much work in progress...
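
A sketch (not from the slides) of how such a linear program might be set up with SciPy for the 10-bin tent-map matrix, written with columns indexing the source bin (the $x_{t+1} = P x_t$ convention used earlier in the talk). The choice $\lambda = 0$ (the degenerate eigenvalue), the box bounds that keep the LP bounded, and the particular objective vector $c$ are all illustrative assumptions rather than the formulation from the talk.

```python
# Sketch (not from the slides): pick a vector s in the eigenspace of a degenerate
# eigenvalue (here lambda = 0) by maximizing c . s over the box [-1, 1]^n.
import numpy as np
from scipy.optimize import linprog

n = 10
P = np.zeros((n, n))
for i in range(n):                       # column i = source bin i of the tent map
    j = 2 * i if i < n // 2 else 2 * (n - 1 - i)
    P[j, i] = P[j + 1, i] = 0.5

lam = 0.0
c = np.arange(n, 0, -1, dtype=float)     # weight bins by position (arbitrary choice)
res = linprog(-c,                        # linprog minimizes, so negate to maximize c . s
              A_eq=P - lam * np.eye(n),
              b_eq=np.zeros(n),
              bounds=[(-1.0, 1.0)] * n)
print(np.round(res.x, 6))                # a step-like vector in the lambda = 0 eigenspace
```

With these particular choices the optimizer returns the 1/2-cut step vector, whose level sets mark a candidate partition boundary; whether such a scheme works in general is exactly the work in progress on this slide.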
