Approximating Probabilistic Bisimulation by Averaging

Prakash Panangaden

Outline: Introduction · Background · Cones and Duality · Conditional expectation · Markov processes · Bisimulation · Conclusions


  1–2. Expectation and conditional expectation

  1. The expectation E_p(f) of a measurable function f is the average, computed as ∫ f dp, and is therefore just a number.
  2. The conditional expectation is not a mere number but a random variable.
  3. It is meant to measure the expected value in the presence of additional information.
  4. The additional information takes the form of a sub-σ-algebra, say Λ, of Σ. The experimenter knows, for every B ∈ Λ, whether the outcome is in B or not.
  5. Now she can recompute the expectation values given this information.

  3–6. Formalizing conditional expectation

  It is an immediate consequence of the Radon–Nikodym theorem that such conditional expectations exist.

  Kolmogorov. Let (X, Σ, p) be a measure space with p a finite measure, let f be in L_1(X, Σ, p) and let Λ be a sub-σ-algebra of Σ; then there exists a g ∈ L_1(X, Λ, p) such that for all B ∈ Λ

      ∫_B f dp = ∫_B g dp.

  This function g is usually denoted by E(f | Λ).

  We clearly have f·p ≪ p, so the required g is simply d(f·p)/d(p|Λ), where p|Λ is the restriction of p to the sub-σ-algebra Λ.

  7–9. Properties of conditional expectation

  1. The point of requiring Λ-measurability is that it "smooths out" variations that are too rapid to show up in Λ.
  2. The conditional expectation is linear and increasing with respect to the pointwise order.
  3. It is defined uniquely p-almost everywhere.
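On a finite space, the Kolmogorov property above can be checked directly. The sketch below (the space, measure, function and partition are invented for illustration; they are not from the talk) computes E(f | Λ) by averaging f over the blocks of a partition generating Λ, and verifies that the integrals of f and g agree on every generating set.

```python
# Illustrative finite sketch: conditional expectation as block averaging.
# X = {0,...,5}; Lambda is generated by the partition `blocks`, so
# E(f | Lambda) is constant on each block.
p = [0.1, 0.2, 0.1, 0.2, 0.3, 0.1]          # a finite measure on X
f = [1.0, 3.0, 2.0, 4.0, 0.0, 6.0]          # an integrable function
blocks = [[0, 1, 2], [3, 4, 5]]             # generators of Lambda

def cond_exp(f, p, blocks):
    g = [0.0] * len(f)
    for B in blocks:
        mass = sum(p[x] for x in B)
        avg = sum(f[x] * p[x] for x in B) / mass   # weighted block average
        for x in B:
            g[x] = avg
    return g

g = cond_exp(f, p, blocks)
# The defining property: for every B in Lambda, the integrals agree.
for B in blocks:
    assert abs(sum(f[x] * p[x] for x in B) - sum(g[x] * p[x] for x in B)) < 1e-9
```

Note how g "smooths out" f: it cannot distinguish points within a block, exactly as the Λ-measurability requirement demands.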

  10–16. What are cones?

  We want to combine linear structure with order structure.

  If we have a vector space with an order ≤, we have a natural notion of positive and negative vectors: x ≥ 0 is positive.

  What properties do the positive vectors have? Say P ⊂ V is the set of positive vectors; we include 0.

  Then for any positive v ∈ P and positive real r, rv ∈ P. For u, v ∈ P we have u + v ∈ P, and if v ∈ P and −v ∈ P then v = 0.

  We define a cone C in a vector space V to be a set with exactly these conditions.

  Any cone defines an order by u ≤ v if v − u ∈ C.

  Unfortunately for us, many of the structures that we want to look at are cones but are not part of any obvious vector space: e.g. the measures on a space.
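As a concrete finite-dimensional sanity check (my own example, not from the talk), the positive orthant of R^3 satisfies the cone axioms, and the order it induces via u ≤ v iff v − u ∈ C is exactly the pointwise order:

```python
# Illustrative sketch: the positive orthant in R^3 as a cone, and the
# order it induces.  Vectors are plain lists of floats.
def in_cone(v, eps=1e-12):
    # membership in the positive orthant (up to floating-point slack)
    return all(x >= -eps for x in v)

def leq(u, v):
    # the cone-induced order: u <= v  iff  v - u lies in the cone
    return in_cone([vi - ui for ui, vi in zip(u, v)])

u, v = [1.0, 0.5, 2.0], [1.0, 1.5, 2.0]
assert leq(u, v) and not leq(v, u)            # pointwise order recovered

# Cone axioms: closed under addition and positive scaling; C ∩ -C = {0}.
assert in_cone([ui + vi for ui, vi in zip(u, v)])
assert in_cone([3.0 * x for x in u])
assert not in_cone([-x for x in u])
```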

  17–20. Cones that we use I

  If μ is a measure on X, then one has the well-known Banach spaces L_1 and L_∞.

  These can be restricted to cones by considering the μ-almost everywhere positive functions.

  We will denote these cones by L^+_1(X, Σ, μ) and L^+_∞(X, Σ).

  These are complete normed cones.

  21–26. Cones that we use II

  Let (X, Σ, p) be a measure space with finite measure p. We denote by M_{≪p}(X) the cone of all measures on (X, Σ) that are absolutely continuous with respect to p.

  If q is such a measure, we define its norm to be q(X).

  M_{≪p}(X) is also an ω-complete normed cone.

  The cones M_{≪p}(X) and L^+_1(X, Σ, p) are isometrically isomorphic in ωCC.

  We write M^p_{UB}(X) for the cone of all measures on (X, Σ) that are uniformly less than a multiple of the measure p: q ∈ M^p_{UB}(X) means that for some real constant K > 0 we have q ≤ Kp.

  The cones M^p_{UB}(X) and L^+_∞(X, Σ, p) are isomorphic.

  27–28. The pairing

  Pairing function. There is a map from the product of the cones L^+_∞(X, p) and L^+_1(X, p) to R^+ defined as follows:

      ∀ f ∈ L^+_∞(X, p), g ∈ L^+_1(X, p):  ⟨f, g⟩ = ∫ f g dp.

  This map is bilinear, continuous and ω-continuous in both arguments; we refer to it as the pairing.
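On a finite space the pairing is just a weighted dot product, and its bilinearity can be checked numerically. A small sketch (the measure and functions below are invented for illustration):

```python
# Illustrative finite sketch of the pairing <f, g> = ∫ f g dp on a
# six-point space with measure p.
p = [0.1, 0.2, 0.1, 0.2, 0.3, 0.1]

def pair(f, g):
    return sum(fi * gi * pi for fi, gi, pi in zip(f, g, p))

f  = [1.0, 2.0, 0.0, 1.0, 3.0, 1.0]
g1 = [0.5, 0.0, 1.0, 2.0, 1.0, 0.0]
g2 = [1.0, 1.0, 0.0, 0.0, 2.0, 1.0]

# Linearity in the second argument: <f, g1 + 2*g2> = <f, g1> + 2<f, g2>.
lhs = pair(f, [a + 2 * b for a, b in zip(g1, g2)])
rhs = pair(f, g1) + 2 * pair(f, g2)
assert abs(lhs - rhs) < 1e-12
```

Linearity in the first argument is checked symmetrically; this is the bilinearity the slide asserts.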

  29. Duality expressed via pairing

  This pairing allows one to express the dualities in a very convenient way. For example, the isomorphism between L^+_∞(X, p) and (L^+_1(X, p))^* sends f ∈ L^+_∞(X, p) to

      λg. ⟨f, g⟩ = λg. ∫ f g dp.

  30. Duality is the key

      M_{≪p}(X)   ≅   L^+_1(X, p)
          ↕ *               ↕ *                (1)
      M^p_{UB}(X)  ≅   L^+_∞(X, p)

  where the vertical arrows represent dualities and the horizontal arrows represent isomorphisms.

  31–33. Where the action happens

  We define two categories Rad_∞ and Rad_1 that will be needed for the functorial definition of conditional expectation.

  This will allow for L_∞ and L_1 versions of the theory.

  Going between these versions by duality will be very useful.

  34. The "infinity" category

  Rad_∞. The category Rad_∞ has as objects probability spaces, and as arrows α : (X, p) → (Y, q) measurable maps such that M_α(p) ≤ Kq for some real number K.

  The reason for choosing the name Rad_∞ is that α ∈ Rad_∞ maps to dM_α(p)/dq ∈ L^+_∞(Y, q).

  35–37. The "one" category

  Rad_1. The category Rad_1 has as objects probability spaces, and as arrows α : (X, p) → (Y, q) measurable maps such that M_α(p) ≪ q.

  1. The reason for choosing the name Rad_1 is that α ∈ Rad_1 maps to dM_α(p)/dq ∈ L^+_1(Y, q).
  2. The fact that the category Rad_∞ embeds in Rad_1 reflects the fact that L^+_∞ embeds in L^+_1.

  38. Pairing function revisited

  Recall the isomorphism between L^+_∞(X, p) and (L^+_1(X, p))^* mediated by the pairing function:

      f ∈ L^+_∞(X, p)  ↦  λg : L^+_1(X, p). ⟨f, g⟩ = ∫ f g dp.

  39–43. Precomposition

  1. Now, precomposition with α in Rad_∞ gives a map P_1(α) from L^+_1(Y, q) to L^+_1(X, p).
  2. Dually, given α ∈ Rad_1 : (X, p) → (Y, q) and g ∈ L^+_∞(Y, q), we have that P_∞(α)(g) ∈ L^+_∞(X, p).
  3. Thus the subscripts on the two precomposition functors describe the target categories.
  4. Using the *-functor we get a map (P_1(α))^* from L^{+,*}_1(X, p) to L^{+,*}_1(Y, q) in the first case, and
  5. dually we get (P_∞(α))^* from L^{+,*}_∞(X, p) to L^{+,*}_∞(Y, q).

  44–46. Expectation value functor

  The functor E_∞(·) is a functor from Rad_∞ to ωCC which, on objects, maps (X, p) to L^+_∞(X, p), and on maps is given as follows:

  Given α : (X, p) → (Y, q) in Rad_∞, the action of the functor is to produce the map E_∞(α) : L^+_∞(X, p) → L^+_∞(Y, q) obtained by composing (P_1(α))^* with the isomorphisms between L^{+,*}_1 and L^+_∞:

      L^+_∞(X, p)   ≅   L^{+,*}_1(X, p)
          | E_∞(α)          | (P_1(α))^*
      L^+_∞(Y, q)   ≅   L^{+,*}_1(Y, q)

  47–49. Consequences

  1. It is an immediate consequence of the definitions that for any f ∈ L^+_∞(X, p) and g ∈ L^+_1(Y, q)

        ⟨E_∞(α)(f), g⟩_Y = ⟨f, P_1(α)(g)⟩_X.

      Tracing f through the diagram defining E_∞(α):

        f            ↦   λh : L^+_1(X, p). ⟨f, h⟩
                                 ↓ (P_1(α))^*
        E_∞(α)(f)    ↦   λg : L^+_1(Y, q). ⟨f, g ∘ α⟩

  2. Note that since we started with α in Rad_∞, we get the expectation value as a map between the L^+_∞ cones.

  50. The other expectation value functor

  The functor E_1(·) is a functor from Rad_1 to ωCC which maps the object (X, p) to L^+_1(X, p), and on maps is given as follows:

  Given α : (X, p) → (Y, q) in Rad_1, the action of the functor is to produce the map E_1(α) : L^+_1(X, p) → L^+_1(Y, q) obtained by composing (P_∞(α))^* with the isomorphisms between L^{+,*}_∞ and L^+_1, as shown in the diagram below:

      L^+_1(X, p)   ≅   L^{+,*}_∞(X, p)
          | E_1(α)          | (P_∞(α))^*
      L^+_1(Y, q)   ≅   L^{+,*}_∞(Y, q)

  51–54. Markov kernels as linear maps

  1. Given a Markov kernel τ from (X, Σ) to (Y, Λ), we define T_τ : L^+(Y) → L^+(X), for f ∈ L^+(Y) and x ∈ X, by T_τ(f)(x) = ∫_Y f(z) τ(x, dz).
  2. This map is well defined, linear and ω-continuous.
  3. If we write 1_B for the indicator function of the measurable set B, we have that T_τ(1_B)(x) = τ(x, B).
  4. It encodes all the transition-probability information.
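On a finite state space a Markov kernel is a row-stochastic matrix, and T_τ is matrix-vector multiplication. A sketch (the kernel below is invented for illustration) showing that T_τ applied to an indicator function recovers the transition probabilities:

```python
# Illustrative finite sketch: a Markov kernel on X = Y = {0,1,2} as a
# row-stochastic matrix; T_tau acts on functions f : Y -> R^+ by
# (T_tau f)(x) = sum_z tau[x][z] * f(z).
tau = [[0.5, 0.5, 0.0],
       [0.1, 0.6, 0.3],
       [0.0, 0.2, 0.8]]

def T(f):
    return [sum(row[z] * f[z] for z in range(len(f))) for row in tau]

# Applying T to the indicator of B = {1, 2} recovers tau(x, B):
one_B = [0.0, 1.0, 1.0]
expected = [0.5, 0.9, 1.0]
assert all(abs(a - b) < 1e-9 for a, b in zip(T(one_B), expected))
```

Linearity and the claim that T_τ encodes all the transition information are visible here: the values τ(x, B) for every B are recovered from T_τ on indicators, and general f are handled by linearity.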

  55–56. From linear maps to Markov kernels

  1. Conversely, any ω-continuous morphism L with L(1_Y) ≤ 1_X can be cast as a Markov kernel by reversing the process on the last slide.
  2. The interpretation of L is that L(1_B) is a measurable function on X such that L(1_B)(x) is the probability of jumping from x to B.

  57–59. Backwards

  1. We can also define an operator on M(X) by using τ the other way.
  2. We define T̄_τ : M(X) → M(Y), for μ ∈ M(X) and B ∈ Λ, by T̄_τ(μ)(B) = ∫_X τ(x, B) dμ(x).
  3. It is easy to show that this map is linear and ω-continuous.
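In the finite sketch, T̄_τ is the transpose action: it pushes a distribution on X forward through the kernel. A short illustration (same invented kernel shape as before):

```python
# Illustrative finite sketch: T-bar pushes a measure mu on X forward to a
# measure on Y via (T-bar mu)(B) = sum_{x} tau(x, B) * mu(x); pointwise
# this is nu(y) = sum_x tau[x][y] * mu[x], i.e. multiplication by the
# transpose of the kernel matrix.
tau = [[0.5, 0.5, 0.0],
       [0.1, 0.6, 0.3],
       [0.0, 0.2, 0.8]]
mu = [0.2, 0.3, 0.5]          # current distribution on X

def T_bar(mu):
    n = len(tau[0])
    return [sum(tau[x][y] * mu[x] for x in range(len(mu))) for y in range(n)]

nu = T_bar(mu)
assert abs(sum(nu) - 1.0) < 1e-12   # a stochastic kernel preserves total mass
```

This is the "forwards in time" reading discussed on the next slide, dual to the function transformer T_τ.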

  60–62. What do they mean?

  1. The operator T̄_τ transforms measures "forwards in time": if μ is a measure on X representing the current state of the system, T̄_τ(μ) is the resulting measure on Y after a transition through τ.
  2. The operator T_τ may be interpreted as a likelihood transformer which propagates information "backwards", just as we expect from predicate transformers.
  3. T_τ(f)(x) is just the expected value of f after one τ-step, given that one is at x.

  63–64. Labelled abstract Markov processes

  The definition. An abstract Markov kernel from (X, Σ, p) to (Y, Λ, q) is an ω-continuous linear map τ : L^+_∞(Y) → L^+_∞(X) with ‖τ‖ ≤ 1.

  LAMPs. A labelled abstract Markov process on a probability space (X, Σ, p) with a set of labels (or actions) A is a family of abstract Markov kernels τ_a : L^+_∞(X, p) → L^+_∞(X, p) indexed by elements a of A.

  65. The approximation map

  The expectation value functors project a probability space onto another one with a possibly coarser σ-algebra. Given an AMP on (X, p) and a map α : (X, p) → (Y, q) in Rad_∞, we have the following approximation scheme:

      L^+_∞(X, p)   ──τ_a──►   L^+_∞(X, p)
          ▲ P_∞(α)                 │ E_∞(α)
      L^+_∞(Y, q)   ──α(τ_a)─►  L^+_∞(Y, q)

  66–69. A special case

  Take (X, Σ) and (X, Λ) with Λ ⊂ Σ, and use the measurable function id : (X, Σ) → (X, Λ) as α.

  Coarsening the σ-algebra:

      L^+_∞(X, Σ, p)   ──τ_a──►    L^+_∞(X, Σ, p)
          ▲ P_∞(id)                     │ E_∞(id)
      L^+_∞(X, Λ, p)   ──id(τ_a)─►  L^+_∞(X, Λ, p)

  Thus id(τ_a) is the approximation of τ_a obtained by averaging over the sets of the coarser σ-algebra Λ.

  We now have the machinery to consider approximating along arbitrary maps α.
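The coarsening scheme above can be run concretely on a finite space: lift a Λ-measurable function to Σ (the role of P_∞(id)), take one τ_a step, then average back over the blocks of Λ (the role of E_∞(id)). The kernel and partition below are invented for illustration, chosen so that the blocks happen to be closed under the dynamics:

```python
# Illustrative finite sketch of coarsening: compute id(tau_a) = E ∘ tau ∘ P
# for a partition Lambda of a four-point space with uniform p.
p = [0.25, 0.25, 0.25, 0.25]
tau = [[0.7, 0.3, 0.0, 0.0],
       [0.3, 0.7, 0.0, 0.0],
       [0.0, 0.0, 0.5, 0.5],
       [0.0, 0.0, 0.5, 0.5]]
blocks = [[0, 1], [2, 3]]                     # the blocks generating Lambda
block_of = {x: i for i, B in enumerate(blocks) for x in B}

def approx(f_blocks):
    # P: lift a Lambda-measurable function (one value per block) to X
    f = [f_blocks[block_of[x]] for x in range(4)]
    # tau_a: one transition step
    tf = [sum(tau[x][z] * f[z] for z in range(4)) for x in range(4)]
    # E: average over each block, weighted by p
    return [sum(tf[x] * p[x] for x in B) / sum(p[x] for x in B) for B in blocks]

# Here the blocks are closed under tau, so the approximation loses nothing:
out = approx([1.0, 0.0])
assert all(abs(a - b) < 1e-9 for a, b in zip(out, [1.0, 0.0]))
```

With a partition whose blocks are not closed under τ, the same `approx` still computes the averaged kernel, but information about within-block variation is lost, which is exactly the point of the construction.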

  70. Bisimulation traditionally

  Larsen–Skou definition. Given an LMP (S, Σ, τ_a), an equivalence relation R on S is called a probabilistic bisimulation if, whenever sRt, then for every measurable R-closed set C and every label a we have

      τ_a(s, C) = τ_a(t, C).

  This variation for the continuous case is due to Josée Desharnais and her Indian friends.

  71–73. Event bisimulation

  In measure theory one should focus on measurable sets rather than on points.

  Event bisimulation. Given an LMP (X, Σ, τ_a), an event bisimulation is a sub-σ-algebra Λ of Σ such that (X, Λ, τ_a) is still an LMP.

  This means τ_a sends the subspace L^+_∞(X, Λ, p) to itself, where we are now viewing τ_a as a map on L^+_∞(X, Λ, p).

  74. The bisimulation diagram

      L^+_∞(X, Σ, p)   ──τ_a──►  L^+_∞(X, Σ, p)
          ▲                          ▲
      L^+_∞(X, Λ, p)   ──τ_a──►  L^+_∞(X, Λ, p)

  This is a "lossless" approximation!

  75. Zigzag maps

  We can generalize the notion of event bisimulation by using maps other than the identity map on the underlying sets. This would be a map α from (X, Σ, p) to (Y, Λ, q), equipped with LMPs τ_a and ρ_a respectively, such that the following commutes (2):

      L^+_∞(X, Σ, p)   ──τ_a──►  L^+_∞(X, Σ, p)
          ▲ P_∞(α)                   ▲ P_∞(α)
      L^+_∞(Y, Λ, q)   ──ρ_a──►  L^+_∞(Y, Λ, q)

  76. A key diagram

  When we have a zigzag the following diagram commutes (3):

      L^+_∞(Y)   ──ρ_a──►    L^+_∞(Y) ──┐
         │ P_∞(α)               │ P_∞(α) │
      L^+_∞(X)   ──τ_a──►    L^+_∞(X)    │ E_1(α)(1_X)·(−)
         ▲ P_∞(α)               │ E_∞(α) │
      L^+_∞(Y)   ──α(τ_a)──► L^+_∞(Y) ◄─┘

  The upper trapezium says we have a zigzag. The lower trapezium says that we have an "approximation", and the triangle on the right is an earlier lemma.
