Minimizing Markov Chains Beyond Bisimilarity*
Giovanni Bacci, Giorgio Bacci, Kim G. Larsen, Radu Mardare
Aalborg University, Denmark
22 April 2017, Uppsala, Sweden (SynCoP + PV 2017)
(*) On the Metric-based Approximate Minimization of Markov Chains, accepted for ICALP 2017
Best Approximant & Parameter Synthesis

[Figure, shown as a progressive build: a 6-state Markov chain (states m0, ..., m5, with transition probabilities 1/2, 1/6, 1/3 out of m0, probabilities 1/2, 1/2 out of m1 and m2, and absorbing states m3, m4, m5) next to candidate approximants in MC(5) obtained by collapsing m1 and m2 into a single state m12. The approximants' transition probabilities are first left as unknowns ("?", parameter synthesis) and then instantiated with the synthesized values, among them 1/2, 1/6, 1/3 and 4/9.]
Optimal parameters may be irrational!

[Figure: a 5-state Markov chain (states m0, ..., m4, transition probabilities 79/100 and 21/100, absorbing states with probability 1) next to a 3-state parametric approximant (states n0, n1, n2) whose transitions out of n0 carry the parameters x, y and 1 - x - y.]

Optimal parameters may be irrational:  x = √173 / 30,  y = 21/200.
Optimal distance is irrational as well:  δ(m0, n0) = 436/675 - (163/13500)·√173 ≈ 0.49.
The focus of the talk

• Probabilistic models (Markov chains)
• Automatic verification (e.g., model checking)
• State space explosion (even after model reduction, symbolic techniques, partial-order reduction)
• Still too large: one needs to compromise on the accuracy of the model (introduce an error)
• Our proposal: metric-based state space reduction
Probabilistic Bisimulation [Larsen & Skou '91]

[Figure, shown as a progressive build: a Markov chain whose initial state s moves with probability 1/2 to each of two probabilistically bisimilar branches (states m0, m1, m2 and n0, ..., n3, with transition probabilities 1/3), and its bisimulation quotient, in which bisimilar successors are lumped together (yielding probabilities 1/3 and 2/3 out of m0).]

Optimal lumping [Kemeny & Snell '60]; efficient lumping technique [Derisavi et al. '03].

…but small variations may prevent aggregation: perturbing the probabilities 1/3, 1/3 in one branch to 1/3+ε, 1/3-ε breaks bisimilarity, and no aggregation is possible anymore.
Bisimilarity Distance

𝓝 = (M, τ, α, m0)        𝓞 = (N, θ, β, n0)

[Figure, shown as a progressive build: the two chains from the previous slide, one with the perturbed probabilities 1/3+ε and 1/3-ε, the other with probabilities 1/3, 1/3, 1/3; a coupling of the two transition distributions matches probability mass 1/3 with 1/3 and 1/3-ε with 1/3-ε, leaving a discrepancy of ε between m0 and n0.]
Bisimilarity Distance (fixed-point characterization by van Breugel & Worrell)

Given a parameter λ ∈ (0,1], called the discount factor, the bisimilarity distance δ_λ is the smallest distance satisfying

  δ_λ(m,n) = 1                                if α(m) ≠ β(n)
  δ_λ(m,n) = λ · 𝓛(δ_λ)(τ(m), θ(n))           otherwise

where λ discounts at each step and 𝓛 is the Kantorovich lifting, minimizing over couplings C of the two transition distributions:

  𝓛(d)(τ(m), θ(n)) = min { ∑_{u,v} d(u,v) · C(u,v)  :  ∑_{u ∈ M} C(u,v) = θ(n)(v),  ∑_{v ∈ N} C(u,v) = τ(m)(u) }
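The fixed-point characterization above can be turned directly into a simple approximation procedure. The sketch below (Python with numpy/scipy; not the authors' implementation, and the function names kantorovich and bisim_distance are illustrative) assumes the two chains are given as row-stochastic matrices with label arrays, solves each Kantorovich lifting as a small transportation LP, and iterates the operator from d = 0, approximating the least fixed point δ_λ from below. It is only meant to illustrate the definition; the polynomial-time result cited on the next slide uses a more refined algorithm.

import numpy as np
from scipy.optimize import linprog

def kantorovich(d, mu, nu):
    """Kantorovich lifting L(d)(mu, nu): a transportation LP over couplings C
    with row marginals mu and column marginals nu."""
    n, m = len(mu), len(nu)
    cost = d.reshape(-1)                        # minimize sum_{u,v} d[u,v] * C[u,v]
    A_eq, b_eq = [], []
    for u in range(n):                          # sum_v C[u,v] = mu[u]
        row = np.zeros((n, m)); row[u, :] = 1.0
        A_eq.append(row.reshape(-1)); b_eq.append(mu[u])
    for v in range(m):                          # sum_u C[u,v] = nu[v]
        col = np.zeros((n, m)); col[:, v] = 1.0
        A_eq.append(col.reshape(-1)); b_eq.append(nu[v])
    res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
    return res.fun

def bisim_distance(P, lab_P, Q, lab_Q, lam=1.0, iters=100, tol=1e-9):
    """Approximate the lambda-discounted bisimilarity distance between every
    pair of states of two MCs (row-stochastic matrices P, Q with labellings
    lab_P, lab_Q) by Kleene iteration of the fixed-point equation from d = 0."""
    d = np.zeros((P.shape[0], Q.shape[0]))
    for _ in range(iters):
        d_new = np.empty_like(d)
        for i in range(P.shape[0]):
            for j in range(Q.shape[0]):
                if lab_P[i] != lab_Q[j]:
                    d_new[i, j] = 1.0          # differently labelled states are at distance 1
                else:
                    d_new[i, j] = lam * kantorovich(d, P[i], Q[j])
        if np.max(np.abs(d_new - d)) < tol:
            return d_new
        d = d_new
    return d

Each iteration solves one LP of size |M|·|N| per pair of equally labelled states; for λ < 1 the operator is a contraction, while for λ = 1 the iteration only converges to δ_1 in the limit.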
Remarkable properties

Theorem (Desharnais et al. '99; see also Jonsson & Larsen '91):
  m ~ n  iff  δ_λ(m,n) = 0

Theorem (Chen, van Breugel, Worrell '12; see also Bacci, Larsen, Mardare '13):
  The probabilistic bisimilarity distance can be computed in polynomial time.
Approximate verification

Theorem (Chen et al. FoSSaCS'12, Bacci et al. ICTAC'15): for all LTL formulas φ,

  |P(𝓝)([φ]) - P(𝓞)([φ])| ≤ δ_1(𝓝, 𝓞)

i.e., the undiscounted distance bounds the difference in the probability of satisfying φ. Now imagine that |𝓝| ≫ |𝓞|: we can use 𝓞 in place of 𝓝, and P(𝓞)([φ]) is an approximate solution for P(𝓝)([φ]), within δ_1(𝓝, 𝓞) of the true value.
Metric-based State Space Reduction

Two problems:
• Closest Bounded Approximant (CBA): given an MC M and a bound k, find N ∈ MC(k) minimizing the distance d(M, N).
• Minimum Significant Approximant Bound (MSAB): given an MC M, find the smallest k such that some N ∈ MC(k) satisfies d(M, N) < 1 (minimize k).
List of our Results

• CBA can be expressed as a bilinear program
• The CBA threshold problem is
  • NP-hard (complexity lower bound)
  • in PSPACE (complexity upper bound)
• The MSAB threshold problem is NP-complete
• An Expectation-Maximization heuristic for CBA
The CBA-λ problem

The Closest Bounded Approximant w.r.t. δ_λ
Instance: an MC M and a positive integer k.
Output: an MC Ñ with at most k states minimizing δ_λ(m0, ñ0), i.e., such that

  δ_λ(m0, ñ0) = inf { δ_λ(m0, n0) | N ∈ MC(k) }

We get a solution iff the infimum is attained (is a minimum).
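To make the problem statement concrete, here is a naive numerical sketch of CBA-λ (again Python, reusing bisim_distance from the sketch above; all names are illustrative, and the labelling lab_N of the k candidate states is assumed fixed). It parameterizes a k-state candidate by a row-stochastic matrix and locally minimizes δ_λ(m0, n0) with a derivative-free optimizer. This is not the approach of the paper, which encodes CBA as a bilinear program and proposes an Expectation-Maximization heuristic; a local search like this can miss the optimum, and, as the irrational-parameter example shows, the optimal parameters need not even be rational.

import numpy as np
from scipy.optimize import minimize

def cba_objective(theta, P, lab_P, lab_N, lam):
    """Distance from M's initial state to the initial state of the k-state
    candidate encoded by the unconstrained parameter vector theta."""
    k = len(lab_N)
    logits = theta.reshape(k, k)
    logits = logits - logits.max(axis=1, keepdims=True)              # numerical stability
    Q = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)   # row-stochastic matrix
    d = bisim_distance(P, lab_P, Q, lab_N, lam=lam)
    return d[0, 0]        # index 0 is taken as the initial state of both chains

def cba_local_search(P, lab_P, lab_N, lam=1.0, restarts=5, seed=0):
    """Random-restart Nelder-Mead over the candidate's transition probabilities,
    for a fixed labelling lab_N of its k states."""
    rng = np.random.default_rng(seed)
    k = len(lab_N)
    best = None
    for _ in range(restarts):
        res = minimize(cba_objective, rng.normal(size=k * k),
                       args=(P, lab_P, lab_N, lam), method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    return best   # best.fun approximates inf { delta_lambda(m0, n0) | N in MC(k) }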