Construction of Lyapunov functions via relative entropy, with application to caching
Nicolas Gast (Inria)
ACM MAMA 2016, Antibes, France
Outline
1. Why?
2. How to make the fixed point method work (a sufficient condition)
3. What: application to a caching policy
4. Conclusion
State space explosion and the mean-field method

We need to keep track of P(X_1(t) = i_1, ..., X_n(t) = i_n): already 3^13 ≈ 10^6 states.

The decoupling assumption is

  P(X_1(t) = i_1, ..., X_n(t) = i_n) ≈ P(X_1(t) = i_1) ⋯ P(X_n(t) = i_n)

Problem: is this valid?
Decoupling assumption: (always) valid in the transient regime

Theorem (Kurtz (70s), Benaim and Le Boudec (08), ...)
For many systems and any fixed t, if x ↦ xQ(x) is Lipschitz-continuous, then, as the number of objects N goes to infinity,

  lim_{N→∞} P(X_k(t) = i) = x_{k,i}(t),

where x satisfies ẋ = xQ(x).

[Figure: probability of being in the cache vs. number of requests, for 1 list (200) and 4 lists (50/50/50/50); the simulation curves match the ODE approximation ẋ = xQ(x).]
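To make the ODE ẋ = xQ(x) concrete, here is a minimal sketch that integrates it with an Euler scheme. The two-state model and its rate function are invented for illustration only (they are not the model of the talk); the rate from state 0 to state 1 depends on the current occupancy x, which is exactly what makes Q a function of x.

```python
# Euler integration of the mean-field ODE  x' = x Q(x)  for a toy
# two-state model. The rates below are assumptions for illustration.

def Q(x):
    a = 0.5 + x[1]   # hypothetical rate 0 -> 1, grows with occupancy of state 1
    b = 1.0          # rate 1 -> 0
    return [[-a, a], [b, -b]]

def step(x, dt):
    # one Euler step of x' = x Q(x); row sums of Q are 0, so mass is conserved
    q = Q(x)
    return [x[i] + dt * sum(x[j] * q[j][i] for j in range(len(x)))
            for i in range(len(x))]

def integrate(x0, t_end, dt=1e-3):
    x, t = list(x0), 0.0
    while t < t_end:
        x = step(x, dt)
        t += dt
    return x

x = integrate([1.0, 0.0], 20.0)
print(x)  # approaches (0.5, 0.5), the unique solution of x Q(x) = 0 here
```

For this choice of rates the fixed point can be checked by hand: x_1 = (1 − x_1)(0.5 + x_1) gives x_1 = 0.5.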
The fixed point method

We know that x_i(t) ≈ P(X(t) = i) satisfies ẋ = xQ(x). Does P(X = i) satisfy xQ(x) = 0?

This method was used in many papers:
- G. Bianchi. Performance analysis of the IEEE 802.11 distributed coordination function. IEEE J. Select. Areas Commun., 2000.
- V. Ramaiyan, A. Kumar, and E. Altman. Fixed point analysis of single cell IEEE 802.11e WLANs: uniqueness, multistability. ACM/IEEE Trans. Networking, Oct. 2008.
- B.-J. Kwak, N.-O. Song, and L. Miller. Performance analysis of exponential backoff. ACM/IEEE Trans. Networking, 2005.
- A. Kumar, E. Altman, D. Miorandi, and M. Goyal. New insights from a fixed-point analysis of single cell IEEE 802.11 WLANs. ACM/IEEE Trans. Networking, 2007.
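In practice the fixed point method amounts to iterating "compute the stationary distribution of Q(x), feed it back as x". A minimal sketch, reusing a hypothetical two-state model (rates are illustrative, not from any of the cited papers):

```python
# Fixed point method sketch: iterate  x <- pi(x), where pi(x) is the
# stationary distribution of the generator Q(x). Toy two-state model
# with assumed rates.

def rates(x):
    a = 0.5 + x[1]  # hypothetical rate 0 -> 1, depends on occupancy of state 1
    b = 1.0         # rate 1 -> 0
    return a, b

def stationary(x):
    # stationary distribution of a two-state chain with rates a, b
    a, b = rates(x)
    return [b / (a + b), a / (a + b)]

x = [1.0, 0.0]
for _ in range(100):
    x = stationary(x)
print(x)  # converges to (0.5, 0.5), the solution of x Q(x) = 0
```

Here the iteration is a contraction, so it converges; the point of the next slides is that even when such a fixed point is unique, it need not describe the stationary regime of the finite-N system.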
It does not always work [Benaim and Le Boudec 08; Cho, Le Boudec, Jiang 10]

[Diagram: three-state model (S, I, R) with state-dependent transition rates.]

- The Markov chain is irreducible.
- xQ(x) = 0 has a unique fixed point.

Yet the fixed point does not always match the stationary measure:

          Fixed point xQ(x) = 0     Stat. measure (N = 1000)
          x_S       x_I             π_S       π_I
a = .3    0.209     0.234           0.209     0.234
a = .1    0.078     0.126           0.11      0.13

References: Benaim and Le Boudec, 2008. Cho, Le Boudec, Jiang, On the Asymptotic Validity of the Decoupling Assumption for Analyzing 802.11 MAC Protocol, 2010.
It does not always work

[Figure: simplex plot over (S, I, R), showing the limit cycle of the ODE, the true stationary distribution (which concentrates on the limit cycle), and the fixed point.]
Outline
1. Why?
2. How to make the fixed point method work (a sufficient condition)
3. What: application to a caching policy
4. Conclusion
Link between the decoupling assumption and ẋ = xQ(x)

  P(X_1(t) = i_1, ..., X_n(t) = i_n) ≈ P(X_1(t) = i_1) ⋯ P(X_n(t) = i_n) = x_{1,i_1}(t) ⋯ x_{n,i_n}(t)

When we zoom in on one object:

  P(X_1(t+dt) = j | X_1(t) = i) ≈ E[ P(X_1(t+dt) = j | X_1(t) = i, X_2(t), ..., X_n(t)) ]
                                ≈ Q^{(1)}_{i,j}(x) := Σ_{i_2,...,i_n} K_{(i,i_2,...,i_n) → (j,i_2,...,i_n)} x_{2,i_2} ⋯ x_{n,i_n}

We then get:

  (d/dt) x_{1,j}(t) ≈ Σ_i x_{1,i}(t) Q^{(1)}_{i,j}(x(t))
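The "zoom on one object" step can be sketched numerically. In this toy example (two objects with two states each, an invented kernel K in which only object 1 changes state per transition), the effective generator seen by object 1 averages the joint rates against the occupancy x_2 of the other object:

```python
# Effective generator of object 1:  Q1[i][j] = sum_{i2} K[(i,i2)->(j,i2)] * x2[i2].
# Toy kernel for 2 objects x 2 states; the rates are illustrative only.

K = {((0, 0), (1, 0)): 1.0,
     ((0, 1), (1, 1)): 3.0,   # object 1 flips faster when object 2 is in state 1
     ((1, 0), (0, 0)): 2.0,
     ((1, 1), (0, 1)): 2.0}

def effective_generator(K, x2, n_states=2):
    Q1 = [[0.0] * n_states for _ in range(n_states)]
    for ((i, i2), (j, j2)), rate in K.items():
        if i != j and i2 == j2:          # only object 1 moves in this transition
            Q1[i][j] += rate * x2[i2]
    for i in range(n_states):            # diagonal = minus the off-diagonal row sum
        Q1[i][i] = -sum(Q1[i][j] for j in range(n_states) if j != i)
    return Q1

Q1 = effective_generator(K, x2=[0.5, 0.5])
print(Q1)  # -> [[-2.0, 2.0], [2.0, -2.0]]
```

With x_2 = (0.5, 0.5), the rate 0 → 1 is 1.0·0.5 + 3.0·0.5 = 2.0, as printed.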
Exchangeability of limits

                     Markov chain               Mean-field
Transient regime:    ṗ = pK      --(N → ∞)-->   ẋ = xQ(x)
                       | (t → ∞)                  | (t → ∞)
Stationary:          πK = 0      --(N → ∞)?-->  xQ(x) = 0 (fixed points)

Does the diagram commute, i.e. does the stationary distribution concentrate on the fixed points as N → ∞?

Theorem ((i) Benaim and Le Boudec 08; (ii) Le Boudec 12)
The stationary distribution π^N concentrates on the fixed points if:
(i) all trajectories of the ODE converge to the fixed points, or
(ii) the Markov chain is reversible.
Lyapunov functions

A solution of (d/dt) x(t) = x(t) Q(x(t)) converges to the fixed points of xQ(x) = 0 if there exists a Lyapunov function f, that is:
- Lower bounded: inf_x f(x) > −∞
- Decreasing along trajectories: (d/dt) f(x(t)) < 0 whenever x(t) Q(x(t)) ≠ 0.

How to find a Lyapunov function? Energy? Distance? Entropy? Luck?
The relative entropy is a Lyapunov function for Markov chains

Let Q be the generator of an irreducible Markov chain and π its stationary distribution. Let P(t) be the solution of (d/dt) P(t) = P(t) Q.

Theorem (e.g. Budhiraja et al. 15, Dupuis and Fischer 11)
The relative entropy

  R(P ‖ π) = Σ_i P_i log (P_i / π_i)

is a Lyapunov function: (d/dt) R(P(t) ‖ π) ≤ 0, with equality if and only if P(t) = π.
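This monotonicity is easy to observe numerically. The sketch below uses an arbitrary irreducible 3-state generator (invented for illustration), computes π by integrating the forward equation for a long time, and then records R(P(t) ‖ π) from a point mass: the sequence only decreases.

```python
import math

# Check on a toy 3-state chain that R(P(t) || pi) is non-increasing
# along dP/dt = P Q. The generator Q is arbitrary (illustrative only).

Q = [[-2.0, 1.0, 1.0],
     [0.5, -1.0, 0.5],
     [1.0, 2.0, -3.0]]
n = len(Q)

def step(p, dt=1e-3):
    # one Euler step of the forward (Kolmogorov) equation p' = p Q
    return [p[i] + dt * sum(p[j] * Q[j][i] for j in range(n)) for i in range(n)]

def rel_entropy(p, pi):
    # terms with p_i = 0 contribute 0 (0 log 0 = 0)
    return sum(p_i * math.log(p_i / pi_i) for p_i, pi_i in zip(p, pi) if p_i > 0)

# stationary distribution, by integrating long enough from the uniform start
pi = [1.0 / n] * n
for _ in range(20000):
    pi = step(pi)

# follow R(P(t) || pi) from a point mass: it should only decrease
p = [1.0, 0.0, 0.0]
values = []
for _ in range(5000):
    values.append(rel_entropy(p, pi))
    p = step(p)

print(values[0], values[-1])
```

Note that each Euler step is itself a stochastic matrix I + dt·Q, so the decrease is an instance of the data-processing inequality for the discrete-time chain.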
Relative entropy for mean-field models

Assume that Q(x) is the generator of an irreducible Markov chain for every x, and let π(x) be its stationary distribution. Let P(t) be the solution of (d/dt) P(t) = P(t) Q(P(t)). Then

  (d/dt) R(P(t) ‖ π(t)) = [(d/dt) P(t)] ∂R/∂P (P(t), π(t)) + [(d/dt) π(t)] ∂R/∂π (P(t), π(t)).

The first term is ≤ 0 and the second equals − Σ_i x_i(t) (d/dt) log π_i(t), so

  (d/dt) R(P(t) ‖ π(t)) ≤ − Σ_i x_i(t) (d/dt) log π_i(t).

Theorem
If there exists a lower bounded function F(x) whose derivative along trajectories is Σ_i x_i(t) (d/dt) log π_i(t), then x ↦ R(x ‖ π(x)) + F(x) is a Lyapunov function for the mean-field model (its derivative reduces to the first, nonpositive, term).
Outline
1. Why?
2. How to make the fixed point method work (a sufficient condition)
3. What: application to a caching policy
4. Conclusion
I consider a cache (virtually) divided into lists

Application: requests follow the IRM (item i is requested with probability p_i); RAND replacement within lists.

  data source → list 1 → ... → list j → list j+1 → ... → list h

Upon a miss, the requested item enters list 1, exchanged with a random item of that list. Upon a hit in list j, the item is exchanged with a random item of list j+1.
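The policy above can be simulated directly. This sketch follows my reading of the slide; the list sizes, Zipf popularity law, and request count are illustrative choices, not parameters from the talk, and in the last list a hit leaves the item in place (there is no next list).

```python
import bisect
import itertools
import random

# List-based cache under IRM requests with Zipf-like popularities.
# Hit in list j < h: swap with a uniformly random item of list j+1.
# Miss: the requested item replaces a random item of list 1.

def simulate(n_items=500, sizes=(50, 50, 50, 50), n_requests=50000, seed=1):
    rng = random.Random(seed)
    weights = [1.0 / (i + 1) for i in range(n_items)]       # Zipf(1) weights
    cum = list(itertools.accumulate(weights))
    lists, start = [], 0
    for s in sizes:                                         # initial fill: items 0..199
        lists.append(list(range(start, start + s)))
        start += s
    pos = {i: (j, k) for j, lst in enumerate(lists) for k, i in enumerate(lst)}
    hits = 0
    for _ in range(n_requests):
        i = bisect.bisect(cum, rng.random() * cum[-1])      # sample an IRM request
        if i in pos:
            hits += 1
            j, k = pos[i]
            if j + 1 < len(lists):                          # promote to list j+1
                k2 = rng.randrange(len(lists[j + 1]))
                other = lists[j + 1][k2]
                lists[j][k], lists[j + 1][k2] = other, i
                pos[i], pos[other] = (j + 1, k2), (j, k)
        else:                                               # miss: enter list 1
            k2 = rng.randrange(len(lists[0]))
            del pos[lists[0][k2]]                           # evicted item
            lists[0][k2] = i
            pos[i] = (0, k2)
    return hits / n_requests

rate = simulate()
print(rate)
```

The mean-field approximation of this system tracks x_{i,j}(t) = P(item i is in list j), which is what the ODE curves earlier in the talk compare against such simulations.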