

  1. Efficiency of the Cross-Entropy Method for Markov Chain Problems. Ad Ridder (1) and Bruno Tuffin (2). (1) Vrije Universiteit, Amsterdam, Netherlands, aridder@feweb.vu.nl, http://staff.feweb.vu.nl/aridder/ (2) INRIA, Rennes, France, bruno.tuffin@irisa.fr. Rare Event Simulation Workshop 2010, Cambridge, 21 June 2010.

  2. Introduction ◮ Suppose that {A_n : n = 1, 2, ...} is a family of events in a probability space (Ω, A, P), ◮ such that P(A_n) → 0 as n → ∞. ◮ Furthermore, suppose that P(A_n) is difficult to compute analytically or numerically, ◮ but easy to estimate by simulation.

  3. Research Question ◮ There may be many simulation algorithms. ◮ Denote by Y_n the associated unbiased estimator of P(A_n). Can we give conditions for strong efficiency (bounded relative error),

      limsup_{n→∞} E[Y_n^2] / (E[Y_n])^2 < ∞,

  or logarithmic efficiency (asymptotic optimality),

      lim_{n→∞} log E[Y_n^2] / log (E[Y_n])^2 = 1 ?
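The two criteria above can be made concrete on the crude Monte Carlo estimator Y_n = 1{A_n}: since E[Y_n^2] = E[Y_n] = P(A_n), the relative second moment is 1/P(A_n) → ∞ and the log-ratio is 1/2, not 1, so crude simulation satisfies neither criterion. A minimal sketch, using gambler's-ruin hitting probabilities (illustrative up-probability 0.3, not from the slides) as a stand-in for P(A_n):

```python
import math

# For the crude Monte Carlo estimator Y_n = 1{A_n}:
#   E[Y_n^2]/(E[Y_n])^2 = 1/P(A_n)            -> no bounded relative error
#   log E[Y_n^2]/log (E[Y_n])^2 = 1/2 != 1    -> not asymptotically optimal
# P(A_n) below is the gambler's-ruin probability of hitting n before 0
# starting from state 1 (illustrative parameters, not from the slides).
def gamma1(n, p=0.3):
    r = (1 - p) / p
    return (1 - r) / (1 - r ** n)

for n in (4, 8, 12):
    g = gamma1(n)
    # relative second moment 1/g grows without bound; log-ratio stays 1/2
    print(n, g, 1 / g, math.log(g) / math.log(g ** 2))
```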

  4. Specific or General? Many studies in the rare event simulation literature show efficiency of a specific algorithm for a specific problem. For instance, concerning asymptotic optimality: ◮ Specific: importance sampling with an exponentially twisted distribution for a level crossing probability ([1]). ◮ More general: Dupuis and co-authors ([2], [3]) developed an importance sampling method based on a control-theoretic approach to large deviations, applicable to a large class of problems involving Markov chains and queueing networks. ◮ More abstract: assume large deviations probabilities, i.e., lim_{n→∞} (1/n) log P(A_n) = −θ, and derive conditions under which an exponentially twisted importance sampling distribution is asymptotically optimal ([4], [5]).

  5. References 1. Siegmund, D. 1976. Annals of Statistics 4, 673-684. 2. Dupuis, P., and Wang, H. 2005. Annals of Applied Probability 15, 1-38. 3. Dupuis, P., Sezer, D., and Wang, H. 2007. Annals of Applied Probability 17, 1306-1346. 4. Sadowsky, J.S. 1996. Annals of Applied Probability 6, 399-422. 5. Dieker, T., and Mandjes, M. 2005. Advances in Applied Probability 37, 539-552.

  6. BRE Studies Examples with strong efficiency. ◮ Importance sampling with a biasing scheme in a highly reliable Markovian system [1]. ◮ Zero-variance approximation importance sampling in a highly reliable Markovian system [2]. ◮ Combination of conditioning and importance sampling for tail probabilities of geometric sums of heavy-tailed rv's [3]. ◮ State-dependent importance sampling (based on zero-variance approximation) for sums of Gaussian rv's [4].

  7. References 1. Shahabuddin, P. 1994. Management Science 40, 333-352. 2. L'Ecuyer, P. and Tuffin, B. 2009. Annals of Operations Research, to appear. 3. Juneja, S. 2007. Queueing Systems 57, 115-127. 4. Blanchet, J.H. and Glynn, P.W. 2006. Proceedings ValueTools 2006.

  8. More References More or less general studies. 1. Heidelberger, P. 1995. ACM Transactions on Modeling and Computer Simulation 5, 43-85. 2. Asmussen, S. and Rubinstein, R. 1995. In Advances in Queueing: Theory, Methods, and Open Problems, 429-462. 3. L'Ecuyer, P., Blanchet, J.H., Tuffin, B. and Glynn, P.W. 2010. ACM Transactions on Modeling and Computer Simulation 20, 6:1-6:41.

  9. Cross-Entropy Method ◮ The cross-entropy method is a heuristic for rare event simulation that finds importance sampling distributions within a parameterized class (see the book by Kroese and Rubinstein, 2004). ◮ A summary follows on one of the next slides. ◮ Proving analytically the efficiency of the resulting estimator is then 'impossible'. ◮ The usual approach is to estimate the efficiency from empirical (simulation) data. Contribution: This paper gives sufficient conditions for the cross-entropy method to be efficient for a certain type of rare event problems in Markov chains.

  10. The Rare Event Problem P(A_n) is an absorption probability in a finite-state discrete-time Markov chain. We allow two versions. A. The rarity parameter n is associated with the problem size, which increases in n. B. The problem size is constant, and the rarity parameter is associated with transition probabilities that decrease in n. For ease of notation, we drop the rarity parameter n.

  11. Notation ◮ Markov chain is {X(t) : t = 0, 1, ...}. ◮ State space X; transition probabilities p(x, y). ◮ The Markov chain starts in a reference state 0, X(0) = 0. ◮ A 'good' set G ⊂ X of absorbing states. ◮ A failure set F ⊂ X of absorbing states. ◮ No other absorbing states. ◮ The time to absorption is T = inf{t > 0 : X(t) ∈ G ∪ F}. ◮ Absorption probabilities γ(x) = P(X(T) ∈ F | X(0) = x). ◮ Rare event A = {X(T) ∈ F} with probability P(A) = γ(0).
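The setting above can be instantiated on a small example. A minimal sketch, assuming an illustrative gambler's-ruin chain (not from the slides): states {0, ..., N}, up-probability P_UP, good set G = {0}, failure set F = {N}, and the chain started from state 1. The crude Monte Carlo estimate of γ(1) is compared with the known closed form:

```python
import random

# Illustrative birth-death chain (gambler's ruin), chosen for this sketch:
# states 0..N, up-probability P_UP, absorbing good set G = {0},
# failure set F = {N}; the chain starts in state 1.
P_UP, N = 0.3, 6

def gamma(x, p=P_UP, n=N):
    """Analytic absorption probability gamma(x) = P(hit n before 0 | X(0) = x)."""
    r = (1 - p) / p
    return (1 - r ** x) / (1 - r ** n)

def crude_mc(samples, seed=0):
    """Crude Monte Carlo estimate of gamma(1): fraction of paths absorbed in F."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x = 1
        while 0 < x < N:
            x += 1 if rng.random() < P_UP else -1
        hits += (x == N)
    return hits / samples

print(crude_mc(200_000), gamma(1))
```

With these parameters γ(1) is already below one percent, so crude simulation needs many replications; that is the motivation for the importance sampling schemes on the following slides.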

  12. Illustration

  13. Importance Sampling Importance sampling simulation implements a change of measure P* to obtain the unbiased importance sampling estimator

      Y = 1{A} dP/dP*,

  where dP/dP* is the likelihood ratio. Feasibility: restrict to changes of measure for which p(x, y) > 0 ⇔ p*(x, y) > 0. Notation: probability measure P (or P*) and the associated matrix of transition probabilities P (or P*) are used for the same purpose whenever convenient.
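A minimal sketch of this estimator on an illustrative gambler's-ruin chain (states 0..N, up-probability P_UP; parameters are assumptions, not from the slides), with the classical exponential twist for this chain as the change of measure P*: swap the up and down probabilities, and accumulate the likelihood ratio along the path.

```python
import random

# Importance sampling Y = 1{A} dP/dP* on an illustrative gambler's-ruin
# chain.  The change of measure P* swaps the up/down probabilities, the
# classical exponential twist for this chain (an assumption of this sketch).
P_UP, N = 0.3, 6
P_STAR = 1 - P_UP  # twisted up-probability under P*

def is_estimate(samples, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        x, lr = 1, 1.0           # lr = running likelihood ratio dP/dP*
        while 0 < x < N:
            if rng.random() < P_STAR:            # step up, sampled under P*
                lr *= P_UP / P_STAR
                x += 1
            else:                                # step down, sampled under P*
                lr *= (1 - P_UP) / (1 - P_STAR)
                x -= 1
        total += lr * (x == N)                   # Y = 1{A} dP/dP*
    return total / samples

print(is_estimate(20_000))
```

Because the feasibility condition holds (both measures put positive probability on the same transitions), the estimator is unbiased for γ(1).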

  14. Zero Variance ◮ The optimal change of measure P_opt = P(·|A) gives Var_opt[Y] = 0. ◮ P_opt is feasible:

      p_opt(x, y) = p(x, y) γ(y) / γ(x),

  ◮ but not implementable, since it requires knowledge of the unknown absorption probabilities.
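On a chain where γ is available in closed form, the zero-variance property can be verified directly. A sketch, assuming an illustrative gambler's-ruin chain (parameters are not from the slides): build p_opt(x, y) = p(x, y) γ(y)/γ(x), simulate under it, and observe that every sample of Y equals γ(1) exactly, up to floating-point rounding.

```python
import random

# Zero-variance change of measure p_opt(x, y) = p(x, y) * gamma(y) / gamma(x)
# on an illustrative gambler's-ruin chain; every importance sampling sample
# equals gamma(1), since the likelihood ratio telescopes to gamma(1)/gamma(N).
P_UP, N = 0.3, 6

def gamma(x):
    r = (1 - P_UP) / P_UP
    return (1 - r ** x) / (1 - r ** N)

def p_opt_up(x):
    """Zero-variance probability of stepping up from state x."""
    return P_UP * gamma(x + 1) / gamma(x)

def zero_variance_samples(samples, seed=2):
    rng = random.Random(seed)
    out = []
    for _ in range(samples):
        x, lr = 1, 1.0
        while 0 < x < N:
            pu = p_opt_up(x)     # note pu + p_opt_down = 1: gamma is harmonic
            if rng.random() < pu:
                lr *= P_UP / pu
                x += 1
            else:
                lr *= (1 - P_UP) / (1 - pu)
                x -= 1
        out.append(lr * (x == N))
    return out
```

Under p_opt the down-step from state 1 has probability γ(0)/γ(1) = 0, so no path is absorbed in G: the conditioning P(·|A) is built into the dynamics.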

  15. Cross-Entropy Minimization Find P* by minimizing the Kullback-Leibler distance (or cross-entropy) within the class of feasible changes of measure:

      inf_{P* ∈ P} D(dP_opt, dP*),

  where the cross-entropy is defined by

      D(dP_opt, dP*) = E_opt[ log (dP_opt/dP*)(X) ] = E[ (dP_opt/dP)(X) log (dP_opt/dP*)(X) ].

  Notation: X is a random sample path of the Markov chain from the reference state 0.

  16. Cross-Entropy Solution The solution, denoted P_min, has (after some algebra)

      p_min(x, y) = E[ 1{A} N(x, y) ] / E[ 1{A} Σ_{z ∈ X} N(x, z) ],

  where N(x, y) is the number of times that transition (x, y) occurs.
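This formula can be estimated directly by counting transitions on paths where A occurs. A sketch on an illustrative gambler's-ruin chain, with parameters chosen so that A is not too rare for crude counting (all parameters are assumptions of this sketch); by the Equivalence Lemma of the next slide, the estimates should match p_opt(x, y) = p(x, y) γ(y)/γ(x):

```python
import random
from collections import Counter

# Estimate the cross-entropy solution
#   p_min(x, x+1) = E[1{A} N(x, x+1)] / E[1{A} (N(x, x+1) + N(x, x-1))]
# by crude counting on an illustrative gambler's-ruin chain, and compare
# with the zero-variance probabilities p_opt (Equivalence Lemma).
P_UP, N = 0.4, 5   # moderately rare event, so crude counting suffices

def gamma(x):
    r = (1 - P_UP) / P_UP
    return (1 - r ** x) / (1 - r ** N)

def estimate_p_min(samples, seed=3):
    rng = random.Random(seed)
    up, down = Counter(), Counter()        # accumulated N(x, x+1), N(x, x-1) on {A}
    for _ in range(samples):
        x, path = 1, []
        while 0 < x < N:
            step_up = rng.random() < P_UP
            path.append((x, step_up))
            x += 1 if step_up else -1
        if x == N:                         # keep counts only when A occurred
            for state, step_up in path:
                (up if step_up else down)[state] += 1
    return {x: up[x] / (up[x] + down[x]) for x in range(1, N)}

p_min = estimate_p_min(100_000)
for x in range(1, N):
    print(x, p_min[x], P_UP * gamma(x + 1) / gamma(x))
```

Note that p_min(1, 2) comes out as exactly 1: a path absorbed in F never takes the transition 1 → 0, matching γ(0) = 0 in p_opt.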

  17. Equivalence Lemma p_min(x, y) = p_opt(x, y) for all x, y ∈ X. Proof. (a) Indirect way: P_opt is a feasible change of measure for the minimization. (b) Direct way: one can show analytically that the expressions for p_min(x, y) and p_opt(x, y) given above are equal.

  18. ZVA and ZVE ZVA: an importance sampling estimator based on approximations γ̂(x) of the absorption probabilities γ(x) appearing in the zero-variance transition probabilities p_opt. ZVE: an importance sampling estimator based on estimates of the expectations E[1{A} N(x, y)] appearing in the zero-variance transition probabilities p_min.

  19. Cross-Entropy Based ZVE Easy:

      inf_{P* ∈ P} D(dP_opt, dP*)  ⇔  sup_{P* ∈ P} E[ 1{X(T) ∈ F} log dP*(X) ],

  where, by a change of measure,

      E[ 1{X(T) ∈ F} log dP*(X) ] = E^(0)[ (dP/dP^(0))(X) 1{X(T) ∈ F} log dP*(X) ].

  Estimate and iterate:

      P^(j+1) = argmax_{P* ∈ P} (1/k) Σ_{i=1}^k (dP/dP^(j))(X^(i)) 1{X^(i)(T) ∈ F} log dP*(X^(i)).

  After a few iterations (convergence?): ZVE P_ce.
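The iteration above can be sketched on an illustrative gambler's-ruin chain, restricting the class P to chains with a single state-independent up-probability rho (this one-parameter class, and all numerical values, are assumptions of the sketch, not the paper's setting). For this class the argmax has a closed form: the new rho is the likelihood-ratio-weighted fraction of up-steps on paths that hit F.

```python
import random

# Cross-entropy iteration on an illustrative gambler's-ruin chain.  The
# class P of changes of measure is restricted to a single up-probability
# rho; maximizing the weighted log-likelihood over rho gives the closed
# form: new rho = (weighted up-steps) / (weighted total steps) on {A}.
P_UP, N = 0.3, 6

def ce_iteration(rho, samples, rng):
    w_up = w_total = 0.0
    for _ in range(samples):
        x, lr, n_up, n_down = 1, 1.0, 0, 0
        while 0 < x < N:
            if rng.random() < rho:               # sample under current P^(j)
                lr *= P_UP / rho                 # accumulate dP/dP^(j)
                x, n_up = x + 1, n_up + 1
            else:
                lr *= (1 - P_UP) / (1 - rho)
                x, n_down = x - 1, n_down + 1
        if x == N:                               # weight only paths hitting F
            w_up += lr * n_up
            w_total += lr * (n_up + n_down)
    return w_up / w_total

rng = random.Random(4)
rho = P_UP                                       # start from the original measure
for j in range(5):
    rho = ce_iteration(rho, 10_000, rng)
    print(j, rho)
```

The iteration tilts rho from the original 0.3 toward an upward drift, mimicking how the CE method drives the sampling measure toward the conditional law P(·|A).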

  20. Asymptotic Optimality Notation: P_n^ce and P_n^opt, for explicitly indicating that the change of measure depends also on the rarity parameter. Theorem: Assume D(P_n^opt, P_n^ce) = o(log P(A_n)) as n → ∞; then the associated importance sampling estimator is asymptotically efficient.

  21. Proof

      D(P_n^opt, P_n^ce) = E_opt[ log (dP_n^opt/dP_n^ce)(X) ] ≥ 0.

      E_ce[Y_n^2] = E_ce[ ( (dP/dP_n^ce)(X) 1{A_n} )^2 ]
                  = E_ce[ ( (dP/dP_n^opt)(X) 1{A_n} )^2 ( (dP_n^opt/dP_n^ce)(X) )^2 ]
                  = P(A_n)^2 E_ce[ ( (dP_n^opt/dP_n^ce)(X) )^2 ]
                  = P(A_n)^2 E_opt[ (dP_n^opt/dP_n^ce)(X) ].

  So we can conclude

      log E_ce[Y_n^2] / log P(A_n)
          = ( log P(A_n)^2 + log E_opt[ (dP_n^opt/dP_n^ce)(X) ] ) / log P(A_n)
          = 2 + log E_opt[ (dP_n^opt/dP_n^ce)(X) ] / log P(A_n),

  with

      lim_{n→∞} log E_opt[ (dP_n^opt/dP_n^ce)(X) ] / log P(A_n)
          = lim_{n→∞} E_opt[ log (dP_n^opt/dP_n^ce)(X) ] / log P(A_n) = 0.

  22. A Simple Example: M/M/1 {X(t) : t = 0, 1, ...} on {0, 1, ...} is the discrete-time Markov chain obtained by embedding at the jump times of the M/M/1 queue. The rare event is hitting state n before returning to the zero state: γ(0) = P((X(t)) reaches n before 0 | X(1) = 1). Plotted quantities: D(P_n^opt, P_n^ce) / log P(A_n) and log E_ce[Y_n^2] / log P(A_n).
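The second plotted quantity can be checked numerically on a truncated version of this chain. A sketch, assuming an up-probability of 0.3 and using the swapped-probability twist as a stand-in for the converged CE change of measure (both assumptions of this sketch): the ratio log E_ce[Y_n^2]/log P(A_n) should climb toward 2 as n grows.

```python
import math
import random

# Empirical check of the asymptotic-optimality ratio
#   log E[Y_n^2] / log P(A_n)  ->  2
# on the embedded M/M/1 chain (gambler's ruin, up-probability P_UP),
# with the swapped-probability twist standing in for the CE measure.
P_UP = 0.3
P_STAR = 1 - P_UP

def gamma1(n):
    """Analytic P(A_n): hit n before 0 starting from state 1."""
    r = (1 - P_UP) / P_UP
    return (1 - r) / (1 - r ** n)

def ratio(n, samples, seed=5):
    rng = random.Random(seed)
    second_moment = 0.0
    for _ in range(samples):
        x, lr = 1, 1.0
        while 0 < x < n:
            if rng.random() < P_STAR:
                lr *= P_UP / P_STAR
                x += 1
            else:
                lr *= (1 - P_UP) / (1 - P_STAR)
                x -= 1
        if x == n:
            second_moment += lr * lr     # contributes to E[Y_n^2]
    second_moment /= samples
    return math.log(second_moment) / math.log(gamma1(n))

for n in (6, 12):
    print(n, ratio(n, 20_000))
```

The ratio increases with n, consistent with logarithmic efficiency of the twisted measure on this chain.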

  23. Bounded Relative Error From a probabilistic point of view, the cross-entropy method is a randomized algorithm that delivers (unbiased) estimators P̂_n(x, y) of the zero-variance transition probabilities p_n^opt(x, y). Theorem: Assume that for any α < 1 there is K > 0 such that

      limsup_{n→∞} P( max_{(x,y) ∈ X×X : p(x,y) > 0} p_n^opt(x, y) / P̂_n(x, y) ≤ K ) ≥ α;

  then the associated importance sampling estimator Y_n is strongly efficient (with probability α). (Notice that the expectation of Y_n given the estimators P̂_n(x, y) is a rv.)
