Importance Sampling Methodology for Multidimensional Heavy-tailed Random Walks

Jose Blanchet (joint work with Jingchen Liu)
Columbia IEOR Department
RESIM

Blanchet (Columbia) IS for Heavy-tailed Walks 09/08 1 / 43
Agenda

- Introduction
- The One Dimensional Case
- Multidimensional Case
- Markov Random Walks
- Conclusions
Introduction

What is the general goal of this talk? Discuss simulation methodology for estimating small expectations associated with rare events in multidimensional heavy-tailed settings.

What is the objective of the talk?
1. Illustrate our methodology for multidimensional regularly varying random walks.
2. Discuss optimality properties of our proposed simulation procedure.
Initial Notation

Let $X_1, X_2, \ldots$ be iid regularly varying in $\mathbb{R}^d$ (to be discussed). $EX_i = \eta \in \mathbb{R}^d$ and $A$ is an appropriate subset of $\mathbb{R}^d$. Given $b > 0$ we write $bA = \{ba : a \in A\}$.

$S_n = X_1 + \cdots + X_n$ ($S_0 = s$), and $T_{bA} = \inf\{n \geq 0 : S_n \in bA\}$.

Object of interest:
$$u_b(s, f) = E\big(f(S_0, S_1, \ldots, S_{T_{bA}}, T_{bA})\, I(T_{bA} < \infty)\big),$$
for any $f(\cdot)$ such that $0 < \delta_f \leq f \leq \delta_f^{-1}$ and $u_b(s, 1) \to 0$ as $b \nearrow \infty$ (TBD).
Efficiency and Performance Guarantee

Given any $f$ for which there is $\delta_f \in (0, \infty)$ so that $\delta_f \leq f \leq \delta_f^{-1}$, estimate
$$u_b(s, f) = E\big(f(S_0, S_1, \ldots, S_{T_{bA}}, T_{bA})\, I(T_{bA} < \infty)\big)$$
with good relative precision. The relative error of an estimator $Z$ for $u_b(s, f)$ is
$$\text{Rel.Error} = \frac{(\operatorname{Var}(Z))^{1/2}}{u_b(s, f)} + \frac{|EZ - u_b(s, f)|}{u_b(s, f)}.$$
Suppose $EZ = u_b(s, f)$ and that $\operatorname{Var}(Z) = O(u_b(s, f)^2)$; then we say that $Z$ is strongly efficient.
If $Z$ is unbiased, then sampling $n$ iid replications of $Z$ gives an estimator $\hat{u}_b(n) = n^{-1} \sum_{j=1}^n Z_j$ such that
$$P\big(|\hat{u}_b(n) - u_b(s, f)| \geq \varepsilon\, u_b(s, f)\big) \leq \frac{\operatorname{Var}(Z)}{n \varepsilon^2 u_b(s, f)^2}.$$
So, it takes $O\big(\varepsilon^{-2} \delta^{-1} \operatorname{Var}(Z) / u_b(s, f)^2\big)$ replications to achieve $\varepsilon$-relative error with $(1 - \delta) \times 100\%$ confidence.

Any procedure for estimating $u_b(s, f)$ (for arbitrary $f$) must consider $(S_k : 0 \leq k \leq T_{bA})$ (this requires on average $O(E_s(T_{bA} \mid T_{bA} < \infty))$ operations).
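The Chebyshev bound above can be turned into a concrete replication count. The helper below is an illustrative sketch (the function name and the sample numbers are assumptions, not from the talk); it contrasts a strongly efficient estimator, whose relative variance stays $O(1)$, with crude Monte Carlo for a rare event of probability $u$, where the relative variance is of order $1/u$.

```python
import math

# Chebyshev bound: P(|u_hat(n) - u| >= eps*u) <= Var(Z) / (n * eps^2 * u^2).
# Requiring the right-hand side to be at most delta gives
# n >= rel_var / (eps^2 * delta), where rel_var = Var(Z) / u^2.

def replications_needed(rel_var, eps, delta):
    return math.ceil(rel_var / (eps ** 2 * delta))

# Strongly efficient estimator: rel_var = O(1), so the replication count
# does not depend on how rare the event is.
n_eff = replications_needed(1.0, eps=0.05, delta=0.05)           # 8000

# Crude Monte Carlo for an indicator of an event of probability u = 1e-6:
# rel_var = (u - u^2)/u^2 ~ 1/u, and the count explodes.
n_crude = replications_needed(1.0 / 1e-6, eps=0.05, delta=0.05)  # 8000000000
```

This is precisely why the talk measures estimators by their relative (not absolute) variance.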
In summary, the best performance that one can expect for $\varepsilon$-relative precision and $(1 - \delta) \times 100\%$ confidence, based on iid replications of a given estimator, involves
$$O\big(\varepsilon^{-2} \delta^{-1} E_s(T_b \mid T_b < \infty)\big)$$
operations. An algorithm that achieves such performance is said to be optimal.
The One Dimensional Case

There is a rich large deviations theory for heavy-tailed random walks based on subexponential rv's:
$$P(X_1 + X_2 > b) = 2 P(X_1 > b)(1 + o(1)) \quad \text{as } b \to \infty.$$

An important class of subexponential distributions is the class of regularly varying distributions (basically power-law type):
$$P(X_1 > t) = t^{-\alpha} L(t)$$
for $\alpha > 1$ and $L(t\beta)/L(t) \to 1$ as $t \nearrow \infty$ for each $\beta > 0$.
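The subexponential relation can be sanity-checked numerically. The sketch below uses an illustrative Pareto distribution with $\alpha = 2$ (the distribution, the threshold $b$, and the sample size are assumptions for the demo, not from the talk): it estimates $P(X_1 + X_2 > b)$ by plain Monte Carlo and divides by the exact value of $2 P(X_1 > b)$.

```python
import random

# Check P(X1 + X2 > b) ~ 2 P(X1 > b) for a regularly varying example:
# Pareto with P(X > x) = (1 + x)^(-2), i.e. alpha = 2.

def pareto(rng):
    # inverse-transform sampling: P(X > x) = (1 + x)^(-2)
    return rng.random() ** (-0.5) - 1.0

def ratio(b=50.0, n=400000, seed=3):
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if pareto(rng) + pareto(rng) > b)
    exact_tail = (1.0 + b) ** (-2.0)      # P(X1 > b), known in closed form
    return (hits / n) / (2.0 * exact_tail)

r = ratio()   # approaches 1 as b grows; noisy at moderate b
```

The ratio hovering near 1 reflects the "one big jump" mechanism: the sum is large essentially only when one of the two summands is large.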
The One Dimensional Case

Let $EX_i = \eta < 0$ and set $A = [1, \infty)$ (we write $T_b = T_{bA}$); estimate the ruin probability (Pakes, Veraverbeke, Cohen... see the text of Asmussen '03)
$$u_b(s, 1) = u_b(s) = P_s(T_b < \infty).$$

Strategy: use importance sampling. Consider the Markov kernel $K(\cdot)$:
$$K(s_0, s_1 + ds_1) = r^{-1}(s_0, s_1)\, P(s_0 + X_1 \in s_1 + ds_1), \qquad K(s_0, s_1)\, ds_1 = r^{-1}(s_0, s_1)\, f_{X_1}(s_1 - s_0)\, ds_1 \tag{1}$$
for a positive function $r(\cdot)$. ((1) is valid in the presence of densities.)

The importance sampling estimator is
$$Z = \prod_{j=0}^{T_b - 1} r(S_j, S_{j+1})\, I(T_b < \infty),$$
where the $S_n$'s are simulated under $K(\cdot)$.
The One Dimensional Case

Classical result: The conditional distribution of the random walk, given that $T_b < \infty$, gives an exact (zero-variance) estimator for $u_b(s)$.

Moral of the story: Select an importance sampler that mimics the behavior of this conditional distribution.
The One Dimensional Case

Theorem (Asmussen and Klüppelberg): Conditional on $T_b < \infty$, we have that
$$\left(\frac{S_{\lfloor u T_b \rfloor}}{T_b}, \frac{S_{T_b} - b}{b}, \frac{T_b}{b}\right) \Rightarrow (\eta u, Z_1, Z_2)$$
on $D(0, 1) \times \mathbb{R} \times \mathbb{R}$ as $b \nearrow \infty$, where $Z_1$ and $Z_2$ are Pareto with index $\alpha - 1$.

Interpretation: Prior to ruin, the random walk has drift $\eta$ and a large jump of size $b$ occurs suddenly in $O(b)$ time...

So, given that a jump hasn't occurred by time $k$, then $S_k \approx \eta k$, and the chance of reaching $b$ in the next increment, given that we eventually reach $b$ ($T_b < \infty$), is
$$\frac{P(X > b - \eta k)}{(-\eta)^{-1} \int_0^\infty P(X > b - \eta u)\, du} = O\!\left(\frac{P(X > b - \eta k)}{\int_b^\infty P(X > s)\, ds}\right).$$
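The one-big-jump picture can be observed directly by crude Monte Carlo with a small barrier, where ruin is not yet rare. The shifted-Pareto increment model and all constants below are illustrative assumptions; among ruined paths, the sketch records how often a single increment exceeds $b/2$.

```python
import random

# Crude Monte Carlo illustration of the "one big jump" behaviour.
# Assumed model: X = Z - 2 with P(Z > z) = (1 + z)^(-2), so EX = -1 < 0.
# The barrier b is kept small so ruin is observable by plain simulation;
# paths are truncated at max_steps (small bias, since the walk drifts down).

def simulate(b=10.0, n=5000, max_steps=500, seed=7):
    rng = random.Random(seed)
    ruined = big_jump = 0
    for _ in range(n):
        pos, biggest = 0.0, float("-inf")
        for _ in range(max_steps):
            x = rng.random() ** (-0.5) - 3.0   # shifted Pareto increment
            biggest = max(biggest, x)
            pos += x
            if pos >= b:                       # ruin: record largest increment
                ruined += 1
                if biggest > b / 2:
                    big_jump += 1
                break
    return ruined / n, (big_jump / ruined if ruined else 0.0)

ruin_freq, frac_big = simulate()
```

On most ruined paths a single increment of order $b$ does the crossing, which is exactly the behavior the theorem formalizes.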
The One Dimensional Case

Family of changes-of-measure: Here $s$ is the current position of the walk and $f_X$ is the density (NOTE: $p(s), a \in (0, 1)$):
$$f_{X \mid s}(x \mid s) = p(s)\, \frac{f_X(x)\, I(x > a(b - s))}{P(X > a(b - s))} + (1 - p(s))\, \frac{f_X(x)\, I(x \leq a(b - s))}{P(X \leq a(b - s))}.$$

In other words, with $s_0 = s$ and $s_1 = s_0 + x$,
$$r(s_0, s_1)^{-1} = p(s_0)\, \frac{I(s_1 - s_0 > a(b - s_0))}{P(X > a(b - s_0))} + (1 - p(s_0))\, \frac{I(s_1 - s_0 \leq a(b - s_0))}{P(X \leq a(b - s_0))}.$$
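A minimal sketch of the resulting importance sampling estimator for $u_b(s) = P_s(T_b < \infty)$: with probability $p(s)$ the next increment is forced to be a "big jump" (conditioned on $X > a(b-s)$), otherwise it is conditioned to be small, and the estimator accumulates the weights $r(s_0, s_1)$ from the display above. The increment distribution (a shifted Pareto with $\alpha = 2$), the simple mixing rule $p(s) = \min(1/2,\ \theta\, P(X > a(b-s)))$, the constants $a$, $\theta$, and the cutoff that truncates paths far below $b$ are all illustrative assumptions; the talk selects $p(s)$ via Lyapunov inequalities rather than this ad hoc rule.

```python
import random

# Assumed model: X = Z - SHIFT with P(Z > z) = (1 + z)^(-ALPHA); EX = -1 < 0.
ALPHA, SHIFT = 2.0, 2.0

def tail(t):
    """P(X > t) for t >= -SHIFT, in closed form."""
    return (1.0 + t + SHIFT) ** (-ALPHA)

def sample_big(t, rng):
    """Draw X conditioned on X > t (inverse transform on the Pareto tail)."""
    return (1.0 + t + SHIFT) * rng.random() ** (-1.0 / ALPHA) - 1.0 - SHIFT

def sample_small(t, rng):
    """Draw X conditioned on X <= t."""
    u = rng.random() * (1.0 - tail(t))        # uniform on (0, P(X <= t))
    return (1.0 - u) ** (-1.0 / ALPHA) - 1.0 - SHIFT

def is_ruin_estimate(b, s=0.0, a=0.5, theta=10.0, n=4000, seed=11):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        pos, w = s, 1.0
        while pos < b:
            if pos < b - 1000.0:              # far below b: remaining ruin
                w = 0.0                       # probability is negligible
                break                         # (small truncation bias)
            t = a * (b - pos)
            p = min(0.5, theta * tail(t))     # illustrative mixing rule p(s)
            if rng.random() < p:
                x = sample_big(t, rng)
                w *= tail(t) / p              # weight r(s0, s1), big branch
            else:
                x = sample_small(t, rng)
                w *= (1.0 - tail(t)) / (1.0 - p)   # small branch
            pos += x
        total += w                            # w = 0 unless T_b was reached
    return total / n

est = is_ruin_estimate(b=30.0)
# For comparison: the Pakes-Veraverbeke approximation for this model gives
# u_b(0) ~ |eta|^(-1) * integral_b^inf P(X > s) ds = 1/(3 + b).
```

Taking $p(s)$ proportional to the tail $P(X > a(b-s))$ keeps the small-branch weights from inflating along long pre-jump stretches, which is the qualitative role the Lyapunov analysis makes precise.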
The One Dimensional Case

Lyapunov Inequalities for Variance Control:

Lemma (B. & Glynn '07). Suppose that there is a positive function $g(\cdot)$ such that
$$E_s^K\!\left(\frac{g(S_1)\, r(s, S_1)^2}{g(s)}\right) = E_s\!\left(\frac{g(S_1)\, r(s, S_1)}{g(s)}\right) \leq 1$$
for all $s \leq b$, and $g(s) \geq 1$ for $s > b$. Then,
$$E_s^K Z^2 = E_s^K\!\left(\prod_{j=0}^{T_b - 1} r(S_j, S_{j+1})^2\, I(T_b < \infty)\right) \leq g(s).$$