Precise large deviation probabilities for a heavy-tailed random walk


  1. Precise large deviation probabilities for a heavy-tailed random walk. Thomas Mikosch, University of Copenhagen, www.math.ku.dk/~mikosch. Conference in Honor of Søren Asmussen, Sandbjerg, August 1-5, 2011.

  2. (no text on this slide)

  3. (no text on this slide)

  4. Large deviations for a heavy-tailed iid sequence
  • We define heavy tails by regular variation of the tails.
  • Assume that $(X_t)$ is iid regularly varying, i.e. there exist $\alpha > 0$, constants $p, q \ge 0$ with $p + q = 1$ and a slowly varying function $L$ such that
    $$P(X > x) \sim p\,\frac{L(x)}{x^{\alpha}} \quad\text{and}\quad P(X \le -x) \sim q\,\frac{L(x)}{x^{\alpha}}, \quad x \to \infty.$$
  • Define the partial sums $S_n = X_1 + \cdots + X_n$, $n \ge 1$, and assume $EX = 0$ if $E|X|$ is finite.
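A minimal simulation sketch (not from the slides): a two-sided Pareto-type law is one concrete example of a regularly varying distribution with tail balance $p$ and $q = 1 - p$, and the partial sums $S_n$ can be generated directly from it. All parameter values below are illustrative assumptions.

```python
import numpy as np

def rvs_regvar(n, alpha, p, rng):
    """n iid draws with P(X > x) = p*x**(-alpha) and P(X <= -x) = (1-p)*x**(-alpha) for x >= 1."""
    signs = np.where(rng.random(n) < p, 1.0, -1.0)
    magnitudes = rng.pareto(alpha, n) + 1.0          # classical Pareto(alpha) on [1, inf)
    return signs * magnitudes

rng = np.random.default_rng(42)
alpha, p = 1.5, 0.7
X = rvs_regvar(100_000, alpha, p, rng)
X = X - (2 * p - 1) * alpha / (alpha - 1)            # center so that EX = 0 (valid since alpha > 1)
S = np.cumsum(X)                                      # partial sums S_1, ..., S_n
```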

  5.
  • Large deviations refer to sequences of rare events $\{b_n^{-1} S_n \in A\}$, i.e. $P(b_n^{-1} S_n \in A) \to 0$ as $n \to \infty$.
  • For example, if $EX = 0$ and $A$ is bounded away from zero, then $P(n^{-1} S_n \in A) \to 0$ as $n \to \infty$, e.g. $P(|S_n| > \delta n) \to 0$.
  • Then the following relation holds for $\alpha > 0$ and suitable sequences $b_n \uparrow \infty$: [2]
    $$\lim_{n \to \infty}\, \sup_{x \ge b_n} \left| \frac{P(S_n > x)}{n\,P(|X| > x)} - p \right| = 0.$$
  • For fixed $n$ and $x \to \infty$, the result is a trivial consequence of regular variation (subexponentiality); e.g. Feller (1971).
  [2] A.V. Nagaev (1969), S.V. Nagaev (1979), Cline and Hsing (1998), Heyde (1967)
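The uniform relation above can be checked by simulation. Below is a rough Monte Carlo sketch (not part of the talk) for a one-sided Pareto example, i.e. $p = 1$, $q = 0$; the values of alpha, n and the thresholds are illustrative assumptions, and the agreement is only approximate at these sample sizes.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n, trials = 3.0, 100, 100_000
mu = alpha / (alpha - 1.0)                            # mean of Pareto(alpha) on [1, inf)

X = (rng.pareto(alpha, size=(trials, n)) + 1.0) - mu  # centered one-sided steps, EX = 0
S_n = X.sum(axis=1)

def tail(x):
    """Exact P(X > x) for the centered Pareto step (x > 1 - mu)."""
    return (x + mu) ** (-alpha)

b_n = np.sqrt(1.5 * n * np.log(n))                    # b_n = sqrt(a*n*log n) with a = 1.5 > alpha - 2
for x in (b_n, 2 * b_n, 4 * b_n):
    lhs = (S_n > x).mean()                            # Monte Carlo estimate of P(S_n > x)
    rhs = n * tail(x)                                 # Nagaev approximation n*P(|X| > x), here p = 1
    print(f"x = {x:7.1f}   P(S_n > x) ~ {lhs:.2e}   n*P(X > x) = {rhs:.2e}")
```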

  6.
  • If $p > 0$, the result can be written in the form
    $$\lim_{n \to \infty}\, \sup_{x \ge b_n} \left| \frac{P(S_n > x)}{P(M_n > x)} - 1 \right| = 0,$$
    where $M_n = \max(X_1, \ldots, X_n)$.
  • If $\alpha > 2$ one can choose $b_n = \sqrt{a\,n \log n}$, where $a > \alpha - 2$, and for $\alpha \in (0, 2]$, $b_n = n^{1/\alpha + \delta}$ for any $\delta > 0$.
  • In particular, one can always choose $b_n = \delta n$, $\delta > 0$, provided $E|X| < \infty$.
  • For $\alpha > 2$ and $\sqrt{n} \le x \le \sqrt{a\,n \log n}$, $a < \alpha - 2$, the probability $P(S_n - ES_n > x)$ is approximated by the tail of a normal distribution.
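In the same spirit, a small sketch (again with illustrative parameters, not from the talk) compares the sum tail with the tail of the maximum $M_n$; the ratio should be close to 1 for $x \ge b_n$.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, n, trials = 3.0, 100, 100_000
mu = alpha / (alpha - 1.0)

X = (rng.pareto(alpha, size=(trials, n)) + 1.0) - mu  # centered Pareto(alpha) steps, p = 1
S_n = X.sum(axis=1)

F = lambda x: 1.0 - (x + mu) ** (-alpha)              # cdf of one step (for x > 1 - mu)
b_n = np.sqrt(1.5 * n * np.log(n))                    # a = 1.5 > alpha - 2, as on the slide
for x in (b_n, 2 * b_n):
    p_sum = (S_n > x).mean()                          # P(S_n > x), Monte Carlo
    p_max = 1.0 - F(x) ** n                           # P(M_n > x), exact
    print(f"x = {x:6.1f}   P(S_n > x)/P(M_n > x) ~ {p_sum / p_max:.2f}")
```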

  7.
  • A functional (Donsker) version for multivariate regularly varying summands holds; Hult, Lindskog, M., Samorodnitsky (2005).
  • Then, for example, $P(\max_{i \le n} S_i > b_n) \sim c_{\max}\, n\, P(|X| > b_n)$, provided $b_n^{-1} S_n \stackrel{P}{\to} 0$.

  8.
  • The iid heavy-tail large deviation heuristics: large values of the random walk occur in the most natural way, due to a single large step.
  • This means: in the presence of heavy tails it is very unlikely that two steps $X_i$ and $X_j$ of the sum $S_n$ are large.
  • These results are in stark contrast with large deviation probabilities when $X$ has exponential moments (Cramér-type large deviations). Then $P(|S_n - ES_n| > \varepsilon n)$ decays exponentially fast to zero. [3]
  [3] Cramér-type large deviations are usually more difficult to prove than heavy-tailed large deviations.
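The single-big-jump heuristic can be made visible in a short experiment (illustrative parameters, not from the talk): conditionally on the rare event $\{S_n > x\}$, the largest step typically carries almost the whole sum, whereas unconditionally it does not. Positive, uncentered Pareto steps are used to keep the illustration simple.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, n, trials = 1.5, 50, 200_000

Y = rng.pareto(alpha, size=(trials, n)) + 1.0         # heavy-tailed Pareto(alpha) steps
S = Y.sum(axis=1)
M = Y.max(axis=1)

x = np.quantile(S, 0.999)                             # a high threshold for S_n
big = S > x
print(f"threshold x = {x:.1f}, exceedances: {big.sum()}")
print(f"median share of the largest step given S_n > x: {np.median(M[big] / S[big]):.2f}")
print(f"unconditional median share of the largest step: {np.median(M / S):.2f}")
```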

  9. (no text on this slide)

  10. Ruin probabilities for an iid sequence
  • Assume the conditions of Nagaev's theorem: $(X_t)$ iid regularly varying with index $\alpha > 1$ and $EX = 0$.
  • For fixed $\mu > 0$ and any $u > 0$, consider the ruin probability
    $$\psi(u) = P\Big(\sup_{n \ge 1}\,(S_n - \mu n) > u\Big).$$
  • It is in general impossible to calculate $\psi(u)$ exactly, and therefore most results on ruin study the asymptotic behavior of $\psi(u)$ as $u \to \infty$.
  • If the sequence $(X_t)$ is iid, it is well known [4] that
    $$\psi(u) \sim \frac{u\,P(X > u)}{\mu(\alpha - 1)} \sim \frac{1}{\mu}\int_u^{\infty} P(X > x)\,dx, \quad u \to \infty.$$
  [4] Embrechts and Veraverbeke (1982), also for subexponentials.
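A crude finite-horizon Monte Carlo check of the Embrechts-Veraverbeke asymptotics (illustrative parameters, not from the talk); the horizon truncation and the moderate value of u make the comparison only approximate.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, mu = 2.5, 1.0                                  # tail index alpha > 1, drift mu > 0
n_max, trials, block = 2_000, 50_000, 5_000           # finite-horizon truncation of sup over n >= 1
mean_step = alpha / (alpha - 1.0)                     # mean of Pareto(alpha) on [1, inf)
u = 30.0

hits = 0
for _ in range(trials // block):
    Y = rng.pareto(alpha, size=(block, n_max)) + 1.0  # Pareto(alpha) claims
    walk = np.cumsum(Y - mean_step - mu, axis=1)      # S_n - mu*n with EX = 0
    hits += (walk.max(axis=1) > u).sum()

psi_mc = hits / trials
psi_asym = u * (u + mean_step) ** (-alpha) / (mu * (alpha - 1.0))   # u*P(X > u)/(mu*(alpha - 1))
print(f"Monte Carlo psi(u) ~ {psi_mc:.2e}")
print(f"asymptotic formula ~ {psi_asym:.2e}")
```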

  11.
  • There is a direct relation between large deviations and ruin:
    $$u\,P(X > u)\,(1 + \mu)^{-\alpha} \sim P\big(S_{[u]} > [u](1 + \mu)\big) \le P\Big(\sup_{n \ge 1}\,(S_n - \mu n) > u\Big)$$
    $$\approx P\Big(\sup_{M^{-1}u \le n \le Mu}\,(S_n - \mu n) > u\Big) \approx P(S_{[u]} > u) \sim u\,P(X > u).$$
  • Lundberg (1905) and Cramér (1930s) proved that $\psi(u)$ decays exponentially fast if $X$ has exponential moments.

  12. Examples of regularly varying stationary sequences
  Linear processes.
  • Examples of linear processes are ARMA processes with iid noise $(Z_t)$, e.g. the AR($p$) and MA($q$) processes
    $$X_t = Z_t + \varphi_1 X_{t-1} + \cdots + \varphi_p X_{t-p}, \qquad X_t = Z_t + \theta_1 Z_{t-1} + \cdots + \theta_q Z_{t-q}.$$
  • Linear processes constitute the class of time series which have been applied most frequently in practice.
  • Linear processes are regularly varying with index $\alpha$ if the iid noise $(Z_t)$ is regularly varying with index $\alpha$.

  13.
  • Linear processes
    $$X_t = \sum_j \psi_j Z_{t-j}, \quad t \in \mathbb{Z},$$
    with iid regularly varying noise $(Z_t)$ with index $\alpha > 0$ and $EZ = 0$ if $E|Z|$ is finite: [5]
    $$\frac{P(X > x)}{P(|Z| > x)} \sim \sum_j |\psi_j|^{\alpha}\,\big(p\,I_{\psi_j > 0} + q\,I_{\psi_j < 0}\big) = \|\psi\|_{\alpha}^{\alpha}, \quad x \to \infty.$$
  • Regular variation of $X$ is in general not sufficient for regular variation of $Z$; Jacobsen, M., Samorodnitsky, Rosiński (2009, 2011).
  [5] Davis, Resnick (1985); M., Samorodnitsky (2000) under conditions which are close to those in the 3-series theorem.
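A quick numerical check of the tail equivalence above for a short moving average with one-sided Pareto noise (so $p = 1$, $q = 0$); the coefficients, index and threshold are illustrative assumptions, and the pre-asymptotic ratio is only roughly equal to the limit constant.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, p = 1.8, 1.0                                   # one-sided noise: P(Z > x) ~ x**(-alpha)
psi = [1.0, 0.5, -0.25]                               # MA(2) coefficients
N = 2_000_000

Z = (rng.pareto(alpha, N) + 1.0) - alpha / (alpha - 1.0)   # centered Pareto noise, EZ = 0
X = psi[0] * Z[2:] + psi[1] * Z[1:-1] + psi[2] * Z[:-2]    # X_t = sum_j psi_j Z_{t-j}

x = 100.0                                             # a high threshold
ratio = (X > x).mean() / (np.abs(Z) > x).mean()
const = sum(abs(c) ** alpha * (p if c > 0 else 1 - p) for c in psi)
print(f"empirical P(X > x)/P(|Z| > x) ~ {ratio:.2f}")
print(f"limit constant ||psi||_alpha^alpha = {const:.2f}")
```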

  14. Solutions to stochastic recurrence equations
  • For an iid sequence $((A_t, B_t))_{t \in \mathbb{Z}}$, $A > 0$, the stochastic recurrence equation
    $$X_t = A_t X_{t-1} + B_t, \quad t \in \mathbb{Z},$$
    has a unique stationary solution
    $$X_t = B_t + \sum_{i = -\infty}^{t-1} A_t \cdots A_{i+1} B_i, \quad t \in \mathbb{Z},$$
    provided $E\log A < 0$ and $E|\log|B|| < \infty$.
  • The sequence $(X_t)$ is regularly varying with index $\alpha$, the unique positive solution of $EA^{\kappa} = 1$ (given this solution exists); Kesten (1973), Goldie (1991); and
    $$P(X > x) \sim c_{\infty}^{+}\, x^{-\alpha}, \quad P(X \le -x) \sim c_{\infty}^{-}\, x^{-\alpha}, \quad x \to \infty.$$
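A sketch for a concrete stochastic recurrence equation (an illustrative lognormal choice of $A_t$, not from the talk): the Kesten index solving $EA^{\kappa} = 1$ is explicit here, and a Hill estimate from a simulated path should be of the same order.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative choice: A_t = exp(mu_A + sigma*N(0,1)) with mu_A < 0, B_t = N(0,1).
# Then E A^kappa = exp(kappa*mu_A + kappa^2*sigma^2/2), so E A^kappa = 1 at kappa = -2*mu_A/sigma^2.
mu_A, sigma = -0.25, 0.5
kappa = -2.0 * mu_A / sigma ** 2
print(f"Kesten tail index alpha = {kappa:.2f}")

# Simulate the stationary solution of X_t = A_t X_{t-1} + B_t and estimate its tail index.
T, burn = 1_000_000, 1_000
A = np.exp(mu_A + sigma * rng.standard_normal(T + burn))
B = rng.standard_normal(T + burn)
X = np.empty(T + burn)
X[0] = 0.0
for t in range(1, T + burn):
    X[t] = A[t] * X[t - 1] + B[t]
X = X[burn:]

pos = np.sort(X[X > 0])[::-1]                         # upper order statistics of the positive part
k = 2_000
hill = k / np.sum(np.log(pos[:k] / pos[k]))           # Hill estimator of the tail index
print(f"Hill estimate of alpha ~ {hill:.2f}")
```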

  15.
  • The GARCH(1,1) process [6] satisfies a stochastic recurrence equation: for an iid standard normal sequence $(Z_t)$,
    $$\sigma_t^2 = \alpha_0 + (\alpha_1 Z_{t-1}^2 + \beta_1)\,\sigma_{t-1}^2.$$
    The process $X_t = \sigma_t Z_t$ is regularly varying with index $\alpha$ satisfying $E(\alpha_1 Z^2 + \beta_1)^{\alpha/2} = 1$.
  [6] Bollerslev (1986)
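The tail index of the GARCH(1,1) process can be obtained numerically from the moment equation $E(\alpha_1 Z^2 + \beta_1)^{\alpha/2} = 1$. Below is a small sketch with illustrative parameter values (not from the talk), combining Monte Carlo for the expectation with bisection for the root.

```python
import numpy as np

rng = np.random.default_rng(6)
alpha1, beta1 = 0.10, 0.85                            # illustrative GARCH(1,1) parameters
Z2 = rng.standard_normal(2_000_000) ** 2              # squared standard normal innovations

def h(a):
    """Monte Carlo estimate of E(alpha1*Z^2 + beta1)^(a/2) - 1."""
    return np.mean((alpha1 * Z2 + beta1) ** (a / 2.0)) - 1.0

# h(a) is negative for small a > 0 (because E log(alpha1*Z^2 + beta1) < 0) and eventually
# positive, with a single sign change at the tail index alpha, so bisection applies.
lo, hi = 0.1, 50.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if h(mid) < 0 else (lo, mid)
print(f"tail index alpha of X_t = sigma_t*Z_t ~ {0.5 * (lo + hi):.1f}")
```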

  16. (no text on this slide)

  17. Large deviations for a regularly varying linear process
  • Assume $(Z_t)$ iid, regularly varying with index $\alpha > 1$ and $EZ = 0$, hence $EX = 0$.
  • Consider the linear process
    $$X_t = \sum_j \psi_j Z_{t-j}, \quad t \in \mathbb{Z}.$$
  • Let $m_{\psi} = \sum_j \psi_j$ and $\|\psi\|_{\alpha}^{\alpha} = \sum_j |\psi_j|^{\alpha}\,\big(p\,I_{\psi_j > 0} + q\,I_{\psi_j < 0}\big)$.
  • Then, M., Samorodnitsky (2000),
    $$\lim_{n \to \infty}\, \sup_{x \ge b_n} \left| \frac{P(S_n > x)}{n\,P(|X| > x)} - \frac{p\,(m_{\psi})_{+}^{\alpha} + q\,(m_{\psi})_{-}^{\alpha}}{\|\psi\|_{\alpha}^{\alpha}} \right| = 0.$$
  • The threshold $b_n$ is chosen as in the iid case.
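The limit constant in the display above is easy to evaluate for a given coefficient sequence; a small helper (illustrative coefficient choices, not from the talk) shows how dependence through the $\psi_j$ moves the constant away from the iid value 1.

```python
import numpy as np

def ld_constant(psi, alpha, p):
    """(p*(m_psi)_+^alpha + q*(m_psi)_-^alpha) / ||psi||_alpha^alpha for coefficients psi."""
    psi = np.asarray(psi, dtype=float)
    q = 1.0 - p
    m = psi.sum()
    norm = np.sum(np.abs(psi) ** alpha * np.where(psi > 0, p, np.where(psi < 0, q, 0.0)))
    return (p * max(m, 0.0) ** alpha + q * max(-m, 0.0) ** alpha) / norm

print(ld_constant([1.0], alpha=1.8, p=0.5))               # iid case: constant 1
print(ld_constant([1.0, 0.5, 0.25], alpha=1.8, p=0.5))    # positive coefficients: constant > 1
print(ld_constant([1.0, -1.0], alpha=1.8, p=0.5))         # m_psi = 0: constant 0
```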

  18. Ruin probabilities for a regularly varying linear process
  • Assume $(Z_t)$ iid, regularly varying with index $\alpha > 1$ and $EZ = 0$, hence $EX = 0$.
  • Also assume $\sum_j |j\,\psi_j| < \infty$, excluding long-range dependence.
  • Then for $\mu > 0$, M., Samorodnitsky (2000),
    $$\psi(u) = P\Big(\sup_{n \ge 1}\,(S_n - \mu n) > u\Big) \sim \frac{p\,(m_{\psi})_{+}^{\alpha} + q\,(m_{\psi})_{-}^{\alpha}}{\|\psi\|_{\alpha}^{\alpha}}\, \frac{u\,P(X > u)}{\mu(\alpha - 1)} \sim \frac{p\,(m_{\psi})_{+}^{\alpha} + q\,(m_{\psi})_{-}^{\alpha}}{\|\psi\|_{\alpha}^{\alpha}}\, \psi_{\mathrm{ind}}(u), \quad u \to \infty.$$
  • The proof is purely probabilistic.

  19.
  • The constants $\|\psi\|_{\alpha}^{\alpha}$ and $p\,(m_{\psi})_{+}^{\alpha} + q\,(m_{\psi})_{-}^{\alpha}$ are crucial for measuring the dependence in the linear process $(X_t)$ with respect to large deviation behavior and the ruin functional.
  • A quantity of interest in this context is related to the maximum functional $M_n = \max(X_1, \ldots, X_n)$.
  • Assume $P(|X| > a_n) \sim n^{-1}$. Then, for $x > 0$, [7]
    $$\log P\big(a_n^{-1} M_n \le x\big) \to -\,\frac{p\,\max_j (\psi_j)_{+}^{\alpha} + q\,\max_j (\psi_j)_{-}^{\alpha}}{\|\psi\|_{\alpha}^{\alpha}}\; x^{-\alpha}.$$
  • The right-hand expression is the extremal index of $(X_t)$ and measures the degree of extremal clustering in the sequence.
  [7] Rootzén (1978), Davis, Resnick (1985)
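As a companion to the large-deviation constant after slide 17, the clustering constant in the limit above can be computed the same way (illustrative coefficients, not from the talk); values below 1 indicate extremal clustering.

```python
import numpy as np

def extremal_constant(psi, alpha, p):
    """(p*max_j(psi_j)_+^alpha + q*max_j(psi_j)_-^alpha) / ||psi||_alpha^alpha."""
    psi = np.asarray(psi, dtype=float)
    q = 1.0 - p
    norm = np.sum(np.abs(psi) ** alpha * np.where(psi > 0, p, np.where(psi < 0, q, 0.0)))
    num = p * np.maximum(psi, 0.0).max() ** alpha + q * np.maximum(-psi, 0.0).max() ** alpha
    return num / norm

print(extremal_constant([1.0], alpha=1.8, p=0.5))             # iid case: 1, no clustering
print(extremal_constant([1.0, 0.8, 0.6], alpha=1.8, p=0.5))   # dependence: constant < 1
```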

  20. (no text on this slide)

  21. Large deviation probabilities for solutions to stochastic recurrence equations
  • Assume Kesten's conditions for the stochastic recurrence equation $X_t = A_t X_{t-1} + B_t$, $t \in \mathbb{Z}$, and $A > 0$. Then for some $\alpha > 0$ and constants $c_{\infty}^{\pm} \ge 0$ with $c_{\infty}^{+} + c_{\infty}^{-} > 0$,
    $$P(X > x) \sim c_{\infty}^{+}\, x^{-\alpha} \quad\text{and}\quad P(X \le -x) \sim c_{\infty}^{-}\, x^{-\alpha}, \quad x \to \infty.$$
  • Then, Buraczewski, Damek, M., Zienkiewicz (2011), if $c_{\infty}^{+} > 0$,
    $$\lim_{n \to \infty}\, \sup_{b_n \le x \le e^{s_n}} \left| \frac{P(S_n - ES_n > x)}{n\,P(X > x)} - c_{\infty} \right| = 0,$$
    where $b_n = n^{1/\alpha}(\log n)^{M}$, $M > 2$, for $\alpha \in (1, 2]$, and $b_n = c_n\, n^{0.5} \log n$, $c_n \to \infty$, for $\alpha > 2$; $c_{\infty}$ corresponds to the case $B = 1$, and $s_n/n \to 0$.

  22.
  • Write $\Pi_{ij} = A_i \cdots A_j$. Then $X_i = \Pi_{1i} X_0 + \widetilde{X}_i$, where
    $$\widetilde{X}_i = \Pi_{2i} B_1 + \Pi_{3i} B_2 + \cdots + \Pi_{ii} B_{i-1} + B_i, \quad i \ge 1,$$
    and
    $$S_n = X_0 \sum_{i=1}^{n} \Pi_{1i} + \sum_{i=1}^{n} \widetilde{X}_i.$$
    The summands $\widetilde{X}_i$ are chopped into distinct parts of length $\log x$ and sums are taken over disjoint blocks of length $\log x$. Then Nagaev-Fuk and Prokhorov inequalities for independent summands apply.

  23.
  • The condition $s_n/n \to 0$ is essential.
  • Also notice that, Embrechts and Veraverbeke (1982),
    $$P\Big(X_0 \sum_{i=1}^{n} \Pi_{1i} > x\Big) \le P\Big(X_0 \sum_{i=1}^{\infty} \Pi_{1i} > x\Big) \sim c\, x^{-\alpha} \log x.$$
  • Then
    $$\frac{P\big(X_0 \sum_{i=1}^{n} \Pi_{1i} > x\big)}{n\,P(X > x)} \;\text{``}\le\text{''}\; \frac{x^{-\alpha} \log x}{n\, x^{-\alpha}} = \frac{\log x}{n}.$$
