Exponential functionals of Markov additive processes




  1. Exponential functionals of Markov additive processes
     Anita Behme
     joint work in progress with Apostolos Sideris
     May 24th 2019, Probability and Analysis

  2. Overview
     ◮ Exponential functionals of Lévy processes
     ◮ From Lévy processes to MAPs
     ◮ Exponential functionals of MAPs:
       ◮ Definition
       ◮ An example
       ◮ Main results and methodology
     ◮ Discussion and more results
     ◮ Open questions

  3. Generalized Ornstein-Uhlenbeck processes
     Let $(\xi_t, \eta_t)_{t \ge 0}$ be a bivariate Lévy process. The generalized Ornstein-Uhlenbeck (GOU)
     process $(V_t)_{t \ge 0}$ driven by $(\xi, \eta)$ is given by
     $$V_t = e^{-\xi_t} \Big( V_0 + \int_{(0,t]} e^{\xi_{s-}} \, d\eta_s \Big), \qquad t \ge 0,$$
     where $V_0$ is a finite random variable, independent of $(\xi, \eta)$.
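
Editor's note: as an illustration (not part of the slides), the GOU formula can be approximated on a time grid. The sketch below assumes, purely for the example, that ξ and η are independent Brownian motions with drift; all parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_gou(T=10.0, n=10_000, v0=1.0,
                 mu_xi=0.5, sig_xi=0.3, mu_eta=1.0, sig_eta=0.2):
    """Grid approximation of V_t = e^{-xi_t} (V_0 + int_(0,t] e^{xi_{s-}} d eta_s).

    Illustrative special case: xi and eta are independent Brownian motions with drift,
    a particularly simple bivariate Levy process.
    """
    dt = T / n
    d_xi = mu_xi * dt + sig_xi * np.sqrt(dt) * rng.standard_normal(n)
    d_eta = mu_eta * dt + sig_eta * np.sqrt(dt) * rng.standard_normal(n)
    xi = np.concatenate(([0.0], np.cumsum(d_xi)))                   # xi_0 = 0
    stieltjes = np.concatenate(([0.0], np.cumsum(np.exp(xi[:-1]) * d_eta)))
    V = np.exp(-xi) * (v0 + stieltjes)                              # GOU path on the grid
    return np.linspace(0.0, T, n + 1), V

t, V = simulate_gou()
print(V[-1])
```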

  4. Lévy processes
     Definition: A Lévy process in $\mathbb{R}^d$ on a probability space $(\Omega, \mathcal{F}, P)$ is a
     stochastic process $X = (X_t)_{t \ge 0}$, $X_t : \Omega \to \mathbb{R}^d$, satisfying the following
     properties:
     ◮ $X_0 = 0$ a.s.
     ◮ $X$ has independent increments, i.e. for all $0 \le t_0 \le t_1 \le \ldots \le t_n$ the random
       variables $X_{t_0}, X_{t_1} - X_{t_0}, \ldots, X_{t_n} - X_{t_{n-1}}$ are independent.
     ◮ $X$ has stationary increments, i.e. for all $s, t \ge 0$ it holds $X_{s+t} - X_s \stackrel{d}{=} X_t$.
     ◮ $X$ has a.s. càdlàg paths, i.e. for $P$-a.e. $\omega \in \Omega$ the path $t \mapsto X_t(\omega)$ is
       right-continuous in $t \ge 0$ and has left limits in $t > 0$.

  5. Lévy processes (continued)
     A Lévy process $X$ is uniquely determined by its characteristic triplet $(\gamma_X, \sigma_X, \nu_X)$.
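
Editor's note: a Lévy path can be sampled on a grid directly from its independent, stationary increments. The following sketch (again not from the talk) uses a made-up triplet: drift, a Brownian part, and compound Poisson jumps approximated by at most one jump per grid step.

```python
import numpy as np

rng = np.random.default_rng(1)

def levy_path(T=10.0, n=10_000, gamma=0.2, sigma=1.0, jump_rate=0.5, jump_scale=1.0):
    """Sample a one-dimensional Levy path on a grid of n steps.

    Increments are i.i.d. (stationary, independent increments): drift gamma*dt,
    Gaussian part sigma*sqrt(dt)*N(0,1), and a compound Poisson part approximated
    by a Bernoulli(jump_rate*dt) jump with Laplace-distributed size per step.
    """
    dt = T / n
    jumps = rng.laplace(0.0, jump_scale, n) * (rng.random(n) < jump_rate * dt)
    increments = gamma * dt + sigma * np.sqrt(dt) * rng.standard_normal(n) + jumps
    return np.concatenate(([0.0], np.cumsum(increments)))          # X_0 = 0

X = levy_path()
print(X[-1])
```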

  6. Exponential functionals
     Theorem (Lindner, Maller '05): The GOU process
     $$V_t = e^{-\xi_t} \Big( V_0 + \int_0^t e^{\xi_{s-}} \, d\eta_s \Big), \qquad t \ge 0,$$
     solving $dV_t = V_{t-} \, dU_t + dL_t$, with
     ◮ $\xi_t = -\log(\mathcal{E}(U)_t)$
     ◮ $(U, L)$ (or similarly $(\xi, \eta)$) bivariate Lévy processes
     ◮ $V_0$ starting random variable independent of $(\xi, \eta)$
     has a (nontrivial) stationary distribution if and only if the integral
     $$V_\infty := \int_0^\infty e^{-\xi_{t-}} \, dL_t$$
     converges a.s.

  7. Exponential functionals (continued)
     The stationary distribution is given by the law of the so-called exponential functional $V_\infty$.
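
Editor's note: to get a feeling for the law of $V_\infty$, one can truncate the integral at a large horizon and Monte Carlo over paths. The sketch below picks, for illustration only, ξ a Brownian motion with positive drift (so ξ_t → ∞ a.s.) and L_t = t.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_v_inf(T=40.0, n=20_000, mu_xi=1.0, sig_xi=0.5):
    """One approximate sample of V_inf = int_0^inf e^{-xi_{t-}} dL_t, truncated at T.

    Illustrative choice: xi_t = mu_xi*t + sig_xi*B_t (drifting to +infinity) and L_t = t,
    so dL_t = dt and the integral becomes a plain Riemann sum on the grid.
    """
    dt = T / n
    xi = np.concatenate(([0.0],
                         np.cumsum(mu_xi * dt + sig_xi * np.sqrt(dt) * rng.standard_normal(n))))
    return float(np.sum(np.exp(-xi[:-1]) * dt))

samples = np.array([sample_v_inf() for _ in range(1_000)])
print(samples.mean(), samples.std())      # empirical summary of the stationary law
```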

  8. Exponential functionals
     Erickson and Maller (2005) gave necessary and sufficient conditions for convergence of
     $$V_\infty := \int_0^\infty e^{-\xi_{t-}} \, d\eta_t.$$
     Mainly one needs:
     ◮ $\xi$ tends to infinity
     ◮ $\eta$ has a finite $\log^+$-moment

  9. Convergence of exponential functionals
     Precisely, they stated
     Theorem (Erickson, Maller '05): $V_\infty$ exists as the a.s. limit as $t \to \infty$ of
     $\int_0^t e^{-\xi_{s-}} \, d\eta_s$ if and only if
     $$\lim_{t \to \infty} \xi_t = \infty \ \text{a.s.} \quad \text{and} \quad
       I_{\xi,\eta} = \int_{(e^a, \infty)} \frac{\log y}{A_\xi(\log y)} \, |\bar\nu_\eta(dy)| < \infty,$$
     where
     $$A_\xi(x) = \gamma_\xi + \bar\nu^+_\xi(1) + \int_1^x \bar\nu^+_\xi(y) \, dy,$$
     with
     $$\bar\nu^+_\xi(x) = \nu_\xi((x, \infty)), \quad \bar\nu^-_\xi(x) = \nu_\xi((-\infty, -x)), \quad
       \bar\nu_\xi(x) = \bar\nu^+_\xi(x) + \bar\nu^-_\xi(x),$$
     and $\bar\nu^+_\eta$, $\bar\nu^-_\eta$ and $\bar\nu_\eta$ defined likewise. Hereby $a > 0$ is chosen such
     that $A_\xi(x) > 0$ for all $x > a$, and its existence is guaranteed whenever
     $\lim_{t \to \infty} \xi_t = \infty$ a.s.
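
Editor's note: as a purely numerical illustration of how the condition can be checked (my own toy choice, not from the talk), take ξ with positive drift γ_ξ and no positive jumps, so that A_ξ(x) = γ_ξ, and η a compound Poisson process whose Lévy measure has a Pareto-type tail ν̄_η(y) = c y^{-α} for y > 1.

```python
import numpy as np
from scipy.integrate import quad

gamma_xi = 1.0        # drift of xi; with no positive jumps, A_xi(x) = gamma_xi for all x
c, alpha = 1.0, 1.5   # tail of eta's Levy measure: nu_bar_eta(y) = c * y**(-alpha), y > 1
a = 1.0               # here any a > 0 works, since A_xi is a positive constant

def A_xi(x):
    """A_xi(x) = gamma_xi + nu_bar_plus_xi(1) + int_1^x nu_bar_plus_xi(y) dy; nu_xi^+ = 0 here."""
    return gamma_xi

def integrand(y):
    # |nu_bar_eta(dy)| = c * alpha * y**(-alpha - 1) dy on (1, infinity)
    return np.log(y) / A_xi(np.log(y)) * c * alpha * y ** (-alpha - 1)

I, _ = quad(integrand, np.exp(a), np.inf)
print("I_{xi,eta} ~", I)   # finite here; together with xi_t -> infinity this gives a.s. convergence
```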

  10. Convergence of exponential functionals
      Further:
      Theorem (Erickson, Maller '05, continued): If $\lim_{t \to \infty} \xi_t = \infty$ a.s. but
      $I_{\xi,\eta} = \infty$, then
      $$\Big| \int_0^t e^{-\xi_{s-}} \, d\eta_s \Big| \stackrel{P}{\longrightarrow} \infty, \qquad (1)$$
      while for $\lim_{t \to \infty} \xi_t = -\infty$ or oscillating $\xi$ either (1) holds, or there exists
      some $k \in \mathbb{R} \setminus \{0\}$ such that
      $$\int_0^t e^{-\xi_{s-}} \, d\eta_s = k (1 - e^{-\xi_t}) \quad \text{for all } t > 0 \ \text{a.s.}$$

  11. Markov additive processes (MAPs)

  12. Markov additive processes (MAPs)
      $(\xi_t, \eta_t, J_t)_{t \ge 0}$ is a (bivariate) MAP with
      ◮ Markovian component $(J_t)_{t \ge 0}$: right-continuous, ergodic, continuous-time Markov chain with
        countable state space $S$, intensity matrix $Q$ and stationary law $\pi$.

  13. Markov additive processes (MAPs) (continued)
      ◮ Additive component $(\xi_t, \eta_t)_{t \ge 0}$:
        $$(\xi_t, \eta_t) := (X^{(1)}_t, Y^{(1)}_t) + (X^{(2)}_t, Y^{(2)}_t), \qquad t \ge 0.$$
        ◮ $(X^{(1)}_t, Y^{(1)}_t)$ behaves in law like a bivariate Lévy process $(\xi^{(j)}_t, \eta^{(j)}_t)$
          whenever $J_t = j$,
        ◮ $(X^{(2)}_t, Y^{(2)}_t)$ is a pure jump process given by
          $$(X^{(2)}_t, Y^{(2)}_t) = \sum_{n \ge 1} \sum_{i,j \in S} Z^{(i,j)}_n \,
            \mathbf{1}_{\{J_{T_n-} = i,\, J_{T_n} = j,\, T_n \le t\}},$$
          for i.i.d. random variables $Z^{(i,j)}_n$ in $\mathbb{R}^2$, where $T_n$, $n \ge 1$, denote the jump
          times of $(J_t)_{t \ge 0}$.

  14. Markov additive processes (MAPs) (continued)
      As starting value under $P_j$ we use $(\xi_0, \eta_0, J_0) = (0, 0, j)$.
      Throughout, neither $\xi$ nor $\eta$ is degenerate, i.e. constantly equal to 0.
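
Editor's note: a minimal simulation sketch of the two-part construction above (only the ξ-component, for brevity). Everything concrete here is my own illustrative choice: a two-state chain, a state-dependent drift and volatility for X^{(1)}, and deterministic switching jumps Z^{(i,j)} for X^{(2)}.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_map_xi(T=10.0, dt=1e-3,
                    Q=np.array([[-1.0, 1.0], [2.0, -2.0]]),   # intensity matrix of J
                    drift=(0.5, -0.3), vol=(0.2, 0.4),        # Levy part per state (drift + BM)
                    Z={(0, 1): -0.5, (1, 0): 0.5}):           # extra jump of xi when J switches
    """Euler-type sketch of the xi-component of a MAP with a two-state modulating chain J."""
    n = int(T / dt)
    J = np.empty(n + 1, dtype=int)
    J[0] = 0
    xi = np.zeros(n + 1)
    for k in range(n):
        i = J[k]
        # X^{(1)}: evolves like the state-i Levy process while J_t = i
        xi[k + 1] = xi[k] + drift[i] * dt + vol[i] * np.sqrt(dt) * rng.standard_normal()
        # switch of the chain on this step (probability ~ -Q[i, i] * dt for small dt)
        if rng.random() < -Q[i, i] * dt:
            j = 1 - i
            J[k + 1] = j
            xi[k + 1] += Z[(i, j)]        # X^{(2)}: jump attached to the transition i -> j
        else:
            J[k + 1] = i
    return xi, J

xi, J = simulate_map_xi()
print(xi[-1], J.mean())                   # J.mean() ~ fraction of time spent in state 1
```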

  15. Special cases of MAPs
      ◮ $S = \{0\}$: (bivariate) Lévy process

  16. Special cases of MAPs (continued)
      ◮ $(X^{(1)}_t, Y^{(1)}_t) \equiv 0$: (bivariate) continuous-time Markov chain in $\mathbb{R}^2$

  17. Special cases of MAPs (continued)
      ◮ $(X^{(2)}_t, Y^{(2)}_t) \equiv 0$: no common jumps of $(J_t)_{t \ge 0}$ and $(\xi_t, \eta_t)_{t \ge 0}$
      [Figure: sample path $t \mapsto X_t(\omega)$ for $t \in [0, 10]$]

  18. Exponential functionals of MAPs

  19. Exponential functionals of MAPs
      Given a bivariate Markov additive process $(\xi_t, \eta_t, J_t)_{t \ge 0}$ with Markovian component
      $(J_t)_{t \ge 0}$, we denote
      $$E(t) := E_{(\xi,\eta)}(t) := \int_{(0,t]} e^{-\xi_{s-}} \, d\eta_s, \qquad 0 < t < \infty.$$
      Literature: some recent results on $E$ for $\eta_t = t$ on arXiv (Salminen et al. / Stephenson)
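
Editor's note: given any discretized MAP path, $E(t)$ can be approximated by a left-endpoint Riemann-Stieltjes sum. The sketch below assumes (my choice, for illustration) that η is absolutely continuous with state-dependent rate γ_η(j), so dη_s = γ_η(J_s) ds; the synthetic ξ and J at the bottom only demonstrate the call.

```python
import numpy as np

def exp_functional(xi, J, dt, gamma_eta=(1.0, 0.0)):
    """Approximate E(t) = int_(0,t] e^{-xi_{s-}} d eta_s on a grid of step dt,
    for the illustrative choice eta_t = int_0^t gamma_eta(J_s) ds."""
    xi = np.asarray(xi)
    J = np.asarray(J, dtype=int)
    d_eta = np.asarray(gamma_eta)[J[:-1]] * dt           # eta increments per step
    return np.cumsum(np.exp(-xi[:-1]) * d_eta)           # left endpoint plays the role of xi_{s-}

# Synthetic demonstration data: J alternates between states 0 and 1, xi is a pure drift.
dt = 1e-3
J = np.concatenate((np.tile([0, 1], 5_000), [0]))        # length n + 1
xi = np.concatenate(([0.0], np.cumsum(np.full(10_000, 0.5 * dt))))
E = exp_functional(xi, J, dt)
print(E[-1])
```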

  20. An example¹
      ◮ $S = \mathbb{N}_0$
      ◮ $(J_t)_{t \ge 0}$ continuous-time petal flower Markov chain with intensity matrix
        $$Q = (q_{i,j})_{i,j \in \mathbb{N}_0} =
          \begin{pmatrix}
            -q & q_{0,1} & q_{0,2} & \cdots \\
             q & -q      & 0       & \cdots \\
             q & 0       & -q      &        \\
             \vdots &    &         & \ddots
          \end{pmatrix}$$
        for $q > 0$ fixed and $q_{0,j} = q\, p_{0,j}$, $j \in \mathbb{N}$.
      $(J_t)_{t \ge 0}$ is irreducible, recurrent with stationary distribution
      $$\pi_0 = \frac{1}{2}, \quad \text{and} \quad \pi_j = \frac{p_{0,j}}{2} = \frac{q_{0,j}}{2q}, \quad j \in \mathbb{N}.$$
      ¹ Thanks to Gerold Alsmeyer for the picture!
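
Editor's note: a quick sanity check of this stationary law by simulation. The petal set is truncated to three petals with a concrete $p_{0,\cdot}$; both the truncation and the numbers are my own choices for the illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

q = 1.0
p0 = np.array([0.5, 0.3, 0.2])          # p_{0,j} for petals j = 1, 2, 3 (truncated example)

def occupation_fractions(T=10_000.0):
    """Petal flower chain: from 0 jump to petal j at rate q*p_{0,j}; from any petal back to 0 at rate q."""
    t, state = 0.0, 0
    occupation = np.zeros(1 + len(p0))   # time spent in states 0, 1, 2, 3
    while t < T:
        hold = rng.exponential(1.0 / q)  # every state is left at total rate q
        occupation[state] += hold
        t += hold
        state = 1 + rng.choice(len(p0), p=p0) if state == 0 else 0
    return occupation / occupation.sum()

print(occupation_fractions())            # should be close to (1/2, p0/2) = (0.50, 0.25, 0.15, 0.10)
```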

  21. An example
      Now choose $\xi$ and $\eta$ to be conditionally independent with
      $$\xi_t = X^{(2)}_t = \sum_{n \ge 1} \sum_{i,j \in \mathbb{N}_0} Z^{(i,j)}_n \,
        \mathbf{1}_{\{J_{T_n-} = i,\, J_{T_n} = j,\, T_n \le t\}}, \qquad
        Z^{(i,j)}_n := \begin{cases} -p_{0,j}^{-1}, & i = 0, \\ 2 + p_{0,i}^{-1}, & j = 0, \\ 0, & \text{else.} \end{cases}$$
      Then $\xi_{\tau_n(0)} = 2n \to \infty$ $P_0$-a.s., where $\tau_n(0)$ denotes the $n$-th return time of
      $J$ to state $0$, but $\liminf_{t \to \infty} \xi_t = -\infty$. Thus $\xi$ is oscillating.

  22. An example (continued)
      Choosing $\eta_t = \int_{(0,t]} \gamma_\eta(J_s) \, ds$ with
      $\gamma_\eta(j) = \begin{cases} 1, & j = 0, \\ 0, & \text{otherwise,} \end{cases}$
      we observe under $P_0$
      $$\int_{(0,t]} e^{-\xi_{s-}} \, d\eta_s = \int_{(0,t]} e^{-\xi_{s-}} \gamma_\eta(J_s) \, ds
        = \int_{(0,t]} e^{-N_{s-}} \mathbf{1}_{\{J_s = 0\}} \, ds,$$
      where $N_s$ denotes the number of jumps of $J$ on $(0, s]$ (so that $\xi_{s-} = N_{s-}$ on $\{J_s = 0\}$,
      since each completed excursion away from $0$ consists of two jumps and increases $\xi$ by $2$).
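
Editor's note: the behaviour of ξ and of the exponential functional in this example can be checked by simulation. As before the petal set is truncated to finitely many states with a concrete $p_{0,\cdot}$ (my own choices); ξ is piecewise constant here, so the time integral is computed exactly over each holding interval.

```python
import numpy as np

rng = np.random.default_rng(5)

q = 1.0
p0 = np.array([0.6, 0.3, 0.1])          # truncated p_{0,j}, j = 1, 2, 3 (illustrative)

def simulate_example(T=200.0):
    """xi jumps by -1/p_{0,j} on a transition 0 -> j and by 2 + 1/p_{0,j} on j -> 0;
    eta grows at unit rate while J = 0, so E(t) = int_(0,t] e^{-xi_{s-}} 1_{J_s = 0} ds."""
    t, state, xi, E = 0.0, 0, 0.0, 0.0
    xi_at_returns = []                   # xi evaluated at the return times tau_n(0)
    while True:
        hold = rng.exponential(1.0 / q)
        if t + hold >= T:                # clip the last holding interval at the horizon T
            if state == 0:
                E += np.exp(-xi) * (T - t)
            break
        if state == 0:
            E += np.exp(-xi) * hold      # xi is constant between jumps: exact contribution
        t += hold
        if state == 0:
            j = 1 + rng.choice(len(p0), p=p0)
            xi -= 1.0 / p0[j - 1]        # transition 0 -> j
            state = j
        else:
            xi += 2.0 + 1.0 / p0[state - 1]   # transition j -> 0: net +2 per completed excursion
            state = 0
            xi_at_returns.append(xi)
    return E, xi_at_returns

E, returns = simulate_example()
print(round(E, 4), returns[:5])          # returns should be approximately 2, 4, 6, ...
```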
