

  1. Stability, convergence to equilibrium and simulation of non-linear Hawkes processes with memory kernels given by the sum of Erlang kernels. Aline Duarte, joint work with E. Löcherbach and G. Ost. Universidade de São Paulo. XXIII EBP, São Carlos, July 2019.

  2–4. Hawkes process. Let $N$ be a counting process on $\mathbb{R}_+$ characterised by its intensity process $(\lambda_t)_{t \ge 0}$, defined, for each $t \ge 0$, through the relation
$$P(N \text{ has a jump in } ]t, t+dt] \mid \mathcal{F}_t) = \lambda_t\, dt,$$
where $\mathcal{F}_t = \sigma(N(]u,s]),\, 0 \le u < s \le t)$ and
$$\lambda_t = f\Big(\delta + \int_{]0,t[} h(t-s)\, dN_s\Big). \tag{1}$$
Here, $f : \mathbb{R} \to \mathbb{R}_+$ is the jump rate function and $h : \mathbb{R}_+ \to \mathbb{R}$ is the memory kernel. The parameter $\delta \in \mathbb{R}$ is interpreted as an initial input to the jump rate function.
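In simulation terms, relation (1) says the current intensity is obtained by summing the kernel over all past jump times and applying the rate function. A minimal sketch, not part of the talk, with hypothetical helper names and the rate function $f(x) = x/5 + 1$ used later in the slides:

```python
import math

def intensity(t, jump_times, f, h, delta=0.0):
    """Evaluate the Hawkes intensity lambda_t = f(delta + sum_{s < t} h(t - s))."""
    total = sum(h(t - s) for s in jump_times if s < t)
    return f(delta + total)

# Erlang kernel h(t) = c * exp(-alpha*t) * t^n / n!  with c = 2, alpha = 1, n = 2
h = lambda t: 2.0 * math.exp(-t) * t**2 / math.factorial(2)
f = lambda x: x / 5.0 + 1.0   # rate function from the slides' figure

lam = intensity(3.0, [0.5, 1.2, 2.8], f, h)
```

With no past jumps the intensity reduces to the baseline $f(\delta)$, here $f(0) = 1$; each past jump adds a hump-shaped Erlang contribution.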

  5–6. Simple Erlang kernel. Assumption 1: the rate function $f : \mathbb{R} \to \mathbb{R}_+$ is either bounded or Lipschitz continuous with Lipschitz constant $\|f\|_{\mathrm{Lip}}$. Suppose the memory kernel $h : \mathbb{R}_+ \to \mathbb{R}$ can be written as an Erlang kernel
$$h(t) = c\, e^{-\alpha t}\, \frac{t^n}{n!}, \qquad t \ge 0,$$
where $c \in \mathbb{R}$, $\alpha > 0$ and $n \in \mathbb{N}$.

  7–11. Associated PDMP process. For each $0 \le k \le n$ consider, for each $t \ge 0$,
$$X^{(k)}_t = \delta + \int_{]0,t]} c\, e^{-\alpha(t-s)}\, \frac{(t-s)^{n-k}}{(n-k)!}\, dN_s; \tag{2}$$
in this case $\lambda_t = f\big(X^{(0)}_{t-}\big)$. Differentiating (e.g. $\frac{d}{dt}\big[c\, e^{-\alpha t} \frac{t^n}{n!}\big] = c\, e^{-\alpha t} \frac{t^{n-1}}{(n-1)!} - \alpha\, c\, e^{-\alpha t} \frac{t^n}{n!}$) yields, for $t \ge 0$,
$$dX^{(0)}_t = \big(X^{(1)}_t - \alpha X^{(0)}_t\big)\, dt \tag{3}$$
$$dX^{(1)}_t = \big(X^{(2)}_t - \alpha X^{(1)}_t\big)\, dt$$
$$\vdots$$
$$dX^{(n-1)}_t = \big(X^{(n)}_t - \alpha X^{(n-1)}_t\big)\, dt$$
$$dX^{(n)}_t = -\alpha X^{(n)}_t\, dt + c\, dN_t,$$
with initial condition $X^{(k)}_0 = x^{(k)}_0 = \int_{]-\infty,0]} c\, e^{\alpha s}\, \frac{(-s)^{n-k}}{(n-k)!}\, n(ds)$.
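A way to sanity-check the cascade (2)–(3), not shown in the talk: after a single jump at time $s$ (with $\delta = 0$ and no further jumps), $X^{(0)}_t$ should reproduce the kernel $c\, e^{-\alpha(t-s)} (t-s)^n / n!$. A small Euler integration of the ODE system, a sketch under these assumptions:

```python
import math

def euler_cascade(n, c, alpha, jump_time, T, dt=1e-4):
    """Integrate dX^(k) = (X^(k+1) - alpha X^(k)) dt, dX^(n) = -alpha X^(n) dt + c dN,
    with a single deterministic jump at jump_time; return X^(0)_T."""
    x = [0.0] * (n + 1)
    t, jumped = 0.0, False
    while t < T:
        drift = [x[k + 1] - alpha * x[k] for k in range(n)] + [-alpha * x[n]]
        x = [x[k] + dt * drift[k] for k in range(n + 1)]
        t += dt
        if not jumped and t >= jump_time:
            x[n] += c          # the jump only hits the last coordinate
            jumped = True
    return x[0]

n, c, alpha, s, T = 2, 2.0, 1.0, 1.0, 4.0
approx = euler_cascade(n, c, alpha, s, T)
exact = c * math.exp(-alpha * (T - s)) * (T - s) ** n / math.factorial(n)
```

The Euler value agrees with the closed-form kernel up to a discretisation error of order `dt`, which is the point of introducing the cascade: the non-Markovian memory integral becomes a finite-dimensional ODE between jumps.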

  12–13. Associated PDMP process. The associated PDMP is the Markov process $X = (X_t)_{t \ge 0}$ taking values in $\mathbb{R}^{n+1}$, defined, for each $t \ge 0$, by
$$X_t = \big(X^{(0)}_t, \ldots, X^{(n)}_t\big).$$
We call the process $X$ the Markovian cascade of memory terms. Its infinitesimal generator $L$ is given, for any smooth test function $g : \mathbb{R}^{n+1} \to \mathbb{R}$, by
$$L g(x) = \langle F(x), \nabla g(x) \rangle + f\big(x^{(0)}\big)\, \big[g\big(x + c\, e_{(n)}\big) - g(x)\big],$$
where $x = (x^{(0)}, \ldots, x^{(n)})$ and $e_{(n)} \in \mathbb{R}^{n+1}$ is the unit vector having entry 1 in coordinate $n$ and 0 elsewhere.

  14–15. Finally, $F : \mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$ is the vector field associated with the system of ODEs
$$\frac{d}{dt} x^{(0)}_t = x^{(1)}_t - \alpha x^{(0)}_t, \quad \ldots, \quad \frac{d}{dt} x^{(n-1)}_t = x^{(n)}_t - \alpha x^{(n-1)}_t, \quad \frac{d}{dt} x^{(n)}_t = -\alpha x^{(n)}_t,$$
given by $F(x) = (F^{(0)}(x), \ldots, F^{(n)}(x))$ with
$$F^{(k)}(x) = -\alpha x^{(k)} + x^{(k+1)} \text{ for } 0 \le k \le n-1, \qquad F^{(n)}(x) = -\alpha x^{(n)}.$$
◮ Jumps introduce discontinuities only in the coordinate $X^{(n)}_t$ of $X_t$.

  16. [Figure] A finite joint realisation of the Markovian cascade $X = (X_t)_{0 \le t \le T}$ (upper panel) and its associated counting process $N = (N_t)_{0 \le t \le T}$ (lower panel) for the choices $n = 2$, $c = 2$, $\alpha = 1$, $T = 20$ and $f(x) = x/5 + 1$, with initial input $x_0 = (x^{(0)}_0, x^{(1)}_0, x^{(2)}_0) = (0, 0, 0)$. The blue (resp. red and black) trajectory corresponds to the realisation of $(X^{(2)}_t)_{0 \le t \le T}$ (resp. $(X^{(1)}_t)_{0 \le t \le T}$ and $(X^{(0)}_t)_{0 \le t \le T}$).
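A realisation of this kind can be generated with a crude Euler scheme: in each small step the cascade flows along the ODEs, and a jump occurs with probability $\lambda_t\, dt$. This is a sketch (not the authors' code) using the parameters from the figure, $n = 2$, $c = 2$, $\alpha = 1$, $f(x) = x/5 + 1$:

```python
import random

def simulate_cascade(n=2, c=2.0, alpha=1.0, T=20.0, dt=1e-3,
                     f=lambda x: x / 5.0 + 1.0, seed=0):
    """Crude Euler scheme for the Markovian cascade: in each step of length dt
    the process jumps with probability f(X^(0)_t) * dt; a jump adds c to the
    last coordinate. Returns the final state and the list of jump times."""
    rng = random.Random(seed)
    x = [0.0] * (n + 1)
    jumps = []
    for i in range(int(T / dt)):
        t = i * dt
        lam = f(x[0])   # current intensity, read off the first coordinate
        drift = [x[k + 1] - alpha * x[k] for k in range(n)] + [-alpha * x[n]]
        x = [x[k] + dt * drift[k] for k in range(n + 1)]
        if rng.random() < lam * dt:
            x[n] += c
            jumps.append(t)
    return x, jumps

x_T, jumps = simulate_cascade()
```

Since the baseline rate is $f(0) = 1$, a horizon of $T = 20$ produces on the order of twenty or more jumps, consistent with the counting-process panel of the figure. For an exact (bias-free) simulation one would instead use thinning with the explicit linear flow, which is the scheme the talk's title refers to.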

  17–20. Sum of Erlang kernels. Suppose now that the memory kernel $h : \mathbb{R}_+ \to \mathbb{R}$ can be written as a sum of Erlang kernels
$$h(t) = \sum_{i=1}^{L} c_i\, e^{-\alpha_i t}\, \frac{t^{n_i}}{n_i!}, \qquad t \ge 0, \tag{4}$$
where, for each $1 \le i \le L$, $c_i \in \mathbb{R}$, $\alpha_i > 0$ and $n_i \in \mathbb{N}$.
◮ The class of Erlang kernels is dense in $L^1(\mathbb{R}_+)$: any Hawkes process $N$ having integrable memory kernel $h$ can be approximated by a sequence of Hawkes processes $N^{(n)}$ having Erlang memory kernels $h^{(n)}$ such that $\|h^{(n)} - h\|_{L^1(\mathbb{R}_+)} \to 0$ as $n \to \infty$ and
$$E\, \|N - N^{(n)}\|_t \le C_T \int_0^t |h^{(n)} - h|(s)\, ds$$
for all $t \le T$, where $\|N - N^{(n)}\|_t$ denotes the total variation distance between $N$ and $N^{(n)}$ on $[0, t]$.
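The sum-of-Erlang form (4) is also convenient numerically: since $\int_0^\infty e^{-\alpha t}\, t^n / n!\, dt = \alpha^{-(n+1)}$, the $L^1$ mass of $h$ is $\sum_i c_i / \alpha_i^{n_i + 1}$ (when all $c_i \ge 0$). A sketch, with two illustrative components chosen here for the example, checking this identity by a crude Riemann sum:

```python
import math

def erlang_sum_kernel(t, params):
    """h(t) = sum_i c_i * exp(-alpha_i * t) * t^{n_i} / n_i!
    for params given as a list of (c, alpha, n) triples."""
    return sum(c * math.exp(-a * t) * t**n / math.factorial(n)
               for c, a, n in params)

params = [(2.0, 1.0, 2), (0.5, 3.0, 1)]   # hypothetical components

# closed-form L1 mass: sum_i c_i / alpha_i^(n_i + 1)
mass = sum(c / a ** (n + 1) for c, a, n in params)

# crude left Riemann sum on [0, 50]; the tail beyond 50 is negligible
dt = 1e-3
quad = sum(erlang_sum_kernel(k * dt, params) * dt for k in range(int(50 / dt)))
```

For these components the closed form gives $2/1^3 + 0.5/3^2 = 2.0\overline{5}$, and the quadrature agrees up to the discretisation error.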

  21–22. Associated PDMP process. Write $\kappa = L + \sum_{i=1}^{L} n_i$. The associated PDMP is the Markov process $X = (X_t)_{t \ge 0}$ taking values in $\mathbb{R}^\kappa$, defined, for each $t \ge 0$, by
$$X_t = \big(X^{(1)}_t, \ldots, X^{(L)}_t\big) \quad \text{with} \quad X^{(i)}_t = \big(X^{(i,0)}_t, \ldots, X^{(i,n_i)}_t\big), \quad 1 \le i \le L. \tag{5}$$
We call the process $X$ the Markovian cascade of successive memory terms. Its infinitesimal generator $L$ is given, for any smooth test function $g : \mathbb{R}^\kappa \to \mathbb{R}$, by
$$L g(x) = \langle F(x), \nabla g(x) \rangle + f\Big(\sum_{i=1}^{L} x^{(i,0)}\Big)\, \Big[g\Big(x + \sum_{i=1}^{L} c_i\, e_{(i,n_i)}\Big) - g(x)\Big], \tag{6}$$
where $x = (x^{(1)}, \ldots, x^{(L)})$ with $x^{(i)} = (x^{(i,0)}, \ldots, x^{(i,n_i)})$, and $e_{(i,n_i)} \in \mathbb{R}^\kappa$ is the unit vector having entry 1 in coordinate $(i, n_i)$ and 0 elsewhere.

  23. And $F : \mathbb{R}^\kappa \to \mathbb{R}^\kappa$ is the vector field associated with the system of first-order ODEs
$$\frac{d}{dt} x^{(i,0)}_t = x^{(i,1)}_t - \alpha_i x^{(i,0)}_t, \quad \ldots, \quad \frac{d}{dt} x^{(i,n_i-1)}_t = x^{(i,n_i)}_t - \alpha_i x^{(i,n_i-1)}_t, \quad \frac{d}{dt} x^{(i,n_i)}_t = -\alpha_i x^{(i,n_i)}_t, \quad 1 \le i \le L, \tag{7}$$
given by $F(x) = (F^{(1)}(x), \ldots, F^{(L)}(x))$ with $F^{(i)}(x) = (F^{(i,0)}(x), \ldots, F^{(i,n_i)}(x))$,
$$F^{(i,k)}(x) = -\alpha_i x^{(i,k)} + x^{(i,k+1)} \text{ for } 0 \le k \le n_i - 1, \qquad F^{(i,n_i)}(x) = -\alpha_i x^{(i,n_i)}.$$
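Note that the drift (7) is linear, $F(x) = A x$, where $A$ is block diagonal and each block is the $(n_i + 1) \times (n_i + 1)$ bidiagonal matrix with $-\alpha_i$ on the diagonal and $1$ on the superdiagonal; between jumps the flow is therefore the explicit matrix exponential $e^{tA}$. A sketch building $A$ for a hypothetical choice of components (helper name invented here):

```python
def build_drift_matrix(params):
    """Block-diagonal drift matrix A for the cascade: each block has -alpha_i on
    the diagonal and 1 on the superdiagonal, so that F(x) = A x for the stacked
    state x = (x^(1), ..., x^(L)). params is a list of (alpha, n) pairs."""
    sizes = [n + 1 for _, n in params]
    kappa = sum(sizes)                      # kappa = L + sum_i n_i
    A = [[0.0] * kappa for _ in range(kappa)]
    offset = 0
    for (alpha, _), size in zip(params, sizes):
        for k in range(size):
            A[offset + k][offset + k] = -alpha
            if k < size - 1:                # no coupling across blocks
                A[offset + k][offset + k + 1] = 1.0
        offset += size
    return A

A = build_drift_matrix([(1.0, 2), (3.0, 1)])   # kappa = 3 + 2 = 5
```

The block structure makes explicit that the $L$ memory cascades evolve autonomously between jumps and interact only through the intensity $f\big(\sum_i X^{(i,0)}_{t-}\big)$ and the simultaneous jumps of the last coordinates.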
