  1. Non-homogeneous random walks Ostap Hryniv Department of Mathematical Sciences Durham University April 2014 Joint work with Iain MacPhee, Mikhail Menshikov, and Andrew Wade

  2. Outline:
     1 Introduction
     2 From classical to non-homogeneous random walk
     3 One-dimensional case
     4 Illustration: A walk on Z
     5 Processes with non-integrable jumps
     6 Concluding remarks

  3. Introduction. Z+ := {0, 1, 2, 3, ...}. Consider X_t, t ∈ Z+, a nearest-neighbour random walk on Z+. We are interested in random quantities such as
     • τ = min{t > 0 : X_t = 0}, the first return time;
     • M = max_{0 ≤ s ≤ τ} X_s, the excursion maximum;
     • max_{0 ≤ s ≤ t} X_s, the running maximum process;
     • (1/(1 + t)) Σ_{s=0}^{t} X_s, the centre of mass process;
     • etc., describing the process (X_t)_{t ≥ 0} at large but finite times.
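As an informal illustration (not part of the slides), here is a minimal Python sketch of such a walk. The step-probability function p_up and the reflection rule at 0 are my own illustrative choices; one excursion is simulated to read off the first return time τ, the excursion maximum M, the running maximum, and the centre of mass.

```python
import random

def p_up(x):
    """Hypothetical probability of stepping from x to x + 1 (here: symmetric walk).
    Replace by e.g. 0.5 + 1.0 / (8 * x) for an asymptotically-zero-drift walk."""
    return 0.5

def one_excursion(p_up, max_steps=10**6):
    """Simulate one excursion of a nearest-neighbour walk on Z+ started at 0,
    reflected at the origin; return (tau, M, path), or None if it does not close."""
    x, path = 0, [0]
    for t in range(1, max_steps + 1):
        if x == 0:
            x = 1                                   # reflection rule assumed here: 0 -> 1
        else:
            x += 1 if random.random() < p_up(x) else -1
        path.append(x)
        if x == 0:
            tau = t                                 # first return time
            M = max(path)                           # excursion maximum
            return tau, M, path
    return None

result = one_excursion(p_up)
if result is not None:
    tau, M, path = result
    running_max = [max(path[:s + 1]) for s in range(len(path))]            # max_{0<=s<=t} X_s
    centre_of_mass = [sum(path[:s + 1]) / (1 + s) for s in range(len(path))]
    print("tau =", tau, " M =", M, " final running max =", running_max[-1])
```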

  4. Introduction (cont.) How do these quantities behave (tails, asymptotics, ...) for this random walk? [Figure: nearest-neighbour walk on Z+, jumping from x to x − 1 or x + 1 with probability 1/2 each.] Symmetric (zero drift) walk with reflection at the origin.

  5. Introduction (cont.) What about this random walk? [Figure: from site x, jump to x + 1 with probability 1/2 + 1/(8x) and to x − 1 with probability 1/2 − 1/(8x).] Non-homogeneous random walk with asymptotically zero drift 1/(4x).
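The drift quoted on this slide is a one-line computation; writing it out (my own reconstruction from the jump probabilities above):

```latex
\mathbb{E}\!\left[X_{t+1}-X_t \,\middle|\, X_t = x\right]
  = (+1)\left(\tfrac12 + \tfrac{1}{8x}\right)
  + (-1)\left(\tfrac12 - \tfrac{1}{8x}\right)
  = \frac{1}{4x} \;\longrightarrow\; 0 \qquad (x \to \infty).
```

The same computation with jump probabilities 1/2 ∓ 3/(8x) gives drift −3/(4x), which is the walk on the next slide.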

  6. Introduction (cont.) Or this one? [Figure: from site x, jump to x + 1 with probability 1/2 − 3/(8x) and to x − 1 with probability 1/2 + 3/(8x).] Another walk with asymptotically zero drift −3/(4x).

  7. Introduction (cont.) Or this combination? [Figure: walk on Z that, from a non-positive site, jumps to either neighbour with probability 1/2 and, from a positive site x, jumps to x + 1 with probability 1/2 + 1/(8x) and to x − 1 with probability 1/2 − 1/(8x).] Symmetric walk on the non-positive sites, non-homogeneous walk with asymptotically zero drift 1/(4x) on the positive sites.

  8. Introduction (cont.) I will describe answers to these questions. I will emphasize that the answers do not depend at all on the nearest-neighbour structure, bounded jumps, or even the Markov property. All that really matters are the first two moment functions of the increments, i.e.,
     E[X_{t+1} − X_t | X_t = x]   and   E[(X_{t+1} − X_t)^2 | X_t = x],
     and some regenerative structure for the process (so that excursions are well defined). First I will give a general overview of non-homogeneous random walks.
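As a hedged illustration (again not from the talk), the two moment functions can be estimated empirically from a simulated path; the walk below, with drift roughly 1/(4x), is just an example choice.

```python
import random
from collections import defaultdict

def step(x):
    """One step of an illustrative walk on Z+ with drift roughly 1/(4x) for x >= 1."""
    if x == 0:
        return 1
    return x + (1 if random.random() < 0.5 + 1.0 / (8 * x) else -1)

stats = defaultdict(lambda: [0, 0.0, 0.0])   # x -> [visits, sum of dx, sum of dx^2]
x = 0
for _ in range(10**6):
    y = step(x)
    dx = y - x
    s = stats[x]
    s[0] += 1
    s[1] += dx
    s[2] += dx * dx
    x = y

for site in sorted(stats)[:10]:
    n, s1, s2 = stats[site]
    # empirical mu(x) = E[X_{t+1} - X_t | X_t = x] and b(x) = E[(X_{t+1} - X_t)^2 | X_t = x]
    print(site, round(s1 / n, 4), round(s2 / n, 4))
```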

  9. Outline:
     1 Introduction
     2 From classical to non-homogeneous random walk
     3 One-dimensional case
     4 Illustration: A walk on Z
     5 Processes with non-integrable jumps
     6 Concluding remarks

  10. Random walk origins
     • Lord Rayleigh's theory of sound (1880s)
     • Louis Bachelier's thesis on random models of stock prices (1900)
     • Karl Pearson's theory of random migration (1905-06)
     • Einstein's theory of Brownian motion (1905-08)

  11. Simple random walk. Let X_t be the symmetric simple random walk (SRW) on Z^d, i.e., given X_1, ..., X_t, the new location X_{t+1} is uniformly distributed on the 2d lattice sites adjacent to X_t.
     Theorem (Pólya 1921). SRW is recurrent if d = 1 or d = 2, but transient if d ≥ 3.
     "A drunk man will find his way home, but a drunk bird may get lost forever." (Shizuo Kakutani)
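Pólya's theorem is about behaviour at infinite time, so simulation cannot prove it, but a finite-horizon Monte Carlo estimate (my own illustrative sketch) at least hints at the dichotomy.

```python
import random

def returns_within(d, n_steps):
    """Does a simple random walk on Z^d, started at the origin, revisit it within n_steps?"""
    x = [0] * d
    for _ in range(n_steps):
        i = random.randrange(d)                  # pick a coordinate ...
        x[i] += random.choice((-1, 1))           # ... so the step is uniform over the 2d neighbours
        if all(c == 0 for c in x):
            return True
    return False

for d in (1, 2, 3):
    trials = 1000
    freq = sum(returns_within(d, 10**4) for _ in range(trials)) / trials
    # near 1 for d = 1; creeps towards 1 only very slowly for d = 2; stays well below 1 for d = 3
    print(d, freq)
```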

  12. Lyapunov functions
     • There are several proofs of Pólya's theorem available, typically using combinatorics or electrical network theory.
     • These classical approaches are of limited use if one starts to generalize or perturb the model slightly.
     • Lamperti (1960) gave a very robust approach, based on the method of Lyapunov functions.
     • Reduce the d-dimensional problem to a 1-dimensional one by taking Z_t := ‖X_t‖.
     • Z_t = 0 if and only if X_t = 0, but the reduction of dimensionality comes at a (modest) price: Z_t is not in general a Markov process.

  13. Lyapunov functions (cont.) E.g. in d = 2, consider the two events {X_t = (3, 4)} and {X_t = (5, 0)}. Both imply Z_t = 5, but in only one case there is positive probability that Z_{t+1} = 6. [Figure: the two lattice points (3, 4) and (5, 0) with their neighbours, shown on the grid {−1, ..., 6}².] So our methods cannot rely on the Markov property.
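A quick check of this example (illustrative code, not from the slides): the norms of the four lattice neighbours of (3, 4) and of (5, 0) confirm that only the second configuration can reach norm 6 in one step. The helper neighbour_norms is my own.

```python
def neighbour_norms(x, y):
    """Euclidean norms of the four lattice neighbours of (x, y) in Z^2."""
    return sorted(((x + dx) ** 2 + (y + dy) ** 2) ** 0.5
                  for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))

print(neighbour_norms(3, 4))   # roughly [4.24, 4.47, 5.66, 5.83]: 6 is unreachable
print(neighbour_norms(5, 0))   # [4.0, 5.10, 5.10, 6.0]: 6 is reached with probability 1/4
```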

  14. Lyapunov functions (cont.)
     • Elementary calculations based on Taylor's theorem and properties of the increments Δ_n = X_{n+1} − X_n show that
       E[Z_{t+1} − Z_t | X_1, ..., X_t] = (1/(2 Z_t)) (1 − 1/d) + O(Z_t^{-2}),
       E[(Z_{t+1} − Z_t)^2 | X_1, ..., X_t] = 1/d + O(Z_t^{-1}).
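A sketch of the Taylor computation behind these expansions (my reconstruction, for SRW on Z^d with increment Δ = X_{t+1} − X_t uniform on the 2d unit vectors, so that E[Δ] = 0, ‖Δ‖ = 1 and E[(x · Δ)^2] = ‖x‖^2 / d):

```latex
\|x+\Delta\| - \|x\|
  = \frac{x\cdot\Delta}{\|x\|}
    + \frac{\|\Delta\|^{2}}{2\|x\|}
    - \frac{(x\cdot\Delta)^{2}}{2\|x\|^{3}}
    + O\!\left(\|x\|^{-2}\right).
```

Taking conditional expectations with x = X_t and Z_t = ‖X_t‖ kills the first term and turns the next two into (1/(2 Z_t)) (1 − 1/d); squaring the expansion, the term (x · Δ)^2 / ‖x‖^2 dominates and gives 1/d, with the error terms stated on the slide.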
