APTS-ASP 1
APTS Applied Stochastic Processes
Nicholas Georgiou 1 & Matt Roberts 2
nicholas.georgiou@durham.ac.uk and mattiroberts@gmail.com
(Some slides originally produced by Wilfrid Kendall, Stephen Connor, Christina Goldschmidt and Amanda Turner)
1 Department of Mathematical Sciences, Durham University
2 Probability Laboratory, University of Bath
APTS Southampton, 30th March–3rd April 2020

APTS-ASP 2
◮ Markov chains and reversibility
◮ Renewal processes and stationarity
◮ Martingales
◮ Martingale convergence
◮ Recurrence
◮ Foster-Lyapunov criteria
◮ Cutoff

APTS-ASP 3
Introduction
Learning outcomes
What you should be able to do after working through this module.
After successfully completing this module an APTS student will be able to:
◮ describe and calculate with the notion of a reversible Markov chain, both in discrete and continuous time;
◮ describe the basic properties of discrete-parameter martingales and check whether the martingale property holds;
◮ recall and apply some significant concepts from martingale theory;
◮ explain how to use Foster-Lyapunov criteria to establish recurrence and speed of convergence to equilibrium for Markov chains.

APTS-ASP 4
Introduction
Two notions in probability
“. . . you never learn anything unless you are willing to take a risk and tolerate a little randomness in your life.”
– Heinz Pagels, The Dreams of Reason, 1988.
This module is intended to introduce students to two important notions in stochastic processes — reversibility and martingales — identifying the basic ideas, outlining the main results and giving a flavour of some significant ways in which these notions are used in statistics.
These notes outline the content of the module; they represent work-in-progress and will grow, be corrected, and be modified as time passes. Comments and suggestions are most welcome! Please feel free to e-mail us.
APTS-ASP 5
Introduction
An important instruction
First of all, read the preliminary notes . . .
They provide notes and examples concerning a basic framework covering:
◮ Probability and conditional probability;
◮ Expectation and conditional expectation;
◮ Discrete-time countable-state-space Markov chains;
◮ Continuous-time countable-state-space Markov chains;
◮ Poisson processes.

APTS-ASP 6
Introduction
Books
Some useful texts (I)
“There is no such thing as a moral or an immoral book. Books are well written or badly written.”
– Oscar Wilde (1854–1900), The Picture of Dorian Gray, 1891, preface
The next three slides list various useful textbooks. At increasing levels of mathematical sophistication:
1. Häggström (2002) “Finite Markov chains and algorithmic applications”.
2. Grimmett and Stirzaker (2001) “Probability and random processes”.
3. Breiman (1992) “Probability”.
4. Norris (1998) “Markov chains”.
5. Ross (1996) “Stochastic processes”.
6. Williams (1991) “Probability with martingales”.

APTS-ASP 7
Introduction
Books
Some useful texts (II): free on the web
1. Doyle and Snell (1984) “Random walks and electric networks”, available on web at www.arxiv.org/abs/math/0001057.
2. Kelly (1979) “Reversibility and stochastic networks”, available on web at http://www.statslab.cam.ac.uk/~frank/BOOKS/kelly_book.html.
3. Kindermann and Snell (1980) “Markov random fields and their applications”, available on web at www.ams.org/online_bks/conm1/.
4. Meyn and Tweedie (1993) “Markov chains and stochastic stability”, available on web at www.probability.ca/MT/.
5. Aldous and Fill (2001) “Reversible Markov Chains and Random Walks on Graphs”, only available on web at www.stat.berkeley.edu/~aldous/RWG/book.html.

APTS-ASP 8
Markov chains and reversibility
“People assume that time is a strict progression of cause to effect, but actually from a non-linear, non-subjective viewpoint, it’s more like a big ball of wibbly-wobbly, timey-wimey . . . stuff.”
– The Tenth Doctor, Doctor Who, in the episode “Blink”, 2007
APTS-ASP 9
Markov chains and reversibility
Introduction to reversibility
A simple example
Consider simple symmetric random walk X on {0, 1, . . . , k}, with “prohibition” boundary conditions: the moves 0 → −1 and k → k+1 are replaced by 0 → 0 and k → k.
1. X is irreducible and aperiodic, so there is a unique equilibrium distribution π = (π_0, π_1, . . . , π_k).
2. The equilibrium equations π P = π are solved by π_i = 1/(k+1) for all i.
3. Consider X in equilibrium: then
   P[X_{n−1} = x, X_n = y] = P[X_{n−1} = x] P[X_n = y | X_{n−1} = x] = π_x p_{x,y}
   and
   P[X_n = x, X_{n−1} = y] = π_y p_{y,x} = π_x p_{x,y}
   (using the fact that here π is uniform and p_{x,y} = p_{y,x}).
4. In equilibrium, the chain looks the same forwards and backwards. We say that the chain is reversible.
ANIMATION

APTS-ASP 10
Markov chains and reversibility
Reminder: convergence to equilibrium
Recall from the preliminary notes that if a Markov chain X on a countable state space (in discrete time) is
◮ irreducible,
◮ aperiodic (only an issue in discrete time),
◮ positive recurrent (only an issue for infinite state spaces),
then
P[X_n = i | X_0 = j] → π_i as n → ∞ for all states i,
where π is the unique solution to π P = π such that Σ_i π_i = 1.

APTS-ASP 11
Markov chains and reversibility
Introduction to reversibility
Reversibility
Definition
Suppose that (X_{n−k})_{0≤k≤n} and (X_k)_{0≤k≤n} have the same distribution for every n. Then we say that X is reversible.
ANIMATION

APTS-ASP 12
Markov chains and reversibility
Introduction to reversibility
Detailed balance
1. Generalising the calculation we did for the random walk shows that a discrete-time Markov chain is reversible if it starts from equilibrium and the detailed balance equations hold:
   π_x p_{x,y} = π_y p_{y,x}.
2. If one can solve for π in π_x p_{x,y} = π_y p_{y,x}, then it is easy to show that π P = π.
3. So, if one can solve the detailed balance equations, and if the solution can be normalized to have unit total probability, then the result also solves the equilibrium equations.
4. In continuous time we instead require π_x q_{x,y} = π_y q_{y,x}; if we can solve this system of equations, then π Q = 0.
5. From a computational point of view, it is usually worth trying to solve the (easier) detailed balance equations first; if these are insoluble then revert to the more complicated π P = π or π Q = 0.
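The detailed balance recipe above can be checked numerically for the boundary random walk. A minimal sketch (Python with NumPy; the choice k = 5 and the way the transition matrix is assembled are illustrative, not part of the original notes):

```python
import numpy as np

# Transition matrix of simple symmetric random walk on {0, ..., k}
# with "prohibition" boundaries: blocked moves are replaced by staying put.
k = 5
P = np.zeros((k + 1, k + 1))
for x in range(k + 1):
    for step in (-1, 1):
        y = x + step
        if 0 <= y <= k:
            P[x, y] += 0.5
        else:
            P[x, x] += 0.5  # prohibited move: stay where you are

pi = np.full(k + 1, 1.0 / (k + 1))  # candidate equilibrium: uniform

# Detailed balance: pi_x p_{x,y} == pi_y p_{y,x} for all x, y,
# i.e. the "flow" matrix F[x, y] = pi_x p_{x,y} is symmetric.
F = pi[:, None] * P
assert np.allclose(F, F.T)

# Detailed balance implies global balance: pi P == pi.
assert np.allclose(pi @ P, pi)
print("uniform pi satisfies detailed balance, hence pi P = pi")
```

The same two checks (symmetry of the flow matrix, then π P = π) apply to any finite chain for which a candidate π is proposed.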
APTS-ASP 13
Markov chains and reversibility
A key theorem
Detailed balance and reversibility
Definition
The Markov chain X satisfies detailed balance if:
Discrete time: there is a non-trivial solution of π_x p_{x,y} = π_y p_{y,x};
Continuous time: there is a non-trivial solution of π_x q_{x,y} = π_y q_{y,x}.
Theorem
The irreducible Markov chain X satisfies detailed balance, with a solution {π_x} that can be normalized so that Σ_x π_x = 1, if and only if {π_x} is an equilibrium distribution for X and X started in equilibrium is statistically the same whether run forwards or backwards in time.

APTS-ASP 14
Markov chains and reversibility
A key theorem
We will now consider progressively more and more complicated Markov chains:
◮ the M/M/1 queue;
◮ a discrete-time chain on an 8 × 8 state space;
◮ Gibbs samplers;
◮ and Metropolis-Hastings samplers (briefly).

APTS-ASP 15
Markov chains and reversibility
Queuing for insight
M/M/1 queue
Here is a continuous-time example, the M/M/1 queue. We have
◮ Arrivals: x → x + 1 at rate λ;
◮ Departures: x → x − 1 at rate µ if x > 0.
Detailed balance gives µπ_x = λπ_{x−1}, and therefore when λ < µ (stability) the equilibrium distribution is π_x = ρ^x (1 − ρ) for x = 0, 1, . . . , where ρ = λ/µ (the traffic intensity).
ANIMATION
Reversibility is more than a computational device: it tells us that if a stable M/M/1 queue is in equilibrium then people leave according to a Poisson process of rate λ. (This is known as Burke’s theorem.) Hence, if a stable M/M/1 queue feeds into another stable ·/M/1 queue, then in equilibrium the second queue on its own behaves as an M/M/1 queue in equilibrium.

APTS-ASP 16
Markov chains and reversibility
A simple multidimensional example
Random chess (Aldous and Fill 2001, Ch. 1, Ch. 3 § 2)
Example (A mean knight’s tour)
Place a chess knight at the corner of a standard 8 × 8 chessboard. Move it randomly, at each move choosing uniformly from the available legal chess moves, independently of the past.
1. Is the resulting Markov chain periodic?
   ANIMATION (What if you sub-sample at even times?)
2. What is the equilibrium distribution?
   ANIMATION (Use detailed balance)
3. What is the mean time till the knight returns to its starting point?
   (Inverse of equilibrium probability)
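The M/M/1 detailed balance calculation can also be verified numerically. A small sketch (Python with NumPy; the rates λ = 1, µ = 2 and the truncation of the infinite state space at N are my own choices for the check, not part of the notes):

```python
import numpy as np

lam, mu = 1.0, 2.0   # arrival and service rates; lam < mu gives stability
rho = lam / mu       # traffic intensity
N = 200              # truncate the infinite state space for the check

# Equilibrium candidate from detailed balance: pi_x = rho^x (1 - rho).
pi = (1 - rho) * rho ** np.arange(N + 1)

# Detailed balance across each edge (x-1) <-> x: lam * pi_{x-1} == mu * pi_x.
x = np.arange(1, N + 1)
assert np.allclose(lam * pi[x - 1], mu * pi[x])

# Build the (truncated) generator Q and check the global balance pi Q = 0.
Q = np.zeros((N + 1, N + 1))
for s in range(N + 1):
    if s < N:
        Q[s, s + 1] = lam   # arrival
    if s > 0:
        Q[s, s - 1] = mu    # departure
    Q[s, s] = -Q[s].sum()   # diagonal makes row sums zero
assert np.allclose(pi @ Q, 0.0, atol=1e-10)
print("geometric pi satisfies detailed balance and pi Q = 0")
```

Because detailed balance holds edge by edge, π Q = 0 holds exactly for the truncated birth-death generator; no normalization of π is needed for that check.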
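For the knight's tour, the hints can be followed concretely: a random walk on a graph satisfies detailed balance with π_v proportional to deg(v), so the mean return time to a square is (Σ_w deg(w)) / deg(v). A short sketch (Python; the coordinate convention and the helper `deg` are my own):

```python
from fractions import Fraction

# The eight knight moves on an 8x8 board, squares indexed by (row, col).
jumps = [(1, 2), (2, 1), (-1, 2), (-2, 1),
         (1, -2), (2, -1), (-1, -2), (-2, -1)]

def deg(square):
    """Number of legal knight moves from a square (its graph degree)."""
    r, c = square
    return sum(0 <= r + dr < 8 and 0 <= c + dc < 8 for dr, dc in jumps)

# For random walk on a graph, detailed balance holds with
# pi_v = deg(v) / (2|E|), since pi_v * (1/deg(v)) is the same on both
# sides of every edge.
total = sum(deg((r, c)) for r in range(8) for c in range(8))  # = 2|E|
pi_corner = Fraction(deg((0, 0)), total)

# Mean return time to a state equals 1/pi for a positive recurrent chain.
print(total)           # 336
print(1 / pi_corner)   # 168
```

The corner has degree 2 and the degrees sum to 336, so the mean number of moves for the knight to return to its corner is 336/2 = 168.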