
Bayesian inference for partially observed Markov processes, with application to systems biology — PowerPoint PPT Presentation

Darren Wilkinson (http://tinyurl.com/darrenjw)
Outline: Stochastic modelling of dynamical systems; Bayesian inference; Particle MCMC; Summary and conclusions


1. Bayesian inference for partially observed Markov processes, with application to systems biology
Darren Wilkinson (http://tinyurl.com/darrenjw)
School of Mathematics & Statistics, Newcastle University, UK
Bayes–250, Informatics Forum, Edinburgh, UK, 5th–7th September 2011
Outline: Stochastic modelling of dynamical systems; Bayesian inference; Particle MCMC; Summary and conclusions

2. Systems biology modelling
Systems biology modelling uses accurate, high-resolution time-course data on a relatively small number of bio-molecules to parametrise carefully constructed mechanistic dynamic models of a process of interest, based on current biological understanding.
Traditionally, models were deterministic, based on a system of ODEs known as the reaction rate equations (RREs).
It is now increasingly accepted that biochemical network dynamics at the single-cell level are intrinsically stochastic.
The theory of stochastic chemical kinetics provides a solid foundation for describing network dynamics using a Markov jump process.

3. Stochastic chemical kinetics
Stochastic molecular approach: statistical-mechanical arguments lead to a Markov jump process in continuous time whose instantaneous reaction rates are directly proportional to the number of molecules of each reacting species.
Such dynamics can be simulated exactly on a computer using standard discrete-event simulation techniques.
The standard implementation of this strategy is known as the "Gillespie algorithm" (just discrete-event simulation), but there are several exact and approximate variants of this basic approach; a minimal sketch of the direct method is given below.
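The following is a minimal sketch of the Gillespie direct method for a general reaction network, written in Python. The hazard function h(x, c) and stoichiometry matrix S follow the notation introduced on slides 5 and 6; the function name and the commented Lotka-Volterra inputs are illustrative assumptions, not part of the original slides.

```python
import numpy as np

def gillespie(x0, S, hazards, c, t_max, rng=None):
    """Exact simulation of a Markov jump process (Gillespie direct method).

    x0      : length-u vector of initial molecule counts
    S       : u x v stoichiometry matrix (column i = net effect of reaction i)
    hazards : function (x, c) -> length-v vector of reaction hazards h(x, c)
    c       : length-v vector of rate constants
    t_max   : simulate over [0, t_max]
    """
    rng = rng or np.random.default_rng()
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while True:
        h = hazards(x, c)
        h0 = h.sum()
        if h0 <= 0:                          # nothing can fire: process absorbed
            break
        t += rng.exponential(1.0 / h0)       # time to next reaction event
        if t > t_max:
            break
        i = rng.choice(len(h), p=h / h0)     # which reaction fires
        x += S[:, i]                         # apply its net effect
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Illustrative use with the Lotka-Volterra system of the next slide
# (rate constants chosen arbitrarily for this sketch):
# S = np.array([[1, -1, 0], [0, 1, -1]])
# lv_hazards = lambda x, c: np.array([c[0]*x[0], c[1]*x[0]*x[1], c[2]*x[1]])
# times, states = gillespie([50, 100], S, lv_hazards, [1.0, 0.005, 0.6], 30.0)
```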

4. Lotka-Volterra system
Trivial (familiar) example from population dynamics (in reality, the "reactions" will be elementary biochemical reactions taking place inside a cell). X denotes prey, Y denotes predator.
Reactions:
R1: X → 2X (prey reproduction)
R2: X + Y → 2Y (prey-predator interaction)
R3: Y → ∅ (predator death)
We can re-write this using matrix notation.

5. Forming the matrix representation
The L-V system in tabular form (the LHS, RHS and net-effect columns give the counts of X and Y respectively):

Reaction   Rate law h(·, c)   LHS (X, Y)   RHS (X, Y)   Net effect (X, Y)
R1         c1 x               1, 0         2, 0          1,  0
R2         c2 x y             1, 1         0, 2         -1,  1
R3         c3 y               0, 1         0, 0          0, -1

Call the 3 × 2 net-effect (or reaction) matrix N. The matrix S = N′ is the stoichiometry matrix of the system. Typically both are sparse. The SVD of S (or N) is of interest for structural analysis of the system dynamics; a small sketch follows.
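A small NumPy sketch of the matrices just described for the L-V system; the variable names are mine, and the SVD call simply illustrates the structural analysis mentioned on the slide.

```python
import numpy as np

# Reactant (LHS) and product (RHS) matrices for the Lotka-Volterra system:
# rows = reactions (R1, R2, R3), columns = species (X, Y)
P = np.array([[1, 0],
              [1, 1],
              [0, 1]])
Q = np.array([[2, 0],
              [0, 2],
              [0, 0]])

N = Q - P      # 3 x 2 net-effect (reaction) matrix, as in the table above
S = N.T        # 2 x 3 stoichiometry matrix

# Structural analysis via the SVD of S: left singular vectors with zero
# singular value would correspond to conserved linear combinations of
# species; for the L-V system both singular values are non-zero.
U, sigma, Vt = np.linalg.svd(S)
print(N)
print(sigma)
```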

6. Stochastic chemical kinetics
u species X_1, ..., X_u and v reactions R_1, ..., R_v:
R_i: p_{i1} X_1 + ... + p_{iu} X_u → q_{i1} X_1 + ... + q_{iu} X_u,  i = 1, ..., v.
In matrix form: P X → Q X (P and Q are sparse). S = (Q − P)′ is the stoichiometry matrix of the system.
X_{jt} denotes the number of molecules of X_j at time t, and X_t = (X_{1t}, ..., X_{ut})′.
Reaction R_i has hazard (or rate law, or propensity) h_i(X_t, c_i), where c_i is a rate parameter, c = (c_1, ..., c_v)′ and h(X_t, c) = (h_1(X_t, c_1), ..., h_v(X_t, c_v))′; the system evolves as a Markov jump process.
For mass-action stochastic kinetics,
h_i(X_t, c_i) = c_i \prod_{j=1}^{u} \binom{X_{jt}}{p_{ij}},  i = 1, ..., v;
a short sketch of this hazard computation follows.
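A short Python sketch of the mass-action hazard vector h(X_t, c) defined above; the binomial-coefficient form mirrors the formula on the slide, while the function name and the example call are my own assumptions.

```python
import numpy as np
from scipy.special import comb

def mass_action_hazards(x, c, P):
    """Mass-action hazards h_i(x, c_i) = c_i * prod_j C(x_j, p_ij).

    x : length-u vector of current molecule counts X_t
    c : length-v vector of rate constants
    P : v x u matrix of reactant stoichiometries p_ij
    """
    # comb(x_j, p_ij) counts the distinct ways of choosing the p_ij molecules
    # of species j required by reaction i from the x_j available
    return np.asarray(c) * np.prod(comb(x, P), axis=1)

# For the Lotka-Volterra system this reduces to (c1*x, c2*x*y, c3*y):
# mass_action_hazards([50, 100], [1.0, 0.005, 0.6],
#                     np.array([[1, 0], [1, 1], [0, 1]]))
```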

7. The Lotka-Volterra model
[Figure: simulated Lotka-Volterra dynamics — time-series plots of Y1 (prey) and Y2 (predator), together with the corresponding phase-plane plots of Y2 against Y1.]

8. Example — genetic auto-regulation
[Figure: schematic of the auto-regulatory gene network — gene g on DNA, transcription by RNAP to mRNA r, translation to protein P, dimerisation to P2, and repression of the gene by the dimer P2.]

9. Biochemical reactions
Simplified view:
g + P2 ↔ g·P2 (repression)
g → g + r (transcription)
r → r + P (translation)
2P ↔ P2 (dimerisation)
r → ∅ (mRNA degradation)
P → ∅ (protein degradation)
A sketch encoding this network in the matrix form of slide 6 follows.
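As an illustration, the reaction list above can be written in the (P, Q) matrix form of slide 6; the species ordering, the splitting of the two reversible reactions into forward and backward steps, and the variable names are assumptions made for this sketch.

```python
import numpy as np

# Species ordering (an assumption): (g.P2, g, r, P, P2)
species = ["g.P2", "g", "r", "P", "P2"]

# Eight irreversible reactions: the two reversible reactions above are
# split into forward/backward pairs
P_mat = np.array([
    [0, 1, 0, 0, 1],   # g + P2 -> g.P2   (repression, binding)
    [1, 0, 0, 0, 0],   # g.P2   -> g + P2 (repression, unbinding)
    [0, 1, 0, 0, 0],   # g      -> g + r  (transcription)
    [0, 0, 1, 0, 0],   # r      -> r + P  (translation)
    [0, 0, 0, 2, 0],   # 2P     -> P2     (dimerisation, forward)
    [0, 0, 0, 0, 1],   # P2     -> 2P     (dimerisation, backward)
    [0, 0, 1, 0, 0],   # r      -> 0      (mRNA degradation)
    [0, 0, 0, 1, 0],   # P      -> 0      (protein degradation)
])
Q_mat = np.array([
    [1, 0, 0, 0, 0],
    [0, 1, 0, 0, 1],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 2, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
])
S = (Q_mat - P_mat).T   # 5 x 8 stoichiometry matrix, as defined on slide 6
```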

10. Simulated realisation of the auto-regulatory network
[Figure: simulated time courses of Rna, P and P2 over 5,000 time units.]

11. Partially observed Markov process (POMP) models
Continuous-time Markov process: X = {X_s | s ≥ 0} (for now, we suppress dependence on the parameters, θ).
Think about integer-time observations (the extension to arbitrary observation times is trivial): for t ∈ N, let X_{(t)} = {X_s | t − 1 < s ≤ t} denote the sample path over the t-th interval.
Sample-path likelihoods such as π(x_{(t)} | x_{t−1}) can often (but not always) be computed, though they are often computationally difficult; discrete-time transition densities such as π(x_t | x_{t−1}) are typically intractable.
Partial observations: D = {d_t | t = 1, 2, ..., T}, where d_t | X_t = x_t ∼ π(d_t | x_t), t = 1, ..., T, and we assume that π(d_t | x_t) can be evaluated directly (a simple measurement-error model).
A sketch of simulating data from such a model follows.
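A sketch of how data might be simulated from such a POMP: the jump process is forward-simulated over unit intervals (e.g. with the gillespie sketch from slide 3, passed in here as a step function) and then observed with noise. The Gaussian measurement-error model and its standard deviation tau are illustrative assumptions; the slide only requires that π(d_t | x_t) can be evaluated.

```python
import numpy as np

def simulate_pomp(x0, step, T, tau, rng=None):
    """Simulate latent states x_1,...,x_T at integer times plus noisy data d_t.

    step : function (x, rng) -> x', a draw from pi(x_t | x_{t-1}) obtained by
           forward-simulating the jump process over one unit of time
    tau  : measurement noise s.d.; d_t ~ N(x_t, tau^2 I) is purely illustrative
    """
    rng = rng or np.random.default_rng()
    x = np.array(x0, dtype=float)
    xs, ds = [], []
    for _ in range(T):
        x = step(x, rng)                                    # latent transition
        xs.append(x.copy())
        ds.append(x + tau * rng.standard_normal(x.shape))   # noisy observation
    return np.array(xs), np.array(ds)
```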

12. Bayesian inference for POMP models
Most "obvious" MCMC algorithms will attempt to impute (at least) the skeleton of the Markov process, X_0, X_1, ..., X_T.
This will typically require evaluation of the intractable discrete-time transition likelihoods, and this is the problem...
Two related strategies:
Data augmentation: "fill in" the entire process in some way, typically exploiting the fact that the sample-path likelihoods are tractable. This works in principle, but is difficult to "automate", and is exceptionally computationally intensive due to the need to store and evaluate likelihoods of continuous-time sample paths.
Likelihood-free (AKA plug-and-play): exploits the fact that it is possible to forward simulate from π(x_t | x_{t−1}) (typically by simulating the whole path from π(x_{(t)} | x_{t−1})), even if these densities cannot be evaluated; a sketch of one such update is given below.
Likelihood-free is really just a special kind of augmentation strategy.
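One standard way of exploiting forward simulation is a bootstrap-style particle update, sketched below, which anticipates the particle MCMC material of the next section: particles are propagated purely by simulation from π(x_t | x_{t−1}) and weighted by the measurement density π(d_t | x_t). The Gaussian measurement model is again only an illustrative assumption, as are the function and variable names.

```python
import numpy as np
from scipy.stats import norm

def likelihood_free_update(particles, d_t, step, tau, rng=None):
    """One likelihood-free (bootstrap) filter step.

    Each particle x_{t-1} is propagated by forward simulation via step(x, rng)
    -- no transition density is ever evaluated -- and then weighted by the
    evaluable measurement density pi(d_t | x_t), here N(x_t, tau^2 I).
    """
    rng = rng or np.random.default_rng()
    props = [step(x, rng) for x in particles]                 # forward simulate
    log_w = np.array([norm.logpdf(d_t, loc=x, scale=tau).sum() for x in props])
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(len(props), size=len(props), p=w)        # resample
    # log_w also feeds the marginal likelihood estimate used later by pMCMC
    return [props[i] for i in idx], log_w
```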

13. Bayesian inference
Let π(x | c) denote the (complex) likelihood of the simulation model.
Let π(D | x, τ) denote the (simple) measurement-error model.
Put θ = (c, τ), and let π(θ) be the prior for the model parameters.
The joint density can be written
π(θ, x, D) = π(θ) π(x | θ) π(D | x, θ).
Interest is in the posterior distribution π(θ, x | D).
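For completeness, a one-line application of Bayes' theorem connects the joint density above to the posterior of interest (this step is implicit on the slide):

```latex
\pi(\theta, x \mid \mathcal{D})
  = \frac{\pi(\theta)\,\pi(x \mid \theta)\,\pi(\mathcal{D} \mid x, \theta)}{\pi(\mathcal{D})}
  \propto \pi(\theta)\,\pi(x \mid \theta)\,\pi(\mathcal{D} \mid x, \theta).
```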
