The R-INLA package Latent Gaussian Models • Dynamic linear models • Stochastic volatility • Generalised linear (mixed) models • Generalised additive (mixed) models • Measurement error models • Spline smoothing • Semiparametric regression • Space-varying (semiparametric) regression models • Models for disease mapping • Log-Gaussian Cox processes • Model-based geostatistics • Spatio-temporal models • Survival analysis • and many more
The R-INLA package Latent Gaussian Models The aim: approximate the posterior marginals. Compute from \pi(x, \theta \mid y) \propto \pi(\theta)\, \pi(x \mid \theta) \prod_{i \in \mathcal{I}} \pi(y_i \mid x_i, \theta) the posterior marginals \pi(x_i \mid y), for some or all i, and/or \pi(\theta_i \mid y), for some or all i.
The R-INLA package Latent Gaussian Models End result • Can we compute (approximate) marginals directly? • YES! • Gain • Huge speedup & accuracy • The ability to treat LGMs properly
The R-INLA package The main idea Smoothing noisy observations (I) Observations y_i = m(i) + \epsilon_i, i = 1, \dots, n, with iid Gaussian noise \epsilon_i of known precision. We assume m(i) is a smooth function of i.
The R-INLA package The main idea Smoothing noisy observations (II) Simulated data (the figure shows the scatter plot of y against idx):

n = 50
idx = 1:n
fun = 100*((idx-n/2)/n)^3
y = fun + rnorm(n)
plot(idx, y)
The R-INLA package The main idea Smoothing noisy observations (III) Likelihood: Gaussian observations with known precision \tau_0, y_i \mid x_i, \theta \sim N(x_i, \tau_0). Latent: a Gaussian model (a second-order random walk) for the smooth function, \pi(x \mid \theta) \propto \theta^{(n-2)/2} \exp\left( -\frac{\theta}{2} \sum_{i=3}^{n} (x_i - 2 x_{i-1} + x_{i-2})^2 \right). Hyperparameter: the smoothing parameter \theta, which we assign a \Gamma(a, b) prior, \pi(\theta) \propto \theta^{a-1} \exp(-b \theta), \theta > 0.
The R-INLA package The main idea Smoothing noisy observations (IV) Since x, y \mid \theta \sim N(\cdot, \cdot), we can compute (numerically) all marginals, using \pi(\theta \mid y) \propto \frac{\pi(x, y \mid \theta)\, \pi(\theta)}{\pi(x \mid y, \theta)}, where both \pi(x, y \mid \theta) and \pi(x \mid y, \theta) are Gaussian, and x \mid y, \theta \sim N(\cdot, \cdot), so that \pi(x_i \mid y) = \int \pi(x_i \mid \theta, y)\, \pi(\theta \mid y)\, d\theta, where \pi(x_i \mid \theta, y) is Gaussian.
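The scheme above can be sketched in a few lines of base R: evaluate \pi(\theta \mid y) on a grid via the Gaussian identity, then mix the conditional Gaussians for x_1. This is a minimal illustration, not the INLA implementation; the grid, the prior values a = 1, b = 0.01, and the noise precision tau0 = 1 are illustrative assumptions.

```r
# Grid computation of pi(theta | y) and pi(x_1 | y) for the RW2 smoothing model
n <- 50; idx <- 1:n
set.seed(1)
y <- 100 * ((idx - n/2)/n)^3 + rnorm(n)
tau0 <- 1; a <- 1; b <- 0.01                 # illustrative values

D <- diff(diag(n), differences = 2)          # second-difference operator
R <- crossprod(D)                            # RW2 structure matrix

log.post <- function(theta) {
  Q  <- theta * R + tau0 * diag(n)           # precision of x | y, theta
  mu <- solve(Q, tau0 * y)                   # mean of x | y, theta
  # log pi(theta) + log pi(x, y | theta) - log pi(x | y, theta), at x = mu
  (a - 1 + (n - 2)/2) * log(theta) - b * theta -
    (theta/2) * drop(t(mu) %*% R %*% mu) -
    (tau0/2) * sum((y - mu)^2) -
    0.5 * as.numeric(determinant(Q, logarithm = TRUE)$modulus)
}

theta.grid <- seq(0.1, 20, length.out = 60)  # equally spaced grid
lp <- sapply(theta.grid, log.post)
w  <- exp(lp - max(lp)); w <- w / sum(w)     # normalised weights

# pi(x_1 | y): weighted mixture of the conditional Gaussians
x1 <- seq(-14, -2, length.out = 200)
dens <- rep(0, length(x1))
for (k in seq_along(theta.grid)) {
  Q  <- theta.grid[k] * R + tau0 * diag(n)
  mu <- solve(Q, tau0 * y)
  s  <- sqrt(solve(Q)[1, 1])
  dens <- dens + w[k] * dnorm(x1, mu[1], s)
}
plot(x1, dens, type = "l")
```

The weighted mixture in the final loop is exactly what the next figures show: the per-theta marginals of x[1], their weighted versions, and the resulting posterior marginal.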
The R-INLA package The main idea Posterior marginal for theta [figure: exp(log.dens) against log.prec, evaluated at the grid points]
The R-INLA package The main idea Posterior marginal for theta, interpolated [figure: exp(log.dens) against log.prec]
The R-INLA package The main idea Posterior marginals for x[1] for each theta (unweighted) [figure: densities against x]
The R-INLA package The main idea Posterior marginals for x[1] for each theta (weighted) [figure: densities against x]
The R-INLA package The main idea Posterior marginals for x[1] [figure: density against x]
The R-INLA package The main idea Extensions This is the basic idea behind INLA. It is really really simple. However, we need to extend this basic idea so we can deal with • More than one hyperparameter • Non-Gaussian observations ...the devil is in the details!
The R-INLA package The tools The tools • Precision matrices • Sparse matrices/GMRFs/Markov • Laplace approximations
The R-INLA package The tools Precision matrices Hierarchical models. First layer: x \sim N(0, Q_x). Second layer: y \mid x \sim N(x, Q_y). Then \mathrm{Prec}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} Q_x + Q_y & -Q_y \\ -Q_y & Q_y \end{pmatrix}. Very efficient, in both computation and storage.
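The block formula is easy to check numerically; a small sketch with arbitrary diagonal precision matrices Qx and Qy (illustrative values):

```r
# Verify Prec(x, y) = [Qx + Qy, -Qy; -Qy, Qy] by inverting the joint covariance
n <- 3
Qx <- diag(3, n); Qy <- diag(2, n)

# Joint covariance of (x, y): Var(x) = Qx^-1, Cov(x, y) = Qx^-1,
# Var(y) = Var(x) + Qy^-1, since y = x + noise with precision Qy
Sx <- solve(Qx); Sy <- Sx + solve(Qy)
Sigma <- rbind(cbind(Sx, Sx), cbind(Sx, Sy))

# Block form claimed on the slide
Prec <- rbind(cbind(Qx + Qy, -Qy), cbind(-Qy, Qy))

max(abs(solve(Sigma) - Prec))   # numerically zero
```

Note how the joint covariance is dense while the joint precision keeps the sparsity of Qx and Qy; this is the "very efficient" point above.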
The R-INLA package The tools Sparse matrices Sparse matrices/GMRFs/Markov Conditional independence gives sparsity: x_i \perp x_j \mid x_{-ij} \iff Q_{ij} = 0. In most cases, only O(n) of the n(n+1)/2 elements in Q are non-zero.
The R-INLA package The tools Sparse matrices Example (I) Auto-regressive model of order p, x_t = \phi_1 x_{t-1} + \cdots + \phi_p x_{t-p} + \epsilon_t; then Q is a band matrix with bandwidth p. [Figure: the precision matrix (banded, sparse) next to the covariance matrix (dense)]
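The AR(1) case (p = 1) is easy to see directly: the stationary covariance is dense, but its inverse is tridiagonal. A small sketch, with an illustrative choice of phi:

```r
# AR(1): dense covariance, tridiagonal (bandwidth-1) precision
n <- 6; phi <- 0.6
Sigma <- phi^abs(outer(1:n, 1:n, "-")) / (1 - phi^2)  # stationary AR(1) covariance
Q <- solve(Sigma)
round(Q, 3)   # non-zero only on the diagonal and the first off-diagonals
```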
The R-INLA package The tools Sparse matrices Example (II) Gaussian models for areal data. [Figure: map of areal regions with a neighbourhood-based Gaussian model]
The R-INLA package The tools Sparse matrices Example (III) Gaussian models on the sphere. (Have to “make” it Markov!)
The R-INLA package The tools Sparse matrices Numerical methods for sparse matrices • Only O(n) of the O(n^2) terms are non-zero • Computational costs (factorisation): • O(n) in time (temporal models) • O(n^{3/2}) in space (spatial models) • O(n^2) in space-time (spatio-temporal models) • Tasks: Q = L L^T, solve Q x = b, compute diag(Q^{-1}), and log|Q(\theta)|
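Most of these tasks are available directly in R through the Matrix package (shipped with R); a sketch using the tridiagonal AR(1) precision from Example (I), with illustrative values for n and phi:

```r
# Sparse factorisation, solve, and log-determinant for a band precision matrix
library(Matrix)
n <- 1000; phi <- 0.6
Q <- bandSparse(n, k = 0:1, symmetric = TRUE,
                diagonals = list(c(1, rep(1 + phi^2, n - 2), 1),
                                 rep(-phi, n - 1)))
L <- Cholesky(Q)                                  # sparse Q = L L'
b <- rnorm(n)
x <- solve(L, b)                                  # solves Q x = b via the factor
ldet <- determinant(Q, logarithm = TRUE)$modulus  # log |Q|
```

The factor L stays sparse because Q is banded, which is what makes the O(n) cost for temporal models possible.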
The R-INLA package The tools The Laplace approximation The Laplace approximation: The classic case Compute an approximation to the integral \int \exp(n g(x))\, dx, where n is the parameter going to \infty. Let x_0 be the mode of g(x) and assume g(x_0) = 0; then g(x) = \frac{1}{2} g''(x_0)(x - x_0)^2 + \cdots.
The R-INLA package The tools The Laplace approximation The Laplace approximation: The classic case... Then \int \exp(n g(x))\, dx = \sqrt{\frac{2\pi}{n(-g''(x_0))}} + \cdots. Error analysis gives \frac{\text{Estimate}(n)}{\text{True}} = 1 + O(1/n), so the relative error is O(1/n).
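The O(1/n) relative error is easy to verify numerically; a sketch with the illustrative choice g(x) = cos(x) - 1, which has mode x_0 = 0, g(x_0) = 0, and g''(x_0) = -1:

```r
# Laplace approximation vs numerical quadrature for g(x) = cos(x) - 1
laplace <- function(n) sqrt(2 * pi / n)   # sqrt(2*pi / (n * (-g''(x0))))
exact   <- function(n)
  integrate(function(x) exp(n * (cos(x) - 1)), -pi, pi)$value

for (n in c(5, 50, 500))
  cat(n, exact(n) / laplace(n), "\n")     # ratio approaches 1 at rate O(1/n)
```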
The R-INLA package The tools The Laplace approximation Errors in the approximations Result: 1 With n repeated measurements y of the same x, \frac{\tilde\pi(\theta \mid y_n)}{\pi(\theta \mid y_n)} = 1 + O(n^{-3/2}) after re-normalisation. • The relative error is a very nice property! • The error rate is impressive! • Unfortunately, the assumptions made are usually not valid for LGMs, but... 1 Tierney & Kadane, JASA, 1986
The R-INLA package The R-INLA package The R-INLA package • A front end in R to define LGMs and to do approximate Bayesian analysis using INLA • The project is located at www.r-inla.org
The R-INLA package The R-INLA package The interface

result = inla(formula, data = data, family = family, ...)
summary(result)
plot(result)

etc...
The R-INLA package The R-INLA package Formula The formula is mostly as "usual": y ~ 1 + x1 + x2 + x3:x4 + f(z1, model=...) + f(z2, model=...) • LHS: the response • RHS: the linear predictor • "fixed effects": x1, x2, ... • "random effects": indexed by z1, z2 • f() defines some Gaussian model!
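The smoothing example from the start of these slides fits directly into this interface; a sketch assuming the INLA package is installed (see www.r-inla.org), with an "rw2" model for the smooth function and the observation precision fixed at the known value tau0 = 1 (the data and prior values are illustrative):

```r
library(INLA)
n <- 50; idx <- 1:n
y <- 100 * ((idx - n/2)/n)^3 + rnorm(n)

# RW2 latent model for m(i); fix the Gaussian observation precision at
# tau0 = 1 (initial is on the log scale) since it is assumed known
result <- inla(y ~ -1 + f(idx, model = "rw2"),
               data = data.frame(y = y, idx = idx),
               family = "gaussian",
               control.family = list(hyper = list(
                 prec = list(initial = log(1), fixed = TRUE))))
summary(result)
plot(result)
```

The posterior marginals of the smooth function and of the smoothing hyperparameter then come out in result$marginals.random and result$marginals.hyperpar, matching the quantities computed by hand earlier.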