Approximate Bayesian inference for latent Gaussian models

Approximate Bayesian inference for latent Gaussian models
Håvard Rue, with S. Martino and N. Chopin
Department of Mathematical Sciences, NTNU, Norway
December 4, 2009


Latent Gaussian models: Main ideas

Construct the approximations to

  1. π(θ | y)
  2. π(x_i | θ, y)

then integrate:

  π(x_i | y) = ∫ π(θ | y) π(x_i | θ, y) dθ
  π(θ_j | y) = ∫ π(θ | y) dθ_{-j}

Gaussian Markov random fields (GMRFs)

A Gaussian Markov random field (GMRF), x = (x_1, ..., x_n)^T, is a normally distributed random vector with additional Markov properties:

  x_i ⊥ x_j | x_{-ij}  ⟺  Q_ij = 0

where Q is the precision matrix (inverse covariance). Sparse matrices give fast computations!
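
What the Markov property buys computationally is sparsity of Q. A minimal sketch, assuming an AR(1)-type chain graph (the model and all parameter values are illustrative assumptions, not from the slides):

```python
# Conditional independence makes the precision matrix Q tridiagonal here,
# and sparse factorisations then make solves with Q cheap.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n, phi, tau = 1000, 0.9, 1.0
# Q_ij = 0 whenever |i - j| > 1: x_i and x_j are conditionally
# independent given the rest of the field.
main = tau * np.r_[1.0, np.full(n - 2, 1.0 + phi**2), 1.0]
off = -tau * phi * np.ones(n - 1)
Q = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")

lu = splu(Q)                   # sparse factorisation, fast for banded Q
sol = lu.solve(np.ones(n))     # e.g. solve Q x = b
```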

The GMRF-approximation

  π(x | y) ∝ exp( -1/2 x^T Q x + Σ_i log π(y_i | x_i) )
           ≈ exp( -1/2 (x - µ)^T (Q + diag(c_i)) (x - µ) )
           = π̃_G(x | θ, y)

Constructed as follows:
• Locate the mode x*
• Expand to second order

Markov and computational properties are preserved.
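
As a concrete illustration, a hedged sketch of this construction, assuming Poisson observations y_i | x_i ~ Poisson(exp(x_i)) (the likelihood is chosen for illustration only): Newton iteration locates the mode x*, and the curvature there gives the precision Q + diag(c_i).

```python
import numpy as np

def gmrf_approx(Q, y, n_iter=20):
    x = np.zeros(len(y))
    for _ in range(n_iter):
        c = np.exp(x)                      # -d^2/dx_i^2 log pi(y_i | x_i)
        grad = y - np.exp(x) - Q @ x       # gradient of log pi(x | y)
        x = x + np.linalg.solve(Q + np.diag(c), grad)   # Newton step
    # Gaussian approximation: x | theta, y approx N(x*, (Q + diag(c))^{-1})
    return x, Q + np.diag(np.exp(x))
```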

Part I: Some more background: The Laplace approximation

Outline I

• Background: The Laplace approximation
  • The Laplace approximation for π(θ | y)
  • The Laplace approximation for π(x_i | θ, y)
• The integrated nested Laplace approximation (INLA)
  • Summary
  • Assessing the error
• Examples
  • Stochastic volatility
  • Longitudinal mixed effect model
  • Log-Gaussian Cox process
• Extensions
  • Model choice
  • Automatic detection of "surprising" observations
• Summary and discussion
• Bonus

Outline II

• High(er) number of hyperparameters
• Parallel computing using OpenMP
• Spatial GLMs

The Laplace approximation: The classic case

Compute an approximation to the integral

  ∫ exp( n g(x) ) dx

where n is the parameter going to ∞. Let x_0 be the mode of g(x) and assume (without loss of generality) that g(x_0) = 0:

  g(x) = 1/2 g''(x_0) (x - x_0)^2 + ···

The Laplace approximation: The classic case...

Then

  ∫ exp( n g(x) ) dx = √( 2π / ( n (-g''(x_0)) ) ) + ···

• As n → ∞, the integrand gets more and more peaked.
• The error should tend to zero as n → ∞.
• Detailed analysis gives relative error(n) = 1 + O(1/n).
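
This is easy to check numerically. A small sketch, with the assumed test function g(x) = -x²/2 - x⁴/12, so that g(x_0) = 0 and g''(x_0) = -1 at the mode x_0 = 0:

```python
import numpy as np
from scipy.integrate import quad

g = lambda x: -x**2 / 2.0 - x**4 / 12.0
for n in [1, 10, 100, 1000]:
    exact, _ = quad(lambda x: np.exp(n * g(x)), -6, 6)
    laplace = np.sqrt(2.0 * np.pi / n)   # sqrt(2*pi / (n * (-g''(x_0))))
    print(n, exact / laplace)            # ratio tends to 1 like 1 + O(1/n)
```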

Extension I

If

  g_n(x) = (1/n) Σ_{i=1}^{n} g_i(x)

then the mode x_0 depends on n as well.

Extension II

For ∫ exp( n g(x) ) dx with multivariate x of dimension d,

  ∫ exp( n g(x) ) dx = √( (2π)^d / ( n^d |-H| ) ) + ···

where H is the Hessian (matrix) at the mode:

  H_ij = ∂² g(x) / ∂x_i ∂x_j  evaluated at x = x_0

Computing marginals

• Our main issue is to compute marginals.
• We can use the Laplace approximation for this task as well.
• A more "statistical" derivation might be appropriate.

Computing marginals...

Consider the general problem:
• θ is a hyperparameter with prior π(θ)
• x is latent with density π(x | θ)
• y is observed with likelihood π(y | x)

Then

  π(θ | y) = π(x, θ | y) / π(x | θ, y)

for any x!

Computing marginals...

Further,

  π(θ | y) = π(x, θ | y) / π(x | θ, y)
           ∝ π(θ) π(x | θ) π(y | x) / π(x | θ, y)
           ≈ [ π(θ) π(x | θ) π(y | x) / π̃_G(x | θ, y) ] |_{x = x*(θ)}

where π̃_G(x | θ, y) is the Gaussian approximation of π(x | θ, y) and x*(θ) is the mode.

Computing marginals...

Error: With n repeated measurements of the same x, the error is

  π(θ | y) = π̃(θ | y) (1 + O(n^{-3/2}))

after renormalisation. Relative error is a very nice property!

The Laplace approximation for π(θ | y)

The Laplace approximation for π(θ | y) is

  π(θ | y) = π(x, θ | y) / π(x | y, θ)                    (any x)
           ≈ π(x, θ | y) / π̃_G(x | y, θ) |_{x = x*(θ)}  =  π̃(θ | y)    (1)
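
A toy sketch of evaluating (1) on a grid of θ values, reusing gmrf_approx from the earlier sketch. The AR(1) latent field, the Poisson likelihood and the N(0, 1) prior on θ = log(τ) are all assumptions made for illustration:

```python
import numpy as np
from scipy.special import gammaln

def log_post_theta(theta, y, phi=0.9):
    n = len(y)
    tau = np.exp(theta)
    Q = tau * (np.diag(np.r_[1.0, np.full(n - 2, 1.0 + phi**2), 1.0])
               - phi * (np.eye(n, k=1) + np.eye(n, k=-1)))
    x, P = gmrf_approx(Q, y)              # mode x*(theta) and precision
    logdet_Q = np.linalg.slogdet(Q)[1]
    logdet_P = np.linalg.slogdet(P)[1]
    # the (n/2) log 2*pi terms cancel between numerator and denominator
    log_prior_theta = -0.5 * theta**2     # assumed N(0, 1) prior on theta
    log_prior_x = 0.5 * logdet_Q - 0.5 * x @ Q @ x
    log_lik = np.sum(y * x - np.exp(x) - gammaln(y + 1))
    return log_prior_theta + log_prior_x + log_lik - 0.5 * logdet_P

thetas = np.linspace(-2, 2, 41)
# lp = [log_post_theta(t, y) for t in thetas]   # then renormalise over the grid
```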

Remarks

The Laplace approximation π̃(θ | y) turns out to be accurate: x | y, θ appears almost Gaussian in most cases, as
• x is a priori Gaussian,
• y is typically not very informative,
• the observational model is usually 'well-behaved'.

Note: π̃(θ | y) itself does not look Gaussian. Thus, a Gaussian approximation of (θ, x) will be inaccurate.

Approximating π(x_i | y, θ)

This task is more challenging, since
• the dimension n of x is large,
• and there are potentially n marginals to compute, or at least O(n).

An obvious, simple and fast alternative is to use the GMRF-approximation:

  π̃(x_i | θ, y) = N(x_i; µ_i(θ), σ_i²(θ))

Laplace approximation of π(x_i | θ, y)

• The Laplace approximation:

  π̃(x_i | y, θ) ≈ π(x, θ | y) / π̃(x_{-i} | x_i, y, θ) |_{x_{-i} = x*_{-i}(x_i, θ)}

• Again, the approximation is very good, as x_{-i} | x_i, θ is 'almost Gaussian',
• but it is expensive. In order to get the n marginals:
  • perform n optimisations, and
  • n factorisations of (n-1) × (n-1) matrices.

Can be solved.

Simplified Laplace Approximation

A series expansion of the Laplace approximation for π(x_i | θ, y):
• computationally much faster: O(n log n) for each i
• corrects the Gaussian approximation for error in shift and skewness:

  log π̃(x_i | θ, y) = -1/2 x_i² + b x_i + 1/6 d x_i³ + ···

• fit a skew-Normal density 2 φ(x) Φ(ax)
• sufficiently accurate for most applications
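
One way to realise the skew-Normal fit, sketched under stated assumptions: scipy's skewnorm is exactly the 2φ(x)Φ(ax) family, and here we only match a given skewness target; deriving that target from the cubic coefficient d is left out of this sketch.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import skewnorm

def shape_for_skewness(gamma1):
    gamma1 = np.clip(gamma1, -0.99, 0.99)   # skew-Normal skewness is bounded
    return brentq(lambda a: skewnorm.stats(a, moments="s") - gamma1, -50, 50)

a = shape_for_skewness(0.3)
dens = skewnorm(a)   # then shift/scale to match the corrected mean and variance
```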

The integrated nested Laplace approximation (INLA) I

Step I: Explore π̃(θ | y)
• Locate the mode
• Use the Hessian to construct new variables
• Grid-search
• Can be case-specific
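
A sketch of this exploration (log_post is any function such as log_post_theta above; the finite-difference Hessian, the step size and the drop threshold δ are assumptions of this sketch):

```python
import numpy as np
from itertools import product
from scipy.optimize import minimize

def num_hess(f, x, h=1e-4):
    m = len(x)
    H = np.zeros((m, m))
    I = np.eye(m) * h
    for i in range(m):
        for j in range(m):
            H[i, j] = (f(x + I[i] + I[j]) - f(x + I[i] - I[j])
                       - f(x - I[i] + I[j]) + f(x - I[i] - I[j])) / (4 * h * h)
    return H

def explore_theta(log_post, theta0, step=1.0, delta=2.5, kmax=3):
    mode = minimize(lambda t: -log_post(t), theta0).x
    lp_mode = log_post(mode)
    w, V = np.linalg.eigh(-num_hess(log_post, mode))   # curvature at the mode
    L = V / np.sqrt(w)                                 # theta = mode + L @ z
    points, logps = [], []
    for z in product(range(-kmax, kmax + 1), repeat=len(mode)):
        th = mode + L @ (step * np.asarray(z, dtype=float))
        lp = log_post(th)
        if lp_mode - lp < delta:                       # keep points near the mode
            points.append(th)
            logps.append(lp)
    wts = np.exp(np.array(logps) - lp_mode)
    return np.array(points), wts / wts.sum()

# e.g. pts, wts = explore_theta(lambda t: log_post_theta(t[0], y), np.zeros(1))
```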

The integrated nested Laplace approximation (INLA) II

Step II: For each θ_j
• For each i, evaluate the Laplace approximation for selected values of x_i
• Build a skew-Normal or log-spline corrected Gaussian,

  N(x_i; µ_i, σ_i²) × exp(spline)

to represent the conditional marginal density.

The integrated nested Laplace approximation (INLA) III

Step III: Sum out θ_j
• For each i, sum out θ:

  π̃(x_i | y) ∝ Σ_j π̃(x_i | y, θ_j) × π̃(θ_j | y)

• Build a log-spline corrected Gaussian,

  N(x_i; µ_i, σ_i²) × exp(spline)

to represent π̃(x_i | y).
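
In code, this mixing step is just a weighted sum over the retained integration points (a sketch reusing pts and wts from explore_theta above; the uniform x-grid is an assumption):

```python
import numpy as np

def marginal_x_i(x_grid, cond_dens, pts, wts):
    """cond_dens(x_grid, theta) evaluates pi~(x_i | y, theta) on the grid."""
    dens = sum(w * cond_dens(x_grid, th) for th, w in zip(pts, wts))
    return dens / (dens.sum() * (x_grid[1] - x_grid[0]))   # renormalise
```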

Computing posterior marginals for θ_j (I)

Main idea
• Use the integration points and build an interpolant
• Use numerical integration on that interpolant

Computing posterior marginals for θ_j (II)

Practical approach (high accuracy)
• Rerun using a fine integration grid
• Possibly with no rotation
• Just sum up at grid points, then interpolate

Computing posterior marginals for θ_j (III)

Practical approach (lower accuracy)
• Use the Gaussian approximation at the mode θ*
• ...BUT, adjust the standard deviation in each direction
• Then use numerical integration
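
A sketch of the direction-dependent standard deviations, obtained here by matching the log-density drop at a probe distance h (both the matching rule and h are assumptions of this sketch; log_post is the log-posterior of a single θ_j):

```python
import numpy as np

def asym_sd(log_post, mode, h=1.0):
    lp0 = log_post(mode)
    s_plus = h / np.sqrt(2.0 * (lp0 - log_post(mode + h)))
    s_minus = h / np.sqrt(2.0 * (lp0 - log_post(mode - h)))
    return s_plus, s_minus

def asym_density(x, mode, s_plus, s_minus):
    s = np.where(x >= mode, s_plus, s_minus)   # sigma depends on the side
    d = np.exp(-0.5 * ((x - mode) / s) ** 2)
    return d / (np.sqrt(np.pi / 2) * (s_plus + s_minus))  # exact normalisation
```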

[Figure: the scaled Gaussian density dnorm(x)/dnorm(0) plotted for x from -4 to 4.]

How can we assess the error in the approximations?

Tool 1: Compare a sequence of improved approximations
1. Gaussian approximation
2. Simplified Laplace
3. Laplace

How can we assess the error in the approximations?

Tool 2: Estimate the error using Monte Carlo:

  π̃(θ | y) ∝ ( E_{π̃_G}[ exp{ r(x; θ, y) } ] )^{-1} π(θ | y)

where r(·) is the sum of the log-likelihood minus its second-order Taylor expansion.
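
A sketch of the Monte Carlo estimate of this correction factor for one θ: mu and P are the mean and precision of π̃_G, and r_fun (assumed given) returns the log-likelihood minus its second-order Taylor expansion at the mode.

```python
import numpy as np

def mc_correction(mu, P, r_fun, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(P)               # P = L L^T
    z = rng.standard_normal((n_samples, len(mu)))
    xs = mu + np.linalg.solve(L.T, z.T).T   # draws x ~ N(mu, P^{-1})
    return np.mean([np.exp(r_fun(x)) for x in xs])
```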

How can we assess the error in the approximations?

Tool 3: Estimate the "effective" number of parameters as defined in the Deviance Information Criterion:

  p_D(θ) = D̄(x; θ) - D(x̄; θ)

(the posterior mean deviance minus the deviance at the posterior mean x̄), and compare this with the number of observations. A low ratio is good. This criterion has theoretical justification.
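
A generic sketch of this diagnostic from posterior samples of x, with the deviance D(x) = -2 log π(y | x) assumed supplied by the user:

```python
import numpy as np

def p_D(samples, deviance):
    mean_dev = np.mean([deviance(x) for x in samples])   # posterior mean of D
    dev_mean = deviance(np.mean(samples, axis=0))        # D at posterior mean
    return mean_dev - dev_mean
```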

Stochastic Volatility model

[Figure: time series of length 1000, values roughly between -2 and 4: the log of the daily difference of the pound-dollar exchange rate from October 1st, 1981, to June 28th, 1985.]

Stochastic Volatility model

A simple model:

  x_t | x_1, ..., x_{t-1}, τ, φ ~ N(φ x_{t-1}, 1/τ)

where |φ| < 1 to ensure a stationary process. Observations are taken to be

  y_t | x_1, ..., x_t, µ ~ N(0, exp(µ + x_t))
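
Simulating this model is a two-line exercise; the parameter values below are illustrative assumptions, not the slides' estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
T, phi, tau, mu = 1000, 0.98, 20.0, -1.0
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal(scale=1.0 / np.sqrt(tau))
y = rng.normal(scale=np.exp((mu + x) / 2.0))   # variance exp(mu + x_t)
```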

Results

Using only the first 50 data points, which makes the problem much harder.
