
Multi-parameter models - Gibbs Sampling Applied Bayesian Statistics - PowerPoint PPT Presentation



1. Multi-parameter models - Gibbs Sampling. Applied Bayesian Statistics. Dr. Earvin Balderama, Department of Mathematics & Statistics, Loyola University Chicago. September 28, 2017. Last edited October 1, 2017 by <ebalderama@luc.edu>.

2. Multi-parameter models: Why MCMC methods? The goal is to find the posterior distribution. The posterior is used for inference about the parameter(s) of interest: computing summaries such as posterior means, variances, and quantiles; credible intervals; hypothesis testing; model diagnostics. We have seen a couple of ways of finding the posterior: (1) using conjugate priors that lead to a known family, and (2) evaluating the function on a grid. But oftentimes we are working with a model with many parameters, so the above methods can become very difficult, or even impossible, to perform.

3. Multi-parameter models: Monte Carlo sampling. Let θ = (θ_1, ..., θ_p) be the p parameters in the model. In Monte Carlo methods, we draw samples of θ from a (possibly unfamiliar) posterior distribution f(θ | Y), and use these samples θ^(1), θ^(2), ..., θ^(S) to approximate posterior summaries.
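As a quick illustration (not from the slides; the Beta(3, 7) "posterior" and the number of draws are made-up choices), once posterior draws θ^(1), ..., θ^(S) are in hand, posterior summaries are approximated by the corresponding sample summaries in R:

    # Illustrative only: pretend the posterior happens to be Beta(3, 7),
    # so we can generate S "posterior draws" directly.
    set.seed(1)
    S <- 5000
    theta <- rbeta(S, 3, 7)

    mean(theta)                        # approximate posterior mean
    var(theta)                         # approximate posterior variance
    quantile(theta, c(0.025, 0.975))   # approximate 95% credible interval

The same sample summaries work no matter how the draws were produced, which is what makes MCMC output so convenient to work with.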

4. Multi-parameter models: Monte Carlo sampling. Monte Carlo sampling is the predominant method of Bayesian inference because it can be used for high-dimensional models (i.e., with many parameters). There are many software options for performing Monte Carlo sampling: R (BLR, MCMClogit, or write your own function), SAS (proc mcmc), OpenBUGS/WinBUGS (or simply BUGS), JAGS (rjags), Stan (rstan), and INLA.
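To make the list concrete, here is a hedged sketch of fitting the normal model from the later slides with JAGS through the rjags package; the simulated data, prior settings, and run lengths below are illustrative assumptions, not code from the course.

    library(rjags)

    # hypothetical data
    set.seed(1)
    y <- rnorm(50, mean = 10, sd = 2)

    # JAGS parameterizes the normal by its precision (1/variance), so a
    # Gamma prior on the precision is an InverseGamma prior on sigma2.
    model_string <- "model {
      for (i in 1:n) { y[i] ~ dnorm(mu, prec) }
      mu     ~ dnorm(0, 0.0001)
      prec   ~ dgamma(0.01, 0.01)
      sigma2 <- 1 / prec
    }"

    fit <- jags.model(textConnection(model_string),
                      data = list(y = y, n = length(y)),
                      n.chains = 2)
    update(fit, 1000)   # burn-in
    samples <- coda.samples(fit, c("mu", "sigma2"), n.iter = 5000)
    summary(samples)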

5. Multi-parameter models: MCMC. The main idea is to break up the problem of sampling from the high-dimensional joint distribution into a series (chain) of samples from low-dimensional conditional distributions. Note: rather than drawing one p-dimensional joint sample, we make p one-dimensional full conditional samples. Samples are drawn (updated) one at a time for each parameter. The updates are done in a loop, so samples are not independent. Because each sample depends on the previously drawn samples, the collection of samples forms a Markov chain, leading to the name Markov chain Monte Carlo (MCMC). The most common MCMC sampling algorithms are (1) Gibbs, (2) Metropolis, and (3) Metropolis-Hastings.

6. Multi-parameter models: Gibbs sampling. Gibbs sampling was proposed by Geman and Geman (1984) and brought into mainstream Bayesian statistics in the early 1990s (Gelfand and Smith, 1990), fundamentally changing Bayesian computing. Gibbs sampling is attractive because it can sample from high-dimensional posteriors. The main idea is to break the problem of sampling from the high-dimensional joint distribution into a series of samples from low-dimensional conditional distributions, e.g., rather than one p-dimensional joint sample, we make p one-dimensional samples. Updates can also be done in blocks (groups of parameters). Because the low-dimensional updates are done in a loop, samples are not independent. The dependence turns out to be Markovian, leading to the name Markov chain Monte Carlo (MCMC).

7. Multi-parameter models: Gibbs sampling algorithm.
Set initial values θ^(0) = (θ_1^(0), ..., θ_p^(0)).
For iteration t, draw
  θ_1^(t) | θ_2^(t−1), θ_3^(t−1), ..., θ_p^(t−1), Y
  θ_2^(t) | θ_1^(t), θ_3^(t−1), ..., θ_p^(t−1), Y
  ...
  θ_p^(t) | θ_1^(t), ..., θ_{p−1}^(t), Y
and set θ^(t) = (θ_1^(t), ..., θ_p^(t)).
After S iterations, we have θ^(1), ..., θ^(S).

8. Multi-parameter models: Gibbs sampling for the normal model.
The joint posterior of (µ, σ²) is
  f(µ, σ² | Y) ∝ f(Y | µ, σ²) f(µ) f(σ²)
               ∝ (σ²)^(−n/2) exp( −Σ(y_i − µ)² / (2σ²) ) · exp( −(µ − θ)² / (2τ²) ) · (σ²)^(−a−1) exp( −b/σ² ).
The full conditional distributions are
  µ | Y, σ² ~ Normal( (n ȳ + m θ)/(n + m), σ²/(n + m) ), where m = σ²/τ²,
  σ² | Y, µ ~ InverseGamma( n/2 + a, SSE/2 + b ), where SSE = Σ(y_i − µ)².
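Both full conditionals are standard families, so they can be sampled directly in R with rnorm() and 1/rgamma(). A minimal sketch follows; the function names draw_mu and draw_sigma2 and the argument names are hypothetical, not from the slides.

    # Prior: mu ~ Normal(theta0, tau2), sigma2 ~ InverseGamma(a, b)
    draw_mu <- function(y, sigma2, theta0, tau2) {
      n <- length(y)
      m <- sigma2 / tau2                               # m = sigma2 / tau2
      rnorm(1,
            mean = (n * mean(y) + m * theta0) / (n + m),
            sd   = sqrt(sigma2 / (n + m)))
    }

    draw_sigma2 <- function(y, mu, a, b) {
      SSE <- sum((y - mu)^2)
      # InverseGamma(n/2 + a, SSE/2 + b) via the reciprocal of a Gamma draw
      1 / rgamma(1, shape = length(y) / 2 + a, rate = SSE / 2 + b)
    }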

9. Multi-parameter models: Gibbs sampling for the normal model.
Set initial values θ^(0) = (µ^(0), σ²^(0)) = (ȳ, s²).
For iteration t:
  Draw µ^(t) | σ²^(t−1), Y
  Draw σ²^(t) | µ^(t), Y
  Set θ^(t) = (µ^(t), σ²^(t)).
After S iterations, we have θ^(1), ..., θ^(S) = (µ^(1), σ²^(1)), ..., (µ^(S), σ²^(S)), i.e., the sampled chains
  µ = (µ^(1), ..., µ^(S)) and σ² = (σ²^(1), ..., σ²^(S)).
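Putting the two conditional draws inside the loop gives a complete sampler. The following is a self-contained R sketch of this two-step algorithm; the simulated data, the hyperparameter values theta0, tau2, a, b, the chain length S, and the burn-in length are illustrative assumptions, not values from the slides.

    set.seed(42)

    # hypothetical data
    n <- 50
    y <- rnorm(n, mean = 10, sd = 2)
    ybar <- mean(y)

    # prior hyperparameters: mu ~ Normal(theta0, tau2), sigma2 ~ InverseGamma(a, b)
    theta0 <- 0; tau2 <- 100
    a <- 0.01;  b <- 0.01

    # storage and initial values (start at the sample mean and variance)
    S <- 10000
    mu <- sigma2 <- numeric(S)
    mu_cur     <- ybar
    sigma2_cur <- var(y)

    for (t in 1:S) {
      # mu | sigma2, Y ~ Normal((n*ybar + m*theta0)/(n + m), sigma2/(n + m)), with m = sigma2/tau2
      m      <- sigma2_cur / tau2
      mu_cur <- rnorm(1,
                      mean = (n * ybar + m * theta0) / (n + m),
                      sd   = sqrt(sigma2_cur / (n + m)))

      # sigma2 | mu, Y ~ InverseGamma(n/2 + a, SSE/2 + b)
      SSE        <- sum((y - mu_cur)^2)
      sigma2_cur <- 1 / rgamma(1, shape = n / 2 + a, rate = SSE / 2 + b)

      mu[t]     <- mu_cur
      sigma2[t] <- sigma2_cur
    }

    # posterior summaries after discarding a burn-in period
    keep <- 1001:S
    mean(mu[keep]);     quantile(mu[keep],     c(0.025, 0.975))
    mean(sigma2[keep]); quantile(sigma2[keep], c(0.025, 0.975))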
