Michel Juillard: Bayesian Estimation of GPM with Dynare


  1. MACRO-FINANCIAL LINKAGES, OIL PRICES AND DEFLATION WORKSHOP, JANUARY 6–9, 2009. Bayesian Estimation of GPM with DYNARE. Michel Juillard

  2. Bayesian Estimation of GPM with Dynare. Macro-Financial Linkages, Oil Prices and Deflation, IMF workshop. Michel Juillard, January 6, 2009

  3. Outline 1. Introduction to Bayesian estimation 2. Bayesian estimation in Dynare 3. Dealing with nonstationary variables 4. Example: A simple GPM model for the US 5. Dynare macro language 6. Example: A 6–country GPM model

  4. Introduction to Bayesian estimation ◮ Uncertainty and a priori knowledge about the model and its parameters are described by prior probabilities ◮ Confrontation with the data leads to a revision of these probabilities (posterior probabilities) ◮ Point estimates are obtained by minimizing a loss function (analogous to economic decision under uncertainty) ◮ Testing and model comparison are done by comparing posterior probabilities

  5. Bayesian ingredients ◮ Choosing prior density ◮ Computing posterior mode ◮ Simulating posterior distribution ◮ Computing point estimates and confidence regions ◮ Computing posterior probabilities

  6. Prior density $p(\theta_A \mid A)$, where $A$ represents the model and $\theta_A$ the parameters of that model. The prior density describes a priori beliefs, before considering the data.

  7. Likelihood function ◮ Conditional density $p(y \mid \theta_A, A)$ ◮ Conditional density for dynamic time series models: $p(Y_T \mid \theta_A, A) = p(y_0 \mid \theta_A, A) \prod_{t=1}^{T} p(y_t \mid Y_{t-1}, \theta_A, A)$, where $Y_T$ are the observations up to period $T$ ◮ Likelihood function: $L(\theta_A \mid Y_T, A) = p(Y_T \mid \theta_A, A)$

  8. Marginal density $p(y \mid A) = \int_{\Theta_A} p(y, \theta_A \mid A)\, d\theta_A = \int_{\Theta_A} p(y \mid \theta_A, A)\, p(\theta_A \mid A)\, d\theta_A$

  9. Posterior density ◮ Posterior density $p(\theta_A \mid Y_T, A) = \dfrac{p(\theta_A \mid A)\, p(Y_T \mid \theta_A, A)}{p(Y_T \mid A)}$ ◮ Unnormalized posterior density, or posterior density kernel: $p(\theta_A \mid Y_T, A) \propto p(\theta_A \mid A)\, p(Y_T \mid \theta_A, A)$

  10. Posterior predictive density $p(\tilde Y \mid Y_T, A) = \int_{\Theta_A} p(\tilde Y, \theta_A \mid Y_T, A)\, d\theta_A = \int_{\Theta_A} p(\tilde Y \mid \theta_A, Y_T, A)\, p(\theta_A \mid Y_T, A)\, d\theta_A$

  11. Bayes risk function $R(a) = E[L(a, \theta_A)] = \int_{\Theta_A} L(a, \theta_A)\, p(\theta_A)\, d\theta_A$ where $L(a, \theta_A)$ is the loss function associated with decision $a$ when the parameters take value $\theta_A$.

  12. Estimation Action: deciding that the estimated value of $\theta_A$ is $\hat\theta_A$ ◮ Point estimate: $\hat\theta_A = \arg\min_{\tilde\theta_A} \int_{\Theta_A} L(\tilde\theta_A, \theta_A)\, p(\theta_A \mid Y_T, A)\, d\theta_A$ ◮ Quadratic loss function: $\hat\theta_A = E(\theta_A \mid Y_T, A)$ ◮ Zero-one loss function: $\hat\theta_A$ = posterior mode

  13. Credible sets $P(\theta \in C) = \int_C p(\theta)\, d\theta = 1 - \alpha$. $C$ is a $100(1-\alpha)\%$ credible set for $\theta$ with respect to $p(\theta)$. A $100(1-\alpha)\%$ highest probability density (HPD) credible set for $\theta$ with respect to $p(\theta)$ is a $100(1-\alpha)\%$ credible set with the property $p(\theta_1) \ge p(\theta_2)$ for all $\theta_1 \in C$ and all $\theta_2 \notin C$.
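A minimal numpy sketch of how an HPD interval can be computed from posterior draws of a scalar parameter, assuming a unimodal posterior; the function name hpd_interval and the default coverage are our own choices, not part of the original slides.

```python
import numpy as np

def hpd_interval(draws, alpha=0.05):
    """Shortest interval containing a fraction (1 - alpha) of the draws.

    For a unimodal posterior, the shortest credible interval coincides
    with the HPD credible set defined above.
    """
    sorted_draws = np.sort(draws)
    n = len(sorted_draws)
    m = int(np.floor((1 - alpha) * n))     # number of draws to cover
    # Width of every contiguous window of m draws; keep the shortest one.
    widths = sorted_draws[m:] - sorted_draws[:n - m]
    k = np.argmin(widths)
    return sorted_draws[k], sorted_draws[k + m]
```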

  14. Numerical integration $E(h(\theta_A)) = \int_{\Theta_A} h(\theta_A)\, p(\theta_A \mid Y_T, A)\, d\theta_A \approx \frac{1}{N} \sum_{k=1}^{N} h(\theta_A^{(k)})$ where $\theta_A^{(k)}$ is drawn from $p(\theta_A \mid Y_T, A)$.

  15. Metropolis algorithm 1. Draw a starting point $\theta^{(0)}$ for which $p(\theta^{(0)}) > 0$ from a starting distribution $p_0(\theta)$.

  16. Metropolis algorithm (continued) 2. For $t = 1, 2, \ldots$: (a) draw a proposal $\theta^\ast$ from the jumping distribution $J(\theta^\ast \mid \theta^{(t-1)}) = N(\theta^{(t-1)}, c\,\Sigma_{mode})$; (b) compute the acceptance ratio $r = p(\theta^\ast) / p(\theta^{(t-1)})$; (c) set $\theta^{(t)} = \theta^\ast$ with probability $\min(r, 1)$, and $\theta^{(t)} = \theta^{(t-1)}$ otherwise.

  17. In practice . . . ◮ fix scale factor c so as to obtain a 25% average acceptance ratio ◮ discard first 50% of the draws
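Putting slides 15 to 17 together, here is a compact random-walk Metropolis sketch in Python. It works with the log posterior kernel for numerical stability, so the acceptance test compares log r with a log uniform draw; log_post, theta0, Sigma_mode and the default settings are placeholders to be supplied by the user, not Dynare internals.

```python
import numpy as np

def metropolis(log_post, theta0, Sigma_mode, c=0.3, n_draws=100_000, seed=1):
    """Random-walk Metropolis with jumping distribution N(theta, c * Sigma_mode)."""
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(c * Sigma_mode)   # scale factor c as on slide 16
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    draws = np.empty((n_draws, theta.size))
    accepted = 0
    for t in range(n_draws):
        proposal = theta + chol @ rng.standard_normal(theta.size)
        lp_star = log_post(proposal)
        # Accept with probability min(r, 1), r = p(theta*) / p(theta[t-1]).
        if np.log(rng.uniform()) < lp_star - lp:
            theta, lp, accepted = proposal, lp_star, accepted + 1
        draws[t] = theta
    # In practice: retune c until accepted / n_draws is near 0.25,
    # and discard the first half of the draws as burn-in.
    return draws[n_draws // 2:], accepted / n_draws
```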

  18. Potential Scale Reduction Factor If we have simulated $m$ independent sequences of $n$ draws, a particular draw of scalar $\theta$ is noted $\theta_{ij}$ with $i = 1, \ldots, n$ and $j = 1, \ldots, m$.
  $B = \frac{n}{m-1} \sum_{j=1}^{m} (\bar\theta_{\cdot j} - \bar\theta_{\cdot\cdot})^2$
  $W = \frac{1}{m} \sum_{j=1}^{m} \frac{1}{n-1} \sum_{i=1}^{n} (\theta_{ij} - \bar\theta_{\cdot j})^2$
  $\widehat{\mathrm{var}}^+(\theta \mid Y_T, A) = \frac{n-1}{n} W + \frac{1}{n} B$
  $\hat R = \sqrt{ \widehat{\mathrm{var}}^+(\theta \mid Y_T, A) / W }$
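These formulas translate directly into numpy; the following sketch assumes the draws for one scalar parameter are stacked in an (m, n) array, and takes the square root of the variance ratio, the usual convention for reporting R-hat.

```python
import numpy as np

def psrf(chains):
    """Gelman-Rubin potential scale reduction factor for a scalar parameter.

    chains: (m, n) array, m independent sequences of n draws each.
    """
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)           # between-sequence variance
    W = chains.var(axis=1, ddof=1).mean()     # within-sequence variance
    var_plus = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_plus / W)
```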

  19. Multivariate PSRF
  $\hat V = \frac{n-1}{n} W + \left(1 + \frac{1}{m}\right) B/n$
  $W = \frac{1}{m(n-1)} \sum_{j=1}^{m} \sum_{i=1}^{n} (\theta_{ij} - \bar\theta_{\cdot j})(\theta_{ij} - \bar\theta_{\cdot j})'$
  $B/n = \frac{1}{m-1} \sum_{j=1}^{m} (\bar\theta_{\cdot j} - \bar\theta_{\cdot\cdot})(\bar\theta_{\cdot j} - \bar\theta_{\cdot\cdot})'$
  $\hat R_p = \frac{n-1}{n} + \frac{m+1}{m} \lambda_1$
  where $\lambda_1$ is the largest eigenvalue of $W^{-1} B/n$.
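A corresponding sketch for the multivariate statistic, assuming an (m, n, k) array of draws; taking the real part of the eigenvalues is our own small safeguard against numerical round-off.

```python
import numpy as np

def mpsrf(chains):
    """Brooks-Gelman multivariate PSRF.

    chains: (m, n, k) array, m sequences of n draws of a k-vector.
    """
    m, n, k = chains.shape
    chain_means = chains.mean(axis=1)                 # (m, k)
    grand_mean = chain_means.mean(axis=0)
    # Within-sequence covariance W and between-sequence covariance B/n.
    dev_w = chains - chain_means[:, None, :]
    W = np.einsum('mni,mnj->ij', dev_w, dev_w) / (m * (n - 1))
    dev_b = chain_means - grand_mean
    B_over_n = dev_b.T @ dev_b / (m - 1)
    lam1 = np.max(np.linalg.eigvals(np.linalg.solve(W, B_over_n)).real)
    return (n - 1) / n + (m + 1) / m * lam1
```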

  20. Model comparison The ratio of posterior probabilities of two models is $\frac{P(A_j \mid Y_T)}{P(A_k \mid Y_T)} = \frac{P(A_j)}{P(A_k)} \cdot \frac{p(Y_T \mid A_j)}{p(Y_T \mid A_k)}$. In favor of model $A_j$ versus model $A_k$: ◮ the prior odds ratio is $P(A_j)/P(A_k)$ ◮ the Bayes factor is $p(Y_T \mid A_j)/p(Y_T \mid A_k)$ ◮ the posterior odds ratio is $P(A_j \mid Y_T)/P(A_k \mid Y_T)$
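Since marginal data densities are computed in logs (see the next two slides), posterior model probabilities are best formed with a log-sum-exp step; a small sketch, where the numbers in the final example are hypothetical:

```python
import numpy as np

def posterior_model_probs(log_marginals, log_priors=None):
    """Posterior probabilities P(A_j | Y_T) from log p(Y_T | A_j).

    Equal prior model probabilities are assumed when log_priors is None.
    """
    log_marginals = np.asarray(log_marginals, dtype=float)
    if log_priors is None:
        log_priors = np.zeros_like(log_marginals)
    log_post = log_marginals + log_priors
    log_post -= log_post.max()          # log-sum-exp trick for stability
    probs = np.exp(log_post)
    return probs / probs.sum()

# Log Bayes factor of model 0 vs model 1 from (hypothetical) log marginals:
log_bayes_factor = -1023.4 - (-1027.9)  # = 4.5, in favor of model 0
```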

  21. Laplace approximation $p(Y_T \mid A) = \int_{\Theta_A} p(Y_T \mid \theta_A, A)\, p(\theta_A \mid A)\, d\theta_A$
  $\hat p(Y_T \mid A) = (2\pi)^{k/2}\, |\Sigma_{\theta^M}|^{1/2}\, p(Y_T \mid \theta^M_A, A)\, p(\theta^M_A \mid A)$
  where $\theta^M_A$ is the posterior mode, $\Sigma_{\theta^M}$ the inverse Hessian of the negative log posterior kernel at the mode, and $k$ the number of estimated parameters.
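In log form, as it is typically implemented and reported, the approximation reads log p̂(Y_T | A) = log p(Y_T | θ^M) + log p(θ^M) + (k/2) log 2π + ½ log |Σ|; a sketch, with all inputs assumed to come from a previous mode-finding step:

```python
import numpy as np

def log_marginal_laplace(log_lik_mode, log_prior_mode, Sigma_mode):
    """Laplace approximation to log p(Y_T | A) around the posterior mode.

    log_lik_mode:   log p(Y_T | theta_M, A)
    log_prior_mode: log p(theta_M | A)
    Sigma_mode:     inverse Hessian of the negative log posterior at the mode
    """
    k = Sigma_mode.shape[0]
    sign, logdet = np.linalg.slogdet(Sigma_mode)
    return (log_lik_mode + log_prior_mode
            + 0.5 * k * np.log(2 * np.pi) + 0.5 * logdet)
```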

  22. Geweke (1999) modified harmonic mean $p(Y_T \mid A) = \int_{\Theta_A} p(Y_T \mid \theta_A, A)\, p(\theta_A \mid A)\, d\theta_A$
  $\hat p(Y_T \mid A) = \left[ \frac{1}{n} \sum_{i=1}^{n} \frac{f(\theta_A^{(i)})}{p(Y_T \mid \theta_A^{(i)}, A)\, p(\theta_A^{(i)} \mid A)} \right]^{-1}$
  $f(\theta) = p^{-1} (2\pi)^{-k/2} |\Sigma_\theta|^{-1/2} \exp\left( -\tfrac{1}{2} (\theta - \bar\theta)' \Sigma_\theta^{-1} (\theta - \bar\theta) \right) \times \mathbb{1}\left\{ (\theta - \bar\theta)' \Sigma_\theta^{-1} (\theta - \bar\theta) \le F^{-1}_{\chi^2_k}(p) \right\}$
  with $p$ an arbitrary probability, $k$ the number of estimated parameters, and $\bar\theta$ and $\Sigma_\theta$ the mean and covariance of the posterior draws.
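A numpy/scipy sketch of this estimator, assuming the posterior draws and the corresponding values of the log posterior kernel log[p(Y_T | θ) p(θ)] were stored during the Metropolis run; the variable names are ours.

```python
import numpy as np
from scipy.stats import chi2

def log_marginal_mhm(draws, log_kernel, p=0.9):
    """Geweke's modified harmonic mean estimator of log p(Y_T | A).

    draws:      (n, k) posterior draws
    log_kernel: (n,) log[p(Y_T | theta) p(theta)] evaluated at the draws
    p:          truncation probability for the Gaussian weighting density f
    """
    n, k = draws.shape
    theta_bar = draws.mean(axis=0)
    Sigma = np.atleast_2d(np.cov(draws, rowvar=False))
    Sigma_inv = np.linalg.inv(Sigma)
    dev = draws - theta_bar
    quad = np.einsum('ni,ij,nj->n', dev, Sigma_inv, dev)
    inside = quad <= chi2.ppf(p, df=k)            # truncation region
    sign, logdet = np.linalg.slogdet(Sigma)
    log_f = (-np.log(p) - 0.5 * k * np.log(2 * np.pi)
             - 0.5 * logdet - 0.5 * quad)
    # log of (1/n) sum f(theta_i) / kernel(theta_i), then flip the sign.
    log_ratio = np.where(inside, log_f - log_kernel, -np.inf)
    log_mean = np.logaddexp.reduce(log_ratio) - np.log(n)
    return -log_mean
```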

  23. Bayesian estimation in Dynare

  24. Priors in DYNARE
  NORMAL_PDF      $N(\mu, \sigma)$            support $\mathbb{R}$
  GAMMA_PDF       $G_2(\mu, \sigma, p_3)$     support $[p_3, +\infty)$
  BETA_PDF        $B(\mu, \sigma, p_3, p_4)$  support $[p_3, p_4]$
  INV_GAMMA_PDF   $IG_1(\mu, \sigma)$         support $\mathbb{R}^+$
  UNIFORM_PDF     $U(p_3, p_4)$               support $[p_3, p_4]$
  By default, $p_3 = 0$, $p_4 = 1$.
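Dynare parameterizes each prior by its mean μ and standard deviation σ (plus optional bounds p3, p4) and converts them internally to the distribution's natural parameters. As an illustration of that moment matching, not Dynare's actual code, here is the conversion for the Gamma and Beta cases with scipy:

```python
from scipy import stats

def gamma_from_moments(mu, sigma, p3=0.0):
    """Gamma prior on [p3, inf) with mean mu and standard deviation sigma."""
    a = ((mu - p3) / sigma) ** 2        # shape
    scale = sigma ** 2 / (mu - p3)      # scale
    return stats.gamma(a, loc=p3, scale=scale)

def beta_from_moments(mu, sigma, p3=0.0, p4=1.0):
    """Beta prior on [p3, p4] with mean mu and standard deviation sigma."""
    m = (mu - p3) / (p4 - p3)           # standardized mean
    s2 = (sigma / (p4 - p3)) ** 2       # standardized variance
    nu = m * (1 - m) / s2 - 1           # a + b
    return stats.beta(m * nu, (1 - m) * nu, loc=p3, scale=p4 - p3)

# e.g. a persistence parameter: Beta prior with mean 0.7, std 0.1 on [0, 1]
rho_prior = beta_from_moments(0.7, 0.1)
```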

  25. How to choose priors ◮ the shape should be consistent with the domain of definition of the parameter ◮ use values obtained in other studies (micro or macro) ◮ check the graph of the priors ◮ check the implications of your priors by running stoch_simul with the parameters set at the prior mean ◮ compare the moments of the endogenous variables in that simulation with the empirical moments of the observed variables ◮ do sensitivity tests by widening your priors

  26. Estimation strategy ◮ After (log-)linearization around the deterministic steady state, the linear rational expectations model needs to be solved (AIM, King and Watson, Klein, Sims) ◮ The model can then be written in state space form ◮ It is an unobserved components model ◮ Its likelihood is computed via the Kalman filter ◮ These steps are common to Maximum Likelihood estimation and the Bayesian approach

  27. State space representation (I) After solution of a first order approximation of a DSGE model, we obtain a linear dynamic model of the form $y_t = \bar y + g_y \hat y^s_{t-1} + g_u u_t$ where the vector $\hat y^s_{t-1}$ contains the endogenous state variables, the predetermined variables among $y_t$, with as many lags as required by the dynamics of the model.

  28. State space representation (II) The transition equation describes the dynamics of the state variables: $\hat y^{(1)}_t = g^{(1)}_y \hat y^{(1)}_{t-1} + g^{(1)}_u u_t$ where $g^{(1)}_y$ and $g^{(1)}_u$ are the appropriate submatrices of $g_y$ and $g_u$, respectively. $y^{(1)}_t$ is the union of the state variables $y^s_t$, including all necessary lags, and $y^\star_t$, the observed variables. The $g^{(1)}_y$ matrix can have eigenvalues equal to one.

  29. Other variables The variables that are neither predetermined nor observed, $y^{(2)}_t$, play no role in the estimation of the parameters, and their filtered or smoothed values can be recovered from the filtered or smoothed values of $\hat y^{(1)}_t$ thanks to the following relationship: $\hat y^{(2)}_t = g^{(2)}_y \hat y^{(1)}_{t-1} + g^{(2)}_u u_t$

  30. Measurement equation We consider measurement equations of the type $y^\star_t = \bar y + M \hat y^{(1)}_t + x_t + \epsilon_t$ where $M$ is the selection matrix that recovers $y^\star_t$ out of $\hat y^{(1)}_t$, $x_t$ is a deterministic component (currently, Dynare only accommodates linear trends), and $\epsilon_t$ is a vector of measurement errors.

  31. Variances In addition, we have the following two covariance matrices: $E(u_t u_t') = Q$ and $E(\epsilon_t \epsilon_t') = H$.
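With the transition equation, measurement equation, Q and H in hand, the likelihood of slide 7 is evaluated by the prediction-error decomposition of the Kalman filter. A minimal sketch for the stationary case, with the matrices renamed T (transition), R (shock loading) and M (selection), and with ȳ and x_t assumed already removed from the data:

```python
import numpy as np

def kalman_loglik(ystar, T, R, M, Q, H, a0, P0):
    """Log-likelihood of the observations via the Kalman filter.

    State:       a[t]  = T a[t-1] + R u[t],  E(u u')     = Q
    Measurement: y*[t] = M a[t] + eps[t],    E(eps eps') = H
    ystar: (nobs, ny) array of demeaned, detrended observations.
    """
    RQR = R @ Q @ R.T
    a, P = a0.copy(), P0.copy()
    loglik = 0.0
    for y in ystar:
        # Prediction step.
        a = T @ a
        P = T @ P @ T.T + RQR
        # Prediction error and its variance.
        v = y - M @ a
        F = M @ P @ M.T + H
        Finv = np.linalg.inv(F)
        sign, logdetF = np.linalg.slogdet(F)
        loglik += -0.5 * (len(y) * np.log(2 * np.pi) + logdetF
                          + v @ Finv @ v)
        # Update step.
        K = P @ M.T @ Finv
        a = a + K @ v
        P = P - K @ M @ P
    return loglik
```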

  32. Dealing with nonstationary variables

  33. Unit root processes ◮ find a natural representation in the state space form ◮ the deterministic component of a random walk with drift is better included in the measurement equation

  34. Initialization of the Kalman filter ◮ stationary variables: unconditional mean and variance ◮ nonstationary variables: either the initial point is an additional parameter of the model (De Jong), or an arbitrary initial point with infinite variance is used (Durbin and Koopman) ◮ Durbin and Koopman strategy: compute the limit of the Kalman filter equations when the initial variance tends toward infinity ◮ this raises problems with cointegrated models
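For the stationary case, the unconditional variance used to initialize the filter solves the discrete Lyapunov equation P0 = T P0 T' + R Q R'; a sketch using scipy (the diffuse initialization of Durbin and Koopman requires dedicated filter recursions instead and is not shown here):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def stationary_init(T, R, Q):
    """Unconditional mean and variance of a stationary state vector.

    Solves P0 = T P0 T' + R Q R'; valid only when all eigenvalues
    of T are strictly inside the unit circle.
    """
    assert np.max(np.abs(np.linalg.eigvals(T))) < 1, "nonstationary state"
    a0 = np.zeros(T.shape[0])     # unconditional mean of the demeaned state
    P0 = solve_discrete_lyapunov(T, R @ Q @ R.T)
    return a0, P0
```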
