Projection Inference for Set-Identified SVARs




  1. Projection Inference for Set-Identified SVARs
Bulat Gafarov (PSU), Matthias Meier (University of Bonn), and José-Luis Montiel-Olea (Columbia)
September 21, 2016

  2. Introduction: Set-Identified SVARs
⋆ SVAR: theoretical restrictions R imposed on a VAR (Sims [1980, 1986]):
Y_t = A_1 Y_{t−1} + … + A_p Y_{t−p} + η_t,  Σ = E[η_t η_t′]
⋆ Goal of the restrictions: (A_1, …, A_p, Σ) ↦_R IRF_{k,i,j}, the response of variable i to the j-th 'structural shock' at horizon k.
⋆ The map '↦_R' can be 1-to-1 (point id.) or 1-to-many (set id.). Set-id. SVARs have become popular in applied macro work.
⋆ Common practice: set-identify SVARs with ≥ / = restrictions. (Faust [1998]; Canova and De Nicolo [2002]; Uhlig [2005])

  3. Motivation
⋆ Most empirical studies report Bayesian credible sets for IRF_{k,i,j}. (Bayesian inference depends on the specification of prior beliefs.)
⋆ Practical concern: prior beliefs are not 'dominated' by the data; results are sensitive to the choice of priors even if T → ∞.
⋆ Theoretical critique: coverage and 'robust' credibility → 0 as T → ∞. (Moon and Schorfheide [2012], Kitagawa [2012])
⋆ Recent work on non-Bayesian inference for set-id. SVARs: MSG [2013] (frequentist inference); GK [2014] (robust Bayes).
⋆ Is there a simple way to conduct inference in set-id. SVARs that pleases both a frequentist and a robust Bayesian, and that is general and computationally feasible?

  4. Description of the Inference Problem
IRF_{k,i,j} ∈ IR_{k,i,j}(µ) ⊆ [ v̲_{k,i,j}(µ), v̄_{k,i,j}(µ) ],  µ ≡ (A, Σ).

  5. This Paper
⋆ Studies the properties of 'projection inference' for set-id. SVARs. (Scheffé [1953]; Dufour [1990]; Dufour and Taamouti [2005])
⋆ We collect the IRF_{k,i,j}'s attained over a nominal 1 − α Wald ellipsoid for µ ≡ (A, Σ); that is, we 'project' the ellipsoid.
⋆ Strategy: focus on the endpoints of the identified set for IRF_{k,i,j}, the minimum and maximum responses v̲_{k,i,j}(µ) and v̄_{k,i,j}(µ):
[ inf_{µ ∈ CS_T(1−α)} v̲_{k,i,j}(µ),  sup_{µ ∈ CS_T(1−α)} v̄_{k,i,j}(µ) ]
⋆ Our projection region has coverage and RB credibility ≥ 1 − α for any vector of IRFs, thus providing simultaneous inference.

  6. Pros & Cons
Pros:
⋆ Generality: can handle the typical application in applied work (+/0 restrictions on IRFs, long-run restrictions, elasticity bounds).
⋆ Feasibility: solve two nonlinear optimization problems per IRF_{k,i,j} (we use state-of-the-art solution algorithms for these problems).
Cons:
⋆ Projection is conservative for a frequentist and a robust Bayesian (coverage and robust credibility are strictly above 1 − α).
⋆ We 'calibrate' projection to remove the excess of robust credibility (= 1 − α rather than > 1 − α; calibration based on KMS [2016]).

  7. Outline
1. Model and Main Definitions
2. Assumptions and Results
3. Implementation and Illustrative Example
4. Conclusion

  8. 1. Model and Main Definitions

  9. SVAR(p)
⋆ Structural VAR for the n-dimensional vector Y_t:
Y_t = A_1 Y_{t−1} + … + A_p Y_{t−p} + B ε_t,  Σ ≡ BB′
⋆ The vector of reduced-form parameters is:
µ = (vec(A_1, A_2, …, A_p)′, vech(Σ)′)′ ∈ R^d
⋆ Coefficients of the structural impulse response function:
IRF_H = { IRF_{k_h,i_h,j_h}(A, B) }_{h=1}^H,  IRF_{k_h,i_h,j_h}(A, B) = e′_{i_h} C_{k_h}(A) B_{j_h},
where e′_{i_h} is the 1 × n selection vector for variable i_h.
⋆ Interested in simultaneous inference about λ_H ≡ IRF_H. (Inoue and Kilian [2013, 2016] and Lütkepohl et al. [2016])
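To fix ideas, here is a minimal Python sketch (ours, not the authors' code) of the coefficient IRF_{k,i,j}(A, B) = e′_i C_k(A) B_j, using the standard fact that C_k(A) is the top-left n × n block of the k-th power of the VAR's companion matrix:

```python
import numpy as np

def ma_coefficient(A_list, k):
    """C_k(A): k-th reduced-form MA coefficient of a VAR(p), via the companion matrix."""
    n = A_list[0].shape[0]
    p = len(A_list)
    F = np.zeros((n * p, n * p))
    F[:n, :] = np.hstack(A_list)                 # top block row: [A_1 ... A_p]
    F[n:, :-n] = np.eye(n * (p - 1))             # shifted identity below
    return np.linalg.matrix_power(F, k)[:n, :n]  # top-left n x n block

def structural_irf(A_list, B, k, i, j):
    """IRF_{k,i,j}(A, B) = e_i' C_k(A) B_j: response of variable i at
    horizon k to the j-th structural shock."""
    return ma_coefficient(A_list, k)[i, :] @ B[:, j]
```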

  10. Restrictions R(µ) on B
⋆ Identified set for λ_H:
IR_H(µ) ≡ { λ ∈ R^H : λ_h = IRF_{k_h,i_h,j_h}(A, B) s.t. BB′ = Σ, B ∈ R(µ), ∀h }
⋆ ±/0 restrictions on IRFs: e′_{i′} C_{k′}(A) B_{j′} ≥ 0 (e.g. Sims [1980], Uhlig [2005])
⋆ ±/0 long-run restrictions: e′_{i′} (I_n − A(1))^{−1} B_{j′} ≥ 0 (e.g. Blanchard and Quah [1989], Gali [1999])
⋆ Elasticity bounds: (e′_{i′} B_{j′}) / (e′_i B_{j′}) ∈ [c, d] (e.g. Kilian and Murphy [2012], Baumeister and Hamilton [2015])
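As an illustration of how a restriction set R(µ) might be encoded in practice, a hedged sketch of a membership check for the ±/0 IRF restrictions follows; it reuses the hypothetical `ma_coefficient` helper from the previous sketch, and the (k, i, j) index triples are our own storage convention, not the authors':

```python
import numpy as np

def satisfies_R(A_list, B, sign_triples, zero_triples, tol=1e-8):
    """Check e_i' C_k(A) B_j >= 0 for each (k, i, j) in sign_triples
    and e_i' C_k(A) B_j = 0 for each (k, i, j) in zero_triples."""
    for (k, i, j) in sign_triples:
        if ma_coefficient(A_list, k)[i, :] @ B[:, j] < -tol:
            return False
    for (k, i, j) in zero_triples:
        if abs(ma_coefficient(A_list, k)[i, :] @ B[:, j]) > tol:
            return False
    return True
```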

  11. Bounds on the Identified Set: Max and Min Response
⋆ The endpoints of the identified set for each IRF_{k,i,j}:
v̄_{k,i,j}(µ) ≡ sup_B IRF_{k,i,j}(A, B)  s.t. BB′ = Σ, B ∈ R(µ)
v̲_{k,i,j}(µ) ≡ inf_B IRF_{k,i,j}(A, B)  s.t. BB′ = Σ, B ∈ R(µ)
⋆ These are nonlinear, possibly nondifferentiable transformations of µ.
⋆ Obviously,
IR_H(µ) ⊆ ×_{h=1}^H [ v̲_{k_h,i_h,j_h}(µ), v̄_{k_h,i_h,j_h}(µ) ].
⋆ No need to assume the identified set is connected.
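A minimal sketch of the max-response problem, under the standard reparameterization B_j = Σ_tr q with Σ_tr the Cholesky factor of Σ and ‖q‖ = 1 (any such unit q extends to an orthogonal Q with B = Σ_tr Q, so BB′ = Σ holds). scipy's general-purpose SLSQP with multistart stands in for the authors' state-of-the-art solvers; `sign_rows` is our assumed encoding of restrictions that are linear in q, each row r imposing r @ q ≥ 0:

```python
import numpy as np
from scipy.optimize import minimize

def max_response(A_list, Sigma, k, i, sign_rows, n_starts=50, seed=0):
    """Sketch of v-bar_{k,i,j}(mu): maximize e_i' C_k(A) Sigma_tr q over unit vectors q."""
    n = Sigma.shape[0]
    Sigma_tr = np.linalg.cholesky(Sigma)
    target = ma_coefficient(A_list, k)[i, :] @ Sigma_tr   # objective row vector
    cons = [{"type": "eq", "fun": lambda q: q @ q - 1.0}]  # ||q|| = 1
    if len(sign_rows) > 0:
        R = np.asarray(sign_rows)
        cons.append({"type": "ineq", "fun": lambda q: R @ q})
    rng = np.random.default_rng(seed)
    best = -np.inf
    for _ in range(n_starts):                 # multistart: the problem is nonconvex
        q0 = rng.standard_normal(n)
        res = minimize(lambda q: -(target @ q), q0 / np.linalg.norm(q0),
                       method="SLSQP", constraints=cons)
        if res.success:
            best = max(best, float(-res.fun))
    return best
```

The min response v̲_{k,i,j}(µ) is the same problem with the sign of the objective flipped.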

  12. Projection Region for λ_H
⋆ Let CS_T(1 − α; µ) be the (typical) Wald ellipsoid for µ.
⋆ Let CS_T(1 − α; IRF_{k,i,j}) be the interval defined by:
[ inf_{µ ∈ CS_T(1−α; µ)} v̲_{k,i,j}(µ),  sup_{µ ∈ CS_T(1−α; µ)} v̄_{k,i,j}(µ) ]
⋆ The projection region for λ_H = { IRF_{k_h,i_h,j_h}(A, B) }_{h=1}^H is:
CS_T(1 − α; λ_H) ≡ CS_T(1 − α; IRF_{k_1,i_1,j_1}) × … × CS_T(1 − α; IRF_{k_H,i_H,j_H})
⋆ We now present the properties of CS_T(1 − α; λ_H) as T → ∞.
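A deliberately crude illustration (not the paper's nonlinear-programming implementation) of the outer projection step: scan µ over the Wald ellipsoid by random draws and track the smallest lower bound and largest upper bound. The callables `v_low` and `v_high` stand in for the inner problems v̲_{k,i,j}(·) and v̄_{k,i,j}(·) from the previous slide:

```python
import numpy as np
from scipy.stats import chi2

def projection_interval(mu_hat, Omega_hat, T, alpha, v_low, v_high,
                        n_draws=2000, seed=0):
    """Crude scan of CS_T(1-alpha; mu): min of v_low and max of v_high over draws."""
    d = mu_hat.size
    c = np.sqrt(chi2.ppf(1 - alpha, df=d) / T)   # ellipsoid radius scaling
    L = np.linalg.cholesky(Omega_hat)
    rng = np.random.default_rng(seed)
    lo, hi = np.inf, -np.inf
    for _ in range(n_draws):
        z = rng.standard_normal(d)
        z *= rng.uniform() ** (1.0 / d) / np.linalg.norm(z)  # uniform in the unit ball
        mu = mu_hat + c * (L @ z)                # a point inside the Wald ellipsoid
        lo = min(lo, v_low(mu))
        hi = max(hi, v_high(mu))
    return lo, hi
```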

  13. 2. Assumptions and Results 1 to 4

  14. Result 1: Frequentist Coverage
⋆ Let P be a DGP for the data, parameterized by (A, B, F).
⋆ We want projection to be valid over a class P of DGPs.
⋆ A1: Suppose the class of DGPs P is such that
lim inf_{T→∞} inf_{P ∈ P} P( µ(P) ∈ CS_T(1 − α; µ) ) ≥ 1 − α.
⋆ R1: Under Assumption A1,
lim inf_{T→∞} inf_{P ∈ P} inf_{λ_H ∈ IR_H(µ(P))} P( λ_H ∈ CS_T(1 − α; λ_H) ) ≥ 1 − α.
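For concreteness, a small sketch of the membership condition behind A1, assuming CS_T(1 − α; µ) is the standard Wald ellipsoid {µ : T (µ − µ̂)′ Ω̂^{−1} (µ − µ̂) ≤ χ²_{d,1−α}} built from estimates µ̂ and Ω̂ of the reduced form:

```python
import numpy as np
from scipy.stats import chi2

def in_wald_ellipsoid(mu, mu_hat, Omega_hat, T, alpha):
    """mu is in CS_T(1-alpha; mu) iff the Wald statistic is below the
    chi-square critical value with d = dim(mu) degrees of freedom."""
    diff = mu - mu_hat
    stat = T * diff @ np.linalg.solve(Omega_hat, diff)
    return stat <= chi2.ppf(1 - alpha, df=mu.size)
```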

  15. Proof: Straightforward Projection Argument
Suppose that H = 1. For any λ ∈ IR_{k,i,j}(µ(P)):
P( λ ∈ [ inf_{µ ∈ CS_T(1−α)} v̲_{k,i,j}(µ), sup_{µ ∈ CS_T(1−α)} v̄_{k,i,j}(µ) ] )
≥ P( [ v̲_{k,i,j}(µ(P)), v̄_{k,i,j}(µ(P)) ] ⊆ [ inf_{µ ∈ CS_T(1−α)} v̲_{k,i,j}(µ), sup_{µ ∈ CS_T(1−α)} v̄_{k,i,j}(µ) ] )
(since λ ∈ IR_{k,i,j}(µ(P)) ⊆ [ v̲_{k,i,j}(µ(P)), v̄_{k,i,j}(µ(P)) ])
≥ P( µ(P) ∈ CS_T(1 − α) ),
and the last probability is asymptotically at least 1 − α by A1.

  16. Robust Bayes Framework
⋆ Let P* be a prior for the structural parameters (A, B). (F is now a fixed known distribution; we use N(0, I_n).)
⋆ Represent the prior P* in terms of (P*_µ, P*_{Q|µ}), where Q ≡ Σ^{−1/2} B. (Orthogonal reduced-form parameterization, Arias et al. [2014])
⋆ Let P(P*_µ) denote the class of priors such that µ ∼ P*_µ.
⋆ The robust credibility of CS_T(1 − α; λ_H) is defined as:
inf_{P* ∈ P(P*_µ)} P*( λ_H(A, B) ∈ CS_T(1 − α; λ_H) | Y^T )
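A minimal sketch of one way to draw from a conditional prior P*_{Q|µ}: the 'uniform' (Haar) distribution on the orthogonal group, obtained from the QR decomposition of a Gaussian matrix. Any other conditional prior for Q given µ yields another member of the class P(P*_µ):

```python
import numpy as np

def draw_Q_haar(n, rng):
    """One draw of Q | mu from the uniform (Haar) distribution on O(n)."""
    Z = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))   # sign-normalize columns so the draw is exactly Haar

# Given mu, B = Sigma^{1/2} @ draw_Q_haar(n, rng) is one structural draw.
```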

  17. Result 2: Robust Bayesian Credibility
⋆ We can view robust credibility as a random variable, as it depends on the data Y^T.
⋆ A2: Suppose that P* is such that whenever Y^T ∼ f(Y^T | µ_0):
P*( µ(A, B) ∈ CS_T(1 − α; µ) | Y^T ) = 1 − α + o_p(Y^T | µ_0).
⋆ This is implied by the Bernstein-von Mises theorem for µ.
⋆ R2: Under Assumption A2,
inf_{P* ∈ P(P*_µ)} P*( λ_H ∈ CS_T(1 − α; λ_H) | Y^T ) ≥ 1 − α + o_p(Y^T | µ_0).
⋆ Proof: another embarrassingly simple projection argument!

  18. Calibrated Projection
⋆ Yes: we know that projection inference is conservative, both in terms of frequentist coverage and robust credibility.
⋆ In theory, it is conceptually simple to remove the 'projection bias': project a smaller Wald ellipsoid, as suggested by KMS [2016].
⋆ In practice, removing the excess of robust Bayesian credibility is much easier than removing the excess of frequentist coverage.
⋆ We suggest an algorithm to 'calibrate' robust credibility; a sketch of the idea appears below.
⋆ The algorithm also removes the excess of frequentist coverage, provided the bounds of the identified set are differentiable.
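In that spirit, an illustrative sketch (our rendition of the idea, not the authors' or KMS's algorithm): bisect on the ellipsoid's critical value until the estimated robust credibility of the projected region reaches 1 − α. Here `robust_credibility(c)` is a hypothetical routine that projects the ellipsoid with critical value c and evaluates the infimum defined on the previous slides (e.g. by simulation), assumed increasing in c:

```python
from scipy.stats import chi2

def calibrate_critical_value(robust_credibility, d, alpha, tol=1e-3):
    """Shrink the projected ellipsoid until robust credibility hits 1 - alpha."""
    lo, hi = 0.0, chi2.ppf(1 - alpha, df=d)   # full projection is conservative
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if robust_credibility(mid) >= 1 - alpha:
            hi = mid                           # still credible enough: shrink further
        else:
            lo = mid
    return hi
```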
