Doubts and Variability

Authors: Rhys Bidder and Matthew E. Smith
Presentation: Dan Greenwald
March 25, 2014
Introduction: Motivation

◮ The paper considers the asset-pricing implications of model uncertainty.
◮ It estimates the underlying endowment process, and considers multiplier preferences given these "true" models.
◮ It investigates the effect of model uncertainty on Hansen-Jagannathan bounds in the presence of stochastic volatility.
◮ It characterizes the worst-case probability distribution and detection error probabilities from the robust agent's perspective.
Introduction: Agenda

◮ First, estimate the parameters of the consumption growth process using an MCMC sampler.
◮ Given these estimates, and a solution to the agent's optimization problem, we can do all the asset pricing, etc.
◮ However, we may also be interested in features of the robust control problem:
  1. What are the properties of the worst-case probability distribution?
  2. What is the link between the consumption growth process and the detection error probability?
◮ Calculating these objects will require further MCMC sampling, given the parameters of the endowment process.
Estimating Endowments: Consumption Process

◮ Homoskedastic version:
  \Delta\log(C_{t+1}) = \phi + \sigma\,\varepsilon_{t+1}, \qquad \varepsilon_{t+1} \sim N(0, 1)
◮ Stochastic volatility version:
  \Delta\log(C_{t+1}) = \phi + \sigma \exp(v_{t+1})\,\varepsilon_{1,t+1}
  v_{t+1} = \lambda v_t + \tau\,\varepsilon_{2,t+1}
  (\varepsilon_{1,t+1},\ \varepsilon_{2,t+1})' \sim N(0, I)
◮ Consumption is observable, so we can estimate the endowment process without making any assumptions on preferences.
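For intuition, the stochastic volatility process above can be simulated directly. This is a minimal sketch; the parameter values in the usage line are illustrative placeholders, not the paper's estimates.

```python
import numpy as np

def simulate_sv_consumption(T, phi, sigma, lam, tau, rng=None):
    """Simulate Delta log(C_t) under the stochastic volatility specification:
    dlogc_t = phi + sigma * exp(v_t) * eps1_t,  v_t = lam * v_{t-1} + tau * eps2_t."""
    rng = np.random.default_rng(rng)
    v = 0.0                      # latent log-volatility, started at its mean
    dlogc = np.empty(T)
    vs = np.empty(T)
    for t in range(T):
        v = lam * v + tau * rng.standard_normal()
        dlogc[t] = phi + sigma * np.exp(v) * rng.standard_normal()
        vs[t] = v
    return dlogc, vs

# illustrative parameter values, not estimates from the paper
dlogc, v = simulate_sv_consumption(200, phi=0.005, sigma=0.005,
                                   lam=0.9, tau=0.1, rng=0)
```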
Estimating Endowments: Estimating the Consumption Process

◮ Estimation using Bayesian methods.
◮ Priors:

  Parameter | Description                        | Prior
  φ         | Mean Consumption Growth            | Uniform[0, 1]
  σ         | Non-Stoch. Consumption Growth Vol. | Uniform[0, 1]
  τ         | SV Innovation Volatility           | Uniform[0, 1]
  λ         | SV Persistence                     | Uniform[-1, 1]

◮ Estimation method:
  ◮ Homoskedastic: Random Walk Metropolis-Hastings algorithm.
    ◮ Alternatives: could have used a conjugate prior and sampled directly, or done importance sampling here.
  ◮ SV: Particle Marginal Metropolis-Hastings algorithm.
Estimating Endowments: Review of Bayesian Econometrics

◮ For notation, let ξ = (φ, σ)' be the vector of parameters, and let y denote the data (Δlog(C_1), ..., Δlog(C_T)).
◮ We want to draw from the posterior distribution p(ξ | y).
◮ By Bayes' rule, p(ξ | y) ∝ p(y | ξ) p(ξ).
◮ The prior p(ξ) is known by construction.
◮ The likelihood p(y | ξ) is known given the data:

  p(y \mid \xi) = (2\pi)^{-T/2}\,\sigma^{-T} \exp\left( -\frac{1}{2}\,\sigma^{-2} \sum_{t=1}^{T} \left( \Delta\log(C_t) - \phi \right)^2 \right)
Estimating Endowments: Metropolis-Hastings Algorithm

1. Given the current draw ξ_j, choose a candidate ξ* from a proposal density q(ξ*; ξ_j).
   ◮ Random walk proposal: ξ* = ξ_j + η, with E[η] = 0.
2. Calculate the acceptance probability

   \alpha = \min\left\{ \frac{p(\xi^\ast \mid y) / q(\xi^\ast; \xi_j)}{p(\xi_j \mid y) / q(\xi_j; \xi^\ast)},\ 1 \right\}
          = \min\left\{ \frac{p(y \mid \xi^\ast)\, p(\xi^\ast) / q(\xi^\ast; \xi_j)}{p(y \mid \xi_j)\, p(\xi_j) / q(\xi_j; \xi^\ast)},\ 1 \right\}

   ◮ If the proposal distribution is symmetric, then

     \alpha = \min\left\{ \frac{p(y \mid \xi^\ast)\, p(\xi^\ast)}{p(y \mid \xi_j)\, p(\xi_j)},\ 1 \right\}

3. Set ξ_{j+1} = ξ* with probability α; set ξ_{j+1} = ξ_j with probability 1 − α.
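The steps above can be sketched for the homoskedastic model in a few lines. Everything below (simulated data, step size, starting values) is illustrative and not taken from the paper.

```python
import numpy as np

def log_prior(xi):
    """Uniform[0,1] priors on phi and sigma, as on the priors slide."""
    phi, sigma = xi
    return 0.0 if (0.0 < phi < 1.0 and 0.0 < sigma < 1.0) else -np.inf

def log_likelihood(xi, y):
    """Gaussian log-likelihood of y_t = Delta log(C_t), homoskedastic model."""
    phi, sigma = xi
    T = len(y)
    return (-0.5 * T * np.log(2.0 * np.pi) - T * np.log(sigma)
            - 0.5 * np.sum((y - phi) ** 2) / sigma ** 2)

def rw_metropolis(y, xi0, step, n_draws, rng=None):
    """Random walk Metropolis-Hastings: the proposal is symmetric, so the
    acceptance ratio reduces to the posterior ratio."""
    rng = np.random.default_rng(rng)
    xi = np.asarray(xi0, dtype=float)
    lp = log_prior(xi) + log_likelihood(xi, y)
    draws = np.empty((n_draws, 2))
    for j in range(n_draws):
        xi_star = xi + step * rng.standard_normal(2)   # random walk proposal
        lp_star = log_prior(xi_star)
        if np.isfinite(lp_star):                       # skip likelihood outside support
            lp_star += log_likelihood(xi_star, y)
        if np.log(rng.random()) < lp_star - lp:        # accept with probability alpha
            xi, lp = xi_star, lp_star
        draws[j] = xi
    return draws

# illustrative run on simulated data (phi = 0.5, sigma = 0.1 are made up)
rng = np.random.default_rng(0)
y = 0.5 + 0.1 * rng.standard_normal(500)
draws = rw_metropolis(y, xi0=(0.4, 0.2), step=0.01, n_draws=4000, rng=1)
```

Discarding an initial burn-in, the posterior means of φ and σ should sit near the values used to simulate the data.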
Estimating Endowments: Particle Marginal Metropolis-Hastings Algorithm

◮ In the previous case, we assumed that the likelihood p(y | ξ) is known.
◮ However, in the SV specification this is no longer the case.
◮ Instead, we can calculate an approximation \hat{p}(y | ξ) using a particle filter.
◮ We can then proceed as before, using \hat{p}(y | ξ_j) and \hat{p}(y | ξ*) in place of p(y | ξ_j) and p(y | ξ*).
Estimating Endowments: SIR Particle Filter

◮ A good basic particle filtering algorithm is Sampling Importance Resampling (SIR).
◮ For notation, let y_t be the observable data and x_t the latent states. Assume that p(y_t | x_{1:t}) = g(y_t | x_t), that p(x_t | x_{t-1}, ..., x_1) = f(x_t | x_{t-1}), and that p(x_1) = μ(x_1).
◮ At t = 1:
  ◮ Initialize x^i_1 ∼ q_1(x_1 | y_1) for i = 1, ..., N, from some proposal density q_1.
  ◮ Compute weights

    w^i_1 = \frac{\mu(x^i_1)\, g(y_1 \mid x^i_1)}{q_1(x^i_1 \mid y_1)}, \qquad W^i_1 \propto w^i_1

  ◮ Resample {W^i_1, x^i_1} to obtain N equally weighted particles x̄^i_1.
Estimating Endowments: SIR Particle Filter (continued)

◮ At t ≥ 2:
  ◮ Sample x^i_t ∼ q_t(x_t | y_t, x̄^i_{t-1}).
  ◮ Compute incremental weights and normalized weights

    \alpha^i_t = \frac{g(y_t \mid x^i_t)\, f(x^i_t \mid \bar{x}^i_{t-1})}{q_t(x^i_t \mid y_t, \bar{x}^i_{t-1})}, \qquad W^i_t \propto \alpha^i_t

  ◮ Resample {W^i_t, x^i_t} to obtain N equally weighted particles x̄^i_t.
◮ Given the output of the algorithm, we can approximate

  \hat{p}(y_t \mid y_{1:t-1}, \xi) = \sum_{i=1}^{N} W^i_{t-1}\, \alpha^i_t

  \hat{p}(y \mid \xi) = \hat{p}(y_T \mid y_{1:T-1}, \xi) \cdots \hat{p}(y_2 \mid y_1, \xi)\, \hat{p}(y_1 \mid \xi)

◮ For the SV problem, x_t = v_t and y_t = Δlog(C_t); use the true transition probabilities for v_t as the proposal q.
◮ See Doucet and Johansen (2008) for further improvements to the particle filter, and Andrieu, Doucet and Holenstein (2010) for more information about PMCMC.
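A minimal sketch of this filter for the SV specification, using the bootstrap choice described on the slide: the proposal is the transition density for v_t, so the incremental weight α^i_t reduces to the measurement density g(y_t | v^i_t). Particle counts and parameter values are illustrative.

```python
import numpy as np

def sv_log_likelihood(y, phi, sigma, lam, tau, n_particles=1000, rng=None):
    """Bootstrap SIR estimate of log p-hat(y | xi) for the SV model."""
    rng = np.random.default_rng(rng)
    # initialize v from its stationary distribution N(0, tau^2 / (1 - lam^2))
    v = rng.normal(0.0, tau / np.sqrt(1.0 - lam ** 2), n_particles)
    loglik = 0.0
    for t in range(len(y)):
        # propagate with the true transition (bootstrap proposal), so the
        # incremental weight is the Gaussian measurement density g(y_t | v_t)
        v = lam * v + tau * rng.standard_normal(n_particles)
        s = sigma * np.exp(v)                      # conditional std of y_t
        logw = (-0.5 * np.log(2.0 * np.pi) - np.log(s)
                - 0.5 * ((y[t] - phi) / s) ** 2)
        m = logw.max()                             # log-sum-exp for stability
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())             # log p-hat(y_t | y_{1:t-1})
        # multinomial resampling back to equal weights
        v = v[rng.choice(n_particles, n_particles, p=w / w.sum())]
    return loglik
```

As a sanity check, with λ = 0 and τ near zero the model collapses to the homoskedastic case, and the particle estimate should match the exact Gaussian log-likelihood.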
Robust Analysis: Multiplier Preferences

◮ Notation: the current state is x; next period's state is x'(ε; x).
◮ Bellman equation:

  W(x) = \log C(x) + \min_{m(\varepsilon;x) \ge 0} \beta \int \left[ m(\varepsilon;x)\, W(x'(\varepsilon;x)) + \theta\, m(\varepsilon;x) \log m(\varepsilon;x) \right] p(\varepsilon)\, d\varepsilon

  where the minimization is subject to ∫ m(ε; x) p(ε) dε = 1.
◮ Bellman equation at the minimizing m:

  W(x) = \log C(x) - \beta\theta \log\left[ \int \exp\left( \frac{-W(x'(\varepsilon;x))}{\theta} \right) p(\varepsilon)\, d\varepsilon \right]
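The step between the two Bellman equations is the minimizing distortion m*. A short derivation, using a Lagrange multiplier on the constraint that m integrates to one against p:

```latex
% First-order condition of the inner minimization, with multiplier \mu on
% \int m(\varepsilon;x)\, p(\varepsilon)\, d\varepsilon = 1:
%   \beta \left[ W(x'(\varepsilon;x))
%                + \theta \left( 1 + \log m(\varepsilon;x) \right) \right] + \mu = 0,
% so \log m^* is affine in -W/\theta, and imposing the constraint gives
m^*(\varepsilon; x)
  = \frac{\exp\!\left( -W(x'(\varepsilon;x)) / \theta \right)}
         {\int \exp\!\left( -W(x'(\tilde\varepsilon;x)) / \theta \right)
            p(\tilde\varepsilon)\, d\tilde\varepsilon}.
```

Substituting m* back into the objective yields the minimized Bellman equation, and m*(ε; x) p(ε) is exactly the exponentially tilted measure that reappears in the asset-pricing slides.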
Robust Analysis: Asset Pricing

◮ Stochastic discount factor:

  \Lambda_{t,t+1} = \beta \left( \frac{C_{t+1}}{C_t} \right)^{-1} \frac{\exp\left( -W_{t+1}/\theta \right)}{E_t\left[ \exp\left( -W_{t+1}/\theta \right) \right]}

◮ Decomposition: \Lambda_{t,t+1} = \Lambda^R_{t,t+1} \Lambda^U_{t,t+1}, where

  \Lambda^R_{t,t+1} = \beta \left( \frac{C_{t+1}}{C_t} \right)^{-1}, \qquad \Lambda^U_{t,t+1} = \frac{\exp\left( -W_{t+1}/\theta \right)}{E_t\left[ \exp\left( -W_{t+1}/\theta \right) \right]}
Robust Analysis: Asset Pricing (continued)

◮ The authors use third-order perturbations to solve for the value function and the stochastic discount factor Λ_{t,t+1}.
◮ Therefore, given the earlier estimates of the endowment process, we can price any asset, check HJ bounds, etc.
◮ The rest of the paper characterizes the robust agent's problem (worst-case distribution, detection error probabilities).
Robust Analysis: Distorted Expectations

◮ Reformulation of the asset pricing equation:

  1 = E_t\left[ \Lambda_{t,t+1} R_{t+1} \right]
    = \int R(\varepsilon) \cdot \beta \left( \frac{C(x'(\varepsilon;x))}{C(x)} \right)^{-1} \frac{\exp\left( -W(x'(\varepsilon;x))/\theta \right)}{E_t\left[ \exp\left( -W(x'(\varepsilon;x))/\theta \right) \right]}\, p(\varepsilon)\, d\varepsilon
    = \int R(\varepsilon) \cdot \beta \left( \frac{C(x'(\varepsilon;x))}{C(x)} \right)^{-1} \tilde{p}(\varepsilon; x)\, d\varepsilon
    = \tilde{E}_t\left[ \Lambda^R_{t,t+1} R_{t+1} \right]

◮ Distorted probability measure:

  \tilde{p}(\varepsilon; x) = \frac{\exp\left( -W(x'(\varepsilon;x))/\theta \right) p(\varepsilon)}{E_t\left[ \exp\left( -W(x'(\varepsilon;x))/\theta \right) \right]}
Robust Analysis: Distorted Expectations (continued)

◮ Therefore, the agent prices assets as if he or she had log expected utility preferences, but under the probability distribution p̃.
◮ The distribution p̃ is known as the worst-case distribution.
◮ This is itself an object of interest: what is the consumption process that the agent has in mind when pricing assets?
◮ This density does not have a standard form, so we will once again use Monte Carlo methods to sample from it.
◮ For notation, let s be the deterministic variables in the state x, so that s' = f(ε, s). (Here s_t = v_t.)
Robust Analysis: Sampling the Worst-Case Distribution

◮ Method 1: Random Walk Metropolis-Hastings.
◮ Given {ε^i_{t−1}, s^i_{t−1}}_{i=1}^N:
  1. Set s^i_t = f(ε^i_{t−1}, s^i_{t−1}).
  2. For i = 1, ..., N:
  3. Draw ε*_t ∼ q(ε*, ε^i_{t−1}) for some proposal density q.
  4. Set ε^i_t = ε*_t with probability

     \min\left\{ 1,\ \frac{\tilde{p}(\varepsilon^\ast_t) / q(\varepsilon^\ast_t, \varepsilon^i_{t-1})}{\tilde{p}(\varepsilon^i_{t-1}) / q(\varepsilon^i_{t-1}, \varepsilon^\ast_t)} \right\}

     and set ε^i_t = ε^i_{t−1} otherwise (note: incorrect in paper!).
  5. Increment t.
◮ Can use the p distribution as an (independence) proposal: q ∼ N(0, I).
◮ Alternative to Metropolis-Hastings: could instead use p as a proposal to do importance sampling.
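A sketch of Method 1 for a single scalar shock at a fixed state s. Here `value_fn(eps, s)` is a hypothetical stand-in for the perturbation solution W(x'(ε; s)); the paper's solved value function is not reproduced, so the linear placeholder below is purely for illustration.

```python
import numpy as np

def log_p_tilde(eps, value_fn, s, theta):
    """Unnormalized log of the worst-case density:
    p-tilde(eps; s) proportional to exp(-W(x'(eps; s)) / theta) * N(eps; 0, 1)."""
    return -value_fn(eps, s) / theta - 0.5 * eps ** 2

def worst_case_mh(value_fn, s, theta, n_draws, step=1.0, rng=None):
    """Random walk Metropolis-Hastings targeting p-tilde at a fixed state s.
    The proposal is symmetric, so the q terms cancel in the acceptance ratio."""
    rng = np.random.default_rng(rng)
    eps, lp = 0.0, log_p_tilde(0.0, value_fn, s, theta)
    draws = np.empty(n_draws)
    for j in range(n_draws):
        eps_star = eps + step * rng.standard_normal()
        lp_star = log_p_tilde(eps_star, value_fn, s, theta)
        if np.log(rng.random()) < lp_star - lp:    # accept
            eps, lp = eps_star, lp_star
        draws[j] = eps
    return draws

# placeholder value function: W increasing in eps, so the worst case
# shifts probability mass toward low realizations of the shock
def linear_W(eps, s):
    return eps

draws = worst_case_mh(linear_W, s=0.0, theta=2.0, n_draws=20000, rng=0)
```

With this placeholder, p̃ has a closed form: exp(−ε/θ) times the standard normal density is the N(−1/θ, 1) density, so with θ = 2 the sampled mean should sit near −0.5. That makes a convenient sanity check, though it is not the paper's specification.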