
  1. GARCH models Magnus Wiktorsson

  2. SW-ARCH
  An advanced extension is the switching ARCH model.
  ▶ The model is given by r_t = √(g(S_t) σ_t²) z_t,
  ▶ where g(1) = 1 and (g(n), n ≥ 2) are free parameters.
  ▶ The conditional variance σ_t² is given by a standard ARCH, GARCH or EGARCH model (the latter two are non-trivial, due to their non-Markovian structure).
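The switching construction can be made concrete with a short simulation. A minimal Python sketch, assuming a two-state Markov chain for S_t, a standard ARCH(1) conditional variance, and illustrative parameter values (only g(1) = 1 comes from the slide; g(2), the transition probabilities and the ARCH parameters are made up):

```python
import math, random

random.seed(1)

# Illustrative parameters (not from the slide): ARCH(1) variance,
# two-state Markov chain S_t, and regime scalings with g(1) = 1 fixed.
omega, alpha = 0.05, 0.30          # sigma_t^2 = omega + alpha * r_{t-1}^2
g = {1: 1.0, 2: 4.0}               # g(2) is a free parameter
p_stay = {1: 0.98, 2: 0.95}        # P(S_t = s | S_{t-1} = s)

def simulate_swarch(n):
    s, r_prev, out = 1, 0.0, []
    for _ in range(n):
        if random.random() > p_stay[s]:     # regime switch
            s = 2 if s == 1 else 1
        sigma2 = omega + alpha * r_prev**2  # standard ARCH(1) variance
        r = math.sqrt(g[s] * sigma2) * random.gauss(0.0, 1.0)
        out.append(r)
        r_prev = r
    return out

r = simulate_swarch(2000)
print(len(r))
```

ARCH(1) is used precisely because it keeps the variance Markovian; with GARCH or EGARCH dynamics the variance would depend on the whole regime history.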

  3. Fractionally Integrated GARCH
  ▶ Recall the ARMA representation of the GARCH model
  (1 − ψ(B)) ϵ_t² = ω + (1 − β(B)) ν_t   (1)
  and the IGARCH representation is given by
  Φ(B)(1 − B) ϵ_t² = ω + (1 − β(B)) ν_t.   (2)
  ▶ Can we have something in-between? Yes, that is the FIGARCH model
  Φ(B)(1 − B)^d ϵ_t² = ω + (1 − β(B)) ν_t,   (3)
  with the process having finite variance if −0.5 < d < 0.5.
  ▶ The fractional differentiation can be computed as
  (1 − B)^d = ∑_{k=0}^∞ [Γ(k − d) / (Γ(k + 1) Γ(−d))] B^k.   (4)
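The fractional differentiation series can be computed either from the Gamma-ratio formula on the slide or via the equivalent recursion π_k = π_{k−1}(k − 1 − d)/k. A small Python sketch checking that the two agree:

```python
import math

# Coefficients pi_k of (1 - B)^d = sum_{k>=0} pi_k B^k, computed two ways:
# the Gamma-ratio formula from the slide, and the recursion
# pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k.
def frac_diff_weights(d, n):
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

def frac_diff_weights_gamma(d, n):
    return [math.gamma(k - d) / (math.gamma(k + 1) * math.gamma(-d))
            for k in range(n)]

d = 0.3
w1 = frac_diff_weights(d, 20)
w2 = frac_diff_weights_gamma(d, 20)
print(w1[1])  # pi_1 = -d
```

The recursion avoids evaluating Gamma at large arguments, which is how the weights are computed in practice.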

  6. GARCH in Mean
  Asset pricing models may include variance terms as explanatory factors (think CAPM). This can be captured by GARCH-in-Mean models:
  r_t = µ_t + δ f(σ_t²) + √(σ_t²) z_t.
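A minimal simulation of the GARCH(1,1)-in-Mean case, assuming f(x) = x (the conditional variance itself as the risk-premium term) and illustrative parameter values, none of which come from the slide:

```python
import math, random

random.seed(2)

# Illustrative GARCH(1,1)-in-Mean parameters, with f(x) = x assumed.
mu, delta = 0.0, 0.5
omega, alpha, beta = 0.05, 0.05, 0.90

def simulate_gim(n):
    sigma2, eps, out = omega / (1 - alpha - beta), 0.0, []
    for _ in range(n):
        sigma2 = omega + alpha * eps**2 + beta * sigma2   # GARCH(1,1) variance
        eps = math.sqrt(sigma2) * random.gauss(0.0, 1.0)  # innovation sigma_t z_t
        out.append(mu + delta * sigma2 + eps)             # r_t = mu + delta*f(sigma2) + eps
    return out

x = simulate_gim(5000)
print(sum(x) / len(x))
```

With δ > 0 the simulated returns have a positive average, the risk premium earned for bearing variance.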

  7. Multivariate models
  What about multivariate models? Returns: R_t = H_t^{1/2} Z_t.
  ▶ Huge number of models:
  ▶ VEC-MVGARCH (1988)
  ▶ BEKK-MVGARCH (1995)
  ▶ CCC-MVGARCH (1990)
  ▶ DCC-MVGARCH (2002)
  ▶ STCC-MVGARCH (2005)
  ▶ Most are overparametrized.
  ▶ I recommend starting with the CCC-MVGARCH.

  9. log-Likelihood
  The log-likelihood for a general multivariate GARCH model is given by
  ℓ_T(θ) = −(1/2) ∑_{t=1}^T ln|det(2π H_t)| − (1/2) ∑_{t=1}^T r_t^T H_t^{−1} r_t.   (5)
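This likelihood is direct to evaluate given the filtered covariances. A Python sketch (using `slogdet` and a linear solve rather than explicit inverses, a standard numerical choice, not something the slide prescribes):

```python
import numpy as np

# Gaussian log-likelihood for a multivariate GARCH model, given the
# return series r (T x d) and conditional covariances H (T x d x d).
def mvgarch_loglik(r, H):
    ll = 0.0
    for rt, Ht in zip(r, H):
        _, logdet = np.linalg.slogdet(2 * np.pi * Ht)     # ln|det(2*pi*H_t)|
        ll -= 0.5 * (logdet + rt @ np.linalg.solve(Ht, rt))
    return ll

# Sanity check: d = 1, H_t = 1 reduces to two standard normal log-densities.
r = np.array([[0.0], [1.0]])
H = np.array([[[1.0]], [[1.0]]])
print(mvgarch_loglik(r, H))  # -ln(2*pi) - 1/2, approx -2.3379
```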

  10. VEC-MVGARCH
  Uses the vech operator. Model given by
  vech(H_t) = C + ∑_{j=1}^p A_j vech(r_{t−j} r_{t−j}^T) + ∑_{j=1}^q B_j vech(H_{t−j})
  Cons:
  ▶ Large number of parameters!
  ▶ Difficult to impose positive definiteness on H_t.
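The vech operator and the resulting parameter count can be sketched in a few lines of Python; the dimension d = 5 below is just an example:

```python
import numpy as np

# The vech operator stacks the lower-triangular part (including the
# diagonal) of a symmetric matrix, column by column.
def vech(M):
    return np.concatenate([M[j:, j] for j in range(M.shape[0])])

# Why "large number of parameters": a VEC-MVGARCH(1,1) on d assets has
# m = d(d+1)/2 entries in C plus two m x m matrices A_1 and B_1.
d = 5
m = d * (d + 1) // 2
n_params = m + 2 * m * m
print(n_params)  # 465 parameters for just five assets
```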

  13. BEKK-MVGARCH
  Uses the Cholesky decomposition. Model given by
  H_t = CC^T + ∑_{j=1}^p ∑_{k=1}^K A_{k,j}^T r_{t−j} r_{t−j}^T A_{k,j} + ∑_{j=1}^q ∑_{k=1}^K B_{k,j}^T H_{t−j} B_{k,j}
  Pros:
  ▶ Fewer parameters (but still too many!)
  ▶ Positive definite by construction.
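One update step makes the "positive definite by construction" point concrete: every term is a quadratic form, so H_t inherits positive definiteness from CC^T. A Python sketch with K = 1 and illustrative parameter matrices:

```python
import numpy as np

# One BEKK(1,1) update with K = 1 (illustrative parameter values):
# H_t = C C^T + A^T r_{t-1} r_{t-1}^T A + B^T H_{t-1} B
d = 2
C = np.array([[0.2, 0.0],
              [0.1, 0.2]])   # lower triangular, so C C^T is positive definite
A = 0.3 * np.eye(d)
B = 0.9 * np.eye(d)

def bekk_step(H_prev, r_prev):
    return C @ C.T + A.T @ np.outer(r_prev, r_prev) @ A + B.T @ H_prev @ B

H = bekk_step(np.eye(d), np.array([0.5, -0.3]))
print(np.linalg.eigvalsh(H))  # all positive: H_t is PD by construction
```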

  15. CCC-MVGARCH
  Constant Conditional Correlation model. Easy to optimize the likelihood for the CCC-MVGARCH, not so easy for other models.
  ▶ H_t = ∆_t P_c ∆_t, where
  ▶ ∆_t = diag(σ_{t,k}),
  ▶ P_c is a constant correlation matrix,
  ▶ and σ_{t,k} is any standard univariate [X]ARCH model.
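Assembling H_t from per-asset volatilities and a fixed correlation matrix is a one-liner; the numbers below are illustrative:

```python
import numpy as np

# Build H_t = Delta_t P_c Delta_t from univariate volatilities sigma_{t,k}
# (e.g. from per-asset GARCH fits) and a constant correlation matrix P_c.
def ccc_cov(sigmas, P_c):
    Delta = np.diag(sigmas)
    return Delta @ P_c @ Delta

P_c = np.array([[1.0, 0.4], [0.4, 1.0]])
H = ccc_cov(np.array([0.02, 0.03]), P_c)
print(H)
```

Because P_c is constant, fitting splits into separate univariate volatility fits plus a correlation estimate, which is why the CCC likelihood is easy to optimize.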

  17. STCC-GARCH and DCC-GARCH
  The main limitation of the CCC-GARCH is the fixed correlation.
  ▶ The DCC-GARCH uses
  P_t = (I ⊙ Q_t)^{−1/2} Q_t (I ⊙ Q_t)^{−1/2},   (6)
  Q_t = (1 − a − b) S + a ϵ_{t−1} ϵ_{t−1}^T + b Q_{t−1},  a + b < 1.   (7)
  ▶ An alternative is the STCC-GARCH
  P_t = (1 − G(s_t)) P^(1) + G(s_t) P^(2),   (8)
  where P^(1), P^(2) are correlation matrices and G(·) is some smooth transition function.
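One DCC step shows why the rescaling is needed: the Q_t recursion alone does not keep a unit diagonal. A Python sketch with illustrative a, b and S:

```python
import numpy as np

# One DCC step: Q_t = (1-a-b)S + a*eps eps' + b*Q_{t-1}, then
# P_t = (I.Q_t)^{-1/2} Q_t (I.Q_t)^{-1/2} rescales Q_t into a
# proper correlation matrix (unit diagonal). Values are illustrative.
a, b = 0.05, 0.90
S = np.array([[1.0, 0.3], [0.3, 1.0]])      # unconditional correlation target

def dcc_step(Q_prev, eps_prev):
    Q = (1 - a - b) * S + a * np.outer(eps_prev, eps_prev) + b * Q_prev
    D = np.diag(1.0 / np.sqrt(np.diag(Q)))  # (I . Q)^{-1/2}
    return D @ Q @ D, Q

P, Q = dcc_step(S.copy(), np.array([1.5, -0.5]))
print(np.diag(P))
```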

  18. Some well-known Swedish assets
  On the second computer exercise you will try to fit a CCC-MVGARCH model to this.
  [Figure: price series 2005–2010 for ABB, Astrazeneca B, Boliden, Investor B, Lundin, MTG B, Nordea and Tele2 B.]

  19. Related concepts
  ▶ Recursive parameter estimation methods transform fixed parameters into variable quantities (e.g. RLS).
  ▶ This was taken one step further (?) in the Generalized Autoregressive Score (GAS) framework (Creal et al., 2013).

  20. Generalized Autoregressive Score
  Assume the data is given by
  r_t = σ_t z_t.   (9)
  How does σ_t vary?
  ▶ GARCH/EGARCH/...
  ▶ Or update the parameters such that the likelihood is increasing.

  21. Generalized Autoregressive Score
  Let the data be generated from the observation density
  y_t ∼ p(y_t | f_t, θ, F_{t−1}),   (10)
  where
  ▶ f_t are time varying parameters,
  ▶ θ are static parameters,
  ▶ F_{t−1} is some set of information (e.g. lagged values of y).

  22. Furthermore, assume that the time varying parameters have an autoregressive dynamics
  f_{t+1} = ω + ∑_{i=1}^p A_i s_{t−i+1} + ∑_{j=1}^q B_j f_{t−j+1},   (11)
  with ω, A_i, B_j being parameters and s_t some function of the past, typically
  s_t = S_t ∂ log p(y_t | f_t, θ, F_{t−1}) / ∂f_t,   (12)
  with S_t being some matrix, e.g. S_t = (I_F)^{−1}.
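The recursion (11)–(12) for a scalar parameter with p = q = 1 fits in a few lines; a minimal Python sketch with the scaled score passed in as a function and illustrative parameter values:

```python
# A minimal GAS(1,1) filter: f_{t+1} = omega + A1 * s_t + B1 * f_t, with a
# user-supplied scaled score s_t = S_t * d/df log p(y_t | f_t).
def gas_filter(y, scaled_score, omega, A1, B1, f0):
    f = [f0]
    for yt in y:
        f.append(omega + A1 * scaled_score(yt, f[-1]) + B1 * f[-1])
    return f

# Example: Gaussian variance model (f_t = sigma_t^2) with S_t = 1,
# so s_t is the raw score (y_t^2 - f_t) / (2 f_t^2).
gauss_score = lambda yt, ft: (yt**2 - ft) / (2 * ft**2)
f = gas_filter([0.5, -1.2, 0.3], gauss_score, omega=0.1, A1=0.1, B1=0.85, f0=1.0)
print(f)
```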

  23. Gaussian example
  Let the model be heteroscedastic white noise y_t = σ_t z_t, and define the time varying parameter as f_t = σ_t². That leads to
  ∂ log p(y_t | f_t, θ, F_{t−1}) / ∂f_t = (y_t² − f_t) / (2 f_t²),   (13)
  I_F = 1 / (2 f_t²).

  25. That leads to the parameter dynamics
  f_{t+1} = ω + A_1 (1 / (2 f_t²))^{−1} (y_t² − f_t) / (2 f_t²) + B_1 f_t.
  This simplifies into
  f_{t+1} = ω + A_1 y_t² + (B_1 − A_1) f_t.
  Does this look familiar? Yes, it is the GARCH(1,1) model!
  However, the Student-t version is not identical to the GARCH(1,1) version!
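The algebra can be checked numerically: with S_t = I_F^{−1} = 2 f_t², the scaled-score update collapses to the GARCH(1,1) recursion. A small Python sketch with illustrative ω, A_1, B_1:

```python
# Check that the GAS update with S_t = I_F^{-1} = 2*f^2 equals GARCH(1,1):
# omega + A1*(2 f^2)*((y^2 - f)/(2 f^2)) + B1*f == omega + A1*y^2 + (B1 - A1)*f
omega, A1, B1 = 0.1, 0.1, 0.9   # illustrative parameters

def gas_update(y, f):
    score = (y**2 - f) / (2 * f**2)          # Gaussian score in f
    return omega + A1 * (2 * f**2) * score + B1 * f

def garch_update(y, f):
    return omega + A1 * y**2 + (B1 - A1) * f

print(gas_update(0.7, 1.3), garch_update(0.7, 1.3))
```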

  28. Stochastic Volatility (SV)
  Let r_t be a stochastic process.
  ▶ The log returns (observed) are given by r_t = exp(V_t / 2) z_t.
  ▶ The volatility V_t is a hidden AR process V_t = α + β V_{t−1} + e_t.
  ▶ Or more generally A(·) V_t = e_t.
  ▶ More flexible than e.g. EGARCH models!
  ▶ Multivariate extensions exist.
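The observation/state pair is easy to simulate. A minimal Python sketch, using the parameter values of the Taylor (1982) simulation on the next slide (α = −0.2, β = 0.95, σ = 0.2) and Gaussian noise:

```python
import math, random

random.seed(3)

# Basic SV model: hidden AR(1) log-volatility V_t = alpha + beta*V_{t-1} + e_t
# with e_t ~ N(0, sigma^2), observed returns r_t = exp(V_t/2) * z_t.
alpha, beta, sigma = -0.2, 0.95, 0.2

def simulate_sv(n):
    V, out = alpha / (1 - beta), []          # start at the stationary mean
    for _ in range(n):
        V = alpha + beta * V + sigma * random.gauss(0.0, 1.0)
        out.append(math.exp(V / 2) * random.gauss(0.0, 1.0))
    return out

r = simulate_sv(1000)
print(len(r))
```

Unlike GARCH, the volatility has its own noise source, so it is a hidden state that must be filtered rather than computed from past returns.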

  29. A simulation of Taylor (1982)
  [Figure: simulated volatility exp(V_t/2) (top panel) and returns (bottom panel) over 1000 time steps.]
  With α = −0.2, β = 0.95 and σ = 0.2.

  30. Long Memory Stochastic Volatility (LMSV)
  The autocorrelation of volatility decays slower than at an exponential rate.
  ▶ The returns (observed) are given by r_t = exp(V_t / 2) z_t.
  ▶ The volatility V_t is a hidden, fractionally integrated AR process
  A(·)(1 − q^{−1})^b V_t = e_t, where b ∈ (0, 0.5).
  ▶ This gives long memory!

  31. Long Memory Stochastic Volatility (LMSV)
  ▶ The long memory model can be approximated by a large AR process.
  ▶ It can be shown that
  (1 − q^{−1})^b = ∑_{j=0}^∞ π_j q^{−j},
  where
  π_j = Γ(j − b) / (Γ(j + 1) Γ(−b)).
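The AR-approximation coefficients can be generated by a one-line recursion, since the Gamma formula gives the ratio π_{j+1}/π_j = (j − b)/(j + 1). A short Python sketch (b = 0.4 is just an example):

```python
import math

# Expansion coefficients pi_j of (1 - q^{-1})^b for b in (0, 0.5); their
# slow hyperbolic decay (~ j^{-1-b}) is why a *large* AR approximation
# is needed.
def lmsv_weights(b, n):
    w, out = 1.0, []
    for j in range(n):
        out.append(w)
        w *= (j - b) / (j + 1)      # pi_{j+1}/pi_j = (j - b)/(j + 1)
    return out

pi = lmsv_weights(0.4, 200)
print(pi[1])  # pi_1 = -b
```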
