Lecture on advanced volatility models, by Erik Lindström
Stochastic Volatility (SV)

Let r_t be a stochastic process.
◮ The log returns (observed) are given by r_t = exp(V_t / 2) z_t.
◮ The volatility V_t is a hidden AR process V_t = α + β V_{t−1} + e_t.
◮ Or, more generally, A(·) V_t = e_t.
◮ More flexible than e.g. EGARCH models!
◮ Multivariate extensions exist.
[Figure: A simulation of Taylor (1982), with α = −0.2, β = 0.95 and σ = 0.2. Top panel: exp(V_t / 2); bottom panel: returns, over 1000 time steps.]
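The simulation above can be sketched as follows; this is my own minimal illustration of the Taylor (1982) model with the slide's parameters, not the lecture's code.

```python
import numpy as np

# Simulate the Taylor (1982) SV model:
#   V_t = alpha + beta * V_{t-1} + e_t,  r_t = exp(V_t / 2) * z_t
# with the slide's parameters alpha = -0.2, beta = 0.95, sigma = 0.2.
rng = np.random.default_rng(0)
alpha, beta, sigma, T = -0.2, 0.95, 0.2, 1000

V = np.zeros(T)
V[0] = alpha / (1.0 - beta)          # start at the stationary mean
for t in range(1, T):
    V[t] = alpha + beta * V[t - 1] + sigma * rng.standard_normal()

r = np.exp(V / 2.0) * rng.standard_normal(T)   # observed log returns
```

Note that the stationary mean of V_t is α / (1 − β) = −4 here, so the volatility exp(V_t / 2) fluctuates around exp(−2) ≈ 0.14, matching the scale of the top panel.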
Long Memory Stochastic Volatility (LMSV)

◮ The returns (observed) are given by r_t = exp(V_t / 2) z_t.
◮ The volatility V_t is a hidden, fractionally integrated AR process A(·)(1 − q^{−1})^b V_t = e_t, where b ∈ (0, 0.5).
◮ The autocorrelation of the volatility decays slower than an exponential rate. This gives long memory!
Long Memory Stochastic Volatility (LMSV)

◮ The long memory model can be approximated by a large AR process.
◮ It can be shown that

    (1 − q^{−1})^b = ∑_{j=0}^∞ π_j q^{−j},   where π_j = Γ(j − b) / (Γ(j + 1) Γ(−b)).
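The coefficients π_j can be computed without evaluating gamma functions directly; the recursion below follows from Γ(j − b) = (j − 1 − b) Γ(j − 1 − b), and is a sketch of how the AR approximation would be built in practice.

```python
import numpy as np

# AR(infinity) coefficients of the fractional difference operator
# (1 - q^{-1})^b, via the recursion
#   pi_0 = 1,  pi_j = pi_{j-1} * (j - 1 - b) / j,
# equivalent to pi_j = Gamma(j - b) / (Gamma(j + 1) * Gamma(-b)).
def frac_diff_coeffs(b, n):
    pi = np.empty(n)
    pi[0] = 1.0
    for j in range(1, n):
        pi[j] = pi[j - 1] * (j - 1 - b) / j
    return pi

coeffs = frac_diff_coeffs(b=0.3, n=50)
# pi_1 = -b, and |pi_j| decays hyperbolically rather than exponentially,
# which is the source of the long memory.
```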
Quasi Likelihood inference

◮ The parameters in the SV model can be found by studying y_t = log(r_t^2) and x_t = V_t.
◮ This leads to (with η_t = log(z_t^2))

    y_t = log(r_t^2) = log(exp(V_t)) + log(z_t^2) = x_t + η_t
    x_t = α + β x_{t−1} + e_t.

◮ Estimate volatility and parameters using a Kalman filter!
◮ Practical consideration: P(r_t = 0) > 0 in the real world.
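A minimal sketch of the quasi-likelihood Kalman filter for this state space model (my own illustration, not the lecture's code): η_t = log(z_t^2) is treated as Gaussian with its true mean E[log χ^2(1)] ≈ −1.2704 and variance π^2 / 2, which is exactly the QML approximation.

```python
import numpy as np

# Quasi-likelihood for the linearized SV model
#   y_t = x_t + eta_t,   x_t = alpha + beta * x_{t-1} + e_t,
# where y_t = log(r_t^2) and eta_t is approximated as Gaussian.
def kalman_loglik(y, alpha, beta, sigma2_e):
    mean_eta, var_eta = -1.2704, np.pi**2 / 2   # moments of log chi^2(1)
    x = alpha / (1 - beta)                      # stationary mean of x_t
    P = sigma2_e / (1 - beta**2)                # stationary variance of x_t
    loglik = 0.0
    for yt in y:
        v = yt - (x + mean_eta)                 # innovation
        S = P + var_eta                         # innovation variance
        loglik += -0.5 * (np.log(2 * np.pi * S) + v**2 / S)
        K = P / S                               # Kalman gain
        x, P = x + K * v, P * (1 - K)           # measurement update
        x, P = alpha + beta * x, beta**2 * P + sigma2_e  # time update
    return loglik
```

In practice one would maximize this over (α, β, σ_e^2), and handle the P(r_t = 0) > 0 problem by e.g. flooring r_t^2 at a small positive value before taking logs.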
Stochastic Volatility in continuous time

A popular application of stochastic volatility models is option valuation, mainly due to computational properties.
◮ Several parameterizations exist.
◮ The Heston model is the most used model,

    dS_t = μ S_t dt + √(V_t) S_t dW_t^{(S)}
    dV_t = κ(θ − V_t) dt + σ √(V_t) dW_t^{(V)}
    dW_t^{(S)} dW_t^{(V)} = ρ dt.

◮ Note that the drift and squared diffusion have affine form.
◮ This reduces the task of computing prices to inversion of a Fourier integral.
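An Euler discretization of the Heston dynamics can be sketched as below; the parameter values are my own illustrative choices, not calibrated ones, and the "full truncation" trick (using max(V, 0) inside the square roots) is one common way to keep the scheme well defined.

```python
import numpy as np

# Euler scheme for the Heston model over one year of daily steps.
rng = np.random.default_rng(0)
mu, kappa, theta, sigma, rho = 0.05, 2.0, 0.04, 0.3, -0.7  # illustrative
S, V, dt, n = 100.0, 0.04, 1 / 252, 252

for _ in range(n):
    z1 = rng.standard_normal()
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal()  # corr = rho
    Vp = max(V, 0.0)                         # full truncation
    S *= np.exp((mu - 0.5 * Vp) * dt + np.sqrt(Vp * dt) * z1)
    V += kappa * (theta - Vp) * dt + sigma * np.sqrt(Vp * dt) * z2
```

For pricing one would of course use the affine structure and Fourier inversion mentioned above rather than Monte Carlo, but the simulation is useful for generating synthetic data.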
Continuous time volatility

◮ We can compute the volatility in a continuous time model.
◮ Advantage: a continuous time model can use data from any time scale, and does not assume that data is equidistantly sampled.
◮ A limit theory can be derived when data is sampled at high frequency.
◮ This is based on the general theory on quadratic variation.
Quadratic variation

◮ Let {S} be a general semimartingale.
◮ Let π_N = {0 = τ_0 < τ_1 < … < τ_N = T} be a partition of [0, T], and denote Δ = τ_n − τ_{n−1}, where Δ = T / N.
◮ Define

    Q_N = ∑_{n=1}^N (S(τ_n) − S(τ_{n−1}))^2.

◮ What are the properties of Q_N?
◮ Q_N converges to the quadratic variation.
Quadratic variation, cont.

Let S_t = σ W_t.
◮ Then Q_N = ∑_{n=1}^N (S(τ_n) − S(τ_{n−1}))^2.
◮ Note that (S(τ_n) − S(τ_{n−1}))^2 ∼ σ^2 Δ χ^2(1).
◮ Remember E[χ^2(p)] = p and V[χ^2(p)] = 2p.
◮ What are the properties of Q_N?
◮ E[Q_N] = σ^2 Δ E[χ^2(N)] = σ^2 Δ N = σ^2 T.
◮ V[Q_N] = (σ^2 Δ)^2 V[χ^2(N)] = σ^4 T^2 · 2 / N → 0.
◮ Chebyshev's inequality then states that Q_N →^p σ^2 T.
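The convergence Q_N →^p σ^2 T is easy to check numerically; the sketch below simulates Brownian increments directly.

```python
import numpy as np

# For S_t = sigma * W_t the sum of squared increments Q_N concentrates
# around sigma^2 * T, with variance 2 * sigma^4 * T^2 / N as derived above.
rng = np.random.default_rng(42)
sigma, T, N = 0.2, 1.0, 100_000
dS = sigma * np.sqrt(T / N) * rng.standard_normal(N)  # Brownian increments
Q_N = np.sum(dS**2)
# Q_N is close to sigma**2 * T = 0.04 (std. dev. about 1.8e-4 at this N)
```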
[Figure: Quadratic variation of daily log returns for the Black-Scholes model, over 500 trading days.]
Quadratic variation, cont.

◮ For a diffusion process dX_t = μ(t, X_t) dt + σ(t, X_t) dW_t, the quadratic variation converges to

    Q_N → ∫_0^T σ^2(s, X_s) ds.

◮ For a jump diffusion dX_t = μ(t, X_t) dt + σ(t, X_t) dW_t + dZ_t, where {Z} is a compound Poisson process with N_t jumps of random size J_i, the quadratic variation yields

    Q_N → ∫_0^T σ^2(s, X_s) ds + ∑_{i=0}^{N_t} J_i^2.
Realized variation

◮ The quadratic (realized) variation is estimated as

    QV_N = ∑_{n=1}^N (S(τ_n) − S(τ_{n−1}))^2.

◮ The Bipower variation is estimated as

    BPV_N = (π/2) ∑_{n=1}^{N−1} |S(τ_{n+1}) − S(τ_n)| |S(τ_n) − S(τ_{n−1})|.

◮ It can be shown that the Bipower variation converges to BPV_N → ∫ σ^2(s, X_s) ds for a jump diffusion process (and even for a general semimartingale).
◮ The difference between the realized variation and the Bipower variation is used to estimate the size of the jump component.
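Both estimators can be sketched on a simulated jump diffusion; this is my own illustration (the jump sizes and counts are arbitrary choices), showing that QV picks up the jumps while BPV does not.

```python
import numpy as np

# QV estimates the total quadratic variation, BPV only the continuous
# part, so QV - BPV estimates the jump contribution sum J_i^2.
rng = np.random.default_rng(3)
N, T, sigma = 100_000, 1.0, 0.2
dX = sigma * np.sqrt(T / N) * rng.standard_normal(N)   # diffusion increments
J = 0.05 * rng.standard_normal(5)                      # five jump sizes
dX[rng.choice(N, size=5, replace=False)] += J          # add rare jumps

QV = np.sum(dX**2)
BPV = (np.pi / 2) * np.sum(np.abs(dX[1:]) * np.abs(dX[:-1]))
jump_estimate = QV - BPV                               # approx. sum(J**2)
```

Because each jump enters only two of the bipower products, multiplied by a small adjacent diffusion increment, its contribution to BPV vanishes as N grows; that is the intuition behind the jump robustness.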
[Figure: Realised variation (QV and BPV) for daily log returns of Black-Scholes; lower panel shows QV − BPV (jumps?), on the order of 10^{−3}.]
[Figure: Realised variation (QV and BPV) for daily log returns of OMXS30, 1995–2010; lower panel shows QV − BPV (jumps?).]
Practical considerations

◮ Theory suggests that Δ → 0 would be a good thing.
◮ Practice suggests otherwise, cf. stylized facts.
◮ The problem is microstructure noise.
◮ Several strategies exist for correcting for this.