  1. Hedging and Calibration for Log-normal Rough Volatility Models Masaaki Fukasawa Osaka University Celebrating Jim Gatheral’s 60th Birthday, 2017, New York

  2.–7. When I first met Jim ...
  • in Osaka, at the end of 2012,
  • Jim told me he had noticed my paper (2011), including a small vol-of-vol expansion of fractional stochastic volatility.
  • He praised me for the idea of explaining the volatility skew “power law” by the “long memory” property of volatility.
  • I explained that, unfortunately, my result implied the long memory is of no use and that we need a fractional BM with “short memory”.
  • Jim was really disappointed, saying something like: short memory is not realistic, it’s nonsense, meaningless ...
  • I was embarrassed and had to make an excuse for the model (it was just a toy example, etc.). Now this is a good memory for me.

  8. The volatility skew power law
  A figure from “Volatility is rough” by Gatheral et al. (2014). Figure 1.2: The black dots are non-parametric estimates of the S&P ATM volatility skews as of June 20, 2013; the red curve is the power-law fit $\psi(\tau) = A\,\tau^{-0.4}$.

  9. Volatility is rough
  Gatheral, Jaisson and Rosenbaum (2014) showed that
  • increments of log realized variance exhibit a scaling property,
  • the simple model
$$d\log V_t = \eta\,dW^H_t,\qquad d\langle \log S\rangle_t = V_t\,dt,$$
  is consistent with the scaling property for $H \approx 0.1$, as well as with the stylized fact that volatility is log-normal,
  • in particular, both the historical and the implied volatilities suggest the same fractional volatility model with $H \approx 0.1$,
  • the model provides a good prediction performance,
  • and the volatility paths from the model exhibit fake long memory properties.
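
A minimal numerical sketch (added here, not part of the deck) of the scaling check behind this slide: simulate $\log V_t = \eta W^H_t$ exactly via a Cholesky factorisation of the fBm covariance and compare the empirical second moment of increments with the theoretical $\eta^2\Delta^{2H}$. The values of $H$, $\eta$, the grid, and the single-path check are illustrative assumptions.

```python
import numpy as np

def fbm_path(n, H, T=1.0, seed=0):
    """Exact fBm sample on an n-point grid via Cholesky of its covariance (fine for small n)."""
    t = np.linspace(T / n, T, n)
    cov = 0.5 * (t[:, None]**(2 * H) + t[None, :]**(2 * H)
                 - np.abs(t[:, None] - t[None, :])**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    return t, L @ np.random.default_rng(seed).standard_normal(n)

H, eta, n, T = 0.1, 0.3, 500, 1.0
t, WH = fbm_path(n, H, T)
log_v = eta * WH                     # d log V_t = eta dW^H_t, started from log V_0 = 0

for lag in (1, 5, 25):
    dx = log_v[lag:] - log_v[:-lag]  # increments over Delta = lag * T / n
    print(lag, np.mean(dx**2), eta**2 * (lag * T / n)**(2 * H))   # empirical vs eta^2 Delta^(2H)
```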

  10. fBm sample paths: H = 0.1, 0.5, 0.9. [Figure: simulated fractional Brownian motion paths for the three values of H.]

  11. Long memory and short memory
  • The long memory property of asset return volatility originally meant a slow decay of the autocorrelation of squared returns.
  • The mathematical definition is rigid: a stochastic process is of long memory iff its autocorrelation is not summable.
  • In the case of fractional Gaussian noise $X_j = W^H_{j\Delta} - W^H_{(j-1)\Delta}$,
$$E[X_{j+k}X_j] = \frac{\Delta^{2H}}{2}\left(|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H}\right) \sim \Delta^{2H}\,H(2H-1)\,k^{2H-2},$$
  so it is of long memory iff $H > 1/2$.
  • In contrast, the case $H < 1/2$ is referred to as short memory. It has by no means shorter memory than the case $H = 1/2$, which has no memory; the decay of the autocorrelation is actually slow.
  • Set free from the long memory spell, goodbye bad memories.
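
The summability criterion is easy to check numerically; the following small addition (with $\Delta = 1$ and values of $H$ chosen for illustration) evaluates the autocovariance above, its power-law tail, and its partial sums.

```python
import numpy as np

def fgn_autocov(k, H):
    """Autocovariance of fractional Gaussian noise with Delta = 1."""
    k = np.asarray(k, dtype=float)
    return 0.5 * (np.abs(k + 1)**(2 * H) - 2 * np.abs(k)**(2 * H) + np.abs(k - 1)**(2 * H))

ks = np.arange(1, 100001)
for H in (0.1, 0.5, 0.9):
    g = fgn_autocov(ks, H)
    # the tail behaves like H(2H-1) k^(2H-2): absolutely summable iff H <= 1/2
    print(f"H={H}: gamma(10)={g[9]:+.3e}, tail approx={H*(2*H - 1)*10.0**(2*H - 2):+.3e}, "
          f"sum of |gamma| over 1e5 lags={np.sum(np.abs(g)):.3f}")
```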

  12. Pricing under rough volatility
  Bayer, Friz and Gatheral (2016) elegantly solved a pricing problem with “information from the big-bang”:
  • A fractional Brownian motion $W^H$ is not Markov.
  • The time-$t$ price of a payoff $H$ is $E[H|\mathcal{F}_t]$ by no-arbitrage.
  • The natural filtration of $W^H$ is $\sigma(W^H_t - W^H_s;\ s \in (-\infty, t])$.
  Rewrite the model under a martingale measure; for $\theta > t$,
$$S_\theta = S_t \exp\left\{\int_t^\theta \sqrt{V_u}\,dB_u - \frac{1}{2}\int_t^\theta V_u\,du\right\},$$
$$V_\theta = V_t \exp\big(\eta(W^H_\theta - W^H_t)\big) = V_t(\theta)\exp\left\{\tilde\eta\int_t^\theta (\theta-u)^{H-1/2}\,dW_u - \frac{\tilde\eta^2}{4H}(\theta-t)^{2H}\right\},$$
  and notice
$$\int_t^\theta E\big[d\langle \log S\rangle_u \,\big|\, \mathcal{F}_t\big] = \int_t^\theta E[V_u|\mathcal{F}_t]\,du = \int_t^\theta V_t(u)\,du.$$
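
One small step worth spelling out (an addition, not on the slide): $\int_t^\theta(\theta-u)^{H-1/2}\,dW_u$ is Gaussian, independent of $\mathcal{F}_t$, with variance $\int_t^\theta(\theta-u)^{2H-1}\,du = (\theta-t)^{2H}/(2H)$, so
$$E\left[\exp\left\{\tilde\eta\int_t^\theta(\theta-u)^{H-1/2}\,dW_u\right\}\,\middle|\,\mathcal{F}_t\right] = \exp\left\{\frac{\tilde\eta^2}{4H}(\theta-t)^{2H}\right\},$$
which is exactly why $E[V_\theta|\mathcal{F}_t] = V_t(\theta)$ with the correction term $\tilde\eta^2(\theta-t)^{2H}/(4H)$ appearing above.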

  13. The rough Bergomi model is Markov
  The curve $\tau \mapsto V_t(t+\tau)$, where
$$V_t(\theta) = V_t \exp\left\{\tilde\eta\int_{-\infty}^t \big((\theta-u)^{H-1/2} - (t-u)^{H-1/2}\big)\,dW_u + \frac{\tilde\eta^2}{4H}(\theta-t)^{2H}\right\},$$
  is called the forward variance curve. When $t > s$,
$$V_t(\theta) = V_s(\theta)\exp\left\{\tilde\eta\int_s^t (\theta-u)^{H-1/2}\,dW_u - \frac{\tilde\eta^2}{4H}\big((\theta-s)^{2H} - (\theta-t)^{2H}\big)\right\}.$$
  Therefore the infinite-dimensional process $\{(S_t, V_t(t+\cdot))\}_{t\geq 0}$ is Markov with $(0,\infty)\times C([0,\infty))$ as its state space.

  14. An extension: log-normal rough volatility models
  The rough Bergomi model of BFG can be written as
$$S_\theta = S_t\exp\left\{\int_t^\theta \sqrt{V_u}\,dB_u - \frac{1}{2}\int_t^\theta V_u\,du\right\},$$
$$V_\theta = V_t(\theta)\exp\left\{\int_t^\theta k(\theta,u)\,dW_u - \frac{1}{2}\int_t^\theta k(\theta,u)^2\,du\right\},$$
$$V_t(\theta) = V_s(\theta)\exp\left\{\int_s^t k(\theta,u)\,dW_u - \frac{1}{2}\int_s^t k(\theta,u)^2\,du\right\}$$
  for $\theta > t > s$, with $k(\theta,u) = \tilde\eta(\theta-u)^{H-1/2}$ and $d\langle B, W\rangle_t = \rho\,dt$.
  Notice the forward variance curve follows a time-inhomogeneous Black-Scholes dynamics: for each $\theta$,
$$dV_t(\theta) = V_t(\theta)\,k(\theta,t)\,dW_t,\qquad t < \theta.$$
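
A minimal simulation sketch of this update (added; not from the talk): propagate a flat forward variance curve one step with $k(\theta,u) = \tilde\eta(\theta-u)^{H-1/2}$ and check the martingale property $E[V_t(\theta)] = V_s(\theta)$ by Monte Carlo. The step size, grid and $\tilde\eta$ are assumptions; the $dW$-integral uses a simple left-point rule (there is no kernel singularity because every maturity lies strictly beyond $t$).

```python
import numpy as np

H, eta_t = 0.1, 0.4                              # eta_t denotes eta-tilde (assumed)
s, t = 0.0, 1.0 / 52.0                           # one-week step
thetas = t + np.linspace(0.01, 1.0, 50)          # curve maturities, all strictly beyond t
V_s = 0.04 * np.ones_like(thetas)                # today's (flat) forward variance curve V_s(theta)

m, n_paths = 50, 100000
u = s + (t - s) * np.arange(m) / m               # left points of the sub-grid on [s, t]
dW = np.random.default_rng(0).standard_normal((n_paths, m)) * np.sqrt((t - s) / m)

# I[p, j] ~ int_s^t (theta_j - u)^(H - 1/2) dW_u for path p (left-point rule)
I = dW @ ((thetas[None, :] - u[:, None]) ** (H - 0.5))

drift = eta_t**2 / (4 * H) * ((thetas - s)**(2 * H) - (thetas - t)**(2 * H))
V_t = V_s * np.exp(eta_t * I - drift)            # the update V_t(theta) = V_s(theta) exp{...}

print(np.max(np.abs(V_t.mean(axis=0) / V_s - 1)))  # ~ 0 up to Monte-Carlo / discretisation error
```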

  15. Log-contract price dynamics
$$E[-2\log S_\theta|\mathcal{F}_t] = -2\log S_t + \int_t^\theta E\big[d\langle\log S\rangle_u\big|\mathcal{F}_t\big]$$
$$= -2\log S_t + \int_t^\theta V_t(u)\,du$$
$$= -2\log S_0 - 2\int_0^t \frac{dS_u}{S_u} + \int_0^t V_u\,du + \int_t^\theta V_t(u)\,du.$$
  Therefore, $P^\theta_t = E[-2\log S_\theta|\mathcal{F}_t]$ follows
$$dP^\theta_t = -2\,\frac{dS_t}{S_t} + \int_t^\theta dV_t(u)\,du$$
$$= -2\,\frac{dS_t}{S_t} + \left\{\int_t^\theta V_t(u)\,k(u,t)\,du\right\}dW_t$$
$$= -2\,\frac{dS_t}{S_t} + \left\{\int_t^\theta \frac{\partial P^u_t}{\partial u}\,k(u,t)\,du\right\}dW_t,$$
  since $\partial P^u_t/\partial u = V_t(u)$.
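
As a small added check of the diffusion coefficient above: with a flat forward variance curve $V_t(u) \equiv V$ and $k(u,t) = \tilde\eta(u-t)^{H-1/2}$, the $dW$-loading $\int_t^\theta V_t(u)\,k(u,t)\,du$ has the closed form $V\tilde\eta(\theta-t)^{H+1/2}/(H+1/2)$; the quadrature below (with assumed parameter values) reproduces it.

```python
import numpy as np

H, eta_t, V, t, theta = 0.1, 0.4, 0.04, 0.0, 0.5   # eta_t denotes eta-tilde (assumed values)
n = 200000
du = (theta - t) / n
u = t + (np.arange(n) + 0.5) * du                  # midpoint rule copes with the (u-t)^(H-1/2) singularity
loading_quad = np.sum(V * eta_t * (u - t)**(H - 0.5)) * du
loading_closed = V * eta_t * (theta - t)**(H + 0.5) / (H + 0.5)
print(loading_quad, loading_closed)                # the two should agree to several decimal places
```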

  16. Hedging under rough volatility
  Theorem. Let $P^\theta$ be a log-contract price process with maturity $\theta$. Then any square-integrable payoff with maturity $\tau \leq \theta$ can be perfectly replicated by a dynamic portfolio of $(S, P^\theta)$.
  Proof. Write $B = \rho W + \sqrt{1-\rho^2}\,W^\perp$. Then the martingale representation theorem tells us that for any $X$ there exists $(H, H^\perp)$ such that
$$X = E[X|\mathcal{F}_0] + \int_0^\tau H_t\,dW_t + \int_0^\tau H^\perp_t\,dW^\perp_t.$$
  (Use the Clark-Ocone formula to compute it.) We have
$$dW^\perp_t = \frac{1}{\sqrt{1-\rho^2}}\left\{\frac{dS_t}{\sqrt{V_t}\,S_t} - \rho\,dW_t\right\},\qquad
dW_t = \left\{\int_t^\theta \frac{\partial P^u_t}{\partial u}\,k(u,t)\,du\right\}^{-1}\left\{dP^\theta_t + 2\,\frac{dS_t}{S_t}\right\}.$$
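
To make the resulting portfolio explicit (an added remark; the notation $\Sigma_t := \int_t^\theta \partial_u P^u_t\,k(u,t)\,du$ and the weights $\pi^P, \pi^S$ are mine, not the slide's), substitute the two displayed identities into $dX_t = H_t\,dW_t + H^\perp_t\,dW^\perp_t$:
$$dX_t = \pi^P_t\,dP^\theta_t + \pi^S_t\,dS_t,\qquad \pi^P_t = \frac{1}{\Sigma_t}\left(H_t - \frac{\rho\,H^\perp_t}{\sqrt{1-\rho^2}}\right),\qquad \pi^S_t = \frac{1}{S_t}\left(2\pi^P_t + \frac{H^\perp_t}{\sqrt{1-\rho^2}\,\sqrt{V_t}}\right),$$
i.e. hold $\pi^P_t$ units of the log-contract and $\pi^S_t$ shares of the stock.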

  17. An example
  Consider hedging a log-contract with maturity $\tau$ by one with maturity $\theta > \tau$. Using again
$$dP^\theta_t = -2\,\frac{dS_t}{S_t} + \left\{\int_t^\theta \frac{\partial P^u_t}{\partial u}\,k(u,t)\,du\right\}dW_t,$$
  we have
$$dP^\tau_t = -2\,\frac{dS_t}{S_t} + \left\{\int_t^\tau \frac{\partial P^u_t}{\partial u}\,k(u,t)\,du\right\}dW_t
= -2\,\frac{dS_t}{S_t} + \frac{\int_t^\tau \frac{\partial P^u_t}{\partial u}\,k(u,t)\,du}{\int_t^\theta \frac{\partial P^u_t}{\partial u}\,k(u,t)\,du}\left\{dP^\theta_t + 2\,\frac{dS_t}{S_t}\right\}.$$
  Consistent with real market data? A related ongoing work: Horvath, Jacquier and Tankov.
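
For intuition, an added worked case assuming a flat forward variance curve $V_t(u) \equiv V$: since $\partial_u P^u_t = V_t(u)$ and $\int_t^\tau(u-t)^{H-1/2}\,du = (\tau-t)^{H+1/2}/(H+1/2)$, the weight multiplying $dP^\theta_t + 2\,dS_t/S_t$ collapses to
$$\frac{\int_t^\tau \partial_u P^u_t\,k(u,t)\,du}{\int_t^\theta \partial_u P^u_t\,k(u,t)\,du} = \left(\frac{\tau-t}{\theta-t}\right)^{H+1/2},$$
so for $H \approx 0.1$ the log-contract hedge ratio behaves roughly like the square root of the time-to-maturity ratio.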

  18. How to calibrate? Monte Carlo → the next talk!
  Asymptotic analyses under flat (or specific) forward variances:
  • Alòs et al. (2007)
  • Fukasawa (2011)
  • Bayer, Friz and Gatheral (2016)
  • Forde and Zhang (2017)
  • Jacquier, Pakkanen and Stone
  • Bayer, Friz, Gulisashvili, Horvath and Stemper
  • Akahori, Song and Wang
  • Funahashi and Kijima (2017)
  and more.
  Asymptotic analyses under a general forward variance curve:
  • Fukasawa (2017)
  • Garnier and Sølna
  • El Euch, Fukasawa, Gatheral and Rosenbaum (in preparation)

  19. The ATM implied volatility skew and curvature
  El Euch, Fukasawa, Gatheral and Rosenbaum: as $\theta \to 0$,
$$\sigma_t(0,\theta) = \sqrt{\frac{1}{\theta}\int_0^\theta V_t(t+\tau)\,d\tau}\,\left\{1 + \left(\frac{3\kappa_3^2}{2} - \kappa_4\right)\theta^{2H} + o(\theta^{2H})\right\},$$
$$\left.\frac{\partial}{\partial k}\sigma_t(k,\theta)\right|_{k=0} = \kappa_3\,\theta^{H-1/2} + o(\theta^{2H-1/2}),$$
$$\left.\frac{\partial^2}{\partial k^2}\sigma_t(k,\theta)\right|_{k=0} = \frac{2\kappa_4 - 3\kappa_3^2}{\sqrt{V_t}}\,\theta^{2H-1} + \kappa_3\,\theta^{H-1/2} + o(\theta^{2H-1}),$$
  under the rough Bergomi model with $|\rho| < 1$ and an $H$-Hölder forward variance curve, where
$$\kappa_3 = \frac{\rho\,\tilde\eta}{2(H+1/2)(H+3/2)},\qquad
\kappa_4 = \frac{(1+2\rho^2)\,\tilde\eta^2}{4(H+1)(2H+1)^2} + \frac{\rho^2\,\tilde\eta^2\,\beta(H+3/2,\,H+3/2)}{(2H+1)^2}.$$
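
A small numerical companion (added, not from the talk): evaluate $\kappa_3$, $\kappa_4$ and the resulting ATM level, skew and curvature approximations under a flat forward variance curve. The parameter values are illustrative assumptions, and the code simply reproduces the formulas above.

```python
import numpy as np
from math import gamma

def beta_fn(a, b):
    """Euler beta function via the gamma function."""
    return gamma(a) * gamma(b) / gamma(a + b)

H, rho, V, theta = 0.1, -0.9, 0.04, 0.25
eta_t = 1.9 * np.sqrt(2 * H)        # eta-tilde; 1.9 would be a typical BFG-style vol-of-vol (assumed)

kappa3 = rho * eta_t / (2 * (H + 0.5) * (H + 1.5))
kappa4 = ((1 + 2 * rho**2) * eta_t**2 / (4 * (H + 1) * (2 * H + 1)**2)
          + rho**2 * eta_t**2 * beta_fn(H + 1.5, H + 1.5) / (2 * H + 1)**2)

atm_level = np.sqrt(V) * (1 + (1.5 * kappa3**2 - kappa4) * theta**(2 * H))   # flat forward variance
atm_skew = kappa3 * theta**(H - 0.5)
atm_curv = (2 * kappa4 - 3 * kappa3**2) / np.sqrt(V) * theta**(2 * H - 1) + kappa3 * theta**(H - 0.5)
print(f"kappa3={kappa3:.4f} kappa4={kappa4:.4f} ATM vol~{atm_level:.4f} skew~{atm_skew:.4f} curv~{atm_curv:.4f}")
```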

  20. [Figure: implied volatility smile over log-moneyness $k \in [-0.2, 0.2]$ for $H = 0.05$, $\rho = -0.9$, $\tilde\eta/\sqrt{2H} = 0.5$, $V(0) = 0.04$, $\theta = 1$, flat forward variance; the case $\tilde\eta\,\theta^H/\sqrt{2H} < 1$.]

  21. [Figure: implied volatility smile over log-moneyness $k \in [-0.2, 0.2]$ for $H = 0.05$, $\rho = -0.9$, $\tilde\eta/\sqrt{2H} = 2.3$, $V(0) = 0.04$, $\theta = 1$, flat forward variance; the case $\tilde\eta\,\theta^H/\sqrt{2H} > 1$.]

  22. An intermediate formula
  Theorem. Let $t = 0$ for simplicity. Then
$$\left.\frac{\partial}{\partial k}\sigma_0(k,\theta)\right|_{k=0} \sim -\frac{\rho}{\sqrt{\theta}}\,E\!\left[\frac{X_\theta}{\sqrt{\langle X\rangle_\theta}}\right]$$
  as $\theta \to 0$, where
$$X_\theta = \int_0^\theta \sqrt{V_s}\,dW_s,\qquad
V_s = V_0(s)\exp\left\{\int_0^s k(s,u)\,dW_u - \frac{1}{2}\int_0^s k(s,u)^2\,du\right\}.$$
  Note: we still need Monte Carlo, but it is free from $\rho$. This approximation is surprisingly accurate!
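
The Monte Carlo the note refers to can be sketched in a few lines. The following is an illustrative implementation (mine, not the authors'): simulate the rough Bergomi $V$ with a flat forward variance via a crude left-point discretisation of the Volterra integral, estimate $E[X_\theta/\sqrt{\langle X\rangle_\theta}]$, multiply by $-\rho/\sqrt{\theta}$, and compare with the first-order skew $\kappa_3\,\theta^{H-1/2}$ from slide 19. All parameter values and the discretisation scheme are assumptions.

```python
import numpy as np

H, rho, V0, theta = 0.1, -0.9, 0.04, 0.25
eta_t = 0.5 * np.sqrt(2 * H)                       # eta-tilde (assumed, slide-20-style vol-of-vol 0.5)
n, n_paths = 200, 20000
dt = theta / n
t = np.arange(n) * dt                              # left grid points t_0, ..., t_{n-1}

rng = np.random.default_rng(1)
dW = rng.standard_normal((n_paths, n)) * np.sqrt(dt)   # dW[:, j] = W(t_{j+1}) - W(t_j)

# Y[:, i] ~ int_0^{t_i} (t_i - u)^(H - 1/2) dW_u, left-point kernel, non-anticipating
K = np.zeros((n, n))
for i in range(1, n):
    K[i, :i] = (t[i] - t[:i]) ** (H - 0.5)
Y = dW @ K.T

V = V0 * np.exp(eta_t * Y - eta_t**2 * t**(2 * H) / (4 * H))   # flat forward variance curve V0
X = np.sum(np.sqrt(V) * dW, axis=1)                # X_theta = int_0^theta sqrt(V_s) dW_s
QV = np.sum(V, axis=1) * dt                        # <X>_theta = int_0^theta V_s ds

skew_mc = -rho / np.sqrt(theta) * np.mean(X / np.sqrt(QV))
kappa3 = rho * eta_t / (2 * (H + 0.5) * (H + 1.5))
print(skew_mc, kappa3 * theta**(H - 0.5))          # should roughly agree for small vol-of-vol
```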
