Option pricing: a new adaptive Monte Carlo method


Option pricing: a new adaptive Monte Carlo method
Nadia Oudjane and Jean-Michel Marin

1. Motivation: Option pricing
2. Importance Sampling for variance reduction
3. Particle methods to approximate the optimal importance law
4. Simulation results

1. Motivations: Option pricing
General framework

◮ We consider a Markov chain $(X_n)_{n \ge 0}$ taking values in $E = \mathbb{R}^d$, with
  • initial distribution $\mu_0 = \mathcal{L}(X_0)$,
  • transition kernels $Q_k = \mathcal{L}(X_k \mid X_{k-1})$,
  • joint distributions $\mu_{0:k} = \mathcal{L}(X_0, \dots, X_k) = \mu_{0:k-1} \times Q_k$.
◮ The goal is to compute efficiently the expectation
  $$\mu_{0:n}(H) = E[H(X_{0:n})] = E[H(X_0, \dots, X_n)]$$
  for a given $H : E^{n+1} \to \mathbb{R}$.
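
A crude Monte Carlo baseline for this framework simply averages $H$ over independent simulated paths. A minimal Python sketch, assuming user-supplied samplers `sample_mu0` (draws $X_0 \sim \mu_0$) and `sample_Q` (draws $X_k \sim Q_k(x_{k-1}, \cdot)$) and a payoff function `H`; these names are illustrative, not from the slides:

    import numpy as np

    def crude_mc(sample_mu0, sample_Q, H, n, M, rng):
        """Estimate mu_{0:n}(H) = E[H(X_0, ..., X_n)] over M independent paths."""
        total = 0.0
        for _ in range(M):
            x = sample_mu0(rng)              # X_0 ~ mu_0
            path = [x]
            for k in range(1, n + 1):
                x = sample_Q(k, x, rng)      # X_k ~ Q_k(X_{k-1}, .)
                path.append(x)
            total += H(path)
        return total / M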

1. Motivations: Option pricing
Example of application: pricing a European call

◮ We consider a price process $(S_k)_{k \ge 0}$ modeled by $S_k = V_k(X_k) = S_0 \exp(M_k + \sigma X_k)$, where $(X_k)_{k \ge 0}$ is a Markov chain and $(M_k)_{k \ge 0}$ is chosen so that $S$ is a martingale.
  • Black-Scholes model (no spikes): $X_k = W_{t_k}$, where $W$ is the standard Brownian motion.
  • NIG Lévy model (spikes): $X_k = L_{t_k}$, where $L$ is the Normal Inverse Gaussian Lévy process.
  [Figure: simulated price paths under the two models.]
◮ The price of the European call with maturity $t_n$ and strike $K$ is given by
  $$E[H(X_{0:n})] = E[(V_n(X_n) - K)_+].$$
◮ Crude Monte Carlo is inefficient when $K \gg S_0$ ⇒ variance reduction is needed.
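
To make the inefficiency concrete, here is a self-contained sketch of crude Monte Carlo pricing in the Black-Scholes case (zero interest rate assumed for simplicity; all parameter values are illustrative, not from the slides):

    import numpy as np

    def bs_call_crude_mc(S0, K, sigma, T, n, M, seed=0):
        """Crude Monte Carlo price of a European call with S_k = S0*exp(M_k + sigma*X_k),
        X_k = W_{t_k} and M_k = -sigma^2 t_k / 2, so that S is a martingale."""
        rng = np.random.default_rng(seed)
        dt = T / n
        W_T = rng.normal(0.0, np.sqrt(dt), size=(M, n)).sum(axis=1)   # W_{t_n}
        S_T = S0 * np.exp(-0.5 * sigma**2 * T + sigma * W_T)
        payoff = np.maximum(S_T - K, 0.0)
        return payoff.mean(), payoff.std(ddof=1) / np.sqrt(M)

    # With K far above S0 most simulated payoffs are zero, so the relative
    # standard error of the estimate blows up:
    # price, stderr = bs_call_crude_mc(S0=50.0, K=80.0, sigma=0.2, T=1.0, n=250, M=10_000)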

2. Importance Sampling for variance reduction

◮ Change of measure $\mu \to \nu$:
  $$\mu(H) = E[H(X)] = E\!\left[H(\tilde X)\,\frac{d\mu}{d\nu}(\tilde X)\right], \qquad \text{where } X \sim \mu \text{ and } \tilde X \sim \nu.$$
◮ Monte Carlo approximation:
  $$E[H(X)] \approx \frac{1}{M}\sum_{i=1}^{M} H(\tilde X^i)\,\frac{d\mu}{d\nu}(\tilde X^i), \qquad \text{where } (\tilde X^1, \dots, \tilde X^M) \text{ i.i.d.} \sim \nu.$$
◮ The optimal change of measure $\mu \to \nu^*$ achieves zero variance if $H \ge 0$:
  $$\nu^* = \frac{H\,\mu}{\mu(H)} \;\overset{\text{def}}{=}\; H \cdot \mu.$$
◮ $\nu^*$ depends on $\mu(H)$ ⇒ how to approximate $\nu^*$?
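
A generic importance-sampling estimator in Python, assuming the densities of $\mu$ and $\nu$ are available as vectorized functions; the names `pdf_mu`, `pdf_nu`, `sample_nu` are illustrative:

    import numpy as np

    def importance_sampling(H, pdf_mu, pdf_nu, sample_nu, M, rng):
        """Estimate E_mu[H(X)] by drawing from nu and weighting with dmu/dnu."""
        X = sample_nu(M, rng)              # M i.i.d. draws from nu
        w = pdf_mu(X) / pdf_nu(X)          # likelihood ratios dmu/dnu
        vals = H(X) * w
        return vals.mean(), vals.std(ddof=1) / np.sqrt(M)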

2. Importance Sampling for variance reduction
Progressive correction

◮ We introduce functions $H_k : E^{k+1} \to \mathbb{R}$ for all $0 \le k \le n$:
  $$H_0(x_0) = 1, \qquad H_n(x_{0:n}) = H(x_{0:n}), \qquad \text{for all } x_{0:n} \in E^{n+1}.$$
◮ We introduce potential functions $G_k : E^{k+1} \to \mathbb{R}$:
  $$G_0(x_0) = 1, \qquad G_k(x_{0:k}) = \frac{H_k(x_{0:k})}{H_{k-1}(x_{0:k-1})}, \qquad \text{for all } x_{0:k} \in E^{k+1}.$$
◮ We introduce the sequence of measures $(\nu_{0:k})_{0 \le k \le n}$ on $(E^{k+1})_{0 \le k \le n}$:
  $$\nu_{0:k} = G_{0:k} \cdot \mu_{0:k} = \frac{G_{0:k}\,\mu_{0:k}}{\mu_{0:k}(G_{0:k})}, \qquad \text{where } G_{0:k} = \prod_{p=0}^{k} G_p.$$
◮ $G_{0:n} = H$ ⇒ $\nu_{0:n}$ is the optimal importance distribution for $\mu_{0:n}(H)$:
  $$\nu_{0:n} = H \cdot \mu_{0:n} = \nu^*_{0:n}.$$
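
The potentials follow directly from the intermediate functions by telescoping. A tiny Python sketch (the list `H_funcs` of callables $H_0, \dots, H_n$ acting on path prefixes is a hypothetical interface):

    def potential(k, x_path, H_funcs):
        """G_k(x_{0:k}) = H_k(x_{0:k}) / H_{k-1}(x_{0:k-1}), with G_0 = 1."""
        if k == 0:
            return 1.0
        return H_funcs[k](x_path[: k + 1]) / H_funcs[k - 1](x_path[:k])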

2. Importance Sampling for variance reduction
Evolution of $(\nu_{0:k})_{0 \le k \le n}$

◮ Evolution of $(\nu_{0:k})_{0 \le k \le n}$:
  $$\nu_{0:k-1} \;\xrightarrow{\text{(1) Mutation}}\; \eta_{0:k} = \nu_{0:k-1} \times Q_k \;\xrightarrow{\text{(2) Correction}}\; \nu_{0:k} = G_k \cdot \eta_{0:k}.$$
◮ [Del Moral & Garnier 2005] consider the sequence of measures $(\gamma_{0:k})_{0 \le k \le n}$ on $(E^{k+1})_{0 \le k \le n}$ such that, for every test function $\phi$ on $E^{k+1}$,
  $$\gamma_{0:k}(\phi) = E\!\left[\prod_{p=0}^{k} G_p(X_{0:p})\,\phi(X_{0:k})\right] \;\Longrightarrow\; \gamma_{0:n}(1) = \mu_{0:n}(H) = E[H(X_{0:n})].$$
◮ Link between $\gamma_{0:n}(1)$ and $(\eta_{0:k})_{0 \le k \le n}$:
  $$\gamma_{0:n}(1) = \prod_{k=0}^{n} \eta_{0:k}(G_k).$$
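
This product formula is what the particle algorithm estimates in practice: the price is recovered as the running product of the empirical means of the potentials. A one-function sketch, assuming `Gk_values[k]` holds the $N$ values $G_k(X^i_{0:k})$ produced at step $k$ (hypothetical data layout):

    import numpy as np

    def normalizing_constant(Gk_values):
        """gamma_{0:n}(1) ~ prod_k (1/N) sum_i G_k(X^i_{0:k})."""
        return float(np.prod([np.mean(g) for g in Gk_values]))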

3. Particle methods to approximate the optimal importance law
Approximation of $\nu_{0:n}$ by particle methods

◮ The idea is to replace $\eta_{0:k} = \nu_{0:k-1} \times Q_k$ by its empirical measure
  $$\eta^N_{0:k} = S^N(\nu_{0:k-1} \times Q_k) = \frac{1}{N}\sum_{i=1}^{N} \delta_{X^i_{0:k}}, \qquad \text{where } (X^1_{0:k}, \dots, X^N_{0:k}) \text{ are i.i.d.} \sim \nu_{0:k-1} \times Q_k.$$
◮ Particle approximation $(\nu^N_{0:k})_{0 \le k \le n}$:
  $$\nu^N_{0:k-1} \;\xrightarrow{\text{(1) Selection and mutation}}\; \eta^N_{0:k} = S^N(\nu^N_{0:k-1} \times Q_k) \;\xrightarrow{\text{(2) Correction}}\; \nu^N_{0:k} = G_k \cdot \eta^N_{0:k}.$$
◮ Particle approximation $(\gamma^N_{0:k})_{0 \le k \le n}$:
  $$\gamma^N_{0:k} = G_k\,\eta^N_{0:k}\,\gamma^N_{0:k-1}(1), \qquad \text{hence} \qquad \gamma^N_{0:n}(1) = \prod_{k=0}^{n} \eta^N_{0:k}(G_k).$$

3. Particle methods to approximate the optimal importance law
Algorithm

◮ Initialization: set $\nu^N_0 = \nu_0 = \mu_0$.
◮ Selection: generate independently
  $$(\tilde X^1_{0:k}, \dots, \tilde X^N_{0:k}) \text{ i.i.d.} \sim \nu^N_{0:k} = \sum_{i=1}^{N} \omega^i_k\,\delta_{X^i_{0:k}}.$$
◮ Mutation: for each $i \in \{1, \dots, N\}$, generate independently $X^i_{k+1} \sim Q_{k+1}(\tilde X^i_k, \cdot)$, then set
  $$\eta^N_{0:k+1} = \frac{1}{N}\sum_{i=1}^{N} \delta_{X^i_{0:k+1}}.$$
◮ Weighting: for each particle $i \in \{1, \dots, N\}$, compute
  $$\omega^i_{k+1} = \frac{G_{k+1}(X^i_{0:k+1})}{\sum_{j=1}^{N} G_{k+1}(X^j_{0:k+1})}, \qquad \text{then set } \nu^N_{0:k+1} = \sum_{i=1}^{N} \omega^i_{k+1}\,\delta_{X^i_{0:k+1}}.$$
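
A compact Python sketch of this selection/mutation/weighting loop for a one-dimensional state. The samplers `sample_mu0` and `sample_Q` and the potential `G` are user-supplied callables (an illustrative interface, not from the slides):

    import numpy as np

    def particle_is(sample_mu0, sample_Q, G, n, N, rng):
        """Return the final weighted paths (approximating nu*_{0:n}) and the
        particle estimate of gamma_{0:n}(1) = E[H(X_{0:n})]."""
        paths = np.array([[sample_mu0(rng)] for _ in range(N)])   # N paths of length 1
        weights = np.full(N, 1.0 / N)                             # nu^N_0 = mu_0
        gamma = 1.0
        for k in range(n):
            # Selection: resample N paths from the weighted measure nu^N_{0:k}
            idx = rng.choice(N, size=N, p=weights)
            selected = paths[idx]
            # Mutation: extend each selected path with X^i_{k+1} ~ Q_{k+1}(x^i_k, .)
            new_states = np.array([sample_Q(k + 1, x[-1], rng) for x in selected])
            paths = np.column_stack([selected, new_states])
            # Weighting: normalize the potentials G_{k+1}(X^i_{0:k+1})
            g = np.array([G(k + 1, p) for p in paths])
            gamma *= g.mean()                                     # eta^N_{0:k+1}(G_{k+1})
            weights = g / g.sum()
        return paths, weights, gamma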

3. Particle methods to approximate the optimal importance law
Density estimation

◮ At the end of the algorithm we get $\nu^N_{0:n} \approx \nu^*_{0:n}$, but importance sampling requires a smooth approximation of $\nu^*$.
◮ Kernel $K$ of order 2:
  $$K \ge 0, \qquad \int K = 1, \qquad \int x_i\,K = 0, \qquad \int |x_i x_j|\,K < \infty.$$
◮ Rescaled kernel: $K_h(x) = \frac{1}{h^d}\,K\!\left(\frac{x}{h}\right)$.
◮ Density estimation:
  $$\nu^N = \sum_i \omega^i\,\delta_{\xi^i} \;\xrightarrow{\;K_h * \cdot\;}\; \nu^{N,h} = \sum_i \omega^i\,K_h(\cdot - \xi^i).$$
◮ Optimal choice of $h$ ⇒
  $$E\,\big\|\nu^{N,h}_{0:n} - \nu^*_{0:n}\big\|_{L^1} \le \frac{C}{N^{\frac{2}{d+4}}}.$$
  [Figure: weighted sample, kernels placed at the particles, and the resulting density estimate.]
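
A sketch of this smoothing step with a Gaussian kernel (an order-2 kernel), for scalar particles. The bandwidth rule below is a standard Silverman-type choice used purely for illustration; the slides only state that $h$ is chosen optimally:

    import numpy as np

    def smooth_and_sample(xi, omega, M, rng, h=None):
        """Draw M samples from nu^{N,h} = sum_i omega_i K_h(. - xi_i) (Gaussian K)
        and return them together with a callable evaluating its density."""
        xi = np.asarray(xi, dtype=float)
        omega = np.asarray(omega, dtype=float)       # assumed normalized weights
        if h is None:
            h = 1.06 * xi.std() * len(xi) ** (-0.2)  # illustrative bandwidth
        idx = rng.choice(len(xi), size=M, p=omega)   # pick mixture components
        samples = xi[idx] + h * rng.standard_normal(M)

        def density(x):
            x = np.atleast_1d(x)[:, None]
            kern = np.exp(-0.5 * ((x - xi[None, :]) / h) ** 2) / (h * np.sqrt(2 * np.pi))
            return (kern * omega[None, :]).sum(axis=1)

        return samples, density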

3. Particle methods to approximate the optimal importance law
Adaptive choice of the sequence $(H_k)_{0 \le k \le n}$
[Cérou & al. 2006] [Homem-de-Mello & Rubinstein 2002] [Musso & al. 2001]

◮ In the case of European call pricing, $H(x_{0:n}) = (V_n(x_n) - K)_+$, and we take
  $$H_n(x_{0:n}) = (V_n(x_n) - K)_+, \qquad H_k(x_{0:k}) = \max\big(V_k(x_k) - K_k,\ \varepsilon\big) \quad \text{for all } 1 \le k \le n-1,$$
  where
  • $\varepsilon > 0$ ensures the positivity of $H_k$ for $1 \le k \le n-1$;
  • $K_k$ is a random variable depending on $\big(V^1_k = V_k(X^1_{0:k}), \dots, V^N_k = V_k(X^N_{0:k})\big)$ and on a parameter $\rho \in (0, 1)$: $K_k = V^{([\rho N])}_k$, where $V^{(1)}_k \le \dots \le V^{(N)}_k$ are the order statistics.
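
The adaptive threshold $K_k$ is simply an empirical quantile of the current particle values. A short sketch of this choice (function name and interface are hypothetical):

    import numpy as np

    def adaptive_Hk(V_particles, rho, eps):
        """Build H_k(v) = max(v - K_k, eps), where K_k is the [rho*N]-th order
        statistic of the current particle values V_k(X^i_{0:k})."""
        V_sorted = np.sort(np.asarray(V_particles, dtype=float))
        K_k = V_sorted[int(rho * len(V_sorted))]     # empirical rho-quantile
        return lambda v: max(v - K_k, eps)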

3. Particle methods to approximate the optimal importance law
Variance of the estimator

◮ Importance sampling:
  $$\mu(H) = E[H(X)] \approx IS_{M,N} = \frac{1}{M}\sum_{i=1}^{M} H(\tilde X^i)\,\frac{d\mu_{0:n}}{d\nu^{N,h}_{0:n}}(\tilde X^i), \qquad \text{where } (\tilde X^1, \dots, \tilde X^M) \text{ i.i.d.} \sim \nu^{N,h}_{0:n}.$$
◮ Variance bound:
  $$\mathrm{Var}(IS_{M,N}) \le \frac{\mu(H)^2}{M}\,E\!\left[\left\|\frac{d\nu^*}{d\nu^{N,h}}\right\|_{\infty} \big\|\nu^* - \nu^{N,h}\big\|_{L^1}\right] \le \frac{\mu(H)^2\,C'}{M\,N^{\frac{2}{d+4}}}.$$
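
In the simulation study below, performance is reported as a variance ratio against crude Monte Carlo. A minimal helper to compute it from the per-sample terms of the two estimators (the array names are hypothetical):

    import numpy as np

    def variance_ratio(crude_terms, is_terms):
        """Ratio of the empirical variances of the crude MC terms and the
        importance-sampling terms; at equal sample size M this is also the
        ratio of the two estimators' variances."""
        return np.var(crude_terms, ddof=1) / np.var(is_terms, ddof=1)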

4. Simulation results
Some simulation results

◮ Pricing of a European call (Black-Scholes model): maturity 1 year; volatility 20%/year.
◮ Parameters: $N = 200$, $M = 10N$, $\rho = 20\%$.

  K    Variance ratio
  60   45
  65   194
  70   690
  75   4862
  80   16190

References

◮ [Del Moral & Garnier 05] Del Moral, P. and Garnier, J., Genealogical particle analysis of rare events, Annals of Applied Probability, 2005.
◮ [Musso & al. 01] Musso, C., Oudjane, N. and Le Gland, F., Improving regularized particle filters, in Sequential Monte Carlo Methods in Practice, A. Doucet, N. de Freitas and N. Gordon (editors), Statistics for Engineering and Information Science, 2001.
◮ [Cérou & al. 06] Cérou, F., Del Moral, P., Le Gland, F., Guyader, A., Lezaud, P. and Topart, H., Some recent improvements to importance splitting, Proceedings of the 6th International Workshop on Rare Event Simulation, Bamberg, October 9-10, 2006.
◮ [Homem-de-Mello & Rubinstein 02] Homem-de-Mello, T. and Rubinstein, R.Y., Estimation of rare event probabilities using cross-entropy, Proceedings of the Winter Simulation Conference, 2002.
