Diffusion approximation of Lévy processes
Jonas Kiessling
March 15, 2010
Joint work with R. Tempone.
◮ We want to calculate E[g(X_T)] using Monte Carlo, when X_t is some infinite activity Lévy process.
◮ Problem: We can only simulate an approximate finite activity process X̄_t.
Questions:
◮ How do we choose X̄_t?
◮ What is the model error E = E[g(X_T)] − E[g(X̄_T)]?
Outline
1. Motivation
2. Classical results
3. Problems with classical results
4. New results – problems resolved
5. Adaptive schemes
Motivation
Infinite activity Lévy processes are becoming increasingly popular in option pricing. They have many desirable properties, such as heavy tails, discontinuous trajectories and a good ability to reproduce observed option prices.
Setup
◮ In this talk we assume, for notational ease, that all processes are one-dimensional. All results extend to higher dimensions.
Recall
◮ Associated to a Lévy process X_t is a jump measure ν.
◮ The quantity ν(A), A ⊂ R, is the expected number of jumps per unit time with size in A.
◮ If ν(R) < ∞ then X_t is said to have finite activity.
Recall
A finite activity Lévy process X_t is a compound Poisson process with added diffusion:
$$X_t = \gamma t + \sigma W_t + \sum_{i=1}^{N_t} J_i,$$
where
◮ γ is the drift and W_t standard Brownian motion,
◮ the J_i are i.i.d. with law ν(dx)/ν(R),
◮ N_t is Poisson with parameter t ν(R),
◮ ν(R) is the jump intensity.
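A minimal Python sketch of this simulation (the parameter values and the exponential jump law in the usage example are placeholders chosen for illustration, not taken from the slides):

```python
import numpy as np

def simulate_X_T(T, gamma, sigma, nu_R, sample_jump, rng):
    """Sample X_T = gamma*T + sigma*W_T + sum_{i=1}^{N_T} J_i,
    with N_T ~ Poisson(T * nu(R)) and J_i drawn from nu(dx)/nu(R)."""
    W_T = rng.normal(0.0, np.sqrt(T))           # Brownian increment over [0, T]
    N_T = rng.poisson(T * nu_R)                 # number of jumps in [0, T]
    jumps = sum(sample_jump(rng) for _ in range(N_T))
    return gamma * T + sigma * W_T + jumps

# Usage with made-up parameters: jump intensity 5, exponential jump sizes.
rng = np.random.default_rng(0)
samples = [simulate_X_T(1.0, 0.0, 0.2, 5.0,
                        lambda r: r.exponential(0.1), rng) for _ in range(10**5)]
print(np.mean(samples))
```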
Definition
The work of simulating X_t is the expected number of jumps:
$$\mathrm{Work}(X_t) = E[N_t] = t\,\nu(\mathbb{R}).$$
If X_t is an infinite activity Lévy process, ν(R) = ∞, then for every ε > 0,
$$X_t = X^\epsilon_t + R^\epsilon_t,$$
where
◮ X^ε_t has finite activity with jump measure ν_ε = 1_{|x|>ε} ν:
$$X^\epsilon_t = \gamma_\epsilon t + \sigma W_t + \sum_{i=1}^{N_t} J_i\, 1_{|J_i|>\epsilon},$$
◮ R^ε_t is a pure jump process with jump measure 1_{|x|<ε} ν and E[R^ε_t] = 0.
First approximation
Fix an infinite activity Lévy process X_t and some ε > 0. Approximate
$$X_t \approx X^\epsilon_t, \qquad \text{i.e.} \qquad R^\epsilon_t \approx 0.$$
Note that Work(X^ε_t) = t ν(|x| > ε).
Theorem (Jensen's inequality)
If |g′(x)| ≤ C then
$$E = \big| E[g(X_T)] - E[g(X^\epsilon_T)] \big| \le C\,\sigma(\epsilon)\sqrt{T},$$
where
$$\sigma^2(\epsilon) = \int_{|x|<\epsilon} x^2\,\nu(dx) = \mathrm{Var}(R^\epsilon_T)/T.$$
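For completeness, a sketch of the standard argument behind this bound (not spelled out on the slide): since E[R^ε_T] = 0 and Var(R^ε_T) = T σ²(ε), the Lipschitz bound on g and Jensen's inequality give

$$\big| E[g(X_T)] - E[g(X^\epsilon_T)] \big| \le C\, E\big[|R^\epsilon_T|\big] \le C \sqrt{E\big[(R^\epsilon_T)^2\big]} = C\,\sigma(\epsilon)\sqrt{T}.$$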
Second approximation [Asmussen & Rosinski '01]
If there are "enough" jumps, then R^ε_t / σ(ε) → W_t in distribution as ε → 0.
Definition
For some fixed ε > 0 we define
$$\bar X_t = X^\epsilon_t + \sigma(\epsilon) W_t,$$
that is, we approximate R^ε_t ≈ σ(ε) W_t.
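A minimal sketch of sampling X̄_T, assuming the user supplies the truncated drift γ_ε, the intensity ν(|x| > ε), a sampler for the jumps with |J| > ε, and σ(ε) (the function and argument names are mine, not from the slides):

```python
import numpy as np

def simulate_Xbar_T(T, gamma_eps, sigma, nu_eps, sample_big_jump, sigma_eps, rng):
    """Sample Xbar_T = X^eps_T + sigma(eps) * W'_T: the jumps smaller than eps
    are removed and replaced by an independent Gaussian with variance
    sigma(eps)^2 * T (the Asmussen-Rosinski diffusion correction)."""
    W_T = rng.normal(0.0, np.sqrt(T))              # Brownian part of X^eps_T
    N_T = rng.poisson(T * nu_eps)                  # jumps with |J| > eps
    X_eps_T = gamma_eps * T + sigma * W_T + sum(sample_big_jump(rng) for _ in range(N_T))
    return X_eps_T + sigma_eps * rng.normal(0.0, np.sqrt(T))   # + sigma(eps) * W'_T
```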
Theorem (Berry–Esseen type result)
If |g′(x)| ≤ C then
$$E = \big| E[g(X_T)] - E[g(\bar X_T)] \big| \le 16.5\, C \int_{|x|<\epsilon} |x|^3\,\nu(dx)\,/\,\sigma^2(\epsilon).$$
Problems with classical results
◮ Many contracts have payoffs with unbounded derivative, e.g. digital options:
$$g(x) = \begin{cases} 1 & \text{if } x > 0, \\ 0 & \text{if } x < 0. \end{cases}$$
◮ These error estimates are independent of the initial value of X_t. It is reasonable to assume that an option far in the money is less sensitive to approximations than an option at the money.
Results
Let X_t be a Lévy process for which there is a β ∈ (0, 2) such that
$$\int_{|x|<\epsilon} x^2\,\nu(dx) = O(\epsilon^\beta) \quad \text{as } \epsilon \to 0.$$
Then:
Theorem (K. & Tempone)
The model error can be expressed as
$$E = E[g(X_T)] - E[g(\bar X_T)] = \frac{T}{6} \int_{|x|<\epsilon} x^3\,\nu(dx)\; E\big[g^{(3)}(\bar X_T)\big] + O(\epsilon^{2+\epsilon}).$$
Example
Suppose that X_t is a pure jump process with E[X_t] = 0 and jump measure
$$\nu(dx) = \frac{1}{x^2}\, 1_{0<x<1}\, dx.$$
Suppose further that the payoff g(x) is given by
$$g(x) = \begin{cases} 1 & \text{if } x > 0, \\ 0 & \text{if } x < 0. \end{cases}$$
From the above theorem:
$$E \approx \frac{T}{12}\,\epsilon^2\, E\big[\delta''(\bar X_T)\big].$$
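The constant T/12 follows from plugging the example measure into the theorem (a one-line computation, added here for readability):

$$\frac{T}{6} \int_{|x|<\epsilon} x^3\,\nu(dx) = \frac{T}{6} \int_0^{\epsilon} x^3\,\frac{dx}{x^2} = \frac{T}{6}\cdot\frac{\epsilon^2}{2} = \frac{T\epsilon^2}{12},$$

together with g^{(3)} = δ'' for the digital payoff.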
◮ To first order, E[δ''(X̄_T)] is independent of the choice of ε.
◮ To estimate E[δ''(X̄_T)] we let ε = 1, i.e. all jumps are replaced by diffusion.
◮ δ''(x) is approximated with a difference quotient (sketched below).
◮ Note that the work of simulating X̄_T equals
$$\mathrm{Work}(\bar X_T) = T\Big(\frac{1}{\epsilon} - 1\Big).$$
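One way to carry out this estimate in practice — a sketch under the stated assumptions, not necessarily the authors' exact implementation: with ε = 1 and the example measure, σ²(1) = ∫₀¹ x²·x⁻² dx = 1 and the process has mean zero, so X̄_T = σ(1)W_T ~ N(0, T) in this example; δ'' is then approximated by a central second difference of bin probabilities (the step h and bin width w are tuning parameters I chose for illustration).

```python
import numpy as np

def estimate_E_delta_pp(T, n_samples=10**7, h=0.05, w=0.05, seed=0):
    """Monte Carlo estimate of E[delta''(Xbar_T)] = f''(0), f the density of
    Xbar_T.  With eps = 1 all jumps of the example process are replaced by
    diffusion, so here Xbar_T ~ N(0, T).  delta'' is approximated by a
    central second difference of histogram-bin probabilities."""
    rng = np.random.default_rng(seed)
    X = rng.normal(0.0, np.sqrt(T), size=n_samples)

    def bin_prob(center):
        # P(Xbar_T in (center - w/2, center + w/2)), estimated from the samples
        return np.mean(np.abs(X - center) < w / 2)

    # f''(0) ~ [f(h) - 2 f(0) + f(-h)] / h^2, with f(y) ~ bin_prob(y) / w
    return (bin_prob(h) - 2.0 * bin_prob(0.0) + bin_prob(-h)) / (w * h**2)

# Sanity check against the exact Gaussian value f''(0) = -1 / (T * sqrt(2*pi*T)):
T = 1.0
print(estimate_E_delta_pp(T), -1.0 / (T * np.sqrt(2.0 * np.pi * T)))
```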
Estimated error vs. true error
[Figure: log–log plot of the error as a function of ε.] Here the leading order error term is compared with the true error, estimated with Monte Carlo and a small value of ε. The true error is displayed with a dashed line, the solid line represents the error estimated from the leading term, and the dotted lines represent bounds of the statistical error corresponding to one standard deviation.
More results
We can also derive error estimates for
◮ Barrier options.
◮ Adaptive schemes.
The goal of an adaptive scheme is to achieve the same level of accuracy with less work.
A simple adaptive scheme
◮ Recall that the model error is proportional to E[g^{(3)}(X̄_T)].
◮ Fix a critical region L ⊂ R.
◮ Fix ε₁ > ε₂ > 0.
◮ Define the adaptive approximation X̄^{(a)}_T of X_T by (a sampling sketch follows below):
$$\bar X^{(a)}_T = \begin{cases} X^{\epsilon_1}_T + \sigma(\epsilon_1) W_T & \text{if } X^{\epsilon_1}_T \notin L, \\ X^{\epsilon_2}_T + \sigma(\epsilon_2) W_T & \text{if } X^{\epsilon_1}_T \in L. \end{cases}$$
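A minimal sketch of how one might sample X̄^{(a)}_T (all inputs — drifts, intensities, jump samplers and the σ(ε) values — are assumed to be supplied; the names are mine, not from the slides):

```python
import numpy as np

def simulate_adaptive_Xbar(T, in_L, coarse, refine, rng):
    """Sample the adaptive approximation Xbar^(a)_T.

    coarse = (gamma1, sigma, nu1, sample_jump1, sigma_eps1): drift, Gaussian
        coefficient, intensity and sampler of the jumps with |J| > eps1, and
        sigma(eps1).
    refine = (dgamma, nu12, sample_jump12, sigma_eps2): drift correction
        gamma_{eps2} - gamma_{eps1}, intensity and sampler of the jumps with
        eps2 < |J| < eps1, and sigma(eps2).
    in_L(x): indicator function of the critical region L."""
    gamma1, sigma, nu1, sample_jump1, sigma_eps1 = coarse
    dgamma, nu12, sample_jump12, sigma_eps2 = refine

    # Coarse approximation X^{eps1}_T: only the jumps larger than eps1.
    W_T = rng.normal(0.0, np.sqrt(T))
    N1 = rng.poisson(T * nu1)
    X_eps1 = gamma1 * T + sigma * W_T + sum(sample_jump1(rng) for _ in range(N1))

    if not in_L(X_eps1):
        # Outside the critical region: keep the coarse cutoff eps1.
        return X_eps1 + sigma_eps1 * rng.normal(0.0, np.sqrt(T))

    # Inside L: refine to cutoff eps2 by adding the jumps in (eps2, eps1)
    # and the corresponding drift correction, then use sigma(eps2).
    N12 = rng.poisson(T * nu12)
    X_eps2 = X_eps1 + dgamma * T + sum(sample_jump12(rng) for _ in range(N12))
    return X_eps2 + sigma_eps2 * rng.normal(0.0, np.sqrt(T))
```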
Error estimates & work
Theorem (K. & Tempone)
The model error is
$$E = E[g(X_T)] - E\big[g(\bar X^{(a)}_T)\big] = \frac{T}{6}\Big( \int_{|x|<\epsilon_1} x^3\,\nu(dx)\; E\big[ 1_{X^{\epsilon_1}_T \notin L}\, g^{(3)}(\bar X_T) \big] + \int_{|x|<\epsilon_2} x^3\,\nu(dx)\; E\big[ 1_{X^{\epsilon_1}_T \in L}\, g^{(3)}(\bar X_T) \big] \Big).$$
The work of simulating the adaptive approximation becomes
$$\mathrm{Work}\big(\bar X^{(a)}_T\big) = T\Big( \nu(|x|>\epsilon_1) + P\big(X^{\epsilon_1}_T \in L\big)\,\nu(\epsilon_2 < |x| < \epsilon_1) \Big).$$
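For instance, for the example measure ν(dx) = x⁻² 1_{0<x<1} dx from before, this reads (my computation, using ν(|x| > ε) = 1/ε − 1):

$$\mathrm{Work}\big(\bar X^{(a)}_T\big) = T\Big[\Big(\frac{1}{\epsilon_1} - 1\Big) + P\big(X^{\epsilon_1}_T \in L\big)\Big(\frac{1}{\epsilon_2} - \frac{1}{\epsilon_1}\Big)\Big].$$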
Adaptive vs. standard approximation, an example
◮ Assume the same setup as before, i.e. a pure jump process X_t with jump measure ν(dx) = x⁻² 1_{0<x<1} dx, and let the contract be a digital option.
◮ We compare a particular choice of the adaptive approximation with the non-adaptive approximation by comparing, for each tolerance TOL, the work.
Work comparison adaptive vs. non-adaptive
[Figure: work (expected number of jumps per path) as a function of the tolerance TOL (scale ×10⁻³), for the adaptive and the non-adaptive approximation.]