La Londe, 11 September 2007

How to find semimartingale decompositions relative to enlarged filtrations
Introduction 1

Let $S$ be a semimartingale relative to $(\mathcal F_t)$, and let $\mathcal G_t \supset \mathcal F_t$ be an enlargement.

Questions:
• Is $S$ also a $(\mathcal G_t)$-semimartingale?
• If yes, what do the new semimartingale decompositions look like?
• If the integral $(H \cdot_{\mathcal F} S)$ is defined, is $(H \cdot_{\mathcal G} S)$ defined as well?
Introduction 2

Application to mathematical finance: financial markets with insiders.

Intrinsic perspective of the price for an insider or an ordinary investor:
$$ S_t = M_t + \int_0^t \alpha_s \, d\langle M, M\rangle_s $$

• optimal investment strategy
$$ \theta^*_t = x\,\alpha_t\,\mathcal E(\alpha \cdot M)_t $$
• maximal expected logarithmic utility
$$ u(x) = \log(x) + \frac12\, E \int_0^T \alpha_s^2 \, d\langle M, M\rangle_s $$

$\Rightarrow$ investment and utility depend on the semimartingale decomposition.
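A minimal plug-in (my addition, not on the slide): for a constant $\alpha$ and $M$ a Brownian motion, so that $d\langle M, M\rangle_s = ds$, the two formulas above reduce to
$$ \theta^*_t = x\,\alpha\,\mathcal E(\alpha M)_t = x\,\alpha\,\exp\!\Big(\alpha M_t - \tfrac{\alpha^2}{2}\,t\Big), \qquad u(x) = \log x + \tfrac{\alpha^2 T}{2}, $$
which makes the dependence on the drift $\alpha$ in the decomposition explicit.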
Introduction 3

Semimartingale decompositions: let $M$ be an $(\mathcal F_t)$-martingale. How to find a $(\mathcal G_t)$-decomposition $M = N + A$?

Definition 1. A $(\mathcal G_t)$-predictable process $\mu$ such that
$$ M - \int_0^{\cdot} \mu_t \, d\langle M, M\rangle_t \quad \text{is a } (\mathcal G_t)\text{-local martingale} $$
is called the information drift of $(\mathcal G_t)$ with respect to $M$.
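A minimal illustration (my addition, not on the slide): if $M = W$ is an $(\mathcal F_t)$-Brownian motion admitting an information drift $\mu$, the $(\mathcal G_t)$-decomposition is explicit, and Lévy's characterisation upgrades the local-martingale part to a $(\mathcal G_t)$-Brownian motion, since the quadratic variation is unchanged under enlargement:
$$ W_t = B_t + \int_0^t \mu_s\, ds, \qquad B_t := W_t - \int_0^t \mu_s\, ds, \qquad \langle B, B\rangle_t = \langle W, W\rangle_t = t. $$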
Initial enlargements 1

Initial enlargements: $\mathcal G_t = \mathcal F_t \vee \sigma(L)$ ($L$ a random variable); probability conditioned on the new information: $P(\cdot\,|\,L)$.

1. Observation: the enlargement $(\mathcal F_t) \to (\mathcal G_t)$ corresponds to the random change of probability $P \to P(\cdot\,|\,L)$.

If there are no singularities, i.e. $P(\cdot\,|\,L) \ll P$ on $\mathcal F_t$ for all $t$ (Jacod's condition), then
$$ M \text{ martingale relative to } (\mathcal F_t) \;\Longrightarrow\; M \text{ semimartingale relative to } P(\cdot\,|\,L). $$
Initial enlargements 2

2. Observation: Girsanov's theorem $\Rightarrow$ semimartingale decompositions.

For all $x$ let
$$ p^x_t(\omega) := \frac{P(d\omega\,|\,L = x)}{P(d\omega)}\bigg|_{\mathcal F_t} $$
be the conditional density. Then:
$$ M \ \ (\mathcal F_t, P)\text{-martingale} $$
$$ \Longrightarrow\; M - \tfrac{1}{p^x}\cdot\langle M, p^x\rangle \ \ (\mathcal F_t, P(\cdot\,|\,L = x))\text{-martingale} $$
$$ \Longrightarrow\; M - \tfrac{1}{p^x}\cdot\langle M, p^x\rangle \ \ (\mathcal G_t, P(\cdot\,|\,L = x))\text{-martingale} $$
$$ \Longrightarrow\; M - \tfrac{1}{p^L}\cdot\langle M, p^L\rangle \ \ (\mathcal G_t, P)\text{-martingale} $$
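Why the first implication holds (a standard Girsanov-type verification added here for completeness; $M$ and $p^x$ continuous for simplicity): write $Z = p^x$ and $\widetilde M = M - \int \frac{1}{Z}\, d\langle M, Z\rangle$. Integration by parts gives
$$ d(Z\widetilde M)_t = Z_t\, d\widetilde M_t + \widetilde M_t\, dZ_t + d\langle Z, \widetilde M\rangle_t = Z_t\, dM_t - d\langle M, Z\rangle_t + \widetilde M_t\, dZ_t + d\langle Z, M\rangle_t = Z_t\, dM_t + \widetilde M_t\, dZ_t, $$
so $Z\widetilde M$ is a $P$-local martingale, which is exactly the statement that $\widetilde M$ is a local martingale under $P(\cdot\,|\,L = x)$.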
Initial enlargements 3

$M - \tfrac{1}{p^L}\cdot\langle M, p^L\rangle$ is a $(\mathcal G_t, P)$-martingale.

Theorem 1. If
$$ p^x_t = p^x_0 + \int_0^t \alpha^x_s \, dM_s + \text{orthogonal martingale}, $$
then
$$ M_t - \int_0^t \frac{\alpha^{L(\omega)}_s}{p^{L(\omega)}_s} \, d\langle M, M\rangle_s \quad \text{is a } (\mathcal G_t)\text{-local martingale}. $$

Remarks:
1) information drift $= \alpha^{L(\omega)}_s / p^{L(\omega)}_s =$ variational derivative of the logarithm of $p^L$
2) All we need is: $\alpha^x_s\, P^L(dx) \ll p^x_s\, P^L(dx) = P(L \in dx\,|\,\mathcal F_s)$.
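The classical sanity check for Theorem 1 (a standard example, added here; it is not on the slide): on Wiener space take $M = W$, $L = W_1$ and horizon $T = 1$. The conditional density and its martingale representation are
$$ p^x_t = \frac{1}{\sqrt{2\pi(1-t)}}\,\exp\!\Big(-\frac{(x - W_t)^2}{2(1-t)}\Big), \qquad dp^x_t = p^x_t\,\frac{x - W_t}{1-t}\, dW_t, $$
so $\alpha^x_t = p^x_t\,(x - W_t)/(1-t)$, and Theorem 1 returns the Brownian bridge decomposition:
$$ \frac{\alpha^{L}_t}{p^{L}_t} = \frac{W_1 - W_t}{1-t}, \qquad W_t - \int_0^t \frac{W_1 - W_s}{1-s}\, ds \ \text{ is a } (\mathcal G_t)\text{-Brownian motion}. $$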
Initial enlargements 4

Information drift via Malliavin calculus.

On the Wiener space: information drift $=$ logarithmic Malliavin trace of the conditional probability relative to the new information.

Theorem 2 (Imkeller, Pontier, Weisz 2000). If
$$ D_t P(L \in dx\,|\,\mathcal F_t) \ll P(L \in dx\,|\,\mathcal F_t), $$
then the $(\mathcal G_t)$-information drift is given by
$$ \frac{D_t\, p^{L(\omega)}_t(\omega)}{p^{L(\omega)}_t(\omega)}. $$
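A minimal numerical sketch (my addition; the library, seed and discretisation choices are assumptions, not from the talk) checking the information drift of the $W_1$-example above: subtracting $\int_0^t (W_1 - W_s)/(1-s)\, ds$ from $W$ should leave a $(\mathcal G_t)$-Brownian motion, in particular with $\mathrm{Var}(B_t) = t$ and $B_t$ uncorrelated with $W_1$.

```python
import numpy as np

# Monte Carlo check: for the initial enlargement G_t = F_t v sigma(W_1), the
# information drift is mu_t = (W_1 - W_t)/(1 - t); removing its integral from
# W should leave a Brownian motion B independent of W_1.
rng = np.random.default_rng(0)
n_paths, n_steps, T = 20_000, 2_000, 1.0
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)
W1 = W[:, -1]

# left-point Riemann sum of mu_s = (W_1 - W_s)/(1 - s); the grid stops one
# step before s = 1, avoiding the singularity of the drift there
mu = (W1[:, None] - W[:, :-1]) / (1.0 - t[:-1])
A = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(mu * dt, axis=1)], axis=1)
B = W - A

k = n_steps // 2  # grid index of t = 0.5
print("Var(B_0.5)       ~", B[:, k].var())                   # expect ~0.5
print("Corr(B_0.5, W_1) ~", np.corrcoef(B[:, k], W1)[0, 1])  # expect ~0
```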
General enlargements (continuous case) 1

Arbitrary enlargement $(\mathcal G_t) \supset (\mathcal F_t)$.

Aim: a general representation of the information drift of a continuous martingale $M$ with respect to $(\mathcal G_t)$.

Assumption: there exist countably generated filtrations $(\mathcal F^0_t)$ and $(\mathcal G^0_t)$ such that $(\mathcal F_t)$ and $(\mathcal G_t)$ are their smallest extensions satisfying the usual conditions.

$\Rightarrow$ a regular conditional probability $P_t(\omega, A)$ relative to $\mathcal F^0_t$ exists.

Martingale property $\Rightarrow$
$$ P_t(\cdot, A) = P(A) + \int_0^t k_s(\cdot, A)\, dM_s + L^A_t, \qquad \text{where } \langle L^A, M\rangle = 0. $$
General enlargements (continuous case) 2

Condition (Abs): $k_t(\omega, \cdot)\big|_{\mathcal G^0_{t-}}$ is a signed measure on $\mathcal G^0_{t-}$ and satisfies
$$ k_t(\omega, \cdot)\big|_{\mathcal G^0_{t-}} \ll P_t(\omega, \cdot)\big|_{\mathcal G^0_{t-}} $$
for $d\langle M, M\rangle \otimes P$-a.a. $(\omega, t)$.

Lemma 1. There exists an $(\mathcal F_t \otimes \mathcal G_t)$-predictable process $\gamma$ such that for $d\langle M, M\rangle \otimes P$-a.a. $(\omega, t)$
$$ \gamma_t(\omega, \omega') = \frac{k_t(\omega, d\omega')}{P_t(\omega, d\omega')}\bigg|_{\mathcal G^0_{t-}}. $$

Theorem 3 (A., Dereich, Imkeller 2005). The information drift of $M$ relative to $(\mathcal G_t)$ is given by
$$ \alpha_t(\omega) = \gamma_t(\omega, \omega). $$
General enlargements (continuous case) 3

Question: when is (Abs) satisfied? How strong is the assumption (Abs)?

Theorem 4. There exists a square-integrable information drift $\Longrightarrow$ (Abs).

Proof: requires that the $\sigma$-fields are countably generated.

Questions:
1. Practical relevance?
2. What about martingales with jumps?
General enlargements for pure jump martingales 1

Purely discontinuous martingales:
$$ X_t = \int_0^t \int_{\mathbb R_0} \psi(s, z)\, [\mu - \pi](ds, dz), $$
$\mu =$ Poisson random measure with compensator $\pi$; $\psi$ predictable and integrable.

Predictable representation property: if $M$ is a square-integrable $(\mathcal F_t)$-martingale, then there exists a predictable $\varphi \in L^2(\pi \otimes P)$ such that
$$ M_t = M_0 + \int_0^t \int_{\mathbb R_0} \varphi(s, z)\, [\mu - \pi](ds, dz). $$
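A concrete instance (my addition): taking $\psi(s, z) = z$ gives the compensated jump part of a Lévy process,
$$ X_t = \int_0^t \int_{\mathbb R_0} z\, [\mu - \pi](ds, dz), $$
which is a square-integrable martingale whenever $\int_{\mathbb R_0} z^2\, \nu(dz) < \infty$.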
General enlargements for pure jump martingales 2

Arbitrary enlargement $(\mathcal G_t) \supset (\mathcal F_t)$. Conditional new information:
$$ P_t(\cdot, A) = P(A) + \int_0^t \int_{\mathbb R_0} k_s(z, A)\, [\mu - \pi](ds, dz). $$
$\nu =$ Lévy measure.

Condition (Abs): $\int_{\mathbb R_0} \psi_t(\omega, z)\, k_t(\omega, z, \cdot)\, d\nu(z)\big|_{\mathcal G^0_{t-}}$ is a signed measure on $\mathcal G^0_{t-}$ and satisfies
$$ \int_{\mathbb R_0} \psi_t(\omega, z)\, k_t(\omega, z, \cdot)\, d\nu(z)\bigg|_{\mathcal G^0_{t-}} \ll P_t(\omega, \cdot)\big|_{\mathcal G^0_{t-}} $$
for $P \otimes \ell$-a.a. $(\omega, t)$.
General enlargements for pure jump martingales 3

Theorem 5. There exists an $(\mathcal F_t \otimes \mathcal G_t)$-predictable $\delta$ such that for $d\langle M, M\rangle \otimes P$-a.a. $(\omega, t)$
$$ \delta_t(\omega, \omega') = \frac{\int_{\mathbb R_0} \psi_t(\omega, z)\, k_t(\omega, z, d\omega')\, d\nu(z)}{P_t(\omega, d\omega')}\bigg|_{\mathcal G^0_{t-}}. $$
Moreover, $\eta_t(\omega) = \delta_t(\omega, \omega)$ is the information drift of $X$, i.e.
$$ X_t - \int_0^t \eta_s\, ds \quad \text{is a } (\mathcal G_t)\text{-local martingale}. $$
General enlargements for pure jump martingales 4

Calculating examples. General scheme:

• If $\mathcal G^0_t = \mathcal F^0_t \vee \mathcal H^0_t$, then it is enough to determine the density along $(\mathcal H^0_t)$, i.e.
$$ \delta_t(\omega, \omega') = \frac{\int_{\mathbb R_0} \psi_t(\omega, z)\, k_t(\omega, z, d\omega')\, d\nu(z)}{P_t(\omega, d\omega')}\bigg|_{\mathcal H^0_{t-}}. $$

• Determine the density by using a generalized Clark-Ocone formula:
$$ k_t(\omega, z, A) = \text{predictable projection of } D_{t,z} P_{t+}(\omega, A). $$
General enlargements for pure jump martingales 5

A Clark-Ocone formula for Poisson random measures.

Canonical space: $\Omega =$ set of all integer-valued measures $\omega$ on $[0, 1] \times \mathbb R \setminus \{0\}$ such that
• $\omega(\{(t, z)\}) \in \{0, 1\}$,
• $\omega(A \times B) < \infty$ if $\pi(A \times B) = \lambda(A)\, \nu(B) < \infty$.

Random measure: $\mu(\omega; A \times B) := \omega(A \times B)$.

$P =$ measure on $\Omega$ such that $\mu$ is a Poisson random measure with compensator $\pi = \lambda \otimes \nu$.
General enlargements for pure jump martingales 6

Picard's difference operator.

Definition: $\epsilon^-_{(t,z)}$ and $\epsilon^+_{(t,z)}: \Omega \to \Omega$ are defined by
$$ \epsilon^-_{(t,z)}\omega(A \times B) := \omega(A \times B \cap \{(t, z)\}^c), $$
$$ \epsilon^+_{(t,z)}\omega(A \times B) := \epsilon^-_{(t,z)}\omega(A \times B) + \mathbf 1_A(t)\, \mathbf 1_B(z), $$
$$ D_{(t,z)} F := F \circ \epsilon^+_{(t,z)} - F. $$

Theorem 6. Let $F$ be bounded and $\mathcal F_1$-measurable. Then
$$ F = E(F) + \int_0^1 \int_{\mathbb R_0} [D_{(t,z)} F]^p\, [\mu - \pi](dt, dz), $$
where $[D_{(\cdot,z)} F]^p$ is the predictable projection of $D_{(\cdot,z)} F$.
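A one-line example of the operator (my addition): for the counting functional $F(\omega) = \omega(A \times B)$,
$$ D_{(t,z)} F(\omega) = \epsilon^-_{(t,z)}\omega(A \times B) + \mathbf 1_A(t)\, \mathbf 1_B(z) - \omega(A \times B) = \mathbf 1_A(t)\, \mathbf 1_B(z)\,\bigl(1 - \omega(\{(t, z)\})\bigr), $$
so adding a point at $(t, z)$ changes $F$ by one exactly when $(t, z) \in A \times B$ and $(t, z)$ is not already an atom of $\omega$.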
General enlargements for pure jump martingales 7

Generating information drifts.

Recall:
$$ P_t(\cdot, A) = P(A) + \int_0^t \int_{\mathbb R_0} k_s(z, A)\, [\mu - \pi](ds, dz). $$

Theorem 7. Let $A \in \mathcal F$. Then
$$ k_t(z, A) = [D_{(t,z)}(P_{t+}(\omega, A))]^p = P_{t-}(\epsilon^+_{(t,z)}\omega, A) - P_{t-}(\omega, A). $$
General enlargements for pure jump martingales 8

Example:
$$ X_t = \int_0^t \int_{\mathbb R_0} \psi(s, z)\, [\mu - \pi](ds, dz), $$
$(\mathcal F^0_t) =$ filtration generated by $\mu$,
$\mathcal G^0_t = \mathcal F^0_t \vee \sigma(|X_1|)$ (initial enlargement).

Suppose $P(X_1 - X_t \in dx) \ll$ Lebesgue measure, and write
$$ f(t, x) = \frac{P(X_1 - X_t \in dx)}{dx}. $$
General enlargements for pure jump martingales 9

Then
$$ P_t(\cdot, |X_1| \le c) = \int_0^c [f(t, y - X_t) + f(t, -y - X_t)]\, dy $$
and
$$ P_{t+}(\epsilon^+_{(t,z)}\omega, |X_1| \le c) = \int_0^c [f(t, y - X_t(\omega) - z) + f(t, -y - X_t(\omega) - z)]\, dy. $$
Consequently,
$$ k_t(z, |X_1| \le c) = \int_0^c [f(t, y - X_{t-} - z) + f(t, -y - X_{t-} - z)]\, dy - P_{t-}(\cdot, |X_1| \le c), $$
$$ \longrightarrow\quad \delta_t(\omega, \omega') = \frac{\int_{\mathbb R_0} \psi(t, z)\, k_t(\omega, z, d\omega')\, d\nu(z)}{P_t(\omega, d\omega')}\bigg|_{\sigma(|X_1|)}. $$
General enlargements for pure jump martingales 10

Lemma 2. The information drift $\eta_t$ of $X$ relative to $(\mathcal G_t)$ is given by
$$ \eta_t = \int_{\mathbb R_0} \psi(t, z) \left[ \frac{f(t, |X_1| - X_{t-} - z) + f(t, -|X_1| - X_{t-} - z)}{f(t, |X_1| - X_{t-}) + f(t, -|X_1| - X_{t-})} - 1 \right] \nu(dz). $$

Remarks:
a) If $\int_{\mathbb R_0} |\psi(t, z)|\, d\nu(z) < \infty$, the two terms in the bracket can be integrated separately.
b) This scheme works for many examples.
Conclusion 1

• Enlargements of filtrations can be seen as random changes of measure.
• Variational calculus allows one to derive explicit semimartingale decompositions with respect to enlarged filtrations.
• On Wiener space: information drift $=$ logarithmic Malliavin trace of the conditional probability relative to the enlarging information.
• On a Poisson space: information drift $=$ logarithmic Picard trace of the conditional probability relative to the enlarging information.
Thanks

Thanks for your attention!