Mathematical Foundations for Finance
Exercise 8
Martin Stefanik, ETH Zurich
Normal Distribution

Due to the definition of Brownian motion, the normal distribution plays an important role in the computations that involve it. The density of $N(0,1)$ will be denoted by $\varphi$ and takes the following form:
\[
\varphi(x) = \frac{1}{\sqrt{2\pi}} \exp\Big(-\frac{x^2}{2}\Big).
\]
The cumulative distribution function (cdf) of $N(0,1)$ will be denoted by $\Phi$ and cannot be expressed in a closed form.

We can define $N(\mu, \sigma^2)$ as the distribution of $X = \sigma Z + \mu$ for $\mu \in \mathbb{R}$, $\sigma > 0$ and $Z \sim N(0,1)$. This means that for $F_X$ and $f_X$, denoting the cdf and the density of $X$ respectively, we have
\[
F_X(x) = P[\sigma Z + \mu \le x] = P\Big[Z \le \frac{x-\mu}{\sigma}\Big] = \Phi\Big(\frac{x-\mu}{\sigma}\Big),
\]
\[
f_X(x) = \frac{d}{dx}\,\Phi\Big(\frac{x-\mu}{\sigma}\Big) = \frac{1}{\sigma}\,\varphi\Big(\frac{x-\mu}{\sigma}\Big) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\Big(-\frac{(x-\mu)^2}{2\sigma^2}\Big).
\]
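These formulas can be evaluated numerically; a minimal sketch follows, expressing $\Phi$ through the error function since it has no closed form (the helper names `phi`, `Phi`, `f_X`, `F_X` are mine, chosen to mirror the notation above):

```python
import math

def phi(x):
    """Standard normal density: exp(-x^2/2) / sqrt(2*pi)."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal cdf, expressed through the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def f_X(x, mu, sigma):
    """Density of N(mu, sigma^2): (1/sigma) * phi((x - mu) / sigma)."""
    return phi((x - mu) / sigma) / sigma

def F_X(x, mu, sigma):
    """Cdf of N(mu, sigma^2): Phi((x - mu) / sigma)."""
    return Phi((x - mu) / sigma)
```

A quick finite-difference check of $F_X' = f_X$ at a few points confirms the chain-rule computation above.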
Normal Distribution

Let $X \sim N(\mu_1, \sigma_1^2)$ and $Y \sim N(\mu_2, \sigma_2^2)$ be independent random variables (defined on the same probability space). Then

1. $E[|X|^k] < \infty$ for all $k \in \mathbb{N}$. This can be easily seen since the exponential function grows faster than any power.
2. $E[(X - \mu_1)^k] = 0$ for all odd $k \in \mathbb{N}$. This follows from the symmetry of $\varphi$ – the function $x \mapsto x^k \varphi(x/\sigma)$ is odd.
3. $aX + b \sim N(a\mu_1 + b, a^2\sigma_1^2)$ for any $a, b \in \mathbb{R}$ – affine transformations of normal random variables retain normality.
4. $X \pm Y \sim N(\mu_1 \pm \mu_2, \sigma_1^2 + \sigma_2^2)$ – sums of independent normal random variables retain normality.
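Properties 3 and 4 can be sanity-checked by Monte Carlo; the following sketch (illustrative only, with parameter values of my choosing) verifies the first two moments of an affine transform and of a sum of independent normals:

```python
import random
import statistics

random.seed(0)
mu1, s1, mu2, s2, a, b = 1.0, 2.0, -0.5, 1.5, 3.0, 4.0
n = 200_000
xs = [random.gauss(mu1, s1) for _ in range(n)]
ys = [random.gauss(mu2, s2) for _ in range(n)]

# Property 3: aX + b ~ N(a*mu1 + b, a^2 * s1^2), so mean ~ 7.0, st. dev. ~ 6.0
aff = [a * x + b for x in xs]
# Property 4: X + Y ~ N(mu1 + mu2, s1^2 + s2^2), so mean ~ 0.5, st. dev. ~ 2.5
sums = [x + y for x, y in zip(xs, ys)]

print(statistics.mean(aff), statistics.stdev(aff))
print(statistics.mean(sums), statistics.stdev(sums))
```

Matching the first two moments does not by itself prove normality, of course; the point of the slide is that normality itself is preserved.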
Normal Distribution

The last two properties can be easily shown using the so-called moment generating function (MGF), defined for a random variable $X$ by
\[
M_X(t) = E[\exp(tX)], \quad t \in \mathbb{R},
\]
whenever this expectation exists.

The MGF of $X \sim N(\mu, \sigma^2)$ is
\[
M_X(t) = \exp\Big(\mu t + \frac{1}{2}\sigma^2 t^2\Big),
\]
and its derivation demonstrates the technique of completing the square, which is frequently useful when it comes to the normal distribution.

Additionally, we will often work with random variables of the form $Y = \exp(X)$ with $X \sim N(\mu, \sigma^2)$ (i.e. $Y$ is a lognormal random variable), so the knowledge of the MGF of the normal distribution comes in handy when computing $E[Y]$, as we have that $E[Y] = M_X(1)$.
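The identity $E[Y] = M_X(1) = \exp(\mu + \sigma^2/2)$ can be checked empirically; a minimal Monte Carlo sketch (parameter values are my own choice):

```python
import math
import random

random.seed(1)
mu, sigma = 0.1, 0.5
n = 500_000

# Empirical mean of Y = exp(X) for X ~ N(mu, sigma^2)
emp = sum(math.exp(random.gauss(mu, sigma)) for _ in range(n)) / n
# Exact value via the MGF evaluated at t = 1
exact = math.exp(mu + 0.5 * sigma**2)
print(emp, exact)  # the two values agree to a few decimal places
```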
Square Bracket Process

Theorem 1
For any local martingale $M = (M_t)_{t \ge 0}$ null at 0, there exists a unique adapted, increasing RCLL process $[M] = ([M]_t)_{t \ge 0}$ null at zero with $\Delta[M] = (\Delta M)^2$ having the property that $M^2 - [M]$ is a local martingale.

• We call this process $[M]$ the square bracket process or optional quadratic variation.
• Important: $[M]$ is not a unique process such that $M^2 - [M]$ is a local martingale. It is just unique among all processes satisfying the conditions in the above theorem.
• This process and its properties are of significant importance in stochastic analysis – it comes up in the famous Itô's formula, for instance.
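For Brownian motion the optional quadratic variation is the deterministic process $[W]_t = t$. A small simulation sketch (grid size and horizon are my choices) compares the sum of squared increments over $[0, T]$ with $T$:

```python
import math
import random

random.seed(2)
T, n = 1.0, 100_000
dt = T / n

# Brownian increments over a grid of mesh dt have distribution N(0, dt)
increments = [random.gauss(0.0, math.sqrt(dt)) for _ in range(n)]

# Realized quadratic variation: sum of squared increments
qv = sum(dw * dw for dw in increments)
print(qv)  # close to T = 1.0
```

As the mesh of the grid shrinks, this sum converges (in probability) to $[W]_T = T$.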
Covariation Process

Definition 2 (Covariation process)
Let $M$ and $N$ be two local martingales that are both null at 0. We define the covariation process of $M$ and $N$ by
\[
[M, N] = \frac{1}{4}\big([M + N] - [M - N]\big).
\]

• $[M, N]$ is the unique adapted RCLL process that is null at 0, of finite variation and with $\Delta[M, N] = \Delta M \, \Delta N$, such that $MN - [M, N]$ is a local martingale.
• Note that $[M]$ and $[N]$ are increasing and therefore of finite variation. $[M, N]$ is not necessarily increasing anymore, but it is still of finite variation like $[M]$ and $[N]$. Compare this with variance and covariance.
• Similarly to the optional quadratic variation, covariation is of significant importance in stochastic analysis. Naturally, it occurs when we are dealing with multivariate stochastic processes.
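The polarization identity can be illustrated for two correlated Brownian motions $W$ and $B$ with correlation $\rho$, for which $[W, B]_t = \rho t$. A sketch under my own parameter choices, building $B$ from $W$ and an independent Brownian motion $Z$:

```python
import math
import random

random.seed(3)
T, n, rho = 1.0, 100_000, 0.6
dt = T / n
dW = [random.gauss(0.0, math.sqrt(dt)) for _ in range(n)]
dZ = [random.gauss(0.0, math.sqrt(dt)) for _ in range(n)]
# B = rho * W + sqrt(1 - rho^2) * Z gives Corr(W_t, B_t) = rho
dB = [rho * w + math.sqrt(1 - rho**2) * z for w, z in zip(dW, dZ)]

qv_plus = sum((w + b) ** 2 for w, b in zip(dW, dB))   # [W + B]_T
qv_minus = sum((w - b) ** 2 for w, b in zip(dW, dB))  # [W - B]_T
cov = 0.25 * (qv_plus - qv_minus)                     # polarization identity
print(cov)  # close to rho * T = 0.6
```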
Sharp Bracket Process

If the local martingale $M$ is locally square-integrable, then $[M]$ is locally integrable and it admits a unique increasing predictable process $\langle M \rangle = (\langle M \rangle_t)_{t \ge 0}$ null at zero such that $[M] - \langle M \rangle$ is a local martingale.

• Since sums of local martingales (w.r.t. the same probability measure and filtration) are again local martingales, we also have that
\[
M^2 - \langle M \rangle = (M^2 - [M]) + ([M] - \langle M \rangle)
\]
is a local martingale.
• The process $\langle M \rangle$ is called the predictable compensator or the sharp bracket process of $M$.
• As before, $\langle M \rangle$ is only unique among the processes with the aforementioned properties. In particular, we do not have that $\Delta \langle M \rangle = (\Delta M)^2$.
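A standard example where the two brackets differ is the compensated Poisson process $M_t = N_t - \lambda t$: its jumps have size 1, so $[M]_t = N_t$ (random, piecewise constant), while the predictable compensator is the deterministic process $\langle M \rangle_t = \lambda t$. A simulation sketch with parameters of my choosing:

```python
import random

random.seed(4)
lam, T = 5.0, 10.0

# Simulate a Poisson process on [0, T] via exponential waiting times
t, jumps = 0.0, 0
while True:
    t += random.expovariate(lam)
    if t > T:
        break
    jumps += 1

optional_bracket = jumps   # [M]_T = N_T, a Poisson(lam * T) random variable
sharp_bracket = lam * T    # <M>_T = lam * T = 50, deterministic
print(optional_bracket, sharp_bracket)
```

Note that $[M]_T$ jumps exactly where $M$ does, whereas $\langle M \rangle$ is continuous here, which is why $\Delta \langle M \rangle = (\Delta M)^2$ fails.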
Thank you for your attention!