

  1. Lecture on Parameter Estimation for Stochastic Differential Equations. Erik Lindström, FMS161/MASM18 Financial Statistics.

  2. Recap
     ◮ We are interested in the parameters θ in the stochastic integral equation
       X(t) = X(0) + \int_0^t \mu_\theta(s, X(s)) \, ds + \int_0^t \sigma_\theta(s, X(s)) \, dW(s)   (1)
     Why?
     ◮ Model validation
     ◮ Risk management
     ◮ Advanced hedging (Greeks 9.2.2 and quadratic hedging 9.2.2.1 (P/Q))
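Paths of the model in (1) can be generated with the Euler-Maruyama scheme; a minimal Python sketch (the function name and the example drift/diffusion values are illustrative, not from the lecture):

```python
import numpy as np

def euler_maruyama(mu, sigma, x0, T, N, rng):
    """Simulate one path of dX = mu(t, X) dt + sigma(t, X) dW on [0, T]."""
    dt = T / N
    t = np.linspace(0.0, T, N + 1)
    x = np.empty(N + 1)
    x[0] = x0
    dW = rng.normal(0.0, np.sqrt(dt), size=N)  # Brownian increments
    for n in range(N):
        x[n + 1] = x[n] + mu(t[n], x[n]) * dt + sigma(t[n], x[n]) * dW[n]
    return t, x

# Illustrative example: arithmetic Brownian motion with mu = 0.1, sigma = 0.2
rng = np.random.default_rng(0)
t, x = euler_maruyama(lambda t, x: 0.1, lambda t, x: 0.2,
                      x0=1.0, T=1.0, N=1000, rng=rng)
```

The same discretization reappears below as the basis of the discretized maximum likelihood estimator.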

  6. Some asymptotics
     Consider the arithmetic Brownian motion
       dX(t) = \mu \, dt + \sigma \, dW(t)   (2)
     The drift is estimated by computing the mean and compensating for the sampling interval \delta = t_{n+1} - t_n:
       \hat{\mu} = \frac{1}{N \delta} \sum_{n=0}^{N-1} \left( X(t_{n+1}) - X(t_n) \right)   (3)
     Expanding this expression reveals that the MLE is given by
       \hat{\mu} = \frac{X(t_N) - X(t_0)}{t_N - t_0} = \mu + \sigma \frac{W(t_N) - W(t_0)}{t_N - t_0}   (4)
     The MLE for the diffusion (\sigma) parameter is given by
       \hat{\sigma}^2 = \frac{1}{\delta (N-1)} \sum_{n=0}^{N-1} \left( X(t_{n+1}) - X(t_n) - \hat{\mu} \delta \right)^2 \xrightarrow{d} \sigma^2 \frac{\chi^2(N-1)}{N-1}   (5)
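The estimators (3) and (5) are straightforward to compute from equidistant observations; a sketch in Python (function name and the simulated parameter values are my own, chosen for illustration):

```python
import numpy as np

def abm_mle(x, delta):
    """MLE of (mu, sigma^2) for dX = mu dt + sigma dW from equidistant samples."""
    dx = np.diff(x)                              # increments X(t_{n+1}) - X(t_n)
    N = len(dx)
    mu_hat = dx.sum() / (N * delta)              # eq. (3); telescopes to eq. (4)
    sigma2_hat = ((dx - mu_hat * delta) ** 2).sum() / (delta * (N - 1))  # eq. (5)
    return mu_hat, sigma2_hat

# Check on simulated data: exact ABM increments are N(mu*delta, sigma^2*delta)
rng = np.random.default_rng(42)
mu, sigma, delta, N = 0.5, 0.3, 0.01, 100_000
x = np.cumsum(np.r_[0.0, mu * delta + sigma * np.sqrt(delta) * rng.normal(size=N)])
mu_hat, sigma2_hat = abm_mle(x, delta)
```

Equation (4) shows why the drift is the hard parameter: \hat{\mu} depends only on the endpoints, so its variance shrinks with the observation span t_N - t_0, not with the sampling frequency, while (5) improves with the number of increments.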

  10. A simple method
     Many data sets are sampled at high frequency, making the bias from discretizing the SDE with one of the schemes in Chapter 12 acceptable. The simplest discretization, the explicit Euler method, applied to the stochastic differential equation
       dX(t) = \mu(t, X(t)) \, dt + \sigma(t, X(t)) \, dW(t)   (6)
     corresponds to the Discretized Maximum Likelihood (DML) estimator
       \hat{\theta}_{DML} = \arg\max_{\theta \in \Theta} \sum_{n=1}^{N-1} \log \phi \left( X(t_{n+1}); \, X(t_n) + \mu(t_n, X(t_n)) \Delta, \, \Sigma(t_n, X(t_n)) \Delta \right)   (7)
     where \phi(x; m, P) is the density of a multivariate Normal distribution with argument x, mean m and covariance P, and
       \Sigma(t, X(t)) = \sigma(t, X(t)) \sigma(t, X(t))^T   (8)
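For a scalar model the objective in (7) is a sum of Gaussian log-densities and can be maximized numerically. A sketch under my own naming, fitted to simulated arithmetic Brownian motion (a case where the Euler transition density happens to be exact):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def dml_negloglik(theta, x, t, mu, sigma):
    """Negative Euler-discretized (DML) log-likelihood for a scalar SDE.

    mu(theta, t, x) and sigma(theta, t, x) are the drift and diffusion
    functions; x, t are the observations and their sampling times.
    """
    dt = np.diff(t)
    m = x[:-1] + mu(theta, t[:-1], x[:-1]) * dt        # Euler mean
    s = sigma(theta, t[:-1], x[:-1]) * np.sqrt(dt)     # Euler standard deviation
    return -norm.logpdf(x[1:], loc=m, scale=s).sum()

# Simulated ABM with mu = 0.1, sigma = 0.2
rng = np.random.default_rng(1)
delta, N = 0.01, 5_000
x = np.cumsum(np.r_[0.0, 0.1 * delta + 0.2 * np.sqrt(delta) * rng.normal(size=N)])
t = delta * np.arange(N + 1)
res = minimize(dml_negloglik, x0=np.array([0.0, 0.5]),
               args=(x, t, lambda th, t, x: th[0], lambda th, t, x: abs(th[1])),
               method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], abs(res.x[1])
```

The `abs` around the diffusion parameter is a cheap way to keep the optimizer unconstrained; a log-parameterization is a common alternative.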

  12. Consistency
     ◮ The DMLE is generally NOT consistent.
     ◮ Approximate ML estimators (13.5) are, provided enough computational resources are allocated:
       ◮ Simulation based estimators
       ◮ Fokker-Planck based estimators
       ◮ Series expansions
     ◮ GMM-type estimators (13.6) are consistent if the moments are correctly specified (which is a non-trivial problem!)

  15. Simulation based estimators
     ◮ Discretely observed SDEs are Markov processes.
     ◮ It then follows that
       p_\theta(x_t \mid x_s) = E_\theta \left[ p_\theta(x_t \mid x_\tau) \mid \mathcal{F}(s) \right], \quad t > \tau > s   (9)
       This is the Pedersen algorithm.
     ◮ Improved by Durham-Gallant (2002) and Lindström (2012)
     ◮ Works very well for multivariate models!
     ◮ and is easily (...) extended to Lévy driven SDEs.
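Equation (9) suggests a Monte Carlo estimator of the transition density: simulate paths from x_s forward to the intermediate time τ, then average the one-step Euler density of the final jump to x_t. A minimal scalar sketch (the function name and the defaults M, K are my own choices):

```python
import numpy as np

def pedersen_density(x_s, x_t, dt, mu, sigma, M=8, K=5_000, rng=None):
    """Monte Carlo estimate of p_theta(x_t | x_s) in the spirit of eq. (9).

    Simulates K Euler paths over M - 1 subintervals of length dt / M,
    then averages the Gaussian Euler density of the last subinterval.
    """
    rng = np.random.default_rng() if rng is None else rng
    h = dt / M
    x = np.full(K, float(x_s))
    for _ in range(M - 1):                                   # paths up to time tau
        x = x + mu(x) * h + sigma(x) * np.sqrt(h) * rng.normal(size=K)
    var = sigma(x) ** 2 * h                                  # last-step variance
    dens = np.exp(-(x_t - x - mu(x) * h) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return dens.mean()
```

Simulating blindly forward means few paths end near x_t; the Durham-Gallant refinement replaces this with importance sampling toward x_t, which greatly reduces the variance of the estimate.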
