Limit theorems and statistical inference for ergodic solutions of Lévy driven SDE's

Alexei M. Kulik
Institute of Mathematics, Kyiv, Ukraine, kulik.alex.m@gmail.com

Tokyo, 3 September 2013
I. A statistical model based on discrete observations of a Lévy driven SDE

This part of the talk is based on joint research with D. Ivanenko. Consider a solution $X^\theta$ to an SDE driven by a Lévy process $Z$:
$$dX^\theta_t = a_\theta(X^\theta_t)\,dt + dZ_t, \qquad X_0 = x_0. \tag{1}$$
Denote by $P^\theta_n$ the distribution of the sample $(X_h,\dots,X_{nh})$, and consider the statistical experiments
$$\mathcal{E}_n = \bigl(\mathbb{R}^n, \mathcal{B}(\mathbb{R}^n), \{P^\theta_n,\ \theta\in\Theta\}\bigr), \qquad n\ge 1.$$
The state space for $X, Z$ is $\mathbb{R}$, and the parameter set $\Theta$ is an open interval in $\mathbb{R}$. In our model:
- the noise is an infinite intensity Lévy process without a diffusion part;
- we consider the fixed frequency case: in contrast to high frequency models, where $h_n \to 0$, we assume $h > 0$ to be fixed;
- we are mainly focused on the asymptotic properties of the MLE, because we aim to obtain an asymptotically efficient estimator.
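To fix intuition, here is a minimal simulation sketch of the model (1) for the one-sided α-stable example used later in the talk. The Euler discretisation, the linear drift $a_\theta(x) = -\theta x$, and the use of scipy's levy_stable sampler (with the $\Delta t^{1/\alpha}$ scaling of increments) are illustrative assumptions, not part of the original material.

```python
import numpy as np
from scipy.stats import levy_stable

def simulate_path(theta, alpha, x0=1.0, h=1.0, n_obs=500, n_sub=200, seed=0):
    """Euler scheme for dX = -theta * X dt + dZ with one-sided alpha-stable Z.

    Returns the discrete sample (X_h, X_2h, ..., X_{n_obs*h}) observed at step h;
    each observation interval is refined into n_sub Euler sub-steps.
    """
    rng = np.random.default_rng(seed)
    dt = h / n_sub
    # Increments of a totally skewed (beta=1) alpha-stable process over time dt
    # scale like dt**(1/alpha); parametrisation details are glossed over here.
    dZ = levy_stable.rvs(alpha, beta=1.0, scale=dt**(1.0 / alpha),
                         size=n_obs * n_sub, random_state=rng)
    x = x0
    sample = []
    for k in range(n_obs):
        for j in range(n_sub):
            x = x + (-theta * x) * dt + dZ[k * n_sub + j]
        sample.append(x)
    return np.array(sample)

# Example: 500 observations at fixed frequency h = 1
X = simulate_path(theta=0.5, alpha=0.8)
```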
The likelihood function of the model

The likelihood function of our Markov model has the form
$$L_n(\theta; x_1,\dots,x_n) = \prod_{k=1}^n p^\theta_h(x_{k-1}, x_k), \qquad (x_1,\dots,x_n)\in\mathbb{R}^n$$
(the initial value $x_0$ is assumed to be known), where $p^\theta_t(x,y)$ is the transition probability density of $X^\theta$. Both the likelihood function and the likelihood ratio
$$Z_n(\theta_0,\theta; x_1,\dots,x_n) = \frac{L_n(\theta; x_1,\dots,x_n)}{L_n(\theta_0; x_1,\dots,x_n)}$$
are implicit, because analytical expressions for $p^\theta_t(x,y)$ or their ratio are not available.
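In implementation terms the product structure is straightforward once the transition density is available numerically. The helper transition_density below is a hypothetical placeholder (in practice it would come from some numerical approximation, since no closed form exists); this is a sketch, not part of the original slides.

```python
import numpy as np

def log_likelihood(theta, sample, x0, transition_density, h=1.0):
    """log L_n(theta; x_1, ..., x_n) for a Markov chain observed at step h.

    transition_density(theta, h, x, y) is assumed to return p^theta_h(x, y);
    it is a placeholder for whatever numerical approximation is used.
    """
    xs = np.concatenate(([x0], sample))
    logp = 0.0
    for k in range(1, len(xs)):
        p = transition_density(theta, h, xs[k - 1], xs[k])
        if p <= 0.0:          # the density may vanish: L_n(theta; .) = 0
            return -np.inf
        logp += np.log(p)
    return logp
```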
Specific feature of the model: the likelihood function may be non-trivially degenerate

In general, $L_n(\theta;\cdot)$ may equal zero on a non-empty set $N^\theta_n \subset \mathbb{R}^n$. Moreover, this set can depend non-trivially on $\theta$. To see this, consider the example of an Ornstein-Uhlenbeck process driven by a one-sided $\alpha$-stable process with $\alpha < 1$:
$$dX^\theta_t = -\theta X^\theta_t\,dt + dZ_t, \qquad Z_t = \int_0^t\!\!\int_0^\infty u\,\nu(ds,du).$$
Then by a support theorem for Lévy driven SDE's (Simon 2000), the (topological) support of $P^\theta_n$ is
$$S^\theta_n = \bigl\{(x_1,\dots,x_n) : x_k \ge e^{-\theta h} x_{k-1},\ k=1,\dots,n\bigr\},$$
which depends non-trivially on $\theta$. Because $S^\theta_n = \mathrm{closure}(\mathbb{R}^n\setminus N^\theta_n)$, the set $N^\theta_n$ depends non-trivially on $\theta$ as well. Hence our model cannot be treated as a model with a $C^1$ log-likelihood function.
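The shape of the support is easy to see from the explicit solution of the linear equation; the following short verification is added here for completeness and is not part of the original slide.

```latex
% By variation of constants,
\[
  X^\theta_{kh} \;=\; e^{-\theta h}\,X^\theta_{(k-1)h}
  \;+\; \int_{(k-1)h}^{kh} e^{-\theta(kh-s)}\, dZ_s ,
\]
% and since Z has only positive jumps (it is non-decreasing here), the integral
% is nonnegative, so
\[
  X^\theta_{kh} \;\ge\; e^{-\theta h}\,X^\theta_{(k-1)h}, \qquad k = 1,\dots,n,
\]
% which is exactly the constraint defining S^\theta_n.
```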
Main result: conditions on the noise

- Smoothness near the origin: for some $u_0 > 0$, the restriction of $\mu$ to $[-u_0, u_0]$ has a positive density $\sigma\in C^2([-u_0,0)\cup(0,u_0])$, and there exists $C_0$ such that
$$|\sigma'(u)| \le C_0 |u|^{-1}\sigma(u), \qquad |\sigma''(u)| \le C_0 u^{-2}\sigma(u), \qquad |u|\in(0,u_0];$$
- sufficiently high intensity of "small jumps":
$$\Bigl(\log\frac{1}{\varepsilon}\Bigr)^{-1}\mu\{u : |u|\ge\varepsilon\} \to \infty, \qquad \varepsilon\to 0;$$
- moment bound for "large jumps": for some $\varepsilon > 0$,
$$\int_{|u|\ge 1} |u|^{4+\varepsilon}\,\mu(du) < \infty.$$
An example: a tempered $\alpha$-stable measure $\mu(du) = r(u)\,u^{-\alpha-1}\,du$.
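As a quick sanity check (added here, not on the original slide), the "small jumps" condition indeed holds for the α-stable tail; the computation below assumes, for illustration, a one-sided density $c\,u^{-\alpha-1}$ near the origin and a tempering function $r$ bounded away from zero there.

```latex
% Assume \mu(du) = c\,u^{-\alpha-1}\,du on (0, u_0]. Then for small \varepsilon,
\[
  \mu\{u : |u| \ge \varepsilon\}
  \;\ge\; \int_\varepsilon^{u_0} c\,u^{-\alpha-1}\,du
  \;=\; \frac{c}{\alpha}\bigl(\varepsilon^{-\alpha} - u_0^{-\alpha}\bigr),
\]
% so that
\[
  \Bigl(\log\tfrac{1}{\varepsilon}\Bigr)^{-1}\mu\{u : |u|\ge\varepsilon\}
  \;\gtrsim\; \frac{\varepsilon^{-\alpha}}{\log(1/\varepsilon)}
  \;\longrightarrow\; \infty, \qquad \varepsilon \to 0.
\]
% Tempering only modifies the density away from the origin (provided r is bounded
% away from zero near 0), so it does not change this conclusion.
```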
Main result: conditions on the coefficients

- Regularity and bounds: $a\in C^{3,2}(\mathbb{R}\times\Theta)$ has bounded derivatives
$$\partial_x a,\ \partial^2_{xx}a,\ \partial^2_{x\theta}a,\ \partial^3_{xxx}a,\ \partial^3_{xx\theta}a,\ \partial^3_{x\theta\theta}a,\ \partial^4_{xxx\theta}a,\ \partial^4_{xx\theta\theta}a,\ \partial^5_{xxx\theta\theta}a,$$
and
$$|a_\theta(x)| + |\partial_\theta a_\theta(x)| + |\partial^2_{\theta\theta}a_\theta(x)| \le C(1+|x|);$$
- "drift condition": for any compact set $K\subset\Theta$,
$$\limsup_{|x|\to+\infty}\frac{a_\theta(x)}{x} < 0 \quad\text{uniformly w.r.t. } \theta\in K.$$
An example: a perturbed OU process,
$$a_\theta(x) = -\theta x + \alpha_\theta(x), \qquad \alpha\in C^{3,2}_b(\mathbb{R}\times\Theta), \quad \Theta=(\theta_1,\theta_2), \quad \theta_1>0.$$
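For the perturbed OU example the drift condition is immediate; the following one-line check is added here and assumes only that $\alpha_\theta$ is bounded uniformly in $\theta$, as the subscript $b$ indicates.

```latex
% Since |\alpha_\theta(x)| \le \|\alpha\|_\infty for all x and \theta,
\[
  \frac{a_\theta(x)}{x} \;=\; -\theta + \frac{\alpha_\theta(x)}{x}
  \;\le\; -\theta_1 + \frac{\|\alpha\|_\infty}{|x|}
  \;\longrightarrow\; -\theta_1 \;<\; 0, \qquad |x|\to+\infty,
\]
% uniformly over \theta in any compact K \subset (\theta_1, \theta_2).
```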
Main result

Theorem. Every experiment $\mathcal{E}_n$, $n\ge 1$, is regular (see below), and there exists
$$\lim_{n\to\infty}\frac{I_n(\theta)}{n} = \sigma^2(\theta) = \mathbb{E}\bigl(g^\theta_h(X^{\theta,st}_0, X^{\theta,st}_h)\bigr)^2, \qquad g^\theta_h = \frac{\partial_\theta p^\theta_h}{p^\theta_h}.$$
In addition, if the model is locally identifiable in the sense that $\sigma^2(\theta) > 0$, $\theta\in\Theta$, and is globally identifiable, i.e. for every $\theta_1\ne\theta_2$ there exists $x = x(\theta_1,\theta_2)$ such that $P^{\theta_1}_h(x,\cdot)\ne P^{\theta_2}_h(x,\cdot)$, then the MLE $\hat\theta_n$ is consistent, asymptotically normal with $N(0,\sigma^2(\theta))$ limit distribution, and asymptotically efficient w.r.t. any loss function $w\in W_p$, i.e. $w(x,y) = v(|x-y|)$ with convex $v$ of at most polynomial growth at $\infty$.
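In practice the MLE has to be computed numerically. A minimal sketch, reusing the hypothetical log_likelihood helper from the earlier sketch and scipy's bounded scalar optimiser, could look as follows; the bracketing interval is just an illustrative stand-in for $\Theta$.

```python
from scipy.optimize import minimize_scalar

def mle(sample, x0, transition_density, theta_interval, h=1.0):
    """Maximise theta -> log L_n(theta; sample) over the interval Theta.

    theta_interval = (theta_low, theta_high) plays the role of Theta;
    log_likelihood and transition_density are the numerical placeholders
    sketched earlier and are assumed to be in scope.
    """
    res = minimize_scalar(
        lambda th: -log_likelihood(th, sample, x0, transition_density, h),
        bounds=theta_interval,
        method="bounded",
    )
    return res.x
```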
The method

Because of the lack of $C^1$-smoothness of the log-likelihood function, it was almost inevitable for us to choose as the main tool the Ibragimov-Khas'minskii approach (Ibragimov-Khas'minskii 1981), which basically consists of the following three stages.
- Ground stage: Regularity property ⇒ Rao-Cramér inequality
- 1st stage: LAN property ⇒ Lower bounds for efficiency w.r.t. cost functions from $W_p$
- 2nd stage: Uniform LAN property; Hölder continuity and growth bounds for associated Hellinger processes ⇒ Asymptotic normality and efficiency of the MLE
Malliavin-calculus based integral representations for transition densities and their derivatives

It is well known that, in the framework of the Malliavin calculus, a representation
$$p^\theta_t(x,y) = \mathbb{E}^\theta_x\,\delta(\Xi_t)\,1_{X_t>y}, \qquad \Xi_t = \frac{DX_t}{\|DX_t\|^2_H}$$
can be obtained via an integration-by-parts procedure from the formal relation
$$p^\theta_t(x,y) = -\partial_y\,\mathbb{E}^\theta_x\,1_{X_t>y}.$$
Nualart 1995.

Similar heuristics lead to integral representations for the derivatives of $p^\theta_t(x,y)$:
$$\frac{\partial_\theta p^\theta_t(x,y)}{p^\theta_t(x,y)} = \mathbb{E}^{t,\theta}_{x,y}\,\delta(\Xi^1_t), \qquad \Xi^1_t = \frac{(\partial_\theta X_t)\,DX_t}{\|DX_t\|^2_H}.$$
Gobet 2001, 2002; Corcuera, Kohatsu-Higa 2011; Yoshida 1992, 1996.
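For readers less familiar with this device, the heuristic behind the first formula can be spelled out as follows. This is a purely formal sketch added here; it assumes the standard Malliavin duality $\mathbb{E}[f'(F)] = \mathbb{E}[f(F)\,\delta(DF/\|DF\|_H^2)]$.

```latex
% Take f approximating the Heaviside function 1_{(y,\infty)}, so f' approximates \delta_y:
\[
  p^\theta_t(x,y)
  \;=\; -\partial_y\,\mathbb{E}^\theta_x\, 1_{X_t > y}
  \;=\; \mathbb{E}^\theta_x\,\delta_y(X_t)
  \;\approx\; \mathbb{E}^\theta_x\, f'(X_t)
  \;=\; \mathbb{E}^\theta_x\Bigl[ f(X_t)\,\delta\Bigl(\tfrac{DX_t}{\|DX_t\|_H^2}\Bigr)\Bigr]
  \;\approx\; \mathbb{E}^\theta_x\,\delta(\Xi_t)\,1_{X_t>y}.
\]
% Making this rigorous is what the integration-by-parts procedure does.
```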
Integral representations (continued)

To get the integration-by-parts framework on the Poisson probability space, we use an approach close to the one introduced in Bismut 1981, modified and simplified in order to give integral representations explicitly. Let $\nu$ be the Poisson point measure involved in the Itô-Lévy representation of the Lévy process $Z$:
$$Z_t = \int_0^t\!\!\int_{|u|\le 1} u\,\bigl(\nu(ds,du) - ds\,\mu(du)\bigr) + \int_0^t\!\!\int_{|u|>1} u\,\nu(ds,du).$$
Then
$$D\int_0^t\!\!\int_{\mathbb{R}} f(u)\,\nu(ds,du) = \int_0^t\!\!\int_{\mathbb{R}} f'(u)\,\varrho(u)\,\nu(ds,du),$$
where $\varrho\in C^\infty_0$ is a function which equals $\varrho(u) = u^2$ in some neighbourhood of the point $u=0$. The derivative $D$ is generated by the perturbation of the jump points
$$(\tau,u)\mapsto(\tau,Q_\varepsilon(u)), \qquad \partial_\varepsilon Q_\varepsilon(u)\big|_{\varepsilon=0} = \varrho(u).$$
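The formula for $D$ is just the $\varepsilon$-derivative at zero of the perturbed functional; the following two-line computation is added here as an informal illustration and assumes $Q_0(u) = u$.

```latex
% For F = \int_0^t \int_{\mathbb R} f(u)\,\nu(ds,du), perturbing each jump amplitude
% u \mapsto Q_\varepsilon(u) gives
\[
  F_\varepsilon \;=\; \int_0^t\!\!\int_{\mathbb R} f\bigl(Q_\varepsilon(u)\bigr)\,\nu(ds,du),
  \qquad
  DF \;=\; \partial_\varepsilon F_\varepsilon\big|_{\varepsilon=0}
  \;=\; \int_0^t\!\!\int_{\mathbb R} f'(u)\,\varrho(u)\,\nu(ds,du),
\]
% using \partial_\varepsilon Q_\varepsilon(u)|_{\varepsilon=0} = \varrho(u) and Q_0(u) = u.
```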
Integral representations (continued)

Theorem. There exist continuous and bounded $p^\theta_h(x,y)$, $\partial_\theta p^\theta_h(x,y)$, $\partial^2_{\theta\theta}p^\theta_h(x,y)$, and
$$p^\theta_h(x,y) = \mathbb{E}^\theta_x\,\delta(\Xi_h)\,1_{X_h>y},$$
$$\frac{\partial_\theta p^\theta_h(x,y)}{p^\theta_h(x,y)} =: g^\theta_h(x,y) = \mathbb{E}^{h,\theta}_{x,y}\,\delta(\Xi^1_h), \qquad \frac{\partial^2_{\theta\theta}p^\theta_h(x,y)}{p^\theta_h(x,y)} =: f^\theta_h(x,y) = \mathbb{E}^{h,\theta}_{x,y}\,\delta(\Xi^2_h)$$
with explicitly given $\Xi_h, \Xi^1_h, \Xi^2_h$ such that
$$\mathbb{E}_x\bigl[|\delta(\Xi_h)|^p + |\delta(\Xi^1_h)|^p + |\delta(\Xi^2_h)|^p\bigr] \le C(1+|x|^p), \qquad p < 4+\varepsilon.$$
Consequently, for every $p < 4+\varepsilon$,
$$p^\theta_t(x,y) \le C(1+|x-y|)^{-p}, \qquad \mathbb{E}^\theta_x\bigl[|g^\theta_t(x,X_t)|^p + |f^\theta_t(x,X_t)|^p\bigr] \le C(1+|x|)^p.$$
Regularity of the model

Recall that an experiment is said to be regular if
- for $\lambda^d$-a.a. $(x_1,\dots,x_n)\in\mathbb{R}^n$, the mapping $\theta\mapsto L_n(\theta; x_1,\dots,x_n)$ is continuous;
- the mapping $\theta\mapsto\sqrt{L_n(\theta;\cdot)}\in L_2(\mathbb{R}^n)$ is continuously differentiable.
For a regular experiment, the Fisher information is given by
$$I_n(\theta) = 4\int_{\mathbb{R}^n}\bigl(\partial_\theta\sqrt{L_n(\theta;x)}\bigr)^2\,dx = \mathbb{E}\,G_n^2(\theta; X^\theta_h,\dots,X^\theta_{nh}), \qquad G_n = \frac{2\,\partial_\theta\sqrt{L_n}}{\sqrt{L_n}}.$$
Using the above bounds and properly approximating the function $x\mapsto\sqrt{x}$ by $C^1$-functions, we get that the model is regular and
$$\partial_\theta\sqrt{L_n(\theta;\cdot)} = \frac{1}{2}\,G_n(\theta;\cdot)\sqrt{L_n(\theta;\cdot)}, \qquad G_n(\theta; x_1,\dots,x_n) = \sum_{k=1}^n g^\theta_h(x_{k-1},x_k).$$
Since $g^\theta_h(X^\theta_{(k-1)h}, X^\theta_{kh})$, $k=1,\dots,n$, is a martingale-difference sequence w.r.t. $P^\theta_n$, the Fisher information of the model equals
$$I_n(\theta) = \sum_{k=1}^n \mathbb{E}\bigl[g^\theta_h(X^\theta_{(k-1)h}, X^\theta_{kh})\bigr]^2.$$
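Numerically, the score $G_n$ and an empirical counterpart of $I_n(\theta)$ are simple sums once $g^\theta_h$ is available. The sketch below assumes a hypothetical function g_h(theta, x, y) approximating $\partial_\theta p^\theta_h / p^\theta_h$ (e.g. obtained from the integral representation above via Monte Carlo).

```python
import numpy as np

def score_and_empirical_info(theta, sample, x0, g_h):
    """Score G_n(theta; x) = sum_k g_h(theta, x_{k-1}, x_k) and the
    empirical Fisher information sum_k g_h(theta, x_{k-1}, x_k)**2.

    g_h(theta, x, y) is a hypothetical numerical approximation of the
    logarithmic derivative d/d(theta) log p^theta_h(x, y).
    """
    xs = np.concatenate(([x0], sample))
    g = np.array([g_h(theta, xs[k - 1], xs[k]) for k in range(1, len(xs))])
    return g.sum(), (g ** 2).sum()
```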