Time Series Analysis

Henrik Madsen (hm@imm.dtu.dk)
Informatics and Mathematical Modelling
Technical University of Denmark, DK-2800 Kgs. Lyngby

Reference: H. Madsen, Time Series Analysis, Chapman & Hall.
Outline of the lecture

Stochastic processes, 1st part:
- Stochastic processes in general: Sec. 5.1, 5.2, 5.3 (except 5.3.2), 5.4
- MA, AR, and ARMA processes: Sec. 5.5
Stochastic Processes – in general

Function: X(t, ω), with time t ∈ T (the index set) and realization ω ∈ Ω (the sample space, sometimes called the ensemble).

X(t = t_0, ·) is a random variable.
X(·, ω) is a time series (i.e. one realization).

In this course we consider the case where time is discrete and measurements are continuous.
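A minimal sketch of this dual view (illustrative only; the random-walk ensemble and all numbers are my own choices, not from the slides): a discrete-time process over a finite ensemble can be stored as a 2D array indexed by (realization, time), so fixing t gives a random variable and fixing ω gives a time series.

import numpy as np

rng = np.random.default_rng(0)
n_paths, n_times = 5, 100                                 # 5 realizations, t = 0, ..., 99
X = rng.normal(size=(n_paths, n_times)).cumsum(axis=1)    # e.g. Gaussian random walks

x_rv = X[:, 38]   # X(t = 38, .): a random variable (one value per realization)
x_ts = X[0, :]    # X(., omega_0): a single time series (one realization)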
Stochastic Processes – illustration

[Figure: three realizations X(·, ω1), X(·, ω2), X(·, ω3) plotted for t = 0, ..., 100, with values roughly between −10 and 20; a vertical cut at t = 38 marks the random variable X(t = 38, ·).]
Complete Characterization

The n-dimensional probability distribution $f_{X(t_1),\ldots,X(t_n)}(x_1, \ldots, x_n)$, for all $n = 1, 2, 3, \ldots$ and all $t$, is called the family of finite-dimensional probability distribution functions for the process. This family completely characterizes the stochastic process.
2nd order moment representation

Mean function:
$\mu(t) = E[X(t)] = \int_{-\infty}^{\infty} x f_{X(t)}(x) \, dx$

Autocovariance function:
$\gamma_{XX}(t_1, t_2) = \gamma(t_1, t_2) = \mathrm{Cov}[X(t_1), X(t_2)] = E[(X(t_1) - \mu(t_1))(X(t_2) - \mu(t_2))]$

The variance function is obtained from $\gamma(t_1, t_2)$ when $t_1 = t_2 = t$:
$\sigma^2(t) = V[X(t)] = E[(X(t) - \mu(t))^2]$
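As a hedged illustration (the random-walk example and sample sizes are my own, not from the slides), these functions can be estimated by averaging across an ensemble of realizations at fixed time points:

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100_000, 50)).cumsum(axis=1)   # ensemble of Gaussian random walks

t1, t2 = 10, 30
mu_t1 = X[:, t1].mean()   # estimate of mu(t1); close to 0 here
gamma_t1t2 = np.mean((X[:, t1] - X[:, t1].mean()) * (X[:, t2] - X[:, t2].mean()))
# For this random walk, gamma(t1, t2) = (min(t1, t2) + 1) * sigma_eps^2 = 11,
# so the estimate should be close to 11: a nonstationary autocovariance.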
Stationarity

A process {X(t)} is said to be strongly stationary if all finite-dimensional distributions are invariant to shifts in time, i.e. for every n, for any set $(t_1, t_2, \ldots, t_n)$, and for any h it holds that
$f_{X(t_1),\ldots,X(t_n)}(x_1, \ldots, x_n) = f_{X(t_1+h),\ldots,X(t_n+h)}(x_1, \ldots, x_n)$

A process {X(t)} is said to be weakly stationary of order k if all the first k moments are invariant to changes in time.

A weakly stationary process of order 2 is simply called weakly stationary or just stationary:
$\mu(t) = \mu, \qquad \sigma^2(t) = \sigma^2, \qquad \gamma(t_1, t_2) = \gamma(t_1 - t_2)$
Ergodicity

In time series analysis we normally assume that we have access to one realization only. We therefore need to be able to determine characteristics of the random variable $X_t$ from one realization $x_t$.

It is often enough to require the process to be mean-ergodic:
$E[X(t)] = \int_{\Omega} x(t, \omega) f(\omega) \, d\omega = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t, \omega) \, dt$

i.e. the mean of the ensemble equals the mean over time.

Some intuitive examples, not directly related to time series:
http://news.softpedia.com/news/What-is-ergodicity-15686.shtml
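A small numeric sketch of mean-ergodicity (the AR(1) model, the coefficient 0.7, and the sample length are my own choices): for a stationary AR(1), the time average over one long realization approaches the ensemble mean, whereas a process that holds a single random level forever is stationary but not mean-ergodic.

import numpy as np

rng = np.random.default_rng(2)

# Mean-ergodic example: stationary AR(1), y_t = 0.7*y_{t-1} + eps_t, with E[Y_t] = 0.
T = 200_000
eps = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.7 * y[t - 1] + eps[t]
print(y.mean())   # time average over one realization: close to the ensemble mean 0

# Non-ergodic counterexample: X_t = A for all t, with A ~ N(0, 1).
# The time average of any single realization equals its own A, not the ensemble mean 0.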
Special processes

Normal processes (also called Gaussian processes): all finite-dimensional distribution functions are (multivariate) normal distributions.

Markov processes: the conditional distribution depends only on the latest state of the process:
$P\{X(t_n) \le x \mid X(t_{n-1}), \ldots, X(t_1)\} = P\{X(t_n) \le x \mid X(t_{n-1})\}$

Deterministic processes: can be predicted without uncertainty from past observations.

Pure stochastic processes: can be written as an (infinite) linear combination of uncorrelated random variables.

Decomposition: $X_t = S_t + D_t$ (a purely stochastic part plus a deterministic part).
Autocovariance and autocorrelation

For stationary processes these depend only on the time difference $\tau = t_2 - t_1$.

Autocovariance:
$\gamma(\tau) = \gamma_{XX}(\tau) = \mathrm{Cov}[X(t), X(t+\tau)] = E[(X(t) - \mu)(X(t+\tau) - \mu)]$

Autocorrelation:
$\rho(\tau) = \rho_{XX}(\tau) = \gamma_{XX}(\tau) / \gamma_{XX}(0) = \gamma_{XX}(\tau) / \sigma_X^2$

Some properties of the autocovariance function:
$\gamma(\tau) = \gamma(-\tau), \qquad |\gamma(\tau)| \le \gamma(0)$
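A hedged sketch of estimating ρ(τ) from a single realization (the helper name acf is my own; the estimator shown is the standard one that divides by n at every lag):

import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation rho_hat(tau) for tau = 0, ..., max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    gamma0 = (x @ x) / n   # gamma_hat(0), the sample variance
    return np.array([(x[: n - k] @ x[k:]) / (n * gamma0) for k in range(max_lag + 1)])

# rho_hat(0) = 1 by construction, mirroring |gamma(tau)| <= gamma(0).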
Linear processes

A linear process {Y_t} is a process that can be written in the form
$Y_t - \mu = \sum_{i=0}^{\infty} \psi_i \varepsilon_{t-i}$
where $\mu$ is the mean value of the process and $\{\varepsilon_t\}$ is white noise, i.e. a sequence of i.i.d. random variables.

$\{\varepsilon_t\}$ can be scaled so that $\psi_0 = 1$. Without loss of generality we assume $\mu = 0$.
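In practice a linear process can be simulated by truncating the infinite sum; a minimal sketch (the geometric weights $\psi_i = 0.8^i$ and the truncation point are arbitrary choices of mine, not from the slides):

import numpy as np

rng = np.random.default_rng(3)
psi = 0.8 ** np.arange(50)     # truncated psi-weights, psi_0 = 1 (arbitrary example)
eps = rng.normal(size=5_000)   # white noise

# y_t = sum_i psi_i * eps_{t-i}: a (finite) moving average, i.e. a convolution
y = np.convolve(eps, psi, mode="valid")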
ψ- and π-weights

Transfer function and linear process:
$\psi(B) = 1 + \sum_{i=1}^{\infty} \psi_i B^i, \qquad Y_t = \psi(B) \varepsilon_t$

Inverse operator (if it exists) and the linear process:
$\pi(B) = 1 + \sum_{i=1}^{\infty} \pi_i B^i, \qquad \pi(B) Y_t = \varepsilon_t$

Autocovariance using ψ-weights:
$\gamma(k) = \mathrm{Cov}\left[\sum_{i=0}^{\infty} \psi_i \varepsilon_{t-i}, \; \sum_{i=0}^{\infty} \psi_i \varepsilon_{t+k-i}\right] = \sigma_\varepsilon^2 \sum_{i=0}^{\infty} \psi_i \psi_{i+k}$
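The ψ-weight formula for γ(k) is easy to check numerically; a sketch under my own choice of weights ($\psi_i = 0.8^i$, for which the geometric series gives $\gamma(k) = \sigma_\varepsilon^2 \, 0.8^k / (1 - 0.8^2)$):

import numpy as np

sigma2 = 1.0
psi = 0.8 ** np.arange(200)   # truncated psi-weights, psi_0 = 1 (example choice)

def gamma(k):
    """gamma(k) = sigma_eps^2 * sum_i psi_i * psi_{i+k}, truncated at 200 terms."""
    return sigma2 * np.sum(psi[: len(psi) - k] * psi[k:])

print(gamma(0), gamma(1))   # ~ 1/0.36 = 2.778 and ~ 0.8/0.36 = 2.222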
Autocovariance Generating Function

Let us define the autocovariance generating function
$\Gamma(z) = \sum_{k=-\infty}^{\infty} \gamma(k) z^{-k}, \qquad (1)$
which is the z-transformation of the autocovariance function.
Autocovariance Generating Function

We obtain (since $\psi_i = 0$ for $i < 0$)
$\Gamma(z) = \sigma_\varepsilon^2 \sum_{i=0}^{\infty} \sum_{k=-\infty}^{\infty} \psi_i \psi_{i+k} z^{-k} = \sigma_\varepsilon^2 \sum_{i=0}^{\infty} \psi_i z^{i} \sum_{j=0}^{\infty} \psi_j z^{-j} = \sigma_\varepsilon^2 \psi(z^{-1}) \psi(z)$

and hence
$\Gamma(z) = \sigma_\varepsilon^2 \psi(z^{-1}) \psi(z) = \sigma_\varepsilon^2 \pi^{-1}(z^{-1}) \pi^{-1}(z). \qquad (2)$
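A symbolic check of (2) for an MA(1) process (my own worked example, with $\psi(z) = 1 + \theta z^{-1}$): expanding $\sigma_\varepsilon^2 \psi(z^{-1}) \psi(z)$ and reading off the coefficient of $z^{-k}$ recovers the familiar MA(1) autocovariances.

import sympy as sp

z, theta, sigma2 = sp.symbols("z theta sigma2")

psi = lambda w: 1 + theta / w   # MA(1): psi_0 = 1, psi_1 = theta
Gamma = sp.expand(sigma2 * psi(1 / z) * psi(z))
print(Gamma)
# sigma2*theta*z + sigma2*theta/z + sigma2*theta**2 + sigma2
# Coefficient of z**0:  gamma(0) = sigma2*(1 + theta**2)
# Coefficient of z**-1: gamma(1) = sigma2*theta (and gamma(k) = 0 for |k| > 1)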
Stationarity and invertibility

The linear process $Y_t = \psi(B) \varepsilon_t$ is stationary if
$\psi(z) = \sum_{i=0}^{\infty} \psi_i z^{-i}$
converges for $|z| \ge 1$ (i.e. old values of $\varepsilon_t$ are weighted down).

The linear process $\pi(B) Y_t = \varepsilon_t$ is said to be invertible if
$\pi(z) = \sum_{i=0}^{\infty} \pi_i z^{-i}$
converges for $|z| \ge 1$ (i.e. $\varepsilon_t$ can be calculated from recent values of $Y_t$).
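For the ARMA-type models introduced on the final slide, where ψ and π come from finite polynomials, these convergence conditions reduce to root conditions; a hedged sketch (the helper names are mine, and the sign convention follows these slides, where $\phi(B) = 1 + \phi_1 B + \cdots + \phi_p B^p$):

import numpy as np

def is_stationary(phi):
    """phi = [phi_1, ..., phi_p] with Y_t + phi_1*Y_{t-1} + ... + phi_p*Y_{t-p} = eps_t.
    Stationary iff all roots of z**p + phi_1*z**(p-1) + ... + phi_p lie strictly
    inside the unit circle (so that psi(z) converges for |z| >= 1)."""
    return bool(np.all(np.abs(np.roots([1.0, *phi])) < 1))

def is_invertible(theta):
    """The same root condition applied to the MA polynomial theta(B)."""
    return bool(np.all(np.abs(np.roots([1.0, *theta])) < 1))

print(is_stationary([-0.7]))        # AR(1): Y_t = 0.7*Y_{t-1} + eps_t -> True
print(is_stationary([-1.5, 0.9]))   # AR(2) example: complex roots, |z| ~ 0.95 -> True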
Stationary processes in the frequency domain

It has been shown that the autocovariance function is non-negative definite. By a theorem of Bochner, such a non-negative definite function can be written as a Stieltjes integral
$\gamma(\tau) = \int_{-\infty}^{\infty} e^{i\omega\tau} \, dF(\omega) \qquad (3)$
for a process in continuous time, or
$\gamma(\tau) = \int_{-\pi}^{\pi} e^{i\omega\tau} \, dF(\omega) \qquad (4)$
for a process in discrete time.
Processes in the frequency domain

For a purely stochastic process we have the following relations between the spectrum and the autocovariance function.

Continuous time:
$f(\omega) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-i\omega\tau} \gamma(\tau) \, d\tau, \qquad \gamma(\tau) = \int_{-\infty}^{\infty} e^{i\omega\tau} f(\omega) \, d\omega \qquad (5)$

Discrete time:
$f(\omega) = \frac{1}{2\pi} \sum_{k=-\infty}^{\infty} \gamma(k) e^{-i\omega k}, \qquad \gamma(k) = \int_{-\pi}^{\pi} e^{ik\omega} f(\omega) \, d\omega \qquad (6)$
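A numeric sketch of the discrete-time pair (6) for an MA(1) process (the parameter values are my own; for MA(1), $\gamma(0) = \sigma^2(1+\theta^2)$ and $\gamma(\pm 1) = \sigma^2\theta$, so $f(\omega) = \sigma^2(1 + \theta^2 + 2\theta\cos\omega)/(2\pi)$):

import numpy as np

theta, sigma2 = 0.5, 1.0
omega = np.linspace(-np.pi, np.pi, 400, endpoint=False)   # one full period
dw = omega[1] - omega[0]

# MA(1) spectrum from the closed form above
f = sigma2 * (1 + theta**2 + 2 * theta * np.cos(omega)) / (2 * np.pi)

# Invert: gamma(k) = integral over (-pi, pi) of e^{i*k*omega} * f(omega) d(omega)
gamma1 = (np.exp(1j * 1 * omega) * f).sum().real * dw
print(gamma1)   # ~ sigma2 * theta = 0.5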
Processes in the frequency domain

We have seen that any stationary process can be formulated as a sum of a purely stochastic process and a purely deterministic process. Similarly, the spectral distribution can be written
$F(\omega) = F_S(\omega) + F_D(\omega), \qquad (7)$
where $F_S(\omega)$ is an even continuous function and $F_D(\omega)$ is a step function.
Processes in the frequency domain

For a purely deterministic process
$Y_t = \sum_{i=1}^{k} A_i \cos(\omega_i t + \phi_i), \qquad (8)$
$F_S$ becomes 0, and thus $F(\omega)$ becomes a step function with steps at the frequencies $\pm\omega_i$, $i = 1, \ldots, k$. In this case $F$ can be written as
$F(\omega) = F_D(\omega) = \sum_{\omega_i \le \omega} f(\omega_i) \qquad (9)$
and $\{f(\omega_i);\ i = 1, \ldots, k\}$ is often called the line spectrum.
Linear process as a statistical model?

$Y_t = \varepsilon_t + \psi_1 \varepsilon_{t-1} + \psi_2 \varepsilon_{t-2} + \psi_3 \varepsilon_{t-3} + \cdots$

Observations: $Y_1, Y_2, Y_3, \ldots, Y_N$

Task: find an infinite number of parameters from N observations!

Solution: restrict the sequence $1, \psi_1, \psi_2, \psi_3, \ldots$
MA(q), AR(p), and ARMA(p, q) processes

MA(q): $Y_t = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q}$, i.e. $Y_t = \theta(B) \varepsilon_t$

AR(p): $Y_t + \phi_1 Y_{t-1} + \cdots + \phi_p Y_{t-p} = \varepsilon_t$, i.e. $\phi(B) Y_t = \varepsilon_t$

ARMA(p, q): $Y_t + \phi_1 Y_{t-1} + \cdots + \phi_p Y_{t-p} = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q}$, i.e. $\phi(B) Y_t = \theta(B) \varepsilon_t$

Here $\{\varepsilon_t\}$ is white noise, and $\phi(B)$ and $\theta(B)$ are polynomials in the backward shift operator $B$ ($B X_t = X_{t-1}$, $B^2 X_t = X_{t-2}$).
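A minimal simulation sketch for an ARMA(1,1) under the sign convention above, $Y_t + \phi_1 Y_{t-1} = \varepsilon_t + \theta_1 \varepsilon_{t-1}$ (the parameter values are arbitrary choices of mine):

import numpy as np

rng = np.random.default_rng(4)
phi1, theta1 = -0.7, 0.4   # note: phi enters with a plus sign on the left-hand side
T = 1_000
eps = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = -phi1 * y[t - 1] + eps[t] + theta1 * eps[t - 1]
# Stationary since |phi1| < 1, and invertible since |theta1| < 1.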