A regime switching time series model of daily river flows
Krisztina Vasas
26.08.2005, 14th European Young Statisticians Meeting, Debrecen
Characteristics of river flow series
– Hydrologists do not like simulated river flows, because they often do not feature
  • the highly skewed marginal distribution of the empirical series
  • easily separable ascending and descending periods (regimes)
  • the asymmetric shape of the hydrograph (i.e. that short ascending periods are followed by long descending ones)
– Moreover, in empirical river flow series:
  • the ascending and descending regimes have random durations
  • the discharges have random increments
  • the dynamics are different in the two distinct regimes
Why do we like regime switching models?
• A linear model – even if generated with a skewed and seasonal driving force – is not appropriate for approximating
  – the high quantiles
  – the probability density
  – the asymmetric shape
  of the empirical river flow series
• A possible alternative is to use a light-tailed conditionally heteroscedastic model
  – While it solves the first two problems, the asymmetry of regime lengths remains unresolved
• A natural approach: regime switching models
• Obvious regime specification: periods with increasing and decreasing discharges
The model of Lu and Berliner (1998)
• Three states (they call them: normal, rising and falling)
• Every state has an autoregressive structure with normally distributed noise
• The lagged precipitation is included as an additional regressor in the rising regime
• We do not have precipitation data, and if precipitation is not included, the distribution of this model is symmetric
• A possible way out: non-symmetric generating noise in the autoregressive regime switching model
A regime switching model

$$Y_t = \begin{cases} Y_{t-1} + \varepsilon_{1,t}, & \text{if } I_t = 0, \\ c + a\,(Y_{t-1} - c) + \varepsilon_{2,t}, & \text{if } I_t = 1. \end{cases}$$

• I_t indicates the actual regime type:
  – 0 if t belongs to the ascending regime
  – 1 if t belongs to the descending one
• I_t is a Markov chain of hidden states with transition matrix
$$P = \begin{pmatrix} p_{00} & p_{01} \\ p_{10} & p_{11} \end{pmatrix}$$
• The generating noises are conditionally independent:
  – ε_{1,t} ~ Γ(α, λ) in the ascending regime (shocks to the system)
  – ε_{2,t} ~ N(0, σ²) in the descending regime
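As a concrete illustration, here is a minimal simulation sketch of this model (not part of the talk). The parameter values are only loosely inspired by the posterior results reported later; note that numpy parametrizes the Gamma by shape and scale, so the rate λ enters as scale = 1/λ.

```python
import numpy as np

def simulate(T, p00, p11, a, c, alpha, lam, sigma, y0=200.0, seed=0):
    """Simulate the two-regime model: random walk with Gamma shocks when
    ascending (I_t = 0), mean-reverting AR(1) with Gaussian noise when
    descending (I_t = 1)."""
    rng = np.random.default_rng(seed)
    P = np.array([[p00, 1 - p00], [1 - p11, p11]])
    I = np.empty(T, dtype=int)
    Y = np.empty(T)
    I[0], Y[0] = 1, y0
    for t in range(1, T):
        I[t] = rng.choice(2, p=P[I[t - 1]])
        if I[t] == 0:                                   # ascending regime
            Y[t] = Y[t - 1] + rng.gamma(alpha, 1.0 / lam)
        else:                                           # descending regime
            Y[t] = c + a * (Y[t - 1] - c) + rng.normal(0.0, sigma)
    return I, Y

# Example values loosely inspired by the reported posterior means.
I, Y = simulate(T=3650, p00=0.58, p11=0.93, a=0.815, c=104.9,
                alpha=1.0, lam=1.0 / 106.7, sigma=25.8)
```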
Stationarity of the model
• The model can be written as a stochastic difference equation:
$$Y_t = a_t Y_{t-1} + b_t$$
• where
$$a_t = \chi_{\{I_t = 0\}} + a\,\chi_{\{I_t = 1\}}, \qquad b_t = \varepsilon_{1,t}\,\chi_{\{I_t = 0\}} + \varepsilon_{2,t}\,\chi_{\{I_t = 1\}}$$
• It follows from Brandt (1986) that the unique stationary solution is
$$Y_t = b_t + \sum_{i=1}^{\infty} a_t a_{t-1} \cdots a_{t-i+1}\, b_{t-i}$$
• provided that $E(\log a_0) < 0$ and $E(\log^{+} b_0) < \infty$
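As a quick sanity check of these conditions (a sketch, not on the slide): writing $\pi_0, \pi_1$ for the stationary probabilities of the hidden chain,

```latex
\[
E(\log a_0) = \pi_0 \log 1 + \pi_1 \log a = \pi_1 \log a < 0
\quad \text{whenever } 0 < a < 1 \text{ and } \pi_1 > 0,
\]
\[
E\bigl(\log^{+} b_0\bigr) < \infty \quad \text{since both } \Gamma(\alpha,\lambda)
\text{ and } N(0,\sigma^2) \text{ have finite } E\log^{+}|\cdot|.
\]
```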
Model estimation
• Parameters of the model: θ = (p_00, p_11, α, λ, η, a, c), where η = 1/σ².
• The latent variables (I_t) make model estimation complicated.
• Obvious choices:
  – Maximum likelihood (the likelihood can be written down, but only recursively because of the latent variables)
  – Markov Chain Monte Carlo (MCMC)
  – Efficient Method of Moments (EMM)
Posterior distribution in the Bayesian framework

$$f\bigl(\theta \mid \{Y_t\}_{t=1}^{T}\bigr) = \int f\bigl(\theta, \{I_t\}_{t=1}^{T} \mid \{Y_t\}_{t=1}^{T}\bigr)\, d\{I_t\}_{t=1}^{T}$$

$$f\bigl(\theta, \{I_t\} \mid \{Y_t\}\bigr) \propto f\bigl(\{Y_t\} \mid \theta, \{I_t\}\bigr)\, f\bigl(\{I_t\} \mid \theta\bigr)\, f(\theta) = \prod_{t=1}^{T} \Bigl[ f_{\Gamma(\alpha,\lambda)}(Y_t - Y_{t-1}) \Bigr]^{\chi\{I_t = 0\}} \Bigl[ f_{N(0,\sigma^2)}\bigl(Y_t - c - a(Y_{t-1} - c)\bigr) \Bigr]^{\chi\{I_t = 1\}} \cdot p_{00}^{n_{00}}\, p_{01}^{n_{01}}\, p_{10}^{n_{10}}\, p_{11}^{n_{11}}\, f(\theta)$$

where for example $n_{00} = \#\{t : I_{t-1} = 0 \text{ and } I_t = 0\}$.
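A sketch of the complete-data log-likelihood implied by this factorization (the prior factor f(θ) is omitted here, and Y and I are assumed to be numpy arrays of equal length with I_t ∈ {0, 1} as above):

```python
import numpy as np
from scipy.stats import gamma, norm

def complete_data_loglik(Y, I, p00, p11, a, c, alpha, lam, sigma):
    """log f({Y_t} | theta, {I_t}) + log f({I_t} | theta), without the prior."""
    up = I[1:] == 0                      # ascending steps
    down = ~up                           # descending steps
    incr = Y[1:] - Y[:-1]
    resid = Y[1:] - c - a * (Y[:-1] - c)
    ll = gamma.logpdf(incr[up], alpha, scale=1.0 / lam).sum()
    ll += norm.logpdf(resid[down], 0.0, sigma).sum()
    # transition counts n_ij = #{t : I_{t-1} = i and I_t = j}
    n00 = np.sum((I[:-1] == 0) & (I[1:] == 0))
    n01 = np.sum((I[:-1] == 0) & (I[1:] == 1))
    n10 = np.sum((I[:-1] == 1) & (I[1:] == 0))
    n11 = np.sum((I[:-1] == 1) & (I[1:] == 1))
    ll += n00 * np.log(p00) + n01 * np.log(1 - p00)
    ll += n10 * np.log(1 - p11) + n11 * np.log(p11)
    return ll
```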
Markov Chain Monte Carlo method
• We need to sample from the joint posterior density to obtain estimates of the parameters
• The method is to construct a Markov chain whose stationary distribution is $f(\theta, \{I_t\} \mid \{Y_t\})$
• The hidden states (I_t) are updated as well as the structural parameters
• Gibbs sampling is used as much as possible
• With the help of conjugate priors, the full conditionals can be computed for six of the seven parameters and for the hidden states
• A Metropolis-Hastings step is necessary for the shape parameter (α) of the increments in the ascending regime
Metropolis-Hastings
• Sample a candidate point y from a proposal distribution q(y | θ^(t)) at each iteration t
• Accept the candidate point with probability
$$\alpha\bigl(\theta^{(t)}, y\bigr) = \min\left( \frac{\pi(y)\, q\bigl(\theta^{(t)} \mid y\bigr)}{\pi\bigl(\theta^{(t)}\bigr)\, q\bigl(y \mid \theta^{(t)}\bigr)},\ 1 \right)$$
• If accepted, θ^(t+1) = y; otherwise θ^(t+1) = θ^(t).
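A generic sketch of such a step (not from the talk), using a symmetric Gaussian random-walk proposal so that the q-ratio cancels; log_target stands for any function returning log π up to a constant.

```python
import numpy as np

def mh_step(theta, log_target, rng, step=0.1):
    """One random-walk Metropolis-Hastings update of a scalar parameter."""
    proposal = theta + step * rng.normal()        # symmetric proposal q
    log_ratio = log_target(proposal) - log_target(theta)
    if np.log(rng.uniform()) < log_ratio:         # accept with prob min(1, ratio)
        return proposal
    return theta

# Usage example: sampling from a standard normal target.
rng = np.random.default_rng(1)
x, draws = 0.0, []
for _ in range(5000):
    x = mh_step(x, lambda t: -0.5 * t**2, rng, step=1.0)
    draws.append(x)
```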
Gibbs sampling
• If you can write down the full conditionals
$$\pi\bigl(\theta_j \mid \theta_1, \ldots, \theta_{j-1}, \theta_{j+1}, \ldots, \theta_n\bigr) = \frac{\pi(\theta_1, \ldots, \theta_n)}{\int \pi(\theta_1, \ldots, \theta_n)\, d\theta_j}$$
• the sampling can be realized in the following way:
$$\theta_1^{(t+1)} \sim \pi\bigl(\theta_1 \mid \theta_2^{(t)}, \theta_3^{(t)}, \ldots, \theta_n^{(t)}\bigr)$$
$$\theta_2^{(t+1)} \sim \pi\bigl(\theta_2 \mid \theta_1^{(t+1)}, \theta_3^{(t)}, \ldots, \theta_n^{(t)}\bigr)$$
$$\vdots$$
$$\theta_n^{(t+1)} \sim \pi\bigl(\theta_n \mid \theta_1^{(t+1)}, \theta_2^{(t+1)}, \ldots, \theta_{n-1}^{(t+1)}\bigr)$$
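A minimal, self-contained illustration of this scheme (not related to the river flow model): Gibbs sampling from a standard bivariate normal with correlation ρ, whose two full conditionals are univariate normals.

```python
import numpy as np

def gibbs_bivariate_normal(n_iter, rho, rng):
    """Alternately draw x1 | x2 and x2 | x1; each full conditional of a
    standard bivariate normal with correlation rho is N(rho * other, 1 - rho**2)."""
    x1, x2 = 0.0, 0.0
    out = np.empty((n_iter, 2))
    sd = np.sqrt(1.0 - rho**2)
    for t in range(n_iter):
        x1 = rng.normal(rho * x2, sd)   # theta_1^(t+1) ~ pi(theta_1 | theta_2^(t))
        x2 = rng.normal(rho * x1, sd)   # theta_2^(t+1) ~ pi(theta_2 | theta_1^(t+1))
        out[t] = x1, x2
    return out

samples = gibbs_bivariate_normal(10000, rho=0.8, rng=np.random.default_rng(2))
```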
Choice of priors
• α ~ Γ(α_u, λ_u)
• λ ~ Γ(r, β)
• η ~ Γ(q, ρ)
• a ~ N(µ, τ)
• c ~ N(ν, κ)
• p_00 ~ β(u_1, v_1)
• p_11 ~ β(u_2, v_2)
Full conditionals for λ and η
• distribution of λ:
$$\lambda \mid \{Y_t\}, \{I_t\}, \alpha \sim \Gamma\Bigl( n_0 \alpha + r,\ \sum_{I_t = 0} (Y_t - Y_{t-1}) + \beta \Bigr)$$
• distribution of η:
$$\eta \mid \{Y_t\}, \{I_t\}, a, c \sim \Gamma\Bigl( \frac{n_1}{2} + q,\ \frac{1}{2} \sum_{I_t = 1} \bigl(Y_t - c - a(Y_{t-1} - c)\bigr)^2 + \rho \Bigr)$$
• where n_0 and n_1 denote the numbers of ascending and descending time points
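These two draws might be implemented as follows (a sketch; the Gamma priors are taken in the shape/rate parametrization used above, while numpy expects shape/scale, hence the 1/rate).

```python
import numpy as np

def sample_lambda_eta(Y, I, alpha, a, c, r, beta, q, rho, rng):
    """Draw lambda and eta from their Gamma full conditionals."""
    up = I[1:] == 0                     # ascending steps (I_t = 0)
    down = ~up                          # descending steps (I_t = 1)
    incr = Y[1:] - Y[:-1]
    resid = Y[1:] - c - a * (Y[:-1] - c)
    n0, n1 = up.sum(), down.sum()
    lam = rng.gamma(n0 * alpha + r, 1.0 / (incr[up].sum() + beta))
    eta = rng.gamma(n1 / 2.0 + q, 1.0 / (0.5 * np.sum(resid[down] ** 2) + rho))
    return lam, eta
```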
Full conditionals for a and c
• distribution of c:
$$c \mid \{Y_t\}, \{I_t\}, a, \eta \sim N\left( \frac{\eta\,(1-a)\sum_{I_t = 1}(Y_t - aY_{t-1}) + \nu\kappa}{\eta\, n_1 (1-a)^2 + \kappa},\ \frac{1}{\eta\, n_1 (1-a)^2 + \kappa} \right)$$
• distribution of a:
$$a \mid \{Y_t\}, \{I_t\}, c, \eta \sim N\left( \frac{\mu\tau + \eta\sum_{I_t = 1}(Y_{t-1} - c)(Y_t - c)}{\tau + \eta\sum_{I_t = 1}(Y_{t-1} - c)^2},\ \frac{1}{\tau + \eta\sum_{I_t = 1}(Y_{t-1} - c)^2} \right)$$
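A corresponding sketch for a and c, treating τ and κ as the prior precisions and η as the noise precision; writing out the η factor in the conditional of c is an assumption made here for symmetry with the conditional of a.

```python
import numpy as np

def sample_a_c(Y, I, eta, a, c, mu, tau, nu, kappa, rng):
    """Draw c and then a from their normal full conditionals (descending steps only)."""
    down = I[1:] == 1
    y, ylag = Y[1:][down], Y[:-1][down]
    n1 = down.sum()
    # c | ... : precision  eta * n1 * (1 - a)^2 + kappa
    prec_c = eta * n1 * (1.0 - a) ** 2 + kappa
    mean_c = (eta * (1.0 - a) * np.sum(y - a * ylag) + nu * kappa) / prec_c
    c = rng.normal(mean_c, 1.0 / np.sqrt(prec_c))
    # a | ... : precision  tau + eta * sum (Y_{t-1} - c)^2
    prec_a = tau + eta * np.sum((ylag - c) ** 2)
    mean_a = (mu * tau + eta * np.sum((ylag - c) * (y - c))) / prec_a
    a = rng.normal(mean_a, 1.0 / np.sqrt(prec_a))
    return a, c
```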
Full conditionals for the transition probabilities
• distribution of p_00:
$$p_{00} \mid \{I_t\} \sim \beta\bigl(n_{00} + u_1,\ n_{01} + v_1\bigr)$$
• distribution of p_11:
$$p_{11} \mid \{I_t\} \sim \beta\bigl(n_{11} + u_2,\ n_{10} + v_2\bigr)$$
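A sketch of these Beta draws, with the transition counts n_ij computed directly from the hidden-state path.

```python
import numpy as np

def sample_transition_probs(I, u1, v1, u2, v2, rng):
    """Draw p00 and p11 from their Beta full conditionals."""
    prev, curr = I[:-1], I[1:]
    n00 = np.sum((prev == 0) & (curr == 0))
    n01 = np.sum((prev == 0) & (curr == 1))
    n10 = np.sum((prev == 1) & (curr == 0))
    n11 = np.sum((prev == 1) & (curr == 1))
    p00 = rng.beta(n00 + u1, n01 + v1)
    p11 = rng.beta(n11 + u2, n10 + v2)
    return p00, p11
```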
Full conditionals for the hidden states
• if $I_{t-1} = I_{t+1} = 0$:
$$P\bigl(I_t = 0 \mid I_{t-1} = 0, I_{t+1} = 0, Y_t, Y_{t-1}\bigr) = \frac{p_{00}^{2}\, f_{\Gamma}(Y_t - Y_{t-1})}{p_{00}^{2}\, f_{\Gamma}(Y_t - Y_{t-1}) + (1 - p_{00})(1 - p_{11})\, f_{N}\bigl(Y_t - c - a(Y_{t-1} - c)\bigr)}$$
• if $I_{t-1} = I_{t+1} = 1$:
$$P\bigl(I_t = 1 \mid I_{t-1} = 1, I_{t+1} = 1, Y_t, Y_{t-1}\bigr) = \frac{p_{11}^{2}\, f_{N}\bigl(Y_t - c - a(Y_{t-1} - c)\bigr)}{(1 - p_{00})(1 - p_{11})\, f_{\Gamma}(Y_t - Y_{t-1}) + p_{11}^{2}\, f_{N}\bigl(Y_t - c - a(Y_{t-1} - c)\bigr)}$$
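A single-site update of the hidden states along these lines might look as follows (a sketch; it handles all four neighbour configurations through the transition matrix, not only the two displayed above, and leaves the end points fixed).

```python
import numpy as np
from scipy.stats import gamma, norm

def update_hidden_states(Y, I, p00, p11, a, c, alpha, lam, sigma, rng):
    """Resample each interior I_t given its neighbours and the data."""
    P = np.array([[p00, 1 - p00], [1 - p11, p11]])
    I = I.copy()
    for t in range(1, len(Y) - 1):
        # likelihood of Y_t under each regime
        f0 = gamma.pdf(Y[t] - Y[t - 1], alpha, scale=1.0 / lam)     # ascending
        f1 = norm.pdf(Y[t] - c - a * (Y[t - 1] - c), 0.0, sigma)    # descending
        w0 = P[I[t - 1], 0] * P[0, I[t + 1]] * f0
        w1 = P[I[t - 1], 1] * P[1, I[t + 1]] * f1
        I[t] = int(rng.uniform() < w1 / (w0 + w1))
    return I
```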
Updating the parameter α
• In the Metropolis-Hastings step for α we use a normal proposal: α* ~ N(α, δ²)
• Symmetry implies a simple acceptance ratio:
$$\min\left( 1,\ \frac{\pi(\alpha^{*})}{\pi(\alpha)} \right)$$
• where π denotes the posterior distribution
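A sketch of this step on the log scale; only the parts of the posterior that involve α are needed, namely the Gamma likelihood of the ascending increments and the Γ(α_u, λ_u) prior.

```python
import numpy as np
from scipy.stats import gamma

def mh_update_alpha(Y, I, alpha, lam, alpha_u, lambda_u, delta, rng):
    """Random-walk Metropolis-Hastings update of the Gamma shape alpha."""
    up = I[1:] == 0
    incr = Y[1:] - Y[:-1]

    def log_post(al):
        if al <= 0:
            return -np.inf
        ll = gamma.logpdf(incr[up], al, scale=1.0 / lam).sum()       # likelihood
        return ll + gamma.logpdf(al, alpha_u, scale=1.0 / lambda_u)  # prior

    prop = rng.normal(alpha, delta)              # symmetric normal proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(alpha):
        return prop
    return alpha
```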
Results for River Tisza
• Observation period: 10 years (3650 days)
• 30000 updates for all parameters
• Only the last 15000 members of the chain participate in the analysis
• Various starting values and hyperparameters were tried to get information about the stability of the parameters
Results for α and λ
• Posterior mean of α is 1.003, indicating that the increments of the rising regime are close to exponential
• Average height of the increments in the rising regime: α/λ = 106.68 m³/s
Results for η
• Posterior mean of the standard deviation of the noise in the descending regime is 25.83 m³/s
Results for a and c
• Posterior mean of a is 0.815, indicating a high level of persistence even in the descending regime
• Posterior mean of c is 104.9
Results for the transition probabilities
• Average duration (calculated from the posterior means of the transition probabilities)
  – of the ascending period is 2.38 days
  – of the descending period is 14.12 days
• indicating much longer descending than ascending regimes
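For reference, these durations follow from the geometric holding times of the hidden Markov chain (a standard identity, not shown on the slide):

```latex
\[
E(\text{ascending duration}) = \frac{1}{1 - p_{00}}, \qquad
E(\text{descending duration}) = \frac{1}{1 - p_{11}},
\]
\text{so 2.38 and 14.12 days correspond to } p_{00} \approx 0.58 \text{ and } p_{11} \approx 0.93.
```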
Convergence properties
• All parameters apart from α stationarize after at most 1000 iterations
• Parameter α reaches equilibrium after a longer period (to determine its stationary distribution more accurately, more iterations might be needed)
• Acceptance ratio for α in the Metropolis-Hastings algorithm is 0.51