Computing aspects of environment estimation on the basis of observational data
Ekaterina Klimova
Institute of Computational Technologies, Siberian Branch of the Russian Academy of Sciences
klimova@ict.nsc.ru
Introduction

Components of the new forecast system (New Forecast Paradigm):
a. Ultimate goal. An extension of the traditional forecast process: reduction of forecast uncertainty, and also assessment and communication of forecast uncertainty.
b. Forecast process. In the new paradigm, not only the best estimate of the predicted system but also its uncertainty is propagated.
c. Observing system. Estimation of random instrument and representativeness error variances, as well as estimation of systematic errors.
d. Data assimilation. Reduction of analysis errors and assessment of uncertainty in the analyses. This information is critical input for the generation of initial ensemble perturbations.
e. Numerical modeling. Reduction of systematic and random errors related to the model formulation; quantitative assessment and simulation of model-related random and systematic errors.
f. Ensemble forecasting. In the new forecast process, ensemble forecasting occupies a central place, following the observing, data assimilation and numerical modeling components.
g. Statistical pre-processing.

Z. Toth et al. Completing the forecast: assessing and communicating forecast uncertainty. ECMWF Workshop on Ensemble Prediction, 7-9 November 2007.
Introduction

• An Ensemble of Data Assimilations (EDA) system was introduced at ECMWF.
• The EDA consists of an ensemble of ten 4D-Var assimilations that differ by perturbing observations, sea surface temperature fields and model physics.
• The main justification for implementing the EDA is that it quantifies analysis uncertainty.
• It can be used to estimate flow-dependent background errors in the deterministic 4D-Var assimilation system.

L. Isaksen et al. Ensemble of data assimilations at ECMWF. ECMWF Technical Memorandum No. 636, December 2010.
Kalman Filter

Observations: $y^0_k$.

Forecast step:
$$x^f_{k+1} = A_k x^a_k, \qquad P^f_{k+1} = A_k P^a_k A_k^T + Q_k.$$

Analysis step:
$$x^a_k = x^f_k + K_k\left(y^0_k - M_k x^f_k\right), \qquad K_k = P^f_k M_k^T\left(M_k P^f_k M_k^T + R_k\right)^{-1}.$$
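To make the cycle above concrete, here is a minimal sketch of one forecast/analysis step of the classical Kalman filter in the slide's notation ($A_k$ the model operator, $M_k$ the observation operator). It is an illustration only: the function name, the dense NumPy matrices and the explicit matrix inverse are my own choices, not part of the presentation; the covariance update $(I - K M) P^f$ is the standard textbook form.

```python
import numpy as np

def kalman_step(x_a, P_a, A, Q, M, R, y):
    # Forecast step: x^f_{k+1} = A_k x^a_k,  P^f_{k+1} = A_k P^a_k A_k^T + Q_k
    x_f = A @ x_a
    P_f = A @ P_a @ A.T + Q
    # Analysis step: K_k = P^f M^T (M P^f M^T + R)^{-1}
    K = P_f @ M.T @ np.linalg.inv(M @ P_f @ M.T + R)
    x_a_new = x_f + K @ (y - M @ x_f)            # x^a = x^f + K (y^0 - M x^f)
    P_a_new = (np.eye(len(x_f)) - K @ M) @ P_f   # standard covariance update
    return x_a_new, P_a_new
```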
Ensemble Kalman filter

Ensembles of initial fields and forecasts:
$$x^{0(i)} = \hat{x}^{0} + {x'}^{(i)}, \quad i = 1,\dots,N, \qquad x^{f(i)}_{k+1} = A_k\!\left(x^{a(i)}_k\right).$$

Estimation of the covariance matrices:
$$P^f_k M_k^T \approx \frac{1}{N-1}\sum_{i=1}^{N}\left(x^{f(i)}_k - \overline{x^f_k}\right)\left(M_k x^{f(i)}_k - \overline{M_k x^f_k}\right)^T,$$
$$M_k P^f_k M_k^T \approx \frac{1}{N-1}\sum_{i=1}^{N}\left(M_k x^{f(i)}_k - \overline{M_k x^f_k}\right)\left(M_k x^{f(i)}_k - \overline{M_k x^f_k}\right)^T,$$
$$K_k = P^f_k M_k^T\left(M_k P^f_k M_k^T + R_k\right)^{-1}.$$

Ensemble of «analyses»:
$$x^{a(i)}_k = x^{f(i)}_k + K_k\left(y^{0(i)}_k - M_k x^{f(i)}_k\right), \qquad y^{0(i)}_k = y^0_k + r^{(i)}_k,$$
$$\overline{x_{k+1}} = \frac{1}{N}\sum_{i=1}^{N} x^{(i)}_{k+1}.$$
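The stochastic (perturbed-observations) analysis step above can be sketched as follows. The sample estimates of $P^f M^T$ and $M P^f M^T$ and the perturbed observations $y^{0(i)} = y^0 + r^{(i)}$ follow the slide; the array shapes, the function signature and the use of a NumPy random generator are illustrative assumptions.

```python
import numpy as np

def enkf_analysis(Xf, y, M, R, rng):
    """Stochastic EnKF analysis step.
    Xf: (n, N) forecast ensemble, y: (m,) observations,
    M: (m, n) observation operator, R: (m, m) observation-error covariance."""
    n, N = Xf.shape
    Yf = M @ Xf                                    # forecast ensemble in observation space
    dX = Xf - Xf.mean(axis=1, keepdims=True)       # deviations from the ensemble mean
    dY = Yf - Yf.mean(axis=1, keepdims=True)
    PfMT = dX @ dY.T / (N - 1)                     # sample estimate of P^f M^T
    MPfMT = dY @ dY.T / (N - 1)                    # sample estimate of M P^f M^T
    K = PfMT @ np.linalg.inv(MPfMT + R)            # Kalman gain
    # perturbed observations y^{0(i)} = y^0 + r^{(i)},  r^{(i)} ~ N(0, R)
    Yo = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return Xf + K @ (Yo - Yf)                      # ensemble of "analyses"
```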
Ensemble pi-algorithm

The forecast step can be written in the following way:
$$x^f(t_{k+1}) = M\!\left(x^a(t_k)\right) + \eta(t_k),$$
where $x^f(t_{k+1})$ is the vector of forecast values at time $t_{k+1}$, $x^a(t_k)$ is the vector of values obtained after the analysis step at time $t_k$, $M$ is the model operator, and $\eta(t_k)$ is Gaussian white noise with covariance matrix $Q_k$.
Ensemble pi-algorithm

The analysis step is expressed as
$$x^a(t_k) = x^f(t_k) + P^a_k H^T R_k^{-1}\left(y_{t_k} - H\!\left(x^f(t_k)\right)\right),$$
where $P^a_k$ is the analysis error covariance matrix, $R_k$ is the observation error covariance matrix, $H$ is an operator (generally speaking, nonlinear) transferring values at the grid points to the observation points, $H$ in the matrix expressions denotes its linearization, and $y_{t_k}$ is the observation vector at time $t_k$.
Ensemble pi-algorithm

Let us present the algorithm in an equivalent way:
$$x(t_{k+1}) = M\!\left(x(t_k)\right) + \eta(t_k) + P^a_{k+1} H^T R_{k+1}^{-1}\left(y_{t_{k+1}} - H\!\left(M(x(t_k)) + \eta(t_k)\right)\right),$$
where
$$P^a_{k+1} = \left(I - K_{k+1} H\right) P^f_{k+1}, \qquad K_{k+1} = P^f_{k+1} H^T\left(H P^f_{k+1} H^T + R_{k+1}\right)^{-1},$$
and $P^f_{k+1}$ is the forecast error covariance matrix.

Written this way, the formula unites the analysis and forecast steps, which allows one to omit the indices "a" and "f" further on.

Let the true value $x_t$ satisfy the equation
$$x_t(t_{k+1}) = M\!\left(x_t(t_k)\right), \qquad x_t(t_0) = x_0.$$
The observational data can be expressed as
$$y_{t_k} = H\!\left(x_t(t_k)\right) + \varepsilon_k,$$
where $\varepsilon_k$ is a random observation error with zero mean and covariance matrix $R_k$.
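A sketch of the united step written this way: it applies the gain-free update $x \leftarrow M(x) + \eta + P^a H^T R^{-1}\left(y - H(M(x) + \eta)\right)$, relying on the identity $P^a H^T R^{-1} = K$, which follows from the two formulas above. The model and observation operators are passed in as callables, $P^a$ is assumed to be available (for example, from an ensemble estimate as on the following slides), and all names and shapes are illustrative.

```python
import numpy as np

def combined_step(x, P_a, model, H, H_lin, Q, R, y, rng):
    """One united forecast/analysis step (gain-free form of the slide).
    model, H: possibly nonlinear callables; H_lin: linearized H as a matrix;
    P_a: analysis-error covariance at t_{k+1} (assumed given here)."""
    eta = rng.multivariate_normal(np.zeros(len(x)), Q)   # model noise eta(t_k)
    x_f = model(x) + eta                                  # M(x(t_k)) + eta(t_k)
    gain = P_a @ H_lin.T @ np.linalg.inv(R)               # P^a H^T R^{-1} (equals K)
    return x_f + gain @ (y - H(x_f))                      # add the analysis increment
```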
Ensemble pi-algorithm

Let the estimation error be defined as
$$dx_{k+1} = x(t_{k+1}) - x_t(t_{k+1}).$$
The error satisfies the following equation:
$$dx_{k+1} = M\!\left(x(t_k)\right) - M\!\left(x_t(t_k)\right) + \eta(t_k) + P_{k+1} H^T R^{-1}_{k+1}\left(\varepsilon_{k+1} + H\!\left(M(x_t(t_k))\right) - H\!\left(M(x(t_k)) + \eta(t_k)\right)\right).$$
If $P_{k+1}$ is estimated using the formula (Yaglom, 1987)
$$P_{k+1} \approx \frac{1}{N-1}\sum_{n=1}^{N}\left(dx^n_{k+1} - \overline{dx_{k+1}}\right)\left(dx^n_{k+1} - \overline{dx_{k+1}}\right)^T,$$
one obtains a version of the ensemble Kalman filter. Taking this formula for $P_{k+1}$ into account, one obtains a system of equations for $dx^n_{k+1}$.
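A small sketch of the sample-covariance estimate quoted from Yaglom (1987); the column layout of the error ensemble is an assumption.

```python
import numpy as np

def sample_covariance(dX):
    """dX: (n, N) matrix whose columns are the error vectors dx^n_{k+1}."""
    N = dX.shape[1]
    dXc = dX - dX.mean(axis=1, keepdims=True)   # deviations from the ensemble mean
    return dXc @ dXc.T / (N - 1)                # sum of outer products divided by (N-1)
```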
Ensemble pi-algorithm

Now consider a modification of the algorithm described above, so that it can be applied to ensemble forecasting. It is known that such a forecast requires setting an ensemble of initial fields $\{x_n\}$ in such a way that the ensemble average $\overline{x_n}$ equals $x^a$, while the covariances satisfy $\overline{\left(x_n - \overline{x_n}\right)\left(x_n - \overline{x_n}\right)^T} = P^a$, where $x^a$ is the result of the analysis step of the Kalman filter and $P^a$ is the analysis error covariance matrix.

The following ensemble of initial fields satisfies the first condition:
$$x_n(t_{k+1}) = M\!\left(x_n(t_k)\right) + \eta_n(t_k) + P_{k+1} H^T R_{k+1}^{-1}\left(y_{t_{k+1}} - H\!\left(M(x_n(t_k)) + \eta_n(t_k)\right)\right),$$
if it is accepted that
$$H\!\left(M(x_n(t_k)) + \eta_n(t_k)\right) \approx H\!\left(M(x_n(t_k))\right).$$
With this, the ensemble average is the estimate obtained using the Kalman filter, while the deviation of an ensemble member from it is considered as the estimation error. To describe the errors with the formulas of the classical Kalman filter, one has to set a perturbed observations ensemble:
$$y^n_{t_k} = y_{t_k} + \varepsilon^n_k.$$
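A minimal sketch of building the perturbed observations ensemble $y^n_{t_k} = y_{t_k} + \varepsilon^n_k$ with $\varepsilon^n_k \sim N(0, R_k)$ mentioned above; the array shapes and the random-generator argument are assumptions.

```python
import numpy as np

def perturbed_observations(y, R, N, rng):
    """Return an (N, m) array whose rows are y + eps^n, eps^n ~ N(0, R)."""
    eps = rng.multivariate_normal(np.zeros(len(y)), R, size=N)  # (N, m) perturbations
    return y[None, :] + eps
```

With such an ensemble the sample mean of the perturbed observations tends to $y$ and the sample covariance of the perturbations tends to $R$, which is what the classical Kalman filter formulas require of the observation errors.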
Ensemble pi-algorithm

Analysis step. In matrix form, the ensemble at time $t_{k+1}$ is obtained by transforming the ensemble of forecast fields with matrices $\Pi$ and $D$:
$$X^{(k+1)} = F_2\,\Pi^T D^T, \qquad F_2 = \{f^2_n\}, \quad f^2_n = M\!\left(x_n(t_k)\right) + \eta_n(t_k), \quad n = 1,\dots,N,$$
$$\Pi_{mn} = \frac{1}{N-1}\left(dx^{k+1}_m\right)^T H^T R^{-1}_{k+1}\left(y_{t_{k+1}} + \varepsilon^{k+1}_n - H\!\left(M(x_n(t_k)) + \eta_n(t_k)\right)\right).$$

Here $F^k$ is the matrix with columns $\{f^k_n,\ n = 1,\dots,N\}$:
$$f^k_n = M\!\left(x_n(t_k)\right) + \eta_n(t_k) - \overline{M\!\left(x(t_k)\right) + \eta(t_k)},$$
and $\tilde F^{k}$ is the matrix with columns $\{\tilde f^{k}_n,\ n = 1,\dots,N\}$:
$$\tilde f^{k}_n = H\!\left(M(x_n(t_k)) + \eta_n(t_k)\right) + \varepsilon^{k+1}_n - \overline{H\!\left(M(x(t_k)) + \eta(t_k)\right)}.$$

The matrix $D$ is obtained from the quadratic matrix equation $D^2 + D = C$, that is,
$$D = \left(0.25\,I + C\right)^{1/2} - 0.5\,I, \qquad C = \frac{1}{N-1}\left(F^k\right)^T H^T R^{-1}_{k+1} H F^k.$$
«Local» ensemble pi-algorithm

The formula
$$P \approx \frac{1}{N-1}\sum_{i=1}^{N} dx^i \left(dx^i\right)^T$$
is an approximation, so if the sample size $N$ is small, the covariance-function properties are not preserved. In a number of works it is proposed, in order to damp the "false" covariances at large distances, to use the matrix $\hat P = P \circ \rho(r)$, where $\rho(r)$ is a function of the distance $r$ between the points, usually of the form $e^{-r^2/2}$. It is well known that $\hat P$ is also a covariance matrix.

Let us consider the analysis step:
$$x^{a,i} = x^{f,i} + D\left[(HD) + R\right]^{-1}\left(y^i - H x^{f,i}\right).$$
The rectangular matrix $D$ is the covariance matrix of forecast errors between the grid cells and the observation points, and $HD$ is the corresponding covariance between the observation points. We multiply the elements of this matrix by $\rho(r)$.
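A sketch of the Schur-product ("local") damping of long-range sample covariances described above. The taper $\rho(r) = e^{-r^2/(2L^2)}$ is one common choice consistent with the Gaussian form on the slide; the length scale $L$, the coordinate layout and the function name are assumptions.

```python
import numpy as np

def localize_covariance(P, coords, L):
    """Element-wise (Schur) product of a sample covariance matrix with a
    distance-based taper; by the Schur product theorem the result is still
    a covariance matrix.  coords: (n, d) grid-point coordinates."""
    diff = coords[:, None, :] - coords[None, :, :]   # pairwise coordinate differences
    r2 = np.sum(diff * diff, axis=-1)                # squared distances between points
    rho = np.exp(-r2 / (2.0 * L**2))                 # taper rho(r) = exp(-r^2 / (2 L^2))
    return P * rho                                   # damped covariance: P o rho
```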