
Modified ES-MDA Algorithms for Data Assimilation and Uncertainty Quantification - PowerPoint PPT Presentation



  1. The University of Tulsa, Petroleum Reservoir Exploitation Projects. Modified ES-MDA Algorithms for Data Assimilation and Uncertainty Quantification. Javad Rafiee and Al Reynolds. 12th EnKF Workshop, June 14, 2017.

  2. Outline
- Ensemble Smoother with Multiple Data Assimilation (ES-MDA)
- Discrepancy principle and choice of inflation factors in ES-MDA
- Convergence (after Geir Evensen)

  3. ES-MDA. Define

$$\Delta M^{f,i} = \frac{1}{\sqrt{N_e - 1}}\left[\, m_1^{f,i} - \bar{m}^{f,i}, \;\ldots,\; m_{N_e}^{f,i} - \bar{m}^{f,i} \,\right], \quad (1)$$

and

$$\Delta D^{f,i} = \frac{1}{\sqrt{N_e - 1}}\left[\, d_1^{f,i} - \bar{d}^{f,i}, \;\ldots,\; d_{N_e}^{f,i} - \bar{d}^{f,i} \,\right], \quad (2)$$

where $\bar{m}^{f,i} = (1/N_e)\sum_j m_j^{f,i}$ and $\bar{d}^{f,i} = (1/N_e)\sum_j d_j^{f,i}$.
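
A minimal sketch of Eqs. (1)-(2) in NumPy, assuming the forecast ensembles of model parameters and predicted data are stored column-wise (one ensemble member per column); the array names `M_f` and `D_f` are illustrative, not from the slides:

```python
import numpy as np

def anomaly_matrix(X):
    """Scaled anomaly matrix of Eqs. (1)-(2): columns are deviations
    from the ensemble mean, divided by sqrt(N_e - 1)."""
    N_e = X.shape[1]                       # number of ensemble members
    X_bar = X.mean(axis=1, keepdims=True)  # ensemble mean
    return (X - X_bar) / np.sqrt(N_e - 1)

# Example: N_m = 5 model parameters, N_d = 3 data, N_e = 4 members.
rng = np.random.default_rng(0)
M_f = rng.normal(size=(5, 4))   # forecast ensemble of model parameters
D_f = rng.normal(size=(3, 4))   # corresponding predicted data
dM = anomaly_matrix(M_f)        # Delta M^{f,i}
dD = anomaly_matrix(D_f)        # Delta D^{f,i}
```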

  4. ES-MDA Algorithm.
(1) Choose the number of data assimilations, $N_a$, and the coefficients $\alpha_i$ for $i = 1, \ldots, N_a$.
(2) Generate the initial ensemble $\{ m_j^{f,1} \}_{j=1}^{N_e}$.
(3) For $i = 1, \ldots, N_a$:
(a) Run the ensemble from time zero.
(b) For each ensemble member, perturb the observation vector with the inflated measurement error covariance matrix, i.e., $d_{uc,j}^i \sim \mathcal{N}(d_{obs}, \alpha_i C_D)$.
(c) Use the update equation to update the ensemble:

$$m_j^{a,i} = m_j^{f,i} + \Delta M^{f,i} (\Delta D^{f,i})^T \left[ \Delta D^{f,i} (\Delta D^{f,i})^T + \alpha_i C_D \right]^{-1} \left( d_{uc,j}^i - d_j^{f,i} \right),$$
$$m_j^{f,i+1} = m_j^{a,i}.$$

Comment: requires $\sum_{k=1}^{N_a} 1/\alpha_k = 1$.
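
A compact sketch of one ES-MDA analysis step, i.e., items (b)-(c) above, assuming a symmetric positive-definite $C_D$; the function and variable names are illustrative:

```python
import numpy as np

def esmda_step(M_f, D_f, d_obs, C_D, alpha, rng):
    """One ES-MDA analysis step (steps (b)-(c) of slide 4): perturb the
    observations with inflated noise alpha*C_D, then update each member."""
    N_d, N_e = D_f.shape
    c = 1.0 / np.sqrt(N_e - 1)
    dM = c * (M_f - M_f.mean(axis=1, keepdims=True))   # Eq. (1)
    dD = c * (D_f - D_f.mean(axis=1, keepdims=True))   # Eq. (2)
    # d_uc,j ~ N(d_obs, alpha*C_D), one column per ensemble member
    D_uc = d_obs[:, None] + np.linalg.cholesky(alpha * C_D) @ \
        rng.normal(size=(N_d, N_e))
    # Update: m_a = m_f + dM dD^T [dD dD^T + alpha C_D]^{-1} (d_uc - d_f)
    return M_f + dM @ dD.T @ np.linalg.solve(dD @ dD.T + alpha * C_D,
                                             D_uc - D_f)
```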

  5. Dimensionless Sensitivity. The dimensionless sensitivities control the change in model parameters that occurs when assimilating data (Zhang et al., 2003; Tavakoli and Reynolds, 2010). The standard dimensionless sensitivity is defined as

$$G_D^i \equiv C_D^{-1/2} \, G(\bar{m}^{f,i}) \, C_M^{1/2}, \quad (3)$$

where $G(m)$ is the sensitivity matrix for $d^f(m)$, with entries

$$g_{i,j} = \frac{\partial d_i^f(m)}{\partial m_j}. \quad (4)$$

The components of the dimensionless sensitivity matrix are

$$\hat{g}_{i,j} = \frac{\sigma_{m,j}}{\sigma_{d,i}} \, \frac{\partial d_i^f}{\partial m_j}. \quad (5)$$

The direct analogue of the standard dimensionless sensitivity matrix in ensemble-based methods is given by

$$G_D^i \equiv C_D^{-1/2} \, G(\bar{m}^{f,i}) \, \Delta M^{f,i} \approx C_D^{-1/2} \, \Delta D^{f,i}. \quad (6)$$
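
A sketch of the ensemble approximation in Eq. (6) for the common special case of a diagonal $C_D$, where $C_D^{-1/2}\Delta D^{f,i}$ reduces to row-scaling the data anomalies; names are illustrative:

```python
import numpy as np

def dimensionless_sensitivity(D_f, C_D_diag):
    """Ensemble analogue of the dimensionless sensitivity, Eq. (6):
    G_D ~= C_D^{-1/2} Delta D^{f,i}, assuming C_D is diagonal with
    entries C_D_diag (the data-error variances)."""
    N_e = D_f.shape[1]
    dD = (D_f - D_f.mean(axis=1, keepdims=True)) / np.sqrt(N_e - 1)
    return dD / np.sqrt(C_D_diag)[:, None]  # scale row i by 1/sigma_{d,i}
```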

  6. ES-MDA Update Equation. Recall the ES-MDA update equation

$$m_j^{a,i} = m_j^{f,i} + \Delta M^{f,i} (\Delta D^{f,i})^T \left[ \Delta D^{f,i} (\Delta D^{f,i})^T + \alpha_i C_D \right]^{-1} \left( d_{uc,j}^i - d_j^{f,i} \right). \quad (7)$$

Using the definition of the dimensionless sensitivity, $G_D^i \equiv C_D^{-1/2} \Delta D^{f,i}$, we can write the ES-MDA update equation as

$$m_j^{a,i} = m_j^{f,i} + \Delta M^{f,i} (G_D^i)^T \left[ G_D^i (G_D^i)^T + \alpha_i I_{N_d} \right]^{-1} C_D^{-1/2} \left( d_{uc,j}^i - d_j^{f,i} \right), \quad (8)$$

for $j = 1, \ldots, N_e$.
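
A quick numerical check, on synthetic arrays with a diagonal $C_D$, that the two forms (7) and (8) give identical updates:

```python
import numpy as np

rng = np.random.default_rng(1)
N_m, N_d, N_e, alpha = 5, 3, 4, 4.0
dM = rng.normal(size=(N_m, N_e))
dD = rng.normal(size=(N_d, N_e))
resid = rng.normal(size=(N_d, N_e))   # stands in for d_uc - d_f, per member
sd = np.array([0.5, 1.0, 2.0])        # data-error standard deviations
C_D = np.diag(sd**2)

# Form (7): work with C_D directly.
upd7 = dM @ dD.T @ np.linalg.solve(dD @ dD.T + alpha * C_D, resid)

# Form (8): scale into dimensionless data space first.
G_D = dD / sd[:, None]                # C_D^{-1/2} Delta D
upd8 = dM @ G_D.T @ np.linalg.solve(G_D @ G_D.T + alpha * np.eye(N_d),
                                    resid / sd[:, None])
assert np.allclose(upd7, upd8)        # the two forms agree
```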

  7. Why do we need damping? ES is similar to doing one Gauss-Newton (GN) iteration with a full step, using the same average sensitivity coefficient to update each ensemble member, with the forecast as the initial guess:

$$O(m) = \frac{1}{2} \| m - \bar{m} \|_{C_M^{-1}}^2 + \frac{1}{2} \| d^f(m) - d_{obs} \|_{C_D^{-1}}^2.$$

GN is based on approximating $O(m)$ by a quadratic, but far from a minimum the quadratic approximation is good only in a small region around the current model. A trust region is better than a line search, and the proof of convergence of GN requires the possibility of taking a full (unit) step (Rommelse, PhD thesis, TU Delft, 2009).

  8. Least-Squares Problem. Similar to Eq. (8), one can update the mean of $m$ directly as

$$\bar{m}^{a,i} = \bar{m}^{f,i} + \Delta M^{f,i} (G_D^i)^T \left[ G_D^i (G_D^i)^T + \alpha_i I_{N_d} \right]^{-1} C_D^{-1/2} \left( d_{obs} - \bar{d}^{f,i} \right). \quad (9)$$

It is easy to show that $\bar{m}^{a,i}$ is the solution of the regularized least-squares problem given by

$$x^{a,i} = \arg\min_x \left\{ \frac{1}{2} \left\| G_D^i x - y \right\|_2^2 + \frac{\alpha_i}{2} \| x \|_2^2 \right\}, \quad (10)$$

where

$$x = (\Delta M^{f,i})^+ \left( m - \bar{m}^{f,i} \right), \quad (11)$$
$$y = C_D^{-1/2} \left( d_{obs} - \bar{d}^{f,i} \right), \quad (12)$$

and $(\Delta M^{f,i})^+$ is the pseudo-inverse of $\Delta M^{f,i}$.
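
A small numerical check, on synthetic arrays, that the data-space gain form used in Eq. (9) matches the Tikhonov solution of Eq. (10) computed from its normal equations $(G^T G + \alpha I)\,x = G^T y$:

```python
import numpy as np

rng = np.random.default_rng(2)
N_d, N_x, alpha = 6, 4, 2.0
G = rng.normal(size=(N_d, N_x))   # stands in for G_D^i
y = rng.normal(size=N_d)

# Data-space form, as in Eq. (9): x = G^T (G G^T + alpha I)^{-1} y
x_data = G.T @ np.linalg.solve(G @ G.T + alpha * np.eye(N_d), y)

# Model-space Tikhonov solution of Eq. (10): (G^T G + alpha I) x = G^T y
x_tik = np.linalg.solve(G.T @ G + alpha * np.eye(N_x), G.T @ y)

assert np.allclose(x_data, x_tik)  # the two solutions agree
```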

  9. Discrepancy Principle. Assume

$$\| y \| = \left\| C_D^{-1/2} \left( d_{obs} - \bar{d}^{f,i} \right) \right\| > \eta, \quad (13)$$

where $\eta$ is the noise level given by

$$\eta^2 = \left\| C_D^{-1/2} \left( d_{obs} - d^f(m_{true}) \right) \right\|^2 \approx N_d. \quad (14)$$

Based on the discrepancy principle, the minimum regularization parameter, $\alpha_i$, should be selected such that

$$\eta = \left\| G_D^i x^{a,i} - y \right\| = \left\| C_D^{-1/2} \left( \bar{d}^{a} - d_{obs} \right) \right\|. \quad (15)$$
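
A sketch of the quantities in Eqs. (13)-(15): for a given $\alpha$, the Tikhonov residual norm can be evaluated in data space via the identity $G x^a - y = -\alpha (G G^T + \alpha I)^{-1} y$ and compared with the noise level $\eta \approx \sqrt{N_d}$; the arrays are synthetic stand-ins:

```python
import numpy as np

def residual_norm(G, y, alpha):
    """|| G x^a - y || for the Tikhonov solution x^a of Eq. (10),
    using G x^a - y = -alpha (G G^T + alpha I)^{-1} y."""
    N_d = G.shape[0]
    return alpha * np.linalg.norm(
        np.linalg.solve(G @ G.T + alpha * np.eye(N_d), y))

rng = np.random.default_rng(3)
G = rng.normal(size=(8, 5))
y = rng.normal(size=8)
eta = np.sqrt(len(y))              # noise level, Eq. (14)
print(residual_norm(G, y, alpha=10.0), "vs eta =", eta)
```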

  10. Discrepancy Principle. From Eqs. (13) and (15) we can write

$$\left\| C_D^{-1/2} \left( d_{obs} - \bar{d}^{f,i} \right) \right\| > \eta = \alpha_i \left\| \left[ G_D^i (G_D^i)^T + \alpha_i I_{N_d} \right]^{-1} C_D^{-1/2} \left( d_{obs} - \bar{d}^{f,i} \right) \right\|. \quad (16)$$

Therefore, for some $\rho \in (0,1)$,

$$\rho \left\| C_D^{-1/2} \left( d_{obs} - \bar{d}^{f,i} \right) \right\| = \alpha_i \left\| \left[ G_D^i (G_D^i)^T + \alpha_i I_{N_d} \right]^{-1} C_D^{-1/2} \left( d_{obs} - \bar{d}^{f,i} \right) \right\|. \quad (17)$$

Hanke (1997) proposed, for the regularizing Levenberg-Marquardt (RLM) method,

$$\rho^2 \left\| C_D^{-1/2} \left( d_{obs} - \bar{d}^{f,i} \right) \right\|^2 \le \alpha_i^2 \left\| \left[ G_D^i (G_D^i)^T + \alpha_i I_{N_d} \right]^{-1} C_D^{-1/2} \left( d_{obs} - \bar{d}^{f,i} \right) \right\|^2. \quad (18)$$

Iglesias (2015) used Eq. (18) for choosing inflation factors in his version of ES-MDA (IR-ES). Le et al. (2015) used a much stricter condition based on Eq. (18) for choosing inflation factors in ES-MDA-RLM.
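
One simple way to find an $\alpha_i$ satisfying Eq. (18) is to start from a small trial value and double it until the inequality holds; this doubling search is an illustrative choice, not the specific procedure of Iglesias (2015) or Le et al. (2015):

```python
import numpy as np

def hanke_alpha(G, y, rho=0.5, alpha0=1.0, max_doublings=60):
    """Smallest alpha on the doubling grid satisfying Eq. (18), i.e.,
    rho*||y|| <= alpha*||(G G^T + alpha I)^{-1} y||."""
    N_d = G.shape[0]
    target = rho * np.linalg.norm(y)
    alpha = alpha0
    for _ in range(max_doublings):
        lhs = alpha * np.linalg.norm(
            np.linalg.solve(G @ G.T + alpha * np.eye(N_d), y))
        if lhs >= target:       # Eq. (18) holds for this alpha
            return alpha
        alpha *= 2.0
    raise RuntimeError("no alpha found; increase max_doublings")
```

As $\alpha \to \infty$ the left-hand side tends to $\|y\| > \rho\|y\|$, so the search always terminates.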

  11. An Analytical Procedure for Calculation of Inflation Factors. Recall that

$$\rho^2 \left\| C_D^{-1/2} \left( d_{obs} - \bar{d}^{f,i} \right) \right\|^2 \le \alpha_i^2 \left\| \left[ G_D^i (G_D^i)^T + \alpha_i I_{N_d} \right]^{-1} C_D^{-1/2} \left( d_{obs} - \bar{d}^{f,i} \right) \right\|^2. \quad (18)$$

Using the definitions $y = C_D^{-1/2}(d_{obs} - \bar{d}^{f,i})$ and $C \equiv G_D^i (G_D^i)^T + \alpha_i I_{N_d}$,

$$\rho^2 \le \alpha_i^2 \, \frac{\left\| C^{-1} y \right\|^2}{\| y \|^2}. \quad (19)$$

$$\frac{\left\| C^{-1} y \right\|^2}{\| y \|^2} \ge \min_j \gamma_j^2 = \min_j \frac{1}{\left( \lambda_j^2 + \alpha_i \right)^2} = \frac{1}{\left( \lambda_1^2 + \alpha_i \right)^2}, \quad (20)$$

where the $\gamma_j$'s are the eigenvalues of $C^{-1}$, the $\lambda_j$'s are the singular values of $G_D^i$, and $\lambda_1$ is the largest singular value. Hence Eq. (18) is guaranteed whenever $\rho \le \alpha_i / (\lambda_1^2 + \alpha_i)$.

  12. An Approximate Method for Inflation Factors. Instead of enforcing

$$\rho^2 \le \frac{\alpha_i^2}{\left( \lambda_1^2 + \alpha_i \right)^2},$$

we use

$$\rho^2 \le \frac{\alpha_i^2}{\left( \bar{\lambda}^2 + \alpha_i \right)^2}, \quad (21)$$

which gives

$$\alpha_i = \frac{\rho}{1 - \rho} \, \bar{\lambda}^2, \quad (22)$$

where $\bar{\lambda}$ is the average singular value of $G_D^i$, given by

$$\bar{\lambda} = \frac{1}{N} \sum_{j=1}^{N} \lambda_j, \quad N = \min\{N_d, N_e\}. \quad (23)$$

Motivation: the discrepancy principle overestimates the optimal inflation factor in the linear case. We use $\rho = 0.5$, so $\alpha_i = \bar{\lambda}^2$.
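
A sketch of Eqs. (21)-(23): compute the average singular value of the dimensionless sensitivity and set the inflation factor accordingly; the function name is illustrative:

```python
import numpy as np

def inflation_factor(G_D, rho=0.5):
    """alpha_i from Eq. (22): (rho/(1-rho)) * lambda_bar^2, where
    lambda_bar is the mean singular value of G_D (Eq. (23))."""
    s = np.linalg.svd(G_D, compute_uv=False)  # N = min(N_d, N_e) values
    lam_bar = s.mean()
    return (rho / (1.0 - rho)) * lam_bar**2   # equals lam_bar**2 for rho = 0.5
```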

  13. ES-MDA with Geometric Inflation Factors. Specify the number of data assimilation steps ($N_a$). Assume that the inflation factors form a monotonically decreasing geometric sequence:

$$\alpha_{i+1} = \beta^i \alpha_1. \quad (24)$$

Determine

$$\alpha_1 = \bar{\lambda}^2 = \left( \frac{1}{N} \sum_{j=1}^{N} \lambda_j \right)^2. \quad (25)$$

  14. ES-MDA with Geometric Inflation Factors. Recall that ES-MDA requires that

$$\sum_{i=1}^{N_a} \frac{1}{\alpha_i} = \sum_{i=1}^{N_a} \frac{1}{\beta^{i-1} \alpha_1} = 1.$$

Solve

$$\frac{1 - (1/\beta)^{N_a}}{1 - (1/\beta)} = \alpha_1 \quad (26)$$

for $\beta$. We call the proposed method ES-MDA-GEO.
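
A sketch of the full ES-MDA-GEO schedule: set $\alpha_1 = \bar{\lambda}^2$ from Eq. (25), solve Eq. (26) for $\beta$ by bisection (an illustrative root-finder choice, not specified on the slides), and generate the decreasing sequence; the final check confirms the coefficients satisfy $\sum_i 1/\alpha_i = 1$:

```python
import numpy as np

def geo_schedule(alpha_1, N_a, tol=1e-10):
    """ES-MDA-GEO inflation factors: alpha_i = beta^{i-1} * alpha_1 with
    beta from Eq. (26). Requires alpha_1 > N_a, since the geometric sum
    1 + r + ... + r^{N_a-1} with r = 1/beta > 1 must reach alpha_1."""
    f = lambda r: np.sum(r ** np.arange(N_a))  # 1 + r + ... + r^{N_a-1}
    lo, hi = 1.0, 2.0
    while f(hi) < alpha_1:                     # bracket the root
        hi *= 2.0
    while hi - lo > tol:                       # bisection for r = 1/beta
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < alpha_1 else (lo, mid)
    beta = 1.0 / hi
    return alpha_1 * beta ** np.arange(N_a)

alphas = geo_schedule(alpha_1=25.0, N_a=4)
print(alphas, np.sum(1.0 / alphas))            # the sum should be ~1
```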

  15. Comments on "Convergence" of ES-MDA. Classifying ES-MDA as an iterative ES may be arguable; it simply stops when $\sum_{k=1}^{N_a} 1/\alpha_k = 1$. That criterion is based on ensuring the method samples correctly in the linear Gaussian case as the ensemble size goes to infinity. By the analogue of Hanke's suggestion for RLM, one should terminate ES-MDA when

$$\frac{1}{N_d} \left\| C_D^{-1/2} \left( d_{obs} - \bar{d}^{f,i} \right) \right\|^2 < \tau^2,$$

where $\tau > 1/\rho = 2$. This means: terminate when the normalized objective function is less than 4. GE (Geir Evensen): Does ES-MDA converge as $N_a \to \infty$? To what?
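
A minimal sketch of that termination test, assuming a diagonal $C_D$ with variances `C_D_diag`; names are illustrative:

```python
import numpy as np

def should_terminate(d_obs, d_mean, C_D_diag, tau=2.0):
    """Stop when the normalized data mismatch of the ensemble mean,
    (1/N_d) ||C_D^{-1/2} (d_obs - d_mean)||^2, drops below tau^2."""
    z = (d_obs - d_mean) / np.sqrt(C_D_diag)
    return np.mean(z**2) < tau**2
```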

  16. Numerical Examples. The performance of ES-MDA-GEO is compared to IR-ES, ES-MDA-RLM and ES-MDA-EQL. To investigate the performance of the methods, we define the following measures:

$$\mathrm{RMSE} = \left( \frac{1}{N_e N_m} \sum_{j=1}^{N_e} \sum_{k=1}^{N_m} \left( m_{true,k} - m_{j,k} \right)^2 \right)^{1/2}, \quad (27)$$

$$\bar{\sigma} = \frac{1}{N_m} \sum_{k=1}^{N_m} \sigma_k, \quad (28)$$

$$O_{N_d} = \frac{1}{N_e N_d} \sum_{j=1}^{N_e} \left( d_j^f - d_{obs} \right)^T C_D^{-1} \left( d_j^f - d_{obs} \right). \quad (29)$$
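
A sketch of the three measures in Eqs. (27)-(29), again for a diagonal $C_D$; array names are illustrative:

```python
import numpy as np

def performance_measures(M, m_true, D_f, d_obs, C_D_diag):
    """RMSE (27), mean ensemble spread (28), and normalized data
    mismatch (29) for a model ensemble M (N_m x N_e) and predicted
    data D_f (N_d x N_e)."""
    N_d, N_e = D_f.shape
    rmse = np.sqrt(np.mean((M - m_true[:, None]) ** 2))          # Eq. (27)
    sigma_bar = M.std(axis=1, ddof=1).mean()                     # Eq. (28)
    Z = (D_f - d_obs[:, None]) / np.sqrt(C_D_diag)[:, None]
    O_Nd = np.sum(Z**2) / (N_e * N_d)                            # Eq. (29)
    return rmse, sigma_bar, O_Nd
```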
