Back to the future: the “Back and Forth Nudging” algorithm



  1. Back to the future: the “Back and Forth Nudging” algorithm. Didier Auroux, Mathematics Institute of Toulouse, France (near future: University of Nice). auroux@math.univ-toulouse.fr, auroux@unice.fr. Work in collaboration with J. Blum (U. Nice) and M. Nodet (U. Grenoble). Conference on Applied Inverse Problems, Vienna, Austria, July 20-24, 2009.

  2. Motivations. Environmental and geophysical studies: to forecast the natural evolution, one must first retrieve as accurately as possible the current state (or initial condition) of the environment. Geophysical fluids (atmosphere, oceans, ...) are turbulent systems, hence highly sensitive to the initial condition, hence the need for a precise identification (much more precise than the observations themselves). Environmental problems (ground pollution, air pollution, hurricanes, ...) are problems of huge dimension, generally poorly modelled or observed. Data assimilation consists in combining in an optimal way the observations of a system and the knowledge of the physical laws which govern it. Main goal: identify the initial condition, or estimate some unknown parameters, and obtain reliable forecasts of the system evolution. AIP 2009, Vienna, July 23, 2009.

  3. Data assimilation. Combination of model + observations ⇒ identification of the initial condition of a geophysical system. Fundamental for a chaotic system (atmosphere, ocean, ...). Issue: these systems are generally irreversible. Goal: combine models and data. Typical inverse problem: retrieve the system state from sparse and noisy observations.

  4. 4D-VAR. Model governed by a system of ODEs:

    dX/dt = F(X),  0 < t < T,    X(0) = X_0.

X_obs(t): observations of the system; H: observation operator; X_b: background state (a priori estimate of the initial condition); B and R: covariance matrices of the background and observation errors. Cost function:

    J(X_0) = (1/2) (X_0 - X_b)^T B^{-1} (X_0 - X_b)
           + (1/2) ∫_0^T [X_obs(t) - H(X(t))]^T R^{-1} [X_obs(t) - H(X(t))] dt.
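The 4D-VAR cost above can be evaluated by integrating the model along the assimilation window and summing the misfits. A minimal discrete sketch, where all matrices, step sizes and data are illustrative stand-ins (not from the talk):

```python
import numpy as np

# Illustrative linear model dX/dt = F X, observed through H.
F = np.array([[0.0, 1.0], [-1.0, -0.1]])
H = np.array([[1.0, 0.0]])
B_inv = np.eye(2)                    # inverse background error covariance
R_inv = np.eye(1)                    # inverse observation error covariance
dt, n = 0.01, 100

def cost(X0, Xb, obs):
    """Discrete 4D-VAR cost: background misfit + observation misfit."""
    J = 0.5 * (X0 - Xb) @ B_inv @ (X0 - Xb)
    X = X0.copy()
    for y in obs:                    # time integral approximated by a sum
        d = y - H @ X
        J += 0.5 * dt * (d @ R_inv @ d)
        X = X + dt * (F @ X)         # explicit Euler step of dX/dt = F X
    return J

Xb = np.array([0.9, 0.1])
# observations generated from a "true" trajectory started at (1, 0)
X = np.array([1.0, 0.0]); obs = []
for _ in range(n):
    obs.append(H @ X); X = X + dt * (F @ X)

# At the true initial condition, only the background misfit remains.
print(cost(np.array([1.0, 0.0]), Xb, obs))
```

Minimizing this functional over X_0 is exactly the 4D-VAR problem of the slide; the adjoint model of the next slide provides its gradient.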

  5. Optimality system. Optimization under constraints:

    L(X_0, X, P) = J(X_0) + ∫_0^T ⟨ P, dX/dt - F(X) ⟩ dt.

Direct model:   dX/dt = F(X),  X(0) = X_0.
Adjoint model:  -dP/dt = (∂F/∂X)^T P + H^T R^{-1} [X_obs(t) - H(X(t))],  P(T) = 0.
Gradient of the cost function:  ∂J/∂X_0 = B^{-1} (X_0 - X_b) - P(0).
Optimal solution:  X_0 = X_b + B P(0).  [Le Dimet-Talagrand (86)]

  6. Forward nudging.

    dX/dt = F(X) + K (X_obs - H(X)),  0 < t < T,    X(0) = X_0,

where K is the nudging (or gain) matrix. In the linear case (where F is a matrix), forward nudging is called the Luenberger or asymptotic observer.
– Meteorology: Hoke-Anthes (1976)
– Oceanography (QG model): Verron-Holland (1989)
– Atmosphere (meso-scale): Stauffer-Seaman (1990)
– Optimal determination of the nudging coefficients: Zou-Navon-Le Dimet (1992), Stauffer-Bao (1993), Vidard-Le Dimet-Piacentini (2003)
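Forward nudging can be sketched in a few lines on a hypothetical linear system: the nudged estimate, started from a wrong initial condition, is pulled onto the true trajectory by the feedback term. F, H, K and all numbers below are illustrative, not from the talk:

```python
import numpy as np

F = np.array([[0.0, 1.0],
              [-1.0, -0.1]])        # lightly damped oscillator (illustrative)
H = np.array([[1.0, 0.0]])          # we observe the first component only
K = np.array([[2.0], [1.0]])        # nudging (gain) matrix, chosen by hand

dt, n_steps = 0.01, 5000
X_true = np.array([1.0, 0.0])       # true initial condition (unknown to us)
X_est = np.array([0.0, 0.0])        # wrong first guess

for _ in range(n_steps):
    X_obs = H @ X_true              # perfect observation of the true state
    # explicit Euler step of the nudged model: dX/dt = F X + K (X_obs - H X)
    X_est = X_est + dt * (F @ X_est + K @ (X_obs - H @ X_est))
    X_true = X_true + dt * (F @ X_true)

print(np.linalg.norm(X_true - X_est))   # error after the window: small
```

The error obeys d(X_true - X_est)/dt = (F - KH)(X_true - X_est), which is the linear-case analysis of the next slide.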

  7. Forward nudging: linear case. Luenberger observer, or asymptotic observer (Luenberger, 1966). The true system generates the observations, and the observer is nudged towards them:

    dX̂/dt = F X̂,    X_obs = H X̂,
    dX/dt = F X + K (X_obs - H X).

Hence  d(X - X̂)/dt = (F - KH)(X - X̂).  If F - KH is a Hurwitz matrix, i.e. its spectrum is strictly included in the half-plane {λ ∈ ℂ; Re(λ) < 0}, then X → X̂ when t → +∞.
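The Hurwitz condition is straightforward to check numerically for a given gain. With an illustrative choice of F, H and K (hypothetical values, not from the talk):

```python
import numpy as np

# Illustrative matrices: a damped oscillator observed through its first
# component, with a hand-picked gain K.
F = np.array([[0.0, 1.0], [-1.0, -0.1]])
H = np.array([[1.0, 0.0]])
K = np.array([[2.0], [1.0]])

# Error dynamics of the observer: d(X - X̂)/dt = (F - KH)(X - X̂).
eigs = np.linalg.eigvals(F - K @ H)
print("spectrum of F - KH:", eigs)
# All real parts negative => F - KH is Hurwitz => X converges to X̂.
```
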

  8. Backward nudging. How to recover the initial state from the final solution? Backward model:

    dX̃/dt = F(X̃),  T > t > 0,    X̃(T) = X̃_T.

If we apply nudging to this backward model:

    dX̃/dt = F(X̃) - K (X_obs - H X̃),  T > t > 0,    X̃(T) = X̃_T.

  9. BFN: Back and Forth Nudging algorithm. Iterative algorithm (forward and backward resolutions): X̃_0(0) = X_b (first guess).

    Forward:   dX_k/dt = F(X_k) + K (X_obs - H(X_k)),     X_k(0) = X̃_{k-1}(0),
    Backward:  dX̃_k/dt = F(X̃_k) - K' (X_obs - H(X̃_k)),   X̃_k(T) = X_k(T).

If X_k and X̃_k converge towards the same limit X, and if K = K', then X satisfies the state equation and fits the observations.
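The back-and-forth iteration can be sketched on a linear model with full, perfect observations (H = I). All matrices, gains and window lengths below are illustrative, not from the talk:

```python
import numpy as np

F = np.array([[0.0, 1.0], [-1.0, -0.1]])   # illustrative linear model
H = np.eye(2)                              # full-state observations
K = 5.0 * np.eye(2)                        # forward nudging gain
Kp = 5.0 * np.eye(2)                       # backward nudging gain K'
dt, n = 0.01, 200                          # window [0, T], T = n*dt

# Generate observations from the (unknown) true trajectory.
X = np.array([1.0, 0.5])
obs = [X.copy()]
for _ in range(n):
    X = X + dt * (F @ X)
    obs.append(X.copy())

Xk0 = np.zeros(2)                          # first guess of X(0)
for k in range(20):                        # BFN iterations
    # forward sweep: dX/dt = F X + K (X_obs - H X)
    X = Xk0.copy()
    for i in range(n):
        X = X + dt * (F @ X + K @ (obs[i] - H @ X))
    # backward sweep: dX~/dt = F X~ - K'(X_obs - H X~), from t=T down to 0
    for i in range(n, 0, -1):
        X = X - dt * (F @ X - Kp @ (obs[i] - H @ X))
    Xk0 = X.copy()                         # updated estimate of X(0)

print(np.linalg.norm(Xk0 - obs[0]))        # close to the true initial state
```

The recovered initial condition is accurate up to time-discretization error; with exact integration the error would decrease exponentially in k, as slide 12 shows for the transport equation.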

  10. Choice of the direct nudging matrix K. Implicit discretization of the direct model equation with nudging:

    (X^{n+1} - X^n)/Δt = F X^{n+1} + K (X_obs - H X^{n+1}).

Variational interpretation: direct nudging is a compromise between the minimization of the energy of the system and the quadratic distance to the observations,

    min_X { (1/2)⟨X - X^n, X - X^n⟩ - (Δt/2)⟨FX, X⟩ + (Δt/2)⟨R^{-1}(X_obs - HX), X_obs - HX⟩ },

by choosing K = H^T R^{-1}, where R is the covariance matrix of the observation errors. [Auroux-Blum (08)]
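This variational interpretation can be checked numerically: with a symmetric F and K = H^T R^{-1}, the solution of the implicit nudged step is a stationary point of the quadratic functional. The matrices and data below are illustrative:

```python
import numpy as np

F = np.array([[-1.0, 0.3], [0.3, -2.0]])   # symmetric, illustrative
H = np.array([[1.0, 0.0]])
R_inv = np.eye(1)
K = H.T @ R_inv                            # the slide's choice K = H^T R^{-1}
dt = 0.1
Xn = np.array([1.0, -0.5])
X_obs = np.array([0.8])

# Implicit nudged step: (I - dt F + dt K H) X = Xn + dt K X_obs.
X = np.linalg.solve(np.eye(2) - dt * F + dt * K @ H, Xn + dt * (K @ X_obs))

def phi(Y):
    """The quadratic functional of the slide."""
    return (0.5 * (Y - Xn) @ (Y - Xn) - 0.5 * dt * (F @ Y) @ Y
            + 0.5 * dt * (R_inv @ (X_obs - H @ Y)) @ (X_obs - H @ Y))

# The numerical gradient of phi at X should vanish.
eps = 1e-6
grad = np.array([(phi(X + eps * np.eye(2)[i]) - phi(X - eps * np.eye(2)[i]))
                 / (2 * eps) for i in range(2)])
print(np.linalg.norm(grad))                # ≈ 0: X is the minimizer
```
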

  11. Choice of the backward nudging matrix K'. The feedback term has a double role:
• stabilization of the backward resolution of the model (irreversible system);
• feedback to the observations.
If the system is observable, i.e. rank[H, HF, ..., HF^{N-1}] = N, then there exists a matrix K' such that -F - K'H is a Hurwitz matrix (pole assignment method). Simpler solution: one can define K' = k' H^T, where k' is e.g. the smallest value making the backward numerical integration stable.
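The "smallest stabilizing k'" recipe can be sketched by a simple grid search over k'. F and H below are illustrative; note that in this example a too-large k' destabilizes -F - K'H again, which is why one looks for the smallest stable value:

```python
import numpy as np

F = np.array([[0.0, 1.0], [-1.0, -0.1]])   # illustrative model matrix
H = np.array([[1.0, 0.0]])

def is_hurwitz(A):
    # strict margin to avoid borderline (purely imaginary) eigenvalues
    return bool(np.all(np.linalg.eigvals(A).real < -1e-9))

# K' = k' H^T: scan for the smallest k' making -F - K'H Hurwitz.
kp = next(k for k in np.arange(0.0, 50.0, 0.05)
          if is_hurwitz(-F - k * (H.T @ H)))
print("smallest stabilizing k' on the grid:", kp)
```

(For this F and H, -F - k'·H^T H is Hurwitz only for 0.1 < k' < 10; the general tool is pole assignment, which picks K' to place the spectrum of -F - K'H at chosen stable locations.)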

  12. Example of convergence results. Viscous linear transport equation:

    ∂_t u - ν ∂_xx u + a(x) ∂_x u = -K (u - u_obs),      u(x, t=0) = u_0(x),
    ∂_t ũ - ν ∂_xx ũ + a(x) ∂_x ũ = K' (ũ - u_obs),      ũ(x, t=T) = u_T(x).

We set w(t) = u(t) - u_obs(t) and w̃(t) = ũ(t) - u_obs(t), the errors.
• If K and K' are constant, then for all t ∈ [0, T]:  w̃(t) = e^{(-K-K')(T-t)} w(t)  (still true if the observation period does not cover [0, T]).
• If the domain is not fully observed, then the problem is ill-posed.
Error after k iterations:  w̃_k(0) = e^{-(K+K') kT} w_0(0)  ⇒ exponential decrease of the error, which can be driven to zero through three limits:
• K + K' → ∞: infinite feedback to the observations (not physical);
• T → ∞: asymptotic observer (Luenberger);
• k → ∞: infinite number of iterations (BFN).
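The error-decay formula can be verified on a scalar analogue: for a fully observed linear model du/dt = λu with constant gains, one forward/backward sweep multiplies the initial error by e^{-(K+K')T} regardless of λ. All values below are illustrative:

```python
import numpy as np

lam, K, Kp, T = 0.3, 2.0, 1.5, 1.0   # model coefficient, gains, window length

w0 = 1.0                             # initial error w_0(0)
errors = [w0]
for k in range(5):
    wT = np.exp((lam - K) * T) * errors[-1]     # forward nudged sweep
    w0_new = np.exp((-lam - Kp) * T) * wT       # backward nudged sweep
    errors.append(w0_new)

# After k iterations: w_k(0) = exp(-(K+K') k T) * w_0(0); lam cancels.
print(errors[5], np.exp(-(K + Kp) * 5 * T))
```
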

  13. Shallow water model.

    ∂_t u - (f + ζ) v + ∂_x B = τ_x / (ρ_0 h) - r u + ν Δu,
    ∂_t v + (f + ζ) u + ∂_y B = τ_y / (ρ_0 h) - r v + ν Δv,
    ∂_t h + ∂_x(hu) + ∂_y(hv) = 0.

• ζ = ∂_x v - ∂_y u is the relative vorticity;
• B = g* h + (1/2)(u² + v²) is the Bernoulli potential;
• g* = 0.02 m.s⁻² is the reduced gravity;
• f = f_0 + βy is the Coriolis parameter (in the β-plane approximation), with f_0 = 7×10⁻⁵ s⁻¹ and β = 2×10⁻¹¹ m⁻¹.s⁻¹;
• τ = (τ_x, τ_y) is the forcing term of the model (e.g. the wind stress), with a maximum amplitude of τ_0 = 0.05 s⁻²;
• ρ_0 = 10³ kg.m⁻³ is the water density;
• r = 9×10⁻⁸ s⁻¹ is the friction coefficient;
• ν = 5 m².s⁻¹ is the viscosity (or dissipation) coefficient.

  14. Shallow water model. 2D shallow water model; state = height h and horizontal velocity (u, v). Numerical parameters: domain L = 2000 km × 2000 km; rigid boundary and no-slip boundary conditions; time step = 1800 s; assimilation period: 15 days; forecast period: 15 + 45 days. Observations: of h only (∼ satellite observations), every 5 gridpoints in each space direction, every 24 hours. Background: true state one month before the beginning of the assimilation period + white Gaussian noise (∼ 10%). Comparison BFN - 4D-VAR: sea height h; velocity u and v.

  15. Convergence - perfect obs. [Figures: RMS error versus time steps for h, u and v.] Relative difference between the BFN iterates (first 5 iterations) and the true solution versus the time steps, for h, u and v.

  16. Comparison - perfect obs. Relative error of the background state and of the various identified initial conditions, for the three variables:

    Relative error                     |   h    |   u    |   v
    Background state                   | 37.6%  | 21.5%  | 30.3%
    BFN (5 iterations, converged)      |  0.44% |  1.78% |  2.41%
    4D-VAR (5 iterations)              |  0.64% |  3.14% |  4.47%
    4D-VAR (18 iterations, converged)  |  0.61% |  2.43% |  3.46%

  17. Noisy observations. [Figures: RMS error versus time steps for h, u and v; BFN (top) and 4D-VAR (bottom).] Relative difference between the true solution and the forecast trajectory corresponding to the BFN (top) and 4D-VAR (bottom) identified initial conditions at convergence, versus time, for the three variables, in the case of noisy observations.
