Time-Decomposition Methods for Parabolic Problems: Convergence Results of Iterative Splitting Methods


  1. 17th International Conference on Domain Decomposition Methods, St. Wolfgang/Strobl, Austria, July 3-7, 2006. July 3, 2006, Minisymposium of Martin Gander. Talk: Time-Decomposition Methods for Parabolic Problems: Convergence results of Iterative Splitting methods. Jürgen Geiser, Humboldt-Universität zu Berlin, Department of Mathematics, Unter den Linden 6, D-10099 Berlin, Germany.

  2. Outline of the talk: 1.) Introduction 2.) Decomposition methods 3.) Time-decomposition methods 3.1) Sequential splitting methods 3.2) Iterative splitting methods 4.) Numerical experiments 5.) Future work

  3. Motivation and Ideas: Design of fast solvers with high accuracy. Efficient solvers by decoupling into simpler equations or domains for multi-physics problems. Parallelization and acceleration of the solution process. Physically correct splitting and analytical decomposition methods: preservation of the physics. Fast computations for complicated but decouplable problems.

  4. Model Equation: systems of parabolic differential equations with a first-order time derivative and second-order spatial derivatives:
     ∂c/∂t = f(c) + Ac + Bc,  in Ω × (0, T),  (1)
     c(x, t) = g(x, t),  on ∂Ω × (0, T)  (boundary condition),
     c(x, 0) = c_0(x),  in Ω  (initial condition),
     where c = (c_1, ..., c_n)^T and f(c) = (f_1(c), ..., f_n(c))^T. The convection operator A and the diffusion operator B are matrices of operators,
     A = ( −v_11·∇   ···   −v_n1·∇
             ...               ...
           −v_1n·∇   ···   −v_nn·∇ ),
     B = ( ∇·D_11∇   ···   ∇·D_n1∇
             ...               ...
           ∇·D_1n∇   ···   ∇·D_nn∇ ),
     with A, B : X → X and sufficient smoothness c_i ∈ C^{2,1}(Ω, [0, T]) for i = 1, ..., n.
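To make the structure of the two operators concrete, the following minimal sketch assembles a one-dimensional finite-difference convection operator A and diffusion operator B for a single species; the grid size, velocity, and diffusion coefficient are illustrative assumptions, not data from the talk.

```python
# Minimal 1D sketch of the two operators in the model equation (1):
# A ~ convection (-v d/dx, first-order upwind), B ~ diffusion (D d^2/dx^2, central).
# Grid size, velocity, and diffusion coefficient are assumed values for illustration.
import numpy as np

def convection_matrix(N, h, v):
    """Upwind discretisation of -v * d/dx on N interior points (v > 0)."""
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] = -v / h
        if i > 0:
            A[i, i - 1] = v / h
    return A

def diffusion_matrix(N, h, D):
    """Central discretisation of D * d^2/dx^2 with homogeneous Dirichlet boundaries."""
    B = np.zeros((N, N))
    for i in range(N):
        B[i, i] = -2.0 * D / h**2
        if i > 0:
            B[i, i - 1] = D / h**2
        if i < N - 1:
            B[i, i + 1] = D / h**2
    return B

N, length, v, D = 50, 1.0, 1.0, 0.01   # assumed problem data
h = length / (N + 1)
A = convection_matrix(N, h, v)
B = diffusion_matrix(N, h, D)
```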

  5. First Part: Decomposition Methods. Ideas: decoupling of the time scales and space scales; decoupling of the multi-physics; time adaptivity and space adaptivity; parallelization in time and space. Methods: operator-splitting and variational splitting methods (time); iterative and extended operator-splitting methods (time); waveform-relaxation methods (time); Schwarz waveform-relaxation methods (space); additive and multiplicative Schwarz methods (space); partition of unity combined with splitting methods (time and space).

  6. Time-Decomposition Methods, History and Literature: ADI methods (alternating direction implicit), see: Peaceman-Rachford (1955). Strang-Marchuk splitting methods, see: Strang (1968). Waveform-relaxation methods, see: Vandewalle (1993). Variational splitting methods, see: Lubich (2003). Iterative operator-splitting methods, see: Kanney, Miller, Kelly (2003), Farago, Geiser (2005). Extended iterative operator-splitting methods, see: Geiser (2006). Decoupling methods as preservation of physics, see: Geiser (2006).

  7. Introduction: Operator-Splitting Method. Idea: decouple a complex equation into simpler equations, solve the simpler equations, and re-couple the results via the initial conditions. Equation: ∂_t c = Ac + Bc, with initial condition c(t^n) = c^n (or in variational formulation: (∂_t c, v) = (Ac, v) + (Bc, v)). Splitting method of first order:
     ∂_t c* = Ac*,  with c*(t^n) = c^n,
     ∂_t c** = Bc**,  with c**(t^n) = c*(t^{n+1}),
     where the result of the method is c(t^{n+1}) = c**(t^{n+1}). The method introduces a splitting error; literature: [Strang 68], [Karlsen et al. 2001].
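A minimal sketch of one first-order splitting step, assuming constant linear operators A and B so that each sub-problem can be solved exactly with a matrix exponential; the small matrices, initial value, and step size below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

def lie_splitting_step(A, B, c_n, tau):
    """One first-order splitting step: c^{n+1} = exp(tau B) exp(tau A) c^n."""
    c_star = expm(tau * A) @ c_n        # solve dc*/dt  = A c*,  c*(t^n)  = c^n
    return expm(tau * B) @ c_star       # solve dc**/dt = B c**, c**(t^n) = c*(t^{n+1})

# usage with small illustrative operators (assumptions, not from the talk)
A = np.array([[-2.0, 1.0], [1.0, -2.0]])
B = np.array([[0.0, -1.0], [1.0, 0.0]])
c = np.array([1.0, 0.0])
tau = 0.01
for _ in range(100):                    # march from t = 0 to t = 1
    c = lie_splitting_step(A, B, c, tau)
```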

  8. Splitting Error of the Method. For the first-order splitting method applied to ∂_t c = (A + B)c, the exact solution on one step is c̃(t^n + τ) = exp(τ(A + B)) c(t^n). The local error between the decomposition and the full solution is
     e(c) = c̃(t^n + τ) − exp(τB) exp(τA) c(t^n)
          = exp(τ(A + B)) c(t^n) − exp(τB) exp(τA) c(t^n)
          = (1/2) τ² (AB − BA) c(t^n) + O(τ³),
     so e(c)/τ = O(τ) for A, B not commuting; otherwise one gets exact results. Here τ = t^{n+1} − t^n, [Strang 68].
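A quick numerical illustration of this statement, assuming two small non-commuting matrices: the one-step defect of the first-order splitting shrinks like τ², so the error per unit step is O(τ).

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])          # AB != BA, so the splitting is not exact
c = np.array([1.0, 1.0])

for tau in [0.1, 0.05, 0.025]:
    exact = expm(tau * (A + B)) @ c             # exact one-step propagator
    split = expm(tau * B) @ expm(tau * A) @ c   # A-step, then B-step
    err = np.linalg.norm(exact - split)
    print(f"tau={tau:6.3f}  local error={err:.3e}  error/tau={err / tau:.3e}")
# halving tau reduces the local error by roughly a factor of 4
# (second order locally, hence first order globally)
```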

  9. Higher-Order Splitting Methods. Strang or Strang-Marchuk splitting, cf. [Marchuk 68, Strang 68]:
     ∂c*(t)/∂t = Ac*(t),  with t^n ≤ t ≤ t^{n+1/2} and c*(t^n) = c^n_sp,  (2)
     ∂c**(t)/∂t = Bc**(t),  with t^n ≤ t ≤ t^{n+1} and c**(t^n) = c*(t^{n+1/2}),
     ∂c***(t)/∂t = Ac***(t),  with t^{n+1/2} ≤ t ≤ t^{n+1} and c***(t^{n+1/2}) = c**(t^{n+1}),
     where t^{n+1/2} = t^n + 0.5 τ_n and the approximation at the next time level t^{n+1} is defined as c^{n+1}_sp = c***(t^{n+1}). The splitting error of the Strang splitting is
     ρ_n = (1/24) τ_n² ([B, [B, A]] − 2[A, [A, B]]) c(t^n) + O(τ_n³),  (3)
     see, e.g., [Hundsdorfer, Verwer 2003].
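The corresponding Strang step, again as a sketch for constant linear operators (an assumption); it differs from the first-order step only in the symmetric A-B-A arrangement of the sub-steps and is second-order accurate.

```python
import numpy as np
from scipy.linalg import expm

def strang_step(A, B, c_n, tau):
    """One Strang step: c^{n+1} = exp(tau/2 A) exp(tau B) exp(tau/2 A) c^n."""
    c_star = expm(0.5 * tau * A) @ c_n        # A on [t^n, t^{n+1/2}]
    c_starstar = expm(tau * B) @ c_star       # B on [t^n, t^{n+1}]
    return expm(0.5 * tau * A) @ c_starstar   # A on [t^{n+1/2}, t^{n+1}]

# illustrative check against the exact propagator (matrices are assumptions)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
c, tau = np.array([1.0, 1.0]), 0.05
print(np.linalg.norm(expm(tau * (A + B)) @ c - strang_step(A, B, c, tau)))  # ~ O(tau^3) per step
```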

  10. Iterative Splitting Method:
     ∂c_i(t)/∂t = Ac_i(t) + Bc_{i−1}(t),  with c_i(t^n) = c^n_sp,  (4)
     ∂c_{i+1}(t)/∂t = Ac_i(t) + Bc_{i+1}(t),  with c_{i+1}(t^n) = c^n_sp,  (5)
     where c_0(t) is any fixed function for each iteration. (Here, as before, c^n_sp denotes the known split approximation at the time level t = t^n.) The split approximation at the time level t = t^{n+1} is defined as c^{n+1}_sp = c_{2m+1}(t^{n+1}). (Clearly, the functions c_k(t), k = i−1, i, i+1, depend on the interval [t^n, t^{n+1}] too, but for the sake of simplicity we omit the dependence on n in our notation.)
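A sketch of the iteration (4)-(5) on a single sub-interval [t^n, t^{n+1}]. The talk does not prescribe how the linear sub-problems are integrated; the implicit-Euler inner grid, the constant starting function c_0(t) = c^n, and the example operators are assumptions made here for illustration.

```python
import numpy as np

def iterative_splitting_interval(A, B, c_n, tau, iters=3, inner=20):
    """Iterative splitting on [t^n, t^n + tau] for dc/dt = Ac + Bc (linear case)."""
    n = len(c_n)
    dt = tau / inner
    I = np.eye(n)
    # previous iterate c_0(t): constant initial value on the whole sub-interval
    c_prev = np.tile(c_n, (inner + 1, 1))
    for _ in range(iters):
        # (4): dc_i/dt = A c_i + B c_{i-1},  c_i(t^n) = c^n  (implicit in A)
        c_i = np.empty_like(c_prev)
        c_i[0] = c_n
        for k in range(inner):
            c_i[k + 1] = np.linalg.solve(I - dt * A, c_i[k] + dt * (B @ c_prev[k + 1]))
        # (5): dc_{i+1}/dt = A c_i + B c_{i+1},  c_{i+1}(t^n) = c^n  (implicit in B)
        c_ip1 = np.empty_like(c_prev)
        c_ip1[0] = c_n
        for k in range(inner):
            c_ip1[k + 1] = np.linalg.solve(I - dt * B, c_ip1[k] + dt * (A @ c_i[k + 1]))
        c_prev = c_ip1
    return c_prev[-1]      # split approximation at t^{n+1}

# usage with small illustrative operators (assumptions, not from the talk)
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[0.0, -0.3], [0.4, 0.0]])
c_next = iterative_splitting_interval(A, B, np.array([1.0, 1.0]), tau=0.1)
```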

  11. Error of the Iterative Splitting Method. Theorem 1. The error of the splitting method satisfies
     ||e_i|| = K ||B|| τ_n ||e_{i−1}|| + O(τ_n²)  (6)
     and hence
     ||e_{2m+1}|| = K_m ||e_0|| τ_n^{2m} + O(τ_n^{2m+1}),  (7)
     where τ_n is the time step, e_0 is the initial error e_0(t) = c(t) − c_0(t), m is the number of iteration steps, K and K_m are constants, ||B|| is the maximum norm of the operator B, and A and B are bounded, monotone operators. Proof: Taylor expansion and estimation of the exp-functions; see Geiser, Farago (2005).

  12. Nonlinear Iterative Splitting Method:
     ∂c_i(t)/∂t = A(c_i(t)) + B(c_{i−1}(t)),  with c_i(t^n) = c^n_sp,  (8)
     ∂c_{i+1}(t)/∂t = A(c_i(t)) + B(c_{i+1}(t)),  with c_{i+1}(t^n) = c^n_sp,  (9)
     where c_0(t) is any fixed function for each iteration. (Here, as before, c^n_sp denotes the known split approximation at the time level t = t^n.) The split approximation at the time level t = t^{n+1} is defined as c^{n+1}_sp = c_{2m+1}(t^{n+1}). (As before, the functions c_k(t), k = i−1, i, i+1, also depend on the interval [t^n, t^{n+1}], but we omit the dependence on n in our notation.)
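A sketch of the nonlinear variant (8)-(9), with the operators A(·) and B(·) passed as callables; the explicit inner Euler grid and the example operators are assumptions for illustration only.

```python
import numpy as np

def nonlinear_iterative_splitting(A_op, B_op, c_n, tau, iters=3, inner=50):
    """Nonlinear iterative splitting on [t^n, t^n + tau] for dc/dt = A(c) + B(c)."""
    dt = tau / inner
    c_prev = np.tile(c_n, (inner + 1, 1))        # c_0(t): constant initial guess
    for _ in range(iters):
        # (8): dc_i/dt = A(c_i) + B(c_{i-1}),  c_i(t^n) = c^n
        c_i = np.empty_like(c_prev)
        c_i[0] = c_n
        for k in range(inner):
            c_i[k + 1] = c_i[k] + dt * (A_op(c_i[k]) + B_op(c_prev[k]))
        # (9): dc_{i+1}/dt = A(c_i) + B(c_{i+1}),  c_{i+1}(t^n) = c^n
        c_ip1 = np.empty_like(c_prev)
        c_ip1[0] = c_n
        for k in range(inner):
            c_ip1[k + 1] = c_ip1[k] + dt * (A_op(c_i[k]) + B_op(c_ip1[k]))
        c_prev = c_ip1
    return c_prev[-1]

# illustrative nonlinear operators (assumptions, not from the talk)
A_op = lambda c: -c**2
B_op = lambda c: 0.5 * c
c_end = nonlinear_iterative_splitting(A_op, B_op, np.array([1.0]), tau=0.1)
```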

  13. Consistency Theory for the Nonlinear Iterative Splitting Method. Theorem 2. Consider the nonlinear operator equation in a Banach space X:
     ∂_t c(t) = A(c(t)) + B(c(t)),  0 < t ≤ T,  c(0) = c_0.  (10)
     Linearising the nonlinear operators yields the linearised equation
     ∂_t c(t) = Ã c(t) + B̃ c(t) + R(c̃),  0 < t ≤ T,  c(0) = c_0,  (11)
     with à = ∂A/∂c (c̃), B̃ = ∂B/∂c (c̃), and R(c̃) = A(c̃) + B(c̃) − (∂A/∂c (c̃) + ∂B/∂c (c̃)) c̃,

  14. where Ã, B̃, Ã + B̃ : X → X are given linear operators that are generators of C_0-semigroups and c_0 ∈ X is a given element. Then the iteration process (8)-(9) is convergent and the rate of convergence is of second order. We obtain the iterative result
     ||e_i|| = K τ_n ||e_{i−1}|| + O(τ_n²),  (12)
     and hence
     ||e_{2m+1}|| = K_1 τ_n^{2m+1} ||e_0|| + O(τ_n^{2m+1}),  (13)
     where e_i(t) = c(t) − c_i(t) and 2m + 1 is the number of iterates.

  15. Proof (see [Geiser & Kravvaritis 2006, Preprint]). Consider the iteration (8)-(9) on the sub-interval [t^n, t^{n+1}]. For the error function e_i(t) = c(t) − c_i(t) we have the relations
     ∂_t e_i(t) = A(e_i(t)) + B(e_{i−1}(t)),  t ∈ (t^n, t^{n+1}],  e_i(t^n) = 0,  (14)
     and
     ∂_t e_{i+1}(t) = A(e_i(t)) + B(e_{i+1}(t)),  t ∈ (t^n, t^{n+1}],  e_{i+1}(t^n) = 0,  (15)
     for m = 0, 2, 4, ..., with e_0(0) = 0 and e_{−1}(t) = c(t).

  16. We obtain the linearised equations. In the following we use the notation X² for the product space X × X equipped with the norm ||(u, v)|| = max{||u||, ||v||} (u, v ∈ X). The elements E_i(t), F_i(t) ∈ X² and the linear operator A : X² → X² are defined as follows:
     E_i(t) = ( e_i(t), e_{i+1}(t) )^T,
     A = ( ∂A/∂c (c_{i−1})          0
           ∂A/∂c (c_{i−1})   ∂B/∂c (c_{i−1}) ),  (16)
     F_i(t) = ( A(e_{i−1}(t)) + B(e_{i−1}(t)) − ∂A/∂c (e_{i−1}) e_{i−1}
                A(e_{i−1}(t)) + B(e_{i−1}(t)) − ∂A/∂c (e_{i−1}) e_{i−1} − ∂B/∂c (e_{i−1}) e_{i−1} ).  (17)
