Simulation methods for differential equations

Simulation methods for differential equations — Moritz Diehl and Rien Quirynen, February 16, 2016

Introduction
Dynamic system simulation: a map from inputs $u$ to outputs $y$. [Figure: input signal $u$ and simulated output $y$ over time.]


1. Stiffness example
Let us consider the following simple one-dimensional system: $\dot x(t) = -50\,(x(t) - \cos(t))$, with stepsize $h = 0.038$.
[Figure: explicit Euler, implicit Euler, and the exact solution for $t \in [0, 1]$.]

2. Stiffness example
Let us consider the following simple one-dimensional system: $\dot x(t) = -50\,(x(t) - \cos(t))$, with stepsize $h = 0.04$.
[Figure: explicit Euler, implicit Euler, and the exact solution for $t \in [0, 1]$.]

3. Explicit vs. implicit Euler
Explicit Euler: $x_n = x_{n-1} + h\, f(t_{n-1}, x_{n-1})$
Implicit Euler: $x_n = x_{n-1} + h\, f(t_n, x_n)$
For given $x_{n-1}$, the implicit method needs a root-finding solver to compute $x_n$.
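For concreteness, here is a minimal sketch (my own, not from the slides) applying both Euler variants to the stiff example $\dot x = -50\,(x - \cos t)$; because this right-hand side is linear in $x$, the implicit step can be solved in closed form, so no general root-finding solver is needed in this particular case.

```python
import numpy as np

def f(t, x):
    # Stiff test problem from the slides: x' = -50 (x - cos t)
    return -50.0 * (x - np.cos(t))

def explicit_euler(x0, h, t_end):
    ts = np.arange(0.0, t_end + 0.5 * h, h)
    xs = [x0]
    for t in ts[:-1]:
        xs.append(xs[-1] + h * f(t, xs[-1]))       # x_n = x_{n-1} + h f(t_{n-1}, x_{n-1})
    return ts, np.array(xs)

def implicit_euler(x0, h, t_end):
    ts = np.arange(0.0, t_end + 0.5 * h, h)
    xs = [x0]
    for t_next in ts[1:]:
        # x_n = x_{n-1} + h * (-50) * (x_n - cos(t_n)) is linear in x_n,
        # so the root-finding step has a closed-form solution here:
        xs.append((xs[-1] + 50.0 * h * np.cos(t_next)) / (1.0 + 50.0 * h))
    return ts, np.array(xs)

# With h = 0.04 the explicit iterates oscillate, the implicit ones track the solution.
for method in (explicit_euler, implicit_euler):
    ts, xs = method(x0=0.0, h=0.04, t_end=1.0)
    print(method.__name__, xs[-3:])
```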

4. Stiffness
Stiffness depends largely on . . .

5. Stiffness
Stiffness depends largely on
◮ the eigenvalues $\lambda(t)$ of the Jacobian $\partial f / \partial x$
◮ but also the system dimension, the smoothness of the solution, . . .

6. Stiffness
Stiffness depends largely on
◮ the eigenvalues $\lambda(t)$ of the Jacobian $\partial f / \partial x$
◮ but also the system dimension, the smoothness of the solution, . . .
⇓
◮ various mathematical definitions exist
◮ new concepts needed: A-stability, I-stability, A($\alpha$)-stability, L-stability, . . .

7. Stiffness
Stiffness depends largely on
◮ the eigenvalues $\lambda(t)$ of the Jacobian $\partial f / \partial x$
◮ but also the system dimension, the smoothness of the solution, . . .
⇓
◮ various mathematical definitions exist
◮ new concepts needed: A-stability, I-stability, A($\alpha$)-stability, L-stability, . . .
Main message: stiff systems require (semi-)implicit methods!
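As a side remark (standard linear stability analysis, not spelled out on the slides), the stepsize threshold observed in the stiffness example follows from applying both Euler methods to the linear test equation $\dot x = \lambda x$ with $\lambda = -50$:

```latex
% Explicit Euler on \dot x = \lambda x is stable only inside |1 + h\lambda| <= 1,
% implicit Euler is stable for every h > 0 when Re(\lambda) <= 0 (A-stability).
\begin{align*}
  \text{explicit Euler:} \quad & x_n = (1 + h\lambda)\, x_{n-1},
      & |1 + h\lambda| \le 1 \;\Leftrightarrow\; h \le \tfrac{2}{|\lambda|} = 0.04, \\
  \text{implicit Euler:} \quad & x_n = \frac{x_{n-1}}{1 - h\lambda},
      & \Big|\tfrac{1}{1 - h\lambda}\Big| \le 1 \text{ for all } h > 0 \ (\lambda < 0).
\end{align*}
```

This matches the two plots above: $h = 0.038$ gives a damped explicit-Euler iteration, while $h = 0.04$ sits exactly on the stability boundary.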

8. Overview
Runge-Kutta methods split into explicit and implicit methods.

9. Overview
Runge-Kutta methods split into explicit and implicit methods; the focus now turns to implicit methods.

10. Implicit Runge-Kutta (IRK) methods
IRK as the natural generalization of ERK methods. ERK Butcher tableau (strictly lower triangular coefficient matrix):
$$\begin{array}{c|cccc}
0 & & & & \\
c_2 & a_{21} & & & \\
c_3 & a_{31} & a_{32} & & \\
\vdots & \vdots & & \ddots & \\
c_s & a_{s1} & a_{s2} & \cdots & \\
\hline
 & b_1 & b_2 & \cdots & b_s
\end{array}$$

11. Implicit Runge-Kutta (IRK) methods
IRK as the natural generalization of ERK methods: the strictly lower triangular ERK tableau becomes a full coefficient matrix,
$$\begin{array}{c|cccc}
0 & & & & \\
c_2 & a_{21} & & & \\
c_3 & a_{31} & a_{32} & & \\
\vdots & \vdots & & \ddots & \\
c_s & a_{s1} & a_{s2} & \cdots & \\
\hline
 & b_1 & b_2 & \cdots & b_s
\end{array}
\;\;\Rightarrow\;\;
\begin{array}{c|ccc}
c_1 & a_{11} & \cdots & a_{1s} \\
c_2 & a_{21} & \cdots & a_{2s} \\
\vdots & \vdots & & \vdots \\
c_s & a_{s1} & \cdots & a_{ss} \\
\hline
 & b_1 & \cdots & b_s
\end{array}$$

12. Implicit Runge-Kutta (IRK) methods
Same tableaux as above; simplest ERK example: explicit Euler, $x_n = x_{n-1} + h\, f_{n-1}$.

13. Implicit Runge-Kutta (IRK) methods
Same tableaux as above; simplest examples: explicit Euler $x_n = x_{n-1} + h\, f_{n-1}$ (ERK) vs. implicit Euler $x_n = x_{n-1} + h\, f_n$ (IRK).

14. Implicit Runge-Kutta (IRK) methods
IRK as the natural generalization of ERK methods, with the full Butcher tableau $(c, A, b)$ as above:
$$k_i = f\Big(t_{n-1} + c_i h,\; x_{n-1} + h \sum_{j=1}^{s} a_{ij} k_j\Big), \quad i = 1, \ldots, s,$$
$$x_n = x_{n-1} + h \sum_{i=1}^{s} b_i k_i.$$

15. Implicit Runge-Kutta (IRK) methods
Same stage equations as above.
pro: nice properties (order, stability)

16. Implicit Runge-Kutta (IRK) methods
Same stage equations as above.
pro: nice properties (order, stability)
con: large nonlinear system

17. Implicit Runge-Kutta (IRK) methods
Same stage equations as above.
pro: nice properties (order, stability)
con: large nonlinear system ⇒ Newton's method
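As a sketch of the Newton approach (my own, for a scalar ODE and the 2-stage Gauss method with its standard coefficients), the stage equations are solved as one nonlinear system in the stacked stage variables:

```python
import numpy as np

# 2-stage Gauss (Gauss-Legendre) IRK method, order 4 (standard coefficients).
SQ3 = np.sqrt(3.0)
A = np.array([[0.25, 0.25 - SQ3 / 6.0],
              [0.25 + SQ3 / 6.0, 0.25]])
b = np.array([0.5, 0.5])
c = np.array([0.5 - SQ3 / 6.0, 0.5 + SQ3 / 6.0])

def irk_step(f, t, x, h, newton_iters=10, tol=1e-12):
    """One IRK step for a scalar ODE: solve k_i = f(t + c_i h, x + h sum_j a_ij k_j)
    with Newton's method (finite-difference Jacobian), then update x."""
    s = len(b)
    k = np.full(s, f(t, x))            # initial guess: all stages equal to f(t, x)

    def residual(k):
        return np.array([k[i] - f(t + c[i] * h, x + h * (A[i] @ k)) for i in range(s)])

    for _ in range(newton_iters):
        r = residual(k)
        if np.max(np.abs(r)) < tol:
            break
        # finite-difference Jacobian of the residual w.r.t. the stacked stages
        J = np.zeros((s, s))
        eps = 1e-8
        for j in range(s):
            kp = k.copy()
            kp[j] += eps
            J[:, j] = (residual(kp) - r) / eps
        k = k - np.linalg.solve(J, r)
    return x + h * (b @ k)

# Example: one step on the stiff test problem from earlier.
f = lambda t, x: -50.0 * (x - np.cos(t))
print(irk_step(f, 0.0, 0.0, 0.1))
```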

18. Implicit Runge-Kutta (IRK) methods
Explicit ODE system: $\dot x(t) = f(t, x(t))$
$$k_i = f\Big(t_{n-1} + c_i h,\; x_{n-1} + h \sum_{j=1}^{s} a_{ij} k_j\Big), \quad i = 1, \ldots, s,$$
$$x_n = x_{n-1} + h \sum_{i=1}^{s} b_i k_i.$$

19. Implicit Runge-Kutta (IRK) methods
Explicit ODE system: $\dot x(t) = f(t, x(t))$, with stages $k_i = f\big(t_{n-1} + c_i h,\; x_{n-1} + h \sum_{j=1}^{s} a_{ij} k_j\big)$.
Implicit ODE/DAE (index 1): $0 = f(t, \dot x(t), x(t), z(t))$, with stages
$$0 = f\Big(t_{n-1} + c_i h,\; k_i,\; x_{n-1} + h \sum_{j=1}^{s} a_{ij} k_j,\; Z_i\Big), \quad i = 1, \ldots, s.$$
In both cases $x_n = x_{n-1} + h \sum_{i=1}^{s} b_i k_i$.

21. Collocation methods
An important family of IRK methods:
◮ distinct $c_i$'s (nonconfluent)
◮ polynomial $q(t)$ of degree $s$

22. Collocation methods
An important family of IRK methods:
◮ distinct $c_i$'s (nonconfluent)
◮ polynomial $q(t)$ of degree $s$ satisfying
$$q(t_{n-1}) = x_{n-1}, \qquad \dot q(t_{n-1} + c_i h) = f\big(t_{n-1} + c_i h,\; q(t_{n-1} + c_i h)\big), \quad i = 1, \ldots, s.$$
The continuous approximation gives $x_n = q(t_{n-1} + h)$.

23. Collocation methods
An important family of IRK methods:
◮ distinct $c_i$'s (nonconfluent)
◮ polynomial $q(t)$ of degree $s$ satisfying
$$q(t_{n-1}) = x_{n-1}, \qquad \dot q(t_{n-1} + c_i h) = f\big(t_{n-1} + c_i h,\; q(t_{n-1} + c_i h)\big), \quad i = 1, \ldots, s.$$
The continuous approximation gives $x_n = q(t_{n-1} + h)$.
NOTE: this is very popular in direct optimal control!

24. Collocation methods
How to implement a collocation method?
$$q(t_{n-1}) = x_{n-1}, \qquad \dot q(t_{n-1} + c_i h) = f\big(t_{n-1} + c_i h,\; q(t_{n-1} + c_i h)\big), \quad i = 1, \ldots, s.$$

25. Collocation methods
How to implement a collocation method? The conditions
$$q(t_{n-1}) = x_{n-1}, \qquad \dot q(t_{n-1} + c_i h) = f\big(t_{n-1} + c_i h,\; q(t_{n-1} + c_i h)\big), \quad i = 1, \ldots, s,$$
are nothing else than the IRK equations
$$k_i = f\Big(t_{n-1} + c_i h,\; x_{n-1} + h \sum_{j=1}^{s} a_{ij} k_j\Big), \quad i = 1, \ldots, s, \qquad x_n = x_{n-1} + h \sum_{i=1}^{s} b_i k_i,$$
where the Butcher table is defined by the collocation nodes $c_i$.
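To make "the Butcher table is defined by the collocation nodes" concrete: the standard construction (a sketch of my own, assuming the usual collocation theory rather than anything spelled out on the slides) integrates the Lagrange basis polynomials of the nodes, giving $a_{ij} = \int_0^{c_i} \ell_j(\tau)\,\mathrm{d}\tau$ and $b_j = \int_0^1 \ell_j(\tau)\,\mathrm{d}\tau$:

```python
import numpy as np
from numpy.polynomial import Polynomial

def butcher_from_nodes(c):
    """Build the collocation Butcher table for distinct nodes c_1..c_s:
    a_ij = integral of the j-th Lagrange basis polynomial from 0 to c_i,
    b_j  = integral of the j-th Lagrange basis polynomial from 0 to 1."""
    s = len(c)
    A = np.zeros((s, s))
    b = np.zeros(s)
    for j in range(s):
        # Lagrange basis polynomial ell_j with ell_j(c_m) = delta_{jm}
        ell = Polynomial([1.0])
        for m in range(s):
            if m != j:
                ell *= Polynomial([-c[m], 1.0]) / (c[j] - c[m])
        ell_int = ell.integ()          # antiderivative with ell_int(0) = 0
        b[j] = ell_int(1.0)
        for i in range(s):
            A[i, j] = ell_int(c[i])
    return A, b

# Example: the 2-stage Gauss nodes reproduce the order-4 Gauss Butcher table.
c = np.array([0.5 - np.sqrt(3) / 6, 0.5 + np.sqrt(3) / 6])
A, b = butcher_from_nodes(c)
print(A)   # [[1/4, 1/4 - sqrt(3)/6], [1/4 + sqrt(3)/6, 1/4]]
print(b)   # [1/2, 1/2]
```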

26. Collocation methods
Example: the Gauss methods

27. Collocation methods
Example: the Gauss methods
◮ roots of Legendre polynomials
◮ A-stable
◮ optimal order ($p = 2s$)
[Figure: curves for $s = 1$ ($p = 2$), $s = 2$ ($p = 4$), $s = 3$ ($p = 6$) on $[0, 1]$.]

28. Collocation methods
Example: the Gauss methods
◮ roots of Legendre polynomials
◮ A-stable
◮ optimal order ($p = 2s$)
[Figure: curves for $s = 1$ ($p = 2$), $s = 2$ ($p = 4$), $s = 3$ ($p = 6$) on $[0, 1]$.]
$$c_1 = \tfrac12, \quad s = 1,\ p = 2,$$
$$c_1 = \tfrac12 - \tfrac{\sqrt{3}}{6},\; c_2 = \tfrac12 + \tfrac{\sqrt{3}}{6}, \quad s = 2,\ p = 4,$$
$$c_1 = \tfrac12 - \tfrac{\sqrt{15}}{10},\; c_2 = \tfrac12,\; c_3 = \tfrac12 + \tfrac{\sqrt{15}}{10}, \quad s = 3,\ p = 6.$$

29. Collocation methods
Example: the Gauss methods (roots of Legendre polynomials, A-stable, optimal order $p = 2s$, nodes as above).
At least as popular: Radau IIA methods ($p = 2s - 1$, stiffly accurate, L-stable).
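As a quick cross-check (my own sketch, not on the slides), the Gauss nodes listed above are the Gauss–Legendre quadrature nodes shifted from $[-1, 1]$ to $[0, 1]$:

```python
import numpy as np

# Gauss collocation nodes on [0, 1]: shift the Gauss-Legendre quadrature
# nodes from [-1, 1]; the scaled weights give the Butcher weights b_i.
for s in (1, 2, 3):
    x, w = np.polynomial.legendre.leggauss(s)   # nodes/weights on [-1, 1]
    c = 0.5 * (x + 1.0)                         # roots of the shifted Legendre polynomial
    b = 0.5 * w
    print(f"s = {s}: c = {c}, b = {b}")
# s = 2 gives c = 1/2 -/+ sqrt(3)/6, s = 3 gives c = 1/2 and 1/2 -/+ sqrt(15)/10.
```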

30. Overview
Runge-Kutta methods split into explicit and implicit methods.

31. Overview
Runge-Kutta methods split into explicit, semi-implicit, and implicit methods.

32. Semi-implicit Runge-Kutta methods
The matrix $A$ is not strictly lower triangular . . .

33. Semi-implicit Runge-Kutta methods
The matrix $A$ is not strictly lower triangular . . . but there is a specific structure!
◮ Diagonal IRK (DIRK)
◮ Singly DIRK (SDIRK)
◮ Explicit SDIRK (ESDIRK)

34. Intermezzo: sensitivity propagation
Task of the integrator in nonlinear optimal control:
◮ $x_{k+1} = \Phi_k(x_k, u_k)$
◮ a nonlinear equality constraint

35. Intermezzo: sensitivity propagation
Task of the integrator in nonlinear optimal control:
◮ $x_{k+1} = \Phi_k(x_k, u_k)$
◮ a nonlinear equality constraint
↓
◮ linearization at $\bar w_k = (\bar x_k, \bar u_k)$:
$$0 = \Phi_k(\bar w_k) - x_{k+1} + \frac{\partial \Phi_k}{\partial w}(\bar w_k)\,(w_k - \bar w_k)$$
◮ integration and sensitivity generation is typically a major computational step

36. Intermezzo: sensitivity propagation
"integrate-then-differentiate"
◮ derivatives of the result
◮ Internal Numerical Differentiation (IND)
◮ direct IFT approach

37. Intermezzo: sensitivity propagation
"integrate-then-differentiate"
◮ derivatives of the result
◮ Internal Numerical Differentiation (IND)
◮ direct IFT approach
"differentiate-then-integrate"
◮ sensitivity equations
◮ extends the IVP (forward)
◮ or a new IVP (reverse)
⇒ They are different

38. Intermezzo: sensitivity propagation
"integrate-then-differentiate": $\dot x = f(x)$
◮ derivatives of the result
◮ Internal Numerical Differentiation (IND)
◮ direct IFT approach
"differentiate-then-integrate"
◮ sensitivity equations
◮ extends the IVP (forward)
◮ or a new IVP (reverse)
⇒ They are different . . . or not?

39. Intermezzo: sensitivity propagation
"integrate-then-differentiate": $\dot x = f(x)$, ⇓ integrate, $x_{k+1} = x_k + h\, f(x_k)$
◮ derivatives of the result
◮ Internal Numerical Differentiation (IND)
◮ direct IFT approach
"differentiate-then-integrate": $\dot X = F(X) = \begin{pmatrix} f(x) \\ \frac{\partial f}{\partial x}\, S \end{pmatrix}$
◮ sensitivity equations
◮ extends the IVP (forward)
◮ or a new IVP (reverse)
⇒ They are different . . . or not?

40. Intermezzo: sensitivity propagation
"integrate-then-differentiate": $\dot x = f(x)$, ⇓ integrate,
$$x_{k+1} = x_k + h\, f(x_k), \qquad S_{k+1} = S_k + h\, \frac{\partial f}{\partial x}(x_k)\, S_k$$
"differentiate-then-integrate": $\dot X = F(X) = \begin{pmatrix} f(x) \\ \frac{\partial f}{\partial x}\, S \end{pmatrix}$, ⇓ integrate,
$$X_{k+1} = X_k + h\, F(X_k)$$
⇒ They are different . . . or not? (For the Euler scheme, both orders yield the same recursion.)
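The point of this slide can be checked numerically: for the explicit Euler scheme, "differentiate-then-integrate" and "integrate-then-differentiate" produce the same recursion. A small sketch (my own, for the scalar stiff example used earlier):

```python
import numpy as np

# Scalar example from the stiffness slides; its Jacobian w.r.t. x is constant.
f = lambda t, x: -50.0 * (x - np.cos(t))
dfdx = lambda t, x: -50.0

def euler_with_sensitivity(x0, h, n_steps):
    """Forward Euler on x together with its sensitivity S = dx/dx0.
    Differentiating the Euler scheme or Euler-integrating the variational
    equation gives the same recursion:
        x_{k+1} = x_k + h f(x_k),   S_{k+1} = S_k + h df/dx(x_k) S_k."""
    x, S, t = x0, 1.0, 0.0
    for _ in range(n_steps):
        x, S = x + h * f(t, x), S + h * dfdx(t, x) * S
        t += h
    return x, S

x_end, S_end = euler_with_sensitivity(x0=1.0, h=0.001, n_steps=1000)
# Compare against a finite difference of the same (frozen-grid) scheme:
eps = 1e-6
x_pert, _ = euler_with_sensitivity(x0=1.0 + eps, h=0.001, n_steps=1000)
print(S_end, (x_pert - x_end) / eps)   # should agree up to O(eps)
```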

41. Intermezzo: sensitivity propagation
Variational Differential Equations ("differentiate-then-integrate"): solve an additional matrix differential equation,
$$\dot x = f(x), \quad x(0) = x_0, \quad x(t_N) = x_N,$$
$$\dot S = \frac{\partial f}{\partial x}(x)\, S, \quad S(0) = I, \quad S(t_N) = \frac{\mathrm{d} x_N}{\mathrm{d} x_0}.$$

42. Intermezzo: sensitivity propagation
Variational Differential Equations ("differentiate-then-integrate"): solve an additional matrix differential equation,
$$\dot x = f(x), \quad x(0) = x_0, \quad x(t_N) = x_N,$$
$$\dot S = \frac{\partial f}{\partial x}(x)\, S, \quad S(0) = I, \quad S(t_N) = \frac{\mathrm{d} x_N}{\mathrm{d} x_0}.$$
Very accurate at reasonable cost, but:
◮ one has to obtain expressions for $\frac{\partial f}{\partial x}(\cdot)$
◮ the computed sensitivity is not 100% identical to the derivative of the (discretized) integrator result $\Phi(\cdot)$
◮ what about implicit integration schemes?

43. Intermezzo: sensitivity propagation
External Numerical Differentiation (END), "integrate-then-differentiate".
Finite differences: perturb $x_0$ and call the integrator several times,
$$\frac{x(t_N;\, x_0 + \epsilon e_i) - x(t_N;\, x_0)}{\epsilon}.$$

44. Intermezzo: sensitivity propagation
External Numerical Differentiation (END), "integrate-then-differentiate".
Finite differences: perturb $x_0$ and call the integrator several times,
$$\frac{x(t_N;\, x_0 + \epsilon e_i) - x(t_N;\, x_0)}{\epsilon}.$$
Very easy to implement, but several problems:
◮ relatively expensive due to the overhead of error control
◮ how to choose the perturbation stepsize? Rule of thumb: $\epsilon = \sqrt{\mathrm{TOL}}$, where TOL is the integrator tolerance
◮ loss of half the digits of accuracy: if the integrator accuracy is TOL $= 10^{-4}$, the derivative has only two valid digits!

45. Intermezzo: sensitivity propagation
External Numerical Differentiation (END), "integrate-then-differentiate".
Finite differences: perturb $x_0$ and call the integrator several times,
$$\frac{x(t_N;\, x_0 + \epsilon e_i) - x(t_N;\, x_0)}{\epsilon}.$$
Very easy to implement, but several problems:
◮ relatively expensive due to the overhead of error control
◮ how to choose the perturbation stepsize? Rule of thumb: $\epsilon = \sqrt{\mathrm{TOL}}$, where TOL is the integrator tolerance
◮ loss of half the digits of accuracy: if the integrator accuracy is TOL $= 10^{-4}$, the derivative has only two valid digits!
◮ due to adaptivity, each call might use a different discretization grid: the output $x(t_N; x_0)$ is not differentiable!
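A minimal sketch of END (my own; it uses SciPy's adaptive solve_ivp as the black-box integrator, which is an assumption of the sketch, not something the slides prescribe): every call may internally choose a different step grid, and the difference quotient inherits only about half of the integrator's digits.

```python
import numpy as np
from scipy.integrate import solve_ivp

TOL = 1e-4
f = lambda t, x: -50.0 * (x - np.cos(t))

def integrate(x0, t_end=1.0):
    # Adaptive black-box integrator call: each call may pick a different grid.
    sol = solve_ivp(f, (0.0, t_end), [x0], rtol=TOL, atol=TOL)
    return sol.y[0, -1]

def end_sensitivity(x0):
    """External Numerical Differentiation: perturb x0 and re-call the integrator.
    Rule of thumb for the perturbation: eps = sqrt(TOL)."""
    eps = np.sqrt(TOL)
    return (integrate(x0 + eps) - integrate(x0)) / eps

print(end_sensitivity(1.0))   # roughly dx(t_N)/dx0, accurate to about sqrt(TOL)
```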

46. Intermezzo: sensitivity propagation
Internal Numerical Differentiation (IND), "integrate-then-differentiate".
Like END, but evaluate all perturbed trajectories $x^i$ simultaneously, with a frozen discretization grid. Up to round-off and linearization errors this is identical to the derivative of the numerical solution $\Phi(\cdot)$, but:
◮ how to choose the perturbation stepsize?

47. Intermezzo: sensitivity propagation
Internal Numerical Differentiation (IND), "integrate-then-differentiate".
Like END, but evaluate all perturbed trajectories $x^i$ simultaneously, with a frozen discretization grid. Up to round-off and linearization errors this is identical to the derivative of the numerical solution $\Phi(\cdot)$, but:
◮ how to choose the perturbation stepsize? Rule of thumb: $\epsilon = \sqrt{\mathrm{PREC}}$, where PREC is the machine precision.

48. Intermezzo: sensitivity propagation
Internal Numerical Differentiation (IND), "integrate-then-differentiate".
Like END, but evaluate all perturbed trajectories $x^i$ simultaneously, with a frozen discretization grid. Up to round-off and linearization errors this is identical to the derivative of the numerical solution $\Phi(\cdot)$, but:
◮ how to choose the perturbation stepsize? Rule of thumb: $\epsilon = \sqrt{\mathrm{PREC}}$, where PREC is the machine precision.
Note: adaptivity is based on the nominal trajectory only, and matrix factorizations can be reused in implicit methods, so IND is not only more accurate but also cheaper than END!
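A corresponding sketch of IND (again my own, using a fixed-step explicit Euler grid for simplicity): the nominal and perturbed trajectories share one frozen grid and one loop, so the difference quotient differentiates exactly the discrete map $\Phi(\cdot)$.

```python
import numpy as np

f = lambda t, x: -50.0 * (x - np.cos(t))

def ind_sensitivity(x0, h=1e-3, n_steps=1000):
    """Internal Numerical Differentiation (sketch): propagate the nominal and the
    perturbed trajectory simultaneously on the SAME (frozen) step grid, with the
    perturbation taken as the square root of machine precision."""
    eps = np.sqrt(np.finfo(float).eps)
    x, xp, t = x0, x0 + eps, 0.0
    for _ in range(n_steps):
        x, xp = x + h * f(t, x), xp + h * f(t, xp)   # identical grid for both
        t += h
    return (xp - x) / eps

print(ind_sensitivity(1.0))   # derivative of the discretized map, up to round-off
```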

49. Intermezzo: sensitivity propagation
Algorithmic Differentiation (AD), "integrate-then-differentiate".
Use Algorithmic Differentiation (AD) to differentiate each step of the integration scheme.

50. Intermezzo: sensitivity propagation
Algorithmic Differentiation (AD), "integrate-then-differentiate".
Use Algorithmic Differentiation (AD) to differentiate each step of the integration scheme.
Illustration: AD for Euler, $\dot x = f(x)$.

51. Intermezzo: sensitivity propagation
Algorithmic Differentiation (AD), "integrate-then-differentiate".
Use Algorithmic Differentiation (AD) to differentiate each step of the integration scheme.
Illustration: AD for Euler, $\dot x = f(x)$, ⇓ integrate, $x_{k+1} = x_k + h\, f(x_k)$.

52. Intermezzo: sensitivity propagation
Algorithmic Differentiation (AD), "integrate-then-differentiate".
Use Algorithmic Differentiation (AD) to differentiate each step of the integration scheme.
Illustration: AD for Euler, $\dot x = f(x)$, ⇓ integrate,
$$x_{k+1} = x_k + h\, f(x_k), \qquad S_{k+1} = S_k + h\, \frac{\partial f}{\partial x}(x_k)\, S_k.$$
Up to machine precision this is 100% identical to the derivative of the numerical solution $\Phi(\cdot)$, but:

53. Intermezzo: sensitivity propagation
Algorithmic Differentiation (AD), "integrate-then-differentiate".
Use Algorithmic Differentiation (AD) to differentiate each step of the integration scheme.
Illustration: AD for Euler, $\dot x = f(x)$, ⇓ integrate,
$$x_{k+1} = x_k + h\, f(x_k), \qquad S_{k+1} = S_k + h\, \frac{\partial f}{\partial x}(x_k)\, S_k.$$
Up to machine precision this is 100% identical to the derivative of the numerical solution $\Phi(\cdot)$, but:
◮ tailored implementations are needed (e.g. ACADO) . . .
◮ or the integrator and the right-hand side $f(\cdot)$ need to be compatible codes (e.g. C++ when using ADOL-C)

54. Overview
Classes of numerical methods: General Linear Methods.

55. Overview
Classes of numerical methods: General Linear Methods, split into multistep and single-step methods.

56. Overview
Classes of numerical methods: General Linear Methods; multistep → Linear Multistep methods, single-step → Runge-Kutta methods.

57. Overview
Classes of numerical methods: General Linear Methods; Linear Multistep and Runge-Kutta methods, each in explicit and implicit variants.

58. Overview
Classes of numerical methods: General Linear Methods and others . . .; Linear Multistep and Runge-Kutta methods, each in explicit and implicit variants.

59. Overview
Classes of numerical methods: General Linear Methods and others . . .; Linear Multistep and Runge-Kutta methods, each in explicit and implicit variants (focus next on Linear Multistep methods).

60. Multistep methods
Each method takes a step forward in time to find the next solution point, and this step can be based either:

61. Multistep methods
Each method takes a step forward in time to find the next solution point, and this step can be based either:
◮ on the previous point and its derivative, often with intermediate steps (single-step)

62. Multistep methods
Each method takes a step forward in time to find the next solution point, and this step can be based either:
◮ on the previous point and its derivative, often with intermediate steps (single-step)
◮ on a number of previous points and their derivatives (multistep)

63. Multistep methods
Each method takes a step forward in time to find the next solution point, and this step can be based either:
◮ on the previous point and its derivative, often with intermediate steps (single-step)
◮ on a number of previous points and their derivatives (multistep)
⇒ a good starting procedure is needed!

64. Linear multistep methods
Let us consider the simplified system $\dot x(t) = f(t, x(t))$. An $s$-step LM method uses $x_i$ and $f_i = f(t_i, x_i)$ for $i = n - s, \ldots, n - 1$ to compute $x_n \approx x(t_n)$:
$$x_n + a_{s-1} x_{n-1} + \ldots + a_0 x_{n-s} = h\,\big(b_s f_n + b_{s-1} f_{n-1} + \ldots + b_0 f_{n-s}\big).$$

65. Linear multistep methods
Let us consider the simplified system $\dot x(t) = f(t, x(t))$. An $s$-step LM method uses $x_i$ and $f_i = f(t_i, x_i)$ for $i = n - s, \ldots, n - 1$ to compute $x_n \approx x(t_n)$:
$$x_n + a_{s-1} x_{n-1} + \ldots + a_0 x_{n-s} = h\,\big(b_s f_n + b_{s-1} f_{n-1} + \ldots + b_0 f_{n-s}\big),$$
explicit ($b_s = 0$) ↔ implicit ($b_s \neq 0$).
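To illustrate the explicit case ($b_s = 0$), here is a small sketch (my own, not from the slides) of the classical two-step Adams–Bashforth method, including the starting step that multistep methods need:

```python
import numpy as np

f = lambda t, x: -50.0 * (x - np.cos(t))

def adams_bashforth_2(x0, h, n_steps):
    """Explicit 2-step linear multistep method (Adams-Bashforth, b_s = 0):
        x_n = x_{n-1} + h * (3/2 f_{n-1} - 1/2 f_{n-2}).
    A multistep method needs past values, so the first step here is taken
    with explicit Euler as a (simple) starting procedure."""
    ts = h * np.arange(n_steps + 1)
    xs = np.empty(n_steps + 1)
    xs[0] = x0
    xs[1] = xs[0] + h * f(ts[0], xs[0])          # starter: one explicit Euler step
    for n in range(2, n_steps + 1):
        xs[n] = xs[n - 1] + h * (1.5 * f(ts[n - 1], xs[n - 1])
                                 - 0.5 * f(ts[n - 2], xs[n - 2]))
    return ts, xs

ts, xs = adams_bashforth_2(x0=1.0, h=0.005, n_steps=200)
print(xs[-1])   # approximation of x(1) for the stiff test problem
```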
