Scientific Computing I
Part I, Module 4: Numerical Methods for ODE – Basic Numerical Methods
Michael Bader, Lehrstuhl Informatik V
Winter 2005/2006

Motivation: Direction Fields – "Following the Arrows"

- given: initial value problem
      ẏ(t) = f(t, y(t)),   y(t_0) = y_0
- the direction field illustrates the slope for a given time t_n and value y_n:
      ẏ_n = f(t_n, y_n)
  and is easily computable
- "follow the arrows" = make a small step in the given direction:
      y_{n+1} := y_n + τ·ẏ_n = y_n + τ·f(t_n, y_n)
- motivates the numerical scheme:
      y_0 := y_0
      y_{n+1} := y_n + τ·f(t_n, y_n)   for n = 0, 1, 2, ...
- idea: "follow the arrows"
  (figure: direction field with a sample solution p(t) over t)

Euler's Method

- this numerical scheme is called Euler's method:
      y_{n+1} := y_n + τ·f(t_n, y_n)
- it results from a finite difference approximation:
      (y_{n+1} − y_n)/τ ≈ ẏ_n = f(t_n, y_n)
  (difference quotient instead of derivative)
- or from truncation of the Taylor expansion:
      y(t_{n+1}) = y(t_n) + τ·ẏ(t_n) + O(τ²)

Euler's Method – 1D Examples

- model of Malthus, ṗ(t) = α·p(t):
      p_{n+1} := p_n + τ·α·p_n
- logistic growth, ṗ(t) = α·(1 − p(t)/β)·p(t):
      p_{n+1} := p_n + τ·α·(1 − p_n/β)·p_n
- logistic growth with threshold:
      p_{n+1} := p_n + τ·α·(1 − p_n/β)·(1 − p_n/δ)·p_n
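As an illustration (not part of the original slides), a minimal Python sketch of Euler's method applied to the logistic growth model; the parameter values α = 1, β = 10, the initial value, and the step size τ = 0.1 are arbitrary choices for demonstration:

```python
def euler(f, t0, y0, tau, n_steps):
    """Explicit Euler: y_{n+1} = y_n + tau * f(t_n, y_n)."""
    t, y = t0, y0
    trajectory = [y0]
    for _ in range(n_steps):
        y = y + tau * f(t, y)
        t = t + tau
        trajectory.append(y)
    return trajectory

# logistic growth p'(t) = alpha * (1 - p/beta) * p; parameters are illustrative
alpha, beta = 1.0, 10.0
f = lambda t, p: alpha * (1.0 - p / beta) * p

p = euler(f, t0=0.0, y0=1.0, tau=0.1, n_steps=100)
print(p[-1])   # approaches the carrying capacity beta = 10
```

The same `euler` function works unchanged for any right-hand side f(t, y), which is exactly the appeal of the scheme.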
Euler's Method in 2D

- Euler's method is easily extended to systems of ODE:
      y_{n+1} := y_n + τ·f(t_n, y_n)
- example: nonlinear extinction model
      ṗ(t) = (71/8 − 23/12·p(t) − 25/12·q(t))·p(t)
      q̇(t) = (73/8 − 25/12·p(t) − 23/12·q(t))·q(t)
- Euler's method:
      p_{n+1} = p_n + τ·(71/8 − 23/12·p_n − 25/12·q_n)·p_n
      q_{n+1} = q_n + τ·(73/8 − 25/12·p_n − 23/12·q_n)·q_n

Discretized Model vs. Discrete Model

- simplest example: model of Malthus (decay)
      p_{n+1} := p_n − τ·α·p_n,   α > 0
- compare to the discrete model with decay rate δ ("percentage"):
      p_{n+1} := p_n − δ·p_n,   δ > 0
- obvious restriction in the discrete model: δ < 1
- obvious restriction for τ in the discretized model?
      τ·α < 1  ⇒  τ < α^{−1}
- not that simple in nonlinear models or systems of ODE!

Implicit Euler

- Euler's method ("explicit Euler"):
      y_{n+1} := y_n + τ·f(t_n, y_n)
- implicit Euler:
      y_{n+1} := y_n + τ·f(t_{n+1}, y_{n+1})
- an explicit formula for y_{n+1} is not immediately available
- to do: solve the equation for y_{n+1}

Implicit Euler – Examples

- example: model of Malthus
      p_{n+1} := p_n + τ·α·p_{n+1}  ⇒  p_{n+1} = 1/(1 − τ·α)·p_n
- correct (discrete) model, i.e. 0 < (1 − τ·α)^{−1} < 1?
  - α < 0: holds for any τ > 0
  - α > 0: then τ < α^{−1} required!
- in physics, α < 0 is more frequent (damped systems, friction, ...)
- implicit schemes are preferred when explicit schemes require very small τ

Implicit Euler – 2D Example

- example: arms race
      p_{n+1} = b_1 + a_{11}·p_{n+1} + a_{12}·q_{n+1}
      q_{n+1} = b_2 + a_{21}·p_{n+1} + a_{22}·q_{n+1}
- solve a linear system of equations:
      (1 − a_{11})·p_{n+1} − a_{12}·q_{n+1} = b_1
      −a_{21}·p_{n+1} + (1 − a_{22})·q_{n+1} = b_2

Local Discretisation Error

- local influence of using differences instead of derivatives
- example: Euler's method
      l(τ) = max_{[a,b]} ‖ (y(t + τ) − y(t))/τ − f(t, y(t)) ‖
- memory hook: insert the exact solution y(t) into
      (y_{n+1} − y_n)/τ − ẏ_n
- a numerical scheme is called consistent if (for each time step n)
      l(τ) → 0  for  τ → 0
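For the linear decay model ṗ(t) = α·p(t) with α < 0, the implicit Euler equation can be solved in closed form, p_{n+1} = p_n/(1 − τ·α). A small sketch (the value α = −50 and the step size τ = 0.1 are arbitrary, chosen so that the explicit scheme is far outside its admissible τ range):

```python
# Decay model p'(t) = a * p(t), a < 0 (model of Malthus from the slides).
# Explicit Euler multiplies by (1 + tau*a); implicit Euler by 1/(1 - tau*a).
a, tau, steps = -50.0, 0.1, 20

p_expl = p_impl = 1.0
for _ in range(steps):
    p_expl = p_expl + tau * a * p_expl    # explicit: factor 1 + tau*a = -4
    p_impl = p_impl / (1.0 - tau * a)     # implicit: factor 1/6

print(p_expl)   # blows up: (-4)**20, about 1e12
print(p_impl)   # decays toward 0, as the exact solution e^{a t} does
```

This shows the point of the slide: for α < 0 the implicit factor 1/(1 − τ·α) lies in (0, 1) for every τ, so the implicit scheme behaves correctly even where the explicit one diverges.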
Global Discretisation Error

- compare the numerical solution with the exact solution
- example: Euler's method
      e(τ) = max_{[a,b]} { ‖ y_k − y(t_k) ‖ }
  (y(t) the exact solution; y_k the solution of the discretized equation)
- a numerical scheme is called convergent if
      e(τ) → 0  for  τ → 0

Order of Consistency/Convergence

- a numerical scheme is called consistent of order p (p-th order consistent) if
      l(τ) = O(τ^p)
- a numerical scheme is called convergent of order p (p-th order convergent) if
      e(τ) = O(τ^p)

Part II: Advanced Numerical Methods

Runge-Kutta Methods

- 1st idea: use additional evaluations of f, e.g.
      y_{n+1} = g(y_n, f(t_n, y_n), f(t_{n+1}, y_{n+1}))
  open questions: where to obtain y_{n+1}, how to choose g
- 2nd idea: use numerical approximations for the missing values of y:
      y_{n+1} ≈ y_n + τ·f(t_n, y_n)
  ⇒  y_{n+1} = g( y_n, f(t_n, y_n), f(t_{n+1}, y_n + τ·f(t_n, y_n)) )

Runge-Kutta Methods of 2nd Order

- 3rd idea: choose g such that the order of consistency is maximal
- example: 2nd-order Runge-Kutta ("method of Heun"):
      y_{n+1} = y_n + τ/2·( f(t_n, y_n) + f(t_{n+1}, y_n + τ·f(t_n, y_n)) )
- further example: modified Euler (also 2nd order):
      y_{n+1} = y_n + τ·f( t_n + τ/2, y_n + τ/2·f(t_n, y_n) )

Runge-Kutta Method of 4th Order

- classical 4th-order Runge-Kutta; intermediate steps:
      k_1 = f(t_n, y_n)
      k_2 = f(t_n + τ/2, y_n + τ/2·k_1)
      k_3 = f(t_n + τ/2, y_n + τ/2·k_2)
      k_4 = f(t_n + τ, y_n + τ·k_3)
- explicit scheme:
      y_{n+1} = y_n + τ/6·( k_1 + 2·k_2 + 2·k_3 + k_4 )
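The four intermediate steps translate directly into code. A sketch of one classical 4th-order Runge-Kutta step, tested (with illustrative parameters) on the problem ẏ = −2y + 1, y(0) = 1, whose exact solution (e^{−2t} + 1)/2 appears later in the stability slides:

```python
import math

def rk4_step(f, t, y, tau):
    """One step of the classical 4th-order Runge-Kutta scheme."""
    k1 = f(t, y)
    k2 = f(t + tau / 2, y + tau / 2 * k1)
    k3 = f(t + tau / 2, y + tau / 2 * k2)
    k4 = f(t + tau, y + tau * k3)
    return y + tau / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda t, y: -2.0 * y + 1.0
tau, t, y = 0.1, 0.0, 1.0
for _ in range(10):            # integrate up to t = 1
    y = rk4_step(f, t, y, tau)
    t += tau

exact = (math.exp(-2.0) + 1.0) / 2.0
print(abs(y - exact))          # small error, consistent with O(tau^4)
```

Note the cost: four evaluations of f per step, which the summary slide cites as the main drawback when f is expensive.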
Multistep Methods

- 1st idea: use previous steps for the computation:
      y_{n+1} = g(y_n, y_{n−1}, ..., y_{n−q+1})
- 2nd idea: use the integral form of the ODE
      ẏ(t) = f(t, y(t))
      ∫_{t_n}^{t_{n+1}} ẏ(t) dt = ∫_{t_n}^{t_{n+1}} f(t, y(t)) dt
- compute the integral and obtain:
      y(t_{n+1}) − y(t_n) = ∫_{t_n}^{t_{n+1}} f(t, y(t)) dt = ?

Multistep Methods and Numerical Quadrature

- 3rd idea: use a numerical method for the integration
  → interpolate f by a polynomial p:
      y(t_{n+1}) − y(t_n) = ∫_{t_n}^{t_{n+1}} f(t, y(t)) dt ≈ ∫_{t_n}^{t_{n+1}} p(t) dt
  where
      p(t_j) = f(t_j, y(t_j))   for j = n − s + 1, ..., n
- compute the integral and obtain a quadrature rule:
      y_{n+1} = y_n + ∑_{j = n−s+1}^{n} α_j·f(t_j, y_j)

Adams-Bashforth

- s = 1 ⇒ use y_n only (leads to Euler's method):
      p(t) = f(t_n, y_n),   y_{n+1} = y_n + τ·f(t_n, y_n)
- s = 2 ⇒ use y_{n−1} and y_n:
      p(t) = (t_n − t)/τ·f(t_{n−1}, y_{n−1}) + (t − t_{n−1})/τ·f(t_n, y_n)
      y_{n+1} = y_n + τ/2·( 3·f(t_n, y_n) − f(t_{n−1}, y_{n−1}) )
- usually consistent of s-th order
- modified at the start (no previous values available)

Adams-Moulton

- use the idea of Adams-Bashforth, but include the value y_{n+1} ⇒ implicit scheme
- first order: implicit Euler
      p(t) = f(t_{n+1}, y_{n+1}),   y_{n+1} = y_n + τ·f(t_{n+1}, y_{n+1})
- second order:
      y_{n+1} = y_n + τ/2·( f(t_n, y_n) + f(t_{n+1}, y_{n+1}) )
- how to obtain y_{n+1}? solve a (nonlinear) equation ⇒ difficult!
- easier and more common: predictor-corrector approach

Problems for Numerical Methods for ODE

Possible problems:
- ill-conditioned problems:
  small changes in the input ⇒ big changes in the exact solution of the ODE
- instability:
  big errors in the numerical solution compared to the exact solution
  (possible for arbitrarily small time steps, although the method is consistent)
- stiffness:
  small time steps required for acceptable errors in the approximate solution
  (although the exact solution is smooth)

Ill-Conditioned Problems

- small changes in the input entail completely different results
- numerical treatment of such problems is always difficult!
- discriminate: only at critical points? everywhere?
- possible risks: non-precise input, round-off errors, ...
- question: what are you interested in?
  - really the solution for a specific initial condition?
  - statistical information on the solution?
  - general behaviour (patterns)?
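As an illustration (not from the slides), a sketch of the two-step Adams-Bashforth scheme, including the modified start: since no previous value exists at n = 0, the first step is bootstrapped with one explicit Euler step. Test problem and parameters are illustrative:

```python
import math

def ab2(f, t0, y0, tau, n_steps):
    """Two-step Adams-Bashforth with an Euler step as startup."""
    ys = [y0, y0 + tau * f(t0, y0)]          # modified start: explicit Euler
    for n in range(1, n_steps):
        t_n = t0 + n * tau
        t_nm1 = t0 + (n - 1) * tau
        # y_{n+1} = y_n + tau/2 * (3 f(t_n, y_n) - f(t_{n-1}, y_{n-1}))
        ys.append(ys[n] + tau / 2 * (3 * f(t_n, ys[n]) - f(t_nm1, ys[n - 1])))
    return ys

f = lambda t, y: -2.0 * y + 1.0              # exact solution: (e^{-2t} + 1)/2
ys = ab2(f, t0=0.0, y0=1.0, tau=0.01, n_steps=100)   # integrate to t = 1
exact = (math.exp(-2.0) + 1.0) / 2.0
print(abs(ys[-1] - exact))                   # small, consistent with 2nd order
```

Only one new evaluation of f is needed per step (the value f(t_{n−1}, y_{n−1}) can be reused), which is the efficiency advantage over Runge-Kutta methods noted in the summary.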
Stability

- example:
      ẏ(t) = −2·y(t) + 1,   y(0) = 1
- exact solution: y(t) = 1/2·(e^{−2t} + 1)
- well-conditioned:
      y_ε(0) = 1 + ε  ⇒  y_ε(t) − y(t) = ε·e^{−2t}
- use the midpoint rule (a multistep scheme):
      y_{n+1} = y_{n−1} + 2τ·f(t_n, y_n)
- leads to the numerical scheme:
      y_{n+1} = y_{n−1} + 2τ·(1 − 2·y_n)

Stability (2)

- observation for the 2-step rule y_{n+1} = y_{n−1} + 2τ·(1 − 2·y_n),
  e.g. y_2 = y_0 + 2τ·(1 − 2·y_1)
- start with exact initial values: y_0 = y(0) and y_1 = y(τ)
- numerical results for different step sizes τ:
      τ = 1.0   ⇒  y_9 = −4945.5,   y_10 = 20953.9
      τ = 0.1   ⇒  y_79 = −1725.3,  y_80 = 2105.7
      τ = 0.01  ⇒  y_999 = −154.6,  y_1000 = 158.7
- the midpoint rule is 2nd-order consistent, but does not converge here:
  oscillations, i.e. unstable behaviour

Stability (3)

- reason: the difference equation generates spurious solutions
- analysis: roots μ_i of the characteristic polynomial; all |μ_i| < 1?
- stability of ODE schemes:
  - single-step schemes: always stable
  - multistep schemes: additional stability conditions
- in general: consistency + stability = convergence

Stiff Equations

- example:
      ẏ(t) = −1000·y(t) + 1000,   y(0) = y_0 = 2
- exact solution: y(t) = e^{−1000t} + 1
- explicit Euler:
      y_{k+1} = y_k + τ·(−1000·y_k + 1000)
              = (1 − 1000τ)·y_k + 1000τ
              = (1 − 1000τ)^{k+1} + 1
- oscillations and divergence for τ > 0.002
- why that? consistency and stability are asymptotic terms!
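The stiff example above can be reproduced in a few lines (not from the slides; the step size τ = 0.01 is chosen deliberately above the stability limit τ < 0.002 of the explicit scheme). For this linear right-hand side the implicit Euler equation is again solvable in closed form:

```python
# Stiff example: y'(t) = -1000*y + 1000, y(0) = 2, exact y(t) = e^{-1000 t} + 1.
tau, steps = 0.01, 100

y_expl = y_impl = 2.0
for _ in range(steps):
    # explicit Euler: amplification factor (1 - 1000*tau) = -9
    y_expl = y_expl + tau * (-1000.0 * y_expl + 1000.0)
    # implicit Euler: solve y_{k+1} = y_k + tau*(-1000*y_{k+1} + 1000)
    y_impl = (y_impl + 1000.0 * tau) / (1.0 + 1000.0 * tau)

print(y_expl)   # oscillates and blows up like 9^k
print(y_impl)   # stays close to the smooth limit y = 1
```

The implicit scheme damps the stiff transient for every τ, which is exactly the remedy recommended in the stiff-equations summary.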
Stiff Equations – Summary

- typical situation: one term in the ODE demands a very small time step,
  but does not contribute much to the solution
- remedy: use implicit (or semi-implicit) methods

Summary

- Runge-Kutta methods:
  - multiple evaluations of f per step (expensive, if f is expensive to compute)
  - stable, well-behaved, easy to implement
- multistep methods:
  - higher order, but only one new evaluation of f per step
    (interesting, if f is expensive to compute)
  - stability problems; behave "like wild horses"
  - in practice: do not use uniform τ and s
- implicit methods:
  - for stiff equations
  - most often used as corrector scheme
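To illustrate the predictor-corrector idea from the summary (my own minimal example, not from the slides): an explicit Euler step predicts y_{n+1}, and the implicit 2nd-order Adams-Moulton (trapezoidal) rule then corrects it, so no nonlinear equation has to be solved. This particular pairing reproduces the method of Heun from Part II:

```python
import math

def predictor_corrector_step(f, t, y, tau):
    """One predictor-corrector step: Euler predictor, trapezoidal corrector."""
    y_pred = y + tau * f(t, y)                            # predictor (explicit Euler)
    return y + tau / 2 * (f(t, y) + f(t + tau, y_pred))   # corrector (Adams-Moulton, s=2)

f = lambda t, y: -2.0 * y + 1.0      # test problem from the stability slides
t, y, tau = 0.0, 1.0, 0.1
for _ in range(10):                  # integrate up to t = 1
    y = predictor_corrector_step(f, t, y, tau)
    t += tau

exact = (math.exp(-2.0) + 1.0) / 2.0
print(abs(y - exact))                # small 2nd-order error
```

In practice the corrector may also be applied more than once per step, or the predictor may itself be a higher-order Adams-Bashforth step; this sketch shows only the basic mechanism.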