Math 211, Lecture #13: Runge-Kutta Methods
September 24, 2003
John C. Polking

Basic Problem
Numerically "solve" y′ = f(t, y) on the interval [a, b] with y(a) = y_0.
• Find a discrete set of points a = t_0 < t_1 < t_2 < · · · < t_{N-1} < t_N = b,
• and values y_0, y_1, y_2, . . . , y_{N-1}, y_N with y_j approximately equal to y(t_j).

Runge-Kutta vs Euler
• Both use a fixed step size h = (b − a)/N.
• Euler's method: y_k = y_{k-1} + f(t_{k-1}, y_{k-1}) · h
  – Uses one slope, f(t_{k-1}, y_{k-1}).
• Runge-Kutta methods: y_k = y_{k-1} + S · h
  – S is a weighted average of two or more slopes.
  – The slopes are chosen to increase the accuracy.
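The discretization above can be sketched in a few lines of Python (the helper name `time_grid` is mine, not from the lecture): it builds the uniform grid a = t_0 < t_1 < · · · < t_N = b with fixed step h = (b − a)/N.

```python
# Minimal sketch of the uniform time grid used by all the methods below.
def time_grid(a, b, N):
    h = (b - a) / N                       # fixed step size
    return [a + k * h for k in range(N + 1)], h

ts, h = time_grid(0.0, 1.0, 4)
# ts = [0.0, 0.25, 0.5, 0.75, 1.0], h = 0.25
```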
Second Order Runge-Kutta
The basic RK step is y_k = y_{k-1} + S · h.
• RK2 uses S = (1/2)(s_1 + s_2), where
  – s_1 = f(t_{k-1}, y_{k-1})
  – s_2 = f(t_{k-1} + h, y_{k-1} + s_1 · h)
• y_k = y_{k-1} + (1/2)(s_1 + s_2) · h;  t_k = t_{k-1} + h

Second Order Runge-Kutta – Algorithm
Input t_0 and y_0.
for k = 1 to N
  s_1 = f(t_{k-1}, y_{k-1})
  s_2 = f(t_{k-1} + h, y_{k-1} + s_1 · h)
  y_k = y_{k-1} + (1/2)(s_1 + s_2) · h
  t_k = t_{k-1} + h

Second Order RK – Error Analysis
• The truncation error at each step is ≤ C h^3.
• There are N = (b − a)/h steps, but the truncation error can propagate exponentially.
• Computation shows that
    Max error ≤ C (e^{L(b−a)} − 1) h^2,
  where C and L are constants that depend on f.
• Good news: the error decreases like h^2 as h decreases.
• Bad news: it can get exponentially large as b − a increases.
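The RK2 algorithm above can be sketched in Python (a hypothetical helper, not the course's rk2.m): each step averages the slope at the left endpoint with the slope at the Euler-predicted right endpoint, and the h^2 error behavior can be checked on y′ = y, whose exact solution is e^t.

```python
import math

def rk2(f, t0, tf, y0, h):
    N = round((tf - t0) / h)
    t, y = t0, y0
    for _ in range(N):
        s1 = f(t, y)                  # slope at the left endpoint
        s2 = f(t + h, y + s1 * h)     # slope at the Euler prediction
        y = y + 0.5 * (s1 + s2) * h
        t = t + h
    return y

# y' = y, y(0) = 1 has exact solution e^t; the RK2 error at t = 1
# should shrink roughly 4x when h is halved (h^2 behavior).
err_coarse = abs(rk2(lambda t, y: y, 0.0, 1.0, 1.0, 0.1) - math.e)
err_fine = abs(rk2(lambda t, y: y, 0.0, 1.0, 1.0, 0.05) - math.e)
```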
Fourth Order Runge-Kutta
The basic RK step is y_k = y_{k-1} + S · h.
• RK4 uses S = (1/6)(s_1 + 2 s_2 + 2 s_3 + s_4), where
  – s_1 = f(t_{k-1}, y_{k-1})
  – s_2 = f(t_{k-1} + h/2, y_{k-1} + s_1 · h/2)
  – s_3 = f(t_{k-1} + h/2, y_{k-1} + s_2 · h/2)
  – s_4 = f(t_{k-1} + h, y_{k-1} + s_3 · h)
• y_k = y_{k-1} + (1/6)(s_1 + 2 s_2 + 2 s_3 + s_4) · h

Fourth Order Runge-Kutta – Algorithm
Input t_0 and y_0.
for k = 1 to N
  s_1 = f(t_{k-1}, y_{k-1})
  s_2 = f(t_{k-1} + h/2, y_{k-1} + s_1 · h/2)
  s_3 = f(t_{k-1} + h/2, y_{k-1} + s_2 · h/2)
  s_4 = f(t_{k-1} + h, y_{k-1} + s_3 · h)
  y_k = y_{k-1} + (1/6)(s_1 + 2 s_2 + 2 s_3 + s_4) · h
  t_k = t_{k-1} + h

Fourth Order RK – Error Analysis
• The truncation error at each step is ≤ C h^5.
• There are N = (b − a)/h steps, but the truncation error can propagate exponentially.
• Computation shows that
    Max error ≤ C (e^{L(b−a)} − 1) h^4,
  where C and L are constants that depend on f.
• Good news: the error decreases like h^4 as h decreases.
• Bad news: it can get exponentially large as b − a increases.
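The RK4 algorithm above translates directly into Python (again a hypothetical helper, not the course's rk4.m); on y′ = y the error should shrink roughly 16x when h is halved, reflecting the h^4 behavior.

```python
import math

def rk4(f, t0, tf, y0, h):
    N = round((tf - t0) / h)
    t, y = t0, y0
    for _ in range(N):
        s1 = f(t, y)
        s2 = f(t + h / 2, y + s1 * h / 2)
        s3 = f(t + h / 2, y + s2 * h / 2)
        s4 = f(t + h, y + s3 * h)
        y = y + (s1 + 2 * s2 + 2 * s3 + s4) * h / 6
        t = t + h
    return y

# y' = y, y(0) = 1: compare the error at t = 1 for h and h/2.
err1 = abs(rk4(lambda t, y: y, 0.0, 1.0, 1.0, 0.1) - math.e)
err2 = abs(rk4(lambda t, y: y, 0.0, 1.0, 1.0, 0.05) - math.e)
```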
MATLAB Routines rk2.m and rk4.m
• Syntax: [t,y] = rk2(derfile, [t0, tf], y0, h);
  – derfile – derivative m-file defining the equation.
  – t0 – initial time; tf – final time.
  – y0 – initial value.
  – h – step size.

Experimental Error Analysis
• IVP: y′ = cos(t)/(2y − 2) with y(0) = 3.
• Exact solution: y(t) = 1 + √(4 + sin t).
• For several step sizes, solve using the Runge-Kutta methods and compare with the exact solution.
• For several step sizes, solve the IVP using Euler's method and the Runge-Kutta methods, and compare the errors of the three methods.
  – Use odesolvedemo.m.

Euler's Method – Algorithm
Input t_0 and y_0.
for k = 1 to N
  y_k = y_{k-1} + f(t_{k-1}, y_{k-1}) · h
  t_k = t_{k-1} + h
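The Euler part of the experiment above can be sketched in Python (the `euler` helper is mine; rk2.m, rk4.m, and odesolvedemo.m are MATLAB routines not reproduced here). It solves the test IVP and checks the error against the exact solution y(t) = 1 + √(4 + sin t); halving h should roughly halve the error, since Euler is first order.

```python
import math

def euler(f, t0, tf, y0, h):
    N = round((tf - t0) / h)
    t, y = t0, y0
    for _ in range(N):
        y = y + f(t, y) * h
        t = t + h
    return y

f = lambda t, y: math.cos(t) / (2 * y - 2)       # the test IVP's right-hand side
exact = lambda t: 1 + math.sqrt(4 + math.sin(t))  # exact solution, y(0) = 3

err_h = abs(euler(f, 0.0, 1.0, 3.0, 0.1) - exact(1.0))
err_h2 = abs(euler(f, 0.0, 1.0, 3.0, 0.05) - exact(1.0))
```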
Error Analysis – Euler's Method
• The truncation error at each step is ≤ C h^2.
• There are N = (b − a)/h steps, but the truncation error can grow exponentially.
• Computation shows that
    Maximum error ≤ C (e^{L(b−a)} − 1) h,
  where C and L are constants that depend on f.
• Good news: the error decreases as h decreases.
• Bad news: the error can get exponentially large as the length b − a of the interval increases.
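The "bad news" above can be seen numerically (an illustrative sketch, with names of my choosing): for y′ = y the constant L = 1, and with a fixed step size the Euler error at t = b grows roughly like e^b as the interval lengthens.

```python
import math

def euler_final(b, h):
    # Euler's method for y' = y, y(0) = 1, returning the value at t = b.
    y = 1.0
    for _ in range(round(b / h)):
        y = y + y * h
    return y

h = 0.01
err_b1 = abs(euler_final(1.0, h) - math.exp(1.0))
err_b2 = abs(euler_final(2.0, h) - math.exp(2.0))
err_b4 = abs(euler_final(4.0, h) - math.exp(4.0))
# The error grows rapidly with b even though h is unchanged.
```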