The Euler method

Euler proposed a simple numerical scheme in approximately 1770; this can be used for a system of first order equations. The idea is to treat the solution as though it had constant derivative in each time step:

\[
\begin{aligned}
y_1 &= y_0 + hf(y_0), \\
y_2 &= y_1 + hf(y_1), \\
y_3 &= y_2 + hf(y_2), \\
y_4 &= y_3 + hf(y_3).
\end{aligned}
\]

[Figure: the piecewise-linear Euler approximation built up step by step, with slopes $f(y_0), f(y_1), f(y_2), f(y_3)$ over the grid points $x_0, x_1, x_2, x_3, x_4$.]

Scientific Computation and Differential Equations – p. 6/36
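The rule above can be sketched in a few lines (a sketch with names of our own choosing, not code from the slides):

```python
def euler(f, y0, h, n_steps):
    """Apply Euler's rule y_{n+1} = y_n + h*f(y_n) for n_steps steps.

    Treats the derivative as constant over each step of size h and
    returns the whole sequence [y_0, y_1, ..., y_{n_steps}].
    """
    ys = [y0]
    for _ in range(n_steps):
        ys.append(ys[-1] + h * f(ys[-1]))  # constant-slope update
    return ys

# Test problem: y' = y, y(0) = 1, with exact solution e^x.
ys = euler(lambda y: y, 1.0, 0.1, 10)
```

With $h = 0.1$ the endpoint value `ys[-1]` $= 1.1^{10} \approx 2.594$ approximates $e \approx 2.718$ only crudely; the error shrinks just linearly with $h$, which is what motivates the higher-order methods that follow.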
More modern methods attempt to improve on the Euler method by

1. Using more past history – Linear multistep methods
2. Doing more complicated calculations in each step – Runge–Kutta methods
3. Doing both of these – General linear methods

Scientific Computation and Differential Equations – p. 7/36
Some important dates

1883  Adams & Bashforth       Linear multistep methods
1895  Runge                   Runge–Kutta methods
1901  Kutta
1925  Nyström                 Special methods for second order equations
1926  Moulton                 Adams–Moulton methods
1952  Curtiss & Hirschfelder  Stiff problems

Scientific Computation and Differential Equations – p. 8/36
Linear multistep methods

We will write the differential equation in autonomous form

\[
y'(x) = f(y(x)), \qquad y(x_0) = y_0,
\]

and the aim, for the moment, will be to calculate approximations to $y(x_i)$, where

\[
x_i = x_0 + hi, \qquad i = 1, 2, 3, \ldots,
\]

and $h$ is the "stepsize".

Linear multistep methods base the approximation to $y(x_n)$ on a linear combination of approximations to $y(x_{n-i})$ and approximations to $y'(x_{n-i})$, $i = 1, 2, \ldots, k$.

Scientific Computation and Differential Equations – p. 9/36
Write $y_i$ as the approximation to $y(x_i)$ and $f_i$ as the approximation to $y'(x_i) = f(y(x_i))$.

A linear multistep method can be written as

\[
y_n = \sum_{i=1}^{k} \alpha_i y_{n-i} + h \sum_{i=0}^{k} \beta_i f_{n-i}.
\]

This is a 1-stage, $2k$-value method.

1 stage? One evaluation of $f$ per step.

$2k$ values? This many quantities are passed between steps.

$\beta_0 = 0$: explicit. $\beta_0 \ne 0$: implicit.

Scientific Computation and Differential Equations – p. 10/36
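As a sketch (the function name and the fixed-point solver are our own; the coefficient conventions are those of the formula above), one step of the general $k$-step method might look like:

```python
import math

def lmm_step(alpha, beta, y_hist, f_hist, h, f):
    """One step of y_n = sum_{i=1}^k alpha_i*y_{n-i} + h*sum_{i=0}^k beta_i*f_{n-i}.

    alpha = (alpha_1, ..., alpha_k), beta = (beta_0, beta_1, ..., beta_k),
    y_hist = [y_{n-1}, ..., y_{n-k}], f_hist = [f_{n-1}, ..., f_{n-k}].
    If beta_0 != 0 the method is implicit; here y_n is resolved by simple
    fixed-point iteration (a sketch -- stiff solvers use Newton iteration).
    """
    known = sum(a * yv for a, yv in zip(alpha, y_hist)) \
          + h * sum(b * fv for b, fv in zip(beta[1:], f_hist))
    y = known
    if beta[0] != 0.0:
        for _ in range(50):
            y = known + h * beta[0] * f(y)
    return y

f = lambda y: y
h = 0.01

# Explicit example: 2-step Adams-Bashforth, beta_0 = 0.
ys = [1.0, math.exp(h)]                      # exact starting values
for n in range(2, 101):
    ys.append(lmm_step((1.0, 0.0), (0.0, 1.5, -0.5),
                       [ys[-1], ys[-2]], [f(ys[-1]), f(ys[-2])], h, f))
err_ab2 = abs(ys[100] - math.e)

# Implicit example: backward Euler, k = 1 with beta_0 = 1.
y_be = lmm_step((1.0,), (1.0, 0.0), [1.0], [1.0], 0.1, f)
```

The two examples illustrate the $\beta_0 = 0$ / $\beta_0 \ne 0$ distinction: the Adams–Bashforth step needs no iteration, while the backward Euler step solves $y_n = y_{n-1} + h f(y_n)$.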
Runge–Kutta methods

A Runge–Kutta method computes $y_n$ in terms of a single input $y_{n-1}$ and $s$ stages $Y_1, Y_2, \ldots, Y_s$, where

\[
Y_i = y_{n-1} + h \sum_{j=1}^{s} a_{ij} f(Y_j), \qquad i = 1, 2, \ldots, s,
\]

\[
y_n = y_{n-1} + h \sum_{i=1}^{s} b_i f(Y_i).
\]

This is an $s$-stage, 1-value method.

It is natural to ask if there are useful methods which are multistage (as for Runge–Kutta methods) and multivalue (as for linear multistep methods).

Scientific Computation and Differential Equations – p. 11/36
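For an explicit method, where $a_{ij} = 0$ for $j \ge i$, the stages can be evaluated one after another. A sketch (names are our own; the classical fourth order tableau is used only as an illustration):

```python
def rk_step(A, b, y, h, f):
    """One Runge-Kutta step: Y_i = y + h*sum_j A[i][j]*f(Y_j),
    then y_new = y + h*sum_i b[i]*f(Y_i).  Assumes A is strictly
    lower triangular, so each stage uses only earlier derivatives."""
    F = []
    for i in range(len(b)):
        Yi = y + h * sum(A[i][j] * F[j] for j in range(i))
        F.append(f(Yi))
    return y + h * sum(bi * Fi for bi, Fi in zip(b, F))

# The classical fourth order method as an example tableau:
A = [[0,   0,   0, 0],
     [1/2, 0,   0, 0],
     [0,   1/2, 0, 0],
     [0,   0,   1, 0]]
b = [1/6, 1/3, 1/3, 1/6]
y1 = rk_step(A, b, 1.0, 0.1, lambda y: y)   # one step of y' = y
```

On $y' = y$ one such step reproduces the degree-4 Taylor polynomial of $e^h$, so the local error is $O(h^5)$.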
In other words, we ask if there is any value in completing this diagram:

              General linear methods
             /                      \
      Runge–Kutta            Linear multistep
             \                      /
                      Euler

Scientific Computation and Differential Equations – p. 12/36
General linear methods

We will consider methods characterised by an $(s + r) \times (s + r)$ partitioned matrix of the form

\[
\left[\begin{array}{c|c} A & U \\ \hline B & V \end{array}\right],
\]

where $A$ is $s \times s$, $U$ is $s \times r$, $B$ is $r \times s$ and $V$ is $r \times r$.

The $r$ values input to step $n$ will be denoted by $y_i^{[n-1]}$, $i = 1, 2, \ldots, r$, with corresponding output values $y_i^{[n]}$, and the stage values by $Y_i$, $i = 1, 2, \ldots, s$.

The stage derivatives will be denoted by $F_i = f(Y_i)$.

Scientific Computation and Differential Equations – p. 13/36
The formulae for computing the stages (and simultaneously the stage derivatives) are

\[
Y_i = h \sum_{j=1}^{s} a_{ij} F_j + \sum_{j=1}^{r} u_{ij} y_j^{[n-1]}, \qquad F_i = f(Y_i),
\]

for $i = 1, 2, \ldots, s$.

To compute the output values, use the formula

\[
y_i^{[n]} = h \sum_{j=1}^{s} b_{ij} F_j + \sum_{j=1}^{r} v_{ij} y_j^{[n-1]}, \qquad i = 1, 2, \ldots, r.
\]

Scientific Computation and Differential Equations – p. 14/36
For convenience, write

\[
Y = \begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_s \end{bmatrix}, \quad
F = \begin{bmatrix} F_1 \\ F_2 \\ \vdots \\ F_s \end{bmatrix}, \quad
y^{[n-1]} = \begin{bmatrix} y_1^{[n-1]} \\ y_2^{[n-1]} \\ \vdots \\ y_r^{[n-1]} \end{bmatrix}, \quad
y^{[n]} = \begin{bmatrix} y_1^{[n]} \\ y_2^{[n]} \\ \vdots \\ y_r^{[n]} \end{bmatrix},
\]

so that we can write the calculations in a step more simply as

\[
\begin{bmatrix} Y \\ y^{[n]} \end{bmatrix}
= \begin{bmatrix} A & U \\ B & V \end{bmatrix}
\begin{bmatrix} hF \\ y^{[n-1]} \end{bmatrix}.
\]

Scientific Computation and Differential Equations – p. 15/36
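The matrix formulation translates directly into code. A sketch (assuming an explicit method, i.e. $A$ strictly lower triangular; names are our own):

```python
import numpy as np

def glm_step(A, U, B, V, y_in, h, f):
    """One step of a general linear method:
       [Y; y_out] = [A U; B V] [hF; y_in].
    Stage i is formed from earlier scaled derivatives hF_j (j < i)
    and the r incoming values; y_out collects the r outputs."""
    A, U, B, V = (np.asarray(M, dtype=float) for M in (A, U, B, V))
    y_in = np.asarray(y_in, dtype=float)
    hF = np.zeros(A.shape[0])
    for i in range(A.shape[0]):
        Yi = A[i, :i] @ hF[:i] + U[i] @ y_in   # stage value Y_i
        hF[i] = h * f(Yi)                      # scaled stage derivative
    return B @ hF + V @ y_in

# Euler's method is the simplest case: s = r = 1, A = 0, U = B = V = 1.
y_out = glm_step([[0.0]], [[1.0]], [[1.0]], [[1.0]], [1.0], 0.1, lambda y: y)
```

With the Euler data above, one step from $y = 1$ with $h = 0.1$ on $y' = y$ gives $1.1$, as expected.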
Examples of general linear methods

We will look at five examples:

- A Runge–Kutta method
- A "re-use" method
- An Almost Runge–Kutta method
- An Adams-Bashforth/Adams-Moulton method
- A modified linear multistep method

Scientific Computation and Differential Equations – p. 16/36
A Runge–Kutta method

One of the famous families of fourth order methods of Kutta, written as a general linear method, is

\[
\left[\begin{array}{c|c} A & U \\ \hline B & V \end{array}\right] =
\left[\begin{array}{cccc|c}
0 & 0 & 0 & 0 & 1 \\
\theta & 0 & 0 & 0 & 1 \\
\tfrac12 - \tfrac{1}{8\theta} & \tfrac{1}{8\theta} & 0 & 0 & 1 \\
-\tfrac{2\theta-1}{2\theta} & -\tfrac{1}{2\theta} & 2 & 0 & 1 \\ \hline
\tfrac16 & 0 & \tfrac23 & \tfrac16 & 1
\end{array}\right].
\]

In a step from $x_{n-1}$ to $x_n = x_{n-1} + h$, the stages give approximations at $x_{n-1}$, $x_{n-1} + \theta h$, $x_{n-1} + \tfrac12 h$ and $x_{n-1} + h$.

We will look at the special case $\theta = -\tfrac12$.

Scientific Computation and Differential Equations – p. 17/36
In the special $\theta = -\tfrac12$ case

\[
\left[\begin{array}{c|c} A & U \\ \hline B & V \end{array}\right] =
\left[\begin{array}{cccc|c}
0 & 0 & 0 & 0 & 1 \\
-\tfrac12 & 0 & 0 & 0 & 1 \\
\tfrac34 & -\tfrac14 & 0 & 0 & 1 \\
-2 & 1 & 2 & 0 & 1 \\ \hline
\tfrac16 & 0 & \tfrac23 & \tfrac16 & 1
\end{array}\right].
\]

Because the derivative at $x_{n-1} + \theta h = x_{n-1} - \tfrac12 h = x_{n-2} + \tfrac12 h$ was evaluated in the previous step, we can try re-using this value.

This will save one function evaluation.

Scientific Computation and Differential Equations – p. 18/36
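A quick numerical check of this tableau (our own test, not part of the slides): one step on $y' = y$ should match $\exp(h)$ to within the $O(h^5)$ local error of a fourth order method.

```python
import math

# The theta = -1/2 method: A is the 4x4 block, b the last row of B.
A = [[0,    0,    0, 0],
     [-1/2, 0,    0, 0],
     [3/4,  -1/4, 0, 0],
     [-2,   1,    2, 0]]
b = [1/6, 0, 2/3, 1/6]

def rk_step(A, b, y, h, f):
    # explicit Runge-Kutta step: stage i uses only stages j < i
    F = []
    for i in range(len(b)):
        F.append(f(y + h * sum(A[i][j] * F[j] for j in range(i))))
    return y + h * sum(bi * Fi for bi, Fi in zip(b, F))

h = 0.1
err = abs(rk_step(A, b, 1.0, h, lambda y: y) - math.exp(h))  # about 8.5e-8
```

Note that $b_2 = 0$: the stage at $x_{n-1} - \tfrac12 h$ enters only through the later stages, which is exactly what makes the re-use idea below possible.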
A ‘re-use’ method

This gives the re-use method

\[
\left[\begin{array}{c|c} A & U \\ \hline B & V \end{array}\right] =
\left[\begin{array}{ccc|cc}
0 & 0 & 0 & 1 & 0 \\
\tfrac34 & 0 & 0 & 1 & -\tfrac14 \\
-2 & 2 & 0 & 1 & 1 \\ \hline
\tfrac16 & \tfrac23 & \tfrac16 & 1 & 0 \\
0 & 1 & 0 & 0 & 0
\end{array}\right].
\]

Why should this method not be preferred to a standard Runge–Kutta method? There are at least two reasons:

- Stepsize change is complicated and difficult
- The stability region is smaller

Scientific Computation and Differential Equations – p. 19/36
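A sketch of the re-use method in action on $y' = y$ (our own driver code; the first step's second input, $h f$ at $x_0 - \tfrac12 h$, is taken from the exact solution here, standing in for a starting procedure):

```python
import math

# [A U; B V] for the re-use method: s = 3 stages, r = 2 values.
A = [[0,   0, 0],
     [3/4, 0, 0],
     [-2,  2, 0]]
U = [[1, 0],
     [1, -1/4],
     [1, 1]]
B = [[1/6, 2/3, 1/6],
     [0,   1,   0]]     # second output: h*f at the midpoint, for re-use
V = [[1, 0],
     [0, 0]]

def glm_step(A, U, B, V, y_in, h, f):
    # explicit general linear method step: [Y; y_out] = [A U; B V][hF; y_in]
    hF = []
    for i in range(len(A)):
        Yi = sum(A[i][j] * hF[j] for j in range(i)) \
           + sum(U[i][j] * y_in[j] for j in range(len(y_in)))
        hF.append(h * f(Yi))
    return [sum(B[i][j] * hF[j] for j in range(len(hF)))
            + sum(V[i][j] * y_in[j] for j in range(len(y_in)))
            for i in range(len(B))]

f = lambda y: y
h = 0.01
y_in = [1.0, h * math.exp(-h / 2)]   # [y(0), h*y'(-h/2)], exact start
for _ in range(100):                 # integrate to x = 1, 3 f-calls/step
    y_in = glm_step(A, U, B, V, y_in, h, f)
err = abs(y_in[0] - math.e)
```

Each step makes only three evaluations of $f$ instead of four, yet fourth order accuracy is retained on this problem.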
To overcome these difficulties, we can do several things:

- Restore the missing stage,
- Move the first derivative calculation to the end of the previous step,
- Use a linear combination of the derivatives computed in the previous step (instead of just one of these),
- Re-organize the data passed between steps.

We then get methods like the following:

Scientific Computation and Differential Equations – p. 20/36
An ARK method

\[
\left[\begin{array}{c|c} A & U \\ \hline B & V \end{array}\right] =
\left[\begin{array}{cccc|ccc}
0 & 0 & 0 & 0 & 1 & 1 & \tfrac12 \\
\tfrac{1}{16} & 0 & 0 & 0 & 1 & \tfrac{7}{16} & \tfrac{1}{16} \\
-\tfrac14 & 2 & 0 & 0 & 1 & -\tfrac34 & -\tfrac14 \\
0 & \tfrac23 & \tfrac16 & 0 & 1 & \tfrac16 & 0 \\ \hline
0 & \tfrac23 & \tfrac16 & 0 & 1 & \tfrac16 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 \\
-\tfrac13 & 0 & -\tfrac23 & 2 & 0 & -1 & 0
\end{array}\right],
\]

where

\[
y_1^{[n]} \approx y(x_n), \qquad y_2^{[n]} \approx hy'(x_n), \qquad y_3^{[n]} \approx h^2 y''(x_n),
\]

with

\[
Y_1 \approx Y_3 \approx Y_4 \approx y(x_n), \qquad Y_2 \approx y(x_{n-1} + \tfrac12 h).
\]

Scientific Computation and Differential Equations – p. 21/36
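As a consistency check (our own, using the test problem $y' = y$), one step from the exact input vector $(y(0),\, hy'(0),\, h^2 y''(0)) = (1, h, h^2)$ reproduces $e^h$ to fourth order:

```python
import math

# [A U; B V] for the ARK method above: s = 4 stages, r = 3 values.
A = [[0,    0,   0,   0],
     [1/16, 0,   0,   0],
     [-1/4, 2,   0,   0],
     [0,    2/3, 1/6, 0]]
U = [[1, 1,    1/2],
     [1, 7/16, 1/16],
     [1, -3/4, -1/4],
     [1, 1/6,  0]]
B = [[0,    2/3, 1/6,  0],
     [0,    0,   0,    1],
     [-1/3, 0,   -2/3, 2]]
V = [[1, 1/6, 0],
     [0, 0,   0],
     [0, -1,  0]]

def glm_step(A, U, B, V, y_in, h, f):
    # explicit general linear method step: [Y; y_out] = [A U; B V][hF; y_in]
    hF = []
    for i in range(len(A)):
        Yi = sum(A[i][j] * hF[j] for j in range(i)) \
           + sum(U[i][j] * y_in[j] for j in range(len(y_in)))
        hF.append(h * f(Yi))
    return [sum(B[i][j] * hF[j] for j in range(len(hF)))
            + sum(V[i][j] * y_in[j] for j in range(len(y_in)))
            for i in range(len(B))]

h = 0.1
y_out = glm_step(A, U, B, V, [1.0, h, h * h], h, lambda y: y)
err = abs(y_out[0] - math.exp(h))   # local error, O(h^5)
```

The second and third outputs approximate $hy'(x_1)$ and $h^2 y''(x_1)$, as claimed; the $h^2 y''$ component is carried to lower accuracy, which ARK theory permits.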
The good things about this "Almost Runge–Kutta method" are:

- It has the same stability region as for a genuine Runge–Kutta method
- Unlike standard Runge–Kutta methods, the stage order is 2. This means that the stage values are computed to the same accuracy as an order 2 Runge–Kutta method.
- Although it is a multi-value method, both starting the method and changing stepsize are essentially cost-free operations.

Scientific Computation and Differential Equations – p. 22/36
An Adams-Bashforth/Adams-Moulton method

It is usual practice to combine Adams–Bashforth and Adams–Moulton methods as a predictor–corrector pair. For example, the ‘PECE’ method of order 3 computes a predictor $y_n^*$ and a corrector $y_n$ by the formulae

\[
y_n^* = y_{n-1} + h\left( \tfrac{23}{12} f(y_{n-1}) - \tfrac43 f(y_{n-2}) + \tfrac{5}{12} f(y_{n-3}) \right),
\]

\[
y_n = y_{n-1} + h\left( \tfrac{5}{12} f(y_n^*) + \tfrac23 f(y_{n-1}) - \tfrac{1}{12} f(y_{n-2}) \right).
\]

It might be asked: Is it possible to obtain improved order by using values of $y_{n-2}$, $y_{n-3}$ in the formulae?

Scientific Computation and Differential Equations – p. 23/36
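A sketch of this PECE pair (our own driver; the three starting values are taken from the exact solution of the test problem, where a production code would use a Runge–Kutta starter):

```python
import math

def abm3_pece_step(f, y, f1, f2, f3, h):
    """One PECE step of the order-3 Adams-Bashforth/Adams-Moulton pair.

    y = y_{n-1}; f1, f2, f3 = f(y_{n-1}), f(y_{n-2}), f(y_{n-3}).
    Predict with Adams-Bashforth, Evaluate, then Correct with Adams-Moulton.
    """
    y_star = y + h * (23/12 * f1 - 4/3 * f2 + 5/12 * f3)      # P, then E:
    return y + h * (5/12 * f(y_star) + 2/3 * f1 - 1/12 * f2)  # C

f = lambda y: y
h = 0.01
ys = [math.exp(i * h) for i in range(3)]     # exact starting values
for n in range(3, 101):
    ys.append(abm3_pece_step(f, ys[-1], f(ys[-1]), f(ys[-2]), f(ys[-3]), h))
err = abs(ys[100] - math.e)                  # global error, O(h^3)
```

Each step costs two evaluations of $f$ (one for the predictor's new point, one implicit in re-evaluating $f(y_{n-1})$ for the next step), which is the usual economy argument for Adams pairs.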