AM 205: Lecture 11


  1. AM 205: lecture 11
  ◮ Final project worth 30% of grade
  ◮ Due on Thursday December 10th at 5 PM on Canvas, along with associated code
  ◮ Completed in teams of two or three. (Single-person projects will be allowed with instructor permission.) All team members receive the same grade.
  ◮ Piazza is the best place to find teammates

  2. Very rough length guidelines

  Team members | Pages
  1            | 9
  2            | 14
  3            | 18

  ◮ The precise length of the write-up is not important. Scientific content is more important.
  ◮ Optional: submit a poster to the CS poster session on December 7th, 12 PM–2 PM in the Maxwell–Dworkin lobby.¹ IACS will cover the poster cost. A poster roughly counts as a 25% reduction in write-up length.

  ¹ This is the tentative date for this event. It will be confirmed shortly.

  3. AM 205: final project topic
  ◮ Find an application area of interest and apply methods from the course to it.
  ◮ Project must involve some coding. No purely theoretical projects allowed.
  ◮ Fine to take problems directly from research, within reason. It should be an aspect of a project that is carried out for this course, as opposed to something already ongoing.

  4. AM 205: project proposal
  By November 19th at 6 PM, each team should arrange a half-hour meeting with Chris, Raphaël, or Jordan to discuss a project idea and direction. Four points are automatically awarded for doing this. Nothing written is necessary; only the meeting is required. However, feel free to bring documents, papers, or other resources to the meeting. Total grade for the project: 60 points. A detailed breakdown is posted on the website.

  5. Finite Difference Approximations
  So far we have talked about finite difference formulae to approximate f'(x_i) at some specific point x_i.
  Question: What if we want to approximate f'(x) on an interval x ∈ [a, b]?
  Answer: We need to simultaneously approximate f'(x_i) for x_i, i = 1, ..., n.

  6. Differentiation Matrices
  We need a map from the vector F ≡ [f(x_1), f(x_2), ..., f(x_n)] ∈ R^n to the vector of derivatives F' ≡ [f'(x_1), f'(x_2), ..., f'(x_n)] ∈ R^n.
  Let F̃' denote our finite difference approximation to the vector of derivatives, i.e. F̃' ≈ F'.
  Differentiation is a linear operator², hence we expect the map from F to F̃' to be an n × n matrix.
  This is indeed the case, and this map is a differentiation matrix, D.

  ² Since (αf + βg)' = αf' + βg'

  7. Differentiation Matrices
  Row i of D corresponds to the finite difference formula for f'(x_i), since then D(i,:) F ≈ f'(x_i).
  e.g. for the forward difference approximation of f', the non-zero entries of row i are D_{ii} = -1/h, D_{i,i+1} = 1/h.
  This is a sparse matrix with two non-zero diagonals.

  8. Differentiation Matrices

    import numpy as np
    import matplotlib.pyplot as plt

    n = 100
    h = 1/(n-1)
    # Forward-difference differentiation matrix: -1/h on the main diagonal,
    # +1/h on the first superdiagonal
    D = np.diag(-np.ones(n)/h) + np.diag(np.ones(n-1)/h, 1)
    plt.spy(D)
    plt.show()

  [Figure: spy plot of D showing the two non-zero diagonals, nz = 199]

  9. Differentiation Matrices
  But what about the last row?

  [Figure: spy plot zoomed in on rows/columns 80–100, nz = 199]

  The forward difference entry D_{n,n+1} = 1/h falls outside the matrix and is ignored!

  10. Differentiation Matrices
  We can use the backward difference formula (which has the same order of accuracy) for row n instead: D_{n,n-1} = -1/h, D_{nn} = 1/h.

  [Figure: spy plot zoomed in on rows/columns 80–100, nz = 200]

  Python demo: Differentiation matrices
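
  A minimal sketch of this fix in NumPy (the test function sin(x) and the grid on [0, 1] are illustrative choices, not taken from the lecture code):

    import numpy as np

    n = 100
    h = 1.0/(n - 1)
    x = np.linspace(0.0, 1.0, n)

    # Forward differences everywhere...
    D = np.diag(-np.ones(n)/h) + np.diag(np.ones(n-1)/h, 1)
    # ...except the last row, which uses the backward difference formula
    D[-1, -2] = -1.0/h
    D[-1, -1] =  1.0/h

    # Quick check on f(x) = sin(x): D @ F should approximate cos(x)
    F = np.sin(x)
    print(np.max(np.abs(D @ F - np.cos(x))))   # roughly h/2, i.e. O(h)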

  11. Integration of ODE Initial Value Problems
  In this chapter we consider problems of the form
  y'(t) = f(t, y),    y(0) = y_0
  Here y(t) ∈ R^n and f : R × R^n → R^n.
  Writing this system out in full, we have:
  y'(t) = [y_1'(t), y_2'(t), ..., y_n'(t)]^T = [f_1(t, y), f_2(t, y), ..., f_n(t, y)]^T = f(t, y(t))
  This is a system of n coupled ODEs for the variables y_1, y_2, ..., y_n.

  12. ODE IVPs
  Initial Value Problem implies that we know y(0), i.e. y(0) = y_0 ∈ R^n is the initial condition.
  The order of an ODE is the highest-order derivative that appears. Hence y'(t) = f(t, y) is a first-order ODE system.

  13. ODE IVPs
  We only consider first-order ODEs since higher-order problems can be transformed to first order by introducing extra variables.
  For example, recall Newton’s Second Law:
  y''(t) = F(t, y, y')/m,    y(0) = y_0, y'(0) = v_0
  Let v = y'; then
  y'(t) = v(t),    v'(t) = F(t, y, v)/m
  and y(0) = y_0, v(0) = v_0.
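
  As a concrete instance (illustrative, not from the slides), a unit-mass harmonic oscillator y'' = -y fits this template with F(t, y, v) = -y and m = 1; the corresponding first-order right-hand side in Python might look like:

    import numpy as np

    def f(t, y):
        # y[0] is the position y, y[1] is the velocity v = y'
        # returns [y', v'] = [v, F(t, y, v)/m] with F = -y, m = 1
        return np.array([y[1], -y[0]])

    y0 = np.array([1.0, 0.0])   # initial condition [y(0), v(0)] = [y_0, v_0]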

  14. ODE IVPs: A Predator–Prey ODE Model
  For example, a two-variable nonlinear ODE, the Lotka–Volterra equation, can be used to model the populations of two species:
  y' = [ y_1(α_1 - β_1 y_2),  y_2(-α_2 + β_2 y_1) ]^T ≡ f(y)
  The α and β are modeling parameters that describe birth rates, death rates, and predator–prey interactions.

  15. ODEs in Python and MATLAB
  Both Python and MATLAB have very good ODE IVP solvers.
  They employ adaptive time-stepping (h is varied during the calculation) to increase efficiency.
  Python has the functions odeint (a general-purpose routine) and ode (a routine with more options) in scipy.integrate.
  The most popular MATLAB function is ode45, which is based on an explicit fourth/fifth-order Runge–Kutta pair.
  In the remainder of this chapter we will discuss the properties of methods like the Runge–Kutta method.
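
  For instance, here is a minimal sketch of solving the Lotka–Volterra system from the previous slide with odeint; the parameter values, initial populations, and time grid are illustrative assumptions, not values given in the lecture:

    import numpy as np
    from scipy.integrate import odeint

    # Illustrative parameters: birth/death rates and interaction strengths
    alpha1, beta1, alpha2, beta2 = 1.0, 0.1, 1.5, 0.075

    def f(y, t):
        # odeint expects the right-hand side with signature f(y, t)
        return [y[0]*(alpha1 - beta1*y[1]),
                y[1]*(-alpha2 + beta2*y[0])]

    t = np.linspace(0.0, 15.0, 500)   # output times
    y0 = [10.0, 5.0]                  # initial prey and predator populations
    Y = odeint(f, y0, t)              # Y[k, :] approximates y(t[k])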

  16. Approximating an ODE IVP
  Given y' = f(t, y), y(0) = y_0: suppose we want to approximate y at t_k = kh, k = 1, 2, ...
  Notation: Let y_k be our approximation to y(t_k).
  Euler’s method: Use a finite difference approximation for y' and sample f(t, y) at t_k:³
  (y_{k+1} - y_k)/h = f(t_k, y_k)
  Note that this, and all methods considered in this chapter, are written the same regardless of whether y is a vector or a scalar.

  ³ Note that we replace y(t_k) by y_k

  17. Euler’s Method
  Quadrature-based interpretation: integrating the ODE y' = f(t, y) from t_k to t_{k+1} gives
  y(t_{k+1}) = y(t_k) + ∫_{t_k}^{t_{k+1}} f(s, y(s)) ds
  Apply n = 0 Newton–Cotes quadrature to ∫_{t_k}^{t_{k+1}} f(s, y(s)) ds, based on the interpolation point t_k:
  ∫_{t_k}^{t_{k+1}} f(s, y(s)) ds ≈ (t_{k+1} - t_k) f(t_k, y_k) = h f(t_k, y_k)
  Again, this gives Euler’s method: y_{k+1} = y_k + h f(t_k, y_k)
  Python example: Euler’s method for y' = λy
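
  A minimal sketch of such an example (the values of λ, h, y_0, and the final time are illustrative assumptions):

    import numpy as np

    lam = -2.0                 # illustrative lambda
    h, T = 0.1, 5.0            # step size and final time
    t = np.arange(0.0, T + h/2, h)
    y = np.empty_like(t)
    y[0] = 1.0                 # y(0) = y_0

    for k in range(len(t) - 1):
        y[k+1] = y[k] + h*lam*y[k]      # y_{k+1} = y_k + h f(t_k, y_k)

    # exact solution is y(t) = y_0 exp(lambda t)
    print(np.max(np.abs(y - np.exp(lam*t))))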

  18. Backward Euler Method
  We can derive other methods using the same quadrature-based approach.
  Apply n = 0 Newton–Cotes quadrature based on the interpolation point t_{k+1} to
  y(t_{k+1}) = y(t_k) + ∫_{t_k}^{t_{k+1}} f(s, y(s)) ds
  to get the backward Euler method: y_{k+1} = y_k + h f(t_{k+1}, y_{k+1})
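
  For the linear test problem y' = λy (same illustrative setup as the forward Euler sketch above), the implicit equation can be solved for y_{k+1} in closed form, so the extra work is trivial in this case:

    import numpy as np

    lam, h, T = -2.0, 0.1, 5.0      # illustrative values
    t = np.arange(0.0, T + h/2, h)
    y = np.empty_like(t)
    y[0] = 1.0

    for k in range(len(t) - 1):
        # y_{k+1} = y_k + h*lam*y_{k+1}  =>  y_{k+1} = y_k / (1 - h*lam)
        y[k+1] = y[k] / (1.0 - h*lam)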

  19. Backward Euler Method
  The (forward) Euler method is an explicit method: we have an explicit formula for y_{k+1} in terms of y_k:
  y_{k+1} = y_k + h f(t_k, y_k)
  Backward Euler is an implicit method: we have to solve for y_{k+1}, which requires some extra work:
  y_{k+1} = y_k + h f(t_{k+1}, y_{k+1})

  20. Backward Euler Method
  For example, approximate y' = 2 sin(ty) using backward Euler:
  At the first step, we get y_1 = y_0 + 2h sin(t_1 y_1)
  To compute y_1, let F(y_1) ≡ y_1 - y_0 - 2h sin(t_1 y_1) and solve F(y_1) = 0 via, say, Newton’s method.
  Hence implicit methods are more complicated and more computationally expensive at each time step.
  Why bother with implicit methods? We’ll see why shortly...
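
  A minimal sketch of the Newton solve for one such backward Euler step (the step size h, the time t_1, the starting value y_0, and the tolerance are illustrative assumptions):

    import numpy as np

    h, t1, y0 = 0.1, 0.1, 1.0

    def F(y1):
        # residual of the backward Euler equation for y' = 2 sin(t y)
        return y1 - y0 - 2.0*h*np.sin(t1*y1)

    def dF(y1):
        # derivative of F with respect to y1, needed for Newton's method
        return 1.0 - 2.0*h*t1*np.cos(t1*y1)

    y1 = y0                     # initial guess: the previous value
    for _ in range(50):
        step = F(y1)/dF(y1)
        y1 -= step
        if abs(step) < 1e-12:
            break
    print(y1)                   # the computed approximation y_1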

  21. Trapezoid Method
  We can derive methods based on higher-order quadrature.
  Apply n = 1 Newton–Cotes quadrature (the Trapezoid rule) at t_k, t_{k+1} to
  y(t_{k+1}) = y(t_k) + ∫_{t_k}^{t_{k+1}} f(s, y(s)) ds
  to get the Trapezoid Method: y_{k+1} = y_k + (h/2)(f(t_k, y_k) + f(t_{k+1}, y_{k+1}))
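
  As a quick worked case (using the same illustrative test problem y' = λy as before), the implicit Trapezoid equation can again be solved in closed form:
  y_{k+1} = y_k + (h/2)(λ y_k + λ y_{k+1})  ⟹  y_{k+1} = ((1 + hλ/2)/(1 - hλ/2)) y_k
  so each step just multiplies y_k by a fixed factor.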

  22. One-Step Methods
  The three methods we’ve considered so far have the form
  y_{k+1} = y_k + h Φ(t_k, y_k; h)                      (explicit)
  y_{k+1} = y_k + h Φ(t_{k+1}, y_{k+1}; h)              (implicit)
  y_{k+1} = y_k + h Φ(t_k, y_k, t_{k+1}, y_{k+1}; h)    (implicit)
  where the choice of the function Φ determines our method.
  These are called one-step methods: y_{k+1} depends on y_k.
  (One can also consider multistep methods, where y_{k+1} depends on earlier values y_{k-1}, y_{k-2}, ...; we’ll discuss this briefly later)

  23. Convergence
  We now consider whether one-step methods converge to the exact solution as h → 0.
  Convergence is a crucial property: we want to be able to satisfy an accuracy tolerance by taking h sufficiently small.
  In general a method that isn’t convergent will give misleading results and is useless in practice!

  24. Convergence
  We define the global error, e_k, as the total accumulated error at t = t_k:
  e_k ≡ y(t_k) - y_k
  We define the truncation error, T_k, as the amount “left over” at step k when we apply our method to the exact solution and divide by h.
  e.g. for an explicit one-step ODE approximation, we have
  T_k ≡ (y(t_{k+1}) - y(t_k))/h - Φ(t_k, y(t_k); h)

  25. Convergence
  The truncation error defined above determines the local error introduced by the ODE approximation.
  For example, suppose y_k = y(t_k); then for the case above we have
  h T_k ≡ y(t_{k+1}) - y_k - h Φ(t_k, y_k; h) = y(t_{k+1}) - y_{k+1}
  Hence h T_k is the error introduced in one step of our ODE approximation.⁴
  Therefore the global error e_k is determined by the accumulation of the T_j for j = 0, 1, ..., k-1.
  Now let’s consider the global error of the Euler method in detail.

  ⁴ Because of this fact, the truncation error is defined as hT_k in some texts
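
  As a quick numerical illustration of this accumulation (not from the lecture), the following sketch estimates Euler’s global error at t = 1 for the illustrative test problem y' = -2y, y(0) = 1; the error roughly halves each time h is halved, i.e. it behaves like O(h):

    import numpy as np

    lam, T, y0 = -2.0, 1.0, 1.0

    def euler_global_error(n):
        h = T/n
        y = y0
        for k in range(n):
            y = y + h*lam*y              # one Euler step
        return abs(y - y0*np.exp(lam*T)) # global error at t = T

    for n in (10, 20, 40, 80):
        print(n, euler_global_error(n))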
