

  1. Finite Difference Method

  2. Motivation: For a given smooth function $g(y)$, we want to calculate the derivative $g'(y)$ at a given value of $y$. Suppose we don't know how to compute the analytical expression for $g'(y)$, or it is computationally very expensive. However, we do know how to evaluate the function value. We know that
     $g'(y) = \lim_{h \to 0} \frac{g(y+h) - g(y)}{h}$.
     Can we just use $g'(y) \approx \frac{g(y+h) - g(y)}{h}$ as an approximation? How do we choose $h$? Can we estimate the error of our approximation?
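
     A minimal sketch of this idea in Python (my own illustration, not from the slides): sin is a stand-in for $g$, chosen only because its exact derivative cos is available to check against.

        import math

        def g(y):
            return math.sin(y)        # stand-in: pretend we can only evaluate g

        y, h = 1.0, 1e-5
        df = (g(y + h) - g(y)) / h    # forward difference approximation of g'(y)
        print(df, math.cos(y), abs(df - math.cos(y)))  # approximation, exact, error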

  3. Finite difference method: For a differentiable function $g: \mathbb{R} \to \mathbb{R}$, the derivative is defined as
     $g'(y) = \lim_{h \to 0} \frac{g(y+h) - g(y)}{h}$.
     Taylor series centered at $y$, evaluated at $\bar{y} = y + h$:
     $g(y+h) = g(y) + g'(y)\,h + g''(y)\,\frac{h^2}{2!} + g'''(y)\,\frac{h^3}{3!} + \cdots$
     so that $g(y+h) = g(y) + g'(y)\,h + O(h^2)$.
     We define the Forward Finite Difference as
     $\mathrm{df}(y) = \frac{g(y+h) - g(y)}{h}$, which gives $g'(y) = \mathrm{df}(y) + O(h)$.
     Therefore, the truncation error of the forward finite difference approximation is bounded by $O(h)$.
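
     A quick numerical check of the $O(h)$ claim (a sketch, using the same stand-in function as above): for a first-order method, halving $h$ should roughly halve the error.

        import math

        def g(y):
            return math.sin(y)

        y, exact = 1.0, math.cos(1.0)
        prev = None
        for h in [1e-1, 5e-2, 2.5e-2, 1.25e-2]:
            err = abs((g(y + h) - g(y)) / h - exact)
            if prev is not None:
                print(f"h={h:.4e}  error={err:.3e}  ratio={err / prev:.2f}")  # ratio -> ~0.5
            else:
                print(f"h={h:.4e}  error={err:.3e}")
            prev = err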

  4. In a similar way, we can write
     $g(y-h) = g(y) - g'(y)\,h + O(h^2) \;\Rightarrow\; g'(y) = \frac{g(y) - g(y-h)}{h} + O(h)$
     and define the Backward Finite Difference as
     $\mathrm{df}(y) = \frac{g(y) - g(y-h)}{h} \;\Rightarrow\; g'(y) = \mathrm{df}(y) + O(h)$.
     Subtracting the two Taylor expansions
     $g(y+h) = g(y) + g'(y)\,h + g''(y)\,\frac{h^2}{2!} + g'''(y)\,\frac{h^3}{3!} + \cdots$
     $g(y-h) = g(y) - g'(y)\,h + g''(y)\,\frac{h^2}{2!} - g'''(y)\,\frac{h^3}{3!} + \cdots$
     gives $g(y+h) - g(y-h) = 2\,g'(y)\,h + g'''(y)\,\frac{h^3}{3} + O(h^5)$,
     so $g'(y) = \frac{g(y+h) - g(y-h)}{2h} + O(h^2)$.
     We define the Central Finite Difference as
     $\mathrm{df}(y) = \frac{g(y+h) - g(y-h)}{2h} \;\Rightarrow\; g'(y) = \mathrm{df}(y) + O(h^2)$.
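
     The three schemes side by side, as a hedged sketch (the test function $e^y$ and the point $y = 1$ are my choices, picked so the exact derivative is known):

        import math

        def g(y):
            return math.exp(y)               # g'(y) = e^y, known exactly

        def forward(y, h):
            return (g(y + h) - g(y)) / h

        def backward(y, h):
            return (g(y) - g(y - h)) / h

        def central(y, h):
            return (g(y + h) - g(y - h)) / (2 * h)

        y, h = 1.0, 1e-3
        exact = math.exp(y)
        for name, scheme in [("forward", forward), ("backward", backward), ("central", central)]:
            print(f"{name:8s} error = {abs(scheme(y, h) - exact):.3e}")

     The central error should come out roughly three orders of magnitude smaller here, consistent with $O(h^2)$ versus $O(h)$ at $h = 10^{-3}$.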

  5. How accurate is the finite difference approximation, and how many function evaluations does it need (in addition to $g(y)$)?
     Forward Finite Difference: $\mathrm{df}(y) = \frac{g(y+h) - g(y)}{h}$. Truncation error: $O(h)$. Cost: 1 function evaluation.
     Backward Finite Difference: $\mathrm{df}(y) = \frac{g(y) - g(y-h)}{h}$. Truncation error: $O(h)$. Cost: 1 function evaluation.
     Central Finite Difference: $\mathrm{df}(y) = \frac{g(y+h) - g(y-h)}{2h}$. Truncation error: $O(h^2)$. Cost: 2 function evaluations.
     Our typical trade-off: the central finite difference gives better accuracy at a (possibly) increased computational cost. How small should the value of $h$ be?
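
     These truncation orders can also be estimated empirically; here is a small sketch (the helper observed_order is hypothetical, not from the slides). If the error scales like $h^p$, then $\mathrm{error}(h)/\mathrm{error}(h/2) \approx 2^p$.

        import math

        def g(y):
            return math.exp(y)

        def observed_order(scheme, y, h):
            exact = math.exp(y)                       # true g'(y) for this test function
            e1 = abs(scheme(y, h) - exact)
            e2 = abs(scheme(y, h / 2) - exact)
            return math.log2(e1 / e2)                 # ~p for an order-p method

        forward = lambda y, h: (g(y + h) - g(y)) / h
        central = lambda y, h: (g(y + h) - g(y - h)) / (2 * h)
        print("forward:", observed_order(forward, 1.0, 1e-3))   # ~1
        print("central:", observed_order(central, 1.0, 1e-3))   # ~2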

  6. Example: $g(y) = e^y - 2$, so $g'(y) = e^y$. We want to obtain an approximation for $g'(1)$:
     $\mathrm{df_{approx}} = \frac{(e^{\,y+h} - 2) - (e^{\,y} - 2)}{h}$
     Truncation error: $\mathrm{error}(h) = \left|\, g'(y) - \mathrm{df_{approx}} \,\right|$
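
     This example can be reproduced directly; a minimal sketch:

        import math

        def g(y):
            return math.exp(y) - 2.0

        y = 1.0
        dg_exact = math.exp(y)                        # g'(y) = e^y
        for h in [1e-1, 1e-2, 1e-3, 1e-4]:
            df_approx = (g(y + h) - g(y)) / h
            error = abs(dg_exact - df_approx)
            print(f"h={h:.0e}  df_approx={df_approx:.6f}  error={error:.3e}")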

  7. Example: Should we just keep decreasing the perturbation $h$, in order to approach the limit $h \to 0$ and obtain a better approximation for the derivative?

  8. Uh-oh! What happened here? $g(y) = e^y - 2$, $g'(y) = e^y \;\Rightarrow\; g'(1) \approx 2.7$.
     Forward finite difference: $\mathrm{df}(1) = \frac{g(1+h) - g(1)}{h}$.
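
     Sweeping $h$ over many decades reproduces the effect (a sketch; run it to see the error bottom out and then grow):

        import math

        def g(y):
            return math.exp(y) - 2.0

        y, exact = 1.0, math.exp(1.0)
        for k in range(1, 17):
            h = 10.0 ** (-k)
            err = abs((g(y + h) - g(y)) / h - exact)
            print(f"h=1e-{k:02d}  error={err:.3e}")
        # The error shrinks until h ~ 1e-8, then grows again: for tiny h the
        # subtraction g(y+h) - g(y) cancels almost all significant digits.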

  9. When computing the finite difference approximation, we have two competing sources of error: truncation error and rounding error. For
     $\mathrm{df}(y) = \frac{g(y+h) - g(y)}{h}$,
     the rounding error is bounded by $\frac{\epsilon_m\,|g(y)|}{h}$ (up to a small constant factor), where $\epsilon_m$ is machine epsilon.
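
     To make the two terms explicit, here is a sketch of the standard bounds in LaTeX (the bound $M$ on $|g''|$ near $y$ and the factor of 2 are textbook modeling assumptions, not taken from the slides); $\hat g$ denotes the floating-point evaluation of $g$:

        \[
          \underbrace{\left| g'(y) - \frac{g(y+h) - g(y)}{h} \right|}_{\text{truncation}}
            \le \frac{M h}{2},
          \qquad
          \underbrace{\left| \frac{\hat g(y+h) - \hat g(y)}{h}
            - \frac{g(y+h) - g(y)}{h} \right|}_{\text{rounding}}
            \lesssim \frac{2\,\epsilon_m\,|g(y)|}{h}.
        \]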

  10. Loss of accuracy due to rounding: Minimize the total error
      $\mathrm{error}(h) \sim \frac{\epsilon_m\,|g(y)|}{h} + M\,h$,
      where the truncation error is $\sim M\,h$ and the rounding error is $\sim \frac{\epsilon_m\,|g(y)|}{h}$.
      Setting the derivative with respect to $h$ to zero gives the optimal
      $h = \sqrt{\epsilon_m\,|g(y)| / M}$,
      which gives $\mathrm{error} \sim \sqrt{\epsilon_m\,M\,|g(y)|}$.
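
     A sketch that plugs in the numbers for the running example (eps_m and M are my estimates for this particular g, not values from the slides):

        import math

        def g(y):
            return math.exp(y) - 2.0

        y = 1.0
        eps_m = 2.0 ** -52                  # double-precision machine epsilon
        M = math.exp(y)                     # here |g''(y)| = e^y
        h_opt = math.sqrt(eps_m * abs(g(y)) / M)
        err = abs((g(y + h_opt) - g(y)) / h_opt - math.exp(y))
        print(f"h_opt = {h_opt:.2e}, error at h_opt = {err:.2e}")
        # Both land near the sqrt(eps_m) ~ 1e-8 scale, matching
        # error ~ sqrt(eps_m * M * |g(y)|).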
