Nonlinear Equations
How can we solve these equations?
• Spring force: $F = kx$, with $k = 40\ \mathrm{N/m}$. What is the displacement when $F = 2\ \mathrm{N}$?
• Drag force: $F = \tfrac{1}{2} C_D \rho A v^2 = c_d v^2$, with $c_d = 0.5\ \mathrm{kg/m}$. What is the velocity when $F = 20\ \mathrm{N}$?
Rewrite each as a root-finding problem:
• Spring force: $f(x) = kx - F = 0$
• Drag force: $f(v) = c_d v^2 - F = 0$, with $c_d = 0.5\ \mathrm{kg/m}$
Find the root (zero) of the nonlinear equation $f(v) = 0$.

Nonlinear Equations in 1D
Goal: Solve $f(x) = 0$ for $f: \mathbb{R} \to \mathbb{R}$. Often called Root Finding.
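For these two model problems the answers can also be checked in closed form, which is useful for sanity-checking the numerical methods that follow: the spring gives $x = F/k = 2/40 = 0.05\ \mathrm{m}$, and the drag model gives $v = \sqrt{F/c_d} = \sqrt{20/0.5} \approx 6.32\ \mathrm{m/s}$. For more general models no closed form exists, which motivates the iterative methods below.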
Bisection method
Algorithm:
1. Take two points, $a$ and $b$, on either side of the root such that $f(a)$ and $f(b)$ have opposite signs.
2. Calculate the midpoint $c = \frac{a+b}{2}$.
3. Evaluate $f(c)$ and use $c$ to replace either $a$ or $b$, keeping the signs of the endpoints opposite.
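A minimal Python sketch of this algorithm (the function name, tolerance, and stopping rule are illustrative assumptions, not from the slides), applied to the drag-force problem from the opening slide:

```python
def bisection(f, a, b, tol=1e-8):
    """Shrink [a, b] until it is smaller than tol; the root stays inside."""
    fa = f(a)
    assert fa * f(b) < 0, "f(a) and f(b) must have opposite signs"
    while b - a > tol:
        m = (a + b) / 2            # midpoint
        fm = f(m)                  # one new function evaluation per iteration
        if fa * fm < 0:            # sign change in [a, m]: root is there
            b = m
        else:                      # otherwise the root is in [m, b]
            a, fa = m, fm
    return (a + b) / 2

# Drag-force example: f(v) = 0.5 v^2 - 20, root at sqrt(40) ≈ 6.3246
print(bisection(lambda v: 0.5 * v**2 - 20.0, 0.0, 10.0))
```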
Convergence
• The bisection method does not estimate $x_k$, the approximation of the desired root $x$. It instead finds an interval smaller than a given tolerance that contains the root.
• The length of the interval at iteration $k$ is $\frac{b-a}{2^k}$. We can define this interval as the error at iteration $k$:
$$ e_k = \frac{b-a}{2^k}, \qquad \lim_{k \to \infty} \frac{|e_{k+1}|}{|e_k|} = \lim_{k \to \infty} \frac{2^k}{2^{k+1}} = 0.5 $$
• Linear convergence
Convergence
An iterative method converges with rate $r$ if:
$$ \lim_{k \to \infty} \frac{\|e_{k+1}\|}{\|e_k\|^r} = C, \qquad 0 < C < \infty $$
$r = 1$: linear convergence
$r > 1$: superlinear convergence
$r = 2$: quadratic convergence
Linear convergence gains a constant number of accurate digits each step (and $C < 1$ matters!). Quadratic convergence doubles the number of accurate digits in each step (however, it only starts paying off once $\|e_k\|$ is small, and there $C$ does not matter much).
Example: Consider the nonlinear equation $f(x) = 0.5x^2 - 2$ and solving $f(x) = 0$ using the Bisection Method. For each of the initial intervals below, how many iterations are required to ensure the root is accurate within $2^{-4}$?
A) $[-10, -1.8]$   B) $[-3, -2.1]$   C) $[-4, 1.9]$
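For instance, for interval A the required number of iterations follows from the interval-halving bound (a worked step, assuming the stated tolerance of $2^{-4}$; the other intervals follow the same pattern):
$$ \frac{b-a}{2^n} = \frac{8.2}{2^n} \le 2^{-4} \iff 2^n \ge 8.2 \cdot 2^4 = 131.2 \iff n \ge \lceil \log_2 131.2 \rceil = 8 $$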
Bisection Method - summary
• The function must be continuous with a root in the interval $[a, b]$.
• Requires only one function evaluation per iteration!
  ◦ The first iteration requires two function evaluations.
• Given the initial interval $[a, b]$, the length of the interval after $n$ iterations is $\frac{b-a}{2^n}$.
• Has linear convergence.
Newton's method
• Recall we want to solve $f(x) = 0$ for $f: \mathbb{R} \to \mathbb{R}$.
• The Taylor expansion $f(x_k + h) \approx f(x_k) + f'(x_k)\,h$ gives a linear approximation of the nonlinear function $f$ near $x_k$. Setting $f(x_k + h) = 0$ gives $h = -f(x_k)/f'(x_k)$.
• Algorithm:
$$ x_0 = \text{starting guess}, \qquad x_{k+1} = x_k - f(x_k)/f'(x_k) $$
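A sketch of this iteration in Python, again using the drag-force example (function names and the stopping rule are assumptions):

```python
def newton(f, df, x0, tol=1e-12, maxiter=50):
    """Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(maxiter):
        h = -f(x) / df(x)      # step from the linearization f(x+h) ≈ f(x) + f'(x) h
        x = x + h
        if abs(h) < tol:       # stop once the step is negligible
            break
    return x

# Drag-force example: f(v) = 0.5 v^2 - 20, f'(v) = v
print(newton(lambda v: 0.5 * v**2 - 20.0, lambda v: v, 5.0))
```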
Newton's method
Equation of the tangent line at $x_k$ (its zero crossing defines the next iterate $x_{k+1}$):
$$ f'(x_k) = \frac{f(x_k) - 0}{x_k - x_{k+1}} $$
Iclicker question
Consider solving the nonlinear equation $5 = 2.0\,e^x + x^2$. What is the result of applying one iteration of Newton's method for solving nonlinear equations with initial starting guess $x_0 = 0$, i.e. what is $x_1$?
A) $-2$   B) $0.75$   C) $-1.5$   D) $1.5$   E) $3.0$
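One way to verify an answer choice is to carry out the single Newton step by hand, writing the equation in root-finding form $f(x) = 2e^x + x^2 - 5$:
$$ f(0) = 2 - 5 = -3, \qquad f'(x) = 2e^x + 2x, \qquad f'(0) = 2, \qquad x_1 = 0 - \frac{-3}{2} = 1.5 $$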
Newton's Method - summary
• Must be started with an initial guess close enough to the root (convergence is only local); otherwise it may not converge at all.
• Requires a function and a first-derivative evaluation at each iteration (think of it as two function evaluations).
• What can we do when the derivative is too costly (or too difficult) to evaluate?
• Typically has quadratic convergence:
$$ \lim_{k \to \infty} \frac{\|e_{k+1}\|}{\|e_k\|^2} = C, \qquad 0 < C < \infty $$
Secant method
Also derived from the Taylor expansion, but instead of using $f'(x_k)$ in the update $x_{k+1} = x_k - f(x_k)/f'(x_k)$, it approximates the tangent with the secant line:
$$ f'(x_k) \approx \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} $$
• Algorithm:
$$ x_0, x_1 = \text{starting guesses}, \qquad \mathit{df}_k = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}}, \qquad x_{k+1} = x_k - f(x_k)/\mathit{df}_k $$
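A Python sketch of the secant iteration (names and stopping rule are assumptions); note that it needs only one new function evaluation per step:

```python
def secant(f, x0, x1, tol=1e-12, maxiter=50):
    """Secant method: replace f'(x_k) with a finite-difference slope."""
    f0, f1 = f(x0), f(x1)              # two evaluations for the starting guesses
    for _ in range(maxiter):
        df = (f1 - f0) / (x1 - x0)     # secant slope df_k
        x0, f0 = x1, f1
        x1 = x1 - f1 / df              # x_{k+1} = x_k - f(x_k)/df_k
        f1 = f(x1)                     # one new evaluation per iteration
        if abs(x1 - x0) < tol:
            break
    return x1

# Drag-force example once more: root at sqrt(40) ≈ 6.3246
print(secant(lambda v: 0.5 * v**2 - 20.0, 5.0, 6.0))
```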
Secant Method - summary
• Still local convergence.
• Requires only one function evaluation per iteration (only the first iteration requires two function evaluations).
• Needs two starting guesses.
• Has slower convergence than Newton's Method: superlinear convergence,
$$ \lim_{k \to \infty} \frac{\|e_{k+1}\|}{\|e_k\|^r} = C, \qquad 1 < r < 2 $$
1D methods for root finding:

| Method | Update | Convergence | Cost |
|---|---|---|---|
| Bisection | Check signs of $f(a)$ and $f(c)$; interval length $e_n = \frac{|b-a|}{2^n}$ | Linear ($r = 1$ and $c = 0.5$) | One function evaluation per iteration, no need to compute derivatives |
| Secant | $x_{k+1} = x_k + h$, $h = -f(x_k)/\mathit{df}_k$, $\mathit{df}_k = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}}$ | Superlinear ($r \approx 1.618$), local convergence properties, convergence depends on the initial guess | One function evaluation per iteration (two evaluations for the initial guesses only), no need to compute derivatives |
| Newton | $x_{k+1} = x_k + h$, $h = -f(x_k)/f'(x_k)$ | Quadratic ($r = 2$), local convergence properties, convergence depends on the initial guess | Two function evaluations per iteration, requires first-order derivatives |
Nonlinear system of equations
Robotic arms https://www.youtube.com/watch?v=NRgNDlVtmz0 (Robotic arm 1) https://www.youtube.com/watch?v=9DqRkLQ5Sv8 (Robotic arm 2) https://www.youtube.com/watch?v=DZ_ocmY8xEI (Blender)
Nonlinear system of equations
Goal: Solve $\mathbf{f}(\mathbf{x}) = \mathbf{0}$ for $\mathbf{f}: \mathbb{R}^n \to \mathbb{R}^n$.
In other words, $\mathbf{f}(\mathbf{x})$ is a vector-valued function:
$$ \mathbf{f}(\mathbf{x}) = \begin{bmatrix} f_1(\mathbf{x}) \\ \vdots \\ f_n(\mathbf{x}) \end{bmatrix} = \begin{bmatrix} f_1(x_1, x_2, x_3, \dots, x_n) \\ \vdots \\ f_n(x_1, x_2, x_3, \dots, x_n) \end{bmatrix} $$
If looking for a solution to $\mathbf{g}(\mathbf{x}) = \mathbf{y}$, then instead solve $\mathbf{f}(\mathbf{x}) = \mathbf{g}(\mathbf{x}) - \mathbf{y} = \mathbf{0}$.
Newton's method
Approximate the nonlinear function $\mathbf{f}(\mathbf{x})$ by a linear function using the Taylor expansion:
$$ \mathbf{f}(\mathbf{x} + \mathbf{s}) \approx \mathbf{f}(\mathbf{x}) + \mathbf{J}(\mathbf{x})\,\mathbf{s} $$
where $\mathbf{J}(\mathbf{x})$ is the Jacobian matrix of the function $\mathbf{f}$:
$$ \mathbf{J}(\mathbf{x}) = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial x_1} & \cdots & \frac{\partial f_n}{\partial x_n} \end{bmatrix} \quad \text{or} \quad \left(\mathbf{J}(\mathbf{x})\right)_{ij} = \frac{\partial f_i(\mathbf{x})}{\partial x_j} $$
Set $\mathbf{f}(\mathbf{x} + \mathbf{s}) = \mathbf{0} \implies \mathbf{J}(\mathbf{x})\,\mathbf{s} = -\mathbf{f}(\mathbf{x})$.
This is a linear system of equations (solve for $\mathbf{s}$)!
Newton's method
Algorithm:
$$ \mathbf{x}_0 = \text{initial guess}; \quad \text{solve } \mathbf{J}(\mathbf{x}_k)\,\mathbf{s}_k = -\mathbf{f}(\mathbf{x}_k); \quad \text{update } \mathbf{x}_{k+1} = \mathbf{x}_k + \mathbf{s}_k $$
Convergence:
• Typically has quadratic convergence.
• Drawback: still only locally convergent.
Cost:
• The main cost is associated with computing the Jacobian matrix and solving the Newton step.
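A compact NumPy sketch of this algorithm (names, stopping rule, and the demo system are illustrative; the demo uses the system from the example slide further below):

```python
import numpy as np

def newton_system(f, J, x0, tol=1e-12, maxiter=50):
    """Newton's method for f(x) = 0 with f: R^n -> R^n."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxiter):
        s = np.linalg.solve(J(x), -f(x))   # solve J(x_k) s_k = -f(x_k)
        x = x + s                          # update x_{k+1} = x_k + s_k
        if np.linalg.norm(s) < tol:
            break
    return x

# Demo system: x + 2y = 2, x^2 + 4y^2 = 4
f = lambda v: np.array([v[0] + 2*v[1] - 2, v[0]**2 + 4*v[1]**2 - 4])
J = lambda v: np.array([[1.0, 2.0], [2*v[0], 8*v[1]]])
print(newton_system(f, J, [1.0, 0.0]))     # converges to the root (2, 0)
```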
Newton's method - summary
• Typically quadratic convergence (local convergence).
• Computing the Jacobian matrix requires the equivalent of $n^2$ function evaluations for a dense problem (where every component of $\mathbf{f}(\mathbf{x})$ depends on every component of $\mathbf{x}$).
• Computation of the Jacobian may be cheaper if the matrix is sparse.
• The cost of calculating the step $\mathbf{s}$ is $O(n^3)$ for a dense Jacobian matrix (factorization + solve).
• If the same Jacobian matrix $\mathbf{J}(\mathbf{x}_k)$ is reused for several consecutive iterations, the convergence rate will suffer accordingly (a trade-off between cost per iteration and the number of iterations needed for convergence).
Example
Consider solving the nonlinear system of equations
$$ 2 = x + 2y, \qquad 4 = x^2 + 4y^2 $$
What is the result of applying one iteration of Newton's method with the following initial guess?
$$ \mathbf{x}_0 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} $$
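A worked first iteration (assuming the unknowns are ordered $(x, y)$, so $f_1 = x + 2y - 2$ and $f_2 = x^2 + 4y^2 - 4$):
$$ \mathbf{J}(x, y) = \begin{bmatrix} 1 & 2 \\ 2x & 8y \end{bmatrix}, \qquad \mathbf{f}(1, 0) = \begin{bmatrix} -1 \\ -3 \end{bmatrix} $$
$$ \mathbf{J}(1, 0)\,\mathbf{s}_0 = \begin{bmatrix} 1 \\ 3 \end{bmatrix} \implies \mathbf{s}_0 = \begin{bmatrix} 1.5 \\ -0.25 \end{bmatrix}, \qquad \mathbf{x}_1 = \mathbf{x}_0 + \mathbf{s}_0 = \begin{bmatrix} 2.5 \\ -0.25 \end{bmatrix} $$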
Finite Difference
Find an approximation for the Jacobian matrix:
$$ \mathbf{J}(\mathbf{x}) = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial x_1} & \cdots & \frac{\partial f_n}{\partial x_n} \end{bmatrix} \quad \text{or} \quad \left(\mathbf{J}(\mathbf{x})\right)_{ij} = \frac{\partial f_i(\mathbf{x})}{\partial x_j} $$
In 1D:
$$ \frac{df(x)}{dx} \approx \frac{f(x + h) - f(x)}{h} $$
In ND:
$$ \left(\mathbf{J}(\mathbf{x})\right)_{ij} = \frac{\partial f_i(\mathbf{x})}{\partial x_j} \approx \frac{f_i(\mathbf{x} + h\,\mathbf{e}_j) - f_i(\mathbf{x})}{h} $$
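A NumPy sketch of this forward-difference approximation (the function name and default step size $h$ are assumptions):

```python
import numpy as np

def fd_jacobian(f, x, h=1e-7):
    """Approximate the Jacobian column by column with forward differences."""
    x = np.asarray(x, dtype=float)
    fx = f(x)
    J = np.empty((fx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h                    # perturb coordinate j: x + h e_j
        J[:, j] = (f(xp) - fx) / h    # column j of the Jacobian
    return J

# Same example system as above; exact Jacobian at (1, 0) is [[1, 2], [2, 0]]
f = lambda v: np.array([v[0] + 2*v[1] - 2, v[0]**2 + 4*v[1]**2 - 4])
print(fd_jacobian(f, [1.0, 0.0]))
```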