Optimization (Introduction)
Optimization Goal: Find the minimizer $x^*$ that minimizes the objective (cost) function $f(x): \mathbb{R}^n \to \mathbb{R}$

Unconstrained Optimization: $\min_{x \in \mathbb{R}^n} f(x)$

Constrained Optimization: $\min_{x} f(x)$ subject to constraints on $x$
Unconstrained Optimization
β’ What if we are looking for a maximizer $x^*$? $f(x^*) = \max_{x} f(x)$
β’ Maximization reduces to minimization, $\max_{x} f(x) = -\min_{x} \big({-f(x)}\big)$, so we only need methods for minimizing.
Calculus problem: maximize the rectangle area subject to a perimeter constraint

$\max_{d \in \mathbb{R}^2} \; \text{Area}(d) = d_1 d_2 \quad \text{subject to} \quad \text{Perimeter} = 2(d_1 + d_2)$ fixed
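The slide leaves the solution to the reader; a quick sketch (writing $P$ for the fixed perimeter, my notation): eliminate the constraint, then apply the 1D optimality conditions.

$d_2 = \tfrac{P}{2} - d_1 \quad \text{(from } 2(d_1 + d_2) = P\text{)}$
$\text{Area}(d_1) = d_1\left(\tfrac{P}{2} - d_1\right)$
$\text{Area}'(d_1) = \tfrac{P}{2} - 2 d_1 = 0 \;\Longrightarrow\; d_1 = d_2 = \tfrac{P}{4} \quad \text{(a square)}$
$\text{Area}''(d_1) = -2 < 0 \quad \text{(so this is indeed a maximum)}$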
Unconstrained Optimization (1D)
What is the optimal solution? (1D)

$f(x^*) = \min_{x} f(x)$

(First-order) Necessary condition: $f'(x^*) = 0$
(Second-order) Sufficient condition: $f'(x^*) = 0$ and $f''(x^*) > 0$
Types of optimization problems

$f(x^*) = \min_{x} f(x)$, where $f$ is nonlinear, continuous and smooth

β’ Gradient-free methods: evaluate $f(x)$
β’ Gradient (first-derivative) methods: evaluate $f(x)$, $f'(x)$
β’ Second-derivative methods: evaluate $f(x)$, $f'(x)$, $f''(x)$
Does the solution exist? Is it a local or a global solution?
Example (1D)

Consider the function $f(x) = \frac{x^4}{4} - \frac{x^3}{3} - 11x^2 + 40x$. Find the stationary points and check the sufficient condition.

[Plot: $f(x)$ on $x \in [-6, 6]$, with values roughly between $-200$ and $100$]
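The slide does not show the worked solution; assuming the function above was reconstructed correctly, a sketch:

$f'(x) = x^3 - x^2 - 22x + 40 = (x - 2)(x - 4)(x + 5) = 0 \;\Longrightarrow\; x \in \{-5,\, 2,\, 4\}$
$f''(x) = 3x^2 - 2x - 22$
$f''(-5) = 63 > 0$ (local minimum), $\quad f''(2) = -14 < 0$ (local maximum), $\quad f''(4) = 18 > 0$ (local minimum)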
Optimization in 1D: Golden Section Search
β’ Similar idea to the bisection method for root finding
β’ Needs to bracket the minimum inside an interval
β’ Requires the function to be unimodal

A function $f: \mathbb{R} \to \mathbb{R}$ is unimodal on an interval $[a, b]$ if:
β’ There is a unique $x^* \in [a, b]$ such that $f(x^*)$ is the minimum in $[a, b]$
β’ For any $x_1, x_2 \in [a, b]$ with $x_1 < x_2$:
    $x_2 < x^* \;\Longrightarrow\; f(x_1) > f(x_2)$
    $x_1 > x^* \;\Longrightarrow\; f(x_1) < f(x_2)$
[Figure: interval $[a, b]$ with interior points $x_1, \dots, x_4$ and their function values, showing how the bracket is updated after one golden section step]
Golden Section Search
Golden Section Search

What happens with the length of the interval after one iteration?

$h_1 = \tau \, h_0$, or in general: $h_{k+1} = \tau \, h_k$

Hence the interval gets reduced by a factor $\tau$ each iteration (for the bisection method to solve nonlinear equations, $\tau = 0.5$).

For the recursion (so that one interior point can be reused):
$\tau \, h_1 = (1 - \tau) \, h_0 \;\Longrightarrow\; \tau^2 h_0 = (1 - \tau) \, h_0 \;\Longrightarrow\; \tau^2 = 1 - \tau \;\Longrightarrow\; \tau = \frac{\sqrt{5} - 1}{2} \approx \mathbf{0.618}$
Golden Section Search
β’ Derivative-free method!
β’ Slow convergence: $\lim_{k \to \infty} \frac{|e_{k+1}|}{|e_k|} = 0.618$, with rate $r = 1$ (linear convergence)
β’ Only one new function evaluation per iteration
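A minimal Python sketch of the method described above (function and parameter names, and the stopping test, are my own choices, not from the slides); note how each iteration reuses one interior point, so only one new function evaluation is needed:

```python
import math

def golden_section_search(f, a, b, tol=1e-8):
    """Minimize a unimodal function f on [a, b] by golden section search."""
    tau = (math.sqrt(5) - 1) / 2        # ~0.618, interval reduction factor
    x1 = a + (1 - tau) * (b - a)        # interior points placed so that one
    x2 = a + tau * (b - a)              # of them can be reused each iteration
    f1, f2 = f(x1), f(x2)
    while (b - a) > tol:
        if f1 > f2:                     # minimum lies in [x1, b]
            a = x1
            x1, f1 = x2, f2             # reuse old x2: only one new evaluation
            x2 = a + tau * (b - a)
            f2 = f(x2)
        else:                           # minimum lies in [a, x2]
            b = x2
            x2, f2 = x1, f1             # reuse old x1
            x1 = a + (1 - tau) * (b - a)
            f1 = f(x1)
    return (a + b) / 2

# Example: minimize f(x) = (x - 1.5)^2, unimodal on [0, 4]
print(golden_section_search(lambda x: (x - 1.5) ** 2, 0, 4))  # ~1.5
```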
Example
Newton's Method

Using a Taylor expansion, we can approximate the function $f$ with a quadratic about $x_k$:

$f(x) \approx f(x_k) + f'(x_k)(x - x_k) + \frac{1}{2} f''(x_k)(x - x_k)^2$

and we want to find the minimum of this quadratic using the first-order necessary condition: setting its derivative to zero, $f'(x_k) + f''(x_k)(x - x_k) = 0$, gives $x = x_k - f'(x_k)/f''(x_k)$.
Newton's Method
β’ Algorithm:
    $x_0$ = starting guess
    $x_{k+1} = x_k - f'(x_k) / f''(x_k)$
β’ Convergence:
    β’ Typically quadratic convergence
    β’ Local convergence (starting guess must be close to the solution)
    β’ May fail to converge, or converge to a maximum or a point of inflection
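A minimal Python sketch of this iteration (the stopping rule and iteration cap are my own assumptions, not from the slides):

```python
def newton_minimize(df, d2f, x0, tol=1e-10, maxiter=50):
    """1D Newton's method for optimization: x_{k+1} = x_k - f'(x_k)/f''(x_k)."""
    x = x0
    for _ in range(maxiter):
        step = df(x) / d2f(x)
        x = x - step
        if abs(step) < tol:   # stop once the Newton step is negligible
            return x
    return x                  # may not have converged (local method!)

# Example: f(x) = x^4/4 - x, so f'(x) = x^3 - 1 and f''(x) = 3x^2.
# Starting near the minimizer x* = 1:
print(newton_minimize(lambda x: x**3 - 1, lambda x: 3 * x**2, x0=2.0))  # ~1.0
```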
Newtonβs Method (Graphical Representation)
Example

Consider the function $f(x) = 4x^3 + 2x^2 + 5x + 40$. If we use the initial guess $x_0 = 2$, what would be the value of $x$ after one iteration of Newton's method?
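A sketch of the single step, assuming the cubic above was reconstructed correctly from the slide:

$f'(x) = 12x^2 + 4x + 5, \qquad f''(x) = 24x + 4$
$x_1 = x_0 - \frac{f'(x_0)}{f''(x_0)} = 2 - \frac{12 \cdot 4 + 4 \cdot 2 + 5}{24 \cdot 2 + 4} = 2 - \frac{61}{52} \approx 0.827$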