Computational Optimization Augmented Lagrangian NW 17.3
Upcoming Schedule
No class April 18.
Friday, April 25: in-class presentations. Projects due unless you present April 25 (free extension until Monday for 4/25 presenters).
Monday, April 28: evening class presentations, pizza provided.
Tuesday, April 29: in-class presentations.
Exam: Tuesday, May 6, open notes/book.
General Equality Problem
(NLP)   min f(x)   s.t. h_i(x) = 0,  i ∈ E
Augmented Lagrangian
Consider min f(x) s.t. h(x) = 0.
Start with the Lagrangian L(x, λ) = f(x) − λ'h(x).
Add a quadratic penalty: L(x, λ, μ) = f(x) − λ'h(x) + (μ/2)||h(x)||²
The penalty term helps ensure that the computed point is feasible.
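To make the definition concrete, here is a minimal Python sketch (not from the slides) for evaluating this augmented Lagrangian; the names f, h, lam, and mu are placeholders for the objective, the equality constraints, the multiplier estimate, and the penalty parameter.

```python
import numpy as np

def augmented_lagrangian(x, f, h, lam, mu):
    """L(x, lam, mu) = f(x) - lam'h(x) + (mu/2)*||h(x)||^2."""
    hx = np.atleast_1d(h(x))            # vector of equality constraint values
    return f(x) - lam @ hx + 0.5 * mu * (hx @ hx)
```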
Lagrange Multiplier Estimate
L(x, λ, μ) = f(x) − λ'h(x) + (μ/2)||h(x)||²
If ∇_x L(x, λ, μ) = ∇f(x) − λ'∇h(x) + μ·h(x)'∇h(x) = 0
⇒ ∇f(x) − [λ − μ·h(x)]'∇h(x) = 0
The quantity λ − μ·h(x) looks like the Lagrange multiplier! This motivates the update
λ_i^{k+1} = λ_i^k − μ_k h_i(x^k)    (17.39)
In-Class Exercise
Consider min x³ s.t. x + 1 = 0.
Find x*, λ* satisfying the KKT conditions.
Write out the augmented Lagrangian L(x, λ*, μ).
Plot f(x), L(x, λ*), L(x, λ*, 4), L(x, λ*, 16), L(x, λ*, 40).
Compare these functions near x*.
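One possible way to do the comparison numerically, assuming the problem is min x³ s.t. x + 1 = 0 as reconstructed above (so x* = −1 and λ* = 3); this sketch is only illustrative, not part of the assignment.

```python
import numpy as np
import matplotlib.pyplot as plt

f = lambda x: x**3
h = lambda x: x + 1.0
x_star, lam_star = -1.0, 3.0          # KKT: 3x^2 - lam = 0, x + 1 = 0

def L(x, lam, mu):
    # augmented Lagrangian for this exercise
    return f(x) - lam * h(x) + 0.5 * mu * h(x)**2

xs = np.linspace(-2.0, 0.0, 400)
plt.plot(xs, f(xs), label="f(x)")
plt.plot(xs, L(xs, lam_star, 0.0), label="L(x, lam*)")     # plain Lagrangian
for mu in (4, 16, 40):
    plt.plot(xs, L(xs, lam_star, mu), label=f"L(x, lam*, {mu})")
plt.axvline(x_star, linestyle="--", color="gray")
plt.legend()
plt.show()
```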
Augmented Lagrangian Algorithm for equality constraints (17.3)
Given x^0, λ^0, μ_0 > 0, tol > 0
For k = 0, 1, 2, ...
  find an approximate minimizer x^k of L(x, λ^k, μ_k) such that ||∇_x L(x^k, λ^k, μ_k)|| ≤ tol
  if optimal, stop
  update the Lagrange multipliers via (17.39): λ_i^{k+1} = λ_i^k − μ_k h_i(x^k)
  choose a new penalty μ_{k+1} ≥ μ_k
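A minimal Python sketch of this loop, using scipy's BFGS for the inner minimization; the feasibility test, the factor-of-10 penalty increase, and the function names are simplifying assumptions rather than part of the text.

```python
import numpy as np
from scipy.optimize import minimize

def auglag_equality(f, h, x0, lam0, mu0=10.0, tol=1e-6, max_outer=30):
    """Augmented Lagrangian loop for min f(x) s.t. h(x) = 0."""
    x = np.asarray(x0, dtype=float)
    lam = np.asarray(lam0, dtype=float)
    mu, prev_viol = mu0, np.inf
    for _ in range(max_outer):
        def La(z):                                   # L(z, lam, mu)
            hz = np.atleast_1d(h(z))
            return f(z) - lam @ hz + 0.5 * mu * (hz @ hz)
        x = minimize(La, x, method="BFGS").x         # approximate inner minimizer
        hx = np.atleast_1d(h(x))
        viol = np.linalg.norm(hx)
        if viol <= tol:                              # feasible enough: stop
            break
        lam = lam - mu * hx                          # multiplier update (17.39)
        if viol > 0.25 * prev_viol:                  # slow feasibility progress
            mu *= 10.0                               # choose a larger penalty
        prev_viol = viol
    return x, lam
```

For example, on the made-up test problem min x₁² + x₂² s.t. x₁ + x₂ = 1, calling auglag_equality(lambda z: z @ z, lambda z: z[0] + z[1] - 1.0, [0.0, 0.0], [0.0]) should return x ≈ (0.5, 0.5) and λ ≈ 1.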
AL has Nice Properties
The penalty term can improve conditioning and convexity.
It automatically gives estimates of the Lagrange multipliers.
Only a finite penalty term is needed.
Theorem 17.5: Let x* be a local solution of the equality-constrained NLP at which LICQ and the second-order sufficient conditions (with multipliers λ*) are satisfied. Then for all μ sufficiently large, x* is a strict local minimizer of L(x, λ*, μ). Only a finite penalty term is needed!
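As a concrete check of the finite threshold, using the in-class exercise as reconstructed above (so this is illustrative): for min x³ s.t. x + 1 = 0 we have x* = −1, λ* = 3, and L(x, λ*, μ) = x³ − 3(x + 1) + (μ/2)(x + 1)². Then ∇²_xx L(x*, λ*, μ) = 6x* + μ = μ − 6, which is positive exactly when μ > 6. So any μ > 6 makes x* a strict local minimizer of the augmented Lagrangian, even though f itself is nonconvex.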
Why AL Works
The AL minimizer is close to the true solution if the penalty μ is large enough or if the multiplier estimate λ is close enough to λ*. The subproblems have a strict local minimum, so unconstrained minimization methods should work well.
Add Bound Constraints
Original problem: min f(x) s.t. h_i(x) = 0, i ∈ E, and l ≤ x ≤ u.
Add only the equalities to the augmented Lagrangian:
min_x L(x, λ^k, μ_k) = f(x) − (λ^k)'h(x) + (μ_k/2)||h(x)||²
s.t. l ≤ x ≤ u
Algorithm 17.4 (bound-constrained case)
Just put the nonlinear equalities in the augmented Lagrangian subproblem and keep the bounds as is. If the iterate is nearly feasible, update the multipliers (keeping the penalty); otherwise increase the penalty and keep the multipliers. A sketch of one inner solve follows.
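A minimal sketch of one bound-constrained subproblem solve in this style, assuming scipy's L-BFGS-B as the bound-constrained solver; the function names are placeholders.

```python
import numpy as np
from scipy.optimize import minimize, Bounds

def bound_constrained_subproblem(f, h, x, lam, mu, lower, upper):
    """min_x  f(x) - lam'h(x) + (mu/2)||h(x)||^2   s.t.  lower <= x <= upper.
    Only the nonlinear equalities enter the augmented Lagrangian;
    the simple bounds are handed directly to the bound-constrained solver."""
    def La(z):
        hz = np.atleast_1d(h(z))
        return f(z) - lam @ hz + 0.5 * mu * (hz @ hz)
    return minimize(La, x, method="L-BFGS-B", bounds=Bounds(lower, upper)).x
```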
Inequality Problems
The method of multipliers can be extended to inequality constraints g_j(x) ≥ 0 using a penalty parameter t:
L(x, u, t) = f(x) + (1/(2t)) Σ_{j=1}^m ( [u_j − t g_j(x)]_+² − u_j² )
If strict complementarity holds, this function is twice differentiable.
Inequality Problems
∇_x L(x, u, t) = ∇f(x) − Σ_{j=1}^m [u_j − t g_j(x)]_+ ∇g_j(x) = 0
∇_{u_j} L(x, u, t) = (1/t)( [u_j − t g_j(x)]_+ − u_j ) = 0
A KKT point of the augmented Lagrangian is a KKT point of the original problem.
The Lagrange multiplier estimate is u_j ← [u_j − t g_j(x)]_+
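A small Python sketch of these formulas, assuming the constraints are written g_j(x) ≥ 0 (consistent with the case analysis on the next slide); function and variable names are placeholders.

```python
import numpy as np

def auglag_inequality(x, f, g, u, t):
    """L(x, u, t) = f(x) + (1/(2t)) * sum( [u_j - t*g_j(x)]_+^2 - u_j^2 )
    for constraints g_j(x) >= 0 with multiplier estimates u_j >= 0."""
    gx = np.atleast_1d(g(x))
    plus = np.maximum(u - t * gx, 0.0)      # [u_j - t*g_j(x)]_+
    return f(x) + (plus @ plus - u @ u) / (2.0 * t)

def update_multipliers(x, g, u, t):
    """Multiplier update u_j <- [u_j - t*g_j(x)]_+ ; constraints that are
    strictly satisfied end up with u_j = 0 once t is large enough."""
    return np.maximum(u - t * np.atleast_1d(g(x)), 0.0)
```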
Inequality Problems
∇_x L(x, u, t) = ∇f(x) − Σ_{j=1}^m [u_j − t g_j(x)]_+ ∇g_j(x) = 0
With û_j = [u_j − t g_j(x)]_+ this reads ∇f(x) − Σ_j û_j ∇g_j(x) = 0, with û_j ≥ 0.
∇_u L(x, u, t) = 0 gives û_j = u_j for each j.
If g_j(x) > 0: for t sufficiently large, û_j = 0.
If g_j(x) = 0: û_j ≥ 0 and û_j g_j(x) = 0.
If g_j(x) < 0: for t sufficiently large we get a contradiction, so the point must be feasible.
Hence (x, û) satisfies the KKT conditions of the original problem.
NLP Family of Algorithms
Basic method: Sequential Linear Programming, Sequential Quadratic Programming, Augmented Lagrangian, Projection or Reduced Gradient
Directions: Steepest Descent, Newton, Quasi-Newton, Conjugate Gradient
Space: Direct, Null, Range
Constraints: Active Set, Barrier, Penalty
Step size: Line Search, Trust Region
Hybrid Approaches
A method can be any combination of these building blocks.
MINOS: for linear programs it uses the simplex method; the generalization of this to nonlinear programs with linear constraints is the reduced gradient method; nonlinear constraints are handled via the augmented Lagrangian; a BFGS estimate of the Hessian is used.