MATH529 – Fundamentals of Optimization. Fundamentals of Constrained Optimization VIII: Algorithms. Marco A. Montes de Oca, Mathematical Sciences, University of Delaware, USA. 1 / 24
Algorithms for Nonlinear Constrained Optimization One basic idea: Use Lagrangian-like functions as proxies (or analytical tools) for dealing with a constrained problem. 2 / 24
Algorithms for Nonlinear Constrained Optimization In this course: Penalty Methods Interior-Point Methods 3 / 24
Penalty Methods for Nonlinear Constrained Optimization Idea: Have a mechanism that generates solutions using information about their quality (favoring better-quality solutions). In the process of determining the quality of those solutions, penalize those that are infeasible by reducing their quality based on the degree to which they violate the constraints. 5 / 24
Penalty Methods for Nonlinear Constrained Optimization Example: Say c_1(x) = x_1 − x_2 = 3. Given u = (2, −0.5)^T and v = (−1, 0)^T, u should receive a better score than v because c_1(u) = 2.5 is closer to 3 than c_1(v) = −1. 6 / 24
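The scoring idea can be sketched in a few lines of Python; the names `c1` and `violation_penalty` are illustrative, not from the slides:

```python
def c1(x):
    """Constraint function value; the constraint c1(x) = 3 holds when this equals 3."""
    return x[0] - x[1]

def violation_penalty(x, target=3.0):
    """Squared constraint violation: 0 when feasible, growing with distance from feasibility."""
    return (c1(x) - target) ** 2

u = (2.0, -0.5)   # c1(u) = 2.5, violation (2.5 - 3)^2 = 0.25
v = (-1.0, 0.0)   # c1(v) = -1.0, violation (-1 - 3)^2 = 16.0
```

Since a smaller penalty means a better score, u is preferred over v, matching the slide.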
Penalty Methods for Nonlinear Constrained Optimization A common way to implement these ideas in order to deal with equality constraints is to define a proxy for the objective function as follows: Q(x, µ) = f(x) + (µ/2) Σ_{i ∈ E} (c_i(x))^2, where µ > 0 is called the penalty parameter. 7 / 24
Penalty Methods for Nonlinear Constrained Optimization Effects of the penalty parameter: 8 / 24
Penalty Methods for Nonlinear Constrained Optimization Minimize xy subject to x^2 + y^2 = 1: 9 / 24
Penalty Methods for Nonlinear Constrained Optimization Q(x, y, 1) = xy + (1/2)(x^2 + y^2 − 1)^2: (x⋆, y⋆) = (−0.8660, 0.8660), or (x⋆, y⋆) = (0.8660, −0.8660). 10 / 24
Penalty Methods for Nonlinear Constrained Optimization Q(x, y, 40) = xy + 20(x^2 + y^2 − 1)^2: (x⋆, y⋆) = (−0.7115, 0.7115), or (x⋆, y⋆) = (0.7115, −0.7115). 11 / 24
Penalty Methods for Nonlinear Constrained Optimization Penalty Method: Initialize µ_0 > 0, x_0. for k = 0, 1, ..., K: Use your favorite algorithm, starting from x_k, to find an approximate minimizer of Q(x, µ_k); call it m_k. If m_k is good enough, break and return m_k as the solution. Else choose a new µ_{k+1} > µ_k and set x_{k+1} = m_k. endfor 12 / 24
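A minimal Python sketch of this loop for the running example, minimize xy subject to x^2 + y^2 = 1. The inner solver (gradient descent with backtracking) is an arbitrary choice standing in for "your favorite algorithm":

```python
import math

def Q(x, y, mu):
    # Q(x, y, mu) = x*y + (mu/2) * (x^2 + y^2 - 1)^2
    c = x**2 + y**2 - 1.0
    return x * y + 0.5 * mu * c * c

def grad_Q(x, y, mu):
    c = x**2 + y**2 - 1.0
    return (y + 2.0 * mu * x * c, x + 2.0 * mu * y * c)

def minimize_Q(x, y, mu, tol=1e-8, max_iter=100000):
    """Gradient descent with backtracking line search on Q(., mu)."""
    for _ in range(max_iter):
        gx, gy = grad_Q(x, y, mu)
        if math.hypot(gx, gy) < tol:
            break
        t, f0 = 1.0, Q(x, y, mu)
        while Q(x - t * gx, y - t * gy, mu) >= f0 and t > 1e-16:
            t *= 0.5
        x, y = x - t * gx, y - t * gy
    return x, y

x, y = -0.5, 0.5                          # starting guess near one of the two minimizers
for mu in (1.0, 10.0, 100.0, 1000.0):     # increasing penalty parameter
    x, y = minimize_Q(x, y, mu)           # warm start from the previous solution
```

As µ grows, the iterates approach the constrained minimizer near (−1/√2, 1/√2), where xy = −1/2, consistent with the two slides above.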
Penalty Methods for Nonlinear Constrained Optimization MATLAB Example 13 / 24
Penalty Methods for Nonlinear Constrained Optimization Issues: Divergence. The Hessian of Q becomes ill-conditioned for large values of µ_k. It is harder to deal with inequality constraints because in this case Q(x, µ) = f(x) + (µ/2) Σ_{i ∈ E} (c_i(x))^2 + (µ/2) Σ_{i ∈ I} (min{c_i(x), 0})^2; therefore, Q is no longer twice continuously differentiable. 14 / 24
Penalty Methods for Nonlinear Constrained Optimization Augmented Lagrangian Method 15 / 24
Penalty Methods for Nonlinear Constrained Optimization We saw that: ∇_x Q(x, µ_k) = ∇f(x) + µ_k Σ_{i ∈ E} c_i(x) ∇c_i(x). Now, compare this equation with the gradient of the Lagrangian: ∇_x L(x, λ) = ∇f(x) − Σ_{i ∈ E} λ_i ∇c_i(x). At a solution point of Q, we can say that c_i(x_k) ≈ −λ⋆_i / µ_k for all i ∈ E. This means that c_i(x_k) → 0 as µ_k → ∞, but in general a solution to Q is biased. 16 / 24
Penalty Methods for Nonlinear Constrained Optimization A way to reduce this bias is to use what is called the augmented Lagrangian, which is defined as: L_A(x, λ, µ) = f(x) − Σ_{i ∈ E} λ_i c_i(x) + (µ/2) Σ_{i ∈ E} (c_i(x))^2. The idea is then to use L_A(x, λ^k, µ_k) instead of Q(x, µ_k) as a proxy for the constrained problem. 17 / 24
Penalty Methods for Nonlinear Constrained Optimization This works because the optimality conditions for L_A(x, λ^k, µ_k) say that ∇_x L_A(x_k, λ^k, µ_k) = 0, and therefore ∇_x L_A(x_k, λ^k, µ_k) = ∇f(x_k) − Σ_{i ∈ E} (λ^k_i − µ_k c_i(x_k)) ∇c_i(x_k) = 0, and so λ⋆_i ≈ λ^k_i − µ_k c_i(x_k) for all i ∈ E. We can now see that c_i(x_k) ≈ −(1/µ_k)(λ⋆_i − λ^k_i). So, c_i(x_k) would be much smaller than before, provided that λ^k_i is close to λ⋆_i. 18 / 24
Penalty Methods for Nonlinear Constrained Optimization A method that implements the augmented Lagrangian approach would use the update formula λ^{k+1}_i = λ^k_i − µ_k c_i(x_k) to obtain a better-behaved algorithm that does not require µ_k → ∞ (at least not as fast) to produce accurate solutions. 19 / 24
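A minimal sketch of the full augmented Lagrangian loop on the same example, minimize xy subject to x^2 + y^2 = 1, whose exact multiplier is λ⋆ = −1/2 (from ∇f = λ∇c at the solution). The inner gradient-descent solver is my choice; the slides do not prescribe one:

```python
import math

def LA(x, y, lam, mu):
    # L_A = x*y - lam*(x^2 + y^2 - 1) + (mu/2)*(x^2 + y^2 - 1)^2
    c = x**2 + y**2 - 1.0
    return x * y - lam * c + 0.5 * mu * c * c

def grad_LA(x, y, lam, mu):
    c = x**2 + y**2 - 1.0
    return (y - 2.0 * lam * x + 2.0 * mu * x * c,
            x - 2.0 * lam * y + 2.0 * mu * y * c)

def minimize_LA(x, y, lam, mu, tol=1e-10, max_iter=100000):
    """Gradient descent with backtracking on L_A(., lam, mu)."""
    for _ in range(max_iter):
        gx, gy = grad_LA(x, y, lam, mu)
        if math.hypot(gx, gy) < tol:
            break
        t, f0 = 1.0, LA(x, y, lam, mu)
        while LA(x - t * gx, y - t * gy, lam, mu) >= f0 and t > 1e-16:
            t *= 0.5
        x, y = x - t * gx, y - t * gy
    return x, y

x, y, lam, mu = -0.5, 0.5, 0.0, 10.0       # mu stays fixed; only lam is updated
for _ in range(10):
    x, y = minimize_LA(x, y, lam, mu)
    lam = lam - mu * (x**2 + y**2 - 1.0)   # multiplier update from the slide
```

Unlike the pure penalty method, the constraint violation vanishes here with a fixed, moderate µ, because λ converges to λ⋆ = −1/2.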
Penalty Methods for Nonlinear Constrained Optimization It is possible to modify a problem with inequality constraints so that the augmented Lagrangian method can be used without modification: the idea is to transform c_i(x) ≥ 0 into c_i(x) − s_i = 0 with the bound constraints s_i ≥ 0 (which are easier to deal with). Another approach is to use: 20 / 24
Barrier Methods 21 / 24
Barrier Methods for Nonlinear Constrained Optimization Similar to the penalty method, but now the penalty is smooth: 22 / 24
Barrier Methods for Nonlinear Constrained Optimization Common barrier functions: the inverse barrier 1/x and the logarithmic barrier −ln(x). 23 / 24
Barrier Methods for Nonlinear Constrained Optimization Example: Minimize 2x^2 + 9y subject to x + y ≥ 4. 24 / 24
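For this example the constraint is active at the solution, so substituting y = 4 − x and minimizing 2x^2 + 9(4 − x) gives 4x⋆ = 9, i.e. (x⋆, y⋆) = (2.25, 1.75). A minimal log-barrier sketch in Python; using damped Newton on the 2×2 barrier subproblem is an assumption, since the slides do not prescribe an inner solver:

```python
import math

def fbar(x, y, mu):
    """Barrier objective 2x^2 + 9y - mu*ln(x + y - 4); +inf outside the strict interior."""
    s = x + y - 4.0
    return 2.0 * x * x + 9.0 * y - mu * math.log(s) if s > 0.0 else float("inf")

def newton_inner(x, y, mu, tol=1e-10, max_iter=200):
    """Damped Newton on the barrier subproblem; the 2x2 Hessian is inverted in closed form."""
    for _ in range(max_iter):
        s = x + y - 4.0
        gx, gy = 4.0 * x - mu / s, 9.0 - mu / s
        if math.hypot(gx, gy) < tol:
            break
        a = mu / (s * s)                     # Hessian = [[4 + a, a], [a, a]], det = 4a
        dx = -(a * gx - a * gy) / (4.0 * a)
        dy = -(-a * gx + (4.0 + a) * gy) / (4.0 * a)
        t, f0 = 1.0, fbar(x, y, mu)
        while fbar(x + t * dx, y + t * dy, mu) >= f0 and t > 1e-16:
            t *= 0.5                         # damp to stay feasible and decrease fbar
        x, y = x + t * dx, y + t * dy
    return x, y

x, y, mu = 3.0, 3.0, 1.0                     # strictly feasible start: x + y > 4
while mu > 1e-8:
    x, y = newton_inner(x, y, mu)            # warm start each subproblem
    mu *= 0.1                                # shrink the barrier parameter
```

Because the barrier blows up at the boundary, every iterate stays strictly feasible, and as µ shrinks the iterates approach (2.25, 1.75) from inside the feasible region.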