Supplemental notes: Kuhn-Tucker first-order conditions
P. Dybvig


Minimization problem (as in the slides): choose $x \in \Re^N$ to minimize $f(x)$ subject to $(\forall i \in E)\ g_i(x) = 0$ and $(\forall i \in I)\ g_i(x) \geq 0$.

$x = (x_1, \ldots, x_N)$ is a vector of choice variables. $f(x)$ is the scalar-valued objective function. $g_i(x) = 0$, $i \in E$, are equality constraints. $g_i(x) \geq 0$, $i \in I$, are inequality constraints. $E \cap I = \emptyset$.

Kuhn-Tucker conditions:
$$\nabla f(x^*) = \sum_{i \in E \cup I} \lambda_i \nabla g_i(x^*)$$
$$(\forall i \in I)\quad \lambda_i \geq 0 \quad\text{and}\quad \lambda_i g_i(x^*) = 0$$

The feasible solution $x^*$ is called regular if the set $\{\nabla g_i(x^*) \mid g_i(x^*) = 0\}$ is a linearly independent set. In particular, an interior solution is always regular. If $x^*$ is regular and $f$ and the $g_i$'s are differentiable, the Kuhn-Tucker conditions are necessary for feasible $x^*$ to be optimal. If the optimization problem is convex, then the Kuhn-Tucker conditions are sufficient for an optimum.
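To make the conditions concrete, here is a small worked example added for illustration (it does not appear in the original notes). Minimize $f(x) = x_1^2 + x_2^2$ subject to the single inequality constraint $g_1(x) = x_1 + x_2 - 2 \geq 0$, so that $E = \emptyset$ and $I = \{1\}$. The Kuhn-Tucker conditions read

$$(2x_1^*, 2x_2^*) = \lambda_1 (1, 1), \qquad \lambda_1 \geq 0, \qquad \lambda_1 (x_1^* + x_2^* - 2) = 0.$$

If $\lambda_1 = 0$, stationarity forces $x^* = (0, 0)$, which is infeasible, so the constraint must bind: $x_1^* + x_2^* = 2$. Stationarity gives $x_1^* = x_2^* = \lambda_1/2$, hence $x^* = (1, 1)$ and $\lambda_1 = 2 \geq 0$. The problem is convex, so by the sufficiency theorem $x^* = (1, 1)$ is optimal.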

Maximization problem: choose $x \in \Re^N$ to maximize $f(x)$ subject to $(\forall i \in E)\ g_i(x) = 0$ and $(\forall i \in I)\ g_i(x) \leq 0$.

$x = (x_1, \ldots, x_N)$ is a vector of choice variables. $f(x)$ is the scalar-valued objective function. $g_i(x) = 0$, $i \in E$, are equality constraints. $g_i(x) \leq 0$, $i \in I$, are inequality constraints. $E \cap I = \emptyset$.

Kuhn-Tucker conditions:
$$\nabla f(x^*) = \sum_{i \in E \cup I} \lambda_i \nabla g_i(x^*)$$
$$(\forall i \in I)\quad \lambda_i \geq 0 \quad\text{and}\quad \lambda_i g_i(x^*) = 0$$

(Same theorems as for the minimization problem above.)
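One way to see why the two sets of conditions coincide (a derivation added here for completeness, not in the original notes): maximizing $f$ subject to $(\forall i \in I)\ g_i(x) \leq 0$ is the same as minimizing $-f$ subject to $(\forall i \in I)\ {-g_i(x)} \geq 0$. The minimization-form conditions give

$$\nabla(-f)(x^*) = \sum_{i \in E \cup I} \lambda_i \nabla(-g_i)(x^*), \qquad (\forall i \in I)\ \lambda_i \geq 0 \ \text{and}\ \lambda_i g_i(x^*) = 0,$$

and multiplying the stationarity condition through by $-1$ recovers $\nabla f(x^*) = \sum_{i \in E \cup I} \lambda_i \nabla g_i(x^*)$ with the same multipliers.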

Example: Second Model in-class exercise from Lecture 1. Given $p_n > 0$ and $\pi_n > 0$ for $n = 1, \ldots, N$, and $W_0 > 0$, choose $x = (x_1, \ldots, x_N) \in \Re^N$ to maximize $\sum_{n=1}^N \pi_n x_n$ subject to $\sum_{n=1}^N p_n x_n = W_0$ and $(\forall n)\ x_n \geq 0$, where
$$\frac{p_1}{\pi_1} < \frac{p_2}{\pi_2} < \cdots < \frac{p_N}{\pi_N}$$
(states ordered from cheapest to most expensive).

$\nabla f = (\pi_1, \ldots, \pi_N)$. $E = \{0\}$, with $g_0(x) = \sum_{n=1}^N p_n x_n - W_0$ and $\nabla g_0 = (p_1, p_2, \ldots, p_N)$. $I = \{1, 2, \ldots, N\}$, and for $n > 0$, $g_n(x) = -x_n$ and $\nabla g_n = (0, \ldots, 0, -1, 0, \ldots, 0)$ with the $-1$ in the $n$th coordinate. (Note: LP $\Rightarrow$ gradients do not vary with $x$.)

Kuhn-Tucker conditions: $\nabla f = \sum_{n=0}^N \lambda_n \nabla g_n$, and for $n = 1, \ldots, N$, $\lambda_n \geq 0$ and $x_n \lambda_n = 0$.

For $n = 1, \ldots, N$, $\pi_n = \lambda_0 p_n - \lambda_n$, or $\lambda_0 = \pi_n/p_n + \lambda_n/p_n$. Because $\lambda_n/p_n \geq 0$, $\lambda_0 \geq \max_n(\pi_n/p_n) = \pi_1/p_1$. However, we cannot have $\lambda_0 > \max_n \pi_n/p_n$, because then complementary slackness would imply that all $x_n$ are 0, which would not satisfy the budget constraint. Therefore, we have $\lambda_0 = \pi_1/p_1$ and $\lambda_n = ((\pi_1/p_1) - (\pi_n/p_n))\, p_n$. This expression for $\lambda_n$ is positive for $n = 2, \ldots, N$ (implying that $x_n = 0$ for $n = 2, \ldots, N$) and zero for $n = 1$. Using the budget constraint to compute $x_1 = W_0/p_1$, we have the unique solution of the Kuhn-Tucker conditions:

$\lambda_0 = \pi_1/p_1$
For $n = 2, \ldots, N$: $x_n = 0$ and $\lambda_n = ((\pi_1/p_1) - (\pi_n/p_n))\, p_n > 0$
$x_1 = W_0/p_1$ and $\lambda_1 = 0$

It is easy to verify that this is a feasible solution satisfying the Kuhn-Tucker conditions in a convex optimization. Therefore $x$ is optimal.
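Since the example is a linear program with a closed-form solution, the claimed multipliers are easy to verify numerically. The following Python sketch is not part of the original notes; the price, probability, and wealth values are made up for illustration. It checks feasibility, stationarity, nonnegativity, and complementary slackness, and cross-checks the optimum with scipy's linprog.

import numpy as np
from scipy.optimize import linprog

# Hypothetical data for illustration only (not from the notes); states are
# ordered so that p[0]/pi[0] < p[1]/pi[1] < p[2]/pi[2] (cheapest first).
p  = np.array([0.2, 0.3, 0.5])   # state prices p_n
pi = np.array([0.5, 0.3, 0.2])   # probabilities pi_n
W0 = 10.0                        # initial wealth
N  = len(p)

# Closed-form Kuhn-Tucker solution derived in the notes.
x = np.zeros(N)
x[0] = W0 / p[0]                    # all wealth goes into the cheapest state
lam0 = pi[0] / p[0]                 # multiplier on the budget constraint
lam  = (pi[0] / p[0] - pi / p) * p  # multipliers on the constraints x_n >= 0

# Verify the Kuhn-Tucker conditions.
assert np.isclose(p @ x, W0)                # feasibility: budget constraint
assert np.all(x >= 0) and np.all(lam >= 0)  # feasibility and lambda_n >= 0
assert np.allclose(pi, lam0 * p - lam)      # stationarity: pi_n = lam0*p_n - lam_n
assert np.allclose(lam * x, 0.0)            # complementary slackness

# Cross-check with a generic LP solver (linprog minimizes, so negate f).
res = linprog(c=-pi, A_eq=p.reshape(1, -1), b_eq=[W0], bounds=[(0, None)] * N)
assert np.allclose(res.x, x)
print("optimal x:", x, "expected payoff:", pi @ x)

Because the price/probability ratios are strictly increasing, the optimum is unique, and the solver reproduces $x_1 = W_0/p_1$ with $x_n = 0$ elsewhere.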
