EC400 Part II, Math for Micro: Lecture 6
Leonardo Felli
NAB.SZT
16 September 2010
SOC with Two Variables and One Constraint

Consider the problem:
\[
\max_{x_1, x_2} f(x_1, x_2) \quad \text{s.t.} \quad g(x_1, x_2) \leq b.
\]

The bordered Hessian matrix is:
\[
B = \begin{pmatrix}
0 & \frac{\partial g}{\partial x_1} & \frac{\partial g}{\partial x_2} \\
\frac{\partial g}{\partial x_1} & \frac{\partial^2 f}{\partial x_1^2} - \lambda^* \frac{\partial^2 g}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1 \partial x_2} - \lambda^* \frac{\partial^2 g}{\partial x_1 \partial x_2} \\
\frac{\partial g}{\partial x_2} & \frac{\partial^2 f}{\partial x_2 \partial x_1} - \lambda^* \frac{\partial^2 g}{\partial x_2 \partial x_1} & \frac{\partial^2 f}{\partial x_2^2} - \lambda^* \frac{\partial^2 g}{\partial x_2^2}
\end{pmatrix}
\]
The leading principal submatrices are:
\[
B_1 = (0), \qquad
B_2 = \begin{pmatrix}
0 & \frac{\partial g}{\partial x_1} \\
\frac{\partial g}{\partial x_1} & \frac{\partial^2 f}{\partial x_1^2} - \lambda^* \frac{\partial^2 g}{\partial x_1^2}
\end{pmatrix}, \qquad
B_3 = B.
\]

The sufficient SOC are: $|B_2| < 0$ (the sign of $(-1)^1$), which is always satisfied, and $|B_3| > 0$ (the sign of $(-1)^2$).
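To make the check concrete, here is a minimal sympy sketch (an added illustration, not from the original slides) for the hypothetical problem $\max x_1 x_2$ subject to $x_1 + x_2 \leq b$, whose critical point is $x_1 = x_2 = b/2$ with $\lambda^* = b/2$:

```python
# Minimal check of the sufficient SOC for a hypothetical example:
# max x1*x2  s.t.  x1 + x2 <= b  (binding at the optimum).
import sympy as sp

x1, x2, lam, b = sp.symbols('x1 x2 lambda b', positive=True)
f = x1 * x2
g = x1 + x2
L = f - lam * (g - b)

# Critical point of the Lagrangian: x1 = x2 = b/2, lambda = b/2.
sol = {x1: b / 2, x2: b / 2, lam: b / 2}

# Bordered Hessian: constraint gradient as border, Hessian of the
# Lagrangian with respect to (x1, x2) inside.
B = sp.Matrix([
    [0,              sp.diff(g, x1),     sp.diff(g, x2)],
    [sp.diff(g, x1), sp.diff(L, x1, x1), sp.diff(L, x1, x2)],
    [sp.diff(g, x2), sp.diff(L, x2, x1), sp.diff(L, x2, x2)],
]).subs(sol)

B2 = B[:2, :2].det()   # leading principal minor |B_2|
B3 = B.det()           # full bordered Hessian |B_3|
print(B2, B3)          # -1 and 2: |B_2| < 0 and |B_3| > 0, so x* is a local max
```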
Consider now the problem:
\[
\max_{x_1, x_2, x_3} f(x_1, x_2, x_3) \quad \text{s.t.} \quad g(x_1, x_2, x_3) \leq b.
\]

The bordered Hessian matrix is:
\[
H = \begin{pmatrix}
0 & \frac{\partial g}{\partial x_1} & \frac{\partial g}{\partial x_2} & \frac{\partial g}{\partial x_3} \\
\frac{\partial g}{\partial x_1} & \frac{\partial^2 L}{\partial x_1^2} & \frac{\partial^2 L}{\partial x_1 \partial x_2} & \frac{\partial^2 L}{\partial x_1 \partial x_3} \\
\frac{\partial g}{\partial x_2} & \frac{\partial^2 L}{\partial x_2 \partial x_1} & \frac{\partial^2 L}{\partial x_2^2} & \frac{\partial^2 L}{\partial x_2 \partial x_3} \\
\frac{\partial g}{\partial x_3} & \frac{\partial^2 L}{\partial x_3 \partial x_1} & \frac{\partial^2 L}{\partial x_3 \partial x_2} & \frac{\partial^2 L}{\partial x_3^2}
\end{pmatrix}
\]
where $L$ denotes the Lagrangian.
The leading principal submatrices are:
\[
H_1 = (0), \qquad
H_2 = \begin{pmatrix}
0 & \frac{\partial g}{\partial x_1} \\
\frac{\partial g}{\partial x_1} & \frac{\partial^2 L}{\partial x_1^2}
\end{pmatrix},
\]
\[
H_3 = \begin{pmatrix}
0 & \frac{\partial g}{\partial x_1} & \frac{\partial g}{\partial x_2} \\
\frac{\partial g}{\partial x_1} & \frac{\partial^2 L}{\partial x_1^2} & \frac{\partial^2 L}{\partial x_1 \partial x_2} \\
\frac{\partial g}{\partial x_2} & \frac{\partial^2 L}{\partial x_2 \partial x_1} & \frac{\partial^2 L}{\partial x_2^2}
\end{pmatrix}, \qquad
H_4 = H.
\]

The sufficient SOC are then $|H_2| < 0$ (always satisfied), $|H_3| > 0$ and $|H_4| < 0$.
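The same kind of check extends to the three-variable case. A sketch (again an added illustration with hypothetical data), assuming $\max x_1 x_2 x_3$ subject to $x_1 + x_2 + x_3 \leq b$, with critical point $x_i = b/3$ and $\lambda^* = (b/3)^2$:

```python
# Minimal SOC check for a hypothetical three-variable example:
# max x1*x2*x3  s.t.  x1 + x2 + x3 <= b  (binding at the optimum).
import sympy as sp

x = sp.symbols('x1 x2 x3', positive=True)
lam, b = sp.symbols('lambda b', positive=True)
f = x[0] * x[1] * x[2]
g = x[0] + x[1] + x[2]
L = f - lam * (g - b)

# Critical point of the Lagrangian: x_i = b/3, lambda = (b/3)**2.
sol = {x[0]: b / 3, x[1]: b / 3, x[2]: b / 3, lam: (b / 3) ** 2}

# 4x4 bordered Hessian: constraint gradient as border, Hessian of L inside.
grad_g = sp.Matrix([g]).jacobian(sp.Matrix(x))   # 1x3 row vector (1, 1, 1)
H = sp.zeros(4, 4)
H[0, 1:] = grad_g
H[1:, 0] = grad_g.T
H[1:, 1:] = sp.hessian(L, x)
H = H.subs(sol)

minors = [sp.simplify(H[:k, :k].det()) for k in (2, 3, 4)]
print(minors)   # [-1, 2*b/3, -b**2/3]: |H_2| < 0, |H_3| > 0, |H_4| < 0 -> local max
```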
Maximum Value Functions

Profit functions and indirect utility functions are examples of maximum value functions, whereas cost functions and expenditure functions are minimum value functions.

Definition. Let $x(b)$ be a solution to the problem of maximizing $f(x)$ subject to $g(x) \leq b$. The corresponding maximum value function is then
\[
v(b) = f(x(b)).
\]

A maximum value function is non-decreasing.
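As a simple illustration (not in the original slides): for $\max_{x_1, x_2} x_1 x_2$ subject to $x_1 + x_2 \leq b$ with $b > 0$, the solution is $x_1(b) = x_2(b) = b/2$, so
\[
v(b) = f(x(b)) = \frac{b}{2}\cdot\frac{b}{2} = \frac{b^2}{4},
\]
which is indeed non-decreasing in $b$.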
The Interpretation of the Lagrange Multiplier

Consider the problem:
\[
\max_{x \in \mathbb{R}^n} f(x) \quad \text{s.t.} \quad g_1(x) \leq b_1^*, \; \ldots, \; g_k(x) \leq b_k^*.
\]

Let $b^* = (b_1^*, \ldots, b_k^*)$, let $x_1^*(b^*), \ldots, x_n^*(b^*)$ denote the optimal solution and let $\lambda_1(b^*), \ldots, \lambda_k(b^*)$ be the corresponding Lagrange multipliers.
Theorem. Assume that, as $b$ varies near $b^*$, $x_1^*(b^*), \ldots, x_n^*(b^*)$ and $\lambda_1(b^*), \ldots, \lambda_k(b^*)$ are differentiable functions and that $x^*(b^*)$ satisfies the constraint qualification. Then for each $j = 1, 2, \ldots, k$:
\[
\frac{\partial f(x^*(b^*))}{\partial b_j} = \lambda_j(b^*).
\]
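Continuing the illustrative example above (an addition, not from the original slides): there the multiplier at the optimum is $\lambda(b) = b/2$, and indeed
\[
\frac{\partial f(x^*(b))}{\partial b} = \frac{d}{db}\,\frac{b^2}{4} = \frac{b}{2} = \lambda(b).
\]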
Proof: We prove it in the case of a single equality constraint, with $f$ and $h$ functions of two variables. The Lagrangian is
\[
L(x, y, \lambda; b) = f(x, y) - \lambda\,(h(x, y) - b).
\]
The solution satisfies, for all $b$:
\[
0 = \frac{\partial L}{\partial x}(x^*(b), y^*(b), \lambda^*(b); b)
  = \frac{\partial f}{\partial x}(x^*(b), y^*(b)) - \lambda^*(b)\,\frac{\partial h}{\partial x}(x^*(b), y^*(b)),
\]
and
\[
0 = \frac{\partial L}{\partial y}(x^*(b), y^*(b), \lambda^*(b); b)
  = \frac{\partial f}{\partial y}(x^*(b), y^*(b)) - \lambda^*(b)\,\frac{\partial h}{\partial y}(x^*(b), y^*(b)).
\]

Furthermore, since $h(x^*(b), y^*(b)) = b$ for all $b$:
\[
\frac{\partial h}{\partial x}(x^*, y^*)\,\frac{dx^*}{db}(b) + \frac{\partial h}{\partial y}(x^*, y^*)\,\frac{dy^*}{db}(b) = 1.
\]
Therefore, using the chain rule, we have:
\[
\frac{d f(x^*(b), y^*(b))}{db}
  = \frac{\partial f}{\partial x}(x^*, y^*)\,\frac{dx^*}{db}(b) + \frac{\partial f}{\partial y}(x^*, y^*)\,\frac{dy^*}{db}(b)
\]
\[
  = \lambda^*(b)\left[\frac{\partial h}{\partial x}(x^*, y^*)\,\frac{dx^*}{db}(b) + \frac{\partial h}{\partial y}(x^*, y^*)\,\frac{dy^*}{db}(b)\right]
  = \lambda^*(b).
\]
Economic Interpretation: the Lagrange multiplier can be interpreted as a 'shadow price'.

For example, in a firm's profit maximization problem, the Lagrange multipliers tell us how valuable another unit of input would be to the firm's profits. Alternatively, they tell us how much the firm's maximum profit changes when the constraint is relaxed. Finally, they identify the maximum amount the firm would be willing to pay to acquire another unit of input.
Recall that, using $\lambda(b)\,(g(x(b), y(b)) - b) = 0$ at the optimum,
\[
L(x(b), y(b), \lambda(b); b) = f(x(b), y(b)) - \lambda(b)\,(g(x(b), y(b)) - b) = f(x(b), y(b)).
\]
So that
\[
\frac{\partial L}{\partial b}(x(b), y(b), \lambda(b); b) = \frac{d}{db}\, f(x(b), y(b)) = \lambda(b).
\]
Envelope Theorem

What we have found is simply a particular application of the envelope theorem, which says that
\[
\frac{\partial L}{\partial b}(x(b), y(b), \lambda(b); b) = \frac{d}{db}\, f(x(b), y(b)).
\]

Consider the problem:
\[
\max_{x_1, x_2, \ldots, x_n} f(x_1, x_2, \ldots, x_n) \quad \text{s.t.} \quad h_1(x_1, x_2, \ldots, x_n, c) = 0, \; \ldots, \; h_k(x_1, x_2, \ldots, x_n, c) = 0.
\]
Let $x_1^*(c), \ldots, x_n^*(c)$ denote the optimal solution and let $\mu_1(c), \ldots, \mu_k(c)$ be the corresponding Lagrange multipliers.

Suppose that $x_1^*(c), \ldots, x_n^*(c)$ and $\mu_1(c), \ldots, \mu_k(c)$ are differentiable functions and that $x^*(c)$ satisfies the constraint qualification. Then:
\[
\frac{\partial L}{\partial c}(x^*(c), \mu(c); c) = \frac{d}{dc}\, f(x^*(c)).
\]
Notice: in the case where constraint $i$ takes the form $h_i(x_1, x_2, \ldots, x_n, c) = \tilde{h}_i(x_1, x_2, \ldots, x_n) - c = 0$, with $c$ entering only that constraint, we are back to the previous case:
\[
\frac{\partial L}{\partial c}(x^*(c), \mu(c); c) = \frac{d}{dc}\, f(x^*(c)) = \mu_i(c),
\]
but the statement is more general.
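For instance (an added illustration, not from the original slides), take $\max_{x_1, x_2} x_1 x_2$ subject to $h(x_1, x_2, c) = c\,x_1 + x_2 - 1 = 0$, where the parameter $c > 0$ enters the constraint multiplicatively. The solution is $x_1^*(c) = 1/(2c)$, $x_2^*(c) = 1/2$, $\mu(c) = 1/(2c)$, so $v(c) = f(x^*(c)) = 1/(4c)$ and
\[
\frac{d}{dc}\, v(c) = -\frac{1}{4c^2} = -\mu(c)\, x_1^*(c) = \frac{\partial L}{\partial c}(x^*(c), \mu(c); c),
\]
which is no longer equal to the multiplier itself: the simpler formula $dv/dc = \mu(c)$ only applies when $c$ enters the constraint additively.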
Proof: We prove it for the simpler case of an unconstrained problem.

Let $\varphi(x; a)$ be a continuous function of $x \in \mathbb{R}^n$ and of a scalar parameter $a$. For any $a$, consider the maximization problem
\[
\max_x \varphi(x; a).
\]
Let $x^*(a)$ be the solution of this problem, and assume it is a continuous and differentiable function of $a$. We will show that
\[
\frac{d}{da}\,\varphi(x^*(a); a) = \frac{\partial}{\partial a}\,\varphi(x^*(a); a).
\]
By the chain rule we have:
\[
\frac{d}{da}\,\varphi(x^*(a); a)
  = \sum_i \frac{\partial \varphi}{\partial x_i}(x^*(a); a)\,\frac{dx_i^*}{da}(a) + \frac{\partial \varphi}{\partial a}(x^*(a); a).
\]
Since, by the first order conditions,
\[
\frac{\partial \varphi}{\partial x_i}(x^*(a); a) = 0 \quad \forall i,
\]
we then get:
\[
\frac{d}{da}\,\varphi(x^*(a); a) = \frac{\partial \varphi}{\partial a}(x^*(a); a).
\]
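A one-variable illustration (added, not in the original slides): let $\varphi(x; a) = -x^2 + a\,x$. Then $x^*(a) = a/2$ and $\varphi(x^*(a); a) = a^2/4$, so
\[
\frac{d}{da}\,\varphi(x^*(a); a) = \frac{a}{2} = \left.\frac{\partial \varphi}{\partial a}(x; a)\right|_{x = x^*(a)} = x^*(a),
\]
exactly as the envelope theorem predicts: only the direct effect of $a$ matters at the optimum.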
Intuitively, when we are already at a maximum, slightly changing the parameters of the problem or the constraints does not affect, to a first order, the value of the maximum value function through changes in the solution $x^*(a)$, because of the first order conditions
\[
\frac{\partial \varphi}{\partial x_i}(x^*(a); a) = 0.
\]

This is a local result: when we use the envelope theorem we have to make sure that we do not jump to another solution in a discrete manner.
Implicit Function Theorem

In economic theory, once we pin down an equilibrium or a solution to an optimization problem, we are interested in how changes in the exogenous variables affect the values of the endogenous variables.

The key tool used in this endeavor is the Implicit Function Theorem.

We have been using the Implicit Function Theorem throughout without stating it or explaining why we can use it.