
  1. Solving Linear and Integer Programs Robert E. Bixby ILOG, Inc. and Rice University

  2. Outline
     • Linear Programming:
       – Introduction to basic LP, including duality
       – Primal and dual simplex algorithms
       – Computational progress in linear programming
       – Implementing the dual simplex algorithm
     • Mixed-Integer Programming

  3. Some Basic Theory

  4. Linear Program – Definition
     A linear program (LP) in standard form is an optimization problem of the form
        (P)   Minimize   c^T x
              Subject to Ax = b
                         x ≥ 0
     where c ∈ R^n, b ∈ R^m, A ∈ R^{m×n}, and x is a vector of n variables. c^T x is known as the objective function, Ax = b as the constraints, and x ≥ 0 as the nonnegativity conditions. b is called the right-hand side.
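As a quick numeric companion to the definition, the sketch below states a tiny standard-form LP (the data are made up, not from the slides) and solves it with SciPy's linprog, used here only as a convenient off-the-shelf solver:

    # Tiny standard-form LP (illustrative data):
    #   minimize c^T x  subject to  Ax = b, x >= 0
    import numpy as np
    from scipy.optimize import linprog

    c = np.array([1.0, 2.0, 0.0])        # objective coefficients
    A = np.array([[1.0, 1.0, 1.0],
                  [0.0, 1.0, 2.0]])      # equality constraint matrix (m = 2, n = 3)
    b = np.array([10.0, 6.0])            # right-hand side

    # bounds=(0, None) encodes the nonnegativity conditions x >= 0
    res = linprog(c, A_eq=A, b_eq=b, bounds=(0, None), method="highs")
    print(res.x, res.fun)                # an optimal x and its objective value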

  5. Dual Linear Program – Definition
     The dual (or adjoint) linear program corresponding to (P) is the optimization problem
        (D)   Maximize   b^T π
              Subject to A^T π ≤ c
                         π free
     In this context, (P) is referred to as the primal linear program:
        (P)   Minimize   c^T x
              Subject to Ax = b
                         x ≥ 0

  6. Weak Duality Theorem (von Neumann 1947)
     Let x be feasible for (P) and π feasible for (D). Then
        b^T π ≤ c^T x.
     If b^T π = c^T x, then x is optimal for (P) and π is optimal for (D); moreover, if either (P) or (D) is unbounded, then the other problem is infeasible.
     Proof: π^T b = π^T Ax (since Ax = b) ≤ c^T x (since π^T A ≤ c^T and x ≥ 0). ∎
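The inequality is easy to verify numerically. Below is a small sketch (illustrative data, not from the slides) that checks b^T π ≤ c^T x for one primal-feasible x and one dual-feasible π:

    # Numeric illustration of weak duality on made-up data.
    import numpy as np

    A = np.array([[1.0, 1.0, 1.0],
                  [1.0, 2.0, 0.0]])
    b = np.array([4.0, 5.0])
    c = np.array([3.0, 4.0, 2.0])

    x  = np.array([3.0, 1.0, 0.0])   # primal feasible: Ax = b, x >= 0
    pi = np.array([2.0, 0.5])        # dual feasible:  A^T pi <= c

    assert np.allclose(A @ x, b) and (x >= 0).all()
    assert (A.T @ pi <= c + 1e-12).all()
    print(pi @ b, "<=", c @ x)       # weak duality: b^T pi <= c^T x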

  7. Solving Linear Programs
     Three types of algorithms are available:
     • Primal simplex algorithms (Dantzig 1947)
     • Dual simplex algorithms (Lemke 1954)
       – Developed in the context of game theory
     • Primal-dual log barrier algorithms
       – Interior-point algorithms (Karmarkar 1984)
       – Reference: Primal-Dual Interior-Point Methods, S. Wright, 1997, SIAM
     Primary focus: dual simplex algorithms

  8. Basic Solutions – Definition
     Let B be an ordered set of m distinct indices (B_1,…,B_m) taken from {1,…,n}. B is called a basis for (P) if A_B is nonsingular. The variables x_B are known as the basic variables and the variables x_N as the non-basic variables, where N = {1,…,n}\B. The corresponding basic solution X ∈ R^n is given by X_N = 0 and X_B = A_B^{-1} b. B is called (primal) feasible if X_B ≥ 0.
     Note: AX = b ⇒ A_B X_B + A_N X_N = b ⇒ A_B X_B = b ⇒ X_B = A_B^{-1} b
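A short sketch of this definition (illustrative data; 0-based indices rather than the slides' 1-based indices): pick a basis B, solve A_B X_B = b, set X_N = 0, and check primal feasibility.

    # Computing the basic solution for a chosen basis B.
    import numpy as np

    A = np.array([[1.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0, 1.0]])
    b = np.array([8.0, 7.0])
    n = A.shape[1]

    B = [0, 1]                                  # ordered basis (0-based, illustrative)
    N = [j for j in range(n) if j not in B]

    x = np.zeros(n)
    x[B] = np.linalg.solve(A[:, B], b)          # X_B = A_B^{-1} b, X_N = 0
    print(x, "primal feasible" if (x[B] >= 0).all() else "infeasible")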

  9. Primal Simplex Algorithm (Dantzig, 1947)
     Input: A feasible basis B and vectors X_B = A_B^{-1} b and D_N = c_N – A_N^T A_B^{-T} c_B.
     • Step 1: (Pricing) If D_N ≥ 0, stop, B is optimal; else let j = argmin{D_k : k ∈ N}.
     • Step 2: (FTRAN) Solve A_B y = A_j.
     • Step 3: (Ratio test) If y ≤ 0, stop, (P) is unbounded; else let i = argmin{X_{B_k} / y_k : y_k > 0}.
     • Step 4: (BTRAN) Solve A_B^T z = e_i.
     • Step 5: (Update) Compute α_N = –A_N^T z. Let B_i = j. Update X_B (using y) and D_N (using α_N).
     Note: x_j is called the entering variable and x_{B_i} the leaving variable. The D_N values are known as reduced costs – like partial derivatives of the objective function relative to the nonbasic variables.
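The sketch below follows the five steps above, but for brevity it re-solves the linear systems with A_B at every iteration instead of updating X_B and D_N incrementally, and it omits degeneracy handling and numerical safeguards; the function name and tolerances are my own, and a primal feasible starting basis is assumed to be supplied.

    # Simplified sketch of the primal simplex algorithm (dense linear algebra,
    # quantities recomputed each iteration, no anti-cycling or scaling).
    import numpy as np

    def primal_simplex(A, b, c, B, tol=1e-9, max_iter=1000):
        m, n = A.shape
        B = list(B)                                       # ordered basis (0-based)
        for _ in range(max_iter):
            N = [j for j in range(n) if j not in B]
            x_B = np.linalg.solve(A[:, B], b)             # X_B = A_B^{-1} b
            D_N = c[N] - A[:, N].T @ np.linalg.solve(A[:, B].T, c[B])
            if (D_N >= -tol).all():                       # Step 1: pricing
                x = np.zeros(n); x[B] = x_B
                return x, B                               # B is optimal
            j = N[int(np.argmin(D_N))]                    # entering variable x_j
            y = np.linalg.solve(A[:, B], A[:, j])         # Step 2: FTRAN
            if (y <= tol).all():
                raise ValueError("(P) is unbounded")      # Step 3: ratio test
            ratios = np.full(m, np.inf)
            ratios[y > tol] = x_B[y > tol] / y[y > tol]
            i = int(np.argmin(ratios))                    # leaving position i
            B[i] = j                                      # Steps 4-5: recomputed next pass
        raise RuntimeError("iteration limit reached")

To reproduce the run on slide 12, pass the slack form of that example with the objective negated (this sketch minimizes) and the all-slack starting basis, i.e. (3, 4, 5) in 0-based indexing.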

  10. Primal Simplex Example

  11. The Simplex Algorithm
      Consider the following simple LP:
         Maximize   3x_1 + 2x_2 + 2x_3
         Subject to  x_1        +  x_3 ≤  8
                     x_1 +  x_2        ≤  7
                     x_1 + 2x_2        ≤ 12
                     x_1, x_2, x_3 ≥ 0

  12. The Primal Simplex Algorithm
      Add slacks; initial basis B = (4,5,6):
         Maximize   z = 3x_1 + 2x_2 + 2x_3 + 0x_4 + 0x_5 + 0x_6
         Subject to  x_1        +  x_3 + x_4            =  8
                     x_1 +  x_2            + x_5        =  7
                     x_1 + 2x_2                  + x_6  = 12
                     x_1, x_2, x_3, x_4, x_5, x_6 ≥ 0
      [Figure: the feasible region in (x_1, x_2, x_3)-space. Vertices shown include (0,0,8), (0,6,8), (0,6,0), (2,5,0), (7,0,0) with z = 21, (7,0,1) with z = 23, and the optimal vertex (2,5,6) with z = 28; the starting point has z = 0. Annotations: "x_1 enters, x_5 leaves basis" and D_1 = rate of change of z relative to x_1 = 21/7 = 3.]
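As a cross-check of this example (a sketch only; SciPy's linprog minimizes, so the objective is negated), the optimal value is 28. The optimal vertex is not unique: (2,5,6) and (0,6,8) both attain z = 28.

    # Cross-check of the slide 11/12 example with an off-the-shelf solver.
    import numpy as np
    from scipy.optimize import linprog

    c = np.array([3.0, 2.0, 2.0])            # maximize c^T x  ->  minimize -c^T x
    A_ub = np.array([[1.0, 0.0, 1.0],
                     [1.0, 1.0, 0.0],
                     [1.0, 2.0, 0.0]])
    b_ub = np.array([8.0, 7.0, 12.0])

    res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    print(res.x, -res.fun)                   # optimal value 28; vertex may be (2,5,6) or (0,6,8)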

  13. Dual Simplex Algorithm – Setup
      Simplex algorithms apply to problems with constraints in equality form. We convert (D) to this form by adding the dual slacks d:
         Maximize   b^T π
         Subject to A^T π + d = c        (⇔ A^T π ≤ c)
                    π free, d ≥ 0

  14. Dual Simplex Algorithm – Setup
      Maximize   b^T π
      Subject to A^T π + d = c, π free, d ≥ 0, i.e.
         [ A_B^T   I_B   0   ] [ π   ]   [ c_B ]
         [ A_N^T   0     I_N ] [ d_B ] = [ c_N ]
                               [ d_N ]
      Given a basis B, the corresponding dual basic variables are π and d_N; d_B are the nonbasic variables. The corresponding dual basic solution (Π, D) is determined as follows: D_B = 0 ⇒ Π = A_B^{-T} c_B ⇒ D_N = c_N – A_N^T Π. B is dual feasible if D_N ≥ 0.

  15. Dual Simplex Algorithm – Setup
      (Same setup as on the previous slide.)
      Observation: We may assume that every dual basis has the above form.
      Proof: Assuming that the primal has a basis is equivalent to assuming that rank(A) = m (the number of rows), and this implies that all π variables can be assumed to be basic. ∎
      This observation establishes a 1–1 correspondence between primal and dual bases.
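A small sketch of the dual basic solution (illustrative data, 0-based indices): given a basis B, set D_B = 0, solve A_B^T Π = c_B, compute D_N, and check dual feasibility.

    # Computing (Pi, D) for a chosen basis B and testing D_N >= 0.
    import numpy as np

    A = np.array([[1.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0, 1.0]])
    c = np.array([1.0, 1.0, 3.0, 2.0])
    n = A.shape[1]

    B = [0, 1]
    N = [j for j in range(n) if j not in B]

    Pi  = np.linalg.solve(A[:, B].T, c[B])     # Pi  = A_B^{-T} c_B  (so D_B = 0)
    D_N = c[N] - A[:, N].T @ Pi                # D_N = c_N - A_N^T Pi
    print(Pi, D_N, "dual feasible" if (D_N >= 0).all() else "not dual feasible")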

  16. An Important Fact
      If X and (Π, D) are corresponding primal and dual basic solutions determined by a basis B, then Π^T b = c^T X. Hence, by weak duality, if B is both primal and dual feasible, then X is optimal for (P) and Π is optimal for (D).
      Proof: c^T X = c_B^T X_B (since X_N = 0) = Π^T A_B X_B (since Π = A_B^{-T} c_B) = Π^T b (since A_B X_B = b). ∎

  17. Dual Simplex Algorithm (Lemke, 1954)
      Input: A dual feasible basis B and vectors X_B = A_B^{-1} b and D_N = c_N – A_N^T A_B^{-T} c_B.
      • Step 1: (Pricing) If X_B ≥ 0, stop, B is optimal; else let i = argmin{X_{B_k} : k ∈ {1,…,m}}.
      • Step 2: (BTRAN) Solve A_B^T z = e_i. Compute α_N = –A_N^T z.
      • Step 3: (Ratio test) If α_N ≤ 0, stop, (D) is unbounded; else let j = argmin{D_k / α_k : α_k > 0}.
      • Step 4: (FTRAN) Solve A_B y = A_j.
      • Step 5: (Update) Set B_i = j. Update X_B (using y) and D_N (using α_N).
      Note: d_{B_i} is the entering variable and d_j is the leaving variable. (Expressed in terms of the primal: x_{B_i} is the leaving variable and x_j is the entering variable.)
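Mirroring the primal sketch earlier, the following is a simplified rendering of these five steps (function name and tolerances are my own); a dual feasible starting basis is assumed, and the FTRAN/update step is replaced by recomputing X_B and D_N at the next iteration.

    # Simplified sketch of the dual simplex algorithm (dense linear algebra,
    # no anti-cycling, quantities recomputed rather than updated).
    import numpy as np

    def dual_simplex(A, b, c, B, tol=1e-9, max_iter=1000):
        m, n = A.shape
        B = list(B)                                        # ordered basis (0-based)
        for _ in range(max_iter):
            N = [j for j in range(n) if j not in B]
            x_B = np.linalg.solve(A[:, B], b)              # X_B = A_B^{-1} b
            Pi = np.linalg.solve(A[:, B].T, c[B])          # Pi = A_B^{-T} c_B
            D_N = c[N] - A[:, N].T @ Pi                    # reduced costs, >= 0
            if (x_B >= -tol).all():                        # Step 1: pricing
                x = np.zeros(n); x[B] = x_B
                return x, B                                # B is optimal
            i = int(np.argmin(x_B))                        # leaving position i
            z = np.linalg.solve(A[:, B].T, np.eye(m)[:, i])    # Step 2: BTRAN
            alpha_N = -A[:, N].T @ z
            if (alpha_N <= tol).all():                     # Step 3: ratio test
                raise ValueError("(D) unbounded, hence (P) infeasible")
            ratios = np.full(len(N), np.inf)
            pos = alpha_N > tol
            ratios[pos] = D_N[pos] / alpha_N[pos]
            j = N[int(np.argmin(ratios))]                  # entering variable x_j
            B[i] = j                                       # Step 5: update basis
        raise RuntimeError("iteration limit reached")

A production dual simplex would instead maintain a factorization of A_B and update X_B and D_N as in Step 5; recomputation keeps the sketch short at the cost of extra solves per iteration.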

  18. Simplex Algorithms (primal and dual side by side)
      Primal simplex
      Input: A primal feasible basis B and vectors X_B = A_B^{-1} b and D_N = c_N – A_N^T A_B^{-T} c_B.
      • Step 1: (Pricing) If D_N ≥ 0, stop, B is optimal; else let j = argmin{D_k : k ∈ N}.
      • Step 2: (FTRAN) Solve A_B y = A_j.
      • Step 3: (Ratio test) If y ≤ 0, stop, (P) is unbounded; else let i = argmin{X_{B_k} / y_k : y_k > 0}.
      • Step 4: (BTRAN) Solve A_B^T z = e_i.
      • Step 5: (Update) Compute α_N = –A_N^T z. Let B_i = j. Update X_B (using y) and D_N (using α_N).
      Dual simplex
      Input: A dual feasible basis B and vectors X_B = A_B^{-1} b and D_N = c_N – A_N^T A_B^{-T} c_B.
      • Step 1: (Pricing) If X_B ≥ 0, stop, B is optimal; else let i = argmin{X_{B_k} : k ∈ {1,…,m}}.
      • Step 2: (BTRAN) Solve A_B^T z = e_i. Compute α_N = –A_N^T z.
      • Step 3: (Ratio test) If α_N ≤ 0, stop, (D) is unbounded; else let j = argmin{D_k / α_k : α_k > 0}.
      • Step 4: (FTRAN) Solve A_B y = A_j.
      • Step 5: (Update) Set B_i = j. Update X_B (using y) and D_N (using α_N).

  19. Correctness: Dual Simplex Algorithm
      • Termination criteria
        – Optimality (DONE – by “An Important Fact”!)
        – Unboundedness
      • Other issues
        – Finding a starting dual feasible basis, or showing that no feasible solution exists
        – Input conditions are preserved (i.e., B is still a feasible basis)
        – Finiteness

  20. Summary: What we have done and what we have to do
      • Done
        – Defined primal and dual linear programs
        – Proved the weak duality theorem
        – Introduced the concept of a basis
        – Stated the primal and dual simplex algorithms
      • To do (for the dual simplex algorithm)
        – Show correctness
        – Describe key implementation ideas
        – Motivation

  21. Dual Unboundedness (⇒ primal infeasible)
      We carry out a key calculation. As noted earlier, in an iteration of the dual simplex algorithm applied to
         Maximize   b^T π
         Subject to A^T π + d = c
                    π free, d ≥ 0
      d_{B_i} enters the basis and d_j leaves the basis.
      The idea: Currently d_{B_i} = 0, and X_{B_i} < 0 has motivated us to increase d_{B_i} to θ > 0, leaving the other components of d_B at 0 (the object being to increase the objective). Letting (d, π) be the corresponding dual solution as a function of θ, we obtain
         d_B = θ e_i,   d_N = D_N – θ α_N,   π = Π – θ z,
      where α_N and z are as computed in the algorithm.

  22. (Dual Unboundedness – cont.)
      • Letting (d, π) be the corresponding dual solution as a function of θ, and using α_N and z from the dual algorithm,
           d_B = θ e_i,   d_N = D_N – θ α_N,   π = Π – θ z.
      • Using θ > 0 and X_{B_i} < 0 yields
           new_objective = π^T b = (Π – θ z)^T b = Π^T b – θ X_{B_i}   (since z^T b = z^T A_B X_B = e_i^T X_B = X_{B_i})
                         = old_objective – θ X_{B_i} > old_objective.
      • Conclusion 1: If α_N ≤ 0, then d_N ≥ 0 for all θ > 0 ⇒ (D) is unbounded.
      • Conclusion 2: If α_N is not ≤ 0, then d_N ≥ 0 ⇒ θ ≤ D_j / α_j for all j with α_j > 0 ⇒ θ_max = min{D_j / α_j : α_j > 0}.
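A tiny sketch of this step-length computation (illustrative numbers, not from the slides): θ_max = min{D_j / α_j : α_j > 0}, and the dual objective increases by θ·(–X_{B_i}).

    # Step length and objective increase in one dual simplex iteration.
    import numpy as np

    D_N     = np.array([2.0, 0.5, 3.0])      # current nonbasic reduced costs (>= 0)
    alpha_N = np.array([4.0, -1.0, 1.5])     # as computed in the dual simplex step
    X_Bi    = -2.0                           # the (negative) value of the leaving basic variable

    pos = alpha_N > 0
    theta_max = (D_N[pos] / alpha_N[pos]).min()    # = 0.5 here
    increase  = -theta_max * X_Bi                  # objective goes up by theta * (-X_Bi) = 1.0
    print(theta_max, increase)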

  23. (Dual Unboundedness – cont.)
      • Finiteness: If D_N > 0 for all dual feasible bases B, then the dual simplex algorithm is finite: the dual objective strictly increases at each iteration ⇒ no basis repeats, and there are only a finite number of bases.
      • There are various approaches to guaranteeing finiteness in general:
        – Bland’s rules: purely combinatorial, bad in practice.
        – CPLEX: a perturbation is introduced to guarantee D_N > 0.

  24. Computational History of Linear Programming
