  1. Maximin and Maximal Solutions for Linear Programming Problems with Possibilistic Uncertainty
Erik Quaeghebeur, Nathan Huntley, Keivan Shariatmadar, Gert de Cooman
Ghent University, SYSTeMS Research Group, Belgium

  2. Linear programming problems under uncertainty
The starting point is a standard linear program with precisely known parameters:
    maximize c^T x  subject to a x ≤ b, x ≥ 0.
Variables: x, the optimization vector.
Parameters: c, the objective function coefficient vector; a, the constraint coefficient matrix; b, the constraint coefficient vector.

  3. Linear programming problems under uncertainty
    maximize c^T x  subject to Ax ≤ B, x ≥ 0,
with a given uncertainty model for (A, B).
Variables: x, the optimization vector.
Parameters: c, the objective function coefficient vector; A, the constraint coefficient matrix with uncertain components; B, the constraint coefficient vector with uncertain components. Independence of the components of A and B is assumed.
Give meaning to the problem by reformulating it as a decision problem with utility functions
    G_x := c^T x I_{Ax ≤ B} + L I_{Ax ≰ B} = L + (c^T x − L) I_{Ax ≤ B},
where L is a penalty value with L < c^T x for 'feasible' x.
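To make the reformulation concrete, here is a minimal Python sketch (my own illustration, not part of the slides) that evaluates the utility G_x for one realization (A, b) of the uncertain parameters; the helper name `utility` and the use of NumPy are assumptions.

    import numpy as np

    def utility(x, c, A, b, L=0.0):
        """G_x = L + (c^T x - L) * I_{Ax <= b} for one realization (A, b) of the parameters."""
        feasible = np.all(A @ x <= b)          # indicator I_{Ax <= b}
        return L + (c @ x - L) * feasible      # falls back to the penalty L when infeasible

    # Example with arbitrary numbers (the slides' running example only appears later):
    c = np.array([2.0, 3.0])
    A = np.array([[1.0, 1.0]])
    b = np.array([1.0])
    print(utility(np.array([0.5, 0.5]), c, A, b))   # 2.5: the constraint 0.5 + 0.5 <= 1 holds
    print(utility(np.array([1.0, 1.0]), c, A, b))   # 0.0: infeasible, so only the penalty L = 0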

  7. Running example
    maximize 2 x_1 + 3 x_2
    subject to 1 x_1 + 3 x_2 ≤ 2,
               1 x_1 + 1 x_2 ≤ B_2,
              −3 x_1 − 3 x_2 ≤ −1,
               x_1 ≥ 0, x_2 ≥ 0
Equivalently: maximize c^T x := 2 x_1 + 3 x_2 subject to x ⊳ B_2, where x ⊳ b is shorthand for x satisfying all the constraints with the uncertain right-hand side set to b.
[Figure: the feasible region in the (x_1, x_2)-plane, with the vertex (1/2, 1/2) marked.]
Penalty value choice: L := 0 in the running example.
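As a sanity check on the running example, the following sketch (my own, using scipy.optimize.linprog, which minimizes, so the objective is negated) solves the linear program for a fixed value of B_2:

    import numpy as np
    from scipy.optimize import linprog

    c = np.array([2.0, 3.0])                                  # objective of the running example
    A_ub = np.array([[1.0, 3.0], [1.0, 1.0], [-3.0, -3.0]])   # rows of the three constraints

    def solve_running_example(b2):
        """Maximize 2 x1 + 3 x2 subject to x <| b2, i.e. the three constraints with B_2 = b2."""
        b_ub = np.array([2.0, b2, -1.0])
        res = linprog(-c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None), (0, None)], method="highs")
        return res.x, -res.fun

    print(solve_running_example(1.0))    # optimum (1/2, 1/2) with value 5/2, the marked vertex
    print(solve_running_example(4/3))    # optimum (1, 1/3) with value 3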

  9. Probabilistic case (probability mass function)
Maximizing expected utility: P(G_x) = L + (c^T x − L) P(Ax ≤ B).
The problem
    maximize c^T x  subject to Ax ≤ B, x ≥ 0,  with given pmf p,
becomes
    maximize P(G_x)  subject to x ≥ 0.
Running example, with pmf p_{B_2} on the values {2/3, 1, 4/3}:
    maximize  P(B_2 ≥ b) · ( maximize c^T x  subject to x ⊳ b )
    subject to  b ∈ {2/3, 1, 4/3}
[Figure: the feasible regions for B_2 ∈ {2/3, 1, 4/3} in the (x_1, x_2)-plane, with the vertices (1/2, 1/2) and (1, 1/3) marked, and the probability mass function p_{B_2}.]
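A hedged sketch of this reduction for the running example (my own illustration): enumerate the support points b of the pmf, solve the inner linear program, and weight its optimal value by P(B_2 ≥ b). The probability masses below are assumptions; the slide only fixes the support {2/3, 1, 4/3} and shows the masses graphically.

    import numpy as np
    from scipy.optimize import linprog

    c = np.array([2.0, 3.0])
    A_ub = np.array([[1.0, 3.0], [1.0, 1.0], [-3.0, -3.0]])

    def lp_value(b2):
        """Inner problem: max c^T x subject to x <| b2, x >= 0."""
        res = linprog(-c, A_ub=A_ub, b_ub=[2.0, b2, -1.0],
                      bounds=[(0, None), (0, None)], method="highs")
        return res.x, -res.fun

    p_b2 = {2/3: 0.2, 1.0: 0.6, 4/3: 0.2}        # hypothetical masses on the given support

    best_expected, best_x = -np.inf, None
    for b in sorted(p_b2):
        x_opt, value = lp_value(b)
        expected = sum(p for v, p in p_b2.items() if v >= b - 1e-12) * value   # P(B_2 >= b) * c^T x
        if expected > best_expected:
            best_expected, best_x = expected, x_opt
    print(best_expected, best_x)                 # maximizes P(G_x), with L = 0, over the candidates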

  12. Optimality criteria for lower & upper previsions
Generalizations of maximizing expected utility for a lower prevision \underline{P} and an upper prevision \overline{P}:
Maximinity: those x ≥ 0 are optimal that maximize the lower expected utility
    \underline{P}(G_x) = L + (c^T x − L) \underline{P}(Ax ≤ B).
Maximality: those x ≥ 0 are optimal that are undominated by all other vectors z ≥ 0, in the sense that
    \overline{P}(G_x − G_z) = \overline{P}( (c^T x − L) I_{Ax ≤ B} − (c^T z − L) I_{Az ≤ B} ) ≥ 0.
Dominance: x ≥ 0 is undominated by z ≥ 0 in a pointwise comparison of utility functions if G_z = G_x or max(G_x − G_z) > 0, or, equivalently,
    c^T x ≥ max_{(Ax ≤ B) = (Az ≤ B)} c^T z   and   c^T x > max_{(Ax ≤ B) ⊂ (Az ≤ B)} c^T z.
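For intuition, here is a small sketch of the pointwise-dominance check from the last item (my own illustration), specialized to the running example, where only B_2 is uncertain and L := 0; the three scenario values of B_2 are simply the ones used elsewhere on the slides.

    import numpy as np

    c = np.array([2.0, 3.0])
    A = np.array([[1.0, 3.0], [1.0, 1.0], [-3.0, -3.0]])

    def G(x, b2, L=0.0):
        """Utility of x in the running example when B_2 = b2."""
        return L + (c @ x - L) * np.all(A @ x <= [2.0, b2, -1.0])

    def undominated_pointwise(x, z, scenarios):
        """x is undominated by z if G_z equals G_x in every scenario or max(G_x - G_z) > 0."""
        diffs = np.array([G(x, b2) - G(z, b2) for b2 in scenarios])
        return bool(np.all(diffs == 0) or diffs.max() > 0)

    scenarios = [2/3, 1.0, 4/3]
    x, z = np.array([0.5, 0.5]), np.array([1.0, 1/3])
    print(undominated_pointwise(x, z, scenarios))   # True: x does better when B_2 = 1
    print(undominated_pointwise(z, x, scenarios))   # True: z does better when B_2 = 4/3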

  14. Maximin solutions in the interval case
The problem
    maximize c^T x  subject to Ax ≤ B, x ≥ 0,  with \underline{a} ≤ A ≤ \overline{a} and \underline{b} ≤ B ≤ \overline{b},
reduces to the worst-case linear program
    maximize c^T x  subject to \overline{a} x ≤ \underline{b}, x ≥ 0.
Running example, with B_2 ∈ [2/3, 4/3]:
    maximize c^T x := 2 x_1 + 3 x_2  subject to x ⊳ B_2
reduces to
    maximize c^T x  subject to x ⊳ 2/3.
[Figure: the feasible regions for B_2 = 2/3 and B_2 = 4/3 in the (x_1, x_2)-plane, with the vertex (1, 1/3) marked.]
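A minimal sketch of this worst-case reduction (my own illustration): build the single linear program from the upper constraint matrix and the lower right-hand side, then solve it. In the running example only B_2 is uncertain, so the lower right-hand side uses 2/3.

    import numpy as np
    from scipy.optimize import linprog

    def maximin_interval(c, a_upper, b_lower):
        """Maximin solution of the interval case: max c^T x s.t. a_upper x <= b_lower, x >= 0."""
        res = linprog(-np.asarray(c), A_ub=a_upper, b_ub=b_lower,
                      bounds=[(0, None)] * len(c), method="highs")
        return res.x, -res.fun

    a_upper = np.array([[1.0, 3.0], [1.0, 1.0], [-3.0, -3.0]])   # no uncertainty in A here
    b_lower = np.array([2.0, 2/3, -1.0])                         # B_2 replaced by its lower bound
    print(maximin_interval([2.0, 3.0], a_upper, b_lower))        # optimum (0, 2/3) with value 2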

  16. Maximal solutions in the interval case
The problem
    maximize c^T x  subject to Ax ≤ B, x ≥ 0,  with \underline{a} ≤ A ≤ \overline{a} and \underline{b} ≤ B ≤ \overline{b},
reduces to
    find all x  subject to \underline{a} x ≤ \overline{b}, x ≥ 0, c^T x ≥ max_{\overline{a} z ≤ \underline{b}} c^T z  (dominance).
Running example, with B_2 ∈ [2/3, 4/3]:
    maximize c^T x := 2 x_1 + 3 x_2  subject to x ⊳ B_2
reduces to
    find all x  subject to x ⊳ 4/3,
                c^T x ≥ max_{z ⊳ 2/3} c^T z,
                c^T x ≥ max_{1 z_1 + 1 z_2 ≤ 1 x_1 + 1 x_2} c^T z  (dominance).
[Figure: the feasible regions for B_2 = 2/3 and B_2 = 4/3 in the (x_1, x_2)-plane, with the vertex (1, 1/3) marked.]
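The corresponding maximality test can be sketched as follows (my own illustration, for the running example only): a point x in the relaxed region x ⊳ 4/3 is kept when its objective value reaches the threshold max_{z ⊳ 2/3} c^T z and it survives the x-dependent dominance comparison, here read as letting z range over the feasible set with right-hand side x_1 + x_2.

    import numpy as np
    from scipy.optimize import linprog

    c = np.array([2.0, 3.0])
    A = np.array([[1.0, 3.0], [1.0, 1.0], [-3.0, -3.0]])

    def lp_max(b2):
        """max c^T z subject to z <| b2, z >= 0 (running example constraints)."""
        res = linprog(-c, A_ub=A, b_ub=[2.0, b2, -1.0],
                      bounds=[(0, None), (0, None)], method="highs")
        return -res.fun

    threshold = lp_max(2/3)                        # maximin value over the tightest region

    def is_maximal_interval(x):
        x = np.asarray(x, dtype=float)
        in_relaxed_region = bool(np.all(A @ x <= [2.0, 4/3, -1.0]))     # x <| 4/3
        above_threshold = c @ x >= threshold - 1e-9                     # against z <| 2/3
        undominated = c @ x >= lp_max(x[0] + x[1]) - 1e-9               # against z with z1 + z2 <= x1 + x2
        return bool(in_relaxed_region and above_threshold and undominated)

    print(is_maximal_interval([1.0, 1/3]))   # True: the best point of the most relaxed region
    print(is_maximal_interval([1/3, 0.0]))   # False: objective 2/3 stays below the threshold 2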

  17. Maximin solutions in the possibilistic case
The problem
    maximize c^T x  subject to Ax ≤ B, x ≥ 0,  with a given possibility distribution π,
reduces to
    maximize  L + (1 − t) · ( maximize c^T x − L  subject to a_t x ≤ b_t, x ≥ 0 )
    subject to  0 ≤ t < 1.

  18. Maximin solutions in the possibilistic case
Running example:
    maximize c^T x := 2 x_1 + 3 x_2  subject to x ⊳ B_2,  with possibility distribution π_{B_2} on [2/3, 4/3],
reduces to
    maximize  (1 − t) · ( maximize c^T x  subject to x ⊳ b_{2,t} )
    subject to  t ∈ {0, 1/5}.
[Figure: the feasible regions in the (x_1, x_2)-plane, with the vertices (1/2, 1/2) and (1, 1/3) marked, and the possibility distribution π_{B_2}, which takes the levels 1/5 and 1 on [2/3, 4/3].]
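A sketch of this possibilistic maximin reduction (my own illustration). The cut endpoints b_{2,t} are an assumption: I read π_{B_2} off the figure as a step function with levels 1/5 and 1, so the cut's lower endpoint jumps from 2/3 to 1 at t = 1/5.

    import numpy as np
    from scipy.optimize import linprog

    c = np.array([2.0, 3.0])
    A = np.array([[1.0, 3.0], [1.0, 1.0], [-3.0, -3.0]])

    def lp_max(b2):
        """Inner problem: max c^T x subject to x <| b2, x >= 0."""
        res = linprog(-c, A_ub=A, b_ub=[2.0, b2, -1.0],
                      bounds=[(0, None), (0, None)], method="highs")
        return -res.fun

    def cut_lower_endpoint(t):
        """Hypothetical lower endpoint b_{2,t} of the strict t-cut of pi_{B_2}."""
        return 2/3 if t < 1/5 else 1.0            # assumed step shape with levels 1/5 and 1

    # Outer problem (L = 0 in the running example): maximize (1 - t) times the inner LP value.
    candidates = {t: (1 - t) * lp_max(cut_lower_endpoint(t)) for t in (0.0, 1/5)}
    print(candidates)                             # both candidate levels give the value 2 here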

  20. Maximal solutions in the possibilistic case
◮ No analytical reduction to a standard optimization problem known.
◮ Numerical approach:
  ◮ Make a grid in the solution set of the corresponding interval case.
  ◮ Compare grid points and remove the dominated ones.
◮ This is computationally expensive.
Running example (numerical):
[Figure: the numerically computed set of maximal solutions in the (x_1, x_2)-plane, near the vertex (1, 1/3).]
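The grid-and-filter idea might look as follows (my own sketch, not the authors' code). It assumes that the upper prevision of a gamble on B_2 can be approximated by a discrete Choquet integral with respect to the possibility measure, it uses a hypothetical step-shaped π_{B_2} with levels 1/5 and 1, and the grid is taken over the relaxed region x ⊳ 4/3 rather than the exact interval-case solution set.

    import itertools
    import numpy as np

    c = np.array([2.0, 3.0])
    b_grid = np.linspace(2/3, 4/3, 201)                      # discretized values of B_2
    poss = np.where(b_grid >= 1.0, 1.0, 0.2)                 # hypothetical pi_{B_2}: levels 1/5 and 1

    def utility_profile(x):
        """G_x as a function of B_2 over b_grid, with L = 0 (running example)."""
        fixed_ok = (x[0] + 3*x[1] <= 2.0) and (x[0] + x[1] >= 1/3)
        return (c @ x) * float(fixed_ok) * (x[0] + x[1] <= b_grid + 1e-12)

    def upper_prevision(gamble):
        """Discrete Choquet integral of `gamble` w.r.t. the possibility measure induced by poss."""
        order = np.argsort(gamble)
        g, p = gamble[order], poss[order]
        attained = np.maximum.accumulate(p[::-1])[::-1]      # possibility of {gamble >= g_k}
        return g[0] * attained[0] + np.sum(np.diff(g) * attained[1:])

    # Step 1: a grid over (a superset of) the interval-case solution set.
    points = [np.array(p) for p in itertools.product(np.linspace(0.0, 4/3, 21), repeat=2)]
    points = [x for x in points
              if x[0] + 3*x[1] <= 2.0 and 1/3 <= x[0] + x[1] <= 4/3]
    profiles = [utility_profile(x) for x in points]

    # Step 2: remove dominated points; x stays if the upper prevision of G_x - G_z is >= 0 for all z.
    maximal = [x for x, gx in zip(points, profiles)
               if all(upper_prevision(gx - gz) >= -1e-9 for gz in profiles)]
    print(len(maximal), "of", len(points), "grid points survive the dominance filter")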

  21. Maximal solutions in the possibilistic case
Running example (analytical):
    maximize c^T x := 2 x_1 + 3 x_2  subject to x ⊳ B_2,  with possibility distribution π_{B_2},
reduces to
    find all x such that
    either  x ⊳ 1 and not x ⊳ 2/3,
            but not c^T x < max_{1 z_1 + 1 z_2 ≤ 1 x_1 + 1 x_2} c^T z,
    or      x ⊳ 4/3 and not x ⊳ 1,
            with c^T x ≥ max_{1 z_1 + 1 z_2 ≤ 1 x_1 + 1 x_2} c^T z  (dominance),
            but not c^T x < (10/9) max_{z ⊳ 1} c^T z  (cf. the green-filled dot).
[Figure: the set of maximal solutions in the (x_1, x_2)-plane, with the points (5/9, 5/9) and (1, 1/3) marked, and the possibility distribution π_{B_2} with the level 9/10 indicated.]

  22. Conclusions
◮ The problem is very hard in general (even without dominance).
◮ But some specific cases can be tackled, as shown by our results.
◮ Extension to problems with uncertainty in the goal function . . .
◮ Using different utility functions . . .
