

Some Observations on Boolean Logic and Optimization
John Hooker, Carnegie Mellon University, January 2009 (Slide 1)

Outline:
- Logic and cutting planes
- Logic of 0-1 inequalities
- Logic and linear programming
- Inference duality


Slide 32: Logic and cutting planes
Theorem (JNH). The prime implicates of a clause set define an integral polytope if and only if all maximal monotone subsets of the prime implicates define an integral polytope. Generalized by Guenin, and by Nobili & Sassano.

Slide 33: Logic of 0-1 Inequalities

Slides 34-37: Logic of 0-1 inequalities
0-1 inequalities can be viewed as logical propositions. Can the resolution algorithm be generalized to 0-1 inequalities? Yes. This results in a logical analog of Chvátal's theorem.
Theorem (JNH). Classical resolution + diagonal summation generates all 0-1 prime implicates (up to logical equivalence).

Slides 38-39: Logic of 0-1 inequalities
Diagonal summation. Each of the inequalities below is implied by an inequality in the set to which 0-1 resolution is applied:
  x1 + 5x2 + 3x3 + x4 ≥ 4
  2x1 + 4x2 + 3x3 + x4 ≥ 4
  2x1 + 5x2 + 2x3 + x4 ≥ 4
  2x1 + 5x2 + 3x3 + 0x4 ≥ 4
Their diagonal sum is
  2x1 + 5x2 + 3x3 + x4 ≥ 5
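As a sanity check on the diagonal-summation example above (coefficients reconstructed from the slide), a brute-force sketch can confirm that every 0-1 point satisfying the four premises also satisfies the diagonal sum, i.e. the sum is a valid implicate:

```python
from itertools import product

# The four 0-1 inequalities as (coefficient vector, right-hand side);
# the numbers are a reconstruction of the slide's example.
premises = [
    ((1, 5, 3, 1), 4),
    ((2, 4, 3, 1), 4),
    ((2, 5, 2, 1), 4),
    ((2, 5, 3, 0), 4),
]
diagonal_sum = ((2, 5, 3, 1), 5)

def satisfies(x, ineq):
    coeffs, rhs = ineq
    return sum(c * v for c, v in zip(coeffs, x)) >= rhs

# Check all 16 points of {0,1}^4: whenever the premises hold,
# so does the diagonal sum.
implied = all(
    satisfies(x, diagonal_sum)
    for x in product((0, 1), repeat=4)
    if all(satisfies(x, p) for p in premises)
)
print(implied)  # True
```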

Slide 40: Logic and Linear Programming

Slides 41-44: Logic and linear programming
Theorem: A renamable Horn set of clauses is satisfiable if and only if it has no unit refutation.
- Horn = at most one positive literal per clause.
- Renamable Horn = Horn after complementing some variables.
- Unit refutation = resolution proof of unsatisfiability in which at least one parent of each resolvent is a unit clause.
Example (an unsatisfiable Horn set):
  x1
  ¬x1 ∨ x2
  ¬x1 ∨ x3
  ¬x1 ∨ ¬x2 ∨ ¬x3
Unit resolution on x1 yields x2, x3, and ¬x2 ∨ ¬x3; unit resolution on x2 and then x3 yields the empty clause, completing a unit refutation.
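A minimal sketch of unit resolution on a small unsatisfiable Horn set in the spirit of the slide's example (the clause signs are assumptions, since complement bars did not survive extraction):

```python
# Literals as signed integers: k means xk, -k means its complement.
# Clause signs reconstructed for illustration.
clauses = [
    frozenset({1}),            # x1
    frozenset({-1, 2}),        # ¬x1 ∨ x2
    frozenset({-1, 3}),        # ¬x1 ∨ x3
    frozenset({-1, -2, -3}),   # ¬x1 ∨ ¬x2 ∨ ¬x3
]

def unit_refutation(clauses):
    """Repeatedly resolve a unit clause against the rest; return True if
    the empty clause is derived, i.e. a unit refutation exists."""
    clauses = set(clauses)
    while True:
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            return False            # no unit clause left to resolve on
        (lit,) = unit
        new = set()
        for c in clauses:
            if lit in c:
                continue            # clause satisfied by the unit; drop it
            reduced = c - {-lit}    # resolve away the complementary literal
            if not reduced:
                return True         # empty clause derived: refutation
            new.add(reduced)
        clauses = new

refuted = unit_refutation(clauses)
print(refuted)  # True
```

Each resolvent has a unit parent, so this loop is exactly the restricted form of resolution the theorem refers to.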

Slide 45: Logic and linear programming
We don't know a necessary and sufficient condition for solvability by unit refutation, but we can identify sufficient conditions by generalizing Horn sets, for example to extended Horn sets, which rely on a rounding property of linear programming.

Slides 46-50: Logic and linear programming
Theorem: A satisfiable Horn set can be solved by rounding down a solution of the linear programming relaxation.
Horn set:            LP relaxation:
  x1                   x1 ≥ 1
  ¬x1 ∨ ¬x2 ∨ x3       (1 − x1) + (1 − x2) + x3 ≥ 1
  ¬x2 ∨ ¬x3            (1 − x2) + (1 − x3) ≥ 1
  ¬x1 ∨ x2 ∨ ¬x3       (1 − x1) + x2 + (1 − x3) ≥ 1
                       0 ≤ xj ≤ 1
Solution of the relaxation: (x1, x2, x3) = (1, 1/2, 1/2).
Round down: (x1, x2, x3) = (1, 0, 0), which satisfies the Horn set.
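The rounding step can be checked directly. The sketch below takes a small Horn set and the fractional point (1, 1/2, 1/2) from the slide's example (clause signs are a reconstruction), verifies the point satisfies the LP relaxation of every clause, and confirms that rounding every variable down yields a satisfying 0-1 assignment:

```python
# Each clause is a list of (variable index, is_positive) literals.
# Signs reconstructed for illustration.
clauses = [
    [(1, True)],                            # x1
    [(1, False), (2, False), (3, True)],    # ¬x1 ∨ ¬x2 ∨ x3
    [(2, False), (3, False)],               # ¬x2 ∨ ¬x3
    [(1, False), (2, True), (3, False)],    # ¬x1 ∨ x2 ∨ ¬x3
]

def lp_value(clause, x):
    # LP relaxation of a clause: positive literals contribute x_j,
    # negative ones (1 - x_j); the total must be >= 1.
    return sum(x[j] if pos else 1 - x[j] for j, pos in clause)

def sat(clause, x):
    return any(x[j] == 1 if pos else x[j] == 0 for j, pos in clause)

x_frac = {1: 1.0, 2: 0.5, 3: 0.5}           # a feasible LP point
assert all(lp_value(c, x_frac) >= 1 for c in clauses)

# Round every variable DOWN; for a Horn set this preserves feasibility.
x_round = {j: int(v) for j, v in x_frac.items()}
print(x_round, all(sat(c, x_round) for c in clauses))
```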

Slide 51: Logic and linear programming
To generalize this, we use the following:
Theorem (Chandrasekaran): Suppose Ax ≥ b has integral components and T is a nonsingular matrix such that:
- T and T⁻¹ are integral
- Each row of T⁻¹ contains at most one negative entry, namely −1
- Each row of AT⁻¹ contains at most one negative entry, namely −1
Then if x solves Ax ≥ b, so does T⁻¹⌈Tx⌉.

Slide 52: Logic and linear programming
A clause has the extended star-chain property if it corresponds to a set of edge-disjoint flows into the root of an arborescence and a flow on one additional chain.
Example clause: x1 ∨ x3 ∨ x4 ∨ x5 ∨ x6 ∨ x7.
[Figure: arborescence on x1, ..., x7]

Slide 53: Logic and linear programming
A clause set is extended Horn if there is an arborescence for which every clause in the set has the extended star-chain property.
Example clause: x1 ∨ x3 ∨ x4 ∨ x5 ∨ x6 ∨ x7.
[Figure: arborescence on x1, ..., x7]

Slide 54: Logic and linear programming
Theorem (Chandru and JNH). A satisfiable extended Horn clause set can be solved by rounding a solution of the LP relaxation, rounding some variables up and others down according to their position in the arborescence.
[Figure: the clause x1 ∨ x3 ∨ x4 ∨ x5 ∨ x6 ∨ x7 on the arborescence, with up/down rounding directions marked at the nodes]

Slide 55: Logic and linear programming
Corollary. A satisfiable extended Horn clause set can be solved by assigning 0-1 values according to position in the arborescence.
[Figure: the arborescence with values 0/1 at the nodes]

Slide 58: Logic and linear programming
Theorem (Chandru and JNH). A renamable extended Horn clause set is satisfiable if and only if it has no unit refutation.
Theorem (Schlipf, Annexstein, Franco & Swaminathan). These results hold when the incoming chains are not edge disjoint.
Corollary (Schlipf, Annexstein, Franco & Swaminathan). A one-step lookahead algorithm solves a satisfiable extended Horn problem without knowledge of the arborescence.

Slide 59: Inference Duality

Slides 60-61: Inference duality
Consider an optimization problem:
  min f(x)
  subject to a constraint set 𝒮, with x ∈ D (the variable domain).
An inference dual is:
  max v
  subject to 𝒮 ⊢P (f(x) ≥ v), P ∈ 𝒫, v ∈ ℝ
that is, there is a proof P of f(x) ≥ v from the premises in 𝒮, where 𝒫 is a family of admissible proofs.

Slides 62-64: Inference duality
Linear programming:
  min cx subject to Ax ≥ b, x ≥ 0.
Let (Ax ≥ b) ⊢ (cx ≥ v) when uAx ≥ ub dominates cx ≥ v for some u ≥ 0, where dominates means uA ≤ c and ub ≥ v.
The inference dual is:
  max v subject to (Ax ≥ b) ⊢P (cx ≥ v), P ∈ 𝒫, v ∈ ℝ.
This becomes the classical LP dual. It is a strong dual because the inference method is complete (Farkas Lemma).
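The inference reading of LP duality is easy to illustrate numerically: a multiplier vector u ≥ 0 with uA ≤ c and ub ≥ v is itself a proof that Ax ≥ b, x ≥ 0 entails cx ≥ v. The data below are an assumed toy instance, not from the slides:

```python
# Toy LP data (assumed): min cx s.t. Ax >= b, x >= 0.
A = [[1, 1],
     [1, 0]]
b = [4, 1]
c = [2, 3]
u = [2, 0]          # candidate dual multipliers
v = 8               # claimed lower bound on the objective

# u certifies the bound iff uA <= c componentwise and ub >= v.
uA = [sum(u[i] * A[i][j] for i in range(2)) for j in range(2)]
ub = sum(u[i] * b[i] for i in range(2))
certifies = all(uA[j] <= c[j] for j in range(2)) and ub >= v
print(certifies)    # u is a valid proof that cx >= 8

# Spot-check: a few feasible points all respect the inferred bound.
feasible = [(4, 0), (1, 3), (2, 2.5)]
print(all(2 * x1 + 3 * x2 >= v for x1, x2 in feasible))
```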

Slides 65-66: Inference duality
General inequality constraints:
  min f(x) subject to g(x) ≥ 0, x ∈ S.
Let (g(x) ≥ 0) ⊢ (f(x) ≥ v) when ug(x) ≥ 0 implies f(x) ≥ v for some u ≥ 0, where implies means that all x ∈ S satisfying ug(x) ≥ 0 satisfy f(x) ≥ v.
The inference dual is:
  max v subject to (g(x) ≥ 0) ⊢P (f(x) ≥ v), P ∈ 𝒫, v ∈ ℝ.
This becomes the surrogate dual.

Slides 67-68: Inference duality
General inequality constraints again:
  min f(x) subject to g(x) ≥ 0, x ∈ S.
Let (g(x) ≥ 0) ⊢ (f(x) ≥ v) when ug(x) ≥ 0 dominates f(x) ≥ v for some u ≥ 0, that is, f(x) − ug(x) ≥ v for all x ∈ S.
The inference dual is:
  max v subject to (g(x) ≥ 0) ⊢P (f(x) ≥ v), P ∈ 𝒫, v ∈ ℝ.
This becomes the Lagrangean dual.
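The Lagrangean dual can be read the same way: each u ≥ 0 proves the bound v = min over x in S of f(x) − u·g(x), and the dual maximizes the provable bound. A brute-force sketch on an assumed toy discrete problem:

```python
# Toy instance (assumed): min f(x) = x^2 over the finite set S,
# subject to g(x) = x - 1 >= 0.
S = [-2, -1, 0, 1, 2, 3]
f = lambda x: x * x
g = lambda x: x - 1

def theta(u):
    # Best bound provable with multiplier u: f(x) - u*g(x) >= theta(u)
    # holds for all x in S, hence for all feasible x.
    return min(f(x) - u * g(x) for x in S)

# Dual: maximize the provable bound over a grid of multipliers.
best_u = max((u / 10 for u in range(51)), key=theta)
dual_bound = theta(best_u)

primal_opt = min(f(x) for x in S if g(x) >= 0)
print(dual_bound, primal_opt)  # 1.0 1
```

On this instance the dual bound matches the primal optimum; in general the Lagrangean dual only guarantees a lower bound, since the proof family is incomplete.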

Slides 69-71: Inference duality
Integer linear programming:
  min cx subject to Ax ≥ b, x ∈ S.
Let (Ax ≥ b) ⊢ (cx ≥ v) when h(Ax) ≥ h(b) dominates cx ≥ v for some subadditive and homogeneous function h.
The inference dual is:
  max v subject to (Ax ≥ b) ⊢P (cx ≥ v), P ∈ 𝒫, v ∈ ℝ.
This becomes the subadditive dual. It is a strong dual because the inference method is complete, due to Chvátal's theorem. An appropriate Chvátal function is subadditive and can be found by Gomory's cutting plane method.

Slides 72-76: Inference duality
Inference duality permits a generalization of Benders decomposition.
- In classical Benders, a Benders cut is a linear combination of the subproblem constraints using dual multipliers.
- The Benders cut rules out solutions of the master problem for which the proof of optimality in the subproblem is still valid.
- For general optimization, a Benders cut does the same, but the proof of optimality is a solution of the general inference dual.
- This has led to orders-of-magnitude speedups in the solution of scheduling and other problems by logic-based Benders decomposition.

Slide 77: Constraint Programming

Slides 78-81: Constraint programming
Constraint programming uses logical inference to reduce backtracking. Inference takes the form of consistency maintenance.
A constraint set S containing variables x1, ..., xn is k-consistent if
- for any subset of variables x1, ..., xj, xj+1 (with j + 1 = k),
- and any partial assignment (x1, ..., xj) = (v1, ..., vj) that violates no constraint in S,
there is a vj+1 such that (x1, ..., xj+1) = (v1, ..., vj+1) violates no constraint in S.
S is strongly k-consistent if it is j-consistent for j = 1, ..., k.
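For small instances the definition can be tested by brute force. The sketch below checks k-consistency of a clause set directly from the definition; the clause set is the slide-deck example with signs reconstructed (the complement bars did not survive extraction):

```python
from itertools import combinations, product

def violates(assign, clauses):
    # A clause is violated when every literal is assigned and false.
    # (Python treats 1 == True and 0 == False, so `assign[v] != pos`
    # tests "literal assigned false".)
    return any(
        all(v in assign and assign[v] != pos for v, pos in c)
        for c in clauses
    )

def k_consistent(clauses, variables, k):
    """Brute-force test: every consistent assignment to any k-1
    variables extends consistently to any k-th variable."""
    for subset in combinations(variables, k - 1):
        for values in product((0, 1), repeat=k - 1):
            assign = dict(zip(subset, values))
            if violates(assign, clauses):
                continue
            for extra in variables:
                if extra in subset:
                    continue
                if not any(
                    not violates({**assign, extra: b}, clauses)
                    for b in (0, 1)
                ):
                    return False
    return True

# Clause signs reconstructed from the slides' example.
clauses = [
    [(1, False), (2, True), (3, True)],   # ¬x1 ∨ x2 ∨ x3
    [(1, True), (2, False), (4, True)],   # x1 ∨ ¬x2 ∨ x4
    [(3, True), (5, True)],               # x3 ∨ x5
    [(4, True), (5, False), (6, True)],   # x4 ∨ ¬x5 ∨ x6
]
result = all(k_consistent(clauses, range(1, 7), k) for k in (1, 2, 3))
print(result)  # True: the set is strongly 3-consistent
```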

Slide 82: Constraint programming
Theorem (Freuder). If constraint set S is strongly k-consistent, and its dependency graph has width less than k (with respect to the branching order), then S can be solved without backtracking.

Slides 83-96: Constraint programming
The clause set
  ¬x1 ∨ x2 ∨ x3
  x1 ∨ ¬x2 ∨ x4
  x3 ∨ x5
  x4 ∨ ¬x5 ∨ x6
has a dependency graph on x1, ..., x6 (two variables are adjacent when they share a clause) whose width with respect to the branching order x1, ..., x6 is max in-degree = 2. We will show that the set is strongly 3-consistent; by Freuder's theorem we can therefore solve it without backtracking.
Branching on x1, ..., x6 in order, choosing at each step a value that violates no clause:
  x1 = 0  (satisfies ¬x1 ∨ x2 ∨ x3; reduces x1 ∨ ¬x2 ∨ x4 to ¬x2 ∨ x4)
  x2 = 0  (satisfies ¬x2 ∨ x4)
  x3 = 0  (reduces x3 ∨ x5 to the unit clause x5)
  x4 = 0  (reduces x4 ∨ ¬x5 ∨ x6 to ¬x5 ∨ x6)
  x5 = 1  (satisfies the unit clause x5; reduces ¬x5 ∨ x6 to x6)
  x6 = 1  (satisfies the unit clause x6)
No backtracking is needed: (x1, ..., x6) = (0, 0, 0, 0, 1, 1).
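The backtrack-free search can be sketched as a greedy loop: assign variables in the branching order, taking the first value that violates no clause. Clause signs are reconstructed from the deck's example:

```python
# Clause signs reconstructed for illustration.
clauses = [
    [(1, False), (2, True), (3, True)],   # ¬x1 ∨ x2 ∨ x3
    [(1, True), (2, False), (4, True)],   # x1 ∨ ¬x2 ∨ x4
    [(3, True), (5, True)],               # x3 ∨ x5
    [(4, True), (5, False), (6, True)],   # x4 ∨ ¬x5 ∨ x6
]

def ok(assign):
    # No clause has all of its literals assigned and false.
    return not any(
        all(v in assign and assign[v] != pos for v, pos in c)
        for c in clauses
    )

# Assign in the branching order, greedily trying 0 before 1.
# Strong 3-consistency plus width 2 guarantee a value always works,
# so `next` never fails and no backtracking occurs.
assign = {}
for var in range(1, 7):
    assign[var] = next(b for b in (0, 1) if ok({**assign, var: b}))
print(assign)  # {1: 0, 2: 0, 3: 0, 4: 0, 5: 1, 6: 1}
```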

Slide 97: Constraint programming
Theorem. Application of k-resolution makes a clause set strongly k-consistent.
k-resolution = generate only resolvents with fewer than k literals.
For the clause set
  ¬x1 ∨ x2 ∨ x3
  x1 ∨ ¬x2 ∨ x4
  x3 ∨ x5
  x4 ∨ ¬x5 ∨ x6
all resolvents have 3 or more literals, so 3-resolution generates nothing. The clause set is therefore strongly 3-consistent, as claimed.
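The claim that every resolvent has at least 3 literals can be enumerated directly (clause signs again reconstructed from the deck's example):

```python
from itertools import combinations

# Clauses as frozensets of signed integers (k = xk, -k = complement);
# signs reconstructed for illustration.
clauses = [
    frozenset({-1, 2, 3}),    # ¬x1 ∨ x2 ∨ x3
    frozenset({1, -2, 4}),    # x1 ∨ ¬x2 ∨ x4
    frozenset({3, 5}),        # x3 ∨ x5
    frozenset({4, -5, 6}),    # x4 ∨ ¬x5 ∨ x6
]

def resolvents(c1, c2):
    # Resolve on every variable occurring with opposite signs in the
    # two clauses; discard tautological results.
    for lit in c1:
        if -lit in c2:
            r = (c1 - {lit}) | (c2 - {-lit})
            if not any(-l in r for l in r):
                yield r

all_res = [r for a, b in combinations(clauses, 2) for r in resolvents(a, b)]
# 3-resolution would keep only resolvents with fewer than 3 literals;
# none exist, so the set is already strongly 3-consistent.
print(len(all_res), all(len(r) >= 3 for r in all_res))  # 1 True
```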

Slides 98-100: Constraint programming
Constraint programmers are primarily concerned with domain consistency.
A constraint set S is domain consistent if, for any given variable xj and any value vj in its domain, xj = vj in some solution of S.
Domain consistency = generalized arc consistency = hyperarc consistency.
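A brute-force sketch of enforcing domain consistency on an assumed toy CSP (two variables with the constraint x < y): a value stays in a domain only if some full solution uses it.

```python
from itertools import product

# Toy CSP (assumed example).
domains = {"x": {1, 2, 3}, "y": {1, 2, 3}}
constraints = [lambda a: a["x"] < a["y"]]   # single constraint: x < y

def solutions(domains):
    # Enumerate all full assignments satisfying every constraint.
    names = sorted(domains)
    for vals in product(*(sorted(domains[n]) for n in names)):
        a = dict(zip(names, vals))
        if all(c(a) for c in constraints):
            yield a

# Keep a value only if it appears in some solution.
filtered = {
    n: {v for v in dom if any(s[n] == v for s in solutions(domains))}
    for n, dom in domains.items()
}
print({n: sorted(vals) for n, vals in filtered.items()})
# {'x': [1, 2], 'y': [2, 3]}
```

Real solvers achieve the same effect with per-constraint filtering algorithms rather than enumeration; the brute-force version is just the definition made executable.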
