Logic and cutting planes Theorem (JNH). The prime implicates of a clause set define an integral polytope if and only if all maximal monotone subsets of the prime implicates define an integral polytope. Generalized by Guenin, and by Nobili & Sassano. Slide 32
Logic of 0-1 Inequalities Slide 33
Logic of 0-1 inequalities 0-1 inequalities can be viewed as logical propositions. Slide 34
Logic of 0-1 inequalities 0-1 inequalities can be viewed as logical propositions. Can the resolution algorithm be generalized to 0-1 inequalities? Slide 35
Logic of 0-1 inequalities 0-1 inequalities can be viewed as logical propositions. Can the resolution algorithm be generalized to 0-1 inequalities? Yes. This results in a logical analog of Chvátal’s theorem. Slide 36
Logic of 0-1 inequalities 0-1 inequalities can be viewed as logical propositions. Can the resolution algorithm be generalized to 0-1 inequalities? Yes. This results in a logical analog of Chvátal’s theorem. Theorem (JNH). Classical resolution + diagonal summation generates all 0-1 prime implicates (up to logical equivalence). Slide 37
Logic of 0-1 inequalities Diagonal summation: Each inequality below is implied by an inequality in the set to which 0-1 resolution is applied.
x1 + 5x2 + 3x3 + x4 ≥ 4
2x1 + 4x2 + 3x3 + x4 ≥ 4
2x1 + 5x2 + 2x3 + x4 ≥ 4
2x1 + 5x2 + 3x3 ≥ 4
Diagonal sum: 2x1 + 5x2 + 3x3 + x4 ≥ 5
Slide 38
Logic of 0-1 inequalities Diagonal summation: Each inequality below is implied by an inequality in the set to which 0-1 resolution is applied.
x1 + 5x2 + 3x3 + x4 ≥ 4
2x1 + 4x2 + 3x3 + x4 ≥ 4
2x1 + 5x2 + 2x3 + x4 ≥ 4
2x1 + 5x2 + 3x3 + 0x4 ≥ 4
Diagonal sum: 2x1 + 5x2 + 3x3 + x4 ≥ 5
Slide 39
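To make the inference concrete, here is a minimal Python sketch, using the coefficients as reconstructed above, that verifies by enumeration that the diagonal sum is implied by the four premises. Each premise is the conclusion with one coefficient and the right-hand side both reduced by 1.

```python
from itertools import product

# Premises of the diagonal sum (coefficients as in the example above).
premises = [
    ([1, 5, 3, 1], 4),
    ([2, 4, 3, 1], 4),
    ([2, 5, 2, 1], 4),
    ([2, 5, 3, 0], 4),
]
diagonal_sum = ([2, 5, 3, 1], 5)

def satisfies(point, ineq):
    coeffs, rhs = ineq
    return sum(a * x for a, x in zip(coeffs, point)) >= rhs

# Every 0-1 point that satisfies all premises also satisfies the diagonal sum.
for point in product((0, 1), repeat=4):
    if all(satisfies(point, p) for p in premises):
        assert satisfies(point, diagonal_sum)
print("the diagonal sum is implied by the premises over all 0-1 points")
```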
Logic and Linear Programming Slide 40
Logic and linear programming Theorem: A renamable Horn set of clauses is satisfiable if and only if it has no unit refutation. Horn = at most one positive literal per clause. Renamable Horn = Horn after complementing some variables. Unit refutation = resolution proof of unsatisfiability in which at least one parent of each resolvent is a unit clause. Slide 41
Logic and linear programming Theorem: A renamable Horn set of clauses is satisfiable if and only if it has no unit refutation. Horn = at most one positive literal per clause. Renamable Horn = Horn after complementing some variables. Unit refutation = resolution proof of unsatisfiability in which at least one parent of each resolvent is a unit clause.
Horn set: x1; ¬x1 ∨ x2; ¬x1 ∨ x3; ¬x1 ∨ ¬x2 ∨ ¬x3
Slide 42
Logic and linear programming Theorem: A renamable Horn set of clauses is satisfiable if and only if it has no unit refutation. Horn = at most one positive literal per clause. Renamable Horn = Horn after complementing some variables. Unit refutation = resolution proof of unsatisfiability in which at least one parent of each resolvent is a unit clause.
Horn set: x1; ¬x1 ∨ x2; ¬x1 ∨ x3; ¬x1 ∨ ¬x2 ∨ ¬x3
[Unit resolution: first step shown on the clause set]
Slide 43
Logic and linear programming Theorem: A renamable Horn set of clauses is satisfiable if and only if it has no unit refutation. Horn = at most one positive literal per clause. Renamable Horn = Horn after complementing some variables. Unit refutation = resolution proof of unsatisfiability in which at least one parent of each resolvent is a unit clause.
Horn set: x1; ¬x1 ∨ x2; ¬x1 ∨ x3; ¬x1 ∨ ¬x2 ∨ ¬x3
[Unit resolution: two successive steps shown on the clause set]
Slide 44
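A minimal Python sketch of unit resolution, applied to the Horn set above (with the negations as reconstructed there); the function name unit_refute and the integer-literal encoding are illustrative choices, not notation from the slides.

```python
def unit_refute(clauses):
    """Repeat unit resolution until the empty clause appears or nothing new is
    derived.  Clauses are sets of literals; a negative integer is a negated variable."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        if frozenset() in clauses:
            return True                      # empty clause: a unit refutation exists
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        new = {c - {-lit} for lit in units for c in clauses if -lit in c}
        if new <= clauses:
            return False                     # nothing new: no unit refutation
        clauses |= new

# The Horn set above (negations as reconstructed there).
horn = [{1}, {-1, 2}, {-1, 3}, {-1, -2, -3}]
print(unit_refute(horn))   # True: a unit refutation exists, so the set is unsatisfiable
```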
Logic and linear programming We do not know a necessary and sufficient condition for solvability by unit refutation. But we can identify sufficient conditions by generalizing Horn sets, for example to extended Horn sets, which rely on a rounding property of linear programming. Slide 45
Logic and linear programming Theorem: A satisfiable Horn set can be solved by rounding down a solution of the linear programming relaxation. Slide 46
Logic and linear programming Theorem: A satisfiable Horn set can be solved by rounding down a solution of the linear programming relaxation.
Horn set: x1; ¬x1 ∨ ¬x2 ∨ x3; ¬x2 ∨ ¬x3; ¬x1 ∨ x2 ∨ ¬x3
Slide 47
Logic and linear programming Theorem: A satisfiable Horn set can be solved by rounding down a solution of the linear programming relaxation.
Horn set: x1; ¬x1 ∨ ¬x2 ∨ x3; ¬x2 ∨ ¬x3; ¬x1 ∨ x2 ∨ ¬x3
LP relaxation: x1 ≥ 1; (1 − x1) + (1 − x2) + x3 ≥ 1; (1 − x2) + (1 − x3) ≥ 1; (1 − x1) + x2 + (1 − x3) ≥ 1; 0 ≤ xj ≤ 1
Slide 48
Logic and linear programming Theorem: A satisfiable Horn set can be solved by rounding down a solution of the linear programming relaxation.
Horn set: x1; ¬x1 ∨ ¬x2 ∨ x3; ¬x2 ∨ ¬x3; ¬x1 ∨ x2 ∨ ¬x3
LP relaxation: x1 ≥ 1; (1 − x1) + (1 − x2) + x3 ≥ 1; (1 − x2) + (1 − x3) ≥ 1; (1 − x1) + x2 + (1 − x3) ≥ 1; 0 ≤ xj ≤ 1
Solution: (x1, x2, x3) = (1, 1/2, 1/2)
Slide 49
Logic and linear programming Theorem: A satisfiable Horn set can be solved by rounding down a solution of the linear programming relaxation.
Horn set: x1; ¬x1 ∨ ¬x2 ∨ x3; ¬x2 ∨ ¬x3; ¬x1 ∨ x2 ∨ ¬x3
LP relaxation: x1 ≥ 1; (1 − x1) + (1 − x2) + x3 ≥ 1; (1 − x2) + (1 − x3) ≥ 1; (1 − x1) + x2 + (1 − x3) ≥ 1; 0 ≤ xj ≤ 1
Solution: (x1, x2, x3) = (1, 1/2, 1/2)
Round down: (x1, x2, x3) = (1, 0, 0)
Slide 50
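A small sketch of the round-down procedure using scipy's linprog, with the Horn set as reconstructed above; the objective that maximizes x2 + x3 is chosen only so the solver lands on the fractional vertex shown on the slide.

```python
import numpy as np
from scipy.optimize import linprog

# Each clause gives the inequality
#   (sum of x_j over positive literals) + (sum of (1 - x_j) over negative literals) >= 1.
clauses = [{1}, {-1, -2, 3}, {-2, -3}, {-1, 2, -3}]
n = 3
A_ub, b_ub = [], []
for cl in clauses:
    row, rhs = np.zeros(n), 1.0
    for lit in cl:
        j = abs(lit) - 1
        if lit > 0:
            row[j] += 1
        else:
            row[j] -= 1
            rhs -= 1              # constant contributed by (1 - x_j)
    A_ub.append(-row)             # convert  row @ x >= rhs  into  <=  form
    b_ub.append(-rhs)

# Maximize x2 + x3 so the solver reaches the fractional vertex (1, 1/2, 1/2).
res = linprog(c=[0.0, -1.0, -1.0], A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, 1)] * n)
print(res.x)                                   # approximately (1, 0.5, 0.5)
rounded = np.floor(res.x + 1e-9).astype(int)
print(rounded)                                 # (1, 0, 0), which satisfies every clause
```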
Logic and linear programming To generalize this, we use the following:
Theorem (Chandrasekaran): Suppose Ax ≥ b has integral components and T is a nonsingular matrix such that
- T and T⁻¹ are integral,
- each row of T⁻¹ contains at most one negative entry, namely −1,
- each row of AT⁻¹ contains at most one negative entry, namely −1.
Then if x solves Ax ≥ b, so does T⁻¹⌈Tx⌉.
Slide 51
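A quick numeric check of the rounding operation, assuming the ceiling form T⁻¹⌈Tx⌉ of the conclusion; with T = −I it reduces to rounding down, which recovers the Horn case above.

```python
import numpy as np

# For a Horn clause system the choice T = -I meets the conditions:
# T and T^{-1} are integral, and each row of T^{-1} and of A T^{-1} has at most
# one negative entry, equal to -1 (a Horn row of A has at most one +1).
T = -np.eye(3)
x = np.array([1.0, 0.5, 0.5])                  # the fractional LP solution above
rounded = np.linalg.inv(T) @ np.ceil(T @ x)
print(rounded)                                 # [1. 0. 0.] -- i.e., x rounded down
```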
Logic and linear programming A clause has the extended star-chain property if it corresponds to a set of edge-disjoint flows into the root of an arborescence and a flow on one additional chain.
x1 ∨ x3 ∨ x4 ∨ x5 ∨ x6 ∨ x7
[Figure: arborescence with edges labeled x1, …, x7]
Slide 52
Logic and linear programming A clause set is extended Horn if there is an arborescence for which every clause in the set has the extended star-chain property.
x1 ∨ x3 ∨ x4 ∨ x5 ∨ x6 ∨ x7
[Figure: arborescence with edges labeled x1, …, x7]
Slide 53
Logic and linear programming Theorem (Chandru and JNH). A satisfiable extended Horn clause set can be solved by rounding a solution of the LP relaxation as shown:
x1 ∨ x3 ∨ x4 ∨ x5 ∨ x6 ∨ x7
[Figure: the same arborescence, with each chain marked with its rounding direction (up or down)]
Slide 54
Logic and linear programming Corollary. A satisfiable extended Horn clause set can be solved by assigning values as shown:
x1 ∨ x3 ∨ x4 ∨ x5 ∨ x6 ∨ x7
[Figure: the same arborescence, with each chain marked with the value (1 or 0) it is assigned]
Slide 55
Logic and linear programming Theorem (Chandru and JNH). A renamable extended Horn clause set is satisfiable if and only if it has no unit refutation. Slide 56
Logic and linear programming Theorem (Chandru and JNH). A renamable extended Horn clause set is satisfiable if and only if it has no unit refutation. Theorem (Schlipf, Annexstein, Franco & Swaminathan). These results hold when the incoming chains are not edge disjoint. Slide 57
Logic and linear programming Theorem (Chandru and JNH). A renamable extended Horn clause set is satisfiable if and only if it has no unit refutation. Theorem (Schlipf, Annexstein, Franco & Swaminathan). These results hold when the incoming chains are not edge disjoint. Corollary (Schlipf, Annexstein, Franco & Swaminathan). A one-step lookahead algorithm solves a satisfiable extended Horn problem without knowledge of the arborescence. Slide 58
Inference duality Slide 59
Inference duality Consider an optimization problem:
min f(x)
subject to a constraint set S
x ∈ D (variable domain)
Slide 60
Inference duality Consider an optimization problem:
min f(x)
subject to a constraint set S
x ∈ D (variable domain)
An inference dual is:
max v
subject to S ⇒^P (f(x) ≥ v)   (there is a proof P of f(x) ≥ v from premises in S)
P ∈ 𝒫 (the family of admissible proofs), v ∈ ℝ
Slide 61
Inference duality Linear programming:
min cx
subject to Ax ≥ b, x ≥ 0
Let (Ax ≥ b) ⇒^P (cx ≥ v) when uAx ≥ ub dominates cx ≥ v for some u ≥ 0 (dominates = uA ≤ c and ub ≥ v).
Inference dual is:
max v
subject to (Ax ≥ b) ⇒^P (cx ≥ v)
P ∈ 𝒫, v ∈ ℝ
Slide 62
Inference duality Linear programming:
min cx
subject to Ax ≥ b, x ≥ 0
Let (Ax ≥ b) ⇒^P (cx ≥ v) when uAx ≥ ub dominates cx ≥ v for some u ≥ 0 (dominates = uA ≤ c and ub ≥ v).
Inference dual is:
max v
subject to (Ax ≥ b) ⇒^P (cx ≥ v)
P ∈ 𝒫, v ∈ ℝ
This becomes the classical LP dual.
Slide 63
Inference duality Linear programming:
min cx
subject to Ax ≥ b, x ≥ 0
Let (Ax ≥ b) ⇒^P (cx ≥ v) when uAx ≥ ub dominates cx ≥ v for some u ≥ 0 (dominates = uA ≤ c and ub ≥ v).
Inference dual is:
max v
subject to (Ax ≥ b) ⇒^P (cx ≥ v)
P ∈ 𝒫, v ∈ ℝ
This becomes the classical LP dual. This is a strong dual because the inference method is complete (Farkas Lemma).
Slide 64
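A small illustration with made-up data: solving the primal LP and its classical dual side by side shows the equal optimal values that the strong inference dual promises.

```python
import numpy as np
from scipy.optimize import linprog

# The inference dual of an LP looks for multipliers u >= 0 such that uAx >= ub
# dominates cx >= v (uA <= c and ub >= v), so the best provable bound is
# max{ub : uA <= c, u >= 0} -- the classical LP dual.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 4.0])

# Primal: min cx subject to Ax >= b, x >= 0 (written in <= form for linprog).
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)
# Dual: max ub subject to uA <= c, u >= 0 (linprog minimizes, so negate b).
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2)
print(primal.fun, -dual.fun)   # both 10.0: equal optimal values, the dual is strong
```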
Inference duality General inequality constraints:
min f(x)
subject to g(x) ≥ 0, x ∈ S
Let (g(x) ≥ 0) ⇒^P (f(x) ≥ v) when ug(x) ≥ 0 implies f(x) ≥ v for some u ≥ 0 (implies = all x ∈ S satisfying ug(x) ≥ 0 satisfy f(x) ≥ v).
Inference dual is:
max v
subject to (g(x) ≥ 0) ⇒^P (f(x) ≥ v)
P ∈ 𝒫, v ∈ ℝ
Slide 65
Inference duality General inequality constraints:
min f(x)
subject to g(x) ≥ 0, x ∈ S
Let (g(x) ≥ 0) ⇒^P (f(x) ≥ v) when ug(x) ≥ 0 implies f(x) ≥ v for some u ≥ 0 (implies = all x ∈ S satisfying ug(x) ≥ 0 satisfy f(x) ≥ v).
Inference dual is:
max v
subject to (g(x) ≥ 0) ⇒^P (f(x) ≥ v)
P ∈ 𝒫, v ∈ ℝ
This becomes the surrogate dual.
Slide 66
Inference duality General inequality constraints:
min f(x)
subject to g(x) ≥ 0, x ∈ S
Let (g(x) ≥ 0) ⇒^P (f(x) ≥ v) when ug(x) ≥ 0 dominates f(x) ≥ v for some u ≥ 0 (dominates = f(x) − ug(x) ≥ v for all x ∈ S).
Inference dual is:
max v
subject to (g(x) ≥ 0) ⇒^P (f(x) ≥ v)
P ∈ 𝒫, v ∈ ℝ
Slide 67
Inference duality General inequality constraints:
min f(x)
subject to g(x) ≥ 0, x ∈ S
Let (g(x) ≥ 0) ⇒^P (f(x) ≥ v) when ug(x) ≥ 0 dominates f(x) ≥ v for some u ≥ 0 (dominates = f(x) − ug(x) ≥ v for all x ∈ S).
Inference dual is:
max v
subject to (g(x) ≥ 0) ⇒^P (f(x) ≥ v)
P ∈ 𝒫, v ∈ ℝ
This becomes the Lagrangean dual.
Slide 68
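A tiny made-up example of the Lagrangean reading of the dual: for each u ≥ 0 the best provable bound is the minimum of f(x) − u·g(x) over x ∈ S, and maximizing over u gives the Lagrangean dual value (found here by a crude grid search).

```python
import numpy as np
from itertools import product

# f(x) = 3x1 + 2x2 over x in S = {0,1}^2, with constraint g(x) = x1 + x2 - 1 >= 0.
S = list(product((0, 1), repeat=2))
f = lambda x: 3 * x[0] + 2 * x[1]
g = lambda x: x[0] + x[1] - 1

# For a fixed u >= 0, "u*g(x) >= 0 dominates f(x) >= v" means
# f(x) - u*g(x) >= v for all x in S, so the best provable v is the minimum below.
def bound(u):
    return min(f(x) - u * g(x) for x in S)

best = max(bound(u) for u in np.linspace(0.0, 10.0, 1001))   # crude search over u
opt = min(f(x) for x in S if g(x) >= 0)
print(best, opt)   # the dual bound equals the optimum (both 2): no gap here
```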
Inference duality Integer linear programming:
min cx
subject to Ax ≥ b, x ∈ S
Let (Ax ≥ b) ⇒^P (cx ≥ v) when h(Ax) ≥ h(b) dominates cx ≥ v for some subadditive and homogeneous function h.
Inference dual is:
max v
subject to (Ax ≥ b) ⇒^P (cx ≥ v)
P ∈ 𝒫, v ∈ ℝ
Slide 69
Inference duality Integer linear programming:
min cx
subject to Ax ≥ b, x ∈ S
Let (Ax ≥ b) ⇒^P (cx ≥ v) when h(Ax) ≥ h(b) dominates cx ≥ v for some subadditive and homogeneous function h.
Inference dual is:
max v
subject to (Ax ≥ b) ⇒^P (cx ≥ v)
P ∈ 𝒫, v ∈ ℝ
This becomes the subadditive dual.
Slide 70
Inference duality Integer linear programming:
min cx
subject to Ax ≥ b, x ∈ S
Let (Ax ≥ b) ⇒^P (cx ≥ v) when h(Ax) ≥ h(b) dominates cx ≥ v for some subadditive and homogeneous function h.
Inference dual is:
max v
subject to (Ax ≥ b) ⇒^P (cx ≥ v)
P ∈ 𝒫, v ∈ ℝ
This becomes the subadditive dual. This is a strong dual because the inference method is complete, due to Chvátal’s theorem. The appropriate Chvátal function is subadditive and can be found by Gomory’s cutting plane method.
Slide 71
Inference duality Inference duality permits a generalization of Benders decomposition. Slide 72
Inference duality Inference duality permits a generalization of Benders decomposition. In classical Benders, a Benders cut is a linear combination of the subproblem constraints using dual multipliers. Slide 73
Inference duality Inference duality permits a generalization of Benders decomposition. In classical Benders, a Benders cut is a linear combination of the subproblem constraints using dual multipliers. The Benders cut rules out solutions of the master problem for which the proof of optimality in the subproblem is still valid. Slide 74
Inference duality Inference duality permits a generalization of Benders decomposition. In classical Benders, a Benders cut is a linear combination of the subproblem constraints using dual multipliers. The Benders cut rules out solutions of the master problem for which the proof of optimality in the subproblem is still valid. For general optimization, a Benders cut does the same, but the proof of optimality is a solution of the general inference dual. Slide 75
Inference duality Inference duality permits a generalization of Benders decomposition. In classical Benders, a Benders cut is a linear combination of the subproblem constraints using dual multipliers. The Benders cut rules out solutions of the master problem for which the proof of optimality in the subproblem is still valid. For general optimization, a Benders cut does the same, but the proof of optimality is a solution of the general inference dual. This has led to orders-of-magnitude speedups in the solution of scheduling and other problems by logic-based Benders decomposition. Slide 76
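A deliberately small sketch of the logic-based Benders loop, with made-up data (3 jobs, 2 machines, at most 2 jobs per machine) and the simplest possible cut, a nogood that excludes the assignment the subproblem showed infeasible; real implementations use a MIP or CP solver for the master problem and stronger cuts derived from the inference dual.

```python
from itertools import product

# Master: assign jobs to machines to minimize the load on machine 0.
# Subproblem: check the capacity rule and, on failure, return the offending jobs.
durations = [2, 3, 4]
capacity = 2

def subproblem(assign):
    for m in (0, 1):
        jobs = tuple(j for j, a in enumerate(assign) if a == m)
        if len(jobs) > capacity:
            return (m, jobs)          # certificate of infeasibility -> Benders cut
    return None

cuts = []
while True:
    # Master problem by brute force over assignments that satisfy all cuts so far.
    feasible = [a for a in product((0, 1), repeat=3)
                if all(any(a[j] != m for j in jobs) for m, jobs in cuts)]
    assign = min(feasible,
                 key=lambda a: sum(d for d, m in zip(durations, a) if m == 0))
    cut = subproblem(assign)
    if cut is None:
        print("feasible optimal assignment:", assign)
        break
    cuts.append(cut)
```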
Constraint Programming Slide 77
Constraint programming Constraint programming uses logical inference to reduce backtracking. Slide 78
Constraint programming Constraint programming uses logical inference to reduce backtracking. Inference takes the form of consistency maintenance. Slide 79
Constraint programming Constraint programming uses logical inference to reduce backtracking. Inference takes the form of consistency maintenance. A constraint set S containing variables x1, …, xn is k-consistent if
- for any subset of variables x1, …, xj, xj+1 (with j + 1 = k)
- and any partial assignment (x1, …, xj) = (v1, …, vj) that violates no constraint in S,
there is a vj+1 such that (x1, …, xj+1) = (v1, …, vj+1) violates no constraint in S.
Slide 80
Constraint programming Constraint programming uses logical inference to reduce backtracking. Inference takes the form of consistency maintenance. A constraint set S containing variables x1, …, xn is k-consistent if
- for any subset of variables x1, …, xj, xj+1 (with j + 1 = k)
- and any partial assignment (x1, …, xj) = (v1, …, vj) that violates no constraint in S,
there is a vj+1 such that (x1, …, xj+1) = (v1, …, vj+1) violates no constraint in S.
S is strongly k-consistent if it is j-consistent for j = 1, …, k.
Slide 81
Constraint programming Theorem (Freuder). If constraint set S is strongly k-consistent, and its dependency graph has width less than k (with respect to the branching order), then S can be solved without backtracking. Slide 82
Constraint programming Theorem (Freuder). If constraint set S is strongly k-consistent, and its dependency graph has width less than k (with respect to the branching order), then S can be solved without backtracking.
¬x1 ∨ x2 ∨ x3
x1 ∨ ¬x2 ∨ x4
x3 ∨ x5
x4 ∨ ¬x5 ∨ x6
[Dependency graph on x1, …, x6: an edge joins two variables that occur in a common clause]
Slide 83
Constraint programming Theorem (Freuder). If constraint set S is strongly k-consistent, and its dependency graph has width less than k (with respect to the branching order), then S can be solved without backtracking.
¬x1 ∨ x2 ∨ x3
x1 ∨ ¬x2 ∨ x4
x3 ∨ x5
x4 ∨ ¬x5 ∨ x6
[Dependency graph on x1, …, x6: an edge joins two variables that occur in a common clause]
Width = max in-degree = 2
Slide 84
Constraint programming Theorem (Freuder). If constraint set S is strongly k-consistent, and its dependency graph has width less than k (with respect to the branching order), then S can be solved without backtracking.
¬x1 ∨ x2 ∨ x3
x1 ∨ ¬x2 ∨ x4
x3 ∨ x5
x4 ∨ ¬x5 ∨ x6
[Dependency graph on x1, …, x6: an edge joins two variables that occur in a common clause]
Width = max in-degree = 2
We will show that this clause set is strongly 3-consistent. We can therefore solve it without backtracking.
Slide 85
Constraint programming Theorem (Freuder), continued. [Dependency graph on x1, …, x6] Width = max in-degree = 2. Clauses: ¬x1 ∨ x2 ∨ x3; x1 ∨ ¬x2 ∨ x4; x3 ∨ x5; x4 ∨ ¬x5 ∨ x6. Assignment: x1 = 0. Slide 86
Constraint programming Theorem (Freuder), continued. [Dependency graph on x1, …, x6] Width = max in-degree = 2. Remaining clauses (falsified literals removed): ¬x2 ∨ x4; x3 ∨ x5; x4 ∨ ¬x5 ∨ x6. Assignment: x1 = 0. Slide 87
Constraint programming Theorem (Freuder), continued. [Dependency graph on x1, …, x6] Width = max in-degree = 2. Remaining clauses: ¬x2 ∨ x4; x3 ∨ x5; x4 ∨ ¬x5 ∨ x6. Assignment: x1 = 0, x2 = 0. Slide 88
Constraint programming Theorem (Freuder), continued. [Dependency graph on x1, …, x6] Width = max in-degree = 2. Remaining clauses: x3 ∨ x5; x4 ∨ ¬x5 ∨ x6. Assignment: x1 = 0, x2 = 0. Slide 89
Constraint programming Theorem (Freuder), continued. [Dependency graph on x1, …, x6] Width = max in-degree = 2. Remaining clauses: x3 ∨ x5; x4 ∨ ¬x5 ∨ x6. Assignment: x1 = x2 = x3 = 0. Slide 90
Constraint programming Theorem (Freuder), continued. [Dependency graph on x1, …, x6] Width = max in-degree = 2. Remaining clauses: x5; x4 ∨ ¬x5 ∨ x6. Assignment: x1 = x2 = x3 = 0. Slide 91
Constraint programming Theorem (Freuder), continued. [Dependency graph on x1, …, x6] Width = max in-degree = 2. Remaining clauses: x5; x4 ∨ ¬x5 ∨ x6. Assignment: x1 = x2 = x3 = x4 = 0. Slide 92
Constraint programming Theorem (Freuder), continued. [Dependency graph on x1, …, x6] Width = max in-degree = 2. Remaining clauses: x5; ¬x5 ∨ x6. Assignment: x1 = x2 = x3 = x4 = 0. Slide 93
Constraint programming Theorem (Freuder), continued. [Dependency graph on x1, …, x6] Width = max in-degree = 2. Remaining clauses: x5; ¬x5 ∨ x6. Assignment: x1 = x2 = x3 = x4 = 0, x5 = 1. Slide 94
Constraint programming Theorem (Freuder), continued. [Dependency graph on x1, …, x6] Width = max in-degree = 2. Remaining clause: x6. Assignment: x1 = x2 = x3 = x4 = 0, x5 = 1. Slide 95
Constraint programming Theorem (Freuder), continued. [Dependency graph on x1, …, x6] Width = max in-degree = 2. Remaining clause: x6. Assignment: (x1, …, x6) = (0, 0, 0, 0, 1, 1). Slide 96
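A short sketch of the backtrack-free pass, using the clause signs reconstructed above; it reproduces the assignment built up on the slides.

```python
# Greedy pass in the branching order x1, ..., x6: assign each variable the first
# value that violates no clause whose variables are all assigned.
clauses = [{-1, 2, 3}, {1, -2, 4}, {3, 5}, {4, -5, 6}]

def violated(clause, assignment):
    return all(abs(l) in assignment and assignment[abs(l)] != (l > 0)
               for l in clause)

assignment = {}
for var in range(1, 7):
    for value in (False, True):            # try 0 first, as on the slides
        assignment[var] = value
        if not any(violated(c, assignment) for c in clauses):
            break
print([int(assignment[v]) for v in range(1, 7)])   # [0, 0, 0, 0, 1, 1]
```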
Constraint programming Theorem. Application of k-resolution makes a clause set strongly k-consistent. (k-resolution = generate only resolvents with fewer than k literals.)
¬x1 ∨ x2 ∨ x3
x1 ∨ ¬x2 ∨ x4
x3 ∨ x5
x4 ∨ ¬x5 ∨ x6
All resolvents have 3 or more literals. The clause set is therefore strongly 3-consistent, as claimed.
Slide 97
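A brute-force check of the strong 3-consistency claim for the clause set above (signs as reconstructed there).

```python
from itertools import product, combinations

# A partial assignment violates a clause only if every literal of the clause
# is assigned and false.
clauses = [{-1, 2, 3}, {1, -2, 4}, {3, 5}, {4, -5, 6}]
variables = range(1, 7)

def violates(partial):
    return any(all(abs(l) in partial and partial[abs(l)] != (l > 0) for l in c)
               for c in clauses)

def k_consistent(k):
    for subset in combinations(variables, k - 1):
        for values in product((False, True), repeat=k - 1):
            partial = dict(zip(subset, values))
            if violates(partial):
                continue
            for x in (v for v in variables if v not in subset):
                if not any(not violates({**partial, x: val})
                           for val in (False, True)):
                    return False
    return True

print(all(k_consistent(k) for k in (1, 2, 3)))     # True: strongly 3-consistent
```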
Constraint programming Constraint programmers are primarily concerned with domain consistency. Slide 98
Constraint programming Constraint programmers are primarily concerned with domain consistency. A constraint set S is domain consistent if for any given variable xj and any value vj in its domain, xj = vj in some solution of S. Slide 99
Constraint programming Constraint programmers are primarily concerned with domain consistency. A constraint set S is domain consistent if for any given variable xj and any value vj in its domain, xj = vj in some solution of S. Domain consistency = generalized arc consistency = hyperarc consistency. Slide 100
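A minimal brute-force sketch of domain-consistency filtering on a made-up constraint; real propagators achieve the same effect without enumerating solutions.

```python
from itertools import product

# A value stays in a domain only if it occurs in some complete solution.
def domain_consistent(domains, satisfies):
    names = list(domains)
    solutions = [dict(zip(names, vals))
                 for vals in product(*(domains[n] for n in names))
                 if satisfies(dict(zip(names, vals)))]
    return {n: sorted({s[n] for s in solutions}) for n in names}

# Made-up example constraint: x + y = z over small domains.
domains = {"x": [1, 2, 3], "y": [1, 2, 3], "z": [1, 2, 3]}
print(domain_consistent(domains, lambda a: a["x"] + a["y"] == a["z"]))
# {'x': [1, 2], 'y': [1, 2], 'z': [2, 3]}
```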