The Davis-Putnam Procedure
The Davis-Putnam procedure (DP) [Davis & Putnam, 1960] uses the resolution rule, leading to potentially exponential use of space. Davis, Logemann and Loveland [1962] replaced the resolution rule with a splitting rule; the new procedure is known as the DPLL procedure. Despite its age, DPLL is still one of the most popular and successful complete methods, and it is the basic framework for many modern SAT solvers. Exponential time is still a problem.
DP Procedure
Procedure DP(F)
1: for i = 1 to NumberOfVariablesIn(F) do
2:   choose a variable x occurring in F
3:   Resolvents := ∅
4:   for all (C1, C2) s.t. C1 ∈ F, C2 ∈ F, x ∈ C1, ¬x ∈ C2 do
5:     Resolvents := Resolvents ∪ resolve(C1, C2)   // don't generate tautological resolvents
6:   F := {C ∈ F | x does not occur in C} ∪ Resolvents   // x is not in the current F
7: if the empty clause ∅ ∈ F then
8:   return UNSATISFIABLE
9: else
10:  return SATISFIABLE
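Below is a minimal executable sketch of this elimination loop, with clauses represented as frozensets of integer literals (positive for x, negative for ¬x); the helper names are mine, not from the slides:

```python
def resolve_on(x, c1, c2):
    """Resolvent of c1 (containing x) and c2 (containing -x)."""
    return (c1 - {x}) | (c2 - {-x})

def is_tautology(clause):
    return any(-lit in clause for lit in clause)

def dp(clauses):
    """Davis-Putnam by variable elimination; clauses: iterable of sets of ints."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        variables = {abs(l) for c in clauses for l in c}
        if not variables:
            break
        x = next(iter(variables))                 # choose a variable occurring in F
        pos = [c for c in clauses if x in c]
        neg = [c for c in clauses if -x in c]
        resolvents = {resolve_on(x, c1, c2) for c1 in pos for c2 in neg}
        resolvents = {r for r in resolvents if not is_tautology(r)}
        # keep only the clauses not mentioning x, plus the resolvents
        clauses = {c for c in clauses if x not in c and -x not in c} | resolvents
        if frozenset() in clauses:                # empty clause derived
            return "UNSATISFIABLE"
    return "UNSATISFIABLE" if frozenset() in clauses else "SATISFIABLE"

# The UNSAT example from the slides below:
print(dp([{1, 2}, {1, -2}, {-1, 3}, {-1, -3}]))   # UNSATISFIABLE
```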
DP Resolution for SAT formula
(x1 ∨ x2 ∨ x3) ∧ (x2 ∨ ¬x3 ∨ ¬x6) ∧ (¬x2 ∨ x5)
  ⇓ x2
(x1 ∨ x3 ∨ x5) ∧ (¬x3 ∨ ¬x6 ∨ x5)
  ⇓ x3
(x1 ∨ x5 ∨ ¬x6)
⇒ SAT
DP Resolution for UNSAT formula
(x1 ∨ x2) ∧ (x1 ∨ ¬x2) ∧ (¬x1 ∨ x3) ∧ (¬x1 ∨ ¬x3)
  ⇓ x2
(x1) ∧ (¬x1 ∨ x3) ∧ (¬x1 ∨ ¬x3)
  ⇓ x1
(x3) ∧ (¬x3)
  ⇓ x3
∅ (the empty clause)
⇒ UNSAT
DPLL: Basic Notation & Definitions
Branching variable: a variable chosen for case analysis (true/false).
Free variable: a variable with no value yet.
Contradiction/dead-end/conflict: an empty clause is found.
Backtracking: an algorithmic technique to find solutions by trying one of several choices. If the choice proves incorrect, computation backtracks or restarts at the choice point in order to try another choice.
Unit Resolution
Unrestricted application of the resolution rule is too expensive. Unit resolution requires one of the two clauses to be a unit clause, i.e. a clause consisting of only one literal. Performing all possible unit resolution steps on a clause set can be done in linear time.
Unit Propagation
Unit resolution rule: from l and ¬l ∨ φ infer φ.
Unit Propagation algorithm UP(F) for clause sets F:
1. If there is a unit clause l ∈ F, then replace every ¬l ∨ φ ∈ F by φ and remove all clauses containing l from F. As a special case the empty clause ⊥ may be obtained.
2. If F still contains a unit clause, repeat step 1.
3. Return F.
We sometimes write F ⊢_UP l if l ∈ UP(F).
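A sketch of UP over the same clause representation as before; note that step 1 consumes the unit clauses themselves, so to support the F ⊢_UP l notation one would additionally record the propagated units:

```python
def unit_propagate(clauses):
    """UP(F): repeatedly pick a unit clause {l}, remove every clause containing l,
    and delete ¬l from the remaining clauses. Returns the resulting clause set;
    it contains frozenset() (the empty clause ⊥) iff a contradiction was derived."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        if frozenset() in clauses:                 # ⊥ derived
            return clauses
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            return clauses
        (l,) = unit
        clauses = {c - {-l} for c in clauses if l not in c}

print(unit_propagate([{1}, {-1, 2}, {-2, 3}, {4, 5}]))
# {frozenset({4, 5})}: the units 1, 2, 3 were propagated away
```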
The DPLL Procedure
Procedure DPLL(F)
(SAT)   if F = ∅ then return true;
(Empty) if F contains the empty clause then return false;
(UP)    if F has a unit clause {u} then return DPLL(UP(F ∪ {u}));
(Pure)  if F has a pure literal p then return DPLL(UP(F ∪ {p}));
(Split) choose a variable x;
        if DPLL(UP(F ∪ {x})) = true then return true;
        else return DPLL(UP(F ∪ {¬x}));
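The same procedure as runnable Python, reusing unit_propagate from the sketch above. UP is applied once at entry rather than at each rule, which is equivalent here; the variable choice is arbitrary:

```python
def dpll(clauses):
    clauses = unit_propagate(clauses)          # (UP) close F under unit resolution
    if not clauses:                            # (SAT) no clauses left
        return True
    if frozenset() in clauses:                 # (Empty) conflict
        return False
    literals = {l for c in clauses for l in c}
    pure = next((l for l in literals if -l not in literals), None)
    if pure is not None:                       # (Pure)
        return dpll(clauses | {frozenset([pure])})
    x = abs(next(iter(literals)))              # (Split) choose a variable
    return (dpll(clauses | {frozenset([x])})
            or dpll(clauses | {frozenset([-x])}))

print(dpll([{1, 2, 3}, {2, -3, -6}, {-2, 5}]))     # True  (the SAT example above)
print(dpll([{1, 2}, {1, -2}, {-1, 3}, {-1, -3}]))  # False (the UNSAT example above)
```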
Search Tree of the DPLL Procedure
The search tree is binary.
Large search tree size ⇔ hard problem.
Depth-first search with backtracking.
DPLL Tree of Student–Courses Problem
DPLL Performance: Original vs Variants
On hard random 3-SAT problems at ratio r = 4.25 (clauses per variable), based on UNSAT problems:

Algorithm          Complexity               #nodes (n = 100)
DPLL (1962)        O(2^(n/5))               1,048,576
Satz (1997)        O(2^(n/20.63 + 0.44))    39
Satz215 (1999)     O(2^(n/21.04 − 1.01))    13
Kcnfs (2003)       O(2^(n/21.10 − 1.35))    10
Opt_Satz (2003)    O(2^(n/21.83 − 1.70))    7

The worst-case complexity observed in these experiments is O(2^(n/21.83 − 1.70)). This is a big improvement: 2^100 ≈ 1.27 × 10^30. Look-ahead enhanced DPLL-based SAT solvers can reliably solve problems with up to 700 variables.
Heuristics for the DPLL Procedure
Objective: reduce the search tree size by choosing the best branching variable at each node of the search tree.
Central issue: how to select the next best branching variable?
Branching Heuristics for DPLL Procedure
Simple: use simple heuristics for branching variable selection, based on counting occurrences of literals or variables.
Sophisticated: use sophisticated heuristics for branching variable selection; these need more resources and effort.
Simple Branching Heuristics
MOMS (Maximum Occurrences in Minimum Sized clauses) heuristic: pick the literal that occurs most often in the (non-unit) minimum-size clauses.
Maximum binary clause occurrences (too simplistic): the CSAT solver [Dubois et al., 1993].
Jeroslow-Wang heuristic [Jeroslow & Wang, 1990; Hooker & Vinay, 1995]: estimate the contribution of each literal ℓ to satisfying the clause set and pick one of the best:
  score(ℓ) = Σ_{c ∈ F, ℓ ∈ c} 2^(−|c|)
that is, for each clause c the literal ℓ appears in, 2^(−|c|) is added, where |c| is the number of literals in c.
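A sketch of the Jeroslow-Wang score over the integer-literal clause representation used earlier:

```python
from collections import defaultdict

def jeroslow_wang(clauses):
    """score(l) = sum over clauses c containing l of 2^(-|c|).
    Returns the literal with the highest score."""
    score = defaultdict(float)
    for c in clauses:
        w = 2.0 ** -len(c)
        for l in c:
            score[l] += w
    return max(score, key=score.get)

# Short clauses dominate the score:
print(jeroslow_wang([{1, 2}, {1, 2, 3}, {-1, 4}]))   # 1 (score 0.375; literal 2 ties)
```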
Algorithms for SAT Solving: Look-ahead based DPLL
Sophisticated Branching Heuristics
Unit Propagation Look-ahead (UPLA) heuristics: Satz [Li & Anbulagan, 1997a]
Backbone Search heuristics: Kcnfs, an improved version of cnfs [Dubois & Dequen, 2001]
Double Look-ahead heuristics: Satz215 and March_dl [Heule & van Maaren, 2006]
LAS+NVO+DEW heuristics: Dew_Satz [Anbulagan & Slaney, 2005]
Unit Propagation Look-Ahead (UPLA)
Heuristics like MOMS choose branching variables based on properties of the occurrences of literals. What if one could look ahead to see what the consequences of choosing a certain branching variable are? [Freeman, 1995; Crawford & Auton, 1996; Li & Anbulagan, 1997a]
Unit Propagation Based Look-Ahead (UPLA):
1. Set a literal l true and perform unit propagation: F′ = UP(F ∪ {l}). (If the empty clause is obtained, see the next slide.)
2. Compute a heuristic value for F′.
3. Choose a literal with the highest value.
Unit Propagation Look-Ahead (UPLA): Failed Literals
UPLA for some literals may lead to the empty clause.
Lemma: If F ∪ {l} ⊢_UP ⊥, then F ⊨ ¬l.
Here l is a failed literal. Failed literals may be set false: F := UP(F ∪ {¬l}).
Unit Propagation Look-Ahead (UPLA) Heuristics
After setting a literal l true and performing UP, calculate the weight of l:
  w(l) = diff(F, UP(F ∪ {l})) = the number of clauses of minimal size in UP(F ∪ {l}) but not in F.
A literal has a high weight if setting it true produces many clauses of minimal size (typically: clauses with 2 literals).
For branching, choose a variable of maximal weight w(x) · w(¬x) + w(x) + w(¬x).
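A sketch of this weighting, reusing unit_propagate from earlier. Counting new binary clauses as the "minimal size" clauses is my assumption here, following the "typically: clauses with 2 literals" remark:

```python
def upla_weight(clauses, l, min_size=2):
    """w(l) = number of clauses of size min_size in UP(F ∪ {l}) but not in F."""
    before = {frozenset(c) for c in clauses}
    after = unit_propagate(before | {frozenset([l])})
    return sum(1 for c in after if len(c) == min_size and c not in before)

def choose_branching_variable(clauses):
    """Branch on the variable maximizing w(x)·w(¬x) + w(x) + w(¬x)."""
    variables = {abs(l) for c in clauses for l in c}
    def combined(x):
        wp, wn = upla_weight(clauses, x), upla_weight(clauses, -x)
        return wp * wn + wp + wn
    return max(variables, key=combined)
```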
Unit Propagation Look-Ahead (UPLA): Restricted UPLA
Heuristics based on UPLA are often much more informative than simpler ones like MOMS. But doing UPLA for every literal is very expensive. Li and Anbulagan [1997a] propose the use of predicates PROP for selecting a small subset of the literals for UPLA.
Unit Propagation Look-Ahead (UPLA): The Predicate PROP in the Satz Solver
If PROP(x) is true and no failed literal is found when performing UPLA with x and ¬x, then consider x for branching.
Li and Anbulagan [1997a] define several predicates:
PROP_ij(x): x occurs in at least i binary clauses, of which at least j times both negatively and positively.
PROP_0(x): true for all x.
Li and Anbulagan [1997a] experimentally find the following strongest:
PROP_z(x): the first predicate of PROP_41(x), PROP_31(x) and PROP_0(x) that is true for at least T variables (they choose T = 10).
Dew_Satz: LAS+NVO+DEW Heuristics [Anbulagan & Slaney, 2005]
LAS (look-ahead saturation): guarantees selection of the best branching variable.
NVO (neighbourhood variables ordering): attempts to limit the number of free variables examined by exploring next only the neighbourhood variables of the currently assigned variable.
DEW (dynamic equivalency weighting): whenever the binary equivalency clause x_i ⇔ x_j, which is equivalent to the two CNF clauses (¬x_i ∨ x_j) and (x_i ∨ ¬x_j), occurs in the formula at a node, Satz needs to perform look-ahead on x_i, ¬x_i, x_j and ¬x_j. Clearly, the look-aheads on x_j and ¬x_j are redundant, so they are avoided by assigning the implied literal x_j (resp. ¬x_j) the weight of its parent literal x_i (resp. ¬x_i), and then skipping look-ahead on literals with weight zero.
EqSatz [Li, 2000]
Based on Satz, enhanced with equivalency reasoning during the search process: substitute equivalent literals during the search in order to reduce the number of active variables in the current formula.
Example: given the clauses x ∨ ¬y and ¬x ∨ y (together equivalent to x ⇔ y), we can substitute x by y or vice versa.
Equivalency: Reasoning vs Weighting
Empirical Results (1/2)
On the 32-bit Parity Learning problem, a challenging problem [Selman et al., 1997]. EqSatz was the first solver to solve all the instances. Lsat and March_eq perform equivalency reasoning in a pre-search phase. Runtimes in seconds; > 36h means not solved within 36 hours.

Instance (#Vars/#Cls)    Satz    Dew_Satz  EqSatz  Lsat  March_eq  zChaff
par32-1 (3176/10227)     > 36h   12,918    242     126   0.22      > 36h
par32-2 (3176/10253)     > 36h   5,804     69      60    0.27      > 36h
par32-3 (3176/10297)     > 36h   7,198     2,863   183   2.89      > 36h
par32-4 (3176/10313)     > 36h   11,005    209     86    1.64      > 36h
par32-5 (3176/10325)     > 36h   17,564    2,639   418   8.07      > 36h
par32-1-c (1315/5254)    > 36h   10,990    335     270   2.63      > 36h
par32-2-c (1303/5206)    > 36h   411       13      16    2.19      > 36h
par32-3-c (1325/5294)    > 36h   4,474     1,220   374   6.65      > 36h
par32-4-c (1333/5326)    > 36h   7,090     202     115   0.45      > 36h
par32-5-c (1339/5350)    > 36h   11,899    2,896   97    6.44      > 36h
Equivalency: Reasoning vs Weighting
Empirical Results (2/2)
Runtime of solvers on BMC and circuit-related problems (in seconds):

Problem       Dew_Satz  EqSatz  March_eq  zChaff
barrel6       4.13      0.17    0.13      2.95
barrel7       8.62      0.23    0.25      11
barrel8       72        0.36    0.38      44
barrel9       158       0.80    0.87      66
longmult10    64        385     213       872
longmult11    79        480     232       1,625
longmult12    97        542     167       1,643
longmult13    127       617     53        2,225
longmult14    154       706     30        1,456
longmult15    256       743     23        392
philips-org   697       1,974   > 5,000   > 5,000
philips       295       2,401   726       > 5,000
Algorithms for SAT Solving: Clause learning (CL) based DPLL
DPLL with backjumping
The DPLL backtracking procedure often discovers the same conflicts repeatedly. In a branch l_1, l_2, ..., l_{n−1}, l_n, after l_n and ¬l_n have led to conflicts (derivation of ⊥), ¬l_{n−1} is always tried next, even when it is irrelevant to the conflicts with l_n and ¬l_n. Backjumping [Gaschnig, 1979] can be adapted to DPLL to backtrack from l_n to l_i when l_{i+1}, ..., l_{n−1} are all irrelevant.
DPLL with backjumping
[Figure: DPLL search tree for the clauses ¬a ∨ b, ¬b ∨ ¬d ∨ e, ¬d ∨ ¬e, ¬b ∨ d ∨ e, d ∨ ¬e, c ∨ f, branching on a, b, c, d in that order.
Conflict set with d: {a, d}. Conflict set with ¬d: {a, ¬d}.
Since c is irrelevant to both conflicts, there is no use trying ¬c: directly go back to ¬a.]
Look-ahead vs Look-back
Clause Learning (CL)
The resolution rule is more powerful than DPLL: UNSAT proofs found by DPLL may be exponentially bigger than the smallest resolution proofs. An extension of DPLL based on recording conflict clauses is similarly exponentially more powerful than plain DPLL [Beame et al., 2004]. For many applications, SAT solvers with CL are the best. Also called conflict-driven clause learning (CDCL).
Clause Learning (CL)
Assume a partial assignment (a path in the DPLL search tree from the root to a leaf node) corresponding to literals l_1, ..., l_n leads to a contradiction (with unit resolution):
  F ∪ {l_1, ..., l_n} ⊢_UP ⊥
From this follows F ⊨ ¬l_1 ∨ ··· ∨ ¬l_n. Often not all of the literals l_1, ..., l_n are needed for deriving the empty clause ⊥, and a shorter clause can be derived.
Clause Learning (CL) Example
[Figure, shown step by step: under the partial assignment a, b, c, d, e over the clauses ¬a ∨ b, ¬b ∨ ¬d ∨ e, ¬d ∨ ¬e, the clause ¬d ∨ ¬e is falsified. Resolving ¬d ∨ ¬e with ¬b ∨ ¬d ∨ e (the reason of e) gives ¬b ∨ ¬d; resolving that with ¬a ∨ b (the reason of b) gives the learned clause ¬a ∨ ¬d.]
Clause Learning (CL) Procedure
The reason of a literal: for each non-decision literal l, a reason is recorded: the clause l ∨ φ from which l was derived by unit propagation when all literals of φ were false.
A basic clause learning procedure: start with the clause C = l_1 ∨ ··· ∨ l_n that was falsified. Resolve it with the reasons l ∨ φ of non-decision literals l until only decision variables are left.
Clause Learning (CL): Variants of the Procedure
Decision scheme: stop when only decision variables are left.
First UIP (Unique Implication Point): stop when only one literal of the current decision level is left.
Last UIP: stop when at the current decision level only the decision literal is left.
First UIP has been found to yield the most useful single clause. Some solvers learn more than one clause.
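Below is a sketch of conflict analysis with the first-UIP stopping rule. It assumes a well-formed trail (one decision per level) recorded as two maps, level and reason; all data-structure names are mine:

```python
def analyze_conflict(conflict_clause, level, reason, current_level):
    """First-UIP conflict analysis.
    conflict_clause: set of int literals, all false under the current trail.
    level[v]: decision level at which variable v was assigned.
    reason[v]: reason clause of a non-decision variable v (None for decisions).
    Resolves away current-level literals until exactly one remains."""
    learned = set(conflict_clause)
    while True:
        at_current = [l for l in learned if level[abs(l)] == current_level]
        if len(at_current) <= 1:
            return learned                       # first UIP reached
        l = next(x for x in at_current if reason[abs(x)] is not None)
        learned = (learned | set(reason[abs(l)])) - {l, -l}   # resolution step

# The example above: decisions a (level 1), c (level 2), d (level 3);
# b implied by ¬a ∨ b at level 1, e implied by ¬b ∨ ¬d ∨ e at level 3.
# Variables: a=1, b=2, c=3, d=4, e=5. Conflict: ¬d ∨ ¬e is falsified.
level = {1: 1, 2: 1, 3: 2, 4: 3, 5: 3}
reason = {1: None, 2: {-1, 2}, 3: None, 4: None, 5: {-2, -4, 5}}
print(sorted(analyze_conflict({-4, -5}, level, reason, 3)))   # [-4, -2] = ¬b ∨ ¬d
```

Note that first UIP stops at ¬b ∨ ¬d here; the decision scheme would resolve once more with the reason ¬a ∨ b of b, yielding ¬a ∨ ¬d as in the figure above.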
Clause Learning (CL): Forgetting Clauses
In contrast to plain DPLL, a main problem with CL is the very large number of learned clauses. Most SAT solvers regularly forget learned clauses exceeding a length threshold to prevent memory from filling up.
Heuristics for CL: VSIDS (zChaff)
Variable State Independent Decaying Sum:
Initially the score s(l) of a literal l is its number of occurrences in F.
When a conflict clause containing l is added, increase s(l).
Periodically decay the scores: s(l) := r(l) + 0.5 · s(l), where r(l) is the number of occurrences of l in conflict clauses since the previous decay.
Always choose an unassigned literal l with maximum s(l).
Newer versions of zChaff and many other solvers use variants and extensions of VSIDS. The open-source MiniSAT solver decays the scores by 5% after every learned clause.
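A minimal sketch of the decaying-sum bookkeeping, following the slide's equations (the exact constants and timing vary between solvers):

```python
from collections import Counter

class VSIDS:
    """Sketch of the zChaff-style decaying sum."""
    def __init__(self, clauses):
        self.s = Counter(l for c in clauses for l in c)  # initial: occurrences in F
        self.r = Counter()   # occurrences in conflict clauses since the last decay

    def on_conflict_clause(self, clause):
        for l in clause:
            self.r[l] += 1   # the increase of s(l) takes effect at the next decay

    def decay(self):         # called periodically: s(l) := r(l) + 0.5 * s(l)
        for l in set(self.s) | set(self.r):
            self.s[l] = self.r[l] + 0.5 * self.s[l]
        self.r.clear()

    def pick(self, unassigned_literals):
        return max(unassigned_literals, key=lambda l: self.s[l])
```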
Heuristics for CL: VMTF (Siege)
Variable Move To Front:
Initially order all variables a in decreasing order of their number of occurrences r(a) + r(¬a) in F.
When deriving a conflict clause with a literal l, update r(l) := r(l) + 1.
Move some of the variables occurring in the conflict clause to the front of the list.
Always choose an unassigned variable a from the beginning of the list; set it true if r(a) > r(¬a), false if r(¬a) > r(a), and break ties randomly.
Watched Literals for Efficient Implementation
Efficiency of unit propagation is critical: most of the runtime of a SAT solver is spent doing it. Early SAT solvers kept track of the number of assigned literals in every clause. In the two-literal watching scheme (zChaff [Zhang & Malik, 2002a]) only two literals in each clause are tracked: as long as both are unassigned, the clause is not unit. When a literal l is set true, visit the clauses in which ¬l is watched:
1. Look for another unassigned or satisfied literal l′ in the clause.
2. If found, make l′ a watched literal: the clause is not unit.
3. If not found, check the other watched literal l_2: if it is unassigned, the clause has become unit with unit literal l_2.
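A compact sketch of the scheme; real implementations are heavily optimized, and all names here are mine. Unit input clauses are assumed to be handled separately:

```python
class WatchedClauses:
    """Two-watched-literal propagation sketch; clauses are lists of int literals."""
    def __init__(self, clauses):
        self.clauses = [list(c) for c in clauses if len(c) >= 2]
        self.watches = {}                         # literal -> indices of watching clauses
        for i, c in enumerate(self.clauses):
            for l in c[:2]:                       # watch the first two literals
                self.watches.setdefault(l, []).append(i)

    def assign_true(self, l, assignment):
        """Set l true; return newly implied unit literals, or 'CONFLICT'."""
        assignment[abs(l)] = (l > 0)
        units = []
        for i in list(self.watches.get(-l, [])):  # only clauses watching ¬l can become unit
            c = self.clauses[i]
            other = c[0] if c[1] == -l else c[1]  # the other watched literal
            # 1. look for a replacement literal that is not false
            repl = next((x for x in c[2:]
                         if assignment.get(abs(x)) in (None, x > 0)), None)
            if repl is not None:                  # 2. found: move the watch to it
                j, k = c.index(-l), c.index(repl)
                c[j], c[k] = c[k], c[j]
                self.watches[-l].remove(i)
                self.watches.setdefault(repl, []).append(i)
            elif assignment.get(abs(other)) is None:
                units.append(other)               # 3. the clause became unit
            elif assignment.get(abs(other)) != (other > 0):
                return "CONFLICT"                 # both watched literals are false
        return units

wc = WatchedClauses([[1, 2, 3], [-1, 2]])
assignment = {}
print(wc.assign_true(-2, assignment))   # [-1]: clause (¬x1 ∨ x2) became unit
```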
Heavy-tailed runtime distributions
[Diagram from Chen et al., 2001]
On many classes of problems, a heavy-tailed distribution characterizes the runtimes of a randomized algorithm on a single instance, and the runtimes of a deterministic algorithm on a class of instances.
Heavy-tailed runtime distributions
Estimating the mean is problematic. [Diagram from Gomes et al., 2000]
Heavy-tailed runtime distributions: Cause
A small number of wrong decisions leads to a part of the search tree not containing any solutions, and backtrack-style search needs a long time to traverse that search tree. There are many short paths from the root node to a success leaf node, but also a high probability of reaching a huge subtree with no solutions. These properties mean that the average runtime is high; restarting the procedure after t seconds reduces the mean substantially if t is close to the mean of the original distribution.
Restarts in SAT algorithms
Restarts had been used in stochastic local search algorithms: necessary for escaping local minima!
Gomes et al. demonstrated the utility of restarts for systematic SAT solvers: a small amount of randomness in branching variable selection; restart the algorithm after a given number of seconds.
Restarts with Clause-Learning
Learned clauses are retained when doing the restart.
Problem: the optimal restart policy depends on the runtime distribution, which is generally not known.
Problem: deletion of conflict clauses and too-early restarts may lead to incompleteness. Practical solvers increase the restart interval, as in the sketch below.
Paper: The effect of restarts on the efficiency of clause learning [Huang, 2007]
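One simple way to realize an increasing restart interval is a geometric schedule; the constants and the solver hook below are illustrative assumptions, not from the slides:

```python
def solve_with_restarts(formula, solve_until, first_limit=100, factor=1.5):
    """Restart loop with a geometrically increasing conflict limit.
    `solve_until(formula, conflict_limit)` is an assumed hook into a CDCL solver:
    it returns a model, False for UNSAT, or None when the conflict limit is hit.
    Learned clauses accumulate in `formula`, so work survives each restart."""
    limit = first_limit
    while True:
        result = solve_until(formula, conflict_limit=int(limit))
        if result is not None:     # a SAT model was found, or False: proved UNSAT
            return result
        limit *= factor            # increase the restart interval and try again
```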
Incremental SAT solving
Many applications involve a sequence of SAT tests:
Φ_0 = φ_0 ∧ G_0
Φ_1 = φ_0 ∧ φ_1 ∧ G_1
Φ_2 = φ_0 ∧ φ_1 ∧ φ_2 ∧ G_2
Φ_3 = φ_0 ∧ φ_1 ∧ φ_2 ∧ φ_3 ∧ G_3
Φ_4 = φ_0 ∧ φ_1 ∧ φ_2 ∧ φ_3 ∧ φ_4 ∧ G_4
Clauses learned from φ_0 ∧ ··· ∧ φ_{n−1} (without G_{n−1}) are conflict clauses also for φ_0 ∧ ··· ∧ φ_n ∧ G_n. Recording and reusing these clauses sometimes dramatically speeds up testing Φ_0, Φ_1, ...: incremental SAT solving. Many current SAT solvers support incremental solving.
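As a sketch of how this is typically exposed, here is the assumption-based interface of the python-sat package; guarding the goal clauses G_n with activation literals so they can be retracted is a common encoding, and the concrete package and literal numbers are my choice, not the slides':

```python
from pysat.solvers import Minisat22

with Minisat22() as s:
    s.add_clause([1, 2])              # φ0: clauses kept for all queries
    s.add_clause([-1, 3])
    s.add_clause([-10, -3])           # G0 = ¬x3, active only when 10 is assumed
    print(s.solve(assumptions=[10]))  # test Φ0 = φ0 ∧ G0
    s.add_clause([2, 3])              # φ1 arrives; learned clauses are kept
    s.add_clause([-11, -2])           # G1 = ¬x2, guarded by activation literal 11
    print(s.solve(assumptions=[11]))  # test Φ1; G0 is switched off
```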
Preprocessing
Although there are more powerful inference algorithms, unit resolution has been used inside search algorithms because it is inexpensive. Other, more expensive algorithms have proved to be useful as preprocessors that are run once before starting the DPLL procedure or another SAT algorithm. Preprocessors have been proposed based on a restricted resolution rule (3-Resolution) [Li & Anbulagan, 1997b], implication graphs of 2-literal clauses [Brafman, 2001], hyperresolution [Bacchus & Winter, 2004], and non-increasing variable elimination resolution (NiVER) [Subbarayan & Pradhan, 2005].
3-Resolution
Add resolvents for clauses of length ≤ 3. Example: resolving (x1 ∨ x2 ∨ x3) and (¬x1 ∨ x2 ∨ ¬x4) on x1 yields the resolvent (x2 ∨ x3 ∨ ¬x4).
2-SIMPLIFY [Brafman, 2001]
Most clauses generated from planning, model checking and many other applications contain 2 literals. SAT for 2-literal clauses is tractable. Testing F ⊨ l ∨ l′ for sets F of 2-literal clauses is tractable.
Implication graph of 2-literal clauses:
Nodes: the set of all literals.
Edges: for a clause l ∨ l′ there are directed edges (¬l, l′) and (¬l′, l).
2-SIMPLIFY [Brafman, 2001]
[Figure: implication graph for the clauses A ∨ B, ¬B ∨ C, ¬C ∨ D, ¬D ∨ B, ¬D ∨ A over the literal nodes A, B, C, D and their negations.]
2-SIMPLIFY [Brafman, 2001]
1. If l −→ ¬l then add the unit clause ¬l (and simplify).
2. If l_1 −→ l, ..., l_n −→ l for the clause l_1 ∨ ··· ∨ l_n then add the unit clause l.
3. Literals in one strongly connected component (SCC) are equivalent. Choose one and replace the others by it.
The standard equivalence reduction with ¬l ∨ l′ and l ∨ ¬l′ is a special case of the SCC-based reduction.
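A sketch of the SCC-based equivalence detection on the implication graph, using networkx for the SCC computation; the graph construction follows the definition above, and the function names are mine:

```python
import networkx as nx

def implication_graph(binary_clauses):
    """Nodes: literals; for a clause (l, l2) add edges ¬l -> l2 and ¬l2 -> l."""
    g = nx.DiGraph()
    for l, l2 in binary_clauses:
        g.add_edge(-l, l2)
        g.add_edge(-l2, l)
    return g

def equivalent_literal_classes(binary_clauses):
    """Literals in one strongly connected component are equivalent."""
    g = implication_graph(binary_clauses)
    return [scc for scc in nx.strongly_connected_components(g) if len(scc) > 1]

# x1 ⇔ x2 encoded as (¬x1 ∨ x2) ∧ (x1 ∨ ¬x2):
print(equivalent_literal_classes([(-1, 2), (1, -2)]))   # [{1, 2}, {-1, -2}] (order may vary)
```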
Preprocessing with Hyperresolution: HypRe [Bacchus & Winter, 2004]
Binary hyperresolution rule: from l ∨ l_1 ∨ ··· ∨ l_n and ¬l_1 ∨ l′, ..., ¬l_n ∨ l′, infer l ∨ l′.
Bacchus and Winter show that closure under hyperresolution coincides with the closure under the following: for all literals l and l′, if F ∪ {l} ⊢_UP l′ then F := F ∪ {¬l ∨ l′}.
One naive application of the above takes cubic time. For efficient implementation it is essential to reduce redundant computation ⇒ the HypRe preprocessor.
Preprocessing with Resolution: NiVER
Non-increasing Variable Elimination Resolution [Subbarayan & Pradhan, 2005]
With unrestricted resolution (the original Davis-Putnam procedure) clause sets often grow exponentially. Idea: eliminate a variable only if the clause set does not grow.
NiVER:
1. Choose a variable a. Let F_a = {C ∈ F | a is one disjunct of C} and F_¬a = {C ∈ F | ¬a is one disjunct of C}.
2. OLD = F_a ∪ F_¬a; NEW = {φ_1 ∨ φ_2 | φ_1 ∨ a ∈ F_a, φ_2 ∨ ¬a ∈ F_¬a}.
3. If OLD is bigger than NEW, then F := (F \ OLD) ∪ NEW.
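A sketch of one NiVER elimination step. Comparing total literal counts to decide "bigger" follows the paper's spirit but is my assumption here; the slide does not fix the measure:

```python
def niver_step(clauses, a):
    """Try to eliminate variable a by resolution if the clause set does not grow.
    Returns (new_clauses, eliminated?)."""
    clauses = {frozenset(c) for c in clauses}
    pos = {c for c in clauses if a in c}
    neg = {c for c in clauses if -a in c}
    old = pos | neg
    new = {(c1 - {a}) | (c2 - {-a}) for c1 in pos for c2 in neg}
    new = {c for c in new if not any(-l in c for l in c)}   # drop tautologies
    if sum(map(len, new)) <= sum(map(len, old)):            # no growth: eliminate a
        return (clauses - old) | new, True
    return clauses, False

print(niver_step([{1, 2}, {-1, 3}, {-1, -2}], 1))   # ({frozenset({2, 3})}, True)
```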
Algorithms for SAT Solving: Stochastic Methods
Stochastic Methods: Motivation
Searching for satisfying assignments in a less hierarchic manner provides an alternative approach to SAT.
Stochastic Local Search (SLS) algorithms:
GSAT
Random Walk: WalkSAT, AdaptNovelty+, G2WSAT
Clause Weighting: SAPS, PAWS, DDFW, DDFW+
Other approach: Survey Propagation (SP) [Mezard et al., 2002]
Local Optima and Global Optimum in SLS
Flipping Coins: The "Greedy" Algorithm
This algorithm is due to Koutsoupias and Papadimitriou. Main idea: flip variables until you can no longer increase the number of satisfied clauses.
Procedure greedy(F)
1: T = random(F) // random assignment
2: repeat
3:   flip any variable in T that increases the number of satisfied clauses
4: until no improvement possible
The GSAT Procedure
The procedure is due to Selman et al. [1992]. It adds restarts to the simple "greedy" algorithm, and also allows sideways flips.
Procedure GSAT(F, MAX_TRIES, MAX_FLIPS)
1: for i = 1 to MAX_TRIES do
2:   T = random(F) // random assignment
3:   for j = 1 to MAX_FLIPS do
4:     if T satisfies F then return T
5:     flip the variable in T that results in the greatest increase in the number of satisfied clauses
6: return "No satisfying assignment found"
The WalkSAT Procedure
The procedure is due to Selman et al. [1994].
Procedure WalkSAT(F, MAX_TRIES, MAX_FLIPS, VSH)
1: for i = 1 to MAX_TRIES do
2:   T = random(F) // random assignment
3:   for j = 1 to MAX_FLIPS do
4:     if T satisfies F then return T
5:     choose a clause C ∈ F unsatisfied under T at random
6:     choose a variable x ∈ C according to VSH
7:     flip variable x in T
8: return "No satisfying assignment found"
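A runnable sketch of WalkSAT with the classic noise-based variable selection standing in for VSH (with probability p pick a random variable of the clause, otherwise a best one); this selection detail is a common variant, not spelled out on the slide:

```python
import random

def walksat(clauses, n_vars, max_tries=10, max_flips=10_000, p=0.5):
    clauses = [list(c) for c in clauses]
    for _ in range(max_tries):
        assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        sat = lambda c: any(assign[abs(l)] == (l > 0) for l in c)
        for _ in range(max_flips):
            unsat = [c for c in clauses if not sat(c)]
            if not unsat:
                return assign                      # T satisfies F
            c = random.choice(unsat)               # random unsatisfied clause
            if random.random() < p:                # noise step: random variable in C
                x = abs(random.choice(c))
            else:                                  # greedy step: best variable in C
                def cost(v):                       # unsatisfied clauses after flipping v
                    assign[v] = not assign[v]
                    u = sum(not sat(cl) for cl in clauses)
                    assign[v] = not assign[v]
                    return u
                x = min((abs(l) for l in c), key=cost)
            assign[x] = not assign[x]              # flip
    return None                                    # no satisfying assignment found

print(walksat([{1, 2}, {-1, 2}, {1, -2}], n_vars=2))   # {1: True, 2: True}
```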
Noise Setting
WalkSAT variants depend on the setting of their noise parameter. The noise setting controls the degree of greediness in the variable selection process; it takes a value between zero and one.
Novelty(p) [McAllester et al., 1997]: if the best variable is not the most recently flipped one in C, then pick it. Otherwise, with probability p pick the second-best variable, and with probability 1−p pick the best one.
Novelty+(p, wp) [Hoos, 1999]: with probability wp pick a random variable from C, and with probability 1−wp do Novelty(p).
AdaptNovelty+ [Hoos, 2002]
AdaptNovelty+ automatically adjusts the noise level based on the detection of stagnation. Novelty+ is used with the noise parameter wp set to 0 at the beginning of a run.
If there is no improvement in the objective function value during θ · m search steps, increase wp: wp := wp + (1 − wp) · φ.
If the objective function value improves, decrease wp: wp := wp − wp · φ/2.
Default values: θ = 1/6 and φ = 0.2.
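The two update rules as a small helper; here m would be the number of clauses and steps_since_improvement the stagnation counter, with the surrounding bookkeeping assumed:

```python
def adapt_noise(wp, improved, steps_since_improvement, m, theta=1/6, phi=0.2):
    """AdaptNovelty+ noise update: raise wp on stagnation, lower it on improvement."""
    if improved:
        return wp - wp * phi / 2
    if steps_since_improvement >= theta * m:
        return wp + (1 - wp) * phi
    return wp
```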
G2WSAT and AdaptG2WSAT
G2WSAT [Li & Huang, 2005]: exploits promising decreasing variables and thus diminishes the dependence on noise settings.
AdaptG2WSAT [Li et al., 2007]: integrates the adaptive noise mechanism of AdaptNovelty+ into G2WSAT.
These are the current best variants of the Random Walk approach.
Dynamic Local Search: The Basic Idea
Use a clause weighting mechanism:
Increase weights on unsatisfied clauses in local minima, in such a way that further improvement steps become possible (see the sketch below).
Adjust weights periodically when no further improvement steps are available in the local neighborhood.
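A generic sketch of the core weighting move shared by these schemes, not any one published algorithm: flips are scored by total weight of satisfied clauses, and the weights of unsatisfied clauses are bumped when the search is stuck in a local minimum:

```python
def escape_local_minimum(clauses, weights, assign, is_satisfied):
    """In a local minimum, add weight to every unsatisfied clause so that a
    previously flat or worsening flip can become an improving one."""
    for i, c in enumerate(clauses):
        if not is_satisfied(c, assign):
            weights[i] += 1   # additive bump (PAWS-style); SAPS scales multiplicatively
    return weights
```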
Dynamic Local Search: Brief History
Breakout Method [Morris, 1993]
Weighted GSAT [Selman et al., 1993]
Learning short-term clause weights for GSAT [Frank, 1997]
Discrete Lagrangian Method (DLM) [Wah & Shang, 1998]
Smoothed Descent and Flood [Schuurmans & Southey, 2000]
Scaling and Probabilistic Smoothing (SAPS) [Hutter et al., 2002]
Pure Additive Weighting Scheme (PAWS) [Thornton et al., 2004]
Divide and Distribute Fixed Weight (DDFW) [Ishtaiwi et al., 2005]
Adaptive DDFW (DDFW+) [Ishtaiwi et al., 2006]