

Introduction to Artificial Intelligence
V22.0472-001 Fall 2009
Lecture 5: Constraint Satisfaction Problems II
Rob Fergus – Dept of Computer Science, Courant Institute, NYU
Many slides from Dan Klein, Stuart Russell or Andrew Moore

Announcements
• Assignment due on Monday 11.59pm
• Email search.py and searchAgent.py to me
• Next week’s classes taught by Prof. Geiger

Today
• Efficient solution of CSPs

Reminder: CSPs
• Variables
• Domains
• Constraints
  • Implicit (provide code to compute)
  • Explicit (provide a subset of the possible tuples)
  • Unary constraints
  • Binary constraints
  • N-ary constraints

Example: N-Queens
• Formulation 2:
  • Variables:
  • Domains:
  • Constraints:

Example: Map-Coloring
• Variables:
• Domains:
• Constraints: adjacent regions must have different colors
• Solutions are assignments satisfying all constraints, e.g.:
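The map-coloring CSP above can be written down concretely. A minimal Python sketch (the variable names and the `satisfies` helper are illustrative, not part of the assignment code):

```python
# Map-coloring CSP for Australia: variables are regions, domains are colors,
# and each binary constraint says two adjacent regions must get different colors.
variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
domains = {v: ["red", "green", "blue"] for v in variables}

# Explicit binary constraints, given as pairs of adjacent regions.
adjacent = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
            ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V")]

def satisfies(assignment):
    """A (possibly partial) assignment is consistent iff every pair of
    assigned adjacent regions has different colors."""
    return all(assignment[a] != assignment[b]
               for a, b in adjacent if a in assignment and b in assignment)

# One satisfying assignment; Tasmania (T) is unconstrained.
solution = {"WA": "red", "NT": "green", "SA": "blue", "Q": "red",
            "NSW": "green", "V": "red", "T": "green"}
print(satisfies(solution))  # True
```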

Search Overview: Backtracking Search
• Basic solution: DFS / backtracking
  • Add a new assignment
  • Filter by checking for immediate violations
• Ordering:
  • Heuristics to choose variable order (MRV)
  • Heuristics to choose value order (LCV)
• Filtering:
  • Pre-filter unassigned domains after every assignment
  • Forward checking: remove values which immediately conflict with current assignments (makes MRV easy!)
  • Arc consistency: propagate indirect consequences of assignments
• Backtracking = DFS + variable ordering + fail-on-violation
• What are the choice points?

Improving Backtracking
• General-purpose ideas give huge gains in speed
• Ordering:
  • Which variable should be assigned next?
  • In what order should its values be tried?
• Filtering: Can we detect inevitable failure early?
• Structure: Can we exploit the problem structure?

Ordering: Minimum Remaining Values
• Minimum remaining values (MRV): choose the variable with the fewest legal values
• Also called “most constrained variable”
• “Fail-fast” ordering
• Why min rather than max?

Ordering: Degree Heuristic
• Tie-breaker among MRV variables
• Degree heuristic: choose the variable participating in the most constraints on remaining variables
• Why most rather than fewest constraints?
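The scheme above (DFS + fail-on-violation, with MRV variable ordering) can be sketched in Python. A minimal illustration under assumed names, not the assignment’s search.py API:

```python
def select_mrv(assignment, domains):
    """Minimum remaining values: pick the unassigned variable with the
    fewest legal values left ("most constrained variable")."""
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))

def backtrack(assignment, domains, constraints):
    """DFS + fail-on-violation: extend the assignment one variable at a
    time, undoing the choice whenever a constraint is violated."""
    if len(assignment) == len(domains):
        return assignment  # complete, consistent assignment
    var = select_mrv(assignment, domains)
    for value in domains[var]:
        assignment[var] = value
        if all(check(assignment) for check in constraints):  # immediate violations only
            result = backtrack(assignment, domains, constraints)
            if result is not None:
                return result
        del assignment[var]  # fail: undo and try the next value
    return None  # no value worked: backtrack in the caller

# Illustrative: 2-color a path A - B - C.
def differ(x, y):
    # The constraint holds vacuously until both variables are assigned.
    return lambda a: x not in a or y not in a or a[x] != a[y]

constraints = [differ("A", "B"), differ("B", "C")]
print(backtrack({}, {"A": ["r", "g"], "B": ["r", "g"], "C": ["r", "g"]}, constraints))
```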

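The degree heuristic above can be sketched the same way. A hypothetical `select_degree` helper (names illustrative), shown on the Australia map where SA touches the most regions:

```python
def select_degree(assignment, domains, neighbors):
    """Degree heuristic: among unassigned variables, pick the one involved
    in the most constraints on other unassigned variables (used as a
    tie-breaker among MRV variables)."""
    unassigned = [v for v in domains if v not in assignment]
    return max(unassigned,
               key=lambda v: sum(1 for n in neighbors.get(v, []) if n not in assignment))

# Illustrative: SA constrains five unassigned regions, so it is chosen first.
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
             "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
             "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": []}
domains = {v: ["r", "g", "b"] for v in neighbors}
print(select_degree({}, domains, neighbors))  # SA
```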
Ordering: Least Constraining Value
• Given a choice of variable, choose the least constraining value
  • The one that rules out the fewest values in the remaining variables
  • Note that it may take some computation to determine this!
• Why least rather than most?
• Combining these heuristics makes 1000 queens feasible

Filtering: Forward Checking
• Idea: keep track of remaining legal values for unassigned variables (using immediate constraints)
• Idea: terminate when any variable has no legal values

Filtering: Constraint Propagation
• Forward checking propagates information from assigned to unassigned variables, but doesn’t provide early detection for all failures:
  • NT and SA cannot both be blue!
  • Why didn’t we detect this yet?
• Constraint propagation propagates from constraint to constraint

Consistency of an Arc
• An arc X → Y is consistent iff for every x in the tail there is some y in the head which could be assigned without violating a constraint
• If not, delete from the tail!
• Forward checking = enforcing consistency of each arc pointing to the new assignment
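The least-constraining-value ordering above can be sketched as a value sort. A hypothetical `order_lcv` helper (names and the `conflicts` predicate are illustrative):

```python
def order_lcv(var, assignment, domains, neighbors, conflicts):
    """Least constraining value: sort var's values so that those ruling out
    the fewest values in unassigned neighboring variables are tried first.
    Note this takes extra computation, as the slide warns."""
    def ruled_out(value):
        return sum(1
                   for n in neighbors.get(var, []) if n not in assignment
                   for w in domains[n] if conflicts(value, w))
    return sorted(domains[var], key=ruled_out)

# Illustrative inequality constraints: "b" conflicts with no neighbor value,
# so it rules out nothing and comes first.
neighbors = {"A": ["B", "C"]}
domains = {"A": ["r", "g", "b"], "B": ["r"], "C": ["r", "g"]}
print(order_lcv("A", {}, domains, neighbors, lambda u, w: u == w))  # ['b', 'g', 'r']
```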

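The forward-checking idea above (prune neighbors’ domains after each assignment, terminate when a domain empties) can be sketched as follows; a minimal illustration with assumed names:

```python
def forward_check(var, value, domains, neighbors):
    """After assigning var=value, remove that value from each neighbor's
    remaining legal values. Returns the pruned domains, or None if some
    neighbor is left with no legal values (early failure detection)."""
    new_domains = {v: list(d) for v, d in domains.items()}  # don't mutate input
    new_domains[var] = [value]
    for n in neighbors.get(var, []):
        if value in new_domains[n]:
            new_domains[n].remove(value)
            if not new_domains[n]:
                return None  # dead end: terminate this branch immediately
    return new_domains

# Illustrative: assigning WA=red prunes red from its neighbors NT and SA.
neighbors = {"WA": ["NT", "SA"]}
domains = {"WA": ["red", "green", "blue"],
           "NT": ["red", "green", "blue"],
           "SA": ["red", "green", "blue"]}
pruned = forward_check("WA", "red", domains, neighbors)
print(pruned["NT"])  # ['green', 'blue']
```

Because pruning keeps each unassigned variable’s legal-value count up to date, MRV becomes a cheap lookup, as the earlier slide notes.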
Arc Consistency of a CSP
• A simple form of propagation makes sure all arcs are consistent:
  • Delete from the tail!
  • If X loses a value, neighbors of X need to be rechecked!
• Arc consistency detects failure earlier than forward checking
• Can be run as a preprocessor or after each assignment
• Runtime: O(n^2 d^3), can be reduced to O(n^2 d^2)
• … but detecting all possible future problems is NP-hard – why?
• What’s the downside of enforcing arc consistency?

Limitations of Arc Consistency
• After running arc consistency:
  • Can have one solution left
  • Can have multiple solutions left
  • Can have no solutions left (and not know it)
• What went wrong here?

K-Consistency
• Increasing degrees of consistency:
  • 1-consistency (node consistency): each single node’s domain has a value which meets that node’s unary constraints
  • 2-consistency (arc consistency): for each pair of nodes, any consistent assignment to one can be extended to the other
  • K-consistency: for each k nodes, any consistent assignment to k−1 can be extended to the kth node
• Higher k is more expensive to compute
• (You need to know the k = 2 algorithm)

Strong K-Consistency
• Strong k-consistency: also k−1, k−2, …, 1 consistent
• Claim: strong n-consistency means we can solve without backtracking! Why?
  • Choose any assignment to any variable
  • Choose a new variable
  • By 2-consistency, there is a choice consistent with the first
  • Choose a new variable
  • By 3-consistency, there is a choice consistent with the first 2
  • …
• Lots of middle ground between arc consistency and n-consistency! (e.g. path consistency)
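The propagation loop above is the classic AC-3 scheme: revise each arc by deleting from the tail, and re-queue arcs into any variable that loses a value. A simple sketch of that idea (names illustrative, simple O(n^2 d^3) version):

```python
from collections import deque

def revise(domains, x, y, allowed):
    """Make the arc x -> y consistent: delete from the tail x any value
    that has no compatible value in the head y."""
    removed = False
    for vx in list(domains[x]):
        if not any(allowed(vx, vy) for vy in domains[y]):
            domains[x].remove(vx)
            removed = True
    return removed

def ac3(domains, arcs, allowed):
    """Enforce consistency of every arc. If x loses a value, arcs pointing
    into x are rechecked. Returns False if some domain is wiped out."""
    incoming = {}
    for a, b in arcs:
        incoming.setdefault(b, []).append((a, b))
    queue = deque(arcs)
    while queue:
        x, y = queue.popleft()
        if revise(domains, x, y, allowed):
            if not domains[x]:
                return False  # failure detected earlier than forward checking alone
            queue.extend(incoming.get(x, []))  # recheck arcs into x
    return True

# Illustrative: WA already assigned red in the WA-NT-SA triangle;
# propagation removes red from both neighbors.
domains = {"WA": ["red"], "NT": ["red", "green", "blue"], "SA": ["red", "green", "blue"]}
pairs = [("WA", "NT"), ("WA", "SA"), ("NT", "SA")]
arcs = [(x, y) for a, b in pairs for x, y in [(a, b), (b, a)]]
ok = ac3(domains, arcs, lambda u, v: u != v)
print(ok, domains["NT"], domains["SA"])
```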

Problem Structure
• Tasmania and mainland are independent subproblems
  • Identifiable as connected components of the constraint graph
• Suppose each subproblem has c variables out of n total
• Worst-case solution cost is O((n/c)(d^c)), linear in n
• Compare to general CSPs, where worst-case time is O(d^n)
• E.g. n = 80, d = 2, c = 20:
  • 2^80 = 4 billion years at 10 million nodes/sec
  • (4)(2^20) = 0.4 seconds at 10 million nodes/sec

Tree-Structured CSPs
• Theorem: if the constraint graph has no loops, the CSP can be solved in O(n d^2) time
• This property also applies to probabilistic reasoning (later): an important example of the relation between syntactic restrictions and the complexity of reasoning

Tree-Structured CSPs
• Choose a variable as root; order variables from root to leaves such that every node’s parent precedes it in the ordering
• For i = n : 2, apply RemoveInconsistent(Parent(X_i), X_i)
• For i = 1 : n, assign X_i consistently with Parent(X_i)
• Runtime: O(n d^2) (why?)

Tree-Structured CSPs
• Why does this work?
  • Claim: after each node is processed leftward, all nodes to the right can be assigned in any way consistent with their parent
  • Proof: induction on position
• Why doesn’t this algorithm work with loops?
• Note: we’ll see this basic idea again with Bayes’ nets
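The two-pass algorithm on the slide (a backward RemoveInconsistent pass from leaves to root, then a forward assignment pass that never backtracks) can be sketched as follows; function and variable names are illustrative:

```python
def solve_tree_csp(order, parent, domains, allowed):
    """Tree-structured CSP in O(n d^2). `order` must list every node after
    its parent. The backward pass makes each arc Parent(X_i) -> X_i
    consistent; the forward pass then assigns greedily without backtracking."""
    # Backward pass (i = n .. 2): delete parent values that have no
    # consistent child value (RemoveInconsistent deletes from the tail).
    for x in reversed(order[1:]):
        p = parent[x]
        domains[p] = [vp for vp in domains[p]
                      if any(allowed(vp, vx) for vx in domains[x])]
        if not domains[p]:
            return None  # some domain wiped out: no solution
    # Forward pass (i = 1 .. n): any value consistent with the parent works.
    assignment = {order[0]: domains[order[0]][0]}
    for x in order[1:]:
        assignment[x] = next(vx for vx in domains[x]
                             if allowed(assignment[parent[x]], vx))
    return assignment

# Illustrative: 2-color a path A - B - C, rooted at A.
order = ["A", "B", "C"]
parent = {"B": "A", "C": "B"}
domains = {v: ["r", "g"] for v in order}
print(solve_tree_csp(order, parent, domains, lambda u, v: u != v))
```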
Nearly Tree-Structured CSPs
• Conditioning: instantiate a variable, prune its neighbors’ domains
• Cutset conditioning: instantiate (in all ways) a set of variables such that the remaining constraint graph is a tree
• Cutset size c gives runtime O((d^c)(n − c)d^2), very fast for small c

Tree Decompositions
• Create a tree-structured graph of overlapping subproblems; each is a mega-variable
• Solve each subproblem to enforce local constraints
• Solve the CSP over subproblem mega-variables using our efficient tree-structured CSP algorithm
• Adjacent mega-variables must agree on shared variables, e.g. mega-variables M1 … M4 over overlapping subsets of the map regions, with M1 ∈ {(WA=r, SA=g, NT=b), (WA=b, SA=r, NT=g), (WA=g, SA=g, NT=g), …}, M2 ∈ {(NT=r, SA=g, Q=b), (NT=b, SA=g, Q=r), (NT=g, SA=g, Q=g), …}, and Agree: (M1, M2) ∈ {((WA=g, SA=g, NT=g), (NT=g, SA=g, Q=g)), …}
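Cutset conditioning above can be sketched as an outer loop over all d^c cutset assignments, each reducing the rest to a tree-structured CSP. A minimal illustration (the helper names and the tiny `solve_rest` stub are hypothetical; a real version would call the tree-CSP algorithm on the remaining graph):

```python
from itertools import product

def cutset_condition(cutset, cutset_domains, solve_rest, consistent):
    """Try every assignment to the cutset variables (d^c combinations);
    for each consistent one, the remaining constraint graph is a tree, so
    solve_rest handles it in O((n - c) d^2). Total: O((d^c)(n - c)d^2)."""
    for values in product(*(cutset_domains[v] for v in cutset)):
        partial = dict(zip(cutset, values))
        if not consistent(partial):
            continue  # prune cutset assignments that already violate a constraint
        rest = solve_rest(partial)  # tree-structured CSP over the rest
        if rest is not None:
            return {**partial, **rest}
    return None

# Illustrative: 3-color a triangle A-B-C; cutting {A} leaves the edge B-C, a tree.
def solve_rest(partial):
    # Stub standing in for the tree-CSP algorithm on this tiny remainder.
    for b in "rgb":
        for c in "rgb":
            if b != c and b != partial["A"] and c != partial["A"]:
                return {"B": b, "C": c}
    return None

sol = cutset_condition(["A"], {"A": list("rgb")}, solve_rest, lambda p: True)
print(sol)
```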
