CSE 473: Artificial Intelligence
Constraint Satisfaction
Daniel Weld (4/9/2012)
Slides adapted from Dan Klein, Stuart Russell, Andrew Moore & Luke Zettlemoyer

Space of Search Strategies
- Blind Search: DFS, BFS, IDS
- Informed Search
  - Systematic: uniform cost, greedy, A*, IDA*
  - Stochastic: hill climbing w/ random walk & restarts
- Constraint Satisfaction
  - Backtracking = DFS, FC, k-consistency
- Adversary Search

Recap: Search Problem
- States: configurations of the world
- Successor function: function from states to lists of (state, action, cost) triples
- Start state
- Goal test

Recap: Constraint Satisfaction
- Kind of search in which:
  - States are factored into sets of variables
  - Search = assigning values to these variables
  - Goal test is encoded with constraints
- Gives structure to the search space
  - Exploration of one part informs others
- Special techniques add speed
  - Propagation
  - Variable ordering
  - Preprocessing

Constraint Satisfaction Problems
- Subset of search problems
- State is defined by variables X_i with values from a domain D (often D depends on i)
- Goal test is a set of constraints
- (A small code sketch of this representation follows below)

Real-World CSPs
- Assignment problems: e.g., who teaches what class
- Timetabling problems: e.g., which class is offered when and where?
- Hardware configuration
- Gate assignment in airports
- Transportation scheduling
- Factory scheduling
- Fault diagnosis
- … lots more!
- Many real-world problems involve real-valued variables…
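
To make the variables/domains/constraints view concrete, here is a minimal Python sketch (not from the slides; all names such as VARIABLES, DOMAINS, NEIGHBORS, and consistent are hypothetical) encoding the Australia map-coloring CSP that appears later in the deck. The later sketches in this document reuse these definitions.

```python
# Hypothetical encoding of the map-coloring CSP: variables, domains, constraints.

VARIABLES = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]

# Here every variable shares one domain; in general the domain may depend on X_i.
DOMAINS = {var: {"red", "green", "blue"} for var in VARIABLES}

# Binary constraints: adjacent regions must receive different colors.
NEIGHBORS = {
    "WA":  ["NT", "SA"],
    "NT":  ["WA", "SA", "Q"],
    "SA":  ["WA", "NT", "Q", "NSW", "V"],
    "Q":   ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"],
    "V":   ["SA", "NSW"],
    "T":   [],  # Tasmania has no constraints: an independent subproblem
}

def consistent(var, value, assignment):
    """True iff var=value does not conflict with any already-assigned neighbor."""
    return all(assignment.get(n) != value for n in NEIGHBORS[var])
```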

Factoring States
- Model a state's (independent) parts, e.g. suppose k people…

Chinese Food, Family Style
- Suppose every meal for n people has n dishes plus soup
- Variables & domains?
  - Soup =
  - Meal 1 =
  - Meal 2 =
  - …
  - Meal n =
- Constraints?

Chinese Constraint Network
- (Figure: a constraint network whose nodes include Soup, Appetizer, Dish, Vegetable, Seafood, and Rice; constraint labels include "Must be Hot & Sour", "No Peanuts", "Not Chow Mein", "Not Both Spicy", and "Total Cost < $40")

Crossword Puzzle
- Variables & domains?
- Constraints?

Standard Search Formulation
- States are defined by the values assigned so far
- Initial state: the empty assignment, {}
- Successor function: assign a value to an unassigned variable
- Goal test: the current assignment is complete & satisfies all constraints
- (A minimal code sketch of this formulation follows below)

Backtracking Example
- (Figure)
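
A minimal sketch of the standard search formulation over partial assignments, reusing the hypothetical map-coloring encoding from earlier. The naive successor function below branches on every unassigned variable, which is exactly what the next slides tighten up.

```python
# Uses the hypothetical VARIABLES, DOMAINS, NEIGHBORS, consistent defined above.

def is_goal(assignment):
    """Goal test: the assignment is complete and satisfies all constraints."""
    return len(assignment) == len(VARIABLES) and all(
        consistent(var, val, assignment) for var, val in assignment.items())

def successors(assignment):
    """Successor function: extend the partial assignment by one variable.
    Naive version: considers every unassigned variable, not just one."""
    for var in VARIABLES:
        if var not in assignment:
            for value in DOMAINS[var]:
                yield {**assignment, var: value}

# Initial state: the empty assignment, {}
```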

Backtracking Search
- Note 1: Only consider a single variable at each point
  - Variable assignments are commutative, so fix the ordering of variables
  - I.e., [WA = red then NT = blue] is the same as [NT = blue then WA = red]
- What is the branching factor of this search?

Backtracking Search
- Note 2: Only allow legal assignments at each point
  - I.e., ignore values which conflict with previous assignments
  - Might need some computation to eliminate such conflicts
  - "Incremental goal test"

Backtracking Search
- Depth-first search for CSPs with these two ideas
  - One variable at a time, fixed order
  - Only trying consistent assignments
  is called "Backtracking Search"
- Basic uninformed algorithm for CSPs (a code sketch follows below)
- Can solve n-queens for n ≈ 25
- What are the choice points?

Improving Backtracking
- General-purpose ideas give huge gains in speed
- Ordering:
  - Which variable should be assigned next?
  - In what order should its values be tried?
- Filtering: Can we detect inevitable failure early?
- Structure: Can we exploit the problem structure?

Forward Checking
- (Figure: Australia map-coloring constraint graph with WA, NT, SA, Q, NSW, V)
- Idea: Keep track of remaining legal values for unassigned variables (using immediate constraints)
- Idea: Terminate when any variable has no legal values
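
Putting the two notes together gives plain backtracking search. A minimal sketch under the same hypothetical map-coloring encoding: fixed variable order, only consistent values tried, no filtering or ordering heuristics yet.

```python
# Uses the hypothetical VARIABLES, DOMAINS, consistent defined above.

def backtracking_search(assignment=None):
    """Depth-first search over partial assignments: one variable at a time,
    in a fixed order, trying only values consistent with the assignment so far."""
    assignment = assignment or {}
    if len(assignment) == len(VARIABLES):
        return assignment                                     # complete and consistent
    var = next(v for v in VARIABLES if v not in assignment)   # fixed variable order
    for value in DOMAINS[var]:
        if consistent(var, value, assignment):                # incremental goal test
            result = backtracking_search({**assignment, var: value})
            if result is not None:
                return result
    return None                                               # dead end: backtrack
```

The choice points are which value to try for the current variable; the ordering and filtering slides that follow refine both this and the choice of variable.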

Forward Checking (4-queens walkthrough)
- (Figures: a 4x4 board with one queen per column, Q A through Q D, and a table of the remaining legal rows 1-4 for each queen)
- Place Q A in row 1, then prune the values that are now inconsistent from the other queens' domains
- Where can Q B go? Place it (row 3) and prune again
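
A minimal sketch of forward checking layered on the backtracking skeleton above, still using the hypothetical map-coloring encoding: after each assignment, conflicting values are pruned from neighboring domains, and the search fails as soon as any domain becomes empty.

```python
import copy

# Uses the hypothetical VARIABLES, DOMAINS, NEIGHBORS defined above.

def forward_check(var, value, domains):
    """Prune values inconsistent with var=value from neighboring domains.
    Returns the pruned copy, or None if some domain is wiped out."""
    domains = copy.deepcopy(domains)
    domains[var] = {value}
    for n in NEIGHBORS[var]:
        domains[n].discard(value)   # a neighbor may not reuse this color
        if not domains[n]:
            return None             # a variable has no legal values: terminate
    return domains

def backtracking_with_fc(assignment=None, domains=None):
    assignment = assignment or {}
    domains = DOMAINS if domains is None else domains
    if len(assignment) == len(VARIABLES):
        return assignment
    var = next(v for v in VARIABLES if v not in assignment)
    for value in domains[var]:      # already pruned, so consistent with assignment
        pruned = forward_check(var, value, domains)
        if pruned is not None:
            result = backtracking_with_fc({**assignment, var: value}, pruned)
            if result is not None:
                return result
    return None
```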

Forward Checking Cuts the Search Space
- (Figure: search-tree sizes with and without forward checking; the full tree levels grow as 4, 16, 64, 256)

Are We Done?

Constraint Propagation
- (Figure: Australia map coloring with WA and Q assigned)
- Forward checking propagates information from assigned to adjacent unassigned variables, but doesn't detect more distant failures:
  - NT and SA cannot both be blue!
  - Why didn't we detect this yet?
- Constraint propagation repeatedly enforces constraints (locally)

Arc Consistency
- Simplest form of propagation makes each arc consistent
- X → Y is consistent iff for every value x there is some allowed y
- If X loses a value, neighbors of X need to be rechecked!
- Arc consistency detects failure earlier than forward checking
- What's the downside of arc consistency?
- Can be run as a preprocessor or after each assignment (a code sketch follows below)

Arc Consistency (continued)
- (Figure) What went wrong here?
- [demo: arc consistency animation]

Limitations of Arc Consistency
- After running arc consistency:
  - Can have one solution left
  - Can have multiple solutions left
  - Can have no solutions left (and not know it)
- Runtime: O(n^2 d^3), can be reduced to O(n^2 d^2)
- … but detecting all possible future problems is NP-hard – why?
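
The k = 2 algorithm referenced on the next page is commonly presented as AC-3. A minimal sketch against the hypothetical map-coloring encoding from earlier; it returns False when enforcing arc consistency wipes out some domain.

```python
from collections import deque

# Uses the hypothetical VARIABLES, NEIGHBORS defined above.
# Call with a copy of the domains, e.g. ac3(copy.deepcopy(DOMAINS)).

def revise(domains, x, y):
    """Make the arc x -> y consistent: remove values of x that have no
    allowed partner in y (here the constraint is simply x != y)."""
    removed = False
    for vx in set(domains[x]):
        if not any(vy != vx for vy in domains[y]):
            domains[x].discard(vx)
            removed = True
    return removed

def ac3(domains):
    """Enforce arc consistency on all arcs; False means some domain emptied."""
    queue = deque((x, y) for x in VARIABLES for y in NEIGHBORS[x])
    while queue:
        x, y = queue.popleft()
        if revise(domains, x, y):
            if not domains[x]:
                return False
            # x lost a value, so every arc pointing into x must be rechecked
            queue.extend((z, x) for z in NEIGHBORS[x] if z != y)
    return True
```

As the slide notes, this can be run once as a preprocessor or re-run after each assignment during search.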

K-Consistency*
- Increasing degrees of consistency:
  - 1-Consistency (Node Consistency): each single node's domain has a value which meets that node's unary constraints
  - 2-Consistency (Arc Consistency): for each pair of nodes, any consistent assignment to one can be extended to the other
  - K-Consistency: for each k nodes, any consistent assignment to k-1 of them can be extended to the k-th node
- Higher k is more expensive to compute
- (You need to know the k = 2 algorithm)

Ordering: Minimum Remaining Values
- Minimum remaining values (MRV): choose the variable with the fewest legal values
- Why min rather than max?
- Also called "most constrained variable"
- "Fail-fast" ordering

Ordering: Degree Heuristic
- Tie-breaker among MRV variables
- Degree heuristic: choose the variable participating in the most constraints on remaining variables
- Why most rather than fewest constraints?

Ordering: Least Constraining Value
- Given a choice of variable, choose the least constraining value
  - The one that rules out the fewest values in the remaining variables
  - Note that it may take some computation to determine this!
- Why least rather than most?
- Combining these heuristics makes 1000 queens feasible
- (A code sketch of these ordering heuristics follows below)

Problem Structure
- Tasmania and the mainland are independent subproblems
  - Identifiable as connected components of the constraint graph
- Suppose each subproblem has c variables out of n total
- Worst-case solution cost is O((n/c) d^c), linear in n
- E.g., n = 80, d = 2, c = 20
  - 2^80 ≈ 4 billion years at 10 million nodes/sec
  - (4)(2^20) ≈ 0.4 seconds at 10 million nodes/sec

Tree-Structured CSPs
- Choose a variable as root, and order variables from root to leaves such that every node's parent precedes it in the ordering
- For i = n down to 2, apply RemoveInconsistent(Parent(X_i), X_i)
- For i = 1 to n, assign X_i consistently with Parent(X_i)
- Runtime: O(n d^2)
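
A minimal sketch of the MRV, degree, and least-constraining-value heuristics discussed above, again written against the hypothetical map-coloring encoding; these would replace the fixed variable order and arbitrary value order in the earlier backtracking sketches.

```python
# Uses the hypothetical VARIABLES, DOMAINS, NEIGHBORS defined above.

def select_variable(assignment, domains):
    """MRV with degree-heuristic tie-break: fewest remaining legal values first,
    ties broken by most constraints on unassigned variables."""
    unassigned = [v for v in VARIABLES if v not in assignment]
    return min(unassigned, key=lambda v: (
        len(domains[v]),
        -sum(1 for n in NEIGHBORS[v] if n not in assignment)))

def order_values(var, assignment, domains):
    """Least constraining value: prefer values that rule out the fewest
    options in neighboring unassigned variables."""
    def ruled_out(value):
        return sum(value in domains[n]
                   for n in NEIGHBORS[var] if n not in assignment)
    return sorted(domains[var], key=ruled_out)
```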
