CE 473: Artificial Intelligence, Spring 2017 - A* Search (3/29/17)


  1. CE 473: Artificial Intelligence, Spring 2017 (3/29/17)
     A* Search
     Dieter Fox
     Based on slides from Pieter Abbeel & Dan Klein; multiple slides from Stuart Russell, Andrew Moore, Luke Zettlemoyer

     Today
     § A* Search
     § Heuristic Design
     § Graph search

     Example: Pancake Problem
     § Action: flip over the top n pancakes
     § Cost: number of pancakes flipped
     [Diagram: state space graph with flip costs as edge weights]

     General Tree Search
     § Path to reach goal: flip four, then flip three
     § Total cost: 4 + 3 = 7
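
     The slide only defines the action and its cost, so here is a minimal Python sketch of a successor function under an assumed representation (states as tuples of pancake sizes, top pancake first; the function name is illustrative, not from the slides):

     ```python
     def pancake_successors(state):
         """Yield (action, next_state, cost) triples for the pancake problem.

         Action: flip over the top n pancakes (n = 2 .. len(state)).
         Cost:   number of pancakes flipped, as defined on the slide.
         """
         for n in range(2, len(state) + 1):
             flipped = tuple(reversed(state[:n])) + state[n:]
             yield (f"flip top {n}", flipped, n)

     # Example: from (3, 1, 4, 2), "flip top 4" costs 4 and yields (2, 4, 1, 3).
     ```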

  2. What is a Heuristic?
     § An estimate of how close a state is to a goal
     § Designed for a particular search problem
     § Examples: Manhattan distance: 10 + 5 = 15; Euclidean distance: 11.2
     [Diagram: grid world showing the Manhattan and Euclidean distances to the goal]

     Example: Heuristic Function
     § Heuristic for the pancake problem: the largest pancake that is still out of place
     [Diagram: pancake states labeled with their h(x) values]

     Greedy Search (Best First)
     § Strategy: expand the node that seems closest to a goal state
     § Heuristic: estimate of distance to the nearest goal for each state
     § A common case: best-first takes you straight to the (wrong) goal
     § Worst case: like a badly guided DFS
     § What can go wrong? (see the sketch below)
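
     A minimal sketch of greedy (best-first) tree search as described above, assuming successors(state) yields (action, next_state, cost) triples and h(state) returns the heuristic estimate; the function and parameter names are illustrative, not from the slides:

     ```python
     import heapq
     from itertools import count

     def greedy_search(start, is_goal, successors, h):
         tie = count()  # tie-breaker so heapq never has to compare states
         # The fringe is ordered only by h(n): the estimated forward cost.
         fringe = [(h(start), next(tie), start, [])]
         while fringe:
             _, _, state, path = heapq.heappop(fringe)
             if is_goal(state):
                 return path
             for action, nxt, _cost in successors(state):
                 heapq.heappush(fringe, (h(nxt), next(tie), nxt, path + [action]))
         return None
     # Tree search: with no repeated-state check, a misleading h can make this
     # behave like a badly guided DFS, exactly as the slide warns.
     ```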

  3. Combining UCS and Greedy
     § Uniform-cost orders by path cost, or backward cost g(n)
     § Greedy orders by goal proximity, or forward cost h(n)
     § A* Search orders by the sum: f(n) = g(n) + h(n)
     [Diagram: example graph with g and h values at each node; example from Teg Grenager]

     When should A* terminate?
     § Should we stop when we enqueue a goal?
     § No: only stop when we dequeue a goal
     [Diagram: the goal is first enqueued along the worse of two paths]

     Is A* Optimal?
     § What went wrong? The actual cost of the bad goal was less than the estimated cost of the good path
     § We need estimates to be less than or equal to actual costs!
     [Diagram: S, A, G with h(A) = 6 overestimating, so A* returns the worse direct path]

     Admissible Heuristics
     § A heuristic h is admissible (optimistic) if 0 ≤ h(n) ≤ h*(n), where h*(n) is the true cost to a nearest goal
     § Coming up with admissible heuristics is most of what's involved in using A* in practice

     Optimality of A* Tree Search
     Assume:
     § A is an optimal goal node
     § B is a suboptimal goal node
     § h is admissible
     Claim:
     § A will exit the fringe before B
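
     The greedy skeleton above turns into A* tree search by ordering the fringe by f(n) = g(n) + h(n) and, per the slide, terminating only when a goal is dequeued. A sketch under the same assumed interfaces as before:

     ```python
     import heapq
     from itertools import count

     def astar_search(start, is_goal, successors, h):
         tie = count()
         # Fringe entries: (f, tie, g, state, path) with f = g + h.
         fringe = [(h(start), next(tie), 0, start, [])]
         while fringe:
             f, _, g, state, path = heapq.heappop(fringe)
             if is_goal(state):          # stop only when a goal is dequeued
                 return path, g
             for action, nxt, cost in successors(state):
                 g2 = g + cost
                 heapq.heappush(fringe, (g2 + h(nxt), next(tie), g2, nxt, path + [action]))
         return None
     ```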

  4. Optimality of A* Tree Search: Proof
     § Imagine B is on the fringe
     § Some ancestor n of A is on the fringe, too (maybe A itself!)
     § Claim: n will be expanded before B
       1. f(n) is less than or equal to f(A)  (definition of f-cost, admissibility of h, h = 0 at a goal)
       2. f(A) is less than f(B)  (B is suboptimal, h = 0 at a goal)
       3. n expands before B
     § All ancestors of A expand before B, so A expands before B
     § A* search is optimal (the inequality chain is spelled out below)

     UCS vs A* Contours
     § Uniform cost expands equally in all directions
     § A* expands mainly toward the goal, but hedges its bets to ensure optimality
     [Diagrams: contours of expanded nodes between Start and Goal for UCS and for A*]

     Which Algorithm?
     § Uniform cost search (UCS)
     § A*, Manhattan heuristic
     [Demos: maze search comparing the nodes expanded by each algorithm]
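
     The two numbered proof steps compress an inequality chain; written out, with h* denoting the true cost to the nearest goal (a reconstruction of the standard argument, not text from the slides):

     ```latex
     % n is an ancestor of A on an optimal path, h is admissible, h = 0 at goals.
     \begin{align*}
     f(n) &= g(n) + h(n) \\
          &\le g(n) + h^*(n)   && \text{admissibility of } h \\
          &\le g(A) = f(A)     && n \text{ is on an optimal path to } A,\ h(A) = 0 \\
          &<   g(B) = f(B)     && B \text{ is suboptimal},\ h(B) = 0
     \end{align*}
     ```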

  5. Which Algorithm?
     § Best First / Greedy, Manhattan heuristic
     [Demo: maze search showing greedy search's node expansion]

     Creating Admissible Heuristics
     § Most of the work in solving hard search problems optimally is in coming up with admissible heuristics
     § Often, admissible heuristics are solutions to relaxed problems, where new actions are available
     § Inadmissible heuristics are often useful too

     Creating Heuristics
     § 8-puzzle: What are the states? How many states? What are the actions?
     § What states can I reach from the start state? What should the costs be?

     8 Puzzle I
     § Heuristic: number of tiles misplaced
     § h(start) = 8
     § Is it admissible?

     8 Puzzle II
     § What if we had an easier 8-puzzle where any tile could slide in any direction at any time, ignoring other tiles?
     § Total Manhattan distance
     § h(start) = 3 + 1 + 2 + … = 18
     § Admissible? (a sketch of both heuristics follows this page)

     Average nodes expanded when the optimal path has length…
                   …4 steps   …8 steps   …12 steps
     UCS           112        6,300      3.6 x 10^6
     TILES         13         39         227
     MANHATTAN     12         25         73

     8 Puzzle III
     § How about using the actual cost as a heuristic?
     § Would it be admissible? Would we save on nodes expanded? What's wrong with it?
     § With A*: a trade-off between quality of estimate and work per node!
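
     A minimal sketch of the two 8-puzzle heuristics compared above, assuming states are 9-tuples read row by row with 0 for the blank (the representation and names are assumptions for illustration):

     ```python
     GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

     def misplaced_tiles(state, goal=GOAL):
         """Number of tiles out of place (the blank is not counted)."""
         return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

     def manhattan_distance(state, goal=GOAL):
         """Sum over tiles of |row difference| + |column difference| to the goal cell."""
         total = 0
         for idx, tile in enumerate(state):
             if tile == 0:
                 continue
             goal_idx = goal.index(tile)
             total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
         return total

     # Both are relaxed-problem heuristics and admissible: each move fixes at most
     # one misplaced tile and moves one tile by exactly one cell, so neither can
     # overestimate the true cost.
     ```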

  6. Trivial Heuristics, Dominance
     § Dominance: h_a ≥ h_c if ∀n: h_a(n) ≥ h_c(n)
     § Heuristics form a semi-lattice: the max of admissible heuristics is admissible
     § Trivial heuristics: the bottom of the lattice is the zero heuristic (what does this give us?); the top is the exact heuristic

     A* Applications
     § Pathing / routing problems
     § Resource planning problems
     § Robot motion planning
     § Language analysis
     § Machine translation
     § Speech recognition
     § …

     Tree Search: Extra Work!
     § Failure to detect repeated states can cause exponentially more work. Why?
     [Diagram: a small graph and the much larger search tree it unfolds into]

     Graph Search
     § In BFS, for example, we shouldn't bother expanding some nodes (which, and why?)
     § Idea: never expand a state twice
     § How to implement: tree search + a set of expanded states (the "closed set")
     § Expand the search tree node by node, but before expanding a node, check that its state has never been expanded before
     § If the state is not new, skip the node; if it is new, add it to the closed set
     § Hint: in python, store the closed set as a set, not a list (see the sketch below)
     § Can graph search wreck completeness? Why/why not? How about optimality?

     A* Graph Search Gone Wrong
     [Diagram: state space graph and search tree in which an admissible but inconsistent heuristic (h(A) = 4, h(C) = 1, cost(A to C) = 1) makes graph search close C along the worse path and return the suboptimal goal G (6+0) instead of the optimal G (5+0)]
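
     A sketch of A* graph search following the recipe above: tree search plus a closed set of expanded states, with the membership check done when a node is dequeued. Interfaces are the same assumed ones used in the earlier sketches:

     ```python
     import heapq
     from itertools import count

     def astar_graph_search(start, is_goal, successors, h):
         tie = count()
         fringe = [(h(start), next(tie), 0, start, [])]  # (f, tie, g, state, path)
         closed = set()                                   # per the hint: a set, not a list
         while fringe:
             f, _, g, state, path = heapq.heappop(fringe)
             if is_goal(state):
                 return path, g
             if state in closed:                          # already expanded: skip it
                 continue
             closed.add(state)
             for action, nxt, cost in successors(state):
                 g2 = g + cost
                 heapq.heappush(fringe, (g2 + h(nxt), next(tie), g2, nxt, path + [action]))
         return None
     ```

     With a consistent heuristic this returns optimal costs; with a merely admissible one it can fail exactly as the "A* Graph Search Gone Wrong" example shows.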

  7. Consistency of Heuristics
     § Main idea: estimated heuristic costs ≤ actual costs
     § Admissibility: heuristic cost ≤ actual cost to the goal, i.e. h(A) ≤ actual cost from A to G
     § Consistency: heuristic "arc" cost ≤ actual cost for each arc, i.e. h(A) – h(C) ≤ cost(A to C)
     § Consequence of consistency: the f value along a path never decreases:
       h(A) ≤ cost(A to C) + h(C), so f(A) = g(A) + h(A) ≤ g(A) + cost(A to C) + h(C) = f(C)
     § Result: A* graph search is optimal

     Optimality of A* Graph Search
     § Sketch: consider what A* does with a consistent heuristic
     § Nodes are popped with non-decreasing f-scores: for all n, n' with n' popped after n, f(n') ≥ f(n)
     § Proof by induction: (1) always pop the lowest f-score from the fringe, (2) all new nodes have larger (or equal) scores, (3) add them to the fringe, (4) repeat!
     § For every state s, nodes that reach s optimally are expanded before nodes that reach s suboptimally
     § Result: A* graph search is optimal

     Optimality
     § Tree search: A* is optimal if the heuristic is admissible (and non-negative); UCS is a special case (h = 0)
     § Graph search: A* is optimal if the heuristic is consistent; UCS is optimal (h = 0 is consistent)
     § Consistency implies admissibility
     § In general, natural admissible heuristics tend to be consistent, especially if they come from relaxed problems

     Summary: A*
     § A* uses both backward costs and (estimates of) forward costs
     § A* is optimal with admissible / consistent heuristics
     § Heuristic design is key: often use relaxed problems
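
     The consistency condition is easy to check mechanically on an explicit graph. A small sketch, assuming the graph is a dict mapping each state to a list of (successor, cost) pairs and h is a callable (the names and graph format are illustrative assumptions):

     ```python
     def is_consistent(h, graph, goals):
         """Check h(s) <= cost(s, s') + h(s') on every arc and h(g) == 0 at goals."""
         if any(h(g) != 0 for g in goals):
             return False
         return all(h(s) <= cost + h(s2)
                    for s, arcs in graph.items()
                    for s2, cost in arcs)
     ```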
