  1. Informed Search. Philipp Koehn, 25 February 2020

  2. Heuristic. From Wikipedia: any approach to problem solving, learning, or discovery that employs a practical method not guaranteed to be optimal or perfect but sufficient for the immediate goals

  3. Outline ● Best-first search ● A∗ search ● Heuristic algorithms – hill-climbing – simulated annealing – genetic algorithms (briefly) – local search in continuous spaces (very briefly)

  4. best-first search

  5. Review: Tree Search
     function TREE-SEARCH(problem, fringe) returns a solution, or failure
         fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
         loop do
             if fringe is empty then return failure
             node ← REMOVE-FRONT(fringe)
             if GOAL-TEST[problem] applied to STATE(node) succeeds return node
             fringe ← INSERT-ALL(EXPAND(node, problem), fringe)
     ● Search space is in the form of a tree ● Strategy is defined by picking the order of node expansion
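
A minimal Python sketch of this skeleton, assuming (for illustration only) a problem object with initial_state, is_goal(state), and successors(state) yielding (action, state, step_cost) triples; these names are not from the slides. The fringe here is a FIFO deque, so the sketch behaves like breadth-first search; changing how the fringe orders nodes changes the strategy, which is what the following slides do. The Node class also records the path cost, which the later sketches reuse.

    from collections import deque

    class Node:
        """Search-tree node: a state, its parent, and the path cost g(n)."""
        def __init__(self, state, parent=None, cost=0.0):
            self.state, self.parent, self.cost = state, parent, cost

    def tree_search(problem):
        fringe = deque([Node(problem.initial_state)])   # INSERT(MAKE-NODE(INITIAL-STATE))
        while fringe:                                   # empty fringe -> failure
            node = fringe.popleft()                     # REMOVE-FRONT(fringe)
            if problem.is_goal(node.state):             # GOAL-TEST
                return node
            for _, state, step_cost in problem.successors(node.state):
                fringe.append(Node(state, node, node.cost + step_cost))  # EXPAND + INSERT-ALL
        return None                                     # failure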

  6. Best-First Search ● Idea: use an evaluation function for each node – estimate of “desirability” ⇒ Expand most desirable unexpanded node ● Implementation: fringe is a queue sorted in decreasing order of desirability ● Special cases – greedy search – A∗ search
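
A sketch of the ordered fringe, reusing the Node class and the assumed problem interface from the tree-search sketch above: the fringe is a heap keyed on an evaluation function f, so the node with the lowest f (the most desirable one) is expanded first. Greedy search and A∗ then differ only in the choice of f.

    import heapq
    import itertools

    def best_first_search(problem, f):
        counter = itertools.count()                     # tie-breaker for equal f-values
        start = Node(problem.initial_state)
        fringe = [(f(start), next(counter), start)]
        while fringe:
            _, _, node = heapq.heappop(fringe)          # most desirable unexpanded node
            if problem.is_goal(node.state):
                return node
            for _, state, step_cost in problem.successors(node.state):
                child = Node(state, node, node.cost + step_cost)
                heapq.heappush(fringe, (f(child), next(counter), child))
        return None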

  7. Romania

  8. Romania

  9. Greedy Search ● State evaluation function h(n) (heuristic) = estimate of cost from n to the closest goal ● E.g., h_SLD(n) = straight-line distance from n to Bucharest ● Greedy search expands the node that appears to be closest to the goal
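
As a usage example of the best_first_search sketch above, greedy search uses the heuristic alone as the evaluation function; h_sld here is an assumed lookup table of straight-line distances to the goal city, not data from the slides.

    def greedy_search(problem, h_sld):
        # f(n) = h(n): expand the node that looks closest to the goal
        return best_first_search(problem, f=lambda node: h_sld[node.state])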

  10. Romania with Step Costs in km

  11. Greedy Search Example

  12. Greedy Search Example

  13. Greedy Search Example

  14. Greedy Search Example

  15. Properties of Greedy Search ● Complete? No, can get stuck in loops, e.g., with Oradea as goal: Iasi → Neamt → Iasi → Neamt → … Complete in finite space with repeated-state checking ● Time? O(b^m), but a good heuristic can give dramatic improvement ● Space? O(b^m): keeps all nodes in memory ● Optimal? No

  16. a* search

  17. A∗ Search ● Idea: avoid expanding paths that are already expensive ● State evaluation function f(n) = g(n) + h(n) – g(n) = cost so far to reach n – h(n) = estimated cost to goal from n – f(n) = estimated total cost of path through n to goal ● A∗ search uses an admissible heuristic – i.e., h(n) ≤ h*(n), where h*(n) is the true cost from n – also require h(n) ≥ 0, so h(G) = 0 for any goal G ● E.g., h_SLD(n) never overestimates the actual road distance ● Theorem: A∗ search is optimal
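
Continuing the same sketch, A∗ keys the fringe on f(n) = g(n) + h(n): g(n) is the path cost already stored in each Node, and h is an admissible heuristic over states (interface assumed as above).

    def a_star_search(problem, h):
        # f(n) = g(n) + h(n)
        return best_first_search(problem, f=lambda node: node.cost + h(node.state))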

  18. A∗ Search Example

  19. A∗ Search Example

  20. A∗ Search Example

  21. A∗ Search Example

  22. A∗ Search Example

  23. A∗ Search Example

  24. A∗ Search Example

  25. A∗ Search Example

  26. A∗ Search Example

  27. A∗ Search Example

  28. A∗ Search Example

  29. Optimality of A∗ ● Suppose some suboptimal goal G2 has been generated and is in the queue ● Let n be an unexpanded node on a shortest path to an optimal goal G ● Then
     f(G2) = g(G2)   since h(G2) = 0
           > g(G)    since G2 is suboptimal
           ≥ f(n)    since h is admissible
  ● Since f(G2) > f(n), A∗ will never terminate at G2

  30. Properties of A∗ ● Complete? Yes, unless there are infinitely many nodes with f ≤ f(G) ● Time? Exponential in [relative error in h × length of solution] ● Space? Keeps all nodes in memory ● Optimal? Yes: cannot expand f_{i+1} until f_i is finished – A∗ expands all nodes with f(n) < C∗ – A∗ expands some nodes with f(n) = C∗ – A∗ expands no nodes with f(n) > C∗

  31. Admissible Heuristics ● E.g., for the 8-puzzle

  32. Admissible Heuristics ● E.g., for the 8-puzzle – h1(n) = number of misplaced tiles – h2(n) = total Manhattan distance (i.e., no. of squares from desired location of each tile) ● h1(S) = ? ● h2(S) = ?

  33. Admissible Heuristics ● E.g., for the 8-puzzle – h1(n) = number of misplaced tiles – h2(n) = total Manhattan distance (i.e., no. of squares from desired location of each tile) ● h1(S) = 6 ● h2(S) = 4+0+3+3+1+0+2+1 = 14
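
A small Python sketch of these two heuristics. The state encoding is an assumption for illustration: a 9-tuple listing the tile in each square, row by row, with 0 for the blank; the goal layout below is one common convention. The values 6 and 14 on the slide refer to the particular start state shown in its figure.

    GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # assumed goal layout, blank in the top-left

    def h1(state):
        # number of misplaced tiles (the blank is not counted)
        return sum(1 for square, tile in enumerate(state)
                   if tile != 0 and tile != GOAL[square])

    def h2(state):
        # total Manhattan distance of each tile from its goal square
        total = 0
        for square, tile in enumerate(state):
            if tile == 0:
                continue
            goal_square = GOAL.index(tile)
            total += abs(square // 3 - goal_square // 3) + abs(square % 3 - goal_square % 3)
        return total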

  34. Dominance ● If h2(n) ≥ h1(n) for all n (both admissible) → h2 dominates h1 and is better for search ● Typical search costs (d = depth of solution for the 8-puzzle):
     d = 14: IDS = 3,473,941 nodes; A∗(h1) = 539 nodes; A∗(h2) = 113 nodes
     d = 24: IDS ≈ 54,000,000,000 nodes; A∗(h1) = 39,135 nodes; A∗(h2) = 1,641 nodes
  ● Given any admissible heuristics ha, hb, h(n) = max(ha(n), hb(n)) is also admissible and dominates ha, hb
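
The max combination in the last bullet is a one-liner; a sketch, assuming heuristics are functions of a state as in the 8-puzzle sketch above:

    def max_heuristic(*heuristics):
        # pointwise maximum of admissible heuristics: still admissible, dominates each
        return lambda state: max(h(state) for h in heuristics)

    # e.g. h = max_heuristic(h1, h2)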

  35. Relaxed Problems ● Admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem ● If the rules of the 8-puzzle are relaxed so that a tile can move anywhere ⇒ h1(n) gives the shortest solution ● If the rules are relaxed so that a tile can move to any adjacent square ⇒ h2(n) gives the shortest solution ● Key point: the optimal solution cost of a relaxed problem is no greater than the optimal solution cost of the real problem

  36. Relaxed Problems ● Well-known example: travelling salesperson problem (TSP) ● Find the shortest tour visiting all cities exactly once

  37. Relaxed Problems ● Well-known example: travelling salesperson problem (TSP) ● Find the shortest tour visiting all cities exactly once ● Minimum spanning tree – can be computed in O(n^2) – is a lower bound on the shortest (open) tour
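
A sketch of the MST lower bound, using Prim's algorithm; the cities list and the dist(a, b) function are assumed interfaces for illustration. The nested update loop gives the O(n^2) running time mentioned on the slide.

    def mst_weight(cities, dist):
        # weight of a minimum spanning tree over 'cities' (Prim's algorithm, O(n^2));
        # this never exceeds the length of the shortest open tour through them
        if len(cities) < 2:
            return 0.0
        cheapest = {c: dist(cities[0], c) for c in cities[1:]}  # cheapest edge into the tree
        total = 0.0
        while cheapest:
            city = min(cheapest, key=cheapest.get)              # add the closest outside city
            total += cheapest.pop(city)
            for other in cheapest:
                cheapest[other] = min(cheapest[other], dist(city, other))
        return total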

  38. Summary: A* ● Heuristic functions estimate costs of shortest paths ● Good heuristics can dramatically reduce search cost ● Greedy best-first search expands lowest h – incomplete and not always optimal ● A∗ search expands lowest g + h – complete and optimal – also optimally efficient (up to tie-breaks, for forward search) ● Admissible heuristics can be derived from exact solution of relaxed problems

  39. iterative improvement algorithms
