  1. Chapter Two Problem Solving Using Search

  2. Defining the Problem • How do you represent a problem so that the computer can solve it? • Even before that, how do you define the problem with enough precision so that you can figure out how to represent it?

  3. State Space • First we have to develop a mapping from the game space (the world of pieces and the geometric pattern on the board) to a data structure that captures the essence of the current game state. • Start with an initial state. The combination of the initial state and the set of operators makes up a state space. The sequence of states produced is called the path, and we have to be able to detect the goal state. • Often we want to reach the goal with the lowest possible cost. The cost function is usually denoted by g. • Effective search algorithms must cause motion through the state space in a controlled, systematic manner. A systematic search that doesn't use information about the problem is called brute-force or blind search; others are called heuristic or informed search, where we can make better choices about where to expand next. • An algorithm is optimal if it finds the best solution, complete if it is guaranteed to find a solution whenever one exists, and efficient in terms of time and space complexity.
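
A minimal Python sketch of this formulation, assuming a route-finding-style problem; the names SearchProblem, moves, and path_cost are illustrative, not from the slides:

```python
from dataclasses import dataclass
from typing import Dict, Iterable, List, Tuple

@dataclass
class SearchProblem:
    """Minimal state-space formulation: initial state, operators, and goal test."""
    initial: str                                  # the initial state
    goal: str                                     # the goal state to detect
    # operators: for each state, the states reachable in one move and the step cost
    moves: Dict[str, List[Tuple[str, float]]]

    def successors(self, state: str) -> Iterable[Tuple[str, float]]:
        return self.moves.get(state, [])

    def is_goal(self, state: str) -> bool:
        return state == self.goal

def path_cost(problem: SearchProblem, path: List[str]) -> float:
    """g(path): the sum of step costs along a sequence of states."""
    return sum(dict(problem.successors(a))[b] for a, b in zip(path, path[1:]))
```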

  4. Search Strategies • Breadth-First Search • Depth-First Search • The SearchNode

  5. Breadth-First Search • Searches a state space by constructing a hierarchical tree. • The algorithm defines a way to move through the structure, examining the values at nodes in a controlled and systematic way to find a node that offers a solution to the problem. • Algorithm: – create a queue and add the first SearchNode to it. – Loop – if the queue is empty, quit. – remove the first SearchNode from the queue – if the SearchNode contains the goal state, then exit with the SearchNode as the solution – for each child of the current SearchNode, add the new SearchNode to the back of the queue.
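
A minimal Python sketch of this queue-based breadth-first search, assuming a successors function that yields the neighboring states of a state; the SearchNode here simply records a state and its parent so the solution path can be rebuilt:

```python
from collections import deque
from typing import Callable, List, Optional

class SearchNode:
    """Pairs a state with its parent so the solution path can be rebuilt at the goal."""
    def __init__(self, state, parent: "Optional[SearchNode]" = None):
        self.state, self.parent = state, parent

    def path(self) -> List:
        node, states = self, []
        while node is not None:
            states.append(node.state)
            node = node.parent
        return list(reversed(states))

def breadth_first_search(start, is_goal: Callable, successors: Callable) -> Optional[SearchNode]:
    queue = deque([SearchNode(start)])           # create a queue with the first SearchNode
    visited = {start}                            # remember visited states (not on the slide)
    while queue:                                 # if the queue is empty, quit
        node = queue.popleft()                   # remove the first SearchNode
        if is_goal(node.state):                  # found the goal state
            return node
        for child in successors(node.state):     # children go on the BACK of the queue
            if child not in visited:
                visited.add(child)
                queue.append(SearchNode(child, node))
    return None
```

Calling path() on the returned SearchNode yields the sequence of states from the start to the goal.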

  6. Depth-First Search • Instead of completely searching each level of the tree before going deeper, follow a single branch of the tree down as many levels as possible until it either reaches a solution or a dead end. • Algorithm: – create a queue and add the first SearchNode to it. – Loop: – If the queue is empty, quit. – Remove the first SearchNode from the queue. – If the SearchNode contains the goal state, then exit with the SearchNode as the solution. – For each child of the current SearchNode: add the new SearchNode to the front of the queue (so the queue behaves as a stack).
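
A sketch of the same idea in Python; the only real change from the breadth-first sketch is that new paths go on the front, so the "queue" behaves as a stack (again assuming a successors function over states):

```python
from typing import Callable, List, Optional

def depth_first_search(start, is_goal: Callable, successors: Callable) -> Optional[List]:
    """Same skeleton as BFS, but children go on the FRONT, so the queue acts as a stack."""
    stack = [[start]]                      # each entry is the path from the start to a frontier state
    visited = {start}
    while stack:                           # if the stack is empty, quit
        path = stack.pop()                 # take the most recently added path (the "front")
        state = path[-1]
        if is_goal(state):
            return path                    # the sequence of states is the solution path
        for child in successors(state):
            if child not in visited:
                visited.add(child)
                stack.append(path + [child])
    return None
```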

  7. Search Application • A road map connecting International Falls, Grand Forks, Bemidji, Duluth, Fargo, St. Cloud, Minneapolis, Wausau, Green Bay, LaCrosse, Madison, Rochester, Sioux Falls, Milwaukee, Dubuque, Rockford, and Chicago. (Map figure.)

  8. Breadth-First Search • Chicago to Rochester
       – Milwaukee
         • Madison: LaCrosse
         • Green Bay: LaCrosse, Wausau
       – Rockford
         • Dubuque: LaCrosse, Rochester (Goal State)
         • Madison
       – Path: Chicago, Rockford, Dubuque, Rochester

  9. Depth-First Search • Chicago to Rochester
       – Milwaukee
         • Green Bay
           – LaCrosse
             » Minneapolis: St. Cloud, Fargo, Grand Forks, International Falls, Sioux Falls, Bemidji, Duluth
             » Rochester (Goal State)
       – Path: Chicago, Milwaukee, Green Bay, LaCrosse, Rochester

  10. Iterative Deepening Search-1
       • Depth 0: Chicago
       • Depth 1: Chicago » Rockford, Milwaukee
       • Depth 2: Chicago » Rockford (Dubuque, Madison), Milwaukee (Madison, Green Bay)

  11. Iterative Deepening Search-2
       • Depth 3: Chicago
         – Milwaukee: Madison (LaCrosse), Green Bay (LaCrosse, Wausau)
         – Rockford: Dubuque (LaCrosse, Rochester (Goal State)), Madison
       • Path: Chicago, Rockford, Dubuque, Rochester
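
A minimal sketch of iterative deepening, assuming the same kind of successors function: run a depth-limited depth-first search with limit 0, then 1, then 2, and so on until the goal is found:

```python
from typing import Callable, List, Optional

def depth_limited(path: List, limit: int, is_goal: Callable, successors: Callable) -> Optional[List]:
    """Depth-first search that refuses to go deeper than `limit` edges from the start."""
    state = path[-1]
    if is_goal(state):
        return path
    if limit == 0:
        return None
    for child in successors(state):
        if child not in path:                        # avoid cycles along the current path
            found = depth_limited(path + [child], limit - 1, is_goal, successors)
            if found:
                return found
    return None

def iterative_deepening(start, is_goal: Callable, successors: Callable, max_depth: int = 50):
    for limit in range(max_depth + 1):               # Depth 0, Depth 1, Depth 2, ...
        found = depth_limited([start], limit, is_goal, successors)
        if found:
            return found
    return None
```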

  12. Heuristic Search • Generate and Test • Best First Search • Greedy Search • A* Search

  13. Generate and Test 1. Generate a possible solution. 2. Test to see if this is actually a solution. 3. If a solution has been found, quit. Otherwise, return to step 1.
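
A minimal sketch of generate-and-test, assuming a stream of candidate solutions and a test predicate (both hypothetical):

```python
from typing import Callable, Iterable, Optional

def generate_and_test(candidates: Iterable, is_solution: Callable) -> Optional[object]:
    """1. Generate a candidate, 2. test it, 3. quit on success; otherwise keep generating."""
    for candidate in candidates:
        if is_solution(candidate):
            return candidate
    return None

# Example: find a number whose square is 49, testing candidates 0, 1, 2, ...
print(generate_and_test(range(100), lambda n: n * n == 49))   # -> 7
```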

  14. Best-First Search • A form of general search in which minimum-cost nodes are expanded first. • We choose the node that appears to be best according to an evaluation function. • Two basic approaches: – Expand the node that appears closest to the goal. – Expand the node on the least-cost solution path.
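
A priority-queue sketch of best-first search; the evaluation function f is passed in, so the greedy search and A* described on the next slides are just different choices of f (assuming successors yields (neighbor, step-cost) pairs):

```python
import heapq
from typing import Callable, Dict, List, Optional, Tuple

def best_first_search(start, is_goal: Callable, successors: Callable,
                      f: Callable) -> Optional[Tuple[List, float]]:
    """Always expand the frontier node with the lowest f-value; f decides the strategy."""
    frontier = [(f(start, 0.0), 0.0, [start])]        # (priority, g so far, path)
    best_g: Dict[object, float] = {start: 0.0}
    while frontier:
        _, g, path = heapq.heappop(frontier)
        state = path[-1]
        if is_goal(state):
            return path, g
        for child, step in successors(state):
            new_g = g + step
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier, (f(child, new_g), new_g, path + [child]))
    return None
```

With f(state, g) = h(state) this expands the node that looks closest to the goal (greedy search); with f(state, g) = g + h(state) it expands the node on the estimated least-cost solution path (A*).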

  15. Greedy Search • “… minimize estimated cost to reach a goal” • A heuristic function calculates such cost estimates: h(n) = estimated cost of the cheapest path from the state at node n to a goal state.

  16. Straight-line distance • The straight-line distance heuristic is well suited to route-finding problems: h_SLD(n) = straight-line distance between n and the goal location.
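
A minimal sketch of such a heuristic, assuming each state has known (x, y) map coordinates stored in a hypothetical coords table:

```python
import math
from typing import Dict, Tuple

def make_straight_line_h(coords: Dict[str, Tuple[float, float]], goal: str):
    """Return h_SLD(n): Euclidean distance from n's coordinates to the goal's."""
    gx, gy = coords[goal]
    def h(state: str) -> float:
        x, y = coords[state]
        return math.hypot(x - gx, y - gy)
    return h
```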

  17. Greedy search trace (step 1) • h(n): A 366, B 374, C 329, D 244, E 253, F 178, G 193, H 98, I 0. • Expanding A gives B (h=374), C (h=329), and E (h=253); E looks best. (Map figure with edge costs omitted.)

  18. Greedy search trace (step 2) • Expanding E (h=253) gives A (h=366), F (h=178), and G (h=193); F looks best.

  19. Greedy search trace (step 3) • Expanding F (h=178) gives E (h=253) and I (h=0); I is the goal, so greedy search returns the path A, E, F, I.

  20. Optimality • Greedy search is not optimal: the path it found, A-E-F-I, costs 140 + 99 + 211 = 450, while A-E-G-H-I costs only 140 + 80 + 97 + 101 = 418.

  21. Completeness • Greedy search is incomplete. • Worst-case time complexity O(b^m). • (Figure: a small example graph with starting node A and target node D, with straight-line distances h(A)=6, h(B)=5, h(C)=7, h(D)=0.)

  22. A* Search • f(n) = g(n) + h(n), where h is the heuristic function and g is the path cost used by uniform-cost search. • h(n): A 366, B 374, C 329, D 244, E 253, F 178, G 193, H 98, I 0. (Map figure with edge costs omitted.)

  23. “Since g(n) gives the path cost from the start node to node n, and h(n) is the estimated cost of the cheapest path from n to the goal, we have f(n) = estimated cost of the cheapest solution through n.”

  24. A* trace: f(n) = g(n) + h(n) • f(A) = 0 + 366 = 366. • Expanding A: f(B) = 75 + 374 = 449, f(C) = 118 + 329 = 447, f(E) = 140 + 253 = 393. • Expanding E (393): f(F) = 239 + 178 = 417, f(G) = 220 + 193 = 413, f(A) = 280 + 366 = 646.

  25. A* trace (continued) • Expanding G (413): f(H) = 317 + 98 = 415, f(E) = 300 + 253 = 553.

  26. A* trace (continued) • Expanding H (415): f(I) = 418 + 0 = 418, and I is the goal.

  27. Remember earlier • A* finds the path A-E-G-H-I with f = 418 + 0 = 418, the same cheapest path identified on the Optimality slide.
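
The A* trace above can be rerun with the short sketch below. The edge costs and h-values are the ones recoverable from the slides' f-value arithmetic (A-B 75, A-C 118, A-E 140, E-F 99, E-G 80, F-I 211, G-H 97, H-I 101); edges that never appear in those computations are omitted, so this is only an approximation of the full map:

```python
import heapq

# Edge costs recoverable from the f-value computations on the slides (others omitted).
EDGES = {
    "A": [("B", 75), ("C", 118), ("E", 140)],
    "E": [("A", 140), ("F", 99), ("G", 80)],
    "F": [("E", 99), ("I", 211)],
    "G": [("E", 80), ("H", 97)],
    "H": [("G", 97), ("I", 101)],
    "B": [("A", 75)], "C": [("A", 118)], "I": [("F", 211), ("H", 101)],
}
H = {"A": 366, "B": 374, "C": 329, "D": 244, "E": 253,
     "F": 178, "G": 193, "H": 98, "I": 0}

def a_star(start: str, goal: str):
    frontier = [(H[start], 0, [start])]              # (f = g + h, g, path)
    best_g = {start: 0}
    while frontier:
        f, g, path = heapq.heappop(frontier)         # expand the lowest-f node
        state = path[-1]
        if state == goal:
            return path, g
        for child, step in EDGES.get(state, []):
            new_g = g + step
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g + H[child], new_g, path + [child]))
    return None

print(a_star("A", "I"))   # -> (['A', 'E', 'G', 'H', 'I'], 418), matching the slides
```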

  28. Genetic Algorithm-1 • Uses a biological-process metaphor. • The problem state must be encoded into a string called a chromosome (usually binary). • An evaluation function takes the encoded string as input and produces a “goodness” or biological fitness score. • This score is then used to rank a set of strings (individuals). • The individuals that are most fit are randomly selected to survive and even procreate into the next generation.

  29. Genetic Algorithm-2 • Two genetically inspired operators are used: mutation and crossover. • The mutation operator performs random changes in the chromosome; in the usual binary-string case, a few bits are randomly flipped. • The crossover operator takes two particularly fit individuals and combines their genetic material, forming two new children.

  30. Genetic Algorithm-3 • Example – Parent A: 0010010011 – Parent B: 1100001010 – Mutate (A): 0011010001 – Crossover (A,B) Child1: 0010001010 Child2: 1100010011
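
A minimal sketch of the two operators on binary-string chromosomes; the mutation rate and the fixed crossover point are illustrative choices, not taken from the slides:

```python
import random

def mutate(chromosome: str, rate: float = 0.1) -> str:
    """Flip each bit independently with probability `rate`."""
    return "".join(
        ("1" if bit == "0" else "0") if random.random() < rate else bit
        for bit in chromosome
    )

def crossover(parent_a: str, parent_b: str, point: int = 4):
    """Swap the tails of the two parents after a crossover point, giving two children."""
    child1 = parent_a[:point] + parent_b[point:]
    child2 = parent_b[:point] + parent_a[point:]
    return child1, child2

# With point=4 this reproduces the slide's crossover example:
print(crossover("0010010011", "1100001010"))   # -> ('0010001010', '1100010011')
```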
