36.1 Relaxed Planning Graphs




Foundations of Artificial Intelligence
36. Automated Planning: Delete Relaxation Heuristics
Martin Wehrle, Universität Basel
May 9, 2016

Contents
◮ 36.1 Relaxed Planning Graphs
◮ 36.2 Maximum and Additive Heuristics
◮ 36.3 FF Heuristic
◮ 36.4 Summary

Automated Planning: Overview

Chapter overview: planning
◮ 33. Introduction
◮ 34. Planning Formalisms
◮ 35.–36. Planning Heuristics: Delete Relaxation
    ◮ 35. Delete Relaxation
    ◮ 36. Delete Relaxation Heuristics
◮ 37.–38. Planning Heuristics: Abstraction
◮ 39.–40. Planning Heuristics: Landmarks

36.1 Relaxed Planning Graphs

Relaxed Planning Graphs

◮ relaxed planning graphs: represent which variables in Π⁺ can be reached, and how
◮ graphs with variable layers V_i and action layers A_i
◮ variable layer V_0 contains the variable vertex v^0 for all v ∈ I
◮ action layer A_{i+1} contains the action vertex a^{i+1} for action a if V_i contains the vertex v^i for all v ∈ pre(a)
◮ variable layer V_{i+1} contains the variable vertex v^{i+1} if the previous variable layer contains v^i, or the previous action layer contains a^{i+1} with v ∈ add(a)

German: relaxierter Planungsgraph, Variablenknoten, Aktionsknoten

Relaxed Planning Graphs (Continued)

◮ goal vertex G^i if v^i ∈ V_i for all v ∈ G
◮ the graph can be constructed for arbitrarily many layers, but stabilizes after a bounded number of layers: V_{i+1} = V_i and A_{i+1} = A_i (Why?)
◮ directed edges:
    ◮ from v^i to a^{i+1} if v ∈ pre(a) (precondition edges)
    ◮ from a^i to v^i if v ∈ add(a) (effect edges)
    ◮ from v^i to G^i if v ∈ G (goal edges)
    ◮ from v^i to v^{i+1} (no-op edges)

German: Zielknoten, Vorbedingungskanten, Effektkanten, Zielkanten, No-Op-Kanten

Illustrative Example

We will write actions a with pre(a) = {p_1, ..., p_k}, add(a) = {a_1, ..., a_l}, del(a) = ∅ and cost(a) = c as ⟨p_1, ..., p_k → a_1, ..., a_l⟩_c.

V = {a, b, c, d, e, f, g, h}
I = {a}
G = {c, d, e, f, g}
A = {a_1, a_2, a_3, a_4, a_5, a_6}

a_1 = ⟨a → b, c⟩_3
a_2 = ⟨a, c → d⟩_1
a_3 = ⟨b, c → e⟩_1
a_4 = ⟨b → f⟩_1
a_5 = ⟨d → e, f⟩_1
a_6 = ⟨d → g⟩_1

Illustrative Example: Relaxed Planning Graph

[Figure: relaxed planning graph for the example task, with variable layers v^0, ..., v^3 for v ∈ {a, ..., h}, action layers containing a_1, ..., a_6, and a goal vertex G.]
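The layer construction described above can be sketched as reachability over sets of variables. The following is a minimal sketch, not from the slides: the dict-based encoding of the example task and all names are my own illustrative choices.

```python
# Relaxed planning graph layers for the example task.
# Each action maps to (preconditions, add effects, cost); delete effects
# are empty in the relaxation, so they are omitted from the encoding.
actions = {
    "a1": ({"a"}, {"b", "c"}, 3),
    "a2": ({"a", "c"}, {"d"}, 1),
    "a3": ({"b", "c"}, {"e"}, 1),
    "a4": ({"b"}, {"f"}, 1),
    "a5": ({"d"}, {"e", "f"}, 1),
    "a6": ({"d"}, {"g"}, 1),
}
I = {"a"}
G = {"c", "d", "e", "f", "g"}

def rpg_layers(init, acts):
    """Return the variable layers V_0, V_1, ... until the graph stabilizes."""
    layers = [set(init)]
    while True:
        V_i = layers[-1]
        # A_{i+1}: actions whose preconditions all occur in V_i
        applicable = [a for a, (pre, add, _) in acts.items() if pre <= V_i]
        # V_{i+1}: previous variables (no-op edges) plus all add effects
        V_next = V_i | {v for a in applicable for v in acts[a][1]}
        if V_next == V_i:          # stabilized: V_{i+1} = V_i
            return layers
        layers.append(V_next)

for i, V_i in enumerate(rpg_layers(I, actions)):
    print(i, sorted(V_i))
print("goal reached:", G <= rpg_layers(I, actions)[-1])
```

On the example, the layers grow from {a} to {a, b, c} to {a, b, c, d, e, f} to {a, b, c, d, e, f, g} and then stabilize, matching the four layers drawn in the slides' figure.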

Heuristic Values from Relaxed Planning Graph

function generic-rpg-heuristic(⟨V, I, G, A⟩, s):
    Π⁺ := ⟨V, s, G, A⁺⟩
    for k ∈ {0, 1, 2, ...}:
        rpg := RPG_k(Π⁺)    [relaxed planning graph to layer k]
        if rpg contains a goal node:
            Annotate nodes of rpg.
            if termination criterion is true:
                return heuristic value from annotations
        else if graph has stabilized:
            return ∞

⇒ general template for RPG heuristics
⇒ to obtain a concrete heuristic: instantiate the highlighted elements

Concrete Examples for Generic RPG Heuristic

Many planning heuristics fit this general template. In this course:
◮ maximum heuristic h^max (Bonet & Geffner, 1999)
◮ additive heuristic h^add (Bonet, Loerincs & Geffner, 1997)
◮ Keyder & Geffner's (2008) variant of the FF heuristic h^FF (Hoffmann & Nebel, 2001)

German: Maximum-Heuristik, additive Heuristik, FF-Heuristik

remark:
◮ The most efficient implementations of these heuristics do not use explicit planning graphs, but rather alternative (equivalent) definitions.

36.2 Maximum and Additive Heuristics

Maximum and Additive Heuristics

◮ h^max and h^add are the simplest RPG heuristics.
◮ Vertex annotations are numerical values.
◮ The vertex values estimate the costs
    ◮ to make a given variable true
    ◮ to reach and apply a given action
    ◮ to reach the goal

Maximum and Additive Heuristics: Filled-in Template

computation of annotations:
◮ costs of variable vertices: 0 in layer 0; otherwise minimum of the costs of the predecessor vertices
◮ costs of action and goal vertices: maximum (h^max) or sum (h^add) of the predecessor vertex costs; for action vertices a^i, also add cost(a)

termination criterion:
◮ stability: terminate if V_i = V_{i-1} and the costs of all vertices in V_i equal the corresponding vertex costs in V_{i-1}

heuristic value:
◮ value of the goal vertex in the last layer

Maximum and Additive Heuristics: Intuition

intuition:
◮ variable vertices: choose the cheapest way of reaching the variable
◮ action/goal vertices:
    ◮ h^max is optimistic: assumption: when reaching the most expensive precondition variable, we can reach the other precondition variables in parallel (hence maximization of costs)
    ◮ h^add is pessimistic: assumption: all precondition variables must be reached completely independently of each other (hence summation of costs)
Illustrative Example: h^max

[Figure: the relaxed planning graph of the example task annotated with h^max values; the vertices for a get cost 0, those for b and c cost 3, those for d, e and f cost 4, those for g cost 5, and the goal vertex cost 5.]

h^max({a}) = 5

Illustrative Example: h^add

[Figure: the same relaxed planning graph annotated with h^add values; c gets cost 3, d and f cost 4, e cost 7 in layer 2 and 5 in layer 3 (via a_5), g cost 5, and the goal vertex in the last layer cost 3 + 4 + 5 + 4 + 5 = 21.]

h^add({a}) = 21
