Planning with State-Dependent Action Costs
ICAPS 2016 Tutorial

Robert Mattmüller, Florian Geißer
June 13, 2016

Part I: Theory
Outline: Background · Compilation · Relaxations · Abstractions · Summary


  1. State-Dependent Action Costs, Compilation III: "EVMDD-Based Action Decomposition"
     Idea:
     - Exploit as much additive decomposability as possible.
     - Multiply out variable domains where inevitable.
     Technicalities:
     - Fix a variable ordering.
     - Perform Shannon and isomorphism reduction.
     Properties:
     - Always possible; worst-case exponential blow-up, but as good as it gets.
     - Plan lengths are not preserved; costs are preserved.
     - As before: actions must be ordered, and all partial effects occur at the end.

  2. State-Dependent Action Costs, Compilation III: "EVMDD-Based Action Decomposition"
     Compilation III provides an optimal combination of sequential and parallel action
     decomposition, given a fixed variable ordering.
     Question: How can we find such decompositions automatically?
     Answer: The figure for Compilation III is basically a reduced ordered edge-valued
     multi-valued decision diagram (EVMDD)! [Lai et al., 1996; Ciardo and Siminiceanu, 2002]

  3. EVMDDs: Edge-Valued Multi-Valued Decision Diagrams
     - Decision diagrams for arithmetic functions.
     - Decision nodes with associated decision variables.
     - Edge weights: partial costs contributed by facts.
     - EVMDD size is compact in many "typical" cases.
     Properties:
     - Satisfy all requirements for Compilation III; they are even (almost) uniquely
       determined by them.
     - Already have well-established theory and tool support.
     - Detect and exhibit additive structure in arithmetic functions.

  4. EVMDDs: Edge-Valued Multi-Valued Decision Diagrams
     Consequence:
     - Represent cost functions as EVMDDs.
     - Exploit the additive structure they exhibit.
     - Draw on the existing theory and tool support for EVMDDs.
     Two perspectives on EVMDDs:
     - Graphs specifying how to decompose action costs.
     - Data structures encoding action costs (usable independently of compilations).

  5. EVMDDs: Edge-Valued Multi-Valued Decision Diagrams
     Example (EVMDD evaluation): cost_a = xy² + z + 2, with D_x = D_z = {0, 1} and
     D_y = {0, 1, 2}.
     [Figure: EVMDD for cost_a. A dangling incoming edge with weight 2 enters the
     x node; the x = 1 edge (weight 0) leads to the y node, whose edges y = 0/1/2
     carry weights 0/1/4; the x = 0 edge (weight 0) skips y and leads directly to
     the z node, whose edges z = 0/1 carry weights 0/1 and end in the single
     terminal node.]
     An EVMDD is a directed acyclic graph with:
     - a dangling incoming edge,
     - a single terminal node, and
     - decision nodes carrying decision variables, edge labels, and edge weights.

  6–10. EVMDDs: Edge-Valued Multi-Valued Decision Diagrams
     Example (EVMDD evaluation, ctd.): cost_a = xy² + z + 2, D_x = D_z = {0, 1},
     D_y = {0, 1, 2}.
     For s = {x ↦ 1, y ↦ 2, z ↦ 0}, follow the unique path selected by s and sum
     the edge weights along the way:
       cost_a(s) = 2 (dangling edge) + 0 (x = 1) + 4 (y = 2) + 0 (z = 0) = 6.
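The path-summing evaluation above can be sketched on a small explicit EVMDD data structure (a minimal sketch; the tuple-based node representation is an assumption for illustration, not the tutorial's tooling):

```python
# EVMDD: (dangling-edge weight, root node); a node is None (terminal) or
# (variable, [(edge weight, child node), ...]) indexed by domain value.
TERMINAL = None

def evaluate(evmdd, state):
    """Sum the edge weights along the unique path selected by `state`."""
    total, node = evmdd
    while node is not TERMINAL:
        var, children = node
        weight, node = children[state[var]]
        total += weight
    return total

# The tutorial's example: cost_a = x * y^2 + z + 2, with the x = 0 edge
# jumping straight to the z node (Shannon reduction).
z_node = ('z', [(0, TERMINAL), (1, TERMINAL)])
y_node = ('y', [(0, z_node), (1, z_node), (4, z_node)])
x_node = ('x', [(0, z_node), (0, y_node)])
cost_a = (2, x_node)

print(evaluate(cost_a, {'x': 1, 'y': 2, 'z': 0}))  # 2 + 0 + 4 + 0 = 6
```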

  11. EVMDDs: Edge-Valued Multi-Valued Decision Diagrams
     Properties of EVMDDs:
     - Existence for finitely many finite-domain variables.
     - Uniqueness/canonicity if reduced and ordered.
     - Basic arithmetic operations are supported.
     [Lai et al., 1996; Ciardo and Siminiceanu, 2002]

  12. EVMDDs: Arithmetic Operations on EVMDDs
     Given an arithmetic operator ⊗ ∈ {+, −, ·, ...} and EVMDDs E_1, E_2, compute the
     EVMDD E = E_1 ⊗ E_2.
     Implementation: procedure apply(⊗, E_1, E_2):
     - Base case: single-node EVMDDs encoding constants.
     - Inductive case: apply ⊗ recursively:
       - push down edge weights,
       - recursively apply ⊗ to corresponding children,
       - pull up excess edge weights from the children.
     Time complexity [Lai et al., 1996]:
     - additive operations: product of the input EVMDD sizes,
     - in general: exponential.
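A minimal sketch of apply, assuming quasi-reduced inputs (every path tests every variable in the fixed ordering) and omitting the memoization and node sharing that real EVMDD libraries use:

```python
# Node encoding as before: None is the terminal, otherwise (var, children).
TERMINAL = None

def evaluate(evmdd, state):
    total, node = evmdd
    while node is not TERMINAL:
        var, children = node
        w, node = children[state[var]]
        total += w
    return total

def apply_op(op, e1, e2):
    """apply(op, E1, E2) in the style of Lai et al. (1996), simplified."""
    (w1, n1), (w2, n2) = e1, e2
    if n1 is TERMINAL and n2 is TERMINAL:
        return (op(w1, w2), TERMINAL)        # base case: constants
    var, ch1 = n1                            # same top variable by assumption
    _,   ch2 = n2
    # Push the incoming weights down and recurse on corresponding children.
    results = [apply_op(op, (w1 + cw1, c1), (w2 + cw2, c2))
               for (cw1, c1), (cw2, c2) in zip(ch1, ch2)]
    m = min(rw for rw, _ in results)         # pull up excess edge weight
    return (m, (var, [(rw - m, rn) for rw, rn in results]))

# f(x, y) = x and g(x, y) = y over D_x = D_y = {0, 1}:
zeros = ('y', [(0, TERMINAL), (0, TERMINAL)])
ident = ('y', [(0, TERMINAL), (1, TERMINAL)])
f = (0, ('x', [(0, zeros), (1, zeros)]))
g = (0, ('x', [(0, ident), (0, ident)]))
h = apply_op(lambda a, b: a + b, f, g)       # EVMDD for x + y
```

Pushing the incoming weights down and pulling the minimum back up keeps every node's cheapest outgoing edge at weight zero, which is the normalization behind the canonicity of reduced ordered EVMDDs.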

  13. Section: Compilation

  14. EVMDD-Based Action Compilation
     Example (EVMDD-based action compilation):
     Let a = ⟨pre, eff⟩ with cost_a = xy² + z + 2.
     Auxiliary variables:
     - One semaphore variable σ with D_σ = {0, 1} for the entire planning task.
     - One auxiliary variable α = α_a with D_{α_a} = {0, 1, 2, 3, 4} for action a.
     Replace a by new auxiliary actions (similarly for the other actions).

  15. EVMDD-Based Action Compilation
     Example (EVMDD-based action compilation, ctd.):
     [Figure: the EVMDD for cost_a, annotated with α = 0 above the dangling edge,
     α = 1 at the x node, α = 2 at the y node, α = 3 at the z node, and α = 4 at
     the terminal node.]
       a_pre     = ⟨pre ∧ σ = 0 ∧ α = 0, σ = 1 ∧ α = 1⟩, cost = 2
       a_{1,x=0} = ⟨α = 1 ∧ x = 0, α = 3⟩, cost = 0
       a_{1,x=1} = ⟨α = 1 ∧ x = 1, α = 2⟩, cost = 0
       a_{2,y=0} = ⟨α = 2 ∧ y = 0, α = 3⟩, cost = 0
       a_{2,y=1} = ⟨α = 2 ∧ y = 1, α = 3⟩, cost = 1
       a_{2,y=2} = ⟨α = 2 ∧ y = 2, α = 3⟩, cost = 4
       a_{3,z=0} = ⟨α = 3 ∧ z = 0, α = 4⟩, cost = 0
       a_{3,z=1} = ⟨α = 3 ∧ z = 1, α = 4⟩, cost = 1
       a_eff     = ⟨α = 4, eff ∧ σ = 0 ∧ α = 0⟩, cost = 0
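The slide's construction can be sketched as a generator that walks the EVMDD and emits one auxiliary action per edge, plus a_pre and a_eff (hedged: the dict-based task encoding and the DFS-based α numbering are illustrative assumptions; the numbering may differ from the slide's, but any injective numbering works):

```python
TERMINAL = None  # tuple-based EVMDD encoding: (var, [(weight, child), ...])

def compile_action(name, pre, eff, evmdd):
    """Return the auxiliary actions as (name, precondition, effect, cost)
    tuples, with preconditions/effects given as variable-assignment dicts."""
    kappa, root = evmdd
    seen, internal = set(), []
    def collect(node):                       # gather decision nodes by DFS
        if node is not TERMINAL and id(node) not in seen:
            seen.add(id(node))
            internal.append(node)
            for _, child in node[1]:
                collect(child)
    collect(root)
    alpha = {id(n): i + 1 for i, n in enumerate(internal)}
    alpha[id(TERMINAL)] = len(internal) + 1  # terminal gets the last index
    actions = [(f"{name}_pre", {**pre, 'sigma': 0, 'alpha': 0},
                {'sigma': 1, 'alpha': alpha[id(root)]}, kappa)]
    for node in internal:                    # one action per EVMDD edge
        var, children = node
        for val, (w, child) in enumerate(children):
            actions.append((f"{name}_{var}={val}",
                            {'alpha': alpha[id(node)], var: val},
                            {'alpha': alpha[id(child)]}, w))
    actions.append((f"{name}_eff", {'alpha': alpha[id(TERMINAL)]},
                    {**eff, 'sigma': 0, 'alpha': 0}, 0))
    return actions

# EVMDD for cost_a = x * y^2 + z + 2 (x = 0 jumps straight to the z node):
z_node = ('z', [(0, TERMINAL), (1, TERMINAL)])
y_node = ('y', [(0, z_node), (1, z_node), (4, z_node)])
x_node = ('x', [(0, z_node), (0, y_node)])
acts = compile_action('a', {'p': 1}, {'q': 1}, (2, x_node))
print(len(acts))  # 9 auxiliary actions, as on the slide
```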

  16. EVMDD-Based Action Compilation
     Let Π be an SDAC task and Π′ the result of EVMDD-based action compilation
     applied to Π.
     Proposition: Π′ has only state-independent costs.
     Proposition: The size of Π′ is polynomial in the size of Π times the size of
     the largest EVMDD used in the compilation.
     Proposition: Π and Π′ admit the same plans (modulo replacement of actions by
     action sequences). Optimal plan costs are preserved.

  17. Section: Relaxations

  18. Relaxation Heuristics
     We know: delete-relaxation heuristics are informative in classical planning.
     Question: Are they also informative in SDAC planning?

  19. Relaxation Heuristics
     Definition (classical additive heuristic h^add):
       h^add_s(Facts) = Σ_{fact ∈ Facts} h^add_s(fact)
       h^add_s(fact)  = 0                                                    if fact ∈ s
                      = min_{achiever a of fact} [h^add_s(pre(a)) + cost_a]  otherwise
     Question: How can h^add be generalized to SDAC?
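As a point of reference, the classical definition can be sketched as a Bellman-Ford-style fixpoint over facts (a minimal sketch over an illustrative STRIPS-like encoding):

```python
def h_add(state, goal, actions):
    """Classical additive heuristic. Actions are (precondition facts,
    added facts, cost) triples; preconditions are valued additively."""
    facts = set(state) | set(goal)
    for pre, add, _ in actions:
        facts |= set(pre) | set(add)
    h = {f: (0 if f in state else float('inf')) for f in facts}
    changed = True
    while changed:                            # relax until a fixpoint
        changed = False
        for pre, add, cost in actions:
            h_pre = sum(h[f] for f in pre)    # additive precondition value
            for f in add:
                if h_pre + cost < h[f]:
                    h[f] = h_pre + cost
                    changed = True
    return sum(h[f] for f in goal)

# Two chained actions: reach 'c' from {'a'} via 'b' for cost 1 + 2 = 3.
acts = [(('a',), ('b',), 1), (('b',), ('c',), 2)]
print(h_add({'a'}, {'c'}, acts))  # 3
```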

  20–22. Relaxations with SDAC
     Example:
       a = ⟨⊤, x = 1⟩, cost_a = 2 − 2y
       b = ⟨⊤, y = 1⟩, cost_b = 1
       s = {x ↦ 0, y ↦ 0}
       h^add_s(y = 1) = 1, but h^add_s(x = 1) = ?
     Applying a directly in s costs 2 (state 00 → 10).
     Applying b first and then a costs 1 + 0 = 1 (states 00 → 01 → 11): cheaper!

  23–24. Relaxations with SDAC
     Idea: minimize over all situations in which a is applicable.
     Definition (additive heuristic h^add for SDAC):
       h^add_s(fact) = 0                                                      if fact ∈ s
                     = min_{achiever a of fact} [h^add_s(pre(a)) + Cost^s_a]  otherwise
     where
       Cost^s_a = min_{ŝ ∈ S_a} [cost_a(ŝ) + h^add_s(ŝ)].
     S_a is the set of partial states over the variables in the cost function;
     |S_a| is exponential in the number of variables in the cost function.
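The definition of Cost^s_a can be sketched as the (exponential) brute-force minimization it literally describes; the h^add values below are the ones used in the RPG walkthrough later in this section:

```python
import itertools

def cost_s(cost_fn, domains, h):
    """Brute-force Cost_a^s: minimize cost_a(s_hat) + h_add_s(s_hat) over all
    partial states s_hat over the cost function's variables. `h` maps a fact
    (var, val) to its h_add value in s."""
    variables = sorted(domains)
    best = float('inf')
    for values in itertools.product(*(domains[v] for v in variables)):
        s_hat = dict(zip(variables, values))
        value = cost_fn(s_hat) + sum(h[(v, s_hat[v])] for v in variables)
        best = min(best, value)
    return best

# The running example: cost_a = x * y^2 + z + 2.
INF = float('inf')
h = {('x', 0): 10, ('x', 1): 0, ('y', 0): 6, ('y', 1): INF, ('y', 2): 1,
     ('z', 0): 2, ('z', 1): 2}
domains = {'x': (0, 1), 'y': (0, 1, 2), 'z': (0, 1)}
print(cost_s(lambda s: s['x'] * s['y'] ** 2 + s['z'] + 2, domains, h))  # 9
```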

  25. Relaxations with SDAC
     Properties of h^add for SDAC:
     - Good: classical h^add on the compiled task = generalized h^add on the SDAC task.
     - Bad: exponential blow-up.
     Computing h^add for SDAC:
     - Option 1: compute classical h^add on the compiled task.
     - Option 2: compute Cost^s_a directly: plug EVMDDs as subgraphs into the
       relaxed planning graph (RPG) → efficient computation of h^add.

  26–29. Option 2: RPG Compilation
     cost_a = xy² + z + 2
     Turn the EVMDD into an AND/OR graph inside the RPG:
     - Variable (decision) nodes become ∨-nodes.
     - Weighted edges become ∧-nodes (here with weights +2 for the dangling edge,
       +0/+0 for x, +0/+1/+4 for y, and +0/+1 for z), followed by an ∨ output node.
     - Augment the graph with input nodes for the facts x = 0, x = 1, y = 0, y = 1,
       y = 2, z = 0, z = 1.
     - Add extra +0 ∧-nodes so that every path evaluates all variables completely.
     [Figure: the resulting AND/OR graph for cost_a.]

  30–39. Option 2: Computing Cost^s_a
     Insert the h^add values at the input nodes:
       h(y = 0) = 6, h(y = 1) = ∞, h(y = 2) = 1, h(x = 0) = 10, h(x = 1) = 0,
       h(z = 0) = 2, h(z = 1) = 2.
     Evaluate the nodes in topological order:
       ∧-node: sum of its parents' values plus its weight,
       ∨-node: minimum of its parents' values.
     Walkthrough for the example:
     - top ∧-node (+2): 2
     - x-branch ∧-nodes (+0): x = 0: 2 + 10 = 12, x = 1: 2 + 0 = 2
     - y-level ∧-nodes: on the x = 1 branch, 2 + 6 + 0 = 8, ∞, 2 + 1 + 4 = 7;
       on the x = 0 bypass (+0 nodes), 12 + 6 = 18, ∞, 12 + 1 = 13
     - z ∨-node: min(8, ∞, 7, 18, ∞, 13) = 7
     - z-level ∧-nodes: z = 0: 7 + 2 + 0 = 9, z = 1: 7 + 2 + 1 = 10
     - output ∨-node: min(9, 10) = 9

  40–43. Option 2: Computing Cost^s_a (ctd.)
     Cost^s_a = min_{ŝ ∈ S_a} [cost_a(ŝ) + h^add_s(ŝ)]
     Cross-check against direct minimization, for cost_a = xy² + z + 2 and
     ŝ = {x ↦ 1, y ↦ 2, z ↦ 0}:
       cost_a(ŝ)  = 1 · 2² + 0 + 2 = 6 (= 2 + 0 + 4 + 0 along the EVMDD path)
       h^add_s(ŝ) = 0 + 1 + 2 = 3
       Cost^s_a   = 6 + 3 = 9,
     which matches the value computed on the RPG compilation.
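The ∧/∨ evaluation rules can be sketched as a topological sweep over an explicit AND/OR graph; the node names and graph encoding below are illustrative, and the numbers reproduce the walkthrough for cost_a = xy² + z + 2:

```python
INF = float('inf')

def evaluate_rpg(nodes, order):
    """nodes: {name: (kind, weight, parent names)}. Kinds: 'input' (weight is
    the fact's h_add value), 'and' (sum of parents + weight), 'or' (min of
    parents). `order` must be topological."""
    val = {}
    for name in order:
        kind, weight, parents = nodes[name]
        if kind == 'input':
            val[name] = weight
        elif kind == 'and':
            val[name] = sum(val[p] for p in parents) + weight
        else:
            val[name] = min(val[p] for p in parents)
    return val

nodes = {
    'x=0': ('input', 10, ()), 'x=1': ('input', 0, ()),
    'y=0': ('input', 6, ()), 'y=1': ('input', INF, ()), 'y=2': ('input', 1, ()),
    'z=0': ('input', 2, ()), 'z=1': ('input', 2, ()),
    'top': ('and', 2, ()),
    'and_x0': ('and', 0, ('top', 'x=0')), 'and_x1': ('and', 0, ('top', 'x=1')),
    'and_y0': ('and', 0, ('and_x1', 'y=0')), 'and_y1': ('and', 1, ('and_x1', 'y=1')),
    'and_y2': ('and', 4, ('and_x1', 'y=2')),
    # Bypass +0 nodes so the x = 0 branch also evaluates y completely.
    'byp_y0': ('and', 0, ('and_x0', 'y=0')), 'byp_y1': ('and', 0, ('and_x0', 'y=1')),
    'byp_y2': ('and', 0, ('and_x0', 'y=2')),
    'or_z': ('or', 0, ('and_y0', 'and_y1', 'and_y2', 'byp_y0', 'byp_y1', 'byp_y2')),
    'and_z0': ('and', 0, ('or_z', 'z=0')), 'and_z1': ('and', 1, ('or_z', 'z=1')),
    'out': ('or', 0, ('and_z0', 'and_z1')),
}
order = ['x=0', 'x=1', 'y=0', 'y=1', 'y=2', 'z=0', 'z=1', 'top',
         'and_x0', 'and_x1', 'and_y0', 'and_y1', 'and_y2',
         'byp_y0', 'byp_y1', 'byp_y2', 'or_z', 'and_z0', 'and_z1', 'out']
print(evaluate_rpg(nodes, order)['out'])  # 9
```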

  44. Additive Heuristic
     RPG compilation:
     - one RPG subgraph per action in each layer,
     - connect the subgraphs with precondition graphs,
     - link the outputs to the next proposition layer.
     Good: classical h^add on the compiled task = generalized h^add on the SDAC
     task = the cost value computed using the RPG compilation.

  45. Section: Abstractions

  46. Abstraction Heuristics
     Question: Why consider abstraction heuristics?
     Answer: admissibility → optimality.

  47–50. Abstraction Heuristics
     [Figure: two concrete a-transitions with costs 1 and 2 are collapsed into a
     single abstract a-transition.]
     Question: What are the abstract action costs?
     Answer: For admissibility, the abstract cost of a should be
       cost_a(s_abs) = min_{concrete state s abstracted to s_abs} cost_a(s).
     Problem: the minimization ranges over exponentially many states.
     Aim: compute cost_a(s_abs) efficiently, given the EVMDD for cost_a(s).

  51–52. Cartesian Abstractions
     We will see: this is possible if the abstraction is Cartesian or coarser
     (which includes projections and domain abstractions).
     Definition (Cartesian abstraction): A set of states s_abs is Cartesian if it
     is of the form D̃_1 × ··· × D̃_n, where D̃_i ⊆ D_i for all i = 1, ..., n. An
     abstraction is Cartesian if all abstract states are Cartesian sets.
     [Seipp and Helmert, 2013]
     Intuition: the variables are abstracted independently
     → exploit this independence when computing abstract costs!

  53–58. Cartesian Abstractions
     Example (Cartesian abstraction): cost x + y + 1, Cartesian abstraction over
     x, y with D_x = D_y = {0, 1, 2}.
     [Figure: the 3 × 3 grid of concrete states xy partitioned into Cartesian
     abstract states, next to the EVMDD for x + y + 1 with the edges consistent
     with s_abs highlighted. Minimizing locally over the enabled edges at each
     decision node yields each abstract state's cost (min = 1, min = 3, and
     min = 4 for s_abs, whose concrete states have costs 4 and 5), i.e. exactly
     the cost of the cheapest concrete state in each abstract state.]

  59. Cartesian Abstractions
     What happens here? Or: why does the topological EVMDD traversal correctly
     compute cost_a(s_abs)?
     1. For each Cartesian state s_abs and each variable x, each value d ∈ D_x is
        either consistent with s_abs or not.
     2. This implies: at every decision node associated with variable x, some
        outgoing edges are enabled and the others are disabled, independently of
        all other decision nodes/variables.
     3. This allows local minimization over linearly many edges instead of global
        minimization over exponentially many paths in the EVMDD when computing
        minimum costs.
     → polynomial in the EVMDD size!
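The local minimization argument can be sketched as a shortest-path traversal of the EVMDD restricted to enabled edges (a minimal sketch, reusing the illustrative tuple-based EVMDD encoding):

```python
TERMINAL = None  # EVMDD node: None (terminal) or (var, [(weight, child), ...])

def abstract_cost(evmdd, allowed):
    """cost_a(s_abs) for a Cartesian abstract state, given as
    allowed = {var: set of values consistent with s_abs}."""
    kappa, root = evmdd
    memo = {}
    def min_to_terminal(node):
        if node is TERMINAL:
            return 0
        if id(node) not in memo:
            var, children = node
            # Local minimization: only edges enabled by s_abs, each visited once.
            memo[id(node)] = min(w + min_to_terminal(child)
                                 for val, (w, child) in enumerate(children)
                                 if val in allowed[var])
        return memo[id(node)]
    return kappa + min_to_terminal(root)

# EVMDD for cost = x + y + 1 over D_x = D_y = {0, 1, 2}:
y_node = ('y', [(0, TERMINAL), (1, TERMINAL), (2, TERMINAL)])
x_node = ('x', [(0, y_node), (1, y_node), (2, y_node)])
cost = (1, x_node)

print(abstract_cost(cost, {'x': {1, 2}, 'y': {2}}))  # min(4, 5) = 4
```

For the slide's example and the Cartesian set {1, 2} × {2}, the traversal returns the true minimum 4 while touching each EVMDD edge at most once.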

  60–61. Cartesian Abstractions: Not Cartesian!
     If the abstraction is not Cartesian, two variables can be independent in the
     cost function (→ compact EVMDD) but dependent in the abstraction.
     → We cannot consider independent parts of the EVMDD separately.
     Example (non-Cartesian abstraction): cost x + y + 1 with the abstract state
     s_abs = (x ≠ y). The true abstract cost is cost(s_abs) = 2, but local
     minimization yields 1 → an underestimate!
     [Figure: the 3 × 3 grid of states xy with s_abs = (x ≠ y), next to the EVMDD
     for x + y + 1.]

  62. Counterexample-Guided Abstraction Refinement
     Wanted: a principled way of computing Cartesian abstractions.
     → Counterexample-Guided Abstraction Refinement (CEGAR)
     [Figure: the CEGAR loop. Start from an initial abstraction and search for an
     abstract plan. If there is no abstract plan, the task is unsolvable.
     Otherwise, analyze the plan for flaws: if it has none, a plan is found; if it
     has flaws, refine the abstraction and repeat the search.]
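The loop in the figure can be sketched as a generic skeleton (hedged: search, find_flaw, and refine are illustrative callables, not an actual planner API):

```python
def cegar(initial_abstraction, search, find_flaw, refine):
    """Generic CEGAR loop: search for an abstract plan, check it for flaws
    (including cost-mismatch flaws in the SDAC setting), refine, repeat."""
    abstraction = initial_abstraction
    while True:
        plan = search(abstraction)
        if plan is None:
            return None              # abstract task unsolvable
        flaw = find_flaw(abstraction, plan)
        if flaw is None:
            return plan              # a flawless abstract plan is a real plan
        abstraction = refine(abstraction, flaw)

# Toy instance: the "abstraction" is a refinement counter; the abstract plan
# stops being flawed after three refinements.
plan = cegar(0,
             search=lambda a: ['a'],
             find_flaw=lambda a, p: None if a >= 3 else 'cost-mismatch',
             refine=lambda a, f: a + 1)
print(plan)  # ['a']
```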

  63–65. Counterexample-Guided Abstraction Refinement
     Possible flaws in an abstract plan:
     1. The concrete state does not fit the abstract state (the concrete and
        abstract traces diverge).
     2. An action is not applicable in the concrete state.
     3. The trace is completed, but the goal is not reached.
     Here, we need to consider a further type of flaw:
     4. Cost-mismatch flaw: an action is more costly in the concrete state than in
        the abstract state.
     → Resolve cost-mismatch flaws with additional refinement.

  66. Counterexample-Guided Abstraction Refinement

Example (Cost-mismatch flaw)

[Figure: abstract transition system over the states 00, 01, 10, 11, with transitions labeled a : 1 and b : 1]

a = ⟨⊤, x ∧ y⟩, cost_a = 2x + 1
b = ⟨⊤, ¬x ∧ y⟩, cost_b = 1
s₀ = 10, s⋆ = x ∧ y

Optimal abstract plan: ⟨a⟩ (abstract cost 1)
This is also a concrete plan (concrete cost 3)
But the optimal concrete plan is ⟨b, a⟩ (concrete and abstract cost 2)
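The cost bookkeeping in this example can be checked in a few lines of Python. This is a sketch of the arithmetic only, with states written as (x, y) truth values; it does not model the abstraction itself.

```python
# Example from the slide: cost_a(s) = 2*x + 1, cost_b(s) = 1.
# A state assigns 0/1 to the variables x and y.

def cost_a(x, y):
    return 2 * x + 1

def cost_b(x, y):
    return 1

# Initial state s0 = 10, i.e. x = 1, y = 0.
x, y = 1, 0

# Concrete cost of the abstract plan <a>: a is evaluated in s0.
cost_plan_a = cost_a(x, y)                 # 2*1 + 1 = 3

# Concrete cost of <b, a>: b first makes x false (effect ¬x ∧ y).
cost_b_step = cost_b(x, y)                 # 1
x, y = 0, 1                                # state after applying b
cost_plan_ba = cost_b_step + cost_a(x, y)  # 1 + (2*0 + 1) = 2

print(cost_plan_a, cost_plan_ba)           # 3 2
```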

  70. Section Summary

  71. Summary

Summary: EVMDDs
- compact representation of cost functions
- exhibit additive structure

Recall the motivating challenges:
- compiling SDAC away ⇝ solved! (EVMDD-based action compilation preserves h^add and h^abs)
- SDAC-aware h values ⇝ possible! (h^add via RPG embedding, Cartesian abstraction heuristics)

  72. Future Work

- Other delete-relaxation heuristics such as h^FF
- Static and dynamic EVMDD variable orders

  73. Part II: Practice

  74. Section Libraries

  75. EVMDD Libraries: MEDDLY

MEDDLY: Multi-terminal and Edge-valued Decision Diagram LibrarY
- Authors: Junaid Babar and Andrew Miner
- Language: C++
- License: open source (LGPLv3)
- Advantages: many different types of decision diagrams; mature and efficient
- Disadvantages: documentation
- Code: http://meddly.sourceforge.net

  76. EVMDD Libraries: pyevmdd

pyevmdd: EVMDD library for Python
- Authors: RM and FG
- Language: Python
- License: open source (GPLv3)
- Disadvantages: restricted to EVMDDs; neither mature nor optimized
- Purpose: our EVMDD playground
- Code: https://github.com/robertmattmueller/pyevmdd
- Documentation: http://pyevmdd.readthedocs.io/en/latest/

  77. Section PDDL

  78. PDDL Representation

Usual way of representing costs in PDDL:
- effects: (increase (total-cost) (<expression>))
- metric: (minimize (total-cost))

Custom syntax: besides :parameters, :precondition, and :effect, actions may have a field :cost (<expression>)
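The two styles can be illustrated on a small move action. This fragment is a sketch: the action, predicate, and fluent names are hypothetical and not taken from an actual domain file of the tutorial.

```pddl
;; Standard PDDL 3.1 action-costs style: cost as a numeric effect on
;; total-cost, minimized via the problem metric.
(:action move
  :parameters (?from ?to - room)
  :precondition (at-robby ?from)
  :effect (and (not (at-robby ?from)) (at-robby ?to)
               (increase (total-cost) (move-cost ?from ?to))))
;; In the problem file: (:metric minimize (total-cost))

;; Custom syntax from the slide: a :cost field instead of a cost effect.
(:action move
  :parameters (?from ?to - room)
  :precondition (at-robby ?from)
  :effect (and (not (at-robby ?from)) (at-robby ?to))
  :cost (move-cost ?from ?to))
```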

  79. Gripper

[Figure: initial state and goal state of the Gripper task]

  80. Colored Gripper

[Figure: initial state and goal state, now with colored rooms and balls]

- Colored rooms and balls
- Cost of move increases if a ball's color differs from its room's color
- Goal did not change!

cost(move) = ∑_room ∑_ball (at(ball, room) ∧ red(ball) ∧ blue(room))
           + ∑_room ∑_ball (at(ball, room) ∧ blue(ball) ∧ red(room))
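The cost expression above can be sketched in Python. Object and predicate names here are illustrative, not from the tutorial's domain files.

```python
# Colored-gripper move cost: one unit per ball whose color differs from
# the color of its room (red ball in blue room, or blue ball in red room).
# All names are illustrative.

def move_cost(at, ball_color, room_color):
    """at: set of (ball, room) pairs currently true."""
    return sum(1 for (ball, room) in at
               if ball_color[ball] != room_color[room])

# Two balls: b1 is red in a blue room (mismatch), b2 matches its room.
at = {("b1", "left"), ("b2", "right")}
ball_color = {"b1": "red", "b2": "blue"}
room_color = {"left": "blue", "right": "blue"}
print(move_cost(at, ball_color, room_color))  # only b1 mismatches -> 1
```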

  82. EVMDD-Based Action Compilation

Example (EVMDD-based action compilation)

Let a = ⟨pre, eff⟩, cost_a = xy² + z + 2.

Auxiliary variables:
- One semaphore variable σ with D_σ = {0, 1} for the entire planning task.
- One auxiliary variable α = α_a with D_α_a = {0, 1, 2, 3, 4} for action a.

Replace a by new auxiliary actions (similarly for other actions).
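The EVMDD underlying this cost function can be sketched in plain Python. The node layout and edge weights below are hand-built for this one example (they are not produced by MEDDLY or pyevmdd), so treat the structure as illustrative.

```python
# Toy EVMDD for cost_a = x*y**2 + z + 2 (x, z binary; y in {0, 1, 2}).
# Layout and edge weights are hand-built for illustration only.

# A node is (variable, edges); edges[v] = (weight, child) for value v.
# The terminal node is None.
terminal = None
z_node = ("z", [(0, terminal), (1, terminal)])           # contributes z
y_node = ("y", [(0, z_node), (1, z_node), (4, z_node)])  # contributes y**2
x_node = ("x", [(0, z_node), (0, y_node)])               # x = 0 skips y

def evaluate(node, state, acc=0):
    """Sum the edge weights along the path selected by the state."""
    if node is None:
        return acc
    var, edges = node
    weight, child = edges[state[var]]
    return evaluate(child, state, acc + weight)

def cost_a(state):
    # The constant 2 sits on the dangling edge into the root node.
    return 2 + evaluate(x_node, state)

print(cost_a({"x": 1, "y": 2, "z": 1}))  # 1*4 + 1 + 2 = 7
```

Each edge on a root-to-terminal path corresponds to one partial cost, which is exactly what the compilation charges to an auxiliary action.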
