  1. What I won’t talk about Luc De Raedt (KULeuven) Dagstuhl Seminar on ML and Formal Methods September 2017

  2. TaCLe: Learning constraints in spreadsheets and tabular data. Sergey Paramonov, Samuel Kolb, Tias Guns, Luc De Raedt, KU Leuven (Machine Learning 2017, ECML PKDD track; CIKM 2017 demo track)

  3. Reverse Engineering Formulae / Constraints Illustration

  4. Constraints learned for the tables in the illustration (19 ALLDIFFERENT, 2 PERMUTATION, 5 FOREIGNKEY and 5 ASCENDING constraints are not shown; constraints marked with * were not present in the original spreadsheets):
  • SERIES(T1[:,1])
  • T1[:,1] = RANK(T1[:,5]) *
  • T1[:,1] = RANK(T1[:,6]) *
  • T1[:,1] = RANK(T1[:,10]) *
  • T1[:,8] = RANK(T1[:,7])
  • T1[:,8] = RANK(T1[:,3]) *
  • T1[:,8] = RANK(T1[:,4]) *
  • T1[:,7] = SUM_row(T1[:,3:6])
  • T1[:,10] = SUMIF(T3[:,1], T1[:,2], T3[:,2])
  • T1[:,11] = MAXIF(T3[:,1], T1[:,2], T3[:,2])
  • T2[1,:] = SUM_col(T1[:,3:7])
  • T2[2,:] = AVERAGE_col(T1[:,3:7])
  • T2[3,:] = MAX_col(T1[:,3:7])
  • T2[4,:] = MIN_col(T1[:,3:7])
  • T4[:,2] = SUM_col(T1[:,3:6])
  • T4[:,4] = PREV(T4[:,4]) + T4[:,2] - T4[:,3]
  • T5[:,2] = LOOKUP(T5[:,3], T1[:,2], T1[:,1]) *
  • T5[:,3] = LOOKUP(T5[:,2], T1[:,1], T1[:,2])
  We are working on learning constraints and CSPs.
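
  TaCLe essentially instantiates candidate constraint templates (SUM, RANK, LOOKUP, ...) over blocks of rows and columns and keeps those that hold on every cell of the data. A minimal sketch of one such check, for the row-wise SUM template (this is not the TaCLe code; the function and the toy table are made up for illustration):

      import numpy as np

      def holds_sum_row(table, target_col, source_cols, tol=1e-9):
          # Does the target column equal the row-wise sum of the source columns?
          return np.allclose(table[:, target_col],
                             table[:, source_cols].sum(axis=1), atol=tol)

      # Toy table: the last column stores the row-wise sum of the first three.
      T = np.array([[1.0, 2.0, 3.0,  6.0],
                    [4.0, 5.0, 6.0, 15.0]])
      print(holds_sum_row(T, target_col=3, source_cols=[0, 1, 2]))   # True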

  5. What I will talk about Luc De Raedt (KULeuven) Dagstuhl Seminar on ML and Formal Methods August 2017

  6. Dynamic Probabilistic Logic Programs Luc De Raedt (KULeuven) Dagstuhl Seminar on ML and Formal Methods August 2017

  7. Dynamics: Evolving Networks
  • Travian: a massively multiplayer real-time strategy game
  • Commercial game run by TravianGames GmbH
  • ~3,000,000 players spread over different “worlds”
  • ~25,000 players in one world
  [Thon et al., MLJ 11]

  8. World Dynamics
  [Figure: fragment of the game world with ~10 alliances, ~200 players, ~600 cities; alliances colour-coded]
  Can we build a model of this world? Can we use it for playing better?
  [Thon, Landwehr, De Raedt, ECML 08]

  9. World Dynamics
  [Figure: the same world fragment at a later point in time; players and city sizes have changed]

  10. World Dynamics
  [Figure: the same world fragment at a still later point in time, showing the network evolving]

  11. Learning relational affordances
  Learn a probabilistic model from two-object interactions; generalize to N objects.
  [Figure: shelf scenarios with grasp, tap and push actions]
  [Moldovan et al., ICRA 12, 13, 14; PhD 15. Nitti et al., MLJ 15, 17 (forthcoming); PhD 16]

  12. ProbLog by example: a bit of gambling
  • Toss a (biased) coin and draw a ball from each urn
  • Win if (heads and a red ball) or (two balls of the same color)

      % probabilistic fact: heads is true with probability 0.4 (and false with 0.6)
      0.4 :: heads.
      % annotated disjunction: the first ball is red with probability 0.3 and blue with 0.7
      0.3 :: col(1,red); 0.7 :: col(1,blue) <- true.
      % annotated disjunction: the second ball is red with probability 0.2, green with 0.3, and blue with 0.5
      0.2 :: col(2,red); 0.3 :: col(2,green); 0.5 :: col(2,blue) <- true.
      % logical rules encoding the consequences (background knowledge)
      win :- heads, col(_,red).
      win :- col(1,C), col(2,C).
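
  One way to run this example outside the slides is the ProbLog 2 Python package (a minimal sketch, assuming pip install problog; the marginal works out to P(win) = 0.562):

      from problog import get_evaluatable
      from problog.program import PrologString

      model = PrologString("""
      0.4 :: heads.
      0.3 :: col(1,red); 0.7 :: col(1,blue) <- true.
      0.2 :: col(2,red); 0.3 :: col(2,green); 0.5 :: col(2,blue) <- true.
      win :- heads, col(_,red).
      win :- col(1,C), col(2,C).
      query(win).
      """)

      # Compile the program to a weighted circuit and evaluate all queries.
      print(get_evaluatable().create_from(model).evaluate())   # {win: 0.562}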

  13. Possible Worlds

      0.4 :: heads.
      0.3 :: col(1,red); 0.7 :: col(1,blue) <- true.
      0.2 :: col(2,red); 0.3 :: col(2,green); 0.5 :: col(2,blue) <- true.
      win :- heads, col(_,red).
      win :- col(1,C), col(2,C).

  Three example worlds, one choice per probabilistic fact / annotated disjunction:
  • tails, col(1,red), col(2,green): probability (1 - 0.4) × 0.3 × 0.3, no win
  • tails, col(1,red), col(2,red): probability (1 - 0.4) × 0.3 × 0.2, win (same color)
  • heads, col(1,red), col(2,green): probability 0.4 × 0.3 × 0.3, win (heads and a red ball)
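
  Because there are only 2 × 2 × 3 = 12 worlds, the semantics can be checked by brute force. A small pure-Python sketch (not from the slides) that enumerates the worlds and recovers P(win):

      from itertools import product

      coin  = [(True, 0.4), (False, 0.6)]                     # heads / tails
      ball1 = [("red", 0.3), ("blue", 0.7)]                   # col(1,_)
      ball2 = [("red", 0.2), ("green", 0.3), ("blue", 0.5)]   # col(2,_)

      p_win = 0.0
      for (heads, p1), (c1, p2), (c2, p3) in product(coin, ball1, ball2):
          p = p1 * p2 * p3
          # win :- heads, col(_,red).      win :- col(1,C), col(2,C).
          win = (heads and "red" in (c1, c2)) or c1 == c2
          print(f"heads={heads} col1={c1} col2={c2} p={p:.3f} win={win}")
          if win:
              p_win += p

      print("P(win) =", round(p_win, 3))   # 0.562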

  14. Questions

      0.4 :: heads.
      0.3 :: col(1,red); 0.7 :: col(1,blue) <- true.
      0.2 :: col(2,red); 0.3 :: col(2,green); 0.5 :: col(2,blue) <- true.
      win :- heads, col(_,red).
      win :- col(1,C), col(2,C).

  • Probability of win? (marginal probability)
  • Probability of win given col(2,green)? (conditional probability)
  • Most probable world where win is true? (MPE inference)
  Inference is #P-complete; it reduces to weighted model counting.
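
  For the conditional query, the same ProbLog interface accepts evidence atoms (again a sketch under the same assumptions; given col(2,green), win requires heads and a red first ball, so the answer should be 0.4 × 0.3 = 0.12):

      from problog import get_evaluatable
      from problog.program import PrologString

      model = PrologString("""
      0.4 :: heads.
      0.3 :: col(1,red); 0.7 :: col(1,blue) <- true.
      0.2 :: col(2,red); 0.3 :: col(2,green); 0.5 :: col(2,blue) <- true.
      win :- heads, col(_,red).
      win :- col(1,C), col(2,C).
      evidence(col(2,green)).
      query(win).
      """)

      # P(win | col(2,green)); should come out as 0.12.
      print(get_evaluatable().create_from(model).evaluate())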

  15. Distributional Clauses (DC)
  • Discrete- and continuous-valued random variables

      % random variable with a Gaussian distribution
      length(Obj) ~ gaussian(6.0,0.45) :- type(Obj,glass).

      % comparing the values of random variables (≃X denotes the value of X)
      stackable(OBot,OTop) :-
          ≃length(OBot) ≥ ≃length(OTop),
          ≃width(OBot) ≥ ≃width(OTop).

      % random variable with a discrete (finite) distribution
      ontype(Obj,plate) ~ finite([0 : glass, 0.0024 : cup, 0 : pitcher, 0.8676 : plate,
                                  0.0284 : bowl, 0 : serving, 0.1016 : none]) :-
          obj(Obj), on(Obj,O2), type(O2,plate).

  [Gutmann et al., TPLP 11; Nitti et al., IROS 13]
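
  A distributional clause can be read generatively: whenever the body holds, the head's random variable is drawn from the stated distribution. A rough pure-Python rendering of that reading for the clauses above (the object names, the widths and the standard-deviation interpretation of 0.45 are assumptions made for illustration; this is not the DC engine):

      import random

      # Hypothetical world: two glasses.
      objects = {"o1": "glass", "o2": "glass"}

      def sample_length(obj):
          # length(Obj) ~ gaussian(6.0, 0.45) :- type(Obj, glass).
          # (0.45 is treated as a standard deviation here; DC may define it as a variance.)
          if objects[obj] == "glass":
              return random.gauss(6.0, 0.45)
          raise ValueError("no length distribution defined for this object type")

      def stackable(o_bot, o_top, length, width):
          # stackable(OBot,OTop) :- ≃length(OBot) >= ≃length(OTop), ≃width(OBot) >= ≃width(OTop).
          return length[o_bot] >= length[o_top] and width[o_bot] >= width[o_top]

      lengths = {o: sample_length(o) for o in objects}   # one sampled world
      widths  = {"o1": 8.0, "o2": 7.0}                   # assumed, to make the rule checkable
      print(lengths, stackable("o1", "o2", lengths, widths))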

  16. Magnetic scenario
  ● 3 object types: magnetic, ferromagnetic, nonmagnetic
  ● Nonmagnetic objects do not interact
  ● A magnet and a ferromagnetic object attract each other
  ● The magnetic force depends on the distance
  ● If an object is held, the magnetic force is compensated.

  17. Magnetic scenario

      % 3 object types: magnetic, ferromagnetic, nonmagnetic
      type(X)_t ~ finite([1/3:magnet, 1/3:ferromagnetic, 1/3:nonmagnetic]) ← object(X).

      % 2 magnets attract or repulse
      interaction(A,B)_t ~ finite([0.5:attraction, 0.5:repulsion]) ←
          object(A), object(B), A < B, type(A)_t = magnet, type(B)_t = magnet.

      % next position after attraction
      pos(A)_t+1 ~ gaussian(middlepoint(A,B)_t, Cov) ←
          near(A,B)_t, not(held(A)), not(held(B)),
          interaction(A,B)_t = attr, c/dist(A,B)_t^2 > friction(A)_t.

      pos(A)_t+1 ~ gaussian(pos(A)_t, Cov) ← not(attraction(A,B)).
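
  Read operationally, each dynamic clause samples part of the state at time t+1 from the state at time t. A toy pure-Python step function in that spirit (the constants, the friction values and all helper names are invented; near(A,B) is folded into the force test for brevity; this is not the DDC engine):

      import math
      import random

      C   = 1.0     # assumed force constant (the c in c/dist^2)
      COV = 0.01    # assumed variance of the position noise

      def dist(p, q):
          return math.hypot(p[0] - q[0], p[1] - q[1])

      def step(pos, types, interaction, held, friction):
          # Sample positions at t+1 for objects "a" and "b" given the state at t.
          new_pos = {}
          for a, b in [("a", "b"), ("b", "a")]:
              attracted = (
                  types[a] == "magnet" and types[b] == "magnet"
                  and interaction == "attraction"
                  and not held[a] and not held[b]
                  and C / dist(pos[a], pos[b]) ** 2 > friction[a]
              )
              if attracted:
                  # pos(A)_t+1 ~ gaussian(middlepoint(A,B)_t, Cov)
                  mean = ((pos[a][0] + pos[b][0]) / 2, (pos[a][1] + pos[b][1]) / 2)
              else:
                  # pos(A)_t+1 ~ gaussian(pos(A)_t, Cov)
                  mean = pos[a]
              new_pos[a] = tuple(random.gauss(m, math.sqrt(COV)) for m in mean)
          return new_pos

      print(step(pos={"a": (0.0, 0.0), "b": (0.3, 0.0)},
                 types={"a": "magnet", "b": "magnet"},
                 interaction="attraction",
                 held={"a": False, "b": False},
                 friction={"a": 5.0, "b": 5.0}))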

  18.

  19. Learning relational affordances
  Learn a probabilistic model from two-object interactions; generalize to N objects.
  [Figure: shelf scenarios with grasp, tap and push actions]
  [Moldovan et al., ICRA 12, 13, 14; PhD 15. Nitti et al., MLJ 15, 17 (forthcoming); PhD 16]

  20. What is an affordance?
  [Figure: relational object configuration O before (left) and effects E after the action execution (right)]
  Table 1: Example of collected (O, A, E) data for the action in the figure

      Object properties (O)              Action (A)    Effects (E)
      shape_OMain: sprism                              displX_OMain: 10.33 cm
      shape_OSec:  sprism                tap(10)       displY_OMain: -0.68 cm
      distX_OMain,OSec: 6.94 cm                        displX_OSec:  7.43 cm
      distY_OMain,OSec: 1.90 cm                        displY_OSec: -1.31 cm

  • A formalism related to STRIPS / PDDL, but it models deltas (changes)
  • Also a joint probability model over A, E, O

  21. Relational Affordance Learning
  Learning the structure of dynamic hybrid relational models
  ● Nitti, Ravkic, et al., ECAI 2016
    - Captures relations / affordances
    - Suited to learning affordances in a robotics set-up, with continuous and discrete variables
    - Planning in hybrid robotics domains
  [Figure: learned DDC tree, rooted at action(X)]

  22. Planning [Nitti et al ECML 15]

  23. Conclusions
  • Static version (~ probabilistic data and knowledge bases)
  • The dynamic formalism is related to PDDL, can represent relational MDPs, can be learned (for small problems) and can be used for planning; work on scaling up continues
  • We can learn these formalisms

  24. Questions that I have
  • What kinds of relationships exist between PCTL and probabilistic planning?
  • What verification techniques work with relational worlds? (relational MDPs versus propositional ones)
  • I'd like to impose constraints on what is being learned … how do I do that?

  25. Curse of Dimensionality
  [Figure: blocks-world states connected by actions such as move(e,c), move(e,floor), move(c,e)]
  • Flat representation
  • No notion of objects and of relations among the objects
  • Generalization (similar situations / individuals)?
  • Parameter reduction / compression?
    - e.g. on(a,b) for 10 blocks: fewer than 150 ground values, versus 58,941,091 states

      #blocks    #states
      3          13
      5          501
      8          394,353
      10         58,941,091
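
  One way to reproduce these state counts, assuming each state is a partition of the labelled blocks into ordered stacks (the counts then follow the "sets of lists" recurrence, OEIS A000262; this framing is an addition, not from the slide):

      def blocks_world_states(n):
          # Number of ways to arrange n labelled blocks into any number of stacks.
          a = [1, 1]                                   # a(0), a(1)
          for k in range(2, n + 1):
              a.append((2 * k - 1) * a[k - 1] - (k - 1) * (k - 2) * a[k - 2])
          return a[n]

      for n in (3, 5, 8, 10):
          print(n, blocks_world_states(n))             # 13, 501, 394353, 58941091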
