
Representing Knowledge Dustin Smith MIT Media Lab July 2008 - PowerPoint PPT Presentation

Representing Knowledge. Dustin Smith, MIT Media Lab, July 2008. Commonsense Computing, MIT Media Lab. Big questions: How to get it? How to use it?


  1. Representing Relations Math. Equivalently, we could use a bit string. Primitive types, as bit positions: 2^0 = INSECT, 2^1 = ALIVE, 2^2 = MOSTLY-WATER, 2^3 = ARTIST, 2^4 = WORKS-IN-E15-320G, 2^5 = STUDIES-AI. Primitive types, as primes: 1 = INSECT, 2 = ALIVE, 3 = MOSTLY-WATER, 5 = IS-STRUGGLING-ARTIST, 7 = WORKS-IN-E15-320G, 11 = STUDIES-AI. Composite types: DUSTIN = 110110 as a bit string, or DUSTIN = 462 = 2 × 3 × 7 × 11 as a product of primes. Operations: a ∩ b = logical AND; a ∪ b = logical OR; a <= b (a is subsumed by b) if b[i] = 1 implies a[i] = 1. Note: these markers only represent conjunctions (ANDs)!
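The slide's two equivalent encodings can be sketched in a few lines of Ruby. The primitive names and bit positions follow the slide; everything else (function names, the `artist` example value) is illustrative.

```ruby
# Bit-string encoding of primitive types, positions as on the slide.
PRIMITIVES = [:insect, :alive, :mostly_water, :artist,
              :works_in_e15_320g, :studies_ai]

# Encode a conjunction of primitive types as an integer bit string.
def encode(*types)
  types.reduce(0) { |bits, t| bits | (1 << PRIMITIVES.index(t)) }
end

dustin = encode(:alive, :mostly_water, :works_in_e15_320g, :studies_ai)
puts dustin.to_s(2)   # => 110110, as on the slide

artist = encode(:alive, :artist)
shared = dustin & artist   # intersection = bitwise AND
either = dustin | artist   # union        = bitwise OR

# a <= b (a is subsumed by b) when every primitive bit of b is set in a.
def subsumes?(a, b)
  (a & b) == b
end

# The equivalent prime-product encoding: DUSTIN = 2 * 3 * 7 * 11.
puts 2 * 3 * 7 * 11   # => 462
```

Set operations on types reduce to single machine instructions, which is the appeal of both encodings.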

  3. Representing Relations Math. People tried this! - Masterman (1961) defined 15,000 words in terms of 1,000 primitives. - Schank (1975) reduced the number of primitive acts to 11. But: - There is no linguistic or psychological evidence for a universal set of primitives. - Languages have families of synonyms (glad, happy, cheerful) with slightly different meanings -- not disjoint either-or groups.

  5. Representing Items Math. Propositional Logic. Sentences are statements that have a truth value: S → { true, false }. P = it will rain today. Q = Dustin will carry an umbrella. Does P → Q?

  6. Representing Items Math. Logic: Sentential Connectives. Not: ¬P ("not P"). And: P ∧ Q ("P and Q"). Or: P ∨ Q ("P or Q"). Implies: P → Q ("if P then Q"; "Q if P"; Q is a necessary condition of P; P is a sufficient condition of Q). Biconditional: P ↔ Q (P is a sufficient and necessary condition of Q, and Q is a sufficient and necessary condition of P).

  7. Representing Items Math. Logic: Inference. Inference is deriving new knowledge from old knowledge. Deduction is a set of rules for truth-preserving transformations over logical statements.

  8. Representing Items Math. Propositional Logic, rules of inference. Valid proposition: "If elephants have wings, then 2 + 2 = 5." (An implication with a false antecedent is vacuously true.)
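The vacuous-truth point is easy to check mechanically. A minimal sketch (the helper name `implies` is mine, not the slide's):

```ruby
# Material implication: P -> Q is logically equivalent to (not P) or Q,
# so a conditional with a false antecedent is vacuously true.
def implies(p, q)
  !p || q
end

# "If elephants have wings, then 2 + 2 = 5": both parts are false,
# yet the implication as a whole is true.
puts implies(false, false)   # => true

# Full truth table: false only when P is true and Q is false.
[true, false].product([true, false]).each do |p, q|
  puts "#{p} -> #{q} is #{implies(p, q)}"
end
```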

  9. Representing Items CS/AI. Some representational ideas are built into the semantics of the programming language (OOP, von Neumann, functional), and the programmer can also extend them. Abstraction allows us to name increasingly complicated procedures and data types -- hiding the implementation complexity behind a simple name, separating the representation from the function. Example: a Dog object exposes feed(*food), bark(), pet(*instrument), pee(), and name, while its weight stays hidden from you.
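The Dog object from the slide, sketched in Ruby to show the representation/interface split (the weight value and method bodies are placeholders, not from the slide):

```ruby
# Callers see a small public interface (feed, bark, name) while the
# representation (weight) is hidden behind it.
class Dog
  attr_reader :name

  def initialize(name)
    @name = name
    @weight = 10        # hidden state: no reader method is exposed
  end

  def feed(food)
    @weight += 1        # implementation detail, invisible from outside
    "#{@name} ate #{food}"
  end

  def bark
    "Woof!"
  end
end

rex = Dog.new("Rex")
puts rex.bark                   # => Woof!
puts rex.respond_to?(:weight)   # => false -- the representation stays hidden
```

Swapping the internal representation (say, storing weight in grams) changes nothing for callers, which is the point of the abstraction.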

  10. Representing Items CS/AI. Inheritance:
  class Animal
    gender = FEMALE
  end
  class Dog < Animal
    name, weight = "", 0
    bark(); pee()
  end
  The Dog object inherits gender from Animal, hidden from you; name, weight, bark(), and pee() are its own.
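The slide's inheritance sketch can be made runnable. The attribute choices follow the slide; the constructor shape and method bodies are my additions:

```ruby
# Dog inherits gender from Animal and adds its own state and behavior.
class Animal
  attr_reader :gender
  def initialize
    @gender = :female   # the slide's default, inherited by subclasses
  end
end

class Dog < Animal
  attr_reader :name, :weight
  def initialize(name)
    super()
    @name = name
    @weight = 0
  end

  def bark
    "Woof!"
  end
end

d = Dog.new("Fido")
puts d.gender   # => female -- inherited from Animal
puts d.bark     # => Woof!  -- defined on Dog
```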

  12. Representing Items Language. Three main views of category representations: 1. Sufficient and necessary conditions / logic (but categories like "game" have no set of common properties). 2. Exemplars: all instances stored. 3. Prototypes: one best representative.
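The exemplar and prototype views differ in what is stored and what is compared. A sketch on invented 2-D features (the feature values and distance choice are purely illustrative):

```ruby
# Stored exemplars of a toy "dog" category.
DOGS = [[0.9, 0.2], [0.8, 0.3], [0.7, 0.1]]

def dist(a, b)
  Math.sqrt(a.zip(b).sum { |x, y| (x - y)**2 })
end

# Exemplar view: a new item is judged by its closest stored instance.
def exemplar_score(item)
  DOGS.map { |e| dist(item, e) }.min
end

# Prototype view: only one averaged representative is kept and compared.
PROTOTYPE = DOGS.transpose.map { |col| col.sum / col.size }

def prototype_score(item)
  dist(item, PROTOTYPE)
end
```

Exemplar storage grows with experience and can capture odd category members; the prototype is compact but blurs them together.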

  13. Representing Items Language. [Figure: exemplars vs. prototypes]

  14. Representing Items Language. In language, items are words; objects are like nouns. Their meaning is context-dependent. Words have various semantic traits when interacting with other words.

  15. Representing Items Language. Semantic trait: degree and mode of participation: 1. criterial (textual entailment: "It's a dog" entails "It's an animal"), 2. expected, 3. possible, 4. unexpected, 5. excluded ("It's a dog" entails it is not "a fish").

  16. Representing Items Language. Semantic trait: degree and mode of participation: 1. criterial, 2. expected, 3. possible, 4. unexpected, 5. excluded. Expected/unexpected traits: "It's a dog, but it can bark." vs. "It's a dog, but it can't bark."; "It's a dog, but it can sing." vs. "It's a dog, but it can't sing." These are expressive paradoxes.

  18. Representing Items Language. Semantic trait: degree and mode of participation. If both the "but it is" and "but it isn't" forms are expressive paradoxes, then the trait is "possible": "It's a dog and it's brown" (normal); "It's a dog, but it's brown" (why shouldn't it be?).

  19. Representing Items Learning Relations Combining Processes

  21. Learning Items Philo. Deduction: derive new knowledge by exploiting the structure of old knowledge, by applying inference meta-rules (modus ponens). All Splash Students are Smart. John is a Splash Student. Therefore, John is Smart.
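The Splash syllogism can be run as a tiny forward chainer: facts plus condition/conclusion rules, with modus ponens applied until nothing new follows. The propositional encoding (ground facts rather than a quantified rule) is a simplification I've made for the sketch:

```ruby
# "All Splash Students are Smart", instantiated for John as a
# premise -> conclusion rule; modus ponens fires it.
rules = [
  ["john_is_a_splash_student", "john_is_smart"],
]
facts = ["john_is_a_splash_student"]

# Repeatedly fire any rule whose premise is an established fact.
loop do
  derived = rules.select { |premise, _| facts.include?(premise) }
                 .map { |_, conclusion| conclusion } - facts
  break if derived.empty?
  facts.concat(derived)
end

p facts   # now includes "john_is_smart"
```

Deduction here is truth-preserving: if the premises hold, every derived fact must hold too.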

  24. Learning Items Philo. Induction: learning generalizations to fit data. John is Smart, in Splash, and 10 years old. Lisa is Smart, in Splash, and 10 years old. Joe is Smart, in Splash, and 10 years old. Candidate generalizations: People in Splash are Smart. People named John, Lisa, or Joe are smart? People 10 years old are smart.
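One naive induction strategy is to keep exactly the attribute values shared by every example, which shows why the data above underdetermine the hypothesis. A sketch (the hash encoding of the examples is mine):

```ruby
# Generalize by intersecting the attribute/value pairs of all examples.
examples = [
  { name: "John", smart: true, in_splash: true, age: 10 },
  { name: "Lisa", smart: true, in_splash: true, age: 10 },
  { name: "Joe",  smart: true, in_splash: true, age: 10 },
]

shared = examples.reduce { |a, b| a.select { |k, v| b[k] == v } }
p shared   # name differs across examples, so it drops out
```

The result still conflates "in Splash", "age 10", and "smart": the data alone cannot say which attribute does the explanatory work, which is the bias problem on the next slide.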

  28. Learning Items Philo. Problems of induction: - Induction is never justified. Hume's problem: will the sun rise tomorrow? Why assume the future is like the past? - Inductions are always biased. A priori, are all hypotheses equally likely? - Accidental versus law-like hypotheses: which properties can be generalized to larger classes?

  29. Learning Items Philo. Concept Learning. Functionally, a concept is a mental representation that divides the world into positive and negative classes: f(x) → { true, false }. Example concept: chair.
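Functionally, then, a concept is just a classifier. A toy "chair" predicate (the conjunctive definition is invented for illustration; real concepts like "chair" famously resist such definitions):

```ruby
# A concept as f(x) -> {true, false} over attribute hashes.
chair = ->(x) { x[:legs] == 4 && x[:has_seat] && x[:has_back] }

puts chair.call({ legs: 4, has_seat: true, has_back: true })    # => true
puts chair.call({ legs: 0, has_seat: true, has_back: false })   # => false
```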

  31. Bruner, Goodnow, Austin (1956) slides from Josh Tenenbaum’s 9.66

  32. Describing the Microworld. - Shapes = { , , } (three shapes) - Number = { 1, 2, 3 } - Texture = { Shaded, Light, Dark } - Frame = { Single, Double, Triple }. Number of concepts = | Shapes | × | Number | × | Texture | × | Frame | = 3 × 3 × 3 × 3 = 81. Bruner, Goodnow, Austin (1956)
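The 81 microworld stimuli are just the Cartesian product of the four attributes. A sketch (the shape names are my placeholders; the original glyphs did not survive extraction):

```ruby
# One stimulus per combination of the four three-valued attributes.
SHAPES  = [:circle, :square, :cross]   # assumed names, glyphs lost
NUMBER  = [1, 2, 3]
TEXTURE = [:shaded, :light, :dark]
FRAME   = [:single, :double, :triple]

stimuli = SHAPES.product(NUMBER, TEXTURE, FRAME)
puts stimuli.size    # => 81, i.e. 3 x 3 x 3 x 3
p stimuli.first      # one stimulus is a 4-tuple of attribute values
```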

  36. [Figure sequence from Josh Tenenbaum's 9.66: positive examples (+) presented one at a time; the concept induced is "striped and three borders".]

  46. Describing the Microworld. - Shapes = { , , } (three shapes) - Number = { 1, 2, 3 } - Texture = { Shaded, Light, Dark } - Frame = { Single, Double, Triple }. Number of percepts = | Shapes | × | Number | × | Texture | × | Frame | = 3 × 3 × 3 × 3 = 81. Bruner, Goodnow, Austin (1956)

  50. generalization lattice
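A generalization lattice over conjunctive hypotheses can be sketched directly: each attribute of an observed stimulus is either kept fixed or generalized to "any", here written `:*`. The attribute values and the wildcard convention are my assumptions:

```ruby
# One observed stimulus over the four microworld attributes.
observed = [:circle, 3, :shaded, :triple]

# Every generalization fixes or wildcards each of the 4 attributes,
# so one stimulus sits under 2^4 = 16 hypotheses in the lattice.
generalizations = (0..3).reduce([[]]) do |hyps, i|
  hyps.flat_map { |h| [h + [observed[i]], h + [:*]] }
end
puts generalizations.size   # => 16

# h1 is at least as general as h2 when it matches everywhere h2 does.
def more_general?(h1, h2)
  h1.zip(h2).all? { |a, b| a == :* || a == b }
end
```

Ordering hypotheses by `more_general?` gives the lattice structure: `[:*, :*, :*, :*]` at the top, the fully specified stimulus at the bottom.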

  51. Occam's Razor. Favor the simple hypothesis when multiple ones fit the data: f(x) = θ₁x + θ₂; f(x) = θ₁x² + θ₂x + θ₃; f(x) = θ₁x⁷ + θ₂x⁶ + θ₃x⁵ + θ₄x⁴ + θ₅x³ + θ₆x² + θ₇x + θ₈.
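The contrast can be made concrete: a degree-7 polynomial (8 parameters θ₁…θ₈) passes exactly through any 8 data points, so a perfect fit by the complex model carries little evidence, whereas a good fit by the 2-parameter line does. A sketch using Lagrange interpolation (the data points are invented for illustration):

```ruby
# Build the unique degree-(n-1) polynomial through n points.
def lagrange(points)
  ->(x) {
    points.each_with_index.sum do |(xi, yi), i|
      others = points.each_with_index.reject { |_, j| j == i }
                     .map { |(xj, _), _| xj }
      yi * others.reduce(1.0) { |acc, xj| acc * (x - xj) / (xi - xj) }
    end
  }
end

pts = [[0, 3], [1, -1], [2, 4], [3, 0], [4, 2], [5, -5], [6, 1], [7, 7]]
f = lagrange(pts)

# The degree-7 fit is exact no matter how arbitrary the data are --
# which is exactly why it tells us so little.
pts.each { |x, y| raise unless (f.call(x) - y).abs < 1e-6 }
puts "degree-7 polynomial fits all 8 arbitrary points exactly"
```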
