
History of AI, Current Trends, Prospective Trajectories — Winter Academy on Artificial Intelligence and International Law, Asser Institute, 11 February 2019. Giovanni Sileno g.sileno@uva.nl — What is Artificial Intelligence?


  1–3. Who was at the Dartmouth Workshop (1956)? ● A remarkable group of ~20 scientists and engineers, including: – John McCarthy (LISP language, situation calculus, non-monotonic logics) – Marvin Minsky (frames, perceptron, society of mind) – Herbert Simon (logic theorist, general problem solver, bounded rationality) – Allen Newell (logic theorist, general problem solver, the knowledge level) – Ray Solomonoff (father of algorithmic probability, algorithmic information theory) – Arthur Lee Samuel (first machine learning algorithm for checkers) – W. Ross Ashby (pioneer in cybernetics, law of requisite variety) – Claude Shannon (father of information theory) – John Nash (father of game theory) [slide annotations: “future Nobel prizes”; “a strong agenda”] ● The workshop brought no tangible result, but led to a shift from semantic approaches to symbolic processing.

  4–7. AI AS ENGINEERING OF THE “MIND” [diagram] – Logicist side: logic; reasoning and decision-making; monolithic logicist systems; heterogeneous systems. “Neats”: elegant solutions, provably correct (characteristics of most people at the Dartmouth workshop). – Empiricist side: probability; induction of functions from data; artificial neural networks (ANNs); monolithic empiricist systems; homogeneous systems. “Scruffies”: ad-hoc solutions, empirical evaluation. ● There were few researchers working on neural networks and, more in general, learning was not brought to the foreground.

  8. What/who stayed in the background ● In the words of another remarkable researcher (who was invited but could not go): – John Holland (neural networks, pioneer of complex adaptive systems and genetic algorithms) [It resulted that] “there was very little interest in learning. In my honest opinion, this held up AI in quite a few ways. It would have been much better if Rosenblatt’s Perceptron work, or in particular Samuel’s checkers playing system, or some of the other early machine learning work, had had more of an impact. In particular, I think there would have been less of this notion that you can just put it all in as expertise” [..] “it’s still not absolutely clear to me why the other approaches fell away. Perhaps there was no forceful advocate.” P. Husbands (2008). An Interview With John Holland. In P. Husbands, O. Holland, & M. Wheeler (Eds.), The Mechanical Mind in History (pp. 383–396).

  9–13. Ingredients for many stories of shining success and dramatic fall in AI ● societal needs ● strong advocates ● initial unexpected successes ● adequate computational technologies ● financial resources [annotations: raising expectations → illusions and then delusions → hype cycle] ● but still (most of the time) there are concrete achievements. They just become infrastructure: invisible, but necessary.

  14. Computational intelligence

  15. Algorithm = Logic + Control “An algorithm can be regarded as consisting of – a logic component, which specifies the knowledge to be used in solving problems, and – a control component, which determines the problem-solving strategies by means of which that knowledge is used. The logic component determines the meaning of the algorithm whereas the control component only affects its efficiency.” Kowalski, R. (1979). Algorithm = Logic + Control. Communications of the ACM, 22(7), 424–436.

  16–18. Imperative style of programming: you command the directions. ● What if the labyrinth changes?

  19–24. Declarative style of programming: you give just the labyrinth; the computer finds the way. ● For instance, via trial, error and backtracking. ● This makes a WELL-DEFINED PROBLEM: initial state, goal state, KNOWLEDGE, and a PROBLEM-SOLVING METHOD.
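The trial-error-and-backtracking search described above can be made concrete in a few lines. The sketch below is an illustration (the grid is an assumption, not from the slides): the declarative part is just the labyrinth and the goal; a generic depth-first search with backtracking finds the way, and keeps working unchanged if the labyrinth changes.

```python
# Declarative problem solving by backtracking: the "knowledge" is the
# labyrinth itself; the problem-solving method is a generic search.
# (Illustrative sketch; this particular grid is an assumption.)

LABYRINTH = [           # 0 = free cell, 1 = wall
    [0, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]

def solve(grid, pos, goal, path=None):
    """Depth-first search with backtracking: try a step, undo it on failure."""
    path = path or [pos]
    if pos == goal:
        return path
    r, c = pos
    for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
        inside = 0 <= nr < len(grid) and 0 <= nc < len(grid[0])
        if inside and grid[nr][nc] == 0 and (nr, nc) not in path:
            result = solve(grid, (nr, nc), goal, path + [(nr, nc)])
            if result:              # this step eventually reached the goal
                return result
    return None                     # dead end: backtrack

path = solve(LABYRINTH, (0, 0), (3, 3))
print(path)
```

If the grid is edited, no code changes are needed: only the "knowledge" changed, not the control, exactly in the sense of Kowalski's slogan.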

  25. Well-defined problems & problem spaces ● Problems are well-defined when there is a simple test to conclude whether a solution is a solution. J. McCarthy (1956). The inversion of functions defined by Turing machines. Automata Studies, Annals of Mathematical Studies, 34:177–181. ● People solve problems by searching through a problem space, consisting of the initial state, the goal state, and all possible states in between. Newell, A., & Simon, H. A. (1972). Human Problem Solving.

  26–28. Problem and solution spaces [diagram] – a problem p in problem space P is mapped by abstraction to an abstract type, which is refined into a solution s in solution space S.

  29. Defining the problem... An old lady wants to visit her friend in a neighbouring village. She takes her car, but halfway the engine stops after some hesitations. On the side of the road she tries to restart the engine, but to no avail. Which is the problem here? Breuker, J. (1994). Components of problem solving and types of problems. A Future for Knowledge Acquisition, 867, 118–136.

  30–38. From ill-defined to well-defined problems: a suite of problem types [diagram] ● Problem types include modelling, design, planning, scheduling, assignment, configuration, assessment, monitoring and diagnosis, arranged between a structural view (system) and a behavioural view (system + environment). ● AI researchers studied problem-solving methods and associated knowledge structures for each problem type. Breuker, J. (1994). Components of problem solving and types of problems. A Future for Knowledge Acquisition, 867, 118–136.

  39. What is Knowledge in AI? ● Knowledge is what we ascribe to an agent to predict its behaviour following principles of rationality. Note: this knowledge representation is not intended to be an accurate, physical model. Newell, A. (1982). The Knowledge Level. Artificial Intelligence, 18(1), 87–127.

  40–42. Data, Information, Knowledge ● Data: uninterpreted signals or symbols ● Information: data with added meaning ● Knowledge: all data and information that people use to act, accomplish tasks and to create new information (e.g. know-how, -why, -who, -where and -when).

  43. Expert system (rule base) if flower and seed then phanerogam · if phanerogam and bare-seed then fir · if phanerogam and 1-cotyledon then monocotyledon · if phanerogam and 2-cotyledon then dicotyledon · if monocotyledon and rhizome then thrush · if dicotyledon then anemone · if monocotyledon and ¬rhizome then lilac · if leaf and ¬flower then cryptogamous · if cryptogamous and ¬root then foam · if cryptogamous and root then fern · if ¬leaf and plant then thallophyte · if thallophyte and chlorophyll then algae · if thallophyte and ¬chlorophyll then fungus · if ¬leaf and ¬flower and ¬plant then colibacille — Query: rhizome + flower + seed + 1-cotyledon?
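How does such a rule base answer the query? A minimal sketch of the inference step is shown below. It covers only the positive rules of the base (rules with ¬ would need negation-as-failure, which is omitted), and the conclusion of the 1-cotyledon rule is normalized to "monocotyledon" so the chain of rules fires consistently.

```python
# A minimal forward-chaining engine over a subset of the rule base above.
# (Sketch: only positive rules; negation-as-failure is omitted.)
RULES = [
    ({"flower", "seed"}, "phanerogam"),
    ({"phanerogam", "bare-seed"}, "fir"),
    ({"phanerogam", "1-cotyledon"}, "monocotyledon"),
    ({"phanerogam", "2-cotyledon"}, "dicotyledon"),
    ({"monocotyledon", "rhizome"}, "thrush"),
    ({"dicotyledon"}, "anemone"),
    ({"cryptogamous", "root"}, "fern"),
    ({"thallophyte", "chlorophyll"}, "algae"),
]

def forward_chain(facts, rules):
    """Fire rules until no new fact can be derived (a fixpoint)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# The query from the slide: rhizome + flower + seed + 1-cotyledon?
observed = {"rhizome", "flower", "seed", "1-cotyledon"}
derived = forward_chain(observed, RULES)
print(sorted(derived - observed))  # → ['monocotyledon', 'phanerogam', 'thrush']
```

The engine is entirely generic: the domain expertise lives only in the rule list, which is what made such systems attractive, and what made the knowledge acquisition bottleneck so painful.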

  44. Frames ● Frames are “stereotyped” knowledge units representing situations, objects or events, or (classes) sets of such entities. (basis for the Object-Oriented Programming paradigm)
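The core frame mechanisms, slots with default values and inheritance along an a-kind-of link, can be sketched in a few lines. This toy example (the frames and slot names are assumptions for illustration) shows how a frame answers a query from its own slots first and falls back to inherited defaults, which is exactly the behaviour object-oriented programming later made standard.

```python
# A toy frame system: slots with defaults, inherited via an "ako"
# (a-kind-of) link. (Illustrative sketch; frames below are assumptions.)
FRAMES = {
    "vehicle": {"ako": None, "wheels": 4, "moves": True},
    "car":     {"ako": "vehicle", "engine": "combustion"},
    "e-car":   {"ako": "car", "engine": "electric"},   # overrides default
}

def get_slot(frame, slot):
    """Look the slot up locally, then climb the ako-chain for defaults."""
    while frame is not None:
        if slot in FRAMES[frame] and slot != "ako":
            return FRAMES[frame][slot]
        frame = FRAMES[frame]["ako"]
    return None

print(get_slot("e-car", "engine"))  # → electric (local override)
print(get_slot("e-car", "wheels"))  # → 4 (default inherited from "vehicle")
```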

  45. Semantic Networks (used in contemporary Semantic Web technologies)
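A semantic network is, at bottom, a set of node-relation-node triples, the same shape as RDF triples in today's Semantic Web. The sketch below (the mini-network is an assumption for illustration) shows the classic inference such networks support: following transitive "isa" links to find everything a concept is a kind of.

```python
# A semantic network as (subject, relation, object) triples,
# with transitive-closure traversal over "isa" links.
# (Illustrative sketch; this mini-network is an assumption.)
TRIPLES = [
    ("canary", "isa", "bird"),
    ("bird", "isa", "animal"),
    ("bird", "has", "wings"),
    ("canary", "color", "yellow"),
]

def isa_closure(node):
    """All categories reachable from node via transitive 'isa' links."""
    found = set()
    frontier = [node]
    while frontier:
        current = frontier.pop()
        for s, r, o in TRIPLES:
            if s == current and r == "isa" and o not in found:
                found.add(o)
                frontier.append(o)
    return found

print(isa_closure("canary"))  # contains 'bird' and 'animal'
```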

  46. In sum ● Symbolic AI presents transparent techniques to effectively model and solve problems that can be described in symbolic terms (where expertise can be verbalized). ● All IT systems of organizations today rely on some of the technologies introduced or emerged during the first AI wave. ● But these results fell far short of what was promised... (even more so in the 70s).

  47. “A physical symbol system has the necessary and sufficient means for general intelligent action.” Allen Newell and Herbert A. Simon, Computer Science as Empirical Inquiry: Symbols and Search (1976)

  48–50. Acknowledged limitations ● knowledge acquisition bottleneck ● scaling or modularity ● tractability (e.g. ramification problem) ● symbol grounding ● natural language ● sensory-motor tasks – computer vision, speech recognition, actuator control. Hacking solutions ● Scruffies never believed the mind was a monolithic system, so they tinkered with heuristics, ad-hoc methods, and opportunistically with logic (“neat shells for scruffy approaches”).

  51. ELIZA (the first chatbot), Weizenbaum ~1965. Still running e.g. on: https://www.masswerk.at/elizabot/eliza.html
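ELIZA's trick was pattern matching plus pronoun reflection, with no understanding at all. The miniature below is an illustration of that technique, far simpler than Weizenbaum's program (the patterns and reflections are assumptions chosen for the example):

```python
import re

# An ELIZA-style responder in miniature (illustrative sketch):
# keyword patterns plus pronoun "reflection", no understanding involved.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (r"i feel (.*)", "Why do you feel {}?"),
    (r"i am (.*)", "How long have you been {}?"),
    (r"my (.*)", "Tell me more about your {}."),
]

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence):
    for pattern, template in RULES:
        match = re.match(pattern, sentence.lower())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."          # content-free default, as in ELIZA

print(respond("I feel sad about my job"))  # → Why do you feel sad about your job?
```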

  52. SHRDLU, Winograd ~1969 ● Deeper linguistic understanding ● but limited to simple blocks worlds

  53. Acknowledged limitations & hacking solutions (recap) ● Scruffies never believed the mind was a monolithic system, so they tinkered with heuristics, ad-hoc methods, and opportunistically with logic (“neat shells for scruffy approaches”) – but these successes were impossible to generalize.

  54. AI Winter (early 70s/80s) ● After a series of critical reports, funding for AI projects was cut massively. Researchers started to seek other names for their own research fields.

  55. ● Facing overwhelming difficulties in moving beyond toy problems, radically different paradigms started to be (re)considered, renouncing symbolic representations. ● As Rodney Brooks famously put it: “Elephants don't play chess”.

  56. The revenge of machine learning

  57–59. Machine learning ● Machine learning is a process that enables artificial systems to improve from experience, according to well-defined criteria. ● Rather than writing a program, here the developer has to collect adequate training data and decide on a ML method. [diagram: a hand-written program maps INPUT to OUTPUT; a ML black box maps INPUT to OUTPUT after parameter adaptation on learning data] ● Unfortunately, an adequate parameter adaptation can be highly data-demanding, especially for rich inputs.
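"Parameter adaptation" in its simplest form can be shown concretely. The sketch below (the toy dataset and learning rate are assumptions for illustration) fits a line y = w·x + b by gradient descent on the squared error: the developer supplies data and a method, and the parameters adapt to the data rather than being programmed.

```python
# Parameter adaptation by gradient descent (illustrative sketch):
# fit a line y = w*x + b to data by minimizing squared error.
DATA = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]  # roughly y = 2x + 1

w, b = 0.0, 0.0                      # initial parameters
lr = 0.02                            # learning rate (a tuning choice)
for _ in range(5000):                # the "improving from experience" loop
    grad_w = grad_b = 0.0
    for x, y in DATA:
        err = (w * x + b) - y        # prediction error on one example
        grad_w += 2 * err * x
        grad_b += 2 * err
    w -= lr * grad_w / len(DATA)     # adapt parameters against the error
    b -= lr * grad_b / len(DATA)

print(w, b)                          # fitted slope ≈ 1.94, intercept ≈ 1.15
```

Note that even this tiny example needs several data points per parameter; richer inputs (images, speech) need correspondingly more, which is the data-demand the slide warns about.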

  60. Machine learning & co. ● Machine learning is a process that enables artificial systems to improve from experience. ● Many learning methods are available, but studied and used by different communities! ● Neural networks are only one among many. (In some situations, evolutionary algorithms can also be of use for this task.) Nice video applying evolutionary algorithms: https://www.youtube.com/watch?v=pgaEE27nsQw

  61. Neural Networks timeline

  62. Neural Networks timeline one year after Dartmouth!

  63. Neural Networks timeline — Minsky and Papert mathematically proved that the Perceptron could not model “exclusive or”
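The limitation is easy to demonstrate. The sketch below (an illustration, not from the slides) trains a single perceptron with the classic update rule: it learns AND, which is linearly separable, but no single perceptron can separate XOR, so training never succeeds there.

```python
# A single perceptron with the classic update rule (illustrative sketch).
def train_perceptron(samples, epochs=20):
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w0 * x0 + w1 * x1 + bias > 0 else 0
            w0 += (target - out) * x0          # nudge weights toward target
            w1 += (target - out) * x1
            bias += (target - out)
    return lambda x0, x1: 1 if w0 * x0 + w1 * x1 + bias > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

and_net = train_perceptron(AND)
print([and_net(x0, x1) for (x0, x1), _ in AND])   # [0, 0, 0, 1]: AND learned

xor_net = train_perceptron(XOR)
print([xor_net(x0, x1) for (x0, x1), _ in XOR])   # never equals [0, 1, 1, 0]
```

Geometrically, a perceptron draws a single straight line through the input plane; XOR's positive cases sit diagonally, so no single line separates them.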

  64. Neural Networks timeline Backpropagation and the addition of layers solved the problem
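How layers plus backpropagation solve it can also be shown in miniature. The sketch below (network size, learning rate and epoch count are choices for illustration, not from the slides) trains a 2-4-1 sigmoid network on XOR by backpropagation; with this initialization the error typically drops to near zero, though convergence in general depends on the random start.

```python
import math, random

# A minimal multi-layer perceptron trained by backpropagation on XOR.
# (Illustrative sketch: 2 inputs, 4 hidden sigmoid units, 1 output.)
random.seed(0)
H = 4
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # + bias
w_o = [random.uniform(-1, 1) for _ in range(H + 1)]                  # + bias

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x0, x1):
    hidden = [sigmoid(w[0] * x0 + w[1] * x1 + w[2]) for w in w_h]
    out = sigmoid(sum(w_o[i] * hidden[i] for i in range(H)) + w_o[H])
    return hidden, out

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def mse():
    return sum((forward(x0, x1)[1] - t) ** 2 for (x0, x1), t in XOR) / 4

loss_before = mse()
lr = 0.5
for _ in range(20000):
    for (x0, x1), t in XOR:
        hidden, out = forward(x0, x1)
        d_out = (out - t) * out * (1 - out)          # output-layer delta
        for i in range(H):                           # propagate error back
            d_h = d_out * w_o[i] * hidden[i] * (1 - hidden[i])
            w_h[i][0] -= lr * d_h * x0
            w_h[i][1] -= lr * d_h * x1
            w_h[i][2] -= lr * d_h
            w_o[i] -= lr * d_out * hidden[i]
        w_o[H] -= lr * d_out                         # output bias

print(loss_before, "->", mse())
print([round(forward(x0, x1)[1]) for (x0, x1), _ in XOR])
```

The hidden layer carves the input plane with several lines at once, which is exactly what the single perceptron could not do.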

  65. Neural Networks timeline reuse of previous training possible → fine-tuning
