PHILOSOPHY OF ARTIFICIAL INTELLIGENCE
Prof. Dr John-Jules Meyer
Dr Menno Lievers

Artificial intelligence
• "Artificial Intelligence is the science of making machines do things that would require intelligence if done by men" (Marvin Minsky)
– Weak AI thesis
– Strong AI thesis

Weak AI Thesis
• The computer is (only) a powerful aiding tool for the study of the human mind
• It is possible to construct machines that perform useful "intelligent" tasks such as assisting human users
– Difficult enough?!

Strong AI Thesis
• An adequately programmed computer has a cognitive state; computer programs explain human cognition
• It is possible to devise machines that behave like people and possess human capabilities, such as the ability to think, reason, ..., play chess, walk, ..., have emotions, pain, ...
– possible??
– desirable?!

Can a machine think?
• Try to first answer the question 'in principle', independent of available technology
• Is consciousness necessary for thinking?
– Human mental processes are often non-conscious
• the 'sleeping problem solver'
• 'blindsight'
• You may replace 'thinking' or 'being intelligent' by 'displaying cognitive activity'

The Turing Test
• A human A communicates by email with a human B and a computer C
• A poses questions to both B and C to discover which is the human
• If A does not succeed in distinguishing B and C, the computer C passes the Turing Test
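The test just described can be pictured as a small harness. The following is a minimal, hypothetical sketch: the interrogator/respondent objects and their ask, answer and identify_human methods are illustrative assumptions, not part of Turing's own description. It only shows that "passing" is defined entirely by the interrogator's verdict on typed exchanges.

```python
import random

def turing_test(interrogator, human, machine, num_questions=10):
    """Minimal sketch of the imitation game: the interrogator exchanges typed
    messages with two hidden respondents and must say which one is human.
    Hypothetical interface: interrogator exposes ask/identify_human,
    respondents expose answer."""
    # Hide the identities behind anonymous labels, in a random order.
    labels = ["X", "Y"]
    random.shuffle(labels)
    respondents = {labels[0]: human, labels[1]: machine}

    transcript = []
    for _ in range(num_questions):
        question = interrogator.ask(transcript)                # A poses a question
        answers = {lab: r.answer(question) for lab, r in respondents.items()}
        transcript.append((question, answers))                 # A only ever sees the labels

    guess = interrogator.identify_human(transcript)            # A names "X" or "Y"
    # The machine passes this run if A mistakes it for the human respondent.
    return respondents[guess] is machine
```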
The Turing Test Set-Up
[Diagram: interrogator A exchanges typed messages with two hidden respondents, human B and computer C]

Has the Turing Test been passed already?
• The Turing test is based on a link between 'thinking' and 'conversation'
• Two famous 'conversation' programs:
– ELIZA
– PARRY
• ELIZA and PARRY are based on relatively simple pattern matching algorithms (a minimal sketch follows below): this is not thinking …?!!

Objections against the Turing Test
1. Chimpanzee objection: chimpanzees, dolphins, ... will not pass the Turing Test, while they are obviously intelligent and able to think! So a negative result does not say anything about being able to think / being intelligent.
2. Sensory versus verbal communication: the TT only concerns verbal communication: it is no test of the computer's ability to relate words to things in the world.

Objections against the Turing Test (continued)
3. Simulation objection: simulated X ≠ X. This objection says that thinking cannot ever be simulated perfectly.
4. Black Box objection: that the external behaviours are equal does not imply that the processes themselves are equal! SUPERPARRY: a program containing all conversations of length ≤ 100 words is finite in principle and programmable; it will pass the Turing Test; however, it does not think !?!

Conclusion?! Can we improve the Turing Test?
• In any case we need the following criteria:
– Output criterion: competition between two 'agents'
– Design criterion: it is not about the human-like way of thinking; think also of hypothetical aliens (or animals…)
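To show how little machinery ELIZA-style conversation programs need, here is a minimal Python sketch of keyword pattern matching with canned reply templates. The patterns and replies below are invented for illustration and are not Weizenbaum's actual ELIZA script.

```python
import re
import random

# A tiny ELIZA-style rule set: each pattern maps to canned reply templates.
# The rules are illustrative only, not the original ELIZA script.
RULES = [
    (re.compile(r"\bI am (.+)", re.I),   ["Why do you say you are {0}?",
                                          "How long have you been {0}?"]),
    (re.compile(r"\bI feel (.+)", re.I), ["Why do you feel {0}?"]),
    (re.compile(r"\bmy (mother|father)\b", re.I),
                                         ["Tell me more about your {0}."]),
    (re.compile(r".*"),                  ["Please go on.",
                                          "Can you elaborate on that?"]),
]

def respond(user_input: str) -> str:
    """Return a reply by firing the first matching rule.
    No parsing, no model of meaning: pure surface pattern matching."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(*match.groups())
    return "I see."  # unreachable because of the catch-all rule

print(respond("I am sad about my exam"))  # e.g. "Why do you say you are sad about my exam?"
```

The hypothetical SUPERPARRY of the Black Box objection is cruder still: conceptually just a finite lookup table from every possible conversation of length ≤ 100 words to a reply, which is why its in-principle success puts pressure on purely behavioural criteria.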
What is thinking / intelligence?
• Thinking is an intentional notion: it is goal/action-directed; it has to do with explaining and predicting behaviour → planning, being flexible, adaptable
• Generalise this notion: it is about being 'massively adaptable' → this notion is applicable to non-traditional matters such as extraterrestrial intelligence, animals, computers / machines (artificial intelligence)
∴ "robots are able to think" may then be a sensible statement

Symbol System Hypothesis
• thinking = 'being massively adaptable'
• Is this achievable using digital computers?
– In other words: if we can make machines 'think', is a digital computer the right kind of machine?
• Symbol system hypothesis (SSH): yes!
– A universal symbol system (= general-purpose stored-program computer), a symbol manipulator operating by executing fundamental operations such as branch, delete, output, input, compare, shift, write and copy, is a 'massively adaptable' machine

Intelligent systems
• An intelligent ('massively adaptable') system (IS) should be able to:
– Generate plans
– Analyze situations
– Deliberate decisions
– Reason and revise 'beliefs'
– Use analogies
– Weigh conflicts of interest, preferences
– Decide rationally on the basis of imperfect information
– Learn, categorize

GOFAI recipe for an IS
1. Use a sufficiently expressive, inductively defined, compositional language to represent 'real-world' objects, events, actions, relations, etc.
2. Construct an adequate representation of the world and the processes in it in a universal symbol system (USS): an extensive Knowledge Base (KB)
3. Use suitable input devices to obtain symbolic representations of environmental stimuli

GOFAI recipe for an IS (continued)
4. Employ complex sequences of the fundamental operations of the USS, applied to the symbol structures of the inputs and the KB, yielding new symbol structures (some of these are designated as output)
5. This output is a symbolic representation of a response to the input. A suitable robot body can be used to 'translate' the symbols into real behaviour / action
(A toy sketch of steps 1–4 follows below.)

The SSH says:
• In this way a thinking (= massively adaptable) machine is obtained!
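As a toy illustration of the GOFAI recipe, i.e. representing the world as symbol structures in a KB and deriving new structures by applying fundamental operations, here is a minimal Python sketch of a symbolic knowledge base with forward-chaining rule application. The blocks-world facts, the predicates and the "above" rule are invented for illustration and are not part of the lecture material.

```python
# Facts and rules are plain symbol structures (tuples of strings).
# Forward chaining repeatedly applies rules to the KB until nothing new
# can be derived: all of it is symbol manipulation in the SSH sense.

KB = {
    ("block", "a"), ("block", "b"),
    ("on", "a", "b"), ("on", "b", "table"),
}

def rule_above(facts):
    """Illustrative rule: if X is on Y and Y is on Z, then X is above Z."""
    derived = set()
    for f1 in facts:
        for f2 in facts:
            if f1[0] == "on" and f2[0] == "on" and f1[2] == f2[1]:
                derived.add(("above", f1[1], f2[2]))
    return derived

def forward_chain(facts, rules):
    """Apply all rules until a fixed point: no new symbol structures appear."""
    facts = set(facts)
    while True:
        new = set().union(*(rule(facts) for rule in rules)) - facts
        if not new:
            return facts
        facts |= new

closure = forward_chain(KB, [rule_above])
print(("above", "a", "table") in closure)  # True: derived purely syntactically
```

Everything here is rearrangement of symbol structures; whether such manipulation could ever amount to understanding is exactly what the next slides dispute.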
Doubts about the SSH
• How can such a machine really understand?
• Or wonder whether a sentence is true?
• Or desire something?
• ... etc.

Status of the SSH
• The SSH is an interesting conjecture that may appear strange but may be true after all (there are more strange things that are held to be true: e.g. relativity theory, quantum mechanics...); however:
– Is there any evidence from the state of the art in AI? Not (yet): all AI at the moment is rather limited; the original GPS project has more or less failed, and modern AI is not yet sufficiently convincing(?!)
– Philosophical (analytical) considerations (Searle)

Strong Symbol System Hypothesis (SSSH)
• SSH: computers (i.e. universal symbol manipulators) can think
• SSSH: ONLY computers (universal symbol manipulators) can think, i.e. the only things capable of thinking are universal symbol manipulators; ergo, the human mind is a universal symbol manipulator, a computer!!!
– The SSSH is even more controversial than the SSH.

Philosophical objections against Strong AI & SSH: Searle
• Is the question whether a computer is a suitable device for thinking an empirical one?
• Searle: the question whether a symbol-manipulating device can think is not empirical but analytical, and can be answered negatively:
– a universal symbol manipulator (USS) operates purely syntactically and is not able to really understand what it is doing!
– syntax is insufficient for dealing with semantics (= "understanding of what symbols actually mean")

Searle's Gedankenexperiment
• John Searle tries to argue by means of a Gedankenexperiment that a computer cannot think, or more precisely, cannot perform an intelligent task, such as answering questions in Chinese about a Chinese text, and really understand what it is doing.
• Suppose we have a computer program Sam capable of answering questions in Chinese about Chinese texts (a schematic sketch of such a purely syntactic question answerer follows below).

The Chinese room
[Diagram: the program Sam receives a Chinese text together with questions in Chinese and produces answers in Chinese]
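To make the "syntax without semantics" point vivid, here is a minimal Python sketch of a Sam-like question answerer that works purely by matching and copying symbol strings. The Chinese question/answer pairs in the rule book are placeholders invented for illustration; the point is precisely that neither the program nor Joe, following the same rules by hand, needs any grasp of what the symbols mean.

```python
# A "rule book" mapping question strings to answer strings: exactly the kind
# of purely syntactic instructions Joe could follow with pencil and paper.
# The Chinese strings below are placeholders invented for illustration.
RULE_BOOK = {
    "谁吃了饭?": "小明吃了饭。",
    "他在哪里?": "他在房间里。",
}

def sam(question: str) -> str:
    """Answer by symbol lookup alone: compare the input string to stored
    patterns and copy back the associated output string. Nothing here
    'knows' what the symbols are about."""
    return RULE_BOOK.get(question, "对不起。")  # default: yet another canned string

print(sam("谁吃了饭?"))  # prints the canned answer; no understanding involved
```

Joe in the room does exactly what the lookup does: compare shapes, copy out the associated answer, and nothing more.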
The Chinese room
[Diagram: human Joe inside the room receives the Chinese text and questions and produces answers in Chinese]
• Replace the computer program Sam by the human Joe executing the program instructions

The Chinese room
• Chinese room argument:
– Joe in the room, executing the computer program Sam manually, does not understand the story, nor the questions, nor the answers: only manipulation of meaningless symbols: "Sam 'run' on a human computer"
– Executing the program does not enable Joe to understand the story, questions, etc.; ergo, executing the program does not enable the computer to understand the story, questions, etc.!

Chinese room: Searle's conclusion
• Running a program does not lead to understanding, believing, intending, thinking …!
• "Merely manipulating symbols will not enable the manipulating device to understand X, believe Y, think Z..."
∴ the SSH is FALSE!

But …?!?
• But… can we not 'prove' in the same way that humans (i.e. our brains) cannot think …?!?
– Let the global population (5 billion people) simulate a brain B with its 100 billion neurons: then each person controls some 20 neurons (a schematic sketch of this partitioning follows below)
– No person knows what B is thinking…
– So, neither do(es) (the neurons in) brain B.

The "Systems Reply"
• 'The systems reply': not only the symbol manipulator Joe is concerned but the system as a whole: it could be possible that the whole system does understand!

Counter-objection
• Searle contra the systems reply:
1. Joe does not understand, but Joe + paper + pencil would understand?!? (cynically)
2. Let Joe learn all the rules of the program by heart; then there is no 'bigger' system any more of which Joe is part; in fact, everything is part of Joe in that case!
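The population-simulates-a-brain argument can be pictured as a distributed computation in which each participant updates only a tiny slice of the global state. The sketch below is schematic: the threshold update rule, the toy sizes (100 "neurons", 5 "people", keeping the 20-neurons-per-person ratio from the slide) and the random wiring are assumptions made only to illustrate that no single simulator holds or inspects the whole state.

```python
import random

NUM_NEURONS = 100   # toy stand-in for 100 billion
NUM_PEOPLE = 5      # toy stand-in for 5 billion; each person still controls 20 neurons

# Random wiring and a global activation vector (the "brain state" B).
random.seed(0)
weights = [[random.uniform(-1, 1) for _ in range(NUM_NEURONS)] for _ in range(NUM_NEURONS)]
state = [random.choice([0, 1]) for _ in range(NUM_NEURONS)]

def person_update(person_id, old_state):
    """One 'person' updates only their own 20 neurons from the previous global
    state; they see numbers and thresholds, never 'what B is thinking'."""
    slice_size = NUM_NEURONS // NUM_PEOPLE
    start = person_id * slice_size
    return {i: int(sum(weights[i][j] * old_state[j] for j in range(NUM_NEURONS)) > 0)
            for i in range(start, start + slice_size)}

# One simulation step: every person reports their slice; the slices are merged.
for step in range(3):
    updates = {}
    for person in range(NUM_PEOPLE):
        updates.update(person_update(person, state))
    state = [updates[i] for i in range(NUM_NEURONS)]

print(sum(state), "neurons active after 3 steps")  # a global fact no single person computed
```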