ASSC7  http://www.cs.memphis.edu/~assc7/  May/June 2003
Association for the Scientific Study of Consciousness

Architecture-based Philosophy of Mind
What kind of virtual machine is capable of human consciousness?

Aaron Sloman  http://www.cs.bham.ac.uk/~axs/
School of Computer Science, The University of Birmingham

These slides will be available online at
http://www.cs.bham.ac.uk/research/cogaff/talks/#talk25
See also: the Cognition and Affect Web Site
http://www.cs.bham.ac.uk/research/cogaff/

This is a revised version of a talk presented at ECAP03
(http://www.cs.bham.ac.uk/research/cogaff/talks/#talk23) in March 2003
and at The University of Notre Dame in April 2003.
(These slides are still in draft form. Work in progress. Criticisms welcome.)

VM-consciousness  Slide 1  Revised: June 7, 2004

THEMES

1. Do we know what we mean by “consciousness”?
2. What are information-processing machines?
3. What are virtual machines?
4. How do virtual machines relate to physical machines?
5. How can events in virtual machines be causes, especially of physical events?
6. Atomic State Functionalism vs Virtual Machine Functionalism, i.e. ASF vs VMF.
   Property supervenience vs mechanism supervenience.
7. Architectures for virtual machines in biological organisms.
8. Consciousness and other aspects of mind as biological phenomena: originally
   products of evolution, though artificial versions are possible too.
9. Different (virtual machine) information-processing architectures support
   different varieties of consciousness (and different varieties of motivation,
   perception, learning, emotion, ...).
10. Some support qualia: the subjects of certain sorts of predication.

VM-consciousness  Slide 2  Revised: June 7, 2004
The importance of virtual machines

Many, and probably all, researchers in psychology, brain science and ethology
refer to states, events, processes and entities in virtual machines when they
talk about experiences, decisions, intentions, thoughts, learning, feelings and
emotions.

The concepts used are not definable in the language of the physical sciences,
but they refer to real phenomena which are implemented or realised in the
physical world.

By having a clearer view of what virtual machines are, what they can do, and
under what conditions they exist, scientists may come up with better, more
complete, more powerful explanatory theories. This requires adopting the
“design stance”.

By clarifying the nature of virtual machines, their relationships to phenomena
studied by the physical sciences, and especially their causal powers, we can
clarify many old philosophical puzzles and explain why they arise naturally in
intelligent, reflective systems with human-like virtual machines.

VM-consciousness  Slide 3  Revised: June 7, 2004

This is (in part) an exercise in ontology-building

You need an ontology to see, to experience, to want, to choose, to learn.

[Figures: Necker cube; duck-rabbit]

Experiencing the Necker flip requires only a spatial/geometric ontology
involving 2-D and 3-D structures and relationships.

Experiencing the duck-rabbit flip requires a far more subtle ontology involving
categories of animals, their functional parts, and their relationships to the
environment.

What are the “neural correlates” for the ontology used in experiencing
something as FACING TO THE LEFT?
Does “facing” in an animal involve having the capacity to perceive, to act, etc.?

VM-consciousness  Slide 4  Revised: June 7, 2004
Ontologies in scientists and in their objects of study

The point being made is multi-faceted:
• Anything that perceives, observes, or processes information must use an
  ontology (whether it is appropriate for the task or not).
• So things that are conscious use an ontology which both makes possible and
  limits the sorts of things of which they can be conscious.
• Likewise, scientists studying anything use an ontology, and if they use a
  poor ontology they may ask the wrong questions and come up with poor theories.
• This applies as much to scientists studying consciousness as to all others.

I suggest that much of the scientific study of consciousness makes use of an
impoverished ontology, both for describing what needs to be explained and for
constructing explanatory theories.

VM-consciousness  Slide 5  Revised: June 7, 2004

IF THERE ARE NEURAL CORRELATES OF CONSCIOUSNESS, THEY MUST INCLUDE NEURAL
CORRELATES FOR THE ONTOLOGIES USED

How is an ontology implemented in a brain?
Some ontologies are relatively simple and can be mapped into two or three (or
more) dimensional structures. E.g.
• An ontology for characterising edge-features in a visual field in terms of
  location, orientation, contrast, width, etc. could be expressed as a
  high-dimensional array, where each location represents a vector of values –
  such things have been found in the visual cortex. (A toy sketch of this
  layout follows after this slide.)
• Additional mechanisms would be required to interpret those vectors, i.e.
  provide semantics for them.
• In brains, “early” visual regions seem to do this for the simplest,
  lowest-level visual features.
• A clever interpreting mechanism could even make it scale-invariant.
• Doing it for collections of relationships between low-level features, such as
  those suggesting a rabbit silhouette or a duck silhouette, is a totally
  different matter, involving combinatorially explosive search spaces and as
  yet unknown solutions to problems of representation.
• Perhaps a generative mechanism could overcome this.

VM-consciousness  Slide 6  Revised: June 7, 2004
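The high-dimensional-array bullet on the slide above can be made concrete with a
short sketch. This is a minimal illustration under my own assumptions (a 64x64
map, three made-up feature channels, a trivial interpret function); it is not a
model of cortical coding, only of the layout the slide describes: an array of
feature vectors plus a separate interpreting mechanism that supplies the
semantics.

```python
import numpy as np

# A toy, hypothetical layout: a retinotopic map as a 3-D array, where each
# (row, col) location stores a small vector of edge-feature values. The
# feature set and values are illustrative assumptions, not claims about cortex.
FEATURES = ("orientation_rad", "contrast", "width_px")

height, width = 64, 64
feature_map = np.zeros((height, width, len(FEATURES)))

# Record an edge at one location: 30 degrees, high contrast, 2 pixels wide.
feature_map[10, 20] = (np.deg2rad(30.0), 0.9, 2.0)

def interpret(fmap, row, col):
    """A separate 'interpreting' mechanism: read the stored vector back out in
    terms of the ontology (storing the vectors alone does not supply semantics)."""
    orientation, contrast, edge_width = fmap[row, col]
    return {
        "location": (row, col),
        "orientation_deg": float(np.rad2deg(orientation)),
        "contrast": float(contrast),
        "width_px": float(edge_width),
    }

print(interpret(feature_map, 10, 20))
```

The point of keeping interpret separate from the array is the slide's second
bullet: an additional mechanism is needed to give the stored vectors a reading
in the ontology.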
Example: Visual reasoning in humans

Think about this problem:
No points are common to the triangle and the circle. Suppose the triangle
changes its size and shape and moves around on the surface, and the circle also
changes size and location. They could come into contact.

If a vertex touches the circle, or one side becomes a tangent to the circle,
there will be one point common to both figures.

If one vertex moves into the circle and the rest of the triangle is outside the
circle, how many points are common to the circle and the triangle?

How many different numbers of contact points can there be?
How do you answer the question in red?
What forms of representation do you use? What mechanisms operate on them?
Does the answer change if the circle turns into an ellipse?

VM-consciousness  Slide 7  Revised: June 7, 2004

What goes on when the circle/triangle question is answered

Most people claim to do it using visualisation, in their heads.
This requires the ability to experience:
• empty space as containing possible paths of motion,
• fixed objects as things that can move, rotate and change their shape,
• curves and lines as things that can intersect and touch.

As well as the ability:
• to cause various imagined motions,
• to detect and count the resulting contact points,
• to check that all possibilities have been covered.

The ability to answer the questions requires use of an ontology that includes
not only static structures (line, circle, vertex, lines crossing, lines
touching) but also kinds of motions on a plane, and the effects that can be
caused by such motions.

The ability to check that all possibilities have been covered requires an
ontology including classes of phenomena with different features, and the
ability to check that the examples considered suffice to represent every
possible class of that sort (e.g. every possible class of configurations giving
a different number of contact points). (A brute-force sketch of the
contact-point question follows after this slide.)

Can any other animal do this?
This is closely related to the perception of affordances.

VM-consciousness  Slide 8  Revised: June 7, 2004
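As a contrast with human visualisation, here is a hedged sketch of how a machine
might attack the contact-point question by brute force instead of imagined
motion: sample random triangle and circle configurations and count the
intersections of the triangle's edges with the circle's boundary. The helper
functions and sampling set-up are my own illustrative construction, not anything
proposed in the talk.

```python
import math
import random

def segment_circle_intersections(p, q, center, r, eps=1e-9):
    """Number of points where the segment p->q meets the circle boundary.
    Solves |p + t*(q - p) - center|^2 = r^2 for t in [0, 1]."""
    (px, py), (qx, qy), (cx, cy) = p, q, center
    dx, dy = qx - px, qy - py
    fx, fy = px - cx, py - cy
    a = dx * dx + dy * dy
    if a == 0.0:                       # degenerate segment, ignore
        return 0
    b = 2.0 * (fx * dx + fy * dy)
    c = fx * fx + fy * fy - r * r
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return 0
    roots = {(-b - math.sqrt(disc)) / (2.0 * a),
             (-b + math.sqrt(disc)) / (2.0 * a)}
    return sum(1 for t in roots if -eps <= t <= 1.0 + eps)

def contact_points(triangle, center, r):
    """Total number of boundary contact points between a triangle and a circle."""
    edges = [(triangle[i], triangle[(i + 1) % 3]) for i in range(3)]
    return sum(segment_circle_intersections(p, q, center, r) for p, q in edges)

# Random sampling turns up the generic answers (typically 0, 2, 4 and 6 contact
# points); the tangential cases (1, 3, 5) have measure zero and will almost
# never be hit, which is precisely the 'have all possibilities been covered?'
# problem the slide raises about checking classes of configurations.
random.seed(0)
seen = set()
for _ in range(20000):
    tri = [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(3)]
    seen.add(contact_points(tri, (0.0, 0.0), 1.0))
print(sorted(seen))
```

Note that the sampler cannot by itself certify that every class of configuration
has been covered; a human reasoner checking the tangent and vertex-contact cases
does something the random search does not.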
How is visual problem-solving capability implemented in brains?

As far as I know, current methods of investigating such things can at most show
which brain regions are active when visualisation is used to solve a problem.
Showing where something happens does not explain what is done or how it is done.

Can any other animal solve geometric problems by visualising transformations?
How do nest-building birds (e.g. magpies) decide how to insert the next twig?
Does Betty, the hook-making crow, understand the possibility of bending a
straight wire to make a hook before she does it, and if so how does she grasp
this possibility? See the video here:
http://news.bbc.co.uk/1/hi/sci/tech/2178920.stm

For further discussion, including the problem of understanding how Mr Bean can
remove his underpants without removing his trousers, see A. Sloman (2001),
Diagrams in the mind, in Diagrammatic Representation and Reasoning, Eds.
M. Anderson, B. Meyer & P. Olivier, Springer-Verlag, Berlin; also on the CogAff
web site. The paper claims that some of the reasoning switches between metrical
and topological relationships.

I don’t think anyone knows either how these capabilities are implemented in
brains or how to give them to machines. Part of the problem is to characterise
the capabilities accurately.

VM-consciousness  Slide 9  Revised: June 7, 2004

Different organisms need different ontologies.
Different scientists need different ontologies.
What sort of ontology is needed for a scientific study of consciousness?

An ontology for scientists studying mentality must include, among other things:
• Forms of representation
• Information (semantic, not Shannon-Weaver)
• Information-relations (e.g. contradiction, consistency, entailment,
  equivalence, ...)
• Information-processing mechanisms
• Architectures
• Ontologies
• Control
• Virtual machine
• ... etc. ...
(A toy rendering of a few of these categories as data types follows after this
slide.)

See unpublished paper by Sloman and Chrisley on “ontological blindness” in
biologists and engineers (e.g. biologically inspired roboticists), on the
Cognition and Affect web site: http://www.cs.bham.ac.uk/research/cogaff/

VM-consciousness  Slide 10  Revised: June 7, 2004
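Purely as an illustration of what making such an ontology explicit might look
like computationally, here is a toy sketch that renders a few of the categories
listed above as Python data types. All names, fields and structure are my own
assumptions, not Sloman's; the point is only that committing to explicit types
forces the kinds of distinctions the list gestures at.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class InformationRelation(Enum):
    # Relations between items of (semantic) information, from the slide's list.
    CONTRADICTION = "contradiction"
    CONSISTENCY = "consistency"
    ENTAILMENT = "entailment"
    EQUIVALENCE = "equivalence"

@dataclass
class FormOfRepresentation:
    name: str                          # e.g. "logical", "diagrammatic", "analog"

@dataclass
class Mechanism:
    name: str                          # an information-processing mechanism
    manipulates: List[FormOfRepresentation] = field(default_factory=list)

@dataclass
class Architecture:
    """A virtual-machine architecture: mechanisms plus the ontology they can use."""
    mechanisms: List[Mechanism] = field(default_factory=list)
    ontology: List[str] = field(default_factory=list)

# Example: a minimal architecture whose ontology includes 'edge' and 'occlusion'.
visual_vm = Architecture(
    mechanisms=[Mechanism("edge-detector", [FormOfRepresentation("analog array")])],
    ontology=["edge", "orientation", "occlusion"],
)
print(visual_vm.ontology)
```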