LECTURE 3: DEDUCTIVE REASONING AGENTS
An Introduction to Multiagent Systems
http://www.csc.liv.ac.uk/˜mjw/pubs/imas/

1 Agent Architectures

– Introduce the idea of an agent as a computer system capable of flexible autonomous action.
– Briefly discuss the issues one needs to address in order to build agent-based systems.
– Three types of agent architecture:
  – symbolic/logical;
  – reactive;
  – hybrid.

We want to build agents that enjoy the properties of autonomy, reactiveness, pro-activeness, and social ability that we talked about earlier. This is the area of agent architectures.

Maes defines an agent architecture as:

  ‘[A] particular methodology for building [agents]. It specifies how . . . the agent can be decomposed into the construction of a set of component modules and how these modules should be made to interact. The total set of modules and their interactions has to provide an answer to the question of how the sensor data and the current internal state of the agent determine the actions . . . and future internal state of the agent. An architecture encompasses techniques and algorithms that support this methodology.’

Kaelbling considers an agent architecture to be:

  ‘[A] specific collection of software (or hardware) modules, typically designated by boxes with arrows indicating the data and control flow among the modules. A more abstract view of an architecture is as a general methodology for designing particular modular decompositions for particular tasks.’

Originally (1956–1985), pretty much all agents designed within AI were symbolic reasoning agents. Its purest expression proposes that agents use explicit logical reasoning in order to decide what to do.

Problems with symbolic reasoning led to a reaction against this approach: the so-called reactive agents movement, 1985–present.

From 1990 to the present, a number of alternatives have been proposed: hybrid architectures, which attempt to combine the best of reasoning and reactive architectures.
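Before turning to symbolic architectures in detail, here is a minimal sketch (not part of the lecture) of the abstract view of an architecture in the definitions above: modules that map sensor data and the current internal state to an action and the next internal state. All class and function names are invented for illustration.

    # Sketch of the abstract decomposition: one module updates internal state
    # from sensor data, another chooses an action from the resulting state.

    from typing import Callable, Generic, TypeVar

    Percept = TypeVar("Percept")
    State = TypeVar("State")
    Action = TypeVar("Action")

    class AbstractAgent(Generic[Percept, State, Action]):
        def __init__(self,
                     initial_state: State,
                     next_state: Callable[[State, Percept], State],
                     choose: Callable[[State], Action]) -> None:
            self.state = initial_state
            self.next_state = next_state    # how sensor data updates internal state
            self.choose = choose            # how internal state determines the action

        def step(self, percept: Percept) -> Action:
            self.state = self.next_state(self.state, percept)
            return self.choose(self.state)

The different architecture families discussed in this lecture can be read as different choices for what the internal state is and how these two modules are realised.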
2 Symbolic Reasoning Agents

The classical approach to building agents is to view them as a particular type of knowledge-based system, and to bring all the associated (discredited?!) methodologies of such systems to bear.

This paradigm is known as symbolic AI.

We define a deliberative agent or agent architecture to be one that:
– contains an explicitly represented, symbolic model of the world;
– makes decisions (for example about what actions to perform) via symbolic reasoning.

If we aim to build an agent in this way, there are two key problems to be solved:

1. The transduction problem: that of translating the real world into an accurate, adequate symbolic description, in time for that description to be useful.
   . . . vision, speech understanding, learning.

2. The representation/reasoning problem: that of how to symbolically represent information about complex real-world entities and processes, and how to get agents to reason with this information in time for the results to be useful.
   . . . knowledge representation, automated reasoning, automatic planning.

Most researchers accept that neither problem is anywhere near solved.

The underlying problem lies with the complexity of symbol manipulation algorithms in general: many (most) search-based symbol manipulation algorithms of interest are highly intractable.

Because of these problems, some researchers have looked to alternative techniques for building agents; we look at these later.

2.1 Deductive Reasoning Agents

How can an agent decide what to do using theorem proving?

The basic idea is to use logic to encode a theory stating the best action to perform in any given situation.

Let:
– ρ be this theory (typically a set of rules);
– Δ be a logical database that describes the current state of the world;
– Ac be the set of actions the agent can perform;
– Δ ⊢ρ φ mean that φ can be proved from Δ using ρ.
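As one concrete (and deliberately naive) reading of this setup, the sketch below represents Δ as a set of ground atoms, ρ as rules of the form conditions → conclusion, and the relation Δ ⊢ρ φ as exhaustive forward chaining. This encoding is not prescribed by the lecture; it is only meant to show what the symbols could correspond to in code.

    # delta: set of ground atoms (strings); rho: list of (conditions, conclusion).

    def prove(delta: set, rho: list, goal: str) -> bool:
        """Return True iff `goal` can be derived from `delta` using rules `rho`."""
        facts = set(delta)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rho:
                if conclusion not in facts and all(c in facts for c in conditions):
                    facts.add(conclusion)
                    changed = True
        return goal in facts

    # Example: a single rule saying In(0,0) ∧ Dirt(0,0) → Do(suck).
    rho = [(["In(0,0)", "Dirt(0,0)"], "Do(suck)")]
    delta = {"In(0,0)", "Dirt(0,0)"}
    assert prove(delta, rho, "Do(suck)")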
The agent's action-selection function can then be written as the following loop:

    /* try to find an action explicitly prescribed */
    for each a ∈ Ac do
        if Δ ⊢ρ Do(a) then
            return a
        end-if
    end-for
    /* try to find an action not excluded */
    for each a ∈ Ac do
        if Δ ⊬ρ ¬Do(a) then
            return a
        end-if
    end-for
    return null /* no action found */

An example: the Vacuum World. The goal is for the robot to clear up all the dirt.

[Figure: a 3 × 3 grid of cells (0,0)–(2,2); there is dirt in cells (0,2) and (1,2), and the robot starts at (0,0).]

We use 3 domain predicates in this exercise:
– In(x, y): the agent is at (x, y);
– Dirt(x, y): there is dirt at (x, y);
– Facing(d): the agent is facing direction d.

Possible actions:

    Ac = {turn, forward, suck}

NB: turn means “turn right”.

Rules for determining what to do:

    In(x, y) ∧ Dirt(x, y) → Do(suck)
    In(0, 0) ∧ Facing(north) ∧ ¬Dirt(0, 0) → Do(forward)
    In(0, 1) ∧ Facing(north) ∧ ¬Dirt(0, 1) → Do(forward)
    In(0, 2) ∧ Facing(north) ∧ ¬Dirt(0, 2) → Do(turn)
    In(0, 2) ∧ Facing(east) → Do(forward)
    . . . and so on!

Using these rules (+ other obvious ones), starting at (0, 0) the robot will clear up the dirt.
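The following sketch puts the action-selection loop and a few of the vacuum-world rules together, reusing the naive string encoding and forward-chaining prove() from the earlier sketch (repeated here so the fragment is self-contained). The grounded rules, the treatment of negation as explicit ¬-atoms, and the example databases are illustrative assumptions, not the lecture's encoding.

    # Self-contained sketch: two-pass action selection over Ac, applied to
    # grounded versions of (some of) the vacuum-world rules listed above.

    def prove(delta, rho, goal):
        # naive forward chaining, as in the earlier sketch
        facts = set(delta)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rho:
                if conclusion not in facts and all(c in facts for c in conditions):
                    facts.add(conclusion)
                    changed = True
        return goal in facts

    Ac = ["suck", "forward", "turn"]

    rho = [
        (["In(0,0)", "Dirt(0,0)"], "Do(suck)"),
        (["In(0,2)", "Dirt(0,2)"], "Do(suck)"),
        (["In(0,0)", "Facing(north)", "¬Dirt(0,0)"], "Do(forward)"),
        (["In(0,1)", "Facing(north)", "¬Dirt(0,1)"], "Do(forward)"),
        (["In(0,2)", "Facing(north)", "¬Dirt(0,2)"], "Do(turn)"),
        (["In(0,2)", "Facing(east)"], "Do(forward)"),
        # ... and so on, for the rest of the grid
    ]

    def action(delta):
        # 1. try to find an action explicitly prescribed: Δ ⊢ρ Do(a)
        for a in Ac:
            if prove(delta, rho, f"Do({a})"):
                return a
        # 2. otherwise, try to find an action not excluded: Δ ⊬ρ ¬Do(a)
        for a in Ac:
            if not prove(delta, rho, f"¬Do({a})"):
                return a
        return None  # no action found

    print(action({"In(0,0)", "Facing(north)", "¬Dirt(0,0)"}))  # -> forward
    print(action({"In(0,2)", "Dirt(0,2)", "Facing(north)"}))   # -> suck

In a full agent this decision step would sit inside a perceive–update–act loop: after each action the database Δ is rebuilt from new sensor data, which is exactly where the transduction and decision-timing problems discussed next arise.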
Problems:
– How to convert video camera input to Dirt(0, 1)?
– Decision making assumes a static environment: calculative rationality.
– Decision making using first-order logic is undecidable!

Even where we use propositional logic, decision making in the worst case means solving co-NP-complete problems. (NB: co-NP-complete = bad news!)

Typical solutions:
– weaken the logic;
– use symbolic, non-logical representations;
– shift the emphasis of reasoning from run time to design time.

We now look at some examples of these approaches.

2.2 AGENT0 and PLACA

Much of the interest in agents from the AI community has arisen from Shoham's notion of agent-oriented programming (AOP).

AOP is a ‘new programming paradigm, based on a societal view of computation’.

The key idea that informs AOP is that of directly programming agents in terms of intentional notions like belief, commitment, and intention.

The motivation behind such a proposal is that we humans use the intentional stance as an abstraction mechanism for representing the properties of complex systems. In the same way that we use the intentional stance to describe humans, it might be useful to use it to program machines.

Shoham suggested that a complete AOP system will have 3 components:
– a logic for specifying agents and describing their mental states;
– an interpreted programming language for programming agents;
– an ‘agentification’ process, for converting ‘neutral applications’ (e.g., databases) into agents.

Results were only reported on the first two components. The relationship between the logic and the programming language is semantics.

We will skip over the logic(!), and consider the first AOP language, AGENT0.

AGENT0 is implemented as an extension to LISP. Each agent in AGENT0 has 4 components:
– a set of capabilities (things the agent can do);
– a set of initial beliefs;
– a set of initial commitments (things the agent will do); and
– a set of commitment rules.

The key component, which determines how the agent acts, is the commitment rule set.
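To give a feel for these four components, here is an illustrative sketch in Python rather than LISP, so it is not real AGENT0 syntax. In AGENT0 a commitment rule pairs a message condition with a mental condition (a detail covered later in the lecture); the sketch mirrors that shape, and all class, field, and predicate names are invented.

    # Illustrative AGENT0-style agent: capabilities, beliefs, commitments,
    # and commitment rules. Not the actual AGENT0 language.

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class CommitmentRule:
        msg_condition: Callable[[dict], bool]    # matched against an incoming message
        mental_condition: Callable[[set], bool]  # matched against current beliefs
        action: str                              # what the agent commits to doing

    @dataclass
    class Agent:
        capabilities: set                                  # things the agent can do
        beliefs: set                                       # initial beliefs
        commitments: list = field(default_factory=list)    # things the agent will do
        rules: list = field(default_factory=list)          # commitment rules

        def on_message(self, msg: dict) -> None:
            # A rule fires when its message and mental conditions both hold and
            # the action is within the agent's capabilities.
            for rule in self.rules:
                if (rule.action in self.capabilities
                        and rule.msg_condition(msg)
                        and rule.mental_condition(self.beliefs)):
                    self.commitments.append(rule.action)

    # Usage: commit to sending a report when a trusted sender requests it.
    agent = Agent(
        capabilities={"send_report"},
        beliefs={"trusts(alice)"},
        rules=[CommitmentRule(
            msg_condition=lambda m: m.get("type") == "request" and m.get("sender") == "alice",
            mental_condition=lambda b: "trusts(alice)" in b,
            action="send_report",
        )],
    )
    agent.on_message({"type": "request", "sender": "alice", "content": "send_report"})
    print(agent.commitments)   # ['send_report']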