
Fundamentals of Artificial Intelligence
Alan Smaill, Nov 10, 2008



Today

• Kinds of Agents
• Logical Agents
• Propositional Logic

References

• Russell and Norvig, chapters 2 and 7.
• D. Dennett. Kinds of Minds. Weidenfeld and Nicolson, London, 1996.
• Michael Wooldridge. An Introduction to Multi-Agent Systems. Wiley, 2002.
• R. Fagin, J. Y. Halpern, Y. Moses, and M. Vardi. Reasoning about Knowledge. MIT Press, 1995.
• The Springer series of volumes on intelligent agents: see dis.cs.umass.edu/atal/books/.

Properties of Agents

• autonomy: the agent can evolve on its own, without being directly controlled from outside.
• social interaction: agents usually interact with other agents, sometimes in cooperation, and sometimes in competition.
• reaction: a reactive agent is one that takes account of its environment, and responds to changes in the environment.
• goal-directed: the agent has its own goals, and takes initiatives in order to meet these goals.

We get a stronger notion of agent if we follow this up, and design agents with extra properties.

The Intentional Stance

Daniel Dennett has proposed that:

  "The intentional stance is the strategy of interpreting the behavior of an entity (person, animal, artifact, whatever) by treating it as if it were a rational agent who governed its 'choice' of 'action' by a 'consideration' of its 'beliefs' and 'desires'." (Kinds of Minds, p. 27)

Intelligent Agents

Here we attribute to agents the mental attitudes suggested by Dennett – beliefs, motivations, obligations, etc. In addition, they may have:

• mobility: the agent is able to displace itself physically (eg around the Net).
• rationality: the agent will always act so as to work towards achieving its goals (with respect to its beliefs).
• distribution: various agents are physically separate (eg hosted by different processors).

Intentional Systems

Here "Intentional" means the property of mental attitudes like belief, desire etc. whereby they link up to things in the world about which we have beliefs, desires etc.

  "Intentional systems are, by definition, all and only those entities whose behavior is predictable/explicable from the intentional stance." (Kinds of Minds, p. 34)

Examples: thermostats, amoebas, bats, people, and chess-playing computers, ... (Dennett)

Example

Take the two-square vacuum-cleaner world:

• Percepts: location and contents, e.g., [A, Dirty]
• Actions: Left, Right, Suck, NoOp

What is the right way to organise the actions dependent on the percept history? (A minimal reflex agent for this world is sketched below.)

Rationality

A fixed performance measure evaluates the environment sequence:

• one point per square cleaned up in time T?
• one point per clean square per time step, minus one per move?
• penalize for > k dirty squares?

A rational agent chooses whichever action maximizes the expected value of the performance measure, given the percept sequence to date.

Rational ≠ omniscient
Rational ≠ clairvoyant
Rational ≠ successful

But: rationality leads to exploration, learning, autonomy ...
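Since the two-square vacuum world is so small, the percept-to-action mapping can be written out directly. Below is a minimal sketch in Python, using the percepts and action names from the slide; the function name and the example percept sequence are illustrative, not from the slides.

    # A minimal reflex agent for the two-square vacuum world.
    # Percept = (location, status), e.g. ("A", "Dirty"); the actions
    # Left, Right, Suck, NoOp are those listed on the slide.

    def reflex_vacuum_agent(percept):
        location, status = percept
        if status == "Dirty":
            return "Suck"
        if location == "A":
            return "Right"
        if location == "B":
            return "Left"
        return "NoOp"

    # Example percept sequence:
    for p in [("A", "Dirty"), ("A", "Clean"), ("B", "Dirty")]:
        print(p, "->", reflex_vacuum_agent(p))

Note that this agent ignores the percept history entirely; whether that is rational depends on the performance measure chosen above, e.g. a one-point-per-move penalty makes endless shuttling between clean squares a poor policy.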

PEAS

To design a rational agent, we must specify the task environment. Consider, e.g., the task of designing an automated taxi. What are the Performance measure, Environment, Actuators, Sensors?

• Performance measure: eg safety, destination, profits, legality, comfort, ...
• Environment: eg streets/motorways, traffic, pedestrians, weather, ...
• Actuators: eg steering, accelerator, brake, horn, speaker/display, ...
• Sensors: eg video, accelerometers, gauges, engine sensors, GPS, ...

Environment types

                    Solitaire   Backgammon   e-shopping              Taxi
    Observable?     Yes         Yes          No                      No
    Deterministic?  Yes         No           Partly                  No
    Episodic?       No          No           No                      No
    Static?         Yes         Semi         Semi                    No
    Discrete?       Yes         Yes          Yes                     No
    Single-agent?   Yes         No           Yes (except auctions)   No

The environment type largely determines the agent design. The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, and multi-agent.

Agent types

Four basic types, in order of increasing generality:

– simple reflex agents
– reflex agents with state
– goal-based agents
– utility-based agents

All these can be turned into learning agents.

Knowledge bases

    Inference engine: domain-independent algorithms
    Knowledge base:   domain-specific content

Knowledge base = set of sentences in a formal language.

Declarative approach to building an agent (or other system): Tell it what it needs to know; then it can Ask itself what to do, and the answers should follow from the KB.

Agents can be viewed at the knowledge level, i.e., what they know, regardless of how implemented, or at the implementation level, i.e., data structures in the KB and algorithms that manipulate them.
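To make the Tell/Ask interface concrete, here is a deliberately tiny sketch, assuming sentences are opaque strings and that Ask only checks membership; a real knowledge base would use domain-independent inference to return answers that follow from, rather than merely appear in, the KB. The class and sentence strings are illustrative only.

    # Toy knowledge base exposing the declarative Tell/Ask interface.
    # Assumption: sentences are plain strings and ask() is mere lookup;
    # a real KB would infer consequences of what it has been told.

    class KnowledgeBase:
        def __init__(self):
            self.sentences = set()

        def tell(self, sentence):
            # Tell it what it needs to know.
            self.sentences.add(sentence)

        def ask(self, query):
            # Ask what follows from the KB (here: only what was told verbatim).
            return query in self.sentences

    kb = KnowledgeBase()
    kb.tell("Breeze(2,1)")
    print(kb.ask("Breeze(2,1)"))   # True
    print(kb.ask("Pit(2,2)"))      # False: this toy KB does no inference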

A simple knowledge-based agent

    function KB-Agent(percept) returns an action
        static: KB, a knowledge base
                t, a counter, initially 0, indicating time

        Tell(KB, Make-Percept-Sentence(percept, t))
        action ← Ask(KB, Make-Action-Query(t))
        Tell(KB, Make-Action-Sentence(action, t))
        t ← t + 1
        return action

The agent must be able to: represent states, actions, etc.; incorporate new percepts; update internal representations of the world; and deduce hidden properties of the world, and appropriate actions. (A runnable sketch of this loop is given after the wumpus-world material below.)

Wumpus World PEAS description

Performance measure: gold +1000, death -1000, -1 per step, -10 for using the arrow.

Environment: a 4 x 4 grid of squares, with the agent starting at (1,1).

• Squares adjacent to the wumpus are smelly.
• Squares adjacent to a pit are breezy.
• Glitter iff gold is in the same square.
• Shooting kills the wumpus if you are facing it.
• Shooting uses up the only arrow.
• Grabbing picks up the gold if in the same square.
• Releasing drops the gold in the same square.

Actuators: Left turn, Right turn, Forward, Grab, Release, Shoot.

Sensors: Breeze, Glitter, Smell.

[Figure: the standard 4 x 4 wumpus world grid, with pits, breezes, stenches, the wumpus and the gold marked, and START at (1,1).]

Wumpus world characterisation

• Observable? No: only local perception.
• Deterministic? Yes: outcomes exactly specified.
• Static? Yes: the wumpus and pits do not move.
• Discrete? Yes.
• Single-agent? Yes: the wumpus is essentially a natural feature.

Exploring a wumpus world

[Figure: the agent starts at (1,1); with no breeze or stench there, the adjacent squares are marked OK.]
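The KB-Agent pseudocode above translates almost line for line into code. The sketch below is only a skeleton: the make_* constructors and the DummyKB are placeholder names of mine, standing in for a real logical encoding of percepts and actions and a real inference engine.

    # Skeleton of the KB-Agent loop. The sentence constructors and the
    # DummyKB are placeholders; a real agent would encode percepts and
    # actions as logical sentences and let inference choose the action.

    def make_percept_sentence(percept, t):
        return f"Percept({percept}, {t})"      # placeholder encoding

    def make_action_query(t):
        return f"BestAction({t})?"             # placeholder query

    def make_action_sentence(action, t):
        return f"Did({action}, {t})"           # placeholder encoding

    class DummyKB:
        def __init__(self):
            self.sentences = []
        def tell(self, s):
            self.sentences.append(s)
        def ask(self, query):
            return "NoOp"                      # a real KB would infer an action

    class KBAgent:
        def __init__(self, kb):
            self.kb = kb   # KB, a knowledge base
            self.t = 0     # t, a counter, initially 0, indicating time

        def __call__(self, percept):
            self.kb.tell(make_percept_sentence(percept, self.t))
            action = self.kb.ask(make_action_query(self.t))
            self.kb.tell(make_action_sentence(action, self.t))
            self.t += 1
            return action

    agent = KBAgent(DummyKB())
    print(agent(("Stench", None, None, None, None)))   # -> NoOp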

Exploring a wumpus world (continued)

[Figures: successive exploration steps, showing how the agent marks squares OK, P? (possible pit) and W (wumpus) as it combines the breeze and stench percepts from the squares it has visited.]
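The inference these figures illustrate can be captured in a few lines: a visited square with no breeze and no stench makes all its neighbours OK, while a breezy square only marks its not-yet-cleared neighbours as possible pits. A sketch under that reading follows; the grid size, data layout and function names are my own.

    # Bookkeeping behind the exploration diagrams: OK vs. "P?" squares.
    # percepts maps each visited square to the set of percepts observed
    # there, e.g. {"Breeze"} or set(); the grid is assumed to be 4 x 4.

    def adjacent(square):
        x, y = square
        return [(x + dx, y + dy)
                for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
                if 1 <= x + dx <= 4 and 1 <= y + dy <= 4]

    def classify(percepts):
        # Pass 1: neighbours of a percept-free square are safe (OK).
        ok = set(percepts)
        for square, p in percepts.items():
            if not p:
                ok.update(adjacent(square))
        # Pass 2: neighbours of a breezy square that are not known to be
        # safe may hold a pit (P?).
        possible_pit = set()
        for square, p in percepts.items():
            if "Breeze" in p:
                possible_pit.update(n for n in adjacent(square) if n not in ok)
        return ok, possible_pit

    # After visiting (1,1) (no percepts) and (2,1) (breeze), as in the figures:
    ok, maybe_pit = classify({(1, 1): set(), (2, 1): {"Breeze"}})
    print(sorted(ok))         # [(1, 1), (1, 2), (2, 1)]
    print(sorted(maybe_pit))  # [(2, 2), (3, 1)]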

Exploring a wumpus world (continued)

[Figures: the final exploration steps; having located the wumpus and the pits, the agent reaches the square marked BGS (breeze, glitter, stench) and can grab the gold.]

Other tight spots

• Breeze in (1,2) and (2,1) ⇒ no safe actions. Assuming pits are uniformly distributed, (2,2) has a pit with probability 0.86, vs. 0.31.
• Smell in (1,1) ⇒ cannot move safely. But we can use a strategy of coercion: shoot straight ahead; if the wumpus was there, it is now dead, so the square ahead is safe; if the wumpus wasn't there, the square ahead was safe anyway.

Logic in general

Logics are formal languages for representing information such that conclusions can be drawn.

Syntax defines the sentences in the language.

Semantics defines the "meaning" of sentences, i.e., defines the truth of a sentence in a world.

E.g., in the language of arithmetic:

    x + 2 ≥ y is a sentence; x2 + y > is not a sentence
    x + 2 ≥ y is true iff the number x + 2 is not less than the number y
    x + 2 ≥ y is true in a world where x = 7, y = 1
    x + 2 ≥ y is false in a world where x = 0, y = 6
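The arithmetic example makes the syntax/semantics split easy to check mechanically: a "world" is just an assignment of numbers to x and y, and the sentence is evaluated in it. A small illustrative sketch; representing the sentence as a Python predicate is of course only a convenience, not something from the slides.

    # "Truth of a sentence in a world" for the arithmetic example:
    # a world assigns numbers to x and y, and the sentence x + 2 >= y
    # is represented here as a predicate over worlds.

    def sentence(world):
        return world["x"] + 2 >= world["y"]

    print(sentence({"x": 7, "y": 1}))   # True: the sentence holds in this world
    print(sentence({"x": 0, "y": 6}))   # False: it does not hold in this one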


