Artificial Intelligence
Class 2: Intelligent Agents
Dr. Cynthia Matuszek – CMSC 671

Today's Class
• What's an agent?
  • Definition of an agent
  • Rationality and autonomy
  • Types of agents
  • Properties of environments
• Broadly: a thing that does something, with agency
  • Agency is the capacity of individuals to act independently and to make their own free choices.

What is an Agent?
• An intelligent agent is a (usually) autonomous entity which:
  • Observes an environment (the world), and
  • Acts on its environment in order to achieve goals
  • (Doing both of these is what shows "agency")
• Properties:
  • Autonomous
  • Reactive to the environment
  • Pro-active (goal-directed)
  • Interacts with other agents via the environment

How Do You Design an Agent?
• An intelligent agent:
  • Perceives its environment via sensors
  • Acts upon that environment with its actuators (or effectors)
  • (A minimal sketch of this percept-action loop follows this section)
• An intelligent agent may learn
  • Not always: a simple "reflex agent" still counts as an agent
• An intelligent agent behaves in a rational manner
  • Not necessarily an "optimal" one

Human Sensors/Percepts, Actuators/Actions
• Sensors:
  • Eyes (vision), ears (hearing), skin (touch), tongue (gustation), nose (olfaction), neuromuscular system (proprioception), …
• Percepts: "that which is perceived"
  • At the lowest level: electrical signals from these sensors
  • After preprocessing: objects in the visual field (location, textures, colors, …), auditory streams (pitch, loudness, direction), …
• Actuators/effectors:
  • Limbs, digits, eyes, tongue, …
• Actions:
  • Lift a finger, turn left, walk, run, carry an object, …
• The point: percepts and actions need to be carefully defined, sometimes at different levels of abstraction!
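To make the sensors/percepts/actuators framing concrete, here is a minimal Python sketch of the percept-action loop. The `Agent`/`Environment` interface (`program`, `percept`, `execute`) is invented for illustration, not taken from the slides:

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """An agent maps a stream of percepts to actions."""

    @abstractmethod
    def program(self, percept):
        """Given the latest percept, return the next action."""

def run(agent, environment, steps=100):
    """The basic agent loop: sense, decide, act, repeat.

    Assumes a hypothetical environment object with percept() and execute().
    """
    for _ in range(steps):
        percept = environment.percept()   # sensors deliver a percept
        action = agent.program(percept)   # the agent program decides
        environment.execute(action)      # actuators/effectors act on the world
```

Every agent type discussed later in this lecture differs only in how `program` is implemented.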
Rationality
• An ideal rational agent, in every possible world state, does action(s) that maximize its expected performance
• Based on:
  • The percept sequence (world state)
  • Its knowledge (built-in and acquired)
• Rationality includes information gathering
  • If you don't know something, find out!
  • No "rational ignorance"
• Need a performance measure
  • False alarm (false positive) and false dismissal (false negative) rates, speed, resources required, effect on environment, constraints met, user satisfaction, …
• (See the decision-rule sketch after this section)

E.g.: Automated Taxi
• Percepts: Video, sonar, speedometer, odometer, engine sensors, keyboard input, microphone, GPS, …
• Actions: Turn, accelerate, brake, speak, display, …
• Goals: Maintain safety, reach destination, maximize profits (fuel, tire wear), obey laws, provide passenger comfort, …
• Environment: U.S. urban streets, freeways, traffic, pedestrians, weather, customers, …
• Note: different aspects of driving may require different types of agent programs

PEAS
• Must first specify the setting for intelligent agent design; agents must have:
  • Performance measure
  • Environment
  • Actuators
  • Sensors

PEAS: Example
• Agent: Part-picking robot
  • Performance measure: Percentage of parts in correct bins
  • Environment: Conveyor belt with parts, bins
  • Actuators: Jointed arm and hand
  • Sensors: Camera, joint angle sensors

PEAS: Setting
• Specifying the setting: consider designing an automated taxi driver
  • Performance measure? Safe, fast, legal, comfortable trip, maximize profits
  • Environment? Roads, other traffic, pedestrians, customers
  • Actuators? Steering wheel, accelerator, brake, signal, horn
  • Sensors? Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard

Autonomy
• An autonomous system is one that determines its own behavior
  • Not all its decisions are included in its design
  • It is not autonomous if all decisions are made by its designer according to a priori decisions
• "Good" autonomous agents need:
  • Enough built-in knowledge to survive
  • The ability to learn
• In practice this can be a bit slippery
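The "maximize expected performance" definition from the Rationality slide can be stated as a one-line decision rule. A minimal sketch, assuming we are handed the available actions, a probabilistic outcome model, and a performance measure (all three names are hypothetical):

```python
def rational_action(actions, outcome_model, performance):
    """Pick the action that maximizes expected performance.

    outcome_model(action) -> iterable of (probability, resulting_state) pairs
    performance(state)    -> numeric score under the performance measure
    """
    def expected_performance(action):
        return sum(p * performance(s) for p, s in outcome_model(action))
    return max(actions, key=expected_performance)
```

Everything hard about rationality hides inside `outcome_model` and `performance`; the maximization itself is trivial.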
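Since a PEAS description is just four named slots, it can be recorded directly as a small data type. A sketch using the part-picking robot from the slides (the class and field names are mine):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PEAS:
    """A PEAS specification: the setting for intelligent agent design."""
    performance_measure: str
    environment: str
    actuators: str
    sensors: str

part_picking_robot = PEAS(
    performance_measure="Percentage of parts in correct bins",
    environment="Conveyor belt with parts, bins",
    actuators="Jointed arm and hand",
    sensors="Camera, joint angle sensors",
)
```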
Some Types of Agent
1. Table-driven agents
   • Use a percept sequence/action table to find the next action
   • Implemented by a (large) lookup table
2. Simple reflex agents
   • Based on condition-action rules
   • Implemented with a production system
   • Stateless devices which do not have memory of past world states
3. Agents with memory
   • Have internal state
   • Used to keep track of past states of the world
4. Agents with goals
   • Have internal state information, plus goal information about desirable situations
   • Agents of this kind can take future events into consideration
5. Utility-based agents
   • Base their decisions on classic axiomatic utility theory, in order to act rationally

(1) Table-Driven Agents
• Table lookup of percept-action pairs
  • Maps every possible state → best action
• Problems:
  • Too big to generate and store (chess: ~10^120 states)
  • Don't know non-perceptual parts of state (e.g., background knowledge)
  • Not adaptive to changes in the environment (must update the entire table)
  • No looping: can't condition actions on previous actions/states

(2) Simple Reflex Agents
• Rule-based reasoning to map from percepts to optimal action
  • Each rule handles a collection of perceived states
  • "If your rook is threatened…"
• Problems:
  • Still usually too big to generate and to store
  • Still no knowledge of non-perceptual parts of state
  • Still not adaptive to changes in the environment (change by updating the collection of rules)
  • Actions still not conditional on previous state
• (Code sketches of both agent types follow this section)
[Image: chess position; credit: www.quora.com/How-do-you-know-if-your-chess-pieces-are-in-strategic-positions]

(1) Table-Driven/Reflex Agent
[Architecture diagram]

(3) Agents With Memory
• Encode "internal state" of the world, used to remember the past (earlier percepts)
• Why?
  • Sensors rarely give the whole state of the world at each input
  • So the agent must build up an environment model over time
• "State" is used to encode different "worlds" that generate the same (immediate) percepts
• Requires the ability to represent change in the world
  • Could represent just the latest state
  • But then the agent can't reason about hypothetical courses of action
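The difference between agent types (1) and (2) is easy to see in code. A minimal sketch; the table contents, percept format, and the example rule are invented for illustration:

```python
class TableDrivenAgent:
    """Looks up the entire percept sequence in one (enormous) table."""

    def __init__(self, table):
        self.table = table      # {(percept, percept, ...): action}
        self.percepts = []      # the whole history indexes the table

    def program(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts))

class SimpleReflexAgent:
    """Fires the first condition-action rule matching the current percept."""

    def __init__(self, rules):
        self.rules = rules      # [(condition_fn, action), ...]

    def program(self, percept):
        for condition, action in self.rules:
            if condition(percept):   # stateless: only the current percept
                return action
        return None

# One rule covers a whole family of states, e.g. "If your rook is threatened...":
rules = [(lambda p: p.get("rook_threatened", False), "move rook")]
agent = SimpleReflexAgent(rules)
assert agent.program({"rook_threatened": True}) == "move rook"
```

Note that the table needs one entry per possible percept sequence, which is exactly why it is infeasible to generate and store, while a single reflex rule handles a collection of perceived states.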
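Type (3) adds one thing to the reflex agent: an internal state, folded forward from each percept, that the rules consult instead of the raw percept. A sketch, with `update_state` standing in for whatever world model the designer chooses:

```python
class ReflexAgentWithMemory:
    """Condition-action rules over a remembered world model."""

    def __init__(self, rules, update_state, initial_state=None):
        self.rules = rules                # [(condition_fn, action), ...]
        self.update_state = update_state  # (old_state, percept) -> new_state
        self.state = initial_state

    def program(self, percept):
        # Fold the new percept into the internal model of the world.
        self.state = self.update_state(self.state, percept)
        for condition, action in self.rules:
            if condition(self.state):     # rules see the model, not the raw percept
                return action
        return None
```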
(3) Architecture for an Agent with Memory
[Architecture diagram]

(4) Goal-Based Agents
• Choose actions that achieve a goal
  • Which may be given, or computed by the agent
• A goal is a description of a desirable state
  • Need goals to decide what situations are "good"
  • Keeping track of the current state is often not enough
• Deliberative instead of reactive
  • Must consider sequences of actions to get to the goal
  • Involves thinking about the future: "What will happen if I do...?"
  • (A search sketch follows this section)

(4) Architecture for Goal-Based Agent
[Architecture diagram]

(5) Utility-Based Agents
• How to choose from multiple alternatives?
  • What action is best? What state is best?
• Goals give only a crude distinction between "happy" and "unhappy" states
  • Often need a more general performance measure (how "happy"?)
• A utility function gives the degree of success or happiness at a given state
• Can compare choices by:
  • Conflicting goals
  • Likelihood of success
  • Importance of a goal (if achievement is uncertain)

(5) Architecture for a Complete Utility-Based Agent
[Architecture diagram]

Properties of Environments
• Fully observable / partially observable
  • If an agent's sensors give it access to the complete state of the environment, the environment is fully observable
  • Such environments are convenient:
    • No need to keep track of changes in the environment
    • No need to guess or reason about non-observed things
  • Such environments are also rare in practice
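"What will happen if I do...?" can be made concrete as search: simulate actions against a model and test each resulting state against the goal. A minimal breadth-first sketch, assuming a deterministic `result(state, action)` model and hashable states (all names are hypothetical):

```python
from collections import deque

def plan(start, goal_test, actions, result):
    """Breadth-first search for an action sequence that reaches a goal state.

    actions(state)        -> iterable of actions applicable in that state
    result(state, action) -> the successor state (a deterministic world model)
    goal_test(state)      -> True when the state matches the goal description
    """
    frontier = deque([(start, [])])
    seen = {start}                            # states must be hashable
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path                       # the action sequence to the goal
        for action in actions(state):
            nxt = result(state, action)       # "what will happen if I do this?"
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [action]))
    return None                               # no reachable goal state
```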
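The jump from (4) to (5) is the jump from a boolean test to a score. A sketch with invented taxi-style state attributes and weights, showing how conflicting goals (arrival, profit, speed, legality) fold into a single utility:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaxiState:
    # Hypothetical attributes, loosely echoing the automated-taxi example.
    at_destination: bool
    minutes_elapsed: float
    profit: float
    laws_broken: int

def goal_test(state: TaxiState) -> bool:
    """A goal splits states only into 'happy' vs. 'unhappy'."""
    return state.at_destination

def utility(state: TaxiState) -> float:
    """A utility function grades states, so conflicting goals can be traded off.

    The weights here are invented; choosing them is the real design problem.
    """
    return (100.0 * state.at_destination
            + 1.0 * state.profit
            - 0.5 * state.minutes_elapsed
            - 50.0 * state.laws_broken)
```

Action selection then reduces to the expected-value rule from the rationality sketch earlier, with `utility` playing the role of the performance measure.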