

  1. Artificial Intelligence
     Simulation Engines 2008, Chalmers University of Technology
     Markus Larsson, markus.larsson@slxgames.com
     2008-11-24

  2. History of AI
     - AI concerns itself with understanding intelligent entities
     - Unlike psychology or philosophy, AI also deals with how to build these intelligent entities
     - Young research area
       - Defined in 1956
       - However, it has connections to definitions by classic Greek philosophers such as Plato and Aristotle
     - Has gone through many turbulent phases
       - Almost childish enthusiasm in the early days
       - A depressive state after a while
       - A more realistic outlook today

  3. AI in games
     - Most computer games are played against some form of opponent
     - When not playing against another human, a computer-controlled opponent is needed in many cases
     - AI in games provides the human player with a challenging opponent or ally without requiring the presence of another human
     - A keyword is "challenging"
       - Not necessarily proficient or complex
       - Do not use unnecessarily advanced techniques
     - The AI often needs to be tunable for different difficulty levels
     - There are no style points for being true to the field of AI; cheap tricks are good

  4. Agents
     - An agent is an autonomous and independent entity that, much like a human being:
       - Collects information about its surroundings
       - Draws conclusions
       - Makes decisions
       - Executes actions
     - Very useful in AI
       - An easy and natural semantic conception of a sentient game entity
       - Lends itself well to object-oriented design (a minimal sketch of such a design follows)
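
A minimal sketch of how such an agent could be expressed in an object-oriented design (Python is used here for brevity; the class and method names are illustrative, not from the lecture):

    class Agent:
        """Minimal agent interface: collect information, decide, act."""

        def perceive(self, world):
            """Collect information about the surroundings (a percept)."""
            raise NotImplementedError

        def decide(self, percept):
            """Draw conclusions from the percept and return a decision."""
            raise NotImplementedError

        def act(self, decision, world):
            """Execute the actions implied by the decision."""
            raise NotImplementedError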

  5. A model for AI in games
     - A useful model for our continued discussion
     - Perception
       - The agent collects information about its surroundings using its "senses"
     - Decision
       - The agent analyzes the collected data, builds an understanding of the situation and then makes a decision
     - Action
       - The decision is translated into a number of separate steps needed to accomplish the goal
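
In code, the three phases could be driven once per AI tick for every agent; a rough sketch, reusing the hypothetical Agent interface above:

    def update_ai(world, agents):
        """Run one AI tick: perception, decision and action for each agent."""
        for agent in agents:
            percept = agent.perceive(world)    # perception phase
            decision = agent.decide(percept)   # decision phase
            agent.act(decision, world)         # action phase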

  6. Cheating
     - The golden rule of games AI: cheat as much as you can get away with!
       - There is no incentive for AI programmers to "play straight"
       - The job is to create a worthy opponent
       - No rules of conduct
     - Cheating can be done in all parts of the model

  7. Cheating
     - Perception
       - The most basic cheat is to give the computer access to an internal representation of the world instead of having it interpret the world itself
       - It is often useful to give the AI more information than what the player has access to (exact positions etc.)
     - Decision
       - Cheating is more difficult in this phase
       - Agents that are not visible to the player can often skip the decision phase entirely
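
As an illustration of the perception cheat, the sketch below simply reads the player's exact position out of a hypothetical world structure and, for lower difficulty settings, blurs it with noise so the cheat is less effective (all names here are made up for the example):

    import random

    def cheating_percept(world, player_id, difficulty=1.0):
        """'Perceive' the player by reading internal world state directly."""
        x, y = world.positions[player_id]        # exact position, no sensing needed
        error = (1.0 - difficulty) * 5.0         # easier settings get noisier data
        return (x + random.uniform(-error, error),
                y + random.uniform(-error, error))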

  8. Action
     - It can often be useful to let the AI work with a different set of rules than the players
       - For instance, a computer-controlled combat pilot might use a more simplified flight model than human players
     - If you cheat, make sure it is not obvious to the player!
     - The ultimate cheat is to script behaviors

  9. Perception
     - Perception provides the agent with information about its surrounding environment using sensors
       - Can be anything from a simple photosensitive sensor that detects light to a full vision system
     - In the context of games, agents rarely perceive the environment on their own; instead they query shared data structures such as the scene graph
     - Topics of interest
       - Identify different ways to reach an enemy position
       - Find places to hide
       - Identify threats (windows, doors, etc.)
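
A toy vision-cone check of the kind such a sensor might use, assuming 2-D positions and a facing angle in degrees (occlusion by walls is ignored for brevity):

    import math

    def can_see(observer_pos, facing_deg, target_pos,
                view_distance=30.0, fov_deg=120.0):
        """Target is 'seen' if it is within range and inside the view cone."""
        dx = target_pos[0] - observer_pos[0]
        dy = target_pos[1] - observer_pos[1]
        if math.hypot(dx, dy) > view_distance:
            return False
        angle_to_target = math.degrees(math.atan2(dy, dx))
        diff = (angle_to_target - facing_deg + 180) % 360 - 180   # signed angle difference
        return abs(diff) <= fov_deg / 2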

  10. Perception
      - Perceptive tasks depend on the type of game, of course, but they tend to relate to perceiving the topology of the environment
      - Manual topology markup
        - Topological information is added to the 3D world in the level editor (places to hide, places to shoot from, patrol routes, pathfinding information, etc.)
      - Automatic topology analysis
        - Analysis of the world to automatically identify access points, paths, hiding places, etc.
        - Often at least partially done in a pre-processing step
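
Manual topology markup often boils down to an annotated waypoint graph; a tiny, made-up example of what such data might look like once exported from the editor:

    # Waypoints carry designer annotations (cover spots, sniping positions, ...)
    waypoints = {
        "w1": {"pos": (0, 0),   "tags": {"patrol"},          "links": ["w2"]},
        "w2": {"pos": (10, 0),  "tags": {"cover"},           "links": ["w1", "w3"]},
        "w3": {"pos": (10, 10), "tags": {"cover", "sniper"}, "links": ["w2"]},
    }

    def nodes_with_tag(graph, tag):
        """Find all waypoints carrying a given annotation, e.g. hiding places."""
        return [name for name, node in graph.items() if tag in node["tags"]]

    print(nodes_with_tag(waypoints, "cover"))   # ['w2', 'w3']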

  11. Decision
      - Given our input from the perception phase, we want to make a decision
      - Finite state machines
        - Rule-based transition system
      - Fuzzy state machines
        - Rule-based transition system based on fuzzy logic
      - Artificial life
        - Simulation of artificial life forms and behavior
      - Neural networks
        - Network structure for interpreting input and giving output (learning architecture)

  12. Action
      - In the decision phase, we come up with a general decision: our high-level plan
      - In the action phase, we execute the plan through low-level actions
      - Example
        - An AI infantry commander comes up with the decision to "take hill 241". The action phase then translates this into first finding the shortest path to hill 241 (staying in cover from enemy fire), issuing the movement commands to his soldiers, assuming a combat formation when approaching the hill, and then taking a defensive position once the hill has been secured.
      - Pathfinding
        - Finding the shortest path from point A to point B given a number of constraints (a minimal sketch follows below)
        - Might need to take coordination between multiple agents into consideration
      - Multi-level agents
        - Modern AI systems often need multiple layers for controlling low-level and high-level actions
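
The slide mentions pathfinding but does not prescribe an algorithm; A* is the usual choice in games. A minimal grid-based sketch (unit-cost, 4-connected movement, Manhattan heuristic):

    import heapq

    def astar(grid, start, goal):
        """A* shortest path on a 2-D grid; grid[y][x] == 1 means blocked.
        Returns the path as a list of (x, y) cells, or None if unreachable."""
        def h(p):                                   # Manhattan-distance heuristic
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

        frontier = [(h(start), 0, start)]           # entries are (f = g + h, g, cell)
        came_from = {start: None}
        best_g = {start: 0}
        while frontier:
            _, g, cell = heapq.heappop(frontier)
            if cell == goal:                        # reconstruct path back to start
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = came_from[cell]
                return path[::-1]
            if g > best_g[cell]:                    # stale queue entry, skip
                continue
            x, y = cell
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and not grid[ny][nx]:
                    ng = g + 1
                    if ng < best_g.get((nx, ny), float("inf")):
                        best_g[(nx, ny)] = ng
                        came_from[(nx, ny)] = cell
                        heapq.heappush(frontier, (ng + h((nx, ny)), ng, (nx, ny)))
        return None

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(astar(grid, (0, 0), (0, 2)))
    # [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)]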

  13. Finite state machines
      - A suitable technique for implementing simple rule-based agents
      - Consists of a set of states and a collection of transitions for each state
        - A transition consists of a trigger input, which initiates the state transition, a destination state, and an output
      - The FSM also has a start state from which it begins execution
      - FSMs are often drawn as state transition diagrams
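
A minimal FSM implementation matching this description, with each transition keyed by its trigger and carrying a destination state and an output (the data layout is one possible choice, not the lecture's):

    class FSM:
        """States, a start state, and per-state transitions
        of the form trigger -> (destination state, output)."""

        def __init__(self, start_state, transitions):
            # transitions: {state: {trigger: (next_state, output)}}
            self.state = start_state
            self.transitions = transitions

        def handle(self, trigger):
            """Feed one input trigger and follow the matching transition, if any."""
            next_state, output = self.transitions.get(self.state, {}).get(
                trigger, (self.state, None))   # unknown trigger: stay in place
            self.state = next_state
            return output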

  14. FSMs for AI
      - Benefits
        - Good control over the agent's behavior
        - Easy to implement
        - The model is easy for designers to understand
      - Drawbacks
        - Hard (and time-consuming) to write exhaustively
        - No emergent behavior; the agent will only do what we tell it to do, and we cannot hope to get holistic effects of rules acting together
        - Deterministic: the agent is easy to predict and its behavior could potentially be exploited

  15. Example: Finite state machine
      - (state transition diagram not reproduced here)
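
As a stand-in for the missing diagram, here is a hypothetical guard built with the FSM sketch above: it patrols until it spots the player, attacks, and flees when badly hurt (all state, trigger and output names are invented):

    guard = FSM("patrol", {
        "patrol": {"see_player":  ("attack", "draw weapon")},
        "attack": {"lost_player": ("patrol", "holster weapon"),
                   "low_health":  ("flee",   "run for cover")},
        "flee":   {"healed":      ("patrol", "resume patrol")},
    })

    print(guard.handle("see_player"))   # "draw weapon"
    print(guard.state)                  # "attack"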

  16. Fuzzy logic
      - One of the main features of an FSM is that it is deterministic
        - A desirable property in many systems
        - Not necessarily a good thing in AI: it can create predictable behavior
      - Natural solution: make our FSM non-deterministic
        - For a given input, the output can be chosen at random or by an internal weighting function
        - This gives a fuzzy state machine
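
One simple way to get such a non-deterministic state machine is to let each input map to several candidate transitions and pick among them by weight; a small sketch (the weights and state names are invented for the example):

    import random

    def choose_transition(options):
        """options = [(weight, next_state), ...]; pick one proportionally to weight."""
        total = sum(weight for weight, _ in options)
        roll = random.uniform(0, total)
        for weight, next_state in options:
            roll -= weight
            if roll <= 0:
                return next_state
        return options[-1][1]               # guard against floating-point drift

    # On spotting the player, the guard usually attacks but sometimes calls for help.
    print(choose_transition([(0.7, "attack"), (0.3, "call_for_backup")]))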

  17. Autonomous agents
      - Definition from Russell & Norvig, 1995
        - An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment using effectors
      - Works very well with the model we have previously mentioned

  18. Autonomous agents
      - Autonomous agents take the information they perceive into account when forming and carrying out their decisions
      - Non-autonomous agents simply discard sensory input
      - We will examine three types of agents
        - Reactive agents
        - Reactive agents with state
        - Goal-based agents

  19. Reactive agents
      - A reactive agent is the simplest form of agent; it reacts to a situation purely according to a set of rules for action and reaction
      - For each update, the agent searches its database of rules until it finds one that matches the current situation, then executes the action associated with that rule (a minimal sketch follows below)
      - Can easily be implemented using an FSM
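
A reactive agent's update can be little more than a first-match rule scan; a minimal sketch with invented rules:

    def reactive_step(percept, rules):
        """Run the first rule whose condition matches the current percept.
        rules is an ordered list of (condition, action) callables."""
        for condition, action in rules:
            if condition(percept):
                return action(percept)
        return None                         # no rule matched: do nothing

    rules = [
        (lambda p: p["health"] < 20,   lambda p: "flee"),
        (lambda p: p["enemy_visible"], lambda p: "attack"),
    ]
    print(reactive_step({"health": 80, "enemy_visible": True}, rules))   # "attack"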

  20. Reactive agents with state
      - In many cases it is not sufficient to base behavior on input alone; we might need some kind of state (memory)
      - Example
        - A driver looks in the rear-view mirror from time to time. When changing lanes, the driver needs to take both the information from looking in the mirror and the information from looking forwards into consideration.
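
Adding state can be as simple as remembering earlier percepts and letting them influence later decisions; an illustrative sketch in which an enemy-tracking memory stands in for the driver's mirror check:

    class ReactiveAgentWithState:
        """Reactive agent with a small memory of what it has seen before."""

        def __init__(self):
            self.last_enemy_pos = None               # internal state ("memory")

        def step(self, percept):
            if percept.get("enemy_pos") is not None:
                self.last_enemy_pos = percept["enemy_pos"]   # update memory
                return "attack"
            if self.last_enemy_pos is not None:
                return ("search", self.last_enemy_pos)       # act on remembered info
            return "patrol"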

  21. Goal-based agents
      - Sometimes state and rules are not sufficient; we need a goal to decide the most useful course of action
      - This gives rise to goal-based agents, which not only have a rule database but also select actions with a higher-level goal in mind
      - Implies that the agent needs to know the consequence Y of performing an action X
      - The decision process becomes one of searching or planning given a set of actions and consequences (a toy planner is sketched below)
      - Goal-based agents allow for emergent behavior
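
When every action's consequence is known, selecting actions toward a goal becomes a search problem; a toy breadth-first planner over a hand-written action model (the state and action names are invented for the example):

    from collections import deque

    def plan(start_state, goal_state, actions):
        """Breadth-first search for a shortest sequence of actions to the goal.
        actions maps a state to a list of (action_name, resulting_state) pairs,
        i.e. the consequence Y of performing action X in that state."""
        frontier = deque([(start_state, [])])
        visited = {start_state}
        while frontier:
            state, steps = frontier.popleft()
            if state == goal_state:
                return steps
            for action, result in actions.get(state, []):
                if result not in visited:
                    visited.add(result)
                    frontier.append((result, steps + [action]))
        return None

    actions = {
        "barracks": [("march_to_valley", "valley")],
        "valley":   [("climb_hill", "hilltop"), ("retreat", "barracks")],
    }
    print(plan("barracks", "hilltop", actions))   # ['march_to_valley', 'climb_hill']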
