

  1. Simulation Engines TDA571|DIT030 Artificial Intelligence, Tommaso Piazza

  2. Administrative stuff
  - Next week AI presents; 6 out of 7 groups have AI
  - If you don't have AI but do have networks, you will be presenting on Wednesday next week; the lecture on networks will be on Monday
  - Probably no lectures on 30/11 and 2/12; they will take place on 7/12 and 8/12 instead
  IDC | Interaction Design Collegium

  3. History of AI
  - AI concerns itself with understanding intelligent entities
  - Unlike psychology or philosophy, AI also deals with how to build these intelligent entities
  - Young research area: defined in 1956, but with connections to classic Greek philosophers such as Plato and Aristotle
  - Has gone through many turbulent phases: almost childish enthusiasm in the early days, a depressive state after a while, and a more realistic outlook today

  4. AI in games
  - Most computer games are played against some form of opponent
  - When not playing against another human, a computer-controlled opponent is often needed
  - AI in games provides the human player with a challenging opponent or ally without requiring the presence of another human
  - A keyword is "challenging": not necessarily proficient or complex
    - Do not use unnecessarily advanced techniques
    - Often the AI needs to be tunable for different difficulty levels
  - There are no style points for being true to the field of AI; cheap tricks are good

  5. Agents
  - An agent is an autonomous and independent entity that, much like a human being:
    - Collects information about its surroundings
    - Draws conclusions
    - Makes decisions
    - Executes actions
  - Very useful in AI: an easy and natural semantic conception of a sentient game entity
  - Lends itself well to object-oriented design

  6. A model for AI in games
  - A useful model for our continued discussion:
  - Perception: the agent collects information about its surroundings using its "senses"
  - Decision: the agent analyzes the collected data, builds an understanding of the situation, and then makes a decision
  - Action: given the decision, the agent translates it into a number of separate steps needed to accomplish the goal
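The three-phase model above maps naturally onto an update loop. Below is a minimal sketch of that loop as an abstract base class; the class and method names (`Agent`, `perceive`, `decide`, `act`) are illustrative, not from the slides.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """One game tick: Perception -> Decision -> Action."""

    def update(self, world):
        percepts = self.perceive(world)   # Perception: gather sensory data
        decision = self.decide(percepts)  # Decision: analyze and choose
        self.act(decision, world)         # Action: carry out the choice

    @abstractmethod
    def perceive(self, world): ...

    @abstractmethod
    def decide(self, percepts): ...

    @abstractmethod
    def act(self, decision, world): ...
```

Concrete agents then only fill in the three phases, which keeps the per-tick control flow identical across every AI entity in the game.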

  7. Cheating
  - The golden rule of game AI: cheat as much as you can get away with!
  - There is no incentive for AI programmers to "play straight"; the job is to create a worthy opponent, and there are no rules of conduct
  - Cheating can be done in all parts of the model

  8. Cheating
  - Perception
    - The most basic cheat is to give the computer access to an internal representation of the world instead of having it interpret the world itself
    - It is often useful to give the AI more information than the player has access to (exact positions, etc.)
  - Decision
    - Cheating is more difficult here
    - Agents that are not visible to the player can often skip the decision phase entirely

  9. Action
  - It can often be useful to work with different sets of rules than players do
    - For instance, a computer-controlled combat pilot might use a simpler flight model than human players
  - If you cheat, make sure it is not obvious to the player!
  - The ultimate cheat is to script behaviors

  10. Perception
  - Perception provides the agent with information about its surrounding environment using sensors
  - These can be anything from a simple photosensitive sensor that detects light to a full vision system
  - In the context of games, agents rarely perceive the environment on their own; instead they inspect shared data structures such as the scene graph
  - Topics of interest:
    - Identify different ways to access an enemy position
    - Find places to hide
    - Identify threats (windows, doors, etc.)

  11. Perception
  - Perceptive tasks depend on the type of game, of course, but they tend to relate to perceiving the topology of the environment
  - Manual topology markup: level editors add topological information to the 3D world (places to hide, places to shoot from, patrol routes, pathfinding information, etc.)
  - Automatic topology analysis: analysis of the world to automatically identify access points, paths, hiding places, etc.; often at least partially done in a pre-processing step
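As a concrete example of such a pre-processing step, the sketch below turns a tile map into a walkability graph that agents can query at runtime instead of re-analyzing the world each frame. The tile encoding ('.' walkable, '#' blocked) and the function name are assumptions for illustration.

```python
def build_nav_graph(grid):
    """Pre-processing: map each walkable cell of a tile map to its
    walkable 4-connected neighbours ('.' walkable, '#' blocked)."""
    graph = {}
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != '.':
                continue  # blocked cells never enter the graph
            neighbours = []
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == '.':
                    neighbours.append((nr, nc))
            graph[(r, c)] = neighbours
    return graph
```

A real analysis pass would also tag cover spots or choke points, but the principle is the same: do the expensive topology work once, up front.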

  12. Decision
  - Given the input from the perception phase, we want to make a decision
  - Finite state machine: rule-based transition system
  - Fuzzy state machine: rule-based transition system based on fuzzy logic
  - Artificial life: simulation of artificial life forms and behavior
  - Neural network: network structure for interpreting input and producing output (a learning architecture)

  13. Action
  - In the decision phase, we come up with a general decision, our high-level plan; in the action phase, we execute the plan through low-level actions
  - Example: an AI infantry commander decides to "take hill 241". The action phase then translates this into first finding the shortest path to hill 241 (staying in cover from enemy fire), issuing the movement commands to his soldiers, assuming a combat formation when approaching the hill, and then taking a defensive position once the hill has been secured.
  - Pathfinding: finding the shortest path from point A to point B given a number of constraints; might need to take coordination between multiple agents into consideration
  - Multi-level agents: modern AI systems often need multiple layers for controlling low-level and high-level actions
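The pathfinding step can be sketched with a breadth-first search, which finds a shortest path on a uniform-cost tile map and is the usual baseline before moving to A*. The tile encoding ('.' walkable, '#' blocked) is an assumption for illustration.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """BFS shortest path on a tile map; returns a list of (row, col)
    cells from start to goal, or None if the goal is unreachable."""
    queue = deque([start])
    came_from = {start: None}  # also serves as the visited set
    while queue:
        current = queue.popleft()
        if current == goal:
            # Walk the parent links back to the start and reverse
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == '.' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = current
                queue.append((nr, nc))
    return None
```

Production games typically replace this with A* over a navigation graph, add movement costs, and layer constraints (cover, formation spacing) on top, but the search skeleton stays the same.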

  14. Finite state machines
  - A suitable technique for implementing simple rule-based agents
  - Consists of a set of states and a collection of transitions for each state
  - A transition consists of a trigger input, which initiates the state transition, a destination state, and an output
  - The FSM also has a start state from which it begins execution
  - FSMs are often drawn as state transition diagrams
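The definition above (start state, plus transitions of trigger, destination, output) translates directly into a small table-driven class. The guard states and triggers in the example table are hypothetical.

```python
class FSM:
    """Table-driven finite state machine.

    `transitions` maps (state, trigger) -> (next_state, output),
    matching the slide's definition of a transition."""

    def __init__(self, start, transitions):
        self.state = start            # start state
        self.transitions = transitions

    def fire(self, trigger):
        # Unknown triggers leave the state unchanged and emit no output
        next_state, output = self.transitions.get(
            (self.state, trigger), (self.state, None))
        self.state = next_state
        return output

# Hypothetical guard behaviour
guard = FSM("patrol", {
    ("patrol", "see_enemy"):  ("attack", "draw_weapon"),
    ("attack", "enemy_dead"): ("patrol", "sheathe_weapon"),
    ("attack", "low_health"): ("flee",   "run_away"),
})
```

Keeping the transitions in a plain table (rather than hard-coded branches) is what makes the model easy for designers to read and tune.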

  15. FSMs for AI
  - Benefits
    - Good control over the agent's behavior
    - Easy to implement
    - The model is easy for designers to understand
  - Drawbacks
    - Hard (and time-consuming) to write exhaustively
    - No emergent behavior: the agent will only do what we tell it to do; we cannot hope to get holistic effects of rules acting together
    - Deterministic: the agent is easy to predict, and its behavior could potentially be exploited

  16. Example: Finite state machine

  17. Fuzzy logic
  - One of the main features of an FSM is that it is deterministic
    - A desirable property in many systems, but not necessarily a good thing in AI: it can create predictable behavior
  - Natural solution: make our FSM non-deterministic
    - For a given input, any output can be chosen at random or by an internal weighting function
    - This is a fuzzy state machine
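The "internal weighting function" idea can be sketched with a weighted random choice over candidate next states. The NPC states and weights in the example are hypothetical.

```python
import random

def choose_transition(options, rng=random):
    """Non-deterministic transition: pick the next state by weight
    instead of a fixed rule. `options` maps next_state -> weight."""
    states = list(options)
    weights = [options[s] for s in states]
    return rng.choices(states, weights=weights, k=1)[0]

# Hypothetical: a cornered NPC that usually attacks but sometimes flees
options = {"attack": 0.7, "flee": 0.2, "call_for_help": 0.1}
```

Because the weights can be tuned (or even computed per situation, e.g. scaled by remaining health), the same machinery supports both mild unpredictability and difficulty tuning.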

  18. Autonomous agents
  - Definition from Russell & Norvig, 1995: an agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment using effectors
  - Works very well with the model we have previously discussed

  19. Autonomous agents
  - Autonomous agents take the information they perceive into account when forming and carrying out their decisions; non-autonomous agents simply discard sensory input
  - We will examine three types of agents:
    - Reactive agents
    - Reactive agents with state
    - Goal-based agents

  20. Reactive agents
  - A reactive agent is the simplest form of agent; it reacts to a situation purely according to a set of rules for action and reaction
  - On each update, the agent searches its database of rules until it finds one that matches the current situation, then executes the action associated with that rule
  - Can be easily implemented using an FSM
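The "search the rule database until one matches" update can be sketched as a first-match scan over (condition, action) pairs. The guard rules, percept keys, and the "idle" default are illustrative assumptions.

```python
def reactive_step(rules, percepts):
    """Scan the rule list top-down and return the action of the first
    rule whose condition matches the current percepts."""
    for condition, action in rules:
        if condition(percepts):
            return action
    return "idle"  # assumed default when no rule matches

# Hypothetical rules for a simple guard; order encodes priority
rules = [
    (lambda p: p.get("enemy_visible") and p.get("health", 100) < 25, "flee"),
    (lambda p: p.get("enemy_visible"), "attack"),
    (lambda p: p.get("noise_heard"), "investigate"),
]
```

Note that rule order matters: placing the low-health rule first makes it override the plain attack rule, which is how priorities are usually expressed in this scheme.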

  21. Reactive agents with state
  - In many cases, it is not sufficient to base behavior on input alone; we might need some kind of state (memory)
  - Example: a driver looks in the rear-view mirror from time to time. When changing lanes, the driver needs to take both the remembered information from the mirror and the information from looking forward into consideration.
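The driver example can be sketched as an agent whose decision combines a stored percept (the last mirror check) with the current forward view. The class, percept values, and action names are illustrative assumptions.

```python
class DriverAgent:
    """Reactive agent with state: remembers the last mirror check and
    combines that memory with the current forward view."""

    def __init__(self):
        self.lane_clear = False  # internal state (memory)

    def perceive_mirror(self, mirror_view):
        # Update memory; this happens on a different tick than deciding
        self.lane_clear = (mirror_view == "clear")

    def decide(self, forward_view):
        # Decision uses current input AND remembered state together
        if forward_view == "slow_traffic" and self.lane_clear:
            return "change_lane"
        return "keep_lane"
```

A purely reactive agent, given only `forward_view`, could never safely change lanes: the mirror information exists only in memory at decision time.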

  22. Goal-based agents
  - Sometimes state and rules are not sufficient; we need a goal to decide the most useful course of action
  - This gives rise to goal-based agents, which not only have a rule database but also select actions with a higher-level goal in mind
  - Implies that the agent needs to know the consequence Y of performing an action X
  - The decision process becomes one of searching or planning given a set of actions and consequences
  - Goal-based agents allow for emergent behavior
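"Searching given a set of actions and consequences" can be sketched as a breadth-first search over states, where knowing the consequence Y of action X is encoded as a mapping from each state to its available (action, resulting_state) pairs. The door-opening states and actions are hypothetical.

```python
from collections import deque

def plan(start, goal, actions):
    """BFS planner: `actions` maps state -> [(action, resulting_state)];
    returns the shortest action sequence reaching `goal`, or None."""
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        state, steps = queue.popleft()
        if state == goal:
            return steps
        for action, result in actions.get(state, []):
            if result not in visited:
                visited.add(result)
                queue.append((result, steps + [action]))
    return None  # goal unreachable with the known consequences

# Hypothetical consequence model for getting through a locked door
actions = {
    "at_door_locked": [("pick_lock", "at_door_open"), ("find_key", "has_key")],
    "has_key":        [("unlock", "at_door_open")],
    "at_door_open":   [("walk_through", "inside")],
}
```

Emergent behavior falls out naturally: nobody wrote a "pick the lock then walk through" rule; the planner composed it from the individual action consequences.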

  23. Multi-level agent structures
  - Another way to achieve emergent behavior with agent technology is to create hierarchical structures of agents acting at different abstraction levels
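Returning to the infantry example from slide 13, a two-level hierarchy might look like the sketch below: a high-level commander agent decomposes an abstract goal into concrete orders for low-level soldier agents. All class, method, and order names are illustrative assumptions.

```python
class Soldier:
    """Low-level agent: executes atomic orders."""

    def __init__(self):
        self.log = []  # record of executed orders, for inspection

    def execute(self, order):
        self.log.append(order)

class Commander:
    """High-level agent: decomposes an abstract goal into orders
    for subordinate agents one abstraction level down."""

    def __init__(self, squad):
        self.squad = squad

    def achieve(self, goal):
        # A fixed decomposition for the sketch; a real commander
        # would plan these steps (see the goal-based agents above)
        orders = ["move_to_" + goal, "take_cover", "secure_" + goal]
        for soldier in self.squad:
            for order in orders:
                soldier.execute(order)
```

Because each level only reasons at its own granularity, the commander never micromanages footsteps and the soldiers never worry about strategy, which is exactly the layering slide 13 calls for.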
