
CS 480/680: GAME ENGINE PROGRAMMING ARTIFICIAL INTELLIGENCE - PowerPoint PPT Presentation

CS 480/680: GAME ENGINE PROGRAMMING, ARTIFICIAL INTELLIGENCE. 3/7/2013. Santiago Ontañón, santi@cs.drexel.edu, https://www.cs.drexel.edu/~santi/teaching/2013/CS480-680/intro.html. Announcement: Game AI in an Educational Game, a CS RA position @ 20 hrs/week for two terms.


  1. Motor Control Layer • Steering Behaviors generate the “desired accelerations”. An underlying “motor layer” translates these into commands like “accelerate”, “brake”, “turn right”, “turn left”. [Diagram: the AI stack: Strategy, Decision Making, Movement (Steering Behaviors feeding Motor Control), connected to the World through an Interface (perception).]

  2. Output Filtering • Idea: • Use the steering behavior to produce an acceleration request • Project the request onto the accelerations that the vehicle at hand can perform, and ignore the rest • [Diagram: a steering request, the set of accelerations the vehicle can perform, and the resulting projection.]
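A minimal sketch of this projection idea. The capability model (VehicleCaps with maxForward, maxBrake, maxLateral) is our assumption, not from the slides: each axis of the request is clamped to what the motor layer can realize, and the unachievable remainder is dropped.

```cpp
#include <algorithm>

// Hypothetical per-axis capability limits of the vehicle (names are ours).
struct VehicleCaps {
    float maxForward;   // max forward acceleration, m/s^2
    float maxBrake;     // max deceleration, m/s^2 (positive value)
    float maxLateral;   // max turning acceleration, m/s^2
};

struct Accel { float forward; float lateral; };

// Project a steering request onto what the vehicle can actually do:
// keep the achievable component of each axis, ignore the rest.
inline Accel projectRequest(const Accel& request, const VehicleCaps& caps) {
    Accel out;
    out.forward = std::clamp(request.forward, -caps.maxBrake, caps.maxForward);
    out.lateral = std::clamp(request.lateral, -caps.maxLateral, caps.maxLateral);
    return out;
}
```

A real motor layer would then map the clamped acceleration to discrete commands (accelerate/brake/turn); clamping per axis is the simplest possible projection.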

  3. Outline • Student Presentations • Game AI • Movement • Path-finding • Scripting • Board Games • Learning • Project Discussion

  4. Pathfinding • Problem: • Finding a path for a character/unit to move from point A to point B • One of the most common AI requirements in games • Especially critical in RTS games, since there are lots of units

  6. Pathfinding • Simplest scenario: • Single character • Non-real time • Grid • Static world • Solution: A* • Complex scenario: • Multiple characters (overlapping paths) • Real time • Continuous map • Dynamic world
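For the simplest scenario above (single character, grid, static world), A* can be sketched compactly. The 4-connected grid, Manhattan heuristic, and returning only the path length are simplifying assumptions for illustration:

```cpp
#include <array>
#include <climits>
#include <cstdlib>
#include <functional>
#include <queue>
#include <vector>

// Minimal A* on a 4-connected grid: grid[y][x] == 1 means blocked.
// Returns the number of steps of a shortest path, or -1 if unreachable.
int aStarPathLength(const std::vector<std::vector<int>>& grid,
                    int sx, int sy, int gx, int gy) {
    const int h = (int)grid.size(), w = (int)grid[0].size();
    auto heuristic = [&](int x, int y) { return std::abs(x - gx) + std::abs(y - gy); };
    using Node = std::array<int, 4>;  // (f, g, x, y), ordered by lowest f = g + h
    std::priority_queue<Node, std::vector<Node>, std::greater<Node>> open;
    std::vector<std::vector<int>> best(h, std::vector<int>(w, INT_MAX));
    open.push({heuristic(sx, sy), 0, sx, sy});
    best[sy][sx] = 0;
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    while (!open.empty()) {
        auto [f, g, x, y] = open.top(); open.pop();
        (void)f;
        if (x == gx && y == gy) return g;       // goal reached: g is the path cost
        if (g > best[y][x]) continue;           // stale queue entry, skip
        for (int d = 0; d < 4; ++d) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h || grid[ny][nx]) continue;
            if (g + 1 < best[ny][nx]) {         // found a cheaper route to (nx, ny)
                best[ny][nx] = g + 1;
                open.push({g + 1 + heuristic(nx, ny), g + 1, nx, ny});
            }
        }
    }
    return -1;  // open list exhausted: no path
}
```

A production version would also keep parent pointers to reconstruct the path (for localization back to map coordinates), rather than just its length.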

  7. Pathfinding is a Problem! • Even in modern commercial games: • http://www.youtube.com/watch?v=lw9G-8gL5o0&feature=player_embedded

  8. Quantization and Localization • Pathfinding computations are performed with an abstraction over the game map (a graph, composed of nodes and links) • Quantization: • Game map coordinates -> graph node • Localization: • Graph node -> Game map coordinates
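The two mappings can be sketched for a square tile graph. TILE_SIZE and the truncation-based indexing are our assumptions (a real engine must also handle negative coordinates and off-map positions):

```cpp
#include <utility>

// Hypothetical tile size in world units (name is ours, not the slides').
constexpr float TILE_SIZE = 32.0f;

// Quantization: game map coordinates -> graph node (tile indices).
// Note: integer truncation; negative coordinates would need flooring instead.
inline std::pair<int, int> quantize(float wx, float wy) {
    return { (int)(wx / TILE_SIZE), (int)(wy / TILE_SIZE) };
}

// Localization: graph node -> game map coordinates (center of the tile).
inline std::pair<float, float> localize(int tx, int ty) {
    return { (tx + 0.5f) * TILE_SIZE, (ty + 0.5f) * TILE_SIZE };
}
```

Localizing to the tile center is the usual convention so that a character steering toward a node never aims at a tile boundary.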

  9. Tile Graphs • Divide the game map into equal tiles (squares, hexagons, etc.) • Typical in RTS games

  10. Navigation Meshes (Navmesh) • By far the most widely used • Use the level geometry as the pathfinding graph • Floor is made out of triangles: • Use the center of each “floor” triangle as a characteristic point

  11. Navigation Meshes (Navmesh) • Each characteristic point is connected only to those of neighboring triangles
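A sketch of how such a graph could be built from the floor geometry, assuming triangles are given as vertex-index triples (this data layout and all function names are ours): each node's characteristic point is the triangle centroid, and two triangles are linked iff they share an edge.

```cpp
#include <algorithm>
#include <array>
#include <map>
#include <utility>
#include <vector>

struct Vec2 { float x, y; };

// Characteristic point of a floor triangle: its centroid.
Vec2 centroid(const Vec2& a, const Vec2& b, const Vec2& c) {
    return { (a.x + b.x + c.x) / 3.0f, (a.y + b.y + c.y) / 3.0f };
}

// Build the navmesh adjacency: adj[i] lists the triangles sharing an edge
// with triangle i. Each triangle is 3 indices into a shared vertex array.
std::vector<std::vector<int>> buildAdjacency(
        const std::vector<std::array<int, 3>>& tris) {
    // Map each undirected edge (sorted vertex pair) to the triangles owning it.
    std::map<std::pair<int, int>, std::vector<int>> edgeToTris;
    for (int t = 0; t < (int)tris.size(); ++t)
        for (int e = 0; e < 3; ++e) {
            int a = tris[t][e], b = tris[t][(e + 1) % 3];
            edgeToTris[{std::min(a, b), std::max(a, b)}].push_back(t);
        }
    std::vector<std::vector<int>> adj(tris.size());
    for (auto& [edge, owners] : edgeToTris)
        if (owners.size() == 2) {  // interior edge: link the two triangles
            adj[owners[0]].push_back(owners[1]);
            adj[owners[1]].push_back(owners[0]);
        }
    return adj;
}
```

Pathfinding then runs A* over this graph, with edge costs taken as distances between centroids.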

  12. Navigation Meshes (Navmesh) • The validity of the graph depends on the level designer. It is her responsibility to author proper triangles for navigation:

  13. Pathfinding • Simplest scenario: • Single character • Non-real time • Grid • Static world • Solution: A* • Complex scenario: • Multiple characters/Dynamic world: D* / D* Lite • Real time: TBA* / LRTA* • Continuous map: Quantization

  14. Outline • Student Presentations • Game AI • Movement • Path-finding • Scripting • Board Games • Learning • Project Discussion

  15. Scripting • Most Game AI is scripted • The game engine allows game developers to specify the behavior of characters in some scripting language: • Lua • Scheme • Python • Etc. • Game-specific languages

  16. Scripting • Advantages: • Gives control to the game designers (characters do whatever the game designers want) • Disadvantages: • Labor intensive: the behavior of each character needs to be predefined for all situations • Rigid: • If the player finds a hole in one of the behaviors, she can exploit it again and again • Not adaptive to what the player does (unless scripted to be so)

  17. Finite State Machines • [FSM diagram (StarCraft example). States: Harvest Minerals, Train SCVs, Build Barracks, Train Marines, Attack Enemy, Explore. Transition labels include: “less than 4 SCVs”, “4 SCVs”, “4 SCVs harvesting”, “barracks”, “no marines”, “4 marines & enemy seen”, “4 marines & enemy unseen”, “enemy seen”.]

  18. Finite State Machines • Easy to implement:

    switch (state) {
        case START: {  // braces needed: the case body declares locals
            if (numSCVs < 4) state = TRAIN_SCVs;
            if (numHarvestingSCVs >= 4) state = BUILD_BARRACKS;
            Unit *SCV = findIdleSCV();
            Unit *mineral = findClosestMineral(SCV);
            SCV->harvest(mineral);
            break;
        }
        case TRAIN_SCVs: {
            if (numSCVs >= 4) state = START;
            Unit *base = findIdleBase();
            base->train(UnitType::SCV);
            break;
        }
        case BUILD_BARRACKS:
            // …
    }

  19. Finite State Machines • Good for simple AIs • Become unmanageable for complex tasks • Hard to maintain

  20. Finite State Machines (Add a new state) • [The same FSM diagram with a new, still unconnected state “?” being added: deciding its transitions requires revisiting every existing state.]

  21. Finite State Machines (Add a new state) • [The FSM with a new “Attack Inside” state added: “enemy inside base” transitions must be wired in from multiple existing states, illustrating how quickly FSMs become unmanageable.]

  22. Hierarchical Finite State Machines • An FSM inside a state of another FSM • As many levels as needed • Can alleviate the complexity problem to some extent • [Diagram: outer FSM with states “Standard Strategy” and “Attack Inside”, switched by “enemy inside base” / “no enemy inside base”.]
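A sketch of this outer/inner split, assuming made-up update inputs (enemyInsideBase, numSCVs); only two of the inner transitions are shown. The point to notice is that the inner state is remembered while the outer machine is interrupted by Attack Inside.

```cpp
// Outer machine: the whole economic strategy is one state of it.
enum class OuterState { StandardStrategy, AttackInside };
// Inner machine: the economic FSM from the slide.
enum class InnerState { HarvestMinerals, TrainSCVs, BuildBarracks,
                        TrainMarines, AttackEnemy, Explore };

struct HFSM {
    OuterState outer = OuterState::StandardStrategy;
    InnerState inner = InnerState::HarvestMinerals;

    void update(bool enemyInsideBase, int numSCVs) {
        // Outer transitions take priority: they interrupt the inner machine,
        // but the inner state is kept for when we return to it.
        if (enemyInsideBase) { outer = OuterState::AttackInside; return; }
        outer = OuterState::StandardStrategy;
        // Inner machine only advances while in StandardStrategy
        // (two of the slide's transitions shown; the rest are analogous).
        switch (inner) {
            case InnerState::HarvestMinerals:
                if (numSCVs < 4) inner = InnerState::TrainSCVs;
                break;
            case InnerState::TrainSCVs:
                if (numSCVs >= 4) inner = InnerState::HarvestMinerals;
                break;
            default: break;
        }
    }
};
```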

  23. Hierarchical Finite State Machines • An FSM inside a state of another FSM • As many levels as needed • Can alleviate the complexity problem to some extent • [Diagram: the same outer FSM, with the full economic FSM (Harvest Minerals, Train SCVs, Build Barracks, Train Marines, Attack Enemy, Explore) nested inside the “Standard Strategy” state.]

  24. Behavior Trees • Combination of techniques (some of which we covered last week): • Hierarchical state machines • Scheduling • Automated planning • Action Execution • Increasingly popular in commercial games • Strength: • A visual, easy-to-understand way to author behaviors and decisions for characters without requiring programming knowledge

  25. Behavior Trees • Appeared in Halo 2 (2004)

  26. Example Behavior Tree

  27. Behavior Tree Basics • A behavior tree (BT) captures the “behavior” or “decision mechanism” of a character in a game • At each frame (if synchronous): • The game engine executes one cycle of the BT • As a side effect of execution, a BT executes actions (that control the character) • The basic component of a behavior tree is a task • At each game cycle, one cycle of a task is executed • It returns success, failure, error, etc. • As a side effect of execution, tasks might execute things in the game • Three basic types of tasks: • Conditions • Actions • Composites
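A minimal BT core along these lines. This is a sketch: real implementations also return a Running status so long actions can resume across cycles, and the slide's "error" status is omitted; all class names are ours.

```cpp
#include <functional>
#include <memory>
#include <utility>
#include <vector>

enum class Status { Success, Failure };

// A task executes one cycle per game cycle and reports how it went.
struct Task {
    virtual ~Task() = default;
    virtual Status tick() = 0;
};

// Conditions and Actions are domain dependent: here they wrap callables.
struct Condition : Task {
    std::function<bool()> test;
    explicit Condition(std::function<bool()> t) : test(std::move(t)) {}
    Status tick() override { return test() ? Status::Success : Status::Failure; }
};
struct Action : Task {
    std::function<void()> run;
    explicit Action(std::function<void()> r) : run(std::move(r)) {}
    Status tick() override { run(); return Status::Success; }
};

// Composites are generic. Sequence fails at the first failing child;
// Selector succeeds at the first succeeding child.
struct Sequence : Task {
    std::vector<std::shared_ptr<Task>> children;
    Status tick() override {
        for (auto& c : children)
            if (c->tick() == Status::Failure) return Status::Failure;
        return Status::Success;
    }
};
struct Selector : Task {
    std::vector<std::shared_ptr<Task>> children;
    Status tick() override {
        for (auto& c : children)
            if (c->tick() == Status::Success) return Status::Success;
        return Status::Failure;
    }
};
```

With these four types, the trees on the following slides can be assembled directly; the test below builds the "move right when the player is near" sequence.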

  28. Behavior Tree Tasks • Condition, Action: domain dependent (each game defines its own actions and conditions) • Sequence, Selector: generic (the same for all games)

  29. Example Execution • Goal: Make a character move right when the player is near • [Tree: a Sequence whose children are “Is Player Near?” and “Move Right”.]

  30. Example Execution • (Slides 30-34 repeat the previous slide, stepping through the tree’s execution in the diagram.)

  35. What If There Are Obstacles? • Goal: Make a character move right when the player is near, even if the door is closed • [Tree: Sequence(“Is Player Near?”, Selector(Sequence(“Is Door Open?”, “Move Right”), Sequence(“Is Door Closed?”, “Move to Door”, “Open Door”, “Move Right”)))]
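This tree can also be encoded directly with short-circuit logic, since a sequence fails at its first failing child (&&) and a selector succeeds at its first succeeding child (||). The World fields and helper names here are ours, for illustration only:

```cpp
// Minimal world state for the door example (names are hypothetical).
struct World { bool playerNear = false; bool doorOpen = false; int x = 0; };

// One tick of the door tree. Returns true (success) or false (failure).
bool tickDoorTree(World& w) {
    auto moveRight  = [&] { w.x += 1; return true; };
    auto moveToDoor = [&] { /* pathfind to the door */ return true; };
    auto openDoor   = [&] { w.doorOpen = true; return true; };

    return w.playerNear &&                         // Sequence: Is Player Near?
        (  (w.doorOpen && moveRight())             // Selector branch 1: door open, just move right
        || (!w.doorOpen && moveToDoor()            // branch 2: door closed: go to it,
            && openDoor() && moveRight()) );       //   open it, then move right
}
```

In a real BT the explicit node objects are preferred precisely because designers can edit them visually without touching code like this.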

  36. Outline • Student Presentations • Game AI • Movement • Path-finding • Scripting • Board Games • Learning • Project Discussion

  37. Board Games • Main characteristic: turn-based • The AI has a lot of time to decide the next move

  38. Board Games • Not just chess …


  43. Game Tree • Game trees capture the effects of successive action executions • [Tree diagram: the current situation at the root, a level of player 1 actions, a level of player 2 actions, and leaves evaluated by a utility U(s).]

  44. Game Tree • Game trees capture the effects of successive action executions • Pick the action that leads to the state with maximum expected utility, after taking into account what the other players might do • [Same tree diagram as before.]

  45. Game Tree • Game trees capture the effects of successive action executions • In this example we look ahead only one player 1 action and one player 2 action, but we could grow the tree arbitrarily deep • [Same tree diagram.]

  46. Minimax Principle • Positive utility is good for player 1, and negative for player 2 • Player 1 chooses actions that maximize U, player 2 chooses actions that minimize U • [Tree with leaf utilities U(s) = -1, 0, 0, -1, 0, 0.]

  47. Minimax Principle • Positive utility is good for player 1, and negative for player 2 • Player 1 chooses actions that maximize U, player 2 chooses actions that minimize U • [Same tree; player 2’s level is marked “min”.]

  48. Minimax Principle • Positive utility is good for player 1, and negative for player 2 • Player 1 chooses actions that maximize U, player 2 chooses actions that minimize U • [Same tree; the min values -1, -1, 0 are propagated up to player 1’s actions.]

  49. Minimax Principle • Positive utility is good for player 1, and negative for player 2 • Player 1 chooses actions that maximize U, player 2 chooses actions that minimize U • [Same tree; player 1’s level is marked “max”: the chosen action is the one with value 0.]
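The two-ply evaluation on these slides can be written out directly: player 2 minimizes over each group of leaves, then player 1 maximizes over those minima. The leaf utilities come from the slides; grouping them two per player-1 action is our reading of the figure.

```cpp
#include <algorithm>
#include <climits>
#include <vector>

// leavesPerAction[i] holds the leaf utilities reachable after player 1's
// i-th action. Player 2 (min) picks the worst leaf in each group; player 1
// (max) picks the best of those minima.
int minimaxTwoPly(const std::vector<std::vector<int>>& leavesPerAction) {
    int best = INT_MIN;
    for (const auto& leaves : leavesPerAction) {
        int worst = *std::min_element(leaves.begin(), leaves.end());  // min node
        best = std::max(best, worst);                                 // max node
    }
    return best;
}
```

With the slide's leaves (-1, 0), (0, -1), (0, 0), the min values are -1, -1, 0 and player 1's choice has value 0, matching the diagram.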

  50. Minimax Algorithm

    Minimax(state, player, MAX_DEPTH)
        IF MAX_DEPTH == 0 RETURN (U(state), null)
        BestAction = null
        BestScore = null
        FOR Action IN actions(player, state)
            (Score, Action2) = Minimax(result(Action, state), nextplayer(player), MAX_DEPTH - 1)
            IF BestScore == null ||
               (player == 1 && Score > BestScore) ||
               (player == 2 && Score < BestScore)
                BestScore = Score
                BestAction = Action
        ENDFOR
        RETURN (BestScore, BestAction)

  51. Minimax Algorithm • The same pseudocode as the previous slide, with result(Action, state) replaced by simulate(Action, state): the successor state is obtained by simulating the action’s execution.
