Motor Control Layer • Steering behaviors generate the "desired accelerations". An underlying "motor layer" translates those into commands like "accelerate", "brake", "turn right", "turn left". [Diagram: AI architecture — World Interface (perception) feeds Decision Making and Movement; Strategy sits above Decision Making; within Movement, Steering Behaviors feed Motor Control.]
Output Filtering • Idea: • Use the steering behavior to produce an acceleration request • Project the request onto the accelerations that the vehicle at hand can perform, and ignore the rest • [Diagram: the steering request, the vehicle's capabilities, and the resulting projection]
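A minimal sketch of this projection step, assuming a simple vehicle model with separate forward/brake/lateral acceleration limits (the struct and function names are illustrative, not from any engine):

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical vehicle limits (illustrative): a car that can accelerate
// or brake along its heading and turn, but not slide sideways freely.
struct VehicleLimits {
    double maxForward;   // m/s^2
    double maxBrake;     // m/s^2 (magnitude)
    double maxLateral;   // m/s^2 achievable through turning
};

// An acceleration expressed in the vehicle's local frame.
struct Accel { double forward, lateral; };

// Project a steering request onto what the vehicle can actually do,
// ignoring the unreachable remainder.
Accel projectRequest(const Accel& req, const VehicleLimits& lim) {
    Accel out;
    out.forward = std::clamp(req.forward, -lim.maxBrake, lim.maxForward);
    out.lateral = std::clamp(req.lateral, -lim.maxLateral, lim.maxLateral);
    return out;
}
```

Whatever part of the request lies outside the vehicle's capability envelope is simply dropped; the motor layer then executes the clamped acceleration.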
Outline • Student Presentations • Game AI • Movement • Path-finding • Scripting • Board Games • Learning • Project Discussion
Pathfinding • Problem: • Finding a path for a character/unit to move from point A to point B • One of the most common AI requirements in games • Especially critical in RTS games, since there are lots of units
Pathfinding • Simplest scenario: • Single character • Non-real time • Grid • Static world • Solution: A* • Complex scenario: • Multiple characters (overlapping paths) • Real time • Continuous map • Dynamic world
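The simplest scenario above (single character, static world, grid) can be sketched as a minimal A* search. This is an illustrative implementation, assuming cells encoded as 0 = free, 1 = blocked, 4-connected movement with unit step cost, and a Manhattan-distance heuristic; it returns only the path length for brevity:

```cpp
#include <queue>
#include <vector>
#include <tuple>
#include <climits>
#include <cstdlib>
#include <cassert>

// Length of the shortest path in steps from (sr,sc) to (tr,tc),
// or -1 if the goal is unreachable.
int aStarLength(const std::vector<std::vector<int>>& grid,
                int sr, int sc, int tr, int tc) {
    int R = grid.size(), C = grid[0].size();
    auto h = [&](int r, int c) {              // admissible heuristic
        return std::abs(r - tr) + std::abs(c - tc);
    };
    std::vector<std::vector<int>> g(R, std::vector<int>(C, INT_MAX));
    using Node = std::tuple<int, int, int>;   // (f = g + h, row, col)
    std::priority_queue<Node, std::vector<Node>, std::greater<Node>> open;
    g[sr][sc] = 0;
    open.emplace(h(sr, sc), sr, sc);
    int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
    while (!open.empty()) {
        auto [f, r, c] = open.top(); open.pop();
        if (r == tr && c == tc) return g[r][c];
        if (f - h(r, c) > g[r][c]) continue;  // stale queue entry
        for (int i = 0; i < 4; ++i) {
            int nr = r + dr[i], nc = c + dc[i];
            if (nr < 0 || nr >= R || nc < 0 || nc >= C) continue;
            if (grid[nr][nc] == 1) continue;  // blocked cell
            if (g[r][c] + 1 < g[nr][nc]) {    // found a cheaper route
                g[nr][nc] = g[r][c] + 1;
                open.emplace(g[nr][nc] + h(nr, nc), nr, nc);
            }
        }
    }
    return -1;
}
```

A production version would also store parent pointers to reconstruct the path, and would run on the quantized graph (tiles or navmesh) rather than directly on world coordinates.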
Pathfinding is a Problem! • Even in modern commercial games: • http://www.youtube.com/watch?v=lw9G-8gL5o0&feature=player_embedded
Quantization and Localization • Pathfinding computations are performed with an abstraction over the game map (a graph, composed of nodes and links) • Quantization: • Game map coordinates -> graph node • Localization: • Graph node -> Game map coordinates
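For a square-tile graph, quantization and localization can be sketched as below. The tile size, map width, and node-id scheme are assumptions for illustration:

```cpp
#include <cassert>

// Assumed map parameters (illustrative).
const double tileSize = 32.0;   // world units per tile
const int gridWidth = 100;      // tiles per row

// Quantization: game map coordinates -> graph node id.
int quantize(double x, double y) {
    int col = (int)(x / tileSize);
    int row = (int)(y / tileSize);
    return row * gridWidth + col;
}

// Localization: graph node id -> game map coordinates (tile center).
void localize(int node, double& x, double& y) {
    int row = node / gridWidth, col = node % gridWidth;
    x = (col + 0.5) * tileSize;
    y = (row + 0.5) * tileSize;
}
```

Note that localization returns the tile center, so quantize(localize(n)) gets back node n, but localize(quantize(p)) generally does not return the exact point p — only some point inside the same tile.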
Tile Graphs • Divide the game map into equal tiles (squares, hexagons, etc.) • Typical in RTS games
Navigation Meshes (Navmesh) • By far the most widely used • Use the level geometry as the pathfinding graph • Floor is made out of triangles: • Use the center of each “floor” triangle as a characteristic point
Navigation Meshes (Navmesh) • One characteristic point is only connected to neighboring ones
Navigation Meshes (Navmesh) • The validity of the graph depends on the level designer. It is her responsibility to author proper triangles for navigation:
Pathfinding • Simplest scenario: • Single character • Non-real time • Grid • Static world • Solution: A* • Complex scenario: • Multiple characters/Dynamic world: D* / D* Lite • Real time: TBA* / LRTA* • Continuous map: Quantization
Scripting • Most Game AI is scripted • Game Engine allows game developers to specify the behavior of characters in some scripting language: • Lua • Scheme • Python • Etc. • Game specific languages
Scripting • Advantages: • Gives control to the game designers (characters do whatever the game designers want) • Disadvantages: • Labor intensive: the behavior of each character needs to be predefined for all situations • Rigid: • If the player finds a hole in one of the behaviors, she can exploit it again and again • Not adaptive to what the player does (unless scripted to be so)
Finite State Machines • [Diagram: FSM for a StarCraft-style AI with states Harvest Minerals, Train SCVs, Build Barracks, Train Marines, Attack Enemy, and Explore; transitions labeled with conditions such as "less than 4 SCVs", "4 SCVs", "barracks & 4 harvesting SCVs", "4 marines", "no marines", "enemy seen", "enemy unseen", "no enemy"]
Finite State Machines • Easy to implement:

switch (state) {
  case START: {
    if (numSCVs < 4) state = TRAIN_SCVs;
    if (numHarvestingSCVs >= 4) state = BUILD_BARRACKS;
    Unit *SCV = findIdleSCV();
    Unit *mineral = findClosestMineral(SCV);
    SCV->harvest(mineral);
    break;
  }
  case TRAIN_SCVs: {
    if (numSCVs >= 4) state = START;
    Unit *base = findIdleBase();
    base->train(UnitType::SCV);
    break;
  }
  case BUILD_BARRACKS:
    …
}
Finite State Machines • Good for simple AIs • Become unmanageable for complex tasks • Hard to maintain
Finite State Machines (Add a new state) • [Diagram: the same FSM as before, with a new, not-yet-connected state marked "?"]
Finite State Machines (Add a new state) • [Diagram: the FSM extended with an "Attack Inside Enemy" state, entered when an enemy is inside the base; new transitions have to be added from many of the existing states]
Hierarchical Finite State Machines • FSM inside of the state of another FSM • As many levels as needed • Can alleviate the complexity problem to some extent • [Diagram: a top-level FSM with states "Standard Strategy" and "Attack Inside Enemy", switching on "enemy inside base" / "no enemy inside base"]
Hierarchical Finite State Machines • FSM inside of the state of another FSM • As many levels as needed • Can alleviate the complexity problem to some extent • [Diagram: the same hierarchical FSM, with the earlier harvest/train/build/attack/explore FSM nested inside the "Standard Strategy" state]
Behavior Trees • Combination of techniques (some of them we covered last week): • Hierarchical state machines • Scheduling • Automated planning • Action execution • Increasingly popular in commercial games • Strength: • A visual and easy-to-understand way to author behaviors and decisions for characters without programming knowledge
Behavior Trees • Appeared in Halo 2 (2004)
Example Behavior Tree
Behavior Tree Basics • A behavior tree (BT) captures the "behavior" or "decision mechanism" of a character in a game • At each frame (if synchronous): • The game engine executes one cycle of the BT: • As a side effect of execution, a BT executes actions (that control the character) • The basic component of a behavior tree is a task • At each game cycle, one cycle of a task is executed • It returns success, failure, error, etc. • As a side effect of execution, tasks might execute things in the game • Three basic types of tasks: • Conditions • Actions • Composites
Behavior Tree Tasks • Condition, Action — domain dependent: each game defines its own actions and conditions • Sequence, Selector — generic: the same for all games
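These task types can be sketched in a few classes. This is a minimal sketch (class names and the two-valued status are simplifications; real BTs also return running/error states), with domain-dependent conditions and actions wrapped as game-specific callbacks:

```cpp
#include <functional>
#include <memory>
#include <vector>
#include <cassert>

// One cycle of a task reports SUCCESS or FAILURE
// (RUNNING and ERROR omitted for brevity).
enum class Status { SUCCESS, FAILURE };

struct Task {
    virtual ~Task() = default;
    virtual Status tick() = 0;
};

// Conditions and actions are domain dependent:
// wrap whatever game-specific check or command is needed.
struct Leaf : Task {
    std::function<Status()> fn;
    explicit Leaf(std::function<Status()> f) : fn(std::move(f)) {}
    Status tick() override { return fn(); }
};

// Sequence: succeeds only if every child succeeds, in order;
// stops at the first failing child.
struct Sequence : Task {
    std::vector<std::unique_ptr<Task>> children;
    Status tick() override {
        for (auto& c : children)
            if (c->tick() == Status::FAILURE) return Status::FAILURE;
        return Status::SUCCESS;
    }
};

// Selector: succeeds as soon as one child succeeds;
// tries children in order until one works.
struct Selector : Task {
    std::vector<std::unique_ptr<Task>> children;
    Status tick() override {
        for (auto& c : children)
            if (c->tick() == Status::SUCCESS) return Status::SUCCESS;
        return Status::FAILURE;
    }
};
```

The generic composites (Sequence, Selector) are the same for all games; only the leaves change from game to game.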
Example Execution • Goal: make a character move right when the player is near • [Tree: Sequence → (Is Player Near?, Move Right)]
What If There Are Obstacles? • Goal: make a character move right when the player is near, even if the door is closed • [Tree: Sequence → (Is Player Near?, Selector → (Sequence → (Is Door Open?, Move Right), Sequence → (Is Door Closed?, Move to Door, Open Door, Move Right)))]
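One way to see why this tree works: a Sequence behaves like logical AND over success values and a Selector like logical OR. A self-contained sketch of the door example using that correspondence (the world flags and action functions are illustrative stand-ins for game state):

```cpp
#include <cassert>

// Assumed world state for illustration.
bool playerNear = true, doorOpen = false;
bool movedRight = false, atDoor = false;

// Actions return true on success, as a BT action would.
bool moveRight()  { movedRight = true; return true; }
bool moveToDoor() { atDoor = true; return true; }
bool openDoor()   { if (atDoor) doorOpen = true; return atDoor; }

// Sequence = &&, Selector = || (both short-circuit, like BT composites
// that stop at the first failing / first succeeding child).
bool tick() {
    return playerNear &&
           ((doorOpen && moveRight()) ||
            (!doorOpen && moveToDoor() && openDoor() && moveRight()));
}
```

If the door is open, the first branch of the selector fires and the character just moves right; otherwise the second branch walks to the door, opens it, and then moves right.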
Board Games • Main characteristic: turn-based • The AI has a lot of time to decide the next move
Board Games • Not just chess …
Game Tree • Game trees capture the effects of successive action executions • [Diagram: Current Situation at the root; player 1 actions branch first, then player 2 actions; leaves are labeled with utilities U(s)] • In this example we look ahead only one player 1 action and one player 2 action, but we could grow the tree arbitrarily deep • Pick the action that leads to the state with maximum expected utility, after taking into account what the other players might do
Minimax Principle • Positive utility is good for player 1, and negative for player 2 • Player 1 chooses actions that maximize U (max nodes); player 2 chooses actions that minimize U (min nodes) • [Diagram: the game tree with leaf utilities U(s) ∈ {-1, 0}; each player 2 (min) node backs up the minimum of its children's utilities, and the player 1 (max) root backs up the maximum of those backed-up values]
Minimax Algorithm

Minimax(state, player, MAX_DEPTH)
  IF MAX_DEPTH == 0
    RETURN (U(state), null)
  BestAction = null
  BestScore = null
  FOR Action IN actions(player, state)
    (Score, Action2) = Minimax(result(Action, state), nextplayer(player), MAX_DEPTH - 1)
    IF BestScore == null ||
       (player == 1 && Score > BestScore) ||
       (player == 2 && Score < BestScore)
      BestScore = Score
      BestAction = Action
  ENDFOR
  RETURN (BestScore, BestAction)
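The pseudocode can be made concrete on a toy game (an assumption for illustration, not from the slides): a pile of stones where players alternate removing 1 or 2, and whoever takes the last stone wins. Utility is +1 if player 1 wins and -1 if player 2 wins; player 1 maximizes, player 2 minimizes:

```cpp
#include <algorithm>
#include <cassert>

// Returns the game-theoretic value of the position for the toy game,
// searching the full tree (the game is small enough that no depth
// cutoff or evaluation function is needed).
int minimax(int stones, int player) {
    if (stones == 0)                      // previous player took the last stone
        return player == 1 ? -1 : +1;     // so the *other* player won
    int best = (player == 1) ? -2 : +2;   // worse than any reachable utility
    for (int take = 1; take <= std::min(2, stones); ++take) {
        int score = minimax(stones - take, 3 - player);  // opponent moves next
        best = (player == 1) ? std::max(best, score)
                             : std::min(best, score);
    }
    return best;
}
```

As expected for this game, positions with a multiple of 3 stones are losses for the player to move, and everything else is a win.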