Part II: Strategic Interaction


  1. Part II: Strategic Interaction
     • Introduction of competition.
     • Three instruments to compete in a market (classified according to the speed at which they can be altered):
       – In the short run: prices (Chapter 5), with rigid cost structure and product characteristics.
       – In the longer run: cost structure and product characteristics can be changed. Capacity constraints (Chapter 5); quality, product design, product differentiation, advertising (Chapter 7); barriers to entry, accommodation and exit (Chapter 8); reputation and predation (Chapter 9).
       – In the long run: product characteristics, cost structures, R&D (Chapter 10).

  2. Chapter 11: Introduction to Non-Cooperative Game Theory
     1 Introduction
     • "The Theory of Games and Economic Behavior", John von Neumann and Oskar Morgenstern, 1944.
     • Two distinct possible approaches:
       – the strategic and non-cooperative approach;
       – the cooperative approach.
     • "Games": a scientific metaphor for a wider range of human interactions.
     • A game is being played any time people interact with each other.
     • People interact in a rational manner.
     • Rationality: a fundamental assumption in Neoclassical economic theory. But there the individual need not consider her interactions with other individuals.
     • Game theory: the study of rational behavior in situations involving interdependence.

  3. Outline
     1. Introduction
     2. Games and Strategies
     3. Static games of complete information – Nash Equilibrium
     4. Dynamic games of complete information – Subgame Perfect Nash Equilibrium
     5. Static games of incomplete information – Bayesian Nash Equilibrium
     6. Dynamic games of incomplete information – Subgame Perfect Bayesian Equilibrium
     7. Reaction functions
     • Game of complete information: each player's payoff function is common knowledge among all the players.
     • Game of incomplete information: some players are uncertain about other players' payoff functions.

  4. 2 Games and strategies
     2.1 The rules of the game
     The rules must tell us
     • who can do what, and when they can do it;
     • who gets how much when the game is over.
     Essential elements of a game:
     • players (who); strategies (what); information; timing (when); payoffs (how much).
     Two principal representations of the rules of the game:
     • the normal or strategic form;
     • the extensive form (tree).
     Assumption: there is common knowledge. Player 1 knows the rules. Player 1 knows that player 2 knows the rules. Player 1 knows that player 2 knows that player 1 knows the rules, and so on and so forth ("I know that you know, I know that you know that I know, ...").

  5. • Players in the game: n players (firms) i = 1, 2, ..., n.
     • Set of strategies (or actions) available to each player: s_i ∈ S_i.
     • (s_1, ..., s_n) is a combination of strategies.
     • Payoff associated with any strategy combination: π_i(s_1, ..., s_n).
     • Information set.
     Definition. A strategy for a player is a complete plan of actions. It specifies a feasible action for the player in every contingency in which the player might be called on to act.
     Definition. A pure strategy is the choice by a player of a given action with certainty.
     Definition. A mixed strategy is a randomization by a player over different pure strategies, i.e., a probability distribution over the player's actions.
     Remark. A pure strategy is a special case of a mixed strategy.
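As a concrete illustration of the last two definitions, the Python sketch below (the names and layout are my own, not from the notes) represents a mixed strategy as a probability distribution over actions and a pure strategy as the degenerate distribution putting probability 1 on a single action.

from typing import Dict, List

MixedStrategy = Dict[str, float]   # action -> probability

def pure(action: str, actions: List[str]) -> MixedStrategy:
    # A pure strategy as a special case of a mixed strategy:
    # probability 1 on `action`, 0 on every other action.
    return {a: (1.0 if a == action else 0.0) for a in actions}

def is_mixed_strategy(sigma: MixedStrategy) -> bool:
    # A valid mixed strategy: non-negative probabilities summing to 1.
    return all(p >= 0.0 for p in sigma.values()) and abs(sum(sigma.values()) - 1.0) < 1e-9

actions = ["Heads", "Tails"]
print(pure("Heads", actions))                           # {'Heads': 1.0, 'Tails': 0.0}
print(is_mixed_strategy({"Heads": 0.5, "Tails": 0.5}))  # True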

  6. 2.2 Normal form
     The normal-form representation of an n-player game specifies:
     • the players' strategy spaces S_1, ..., S_n,
     • and their payoff functions π_1, ..., π_n.
     Denote this game by G = {S_1, ..., S_n; π_1, ..., π_n}.
     2.3 Extensive form (tree of the game)
     The extensive-form representation of a game specifies:
     1. the players of the game;
     2.a. when each player has to move;
     2.b. what each player can do at each of his opportunities to move;
     2.c. what each player knows at each of his opportunities to move;
     3. the payoff received by each player for each combination of moves that could be chosen by the players.

  7. 2.4 Example: Prisoners' Dilemma
     • Two suspects are arrested and charged with a crime.
     • The police lack sufficient evidence to convict the suspects unless at least one confesses.
     • Deal from the police with each suspect (separately):
       – if neither confesses, both will be convicted of a minor offence (1 month in jail);
       – if both confess, both will be sentenced to 6 months in jail;
       – if one confesses but the other does not, the confessor will be released immediately and the other will be sentenced to 9 months in jail.

     1 / 2          not confess    confess
     not confess    −1, −1         −9, 0
     confess        0, −9          −6, −6
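As a minimal sketch (the dictionary layout is illustrative, not from the notes), the normal form of this game can be stored in Python by mapping each strategy profile to the pair of payoffs for players 1 and 2, measured in months of jail (negated):

# Prisoners' Dilemma: (strategy of player 1, strategy of player 2) -> (pi_1, pi_2)
PD = {
    ("not confess", "not confess"): (-1, -1),
    ("not confess", "confess"):     (-9,  0),
    ("confess",     "not confess"): ( 0, -9),
    ("confess",     "confess"):     (-6, -6),
}

# Payoffs if player 1 confesses and player 2 does not:
print(PD[("confess", "not confess")])  # (0, -9)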

  8. 3 Static Games of Complete Information
     • Iterated elimination of strictly dominated strategies
     Definition. In the normal-form game G, let s'_i and s''_i be feasible strategies for player i. Strategy s'_i is strictly dominated by strategy s''_i if, for each feasible combination of the other players' strategies,
       π_i(s_1, ..., s_{i−1}, s'_i, s_{i+1}, ..., s_n) < π_i(s_1, ..., s_{i−1}, s''_i, s_{i+1}, ..., s_n)
     for each s_{−i} = (s_1, ..., s_{i−1}, s_{i+1}, ..., s_n).
     • Nash Equilibrium
     Definition. In the normal-form game G, the strategies (s*_1, ..., s*_n) are a Nash Equilibrium if, for each player i, s*_i is player i's best response to the strategies specified for the n − 1 other players, (s*_1, ..., s*_{i−1}, s*_{i+1}, ..., s*_n):
       π_i(s*_1, ..., s*_{i−1}, s*_i, s*_{i+1}, ..., s*_n) ≥ π_i(s*_1, ..., s*_{i−1}, s_i, s*_{i+1}, ..., s*_n)
     for every feasible strategy s_i in S_i; that is, s*_i solves
       max_{s_i ∈ S_i} π_i(s*_1, ..., s*_{i−1}, s_i, s*_{i+1}, ..., s*_n).
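The following Python sketch (the helper names are my own) applies iterated elimination of strictly dominated strategies, as defined above, to the Prisoners' Dilemma stored as in the previous sketch; "not confess" is strictly dominated for each player, so only (confess, confess) survives.

PD = {
    ("not confess", "not confess"): (-1, -1),
    ("not confess", "confess"):     (-9,  0),
    ("confess",     "not confess"): ( 0, -9),
    ("confess",     "confess"):     (-6, -6),
}

def payoff(game, player, own, other):
    # Payoff of `player` (0 or 1) when playing `own` against the opponent's `other`.
    profile = (own, other) if player == 0 else (other, own)
    return game[profile][player]

def strictly_dominated(game, player, s, S_own, S_other):
    # True if some other strategy s'' gives `player` a strictly higher payoff
    # than s against every remaining strategy of the opponent.
    for s2 in S_own:
        if s2 == s:
            continue
        if all(payoff(game, player, s2, t) > payoff(game, player, s, t) for t in S_other):
            return True
    return False

def iterated_elimination(game, S1, S2):
    # Remove strictly dominated strategies for each player until none remain.
    changed = True
    while changed:
        changed = False
        for player, (own, other) in enumerate([(S1, S2), (S2, S1)]):
            for s in list(own):
                if len(own) > 1 and strictly_dominated(game, player, s, own, other):
                    own.remove(s)
                    changed = True
    return S1, S2

print(iterated_elimination(PD, ["not confess", "confess"], ["not confess", "confess"]))
# (['confess'], ['confess'])  -- the unique profile surviving elimination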

  9. Proposition. In the normal-form game G, if iterated elimination of strictly dominated strategies eliminates all but the strategies (s*_1, ..., s*_n), then these strategies are the unique Nash equilibrium of the game.
     Proposition. In the normal-form game G, if the strategies (s*_1, ..., s*_n) are a Nash equilibrium, then they survive iterated elimination of strictly dominated strategies.
     More examples:
     1. The battle of the sexes
        – 2 players: a wife and her husband.
        – Strategy space: {Opera, Soccer game}.
        – Payoffs: both players would rather spend the evening together than apart, but the wife prefers the opera, her husband the soccer game.

        Wife / Husband   Opera    Soccer game
        Opera            2, 1     0, 0
        Soccer game      0, 0     1, 2

        – What are the equilibria?
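To answer the question, a short Python sketch (illustrative names) can enumerate the pure-strategy Nash equilibria by checking the best-response condition of the definition on the previous slide:

BOS = {
    ("Opera", "Opera"):             (2, 1),
    ("Opera", "Soccer game"):       (0, 0),
    ("Soccer game", "Opera"):       (0, 0),
    ("Soccer game", "Soccer game"): (1, 2),
}
S = ["Opera", "Soccer game"]

def pure_nash(game, S1, S2):
    eq = []
    for s1 in S1:
        for s2 in S2:
            u1, u2 = game[(s1, s2)]
            # s1 must be a best response to s2, and s2 a best response to s1.
            if u1 >= max(game[(t, s2)][0] for t in S1) and \
               u2 >= max(game[(s1, t)][1] for t in S2):
                eq.append((s1, s2))
    return eq

print(pure_nash(BOS, S, S))  # [('Opera', 'Opera'), ('Soccer game', 'Soccer game')]

So the game has two pure-strategy equilibria, (Opera, Opera) and (Soccer game, Soccer game); there is also a mixed-strategy equilibrium, not computed here.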

  10. 2. Matching pennies
         – 2 players: player 1 and player 2.
         – Strategy space: {Heads, Tails}.
         – Payoffs:

         Player 1 / Player 2   Heads    Tails
         Heads                 1, −1    −1, 1
         Tails                 −1, 1    1, −1

      3. Price competition with differentiated goods
         – 2 players: firm 1 and firm 2.
         – Strategies: s_i = p_i for i = 1, 2.
         – c: unit cost.
         – Demand for firm i: q_i = D_i(p_i, p_j) = 1 − b p_i + d p_j, with 0 ≤ d ≤ b.
         – Each firm maximizes its profit: max_{p_i} π_i = (p_i − c)(1 − b p_i + d p_j).
         – There exists a unique Nash equilibrium: p*_1 = p*_2 = (1 + cb)/(2b − d).
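A quick way to verify the equilibrium price is to solve the two first-order conditions simultaneously; the sketch below assumes sympy is available and simply reproduces the profit maximization described above:

import sympy as sp

p1, p2, b, d, c = sp.symbols('p1 p2 b d c', positive=True)

pi1 = (p1 - c) * (1 - b * p1 + d * p2)   # firm 1's profit
pi2 = (p2 - c) * (1 - b * p2 + d * p1)   # firm 2's profit

# First-order conditions d(pi_i)/d(p_i) = 0, solved simultaneously.
sol = sp.solve([sp.diff(pi1, p1), sp.diff(pi2, p2)], [p1, p2], dict=True)[0]

print(sp.simplify(sol[p1]))              # (b*c + 1)/(2*b - d): the Nash equilibrium price
print(sp.simplify(sol[p1] - sol[p2]))    # 0: the equilibrium is symmetric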

  11. 4 Dynamic Games of Complete Information
      • Players' payoff functions are common knowledge.
      • Perfect information: at each move in the game, the player with the move knows the full history of the play of the game thus far.
      • Imperfect information: at some move, the player with the move does not know the history of the game.
      • Central issue of dynamic games: credibility.
      • Subgame Perfect Nash equilibrium (Selten, 1965): a refinement of Nash equilibrium for dynamic games.
      • Backward induction argument, Kuhn's algorithm (Kuhn, 1953).

  12. 4.1 Dynamic game of complete and perfect information
      Timing:
      1. Player 1 chooses an action a_1 ∈ A_1.
      2. Player 2 observes a_1 and then chooses an action a_2 ∈ A_2.
      3. Payoffs are π_1(a_1, a_2) and π_2(a_1, a_2).
      • Examples: Stackelberg's model of duopoly; Rubinstein's bargaining game, ...
      • Backward induction:
        – Player 2, having observed a_1, chooses a_2 that maximizes π_2(a_1, a_2). Assume that for each a_1 there exists a unique solution R_2(a_1).
        – Player 1 should anticipate R_2(a_1) and chooses a_1 that maximizes π_1(a_1, R_2(a_1)). Assume there exists a unique solution a*_1.
        – The backward-induction outcome is (a*_1, R_2(a*_1)).
        – The subgame-perfect equilibrium is (a*_1, R_2(a_1)), where player 2's strategy is the whole best-response function R_2(·).
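The two backward-induction steps can be carried out symbolically for the Stackelberg example mentioned above. In the sketch below (sympy assumed), the payoff functions a_i(1 − a_1 − a_2), i.e. a linear inverse demand with zero cost, are an illustrative specification of my own, not taken from the notes:

import sympy as sp

a1, a2 = sp.symbols('a1 a2', nonnegative=True)

pi1 = a1 * (1 - a1 - a2)   # leader's payoff (illustrative)
pi2 = a2 * (1 - a1 - a2)   # follower's payoff (illustrative)

# Stage 2: R2(a1) solves max_{a2} pi_2(a1, a2).
R2 = sp.solve(sp.diff(pi2, a2), a2)[0]                    # (1 - a1)/2

# Stage 1: player 1 anticipates R2 and maximizes pi_1(a1, R2(a1)).
a1_star = sp.solve(sp.diff(pi1.subs(a2, R2), a1), a1)[0]

print(a1_star, R2.subs(a1, a1_star))                      # backward-induction outcome: 1/2, 1/4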

  13. Definition. A subgame in an extensive-form game
      – begins at a decision node n that is a singleton information set (but is not the first decision node of the game),
      – includes all the decision and terminal nodes following n in the game tree, and
      – does not cut any information set.
      Definition. A Nash equilibrium is subgame-perfect if the players' strategies constitute a Nash equilibrium in every subgame.
      Definition. In the two-stage game of complete and perfect information, the backward-induction outcome is (a*_1, R_2(a*_1)), but the subgame-perfect Nash equilibrium is (a*_1, R_2(a_1)).
      Example 1:
      • Player 1 chooses L or R, where L ends the game with payoff 2 to player 1 and 0 to player 2.
      • Player 2 observes 1's choice. If 1 chooses R, then 2 chooses L' or R', where L' ends the game with payoff 1 to each player.
      • Player 1 observes 2's choice. If the earlier choices were R and R', then 1 chooses L'' or R'', both of which end the game, L'' with payoffs (3, 0) and R'' with payoffs (0, 2).
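For Example 1, backward induction can be mechanized on the game tree. The encoding below is an illustrative Python sketch: each non-terminal node records who moves (index 0 for player 1, index 1 for player 2) and the subtree reached by each action, while terminal nodes hold the payoff pair.

def backward_induction(node):
    # Returns (actions chosen along the equilibrium path, terminal payoffs).
    if "payoffs" in node:                        # terminal node
        return [], node["payoffs"]
    mover = node["player"]                       # index of the player who moves here
    best = None
    for action, child in node["moves"].items():
        path, payoffs = backward_induction(child)
        if best is None or payoffs[mover] > best[1][mover]:
            best = ([action] + path, payoffs)
    return best

example1 = {
    "player": 0,                                 # player 1 chooses L or R
    "moves": {
        "L": {"payoffs": (2, 0)},
        "R": {"player": 1,                       # player 2 chooses L' or R'
              "moves": {
                  "L'": {"payoffs": (1, 1)},
                  "R'": {"player": 0,            # player 1 chooses L'' or R''
                         "moves": {"L''": {"payoffs": (3, 0)},
                                   "R''": {"payoffs": (0, 2)}}},
              }},
    },
}

print(backward_induction(example1))              # (['L'], (2, 0))

Backward induction thus gives the outcome in which player 1 ends the game immediately with L and payoffs (2, 0); the subgame-perfect strategies also prescribe L' for player 2 and L'' for player 1 at the nodes that are never reached on the equilibrium path.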
