
Epistemic Game Theory, Lecture 2 (ESSLLI 2012, Opole)
Eric Pacuit (TiLPS, Tilburg University), ai.stanford.edu/~epacuit
Olivier Roy (MCMP, LMU Munich), http://olivier.amonbofis.net
August 7, 2012
Plan for the week.


1-2. Dominance vs MEU. For the converse direction, we sketch the proof for two-player games, with X = S_{-i}. (The proof of the more general statement uses the supporting hyperplane theorem from convex analysis.) Let G = ⟨S_1, S_2, u_1, u_2⟩ be a two-player game, and let U_i : Δ(S_1) × Δ(S_2) → R be player i's expected utility. Suppose that α ∈ Δ(S_1) is not a best response to any p ∈ Δ(S_2):

for all p ∈ Δ(S_2) there is a q ∈ Δ(S_1) with U_1(q, p) > U_1(α, p).

We can therefore define a function b : Δ(S_2) → Δ(S_1) such that, for each p ∈ Δ(S_2), U_1(b(p), p) > U_1(α, p).

3-5. Dominance vs MEU. Consider the auxiliary zero-sum game G′ = ⟨S_1, S_2, u′_1, u′_2⟩ where u′_1(s_1, s_2) = u_1(s_1, s_2) − U_1(α, s_2) and u′_2(s_1, s_2) = −u′_1(s_1, s_2); write U′_1 for player 1's expected utility in G′. By the minimax theorem, G′ has a Nash equilibrium (p*_1, p*_2) such that, for all m ∈ Δ(S_2),

U′_1(p*_1, m) ≥ U′_1(p*_1, p*_2) ≥ U′_1(b(p*_2), p*_2).

We now prove that U′_1(b(p*_2), p*_2) > 0.

6-13. Dominance vs MEU.

U′_1(b(p*_2), p*_2)
  = Σ_{x ∈ S_1} Σ_{y ∈ S_2} b(p*_2)(x) p*_2(y) u′_1(x, y)
  = Σ_{x ∈ S_1} Σ_{y ∈ S_2} b(p*_2)(x) p*_2(y) [u_1(x, y) − U_1(α, y)]
  = Σ_{x ∈ S_1} Σ_{y ∈ S_2} b(p*_2)(x) p*_2(y) u_1(x, y) − Σ_{x ∈ S_1} Σ_{y ∈ S_2} b(p*_2)(x) p*_2(y) U_1(α, y)
  = U_1(b(p*_2), p*_2) − Σ_{x ∈ S_1} Σ_{y ∈ S_2} b(p*_2)(x) p*_2(y) U_1(α, y)
  > U_1(α, p*_2) − Σ_{x ∈ S_1} Σ_{y ∈ S_2} b(p*_2)(x) p*_2(y) U_1(α, y)    (since U_1(b(p*_2), p*_2) > U_1(α, p*_2))
  = U_1(α, p*_2) − Σ_{x ∈ S_1} b(p*_2)(x) Σ_{y ∈ S_2} p*_2(y) U_1(α, y)
  = U_1(α, p*_2) − U_1(α, p*_2) · Σ_{x ∈ S_1} b(p*_2)(x)
  = U_1(α, p*_2) − U_1(α, p*_2)
  = 0.

14-15. Dominance vs MEU. Hence, for all m ∈ Δ(S_2) we have

U′_1(p*_1, m) ≥ U′_1(p*_1, p*_2) ≥ U′_1(b(p*_2), p*_2) > 0.

Since U′_1(p*_1, m) = U_1(p*_1, m) − U_1(α, m), this implies that U_1(p*_1, m) > U_1(α, m) for all m ∈ Δ(S_2), and so α is strictly dominated by p*_1.
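
The equivalence just proved can be checked mechanically for small games. Below is a minimal sketch (our own illustration, not part of the lecture) that uses a linear program to test whether a row strategy is strictly dominated by some mixed strategy; by the result above, with X = S_{-i} this is the same as that row never being a best response to any belief about the column player. The function name and the payoff matrix are illustrative assumptions, and scipy is assumed to be available.

```python
# Sketch: test whether row strategy `alpha_idx` is strictly dominated by a mixed strategy,
# which by the lemma above is equivalent to "not a best response to any belief".
import numpy as np
from scipy.optimize import linprog

def strictly_dominated(U, alpha_idx):
    """U[i][j] = row player's payoff for row i against column j.
    Returns True iff some mixture q has U(q, j) > U(alpha, j) for every column j."""
    U = np.asarray(U, dtype=float)
    n_rows, n_cols = U.shape
    # Variables: q_1..q_n (mixture weights) and eps (the dominance margin).
    # Maximize eps  s.t.  sum_i q_i * U[i, j] >= U[alpha, j] + eps for all j,
    #                     sum_i q_i = 1,  q_i >= 0.
    c = np.zeros(n_rows + 1); c[-1] = -1.0          # linprog minimizes, so minimize -eps
    A_ub = np.hstack([-U.T, np.ones((n_cols, 1))])  # -U(q, j) + eps <= -U(alpha, j)
    b_ub = -U[alpha_idx, :]
    A_eq = np.hstack([np.ones((1, n_rows)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n_rows + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.success and res.x[-1] > 1e-9

# Example: the middle row is dominated by a 0.5/0.5 mix of the other two rows
# (payoff 1.5 against both columns), even though no pure row dominates it.
U = [[3, 0],
     [1, 1],
     [0, 3]]
print(strictly_dominated(U, 1))  # True, so row 1 is a best response to no belief
```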

16-19. Dominance vs MEU. Important Issue: Correlated Beliefs. Ann chooses the row (u or d), Bob the column (l or r), and Charles the matrix (x, y or z); payoffs are listed as (Ann, Bob, Charles).

x:      l       r        y:      l       r        z:      l       r
u     1,1,3   1,0,3      u     1,1,2   1,0,0      u     1,1,0   1,0,0
d     0,1,0   0,0,0      d     0,1,0   1,1,2      d     0,1,3   0,0,3

◮ Note that y is not strictly dominated for Charles.
◮ It is easy to find a probability measure p ∈ Δ(S_A × S_B) such that y is a best response to p. Suppose that p(u, l) = p(d, r) = 1/2. Then EU(x, p) = EU(z, p) = 1.5 while EU(y, p) = 2.
◮ However, there is no probability measure p ∈ Δ(S_A × S_B) such that y is a best response to p and p(u, l) = p(u) · p(l), i.e., no independent (product) belief makes y a best response.

20. Dominance vs MEU. (The game is as above.)
◮ To see this, suppose that a is the probability assigned to u and b is the probability assigned to l. Then we have:
• The expected utility of y is 2ab + 2(1 − a)(1 − b);
• The expected utility of x is 3ab + 3a(1 − b) = 3a(b + (1 − b)) = 3a; and
• The expected utility of z is 3(1 − a)b + 3(1 − a)(1 − b) = 3(1 − a)(b + (1 − b)) = 3(1 − a).
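
A quick numerical check of these claims (a sketch, not part of the slides): under the correlated belief p(u, l) = p(d, r) = 1/2, Charles' strategy y is the unique best response, while under every product belief a · b on the grid either x or z does strictly better than y. Only Charles' payoffs from the matrices above are used; the grid resolution is an arbitrary choice.

```python
# Charles' payoffs u_C[matrix][(Ann, Bob)], read off the third coordinates above.
u_C = {
    'x': {('u', 'l'): 3, ('u', 'r'): 3, ('d', 'l'): 0, ('d', 'r'): 0},
    'y': {('u', 'l'): 2, ('u', 'r'): 0, ('d', 'l'): 0, ('d', 'r'): 2},
    'z': {('u', 'l'): 0, ('u', 'r'): 0, ('d', 'l'): 3, ('d', 'r'): 3},
}

def eu(choice, belief):
    """Expected utility of Charles' choice under a belief over (Ann, Bob) profiles."""
    return sum(prob * u_C[choice][profile] for profile, prob in belief.items())

# Correlated belief: p(u,l) = p(d,r) = 1/2 makes y the unique best response.
correlated = {('u', 'l'): 0.5, ('d', 'r'): 0.5}
print([round(eu(c, correlated), 2) for c in 'xyz'])   # [1.5, 2.0, 1.5]

# Independent beliefs p(u,l) = a*b: y is never a best response on this grid.
best_somewhere = False
steps = 101
for i in range(steps):
    for j in range(steps):
        a, b = i / (steps - 1), j / (steps - 1)   # P(u) = a, P(l) = b
        belief = {('u', 'l'): a * b, ('u', 'r'): a * (1 - b),
                  ('d', 'l'): (1 - a) * b, ('d', 'r'): (1 - a) * (1 - b)}
        if eu('y', belief) >= max(eu('x', belief), eu('z', belief)):
            best_somewhere = True
print(best_somewhere)   # False
```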

21. Dominance vs MEU. Weak Dominance and MEU. Fact. Suppose that G = ⟨N, {S_i}_{i∈N}, {u_i}_{i∈N}⟩ is a strategic game and X ⊆ S_{-i}. A strategy s_i ∈ S_i is weakly dominated (possibly by a mixed strategy) with respect to X iff there is no full-support probability measure p ∈ Δ^{>0}(X) such that s_i is a best response to p.
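
For comparison with the strict-dominance sketch above, here is a hedged sketch of a standard linear-programming test for weak dominance by a mixed strategy (this formulation is not taken from the slides, and the function name is illustrative): maximize the total improvement over s_i subject to doing at least as well against every column; the optimum is positive exactly when s_i is weakly dominated.

```python
# Sketch: a row is weakly dominated by a mixture q iff q does at least as well
# against every column and strictly better against some column.
import numpy as np
from scipy.optimize import linprog

def weakly_dominated(U, row):
    U = np.asarray(U, dtype=float)
    n_rows, n_cols = U.shape
    # maximize sum_j [U(q, j) - U(row, j)]  subject to  U(q, j) >= U(row, j) for all j
    c = -(U @ np.ones(n_cols))                 # minimize -sum_j U(q, j)
    A_ub = -U.T                                # -U(q, j) <= -U(row, j)
    b_ub = -U[row, :]
    A_eq = np.ones((1, n_rows)); b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n_rows)
    return res.success and (-res.fun - U[row, :].sum()) > 1e-9

# Row 1 is weakly (but not strictly) dominated by row 0 here:
print(weakly_dominated([[1, 1], [1, 0]], 1))   # True
```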

22. Dominance vs MEU. Some preliminary remarks.

23-24. Dominance vs MEU. Propositional Attitudes.
◮ We will talk about so-called propositional attitudes. These are attitudes (like knowledge, beliefs, desires, intentions, etc.) that take propositions as objects.
◮ Propositions will be taken to be elements of a given algebra, e.g. measurable subsets of a state space (a σ-algebra and/or power-set algebra), or formulas in a given language (an abstract Boolean algebra).

25-26. Dominance vs MEU. All-out vs graded attitudes.
◮ A propositional attitude A is all-out when, for any proposition p, the agent can only be in three states of that attitude regarding p:
1. Ap: the agent "believes" that p.
2. A¬p: the agent "disbelieves" that p.
3. ¬Ap ∧ ¬A¬p: the agent "suspends judgment" about p.
◮ A propositional attitude A is graded when, for any proposition p, the states of that attitude that the agent can be in with respect to p can be compared according to their strength on a given scale.
[Figure: a graded attitude toward P and ¬P illustrated on a scale, with values 1/8 and 3/8.]

27. Knowledge and beliefs in games. Hard and Soft Attitudes.
◮ Hard attitudes: • Truthful. • Unrevisable. • Fully introspective.
◮ Soft attitudes: • Can be false / mistaken. • Revisable / can be reversed. • Not fully introspective.

28. Knowledge and beliefs in games. Models of graded beliefs.

29-35. Models of graded beliefs. Harsanyi Type Space. Based on the work of John Harsanyi on games with incomplete information, game theorists have developed an elegant formalism that makes talk about beliefs, knowledge and rationality precise:
◮ A type is everything a player knows privately at the beginning of the game which could affect his beliefs about payoffs and about all other players' possible types.
◮ Each type is assigned a joint probability distribution over the space of types and actions:

λ_i : T_i → Δ(T_{-i} × S_{-i})

where T_i is the set of player i's types, Δ(·) denotes the set of all probability distributions, T_{-i} are the other players' types, and S_{-i} are the other players' choices.
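
To make the formalism concrete, here is a small sketch (our own illustration, with made-up player names, types and numbers) of a two-player type space as plain Python data: λ_i maps each of i's types to a probability distribution over pairs (opponent strategy, opponent type), and marginalizing out the types gives i's belief about the opponent's choice.

```python
# Sketch of a two-player Harsanyi type space (illustrative numbers).
# lam[player][type] is a dict (opponent_strategy, opponent_type) -> probability,
# i.e. lambda_i : T_i -> Delta(S_{-i} x T_{-i}).
lam = {
    1: {'t1': {('c', 't2'): 0.4, ('d', 't2'): 0.1, ('d', 'u2'): 0.5}},
    2: {'t2': {('c', 't1'): 1.0},
        'u2': {('c', 't1'): 0.5, ('d', 't1'): 0.5}},
}

def marginal_on_strategies(belief):
    """Marginalize a belief over (strategy, type) pairs down to strategies alone."""
    out = {}
    for (s, _t), prob in belief.items():
        out[s] = out.get(s, 0.0) + prob
    return out

# Player 1's type t1 believes player 2 chooses c with probability 0.4 and d with 0.6.
print(marginal_on_strategies(lam[1]['t1']))   # {'c': 0.4, 'd': 0.6}
```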

36-44. Models of graded beliefs. Returning to the Example: A Game Model. Ann and Bob each choose H or M; the payoff is (3,3) at (H,H), (1,1) at (M,M), and (0,0) otherwise.
◮ One type for Ann (t_A) and two types for Bob (t_B, u_B).
◮ A state is a tuple of choices and types, e.g. (M, t_A, M, u_B).
◮ Calculate expected utility in the usual way...
◮ M is rational for Ann's type t_A: 0 · 0.2 + 1 · 0.8 ≥ 3 · 0.2 + 0 · 0.8.
◮ M is rational for Bob's type t_B: 0 · 0 + 1 · 1 ≥ 3 · 0 + 0 · 1.
◮ Ann thinks Bob may be irrational: P_A(Irrat[B]) = 0.3, P_A(Rat[B]) = 0.7.
[Figure: the payoff matrix and the belief tables for t_A, t_B and u_B.]
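
The inequalities in the last bullets can be re-checked mechanically. The sketch below hard-codes only what the bullets state (t_A assigns 0.2 to Bob playing H and 0.8 to M; t_B assigns probability 1 to Ann playing M); the full belief tables of the original slide are not reproduced here.

```python
# Payoffs of the coordination game: (H,H) -> 3, (M,M) -> 1, otherwise 0 (for either player).
def u(own, other):
    return 3 if own == other == 'H' else (1 if own == other == 'M' else 0)

def expected_utility(own, belief_about_other):
    return sum(p * u(own, s) for s, p in belief_about_other.items())

# Ann's type t_A: believes Bob plays H with 0.2 and M with 0.8 (from the slide).
ann_belief = {'H': 0.2, 'M': 0.8}
print(round(expected_utility('M', ann_belief), 2),
      round(expected_utility('H', ann_belief), 2))   # 0.8 >= 0.6, so M is rational for t_A

# Bob's type t_B: believes Ann plays M for sure (from the slide).
bob_belief = {'H': 0.0, 'M': 1.0}
print(round(expected_utility('M', bob_belief), 2),
      round(expected_utility('H', bob_belief), 2))   # 1.0 >= 0.0, so M is rational for t_B
```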

45-47. Models of graded beliefs. Rationality. Let G = ⟨N, {S_i}_{i∈N}, {u_i}_{i∈N}⟩ be a strategic game and T = ⟨{T_i}_{i∈N}, {λ_i}_{i∈N}, S⟩ a type space for G. For each t_i ∈ T_i, we can define a probability measure p_{t_i} ∈ Δ(S_{-i}):

p_{t_i}(s_{-i}) = Σ_{t_{-i} ∈ T_{-i}} λ_i(t_i)(s_{-i}, t_{-i})

The set of states (pairs of strategy profiles and type profiles) where player i chooses rationally is

Rat_i := {(s_i, t_i) | s_i is a best response to p_{t_i}}

and the event that all players are rational is Rat = {(s, t) | for all i, (s_i, t_i) ∈ Rat_i}.
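
A sketch of how Rat_i can be computed for a finite type space (illustration only; the data structures mirror the definitions above, and the concrete strategies, types and numbers are hypothetical):

```python
from itertools import product

# A tiny two-player example: strategies, types, and lambda_i as dicts mapping a
# type to a distribution over (opponent strategy, opponent type).
S = {1: ['c', 'd'], 2: ['c', 'd']}
T = {1: ['t1'], 2: ['t2', 'u2']}
lam = {
    1: {'t1': {('c', 't2'): 0.5, ('d', 'u2'): 0.5}},
    2: {'t2': {('c', 't1'): 1.0}, 'u2': {('d', 't1'): 1.0}},
}
# u[i][(s_i, s_j)] = player i's payoff; a simple coordination game.
u = {i: {(a, b): (1 if a == b else 0) for a in S[i] for b in S[3 - i]} for i in (1, 2)}

def belief_on_strategies(i, t_i):
    """p_{t_i}(s_{-i}) = sum over t_{-i} of lambda_i(t_i)(s_{-i}, t_{-i})."""
    p = {s: 0.0 for s in S[3 - i]}
    for (s, _t), prob in lam[i][t_i].items():
        p[s] += prob
    return p

def rat(i):
    """Rat_i: the (s_i, t_i) pairs where s_i is a best response to p_{t_i}."""
    result = set()
    for s_i, t_i in product(S[i], T[i]):
        p = belief_on_strategies(i, t_i)
        eu = {a: sum(p[s] * u[i][(a, s)] for s in p) for a in S[i]}
        if eu[s_i] >= max(eu.values()) - 1e-12:
            result.add((s_i, t_i))
    return result

print(rat(1))   # t1 is indifferent, so both of player 1's choices are rational
print(rat(2))   # {('c', 't2'), ('d', 'u2')}
```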

48. Models of graded beliefs. Common "knowledge" of rationality. In much of this literature, "full belief" or sometimes "knowledge" is identified with probability 1. (This is not a philosophical commitment, but rather a term of art!)

49-53. Models of graded beliefs. Common knowledge of rationality. Define R^n_i by induction on n:

Let R^1_i = Rat_i.

Suppose that R^n_i has been defined for each i. Define R^n_{-i} as follows:

R^n_{-i} = {(s_{-i}, t_{-i}) | s_{-i} ∈ S_{-i}, t_{-i} ∈ T_{-i}, and for each j ≠ i, (s_j, t_j) ∈ R^n_j}.

For each n ≥ 1,

R^{n+1}_i = {(s_i, t_i) | (s_i, t_i) ∈ R^n_i and λ_i(t_i) assigns probability 1 to R^n_{-i}}.

Common knowledge of rationality is (⋂_{n≥1} R^n_1) × (⋂_{n≥1} R^n_2) × · · · × (⋂_{n≥1} R^n_N).
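
On a finite type space the inductive definition can be run to a fixed point. A hedged sketch (it restates the same toy type space used in the previous sketch so it runs on its own; the numbers remain hypothetical):

```python
# Same toy type space as in the previous sketch (illustrative numbers).
S = {1: ['c', 'd'], 2: ['c', 'd']}
T = {1: ['t1'], 2: ['t2', 'u2']}
lam = {
    1: {'t1': {('c', 't2'): 0.5, ('d', 'u2'): 0.5}},
    2: {'t2': {('c', 't1'): 1.0}, 'u2': {('d', 't1'): 1.0}},
}
u = {i: {(a, b): (1 if a == b else 0) for a in S[i] for b in S[3 - i]} for i in (1, 2)}

def best_responses(i, t_i):
    p = {s: 0.0 for s in S[3 - i]}
    for (s, _t), prob in lam[i][t_i].items():
        p[s] += prob
    eu = {a: sum(p[s] * u[i][(a, s)] for s in p) for a in S[i]}
    best = max(eu.values())
    return {a for a in S[i] if eu[a] >= best - 1e-12}

# R[i] starts as Rat_i and is refined: keep (s_i, t_i) only if t_i assigns
# probability 1 to the opponent's current R (this is R^{n+1}_i above).
R = {i: {(s, t) for t in T[i] for s in best_responses(i, t)} for i in (1, 2)}
changed = True
while changed:
    changed = False
    for i in (1, 2):
        j = 3 - i
        keep = set()
        for (s_i, t_i) in R[i]:
            mass_on_Rj = sum(prob for (s_j, t_j), prob in lam[i][t_i].items()
                             if (s_j, t_j) in R[j])
            if mass_on_Rj >= 1.0 - 1e-12:
                keep.add((s_i, t_i))
        if keep != R[i]:
            R[i], changed = keep, True

print(R)   # the strategy-type pairs surviving common (probability-1) belief of rationality
```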

54-56. Models of graded beliefs. The game: Ann chooses a row (U or D), Bob a column (L or R), with payoffs (2,2) at (U,L), (1,1) at (D,R), and (0,0) otherwise. The type space is given by the tables below.

λ_A(a_1)            λ_A(a_2)            λ_A(a_3)
      L     R             L     R             L     R
b_1   0.5   0.5     b_1   0.5   0       b_1   0     0
b_2   0     0       b_2   0     0       b_2   0     0.5
b_3   0     0       b_3   0     0.5     b_3   0     0.5

λ_B(b_1)            λ_B(b_2)            λ_B(b_3)
      U     D             U     D             U     D
a_1   0.5   0       a_1   0.5   0       a_1   0     0
a_2   0     0.5     a_2   0     0       a_2   0     0.5
a_3   0     0       a_3   0     0.5     a_3   0     0.5

◮ Consider the state (d, r, a_3, b_3). Both a_3 and b_3 correctly believe that (i.e., assign probability 1 to) the outcome is (d, r).
◮ This fact is not common knowledge: a_3 assigns probability 0.5 to Bob being of type b_2, and type b_2 assigns probability 0.5 to Ann playing u. So Ann does not know that Bob knows that she is playing d.
◮ Furthermore, while it is true that both Ann and Bob are rational, it is not common knowledge that they are rational.

57-59. Models of graded beliefs. General Comments.
◮ We have suppressed mathematical details about probabilities (σ-algebras, etc.).
◮ "Impossibility" is identified with probability 0, but the distinction between the two is important (especially for infinite games).
◮ We can model "soft" information using conditional probability systems, lexicographic probabilities, or nonstandard probabilities (more on this later).

60. Models of all-out attitudes.

61. Models of all-out attitudes. Hard Information.

62-66. Models of all-out attitudes. Example. Ann chooses the row, Bob the column:

        r        l
u     1, -1   -1, 1
d    -1, 1     1, -1

[Figure: a model with four states w1 = (u, r), w3 = (u, l), w2 = (d, l), w4 = (d, r); edges labelled A and B connect the states that Ann and, respectively, Bob cannot distinguish.]

67-68. Models of all-out attitudes. Epistemic Model. Suppose that G is a strategic game, S is the set of strategy profiles of G, and Ag is the set of players. An epistemic model based on S and Ag is a triple ⟨W, {Π_i}_{i∈Ag}, σ⟩, where W is a nonempty set, for each i ∈ Ag, Π_i is a partition of W, and σ : W → S is a strategy function. (A partition of W is a pairwise disjoint collection of subsets of W whose union is all of W. Elements of a partition Π of W are called cells, and for w ∈ W, Π(w) denotes the cell of Π containing w.)
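
A minimal sketch of an epistemic model as data, with the induced knowledge operator on events (K_i E is the set of states whose whole Π_i-cell lies inside E). The four states and the strategy function are those of the earlier example; the exact partition cells are our assumption, chosen so that each player knows her own choice.

```python
# Sketch: an epistemic model <W, {Pi_i}, sigma> for the 2x2 example above.
W = ['w1', 'w2', 'w3', 'w4']
sigma = {'w1': ('u', 'r'), 'w2': ('d', 'l'), 'w3': ('u', 'l'), 'w4': ('d', 'r')}

# Partitions: each player cannot distinguish states where she makes the same choice
# (Ann's cells are read off the figure; Bob's cells are our assumption to the same effect).
Pi = {
    'Ann': [{'w1', 'w3'}, {'w2', 'w4'}],
    'Bob': [{'w1', 'w4'}, {'w2', 'w3'}],
}

def cell(i, w):
    """Pi_i(w): the cell of player i's partition containing w."""
    return next(c for c in Pi[i] if w in c)

def K(i, event):
    """K_i(E): the states where player i knows the event E (Pi_i(w) is a subset of E)."""
    return {w for w in W if cell(i, w) <= set(event)}

# Ann knows she plays u exactly at the states where she plays u:
ann_plays_u = {w for w in W if sigma[w][0] == 'u'}
print(K('Ann', ann_plays_u))   # {'w1', 'w3'}
print(K('Bob', ann_plays_u))   # set(): Bob never knows Ann's choice in this model
```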

69-73. Models of all-out attitudes. Game models. [Figure: the construction of a game model, starting from a game G, via its strategy space, to a game model built on top of it.]

74. Models of all-out attitudes. Kripke Model for S5. Prop is a given set of atomic propositions and Ag is a set of agents. An epistemic model based on Prop and Ag is a triple ⟨W, {Π_i}_{i∈Ag}, V⟩, where W is a nonempty set, for each i ∈ Ag, Π_i is a partition of W, and V : W → P(Prop) is a valuation function.

75-78. Models of all-out attitudes. Example.

[Figure: the game and four-state model from the previous example, with states w1 = (u, r), w3 = (u, l), w2 = (d, l), w4 = (d, r) and the agents' indistinguishability edges.]

◮ M, w ⊨ K_i φ iff for all w′ ∈ Π_i(w), M, w′ ⊨ φ.
◮ One assumption, the ex-interim condition: if w′ ∈ Π_i(w), then σ(w)_i = σ(w′)_i.
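
The ex-interim condition can be checked mechanically: on each of player i's cells, the strategy function must assign i the same choice. A sketch (the model and partitions are the same hypothetical ones as in the earlier sketch):

```python
# Sketch: verify "w' in Pi_i(w) implies sigma(w)_i = sigma(w')_i".
W = ['w1', 'w2', 'w3', 'w4']
sigma = {'w1': ('u', 'r'), 'w2': ('d', 'l'), 'w3': ('u', 'l'), 'w4': ('d', 'r')}
Pi = {'Ann': [{'w1', 'w3'}, {'w2', 'w4'}],   # Ann = coordinate 0 of sigma
      'Bob': [{'w1', 'w4'}, {'w2', 'w3'}]}   # Bob = coordinate 1 (our reading of the figure)
coord = {'Ann': 0, 'Bob': 1}

def ex_interim(Pi, sigma, coord):
    """True iff every player's own choice is constant on each of her cells."""
    return all(len({sigma[w][coord[i]] for w in c}) == 1
               for i, cells in Pi.items() for c in cells)

print(ex_interim(Pi, sigma, coord))   # True

# A partition violating the condition: Ann would not know her own choice.
bad_Pi = dict(Pi, Ann=[{'w1', 'w2'}, {'w3', 'w4'}])
print(ex_interim(bad_Pi, sigma, coord))   # False: u and d sit in the same Ann-cell
```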

79. Models of all-out attitudes. Hard Information, Axiomatically.
1. Closed under known implication (K): K_i(φ → ψ) → (K_iφ → K_iψ)
2. Logical truths are known (NEC): if ⊨ φ then ⊨ K_iφ
3. Truthfulness (T): K_iφ → φ
4. Positive introspection (4): K_iφ → K_iK_iφ
5. Negative introspection (5): ¬K_iφ → K_i¬K_iφ
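
On partition-based models these axioms are valid, and on a finite model this can be verified event by event. A small sketch using the event-based knowledge operator from the earlier sketch (the four-state model and Ann's cells remain illustrative assumptions):

```python
from itertools import chain, combinations

W = ['w1', 'w2', 'w3', 'w4']
Pi_Ann = [{'w1', 'w3'}, {'w2', 'w4'}]

def cell(w):
    return next(c for c in Pi_Ann if w in c)

def K(event):
    """K(E): states whose whole cell is contained in E."""
    return {w for w in W if cell(w) <= set(event)}

def complement(event):
    return set(W) - set(event)

# Check T, 4 and 5 for Ann on every event E of this finite model:
events = chain.from_iterable(combinations(W, r) for r in range(len(W) + 1))
for E in map(set, events):
    assert K(E) <= E                                   # T: K(E) implies E
    assert K(E) <= K(K(E))                             # 4: K(E) implies K(K(E))
    assert complement(K(E)) <= K(complement(K(E)))     # 5: not-K(E) implies K(not-K(E))
print("T, 4 and 5 hold for every event in this model")
```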
