Lecture Slides - Part 4
Bengt Holmstrom (MIT)
February 2, 2016
Mechanism Design

n agents i = 1, ..., n
Agent i has type θ_i ∈ Θ_i, which is i's private information
θ = (θ_1, ..., θ_n) ∈ Θ = ∏_i Θ_i
We denote θ_{-i} = (θ_1, ..., θ_{i-1}, θ_{i+1}, ..., θ_n), so θ = (θ_i, θ_{-i})
y ∈ Y is a decision to be taken by the principal P
E.g.: y = (x, t), where x is the allocation (who gets the good in an auction; how much of a public good is built; etc.) and t is the transfer (how much people pay or are paid)
Mechanism

A mechanism Γ = {M, y} specifies a message space M and a decision rule y(m)
Each agent sends a message m_i(θ_i) to P from message space M_i, and then P chooses action y(m_1, ..., m_n)
P has commitment power
Preferences

Agent i has utility u_i(y, θ)
P has utility v(y, θ)
(Note: i's utility can depend on other players' types, but in some examples it will only depend on her own type: u_i(y, θ_i))
Beliefs

p(θ) is a common prior belief
Players have posteriors p(θ_{-i} | θ_i) derived from the prior, given their own type
Timing

1. P chooses a mechanism (M, y) and commits to it
2. Agents play the "game", with equilibrium m*(θ) = (m*_1(θ_1), ..., m*_n(θ_n))
3. Outcome ỹ(θ) = y(m*(θ))

For now we will be agnostic about the equilibrium concept used to determine m*
Questions

Which allocations ỹ(θ) can be implemented? (Depending on the solution concept)
Which ỹ(θ) among the implementable ones is optimal for P?
E.g.: in our screening problem, ỹ = (x(θ), t(θ)), and we could implement any non-decreasing schedule x(θ) (but with restrictions on t(θ))
Two Solution Concepts

DSE (Dominant Strategy Equilibrium): i has a strategy that is best independently of the other agents' types ("even if I knew their types")
BNE (Bayesian Nash Equilibrium)
Revelation Principle

Proposition (BNE version): suppose Γ has a BNE m*(θ) with outcome ỹ(θ) = y(m*(θ)). Then there exists a direct revelation mechanism Γ^d with M = Θ and y^d(θ) = ỹ(θ), such that m^d_i(θ_i) = θ_i is BNE-implementable.
In a direct mechanism, P just asks agents to reveal their types, and chooses some allocation accordingly
It is incentive-compatible for agents to report their true types
The revelation principle says that decision rule ỹ(θ) is implementable with some mechanism (M, y) iff truth-telling is a BNE of the mechanism (Θ, ỹ)
This greatly reduces the space of mechanisms we need to study
We already saw the revelation principle in our screening problem:
A solution was initially framed as a payment schedule t(x), which would induce some equilibrium production x(θ) by the agent
But we reframed it as directly choosing (x(θ), t(θ)) for each θ, subject to IC and IR conditions
Note: P's commitment power matters
If P did not have commitment power, it would be hard to get agents to reveal θ, since revelation might allow for more deviations ex post by P
The TSA has rules to punish people detected to have drugs
In the direct mechanism version, you would always tell the truth, and you would not get punished if you had some amount that they would not have detected anyway
But they don't have the commitment power to do this: if you say "yes, I have five grams of cocaine" you will go to jail
Proof

"If" direction is obvious: if truth-telling is a BNE of mechanism (Θ, ỹ), then this mechanism implements allocation ỹ(θ)
"Only if": start with a general (M, y)
If m* is a BNE, then m*_i(θ_i) ∈ argmax_{m_i} E_i[u_i(y(m_i, m*_{-i}(θ_{-i})), θ) | θ_i]
In particular,
E_i[u_i(y(m*_i(θ_i), m*_{-i}(θ_{-i})), θ) | θ_i] ≥ E_i[u_i(y(m*_i(θ̃_i), m*_{-i}(θ_{-i})), θ) | θ_i]
for any θ̃_i: there is no point in mimicking any other type θ̃_i
Hence E_i[u_i(ỹ(θ_i, θ_{-i}), θ) | θ_i] ≥ E_i[u_i(ỹ(θ̃_i, θ_{-i}), θ) | θ_i] for all θ̃_i
Then θ_i ∈ argmax_{θ̃_i} E_i[u_i(ỹ(θ̃_i, θ_{-i}), θ) | θ_i], so truth-telling is an equilibrium of (Θ, ỹ)
DSE

The same theorem holds for the DSE solution concept
Here, m* is a DSE if m*_i(θ_i) ∈ argmax_{m_i} u_i(y(m_i, m_{-i}), θ) for any m_{-i}
Notes:
DSE implies BNE
The revelation principle is a "testing device"
Commitment is again critical
More general mechanisms may be useful for unique implementation
VCG Mechanism

VCG is a DSE implementation of the efficient (surplus-maximizing) decision rule
The catch: it is not necessarily budget-balanced
y = (x, t): x is the allocation, t = (t_1, ..., t_n) are the transfers
E.g.: x is a public good, or x = (x_1, ..., x_n) is an allocation of private goods
u_i(y, θ) = u_i(x, θ_i) + t_i: quasilinear preferences
First-best allocation:
x*(θ) ∈ argmax_x Σ_i u_i(x, θ_i) ∀θ
Question: can x*(θ) be implemented?
Yes
Counterintuitive: it seems like in real life it is very hard to get people to reveal preferences for a public good and build it whenever optimal
Lecture 10

Reminder: we had asked, given these utility functions:
u_i(y, θ) = u_i(x, θ_i) + t_i
could we implement x*(θ), given by
x*(θ) ∈ argmax_x Σ_i u_i(x, θ_i) ∀θ,
as a DSE? In other words, do there exist {t_i(m)} such that it is a DSE to announce m_i = θ_i for all i?
Yes!
DSE means that
θ_i ∈ argmax_{m_i} [u_i(x*(m_i, m_{-i}), θ_i) + t_i(m_i, m_{-i})] ∀θ_i, m_{-i}
Note: DSE requires that declaring your true type is optimal even if other people are lying and sending arbitrary messages m_{-i}
By definition of x*,
θ_i ∈ argmax_{m_i} [u_i(x*(m_i, m_{-i}), θ_i) + Σ_{j≠i} u_j(x*(m_i, m_{-i}), m_j)] ∀θ_i, m_{-i}
since sending m_i = θ_i implements the socially optimal x* (assuming other players' types are given by m_j)
VCG Mechanism

Idea: we can just set the transfers for player i equal to all the remaining terms!
t_i^VCG(m_i, m_{-i}) = Σ_{j≠i} u_j(x*(m_i, m_{-i}), m_j) + h_i(m_{-i})
Then i's incentives are always to implement x*(θ_i, m_{-i}), so he has a weakly dominant strategy to announce m_i = θ_i
h_i is any function that depends only on m_{-i}, and hence does not affect i's incentives
It may be useful if we want the transfers to add up to 0
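As a minimal sketch of the transfer formula above (not part of the original slides; the function and variable names are hypothetical, and we assume a finite allocation set and reported valuation functions standing in for the abstract u_i and m_i):

```python
# Sketch of t_i^VCG(m) = sum_{j != i} u_j(x*(m), m_j) + h_i(m_{-i}).
# Names here (efficient_allocation, vcg_transfer, valuations) are illustrative.

def efficient_allocation(allocations, valuations):
    """x*(m): the allocation maximizing the sum of reported utilities."""
    return max(allocations, key=lambda x: sum(u(x) for u in valuations))

def vcg_transfer(i, allocations, valuations, h_i=lambda others: 0.0):
    """t_i^VCG(m): others' reported utility at x*(m), plus a term h_i
    that depends only on the other agents' reports."""
    x_star = efficient_allocation(allocations, valuations)
    others = [u for j, u in enumerate(valuations) if j != i]
    return sum(u(x_star) for u in others) + h_i(others)

# Two-agent public-good toy case: allocation 1 = build, 0 = don't build.
# Reported types: theta_1 = 3 (values building), theta_2 = -1 (dislikes it).
valuations = [lambda x: 3.0 * x, lambda x: -1.0 * x]
x_star = efficient_allocation([0, 1], valuations)   # total surplus 2x is maximized at x = 1
t_2 = vcg_transfer(1, [0, 1], valuations)           # with h_2 = 0, agent 2 receives u_1(x*) = 3.0
```

With h_i ≡ 0 this just hands each agent the others' reported surplus, aligning her objective with the social one; the slides' point is that any h_i(m_{-i}) can then be subtracted without disturbing incentives.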
Uniqueness

Not only does VCG implement x*, it is also essentially the unique mechanism that does this
Theorem: If Θ_i is "smoothly connected" ∀i, then {t_i^VCG} uniquely implements x*(θ) (up to the "constants" h_i(m_{-i})).
Smoothly connected means that, for any θ_i, θ'_i ∈ Θ_i, there is a curve c : [0, 1] → Θ_i s.t. c(0) = θ_i, c(1) = θ'_i, c is C², and u ∘ c is C²
Example

Suppose x = 1 or 0: build or don't build
Building has social cost K (for simplicity, K = 0)
θ_i is i's willingness to pay
x*(θ) = 1 if Σ_i θ_i ≥ K, and 0 otherwise
Then what are the VCG transfers?
t_i^VCG(m) = 0 if i's WTP is not pivotal
t_i^VCG(m) = Σ_{j≠i} θ_j ≤ 0 if i is pivotal for x = 1
t_i^VCG(m) = −Σ_{j≠i} θ_j ≤ 0 if i is pivotal for x = 0
Idea: i always pays for the externality of his message
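The pivotal-transfer cases above can be sketched numerically (an illustrative sketch, not from the slides; `pivot_transfers` is a hypothetical name, and K is kept as a parameter with the slides' K = 0 as the default):

```python
# Pivot-scheme transfers for the build/don't-build example.
# x*(theta) = 1 iff sum(theta) >= K; a pivotal agent pays the
# externality her report imposes on the others.

def pivot_transfers(theta, K=0.0):
    build = sum(theta) >= K
    transfers = []
    for i, t_i in enumerate(theta):
        others = sum(theta) - t_i          # sum_{j != i} theta_j
        build_without_i = others >= K
        if build == build_without_i:
            transfers.append(0.0)          # i is not pivotal
        elif build:
            transfers.append(others - K)   # pivotal for x = 1: others < K, so <= 0
        else:
            transfers.append(-(others - K))  # pivotal for x = 0: others >= K, so <= 0
    return transfers

# theta = (5, -2, -1): sum = 2 >= 0, so build. Without agent 1 the others
# sum to -3, so agent 1 is pivotal and pays their loss: t_1 = -3.
print(pivot_transfers([5.0, -2.0, -1.0]))  # [-3.0, 0.0, 0.0]
```

Every transfer the scheme produces is nonpositive, which is exactly the no-subsidy property claimed for the pivot choice of h_i on the next slide.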
Our example above is called a pivot scheme
It implies a particular choice of h_i:
h_i(m_{-i}) = −max_x Σ_{j≠i} u_j(x, m_j)
In particular, this choice of h_i guarantees that t_i(m_i, m_{-i}) ≤ 0 for every i and all m (the principal never has to pay money on net)