  1. Wisdom of The Crowd (Lirong Xia)

  2. Example: Crowdsourcing
     - [Figure: Turkers 1 through n each report a preference over alternatives a, b, c (e.g., b > a), illustrating crowdsourced pairwise judgments.]

  3. The Condorcet Jury Theorem [Condorcet 1785]
     - Given
       • two alternatives {a, b}
       • a competence 0.5 < p < 1: Pr(an agent's preference = ground truth) = p > 0.5
     - Suppose
       • each agent's preference is generated i.i.d., such that
       • w/p p, it is the same as the ground truth
       • w/p 1 - p, it is different from the ground truth
     - Then, as n → ∞, the majority of the agents' preferences converges in probability to the ground truth
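A quick numeric illustration (not on the slide): with individual competence p = 0.6 and n = 3 voters, the majority is correct with probability
\[
p^3 + 3p^2(1-p) = 0.6^3 + 3 \cdot 0.6^2 \cdot 0.4 = 0.648 > 0.6,
\]
already better than any single voter, and this probability keeps increasing toward 1 as n grows.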

  4. Importance of the Jury Theorem
     - Justifies democracy and the wisdom of the crowd
       • "lays, among other things, the foundations of the ideology of the democratic regime" [Paroush SCW-98]

  5. Proof
     - Group competence
       • Pr(maj(P_n) = a | a), where P_n is a profile of n i.i.d. votes given ground truth a
       • Pr(maj(P_n) = a | a) = \sum_{j \geq \lceil n/2 \rceil} \binom{n}{j} p^j (1-p)^{n-j}
     - Random variable X_j: takes value 1 w/p p, 0 otherwise
       • encodes whether agent j's signal equals the ground truth
     - \frac{1}{n} \sum_{j=1}^{n} X_j converges to p > 0.5 in probability (Law of Large Numbers), so the fraction of correct votes exceeds 1/2, and the majority matches the ground truth, with probability approaching 1
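A small sketch (not from the slides) that checks this numerically: it evaluates the exact group competence from the binomial sum above and compares it with a Monte Carlo estimate. The competence p and the group sizes are illustrative assumptions.

```python
import math
import random

def group_competence(n, p):
    """Exact Pr(majority correct) for n i.i.d. voters with competence p (n odd)."""
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(n // 2 + 1, n + 1))

def simulate(n, p, trials=100_000):
    """Monte Carlo estimate of the same probability."""
    correct = sum(
        1 for _ in range(trials)
        if sum(random.random() < p for _ in range(n)) > n / 2
    )
    return correct / trials

p = 0.6  # assumed individual competence
for n in (1, 3, 11, 101):
    print(n, round(group_competence(n, p), 4), round(simulate(n, p), 4))
```

Both columns agree and climb toward 1 as n grows, as the theorem predicts.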

  6. Limitations of CJT
     - Given
       • two alternatives {a, b} (what about more than two?)
       • competence 0.5 < p < 1 (what about heterogeneous agents?)
     - Suppose
       • agents' signals are i.i.d. conditioned on the ground truth (what about dependent agents?)
       • w/p p, the same as the ground truth; w/p 1 - p, different from the ground truth
       • agents truthfully report their signals (what about strategic agents?)
     - Then the majority rule reveals the ground truth as n → ∞ (what about other rules?)

  7. Extensions
     - Dependent agents
     - Heterogeneous agents
     - Strategic agents
     - More than two alternatives

  8. An active area. Work by Myerson, Shapley & Grofman, and many others, appearing in venues such as Econometrica, Journal of Economic Theory (JET), Social Choice and Welfare, American Political Science Review, Games and Economic Behavior, Mathematical Social Sciences, Theory and Decision, and Public Choice.

  9. Does CJT hold for strategic agents? The group competence
     1. is higher than that of any single agent
        • Not always (same-vote equilibrium)
     2. increases in the group size n
        • Not always (same-vote equilibrium)
     3. goes to 1 as n → ∞
        • Yes, for some models and the informative equilibrium

  10. Strategic voting
     - Common-interest Bayesian voting game [Austen-Smith & Banks APSR-96]
       • two alternatives {a, b}, two signals {A, B}, a prior over the ground truth, and conditional probabilities Pr(signal | truth):
       • p_a = Pr(signal = A | truth = a)
       • p_b = Pr(signal = B | truth = b)
       • agents share the same utility function: U(outcome, ground truth) = 1 iff outcome = ground truth
       • sincere voting: vote for the alternative with the highest posterior probability
       • informative voting: vote for the alternative matching the signal
       • strategic voting: vote for the alternative with the highest expected utility

  11. Timeline of the Bayesian game
     1. Nature chooses a ground truth g
     2. Every agent j receives a signal s_j ~ Pr(s_j | g)
     3. Every agent computes the posterior distribution (belief) over the ground truth using Bayes' rule
     4. Every agent chooses a vote that maximizes her expected utility according to her belief
     5. The outcome is computed by the voting rule

  12. High-level example
     - Two signals, two voters
     - Model: Pr(signal | truth) = p for the matching signal and 1 - p otherwise, so the posterior on the matching truth is p > 0.5
     - [Figure: a table over the other (truthful) agent's possible signal, the winner once my vote is added (ties broken half/half), and the utility of each vote: 1, 0, 0.5, 0.5.]

  13. Sincere voting vs. informative voting
     - Setting
       • two alternatives {a, b}, two signals {A, B}
       • three agents
       • Pr(A | a) = p_a = 0.6, Pr(B | b) = p_b = 0.8
       • prior: Pr(a) = 0.2, Pr(b) = 0.8
     - An agent receives A
       • informative voting: a
       • posterior probabilities:
       • Pr(a | A) ∝ Pr(a) × Pr(A | a) = 0.2 × 0.6 = 0.12
       • Pr(b | A) ∝ Pr(b) × Pr(A | b) = 0.8 × (1 - 0.8) = 0.16
       • sincere voting: b (since 0.16 > 0.12)
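A minimal sketch (illustrative, not from the slides) that reproduces this comparison; the function name is hypothetical.

```python
# Sincere vs. informative voting: posterior over the ground truth after one signal.
prior = {"a": 0.2, "b": 0.8}                      # Pr(truth), from the slide
likelihood = {("A", "a"): 0.6, ("B", "a"): 0.4,   # Pr(signal | truth), p_a = 0.6
              ("A", "b"): 0.2, ("B", "b"): 0.8}   # p_b = 0.8

def posterior(signal):
    """Return Pr(truth | signal) via Bayes' rule."""
    unnorm = {t: prior[t] * likelihood[(signal, t)] for t in prior}
    z = sum(unnorm.values())
    return {t: v / z for t, v in unnorm.items()}

post = posterior("A")
print(post)                                       # {'a': ~0.429, 'b': ~0.571}
print("informative vote: a")                      # vote the signal itself
print("sincere vote:", max(post, key=post.get))   # vote the posterior mode -> 'b'
```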

  14. Strategic voting
     - Setting
       • two alternatives {a, b}, two signals {A, B}
       • three agents
       • Pr(A | a) = p_a = 0.6, Pr(B | b) = p_b = 0.8
       • prior: Pr(a) = 0.2, Pr(b) = 0.8
     - An agent receives A; the other two agents vote informatively
       • the agent's vote only matters when the other two votes are {a, b}, i.e., the other two signals are {A, B}
       • the signal profile is then (A, A, B)
       • posterior probabilities:
       • Pr(a | A, A, B) ∝ Pr(a) × Pr(A | a) × Pr(A | a) × Pr(B | a) = Pr(a) p_a² (1 - p_a) = 0.2 × 0.6² × (1 - 0.6) = 0.0288
       • Pr(b | A, A, B) ∝ Pr(b) × Pr(A | b) × Pr(A | b) × Pr(B | b) = Pr(b) (1 - p_b)² p_b = 0.8 × (1 - 0.8)² × 0.8 = 0.0256
       • strategic voting: a
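A small sketch (again illustrative) of the pivotal-voter calculation above, conditioning on the event that the other two informative votes split:

```python
# Strategic voting: posterior over the truth conditioned on being pivotal,
# i.e., the other two informative voters saw signals {A, B}.
prior = {"a": 0.2, "b": 0.8}
likelihood = {("A", "a"): 0.6, ("B", "a"): 0.4,
              ("A", "b"): 0.2, ("B", "b"): 0.8}

def pivotal_posterior(own_signal, others=("A", "B")):
    """Pr(truth | own signal and the other agents' inferred signals)."""
    unnorm = dict(prior)
    for s in (own_signal,) + tuple(others):
        for t in unnorm:
            unnorm[t] *= likelihood[(s, t)]
    z = sum(unnorm.values())
    return {t: v / z for t, v in unnorm.items()}

post = pivotal_posterior("A")                      # signal profile (A, A, B)
print(post)                                        # a ≈ 0.529, b ≈ 0.471
print("strategic vote:", max(post, key=post.get))  # -> 'a'
```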

  15. Eliciting Probabilities
     - Outcome space O = {o_1, …, o_m}
       • example: {Sunny, Rainy}
     - An expert is asked to report a distribution q = (q_1, …, q_m) over O
       • her true belief is p
     - Suppose the next day's weather is o; the expert is rewarded by a scoring rule s(q, o)
       • s: Lot(O) × O → R
       • example: linear scoring rule s_lin(q, o_k) = q_k
     - Expert's expected utility
       • S(q, p) = Σ_{o ∈ O} p(o) s(q, o)
       • when p = (0.7, 0.3), S(q, p) = 0.7 q_1 + 0.3 (1 - q_1), maximized at q_1 = 1
       • so the linear rule does not elicit the true belief
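A tiny sketch (not from the slides) of the expected linear score, scanning reports q_1 on a grid to show the maximum sits at q_1 = 1 rather than at the true belief 0.7:

```python
# Expected linear score S(q, p) = p1*q1 + (1 - p1)*(1 - q1) for a binary outcome.
p1 = 0.7  # true belief Pr(o1), from the slide

def expected_linear_score(q1, p1):
    return p1 * q1 + (1 - p1) * (1 - q1)

grid = [i / 100 for i in range(101)]
best = max(grid, key=lambda q1: expected_linear_score(q1, p1))
print(best)  # 1.0 -> not proper: the optimal report differs from the belief 0.7
```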

  16. (Strictly) Proper Scoring Rules
     - A scoring rule s is strictly proper if, for all p, q ∈ Lot(O) with p ≠ q, S(p, p) > S(q, p)
       • reporting the true belief is strictly optimal
     - Example (logarithmic scoring rule)
       • s_log(q, o_k) = ln(q_k)
       • for m = 2, S_log(q, p) = p_1 ln(q_1) + (1 - p_1) ln(1 - q_1)
       • maximized at q_1 = p_1
     - Example (quadratic scoring rule)
       • s_quad(q, o_k) = 2 q_k - Σ_j q_j²
       • for m = 2, S_quad(q, p) = 2 p_1 q_1 + 2 (1 - p_1)(1 - q_1) - Σ_j q_j²
       • maximized at q_1 = p_1
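A quick numerical check (assumed setup, m = 2) that the log and quadratic rules are maximized at q_1 = p_1, unlike the linear rule above:

```python
import math

def expected_score(score, q1, p1):
    """S(q, p) for a binary outcome: average the score over the true belief p."""
    return p1 * score(q1, 0) + (1 - p1) * score(q1, 1)

log_rule  = lambda q1, k: math.log(q1 if k == 0 else 1 - q1)
quad_rule = lambda q1, k: 2 * (q1 if k == 0 else 1 - q1) - (q1**2 + (1 - q1)**2)

p1 = 0.7
grid = [i / 1000 for i in range(1, 1000)]  # avoid log(0) at the endpoints
for name, rule in [("log", log_rule), ("quadratic", quad_rule)]:
    best = max(grid, key=lambda q1: expected_score(rule, q1, p1))
    print(name, best)  # both report 0.7, the true belief
```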

  17. Characterization of Strictly Proper Scoring Rules
     - Theorem. For m = 2, a scoring rule s is strictly proper if and only if G(p) = S(p, p) is strictly convex.
       • can be extended to m > 2
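As a sanity check (not on the slide), the expected score at the truth, G(p) = S(p, p), is indeed strictly convex for the two examples above:
\[
G_{\log}(p) = p_1 \ln p_1 + (1 - p_1)\ln(1 - p_1), \qquad
G_{\mathrm{quad}}(p) = 2\bigl(p_1^2 + (1 - p_1)^2\bigr) - \bigl(p_1^2 + (1 - p_1)^2\bigr) = p_1^2 + (1 - p_1)^2,
\]
and both have strictly positive second derivative in p_1 on (0, 1).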
