Evaluating Resistance to False-Name Manipulations in Elections
Vincent Conitzer, Bo Waggoner, Lirong Xia
Thanks to Hossein Azari and Giorgos Zervas for helpful discussions!
March 2012
Outline
• Background and motivation: Why study elections in which we expect false-name votes?
• Our model
• How to select a false-name-limiting method?
• How to evaluate the election outcome?
• Recap and future work
Motivating Challenge: Poll customers about a potential product
Preventing strategic behavior
Deter or hinder misreporting:
• Restricted settings (e.g., single-peaked preferences)
• Use computational complexity
False-name manipulation
• False-name-proof voting mechanisms?
• Extremely negative result for voting [C., WINE'08]
• Restricting to single-peaked preferences does not help much [Todo, Iwasaki, Yokoo, AAMAS'11]
• Assume creating additional identifiers comes at a cost [Wagman & C., AAAI'08]
• Verify some of the identities [C., TARK'07]
• Use social network structure [C., Immorlica, Letchford, Munagala, Wagman, WINE'10]
Overview article: [C., Yokoo, AI Magazine 2010]
Common factor: all of these aim for false-name-proofness.
Let's at least put up some obstacles
[Slide graphics: an IP address (140.247.232.88) and a throwaway e-mail address (jmhzdszx@sharklasers.com) as examples of identifiers an obstacle might check]
Issues:
1. Some people still vote multiple times.
2. Some people don't vote at all.
Approach
Suppose we can experimentally determine how many identities voters tend to use for each method.
[Figure: one histogram per false-name-limiting method, showing % of people vs. # of votes cast (0 to 5)]
Outline
• Background and motivation: Why study elections in which we expect false-name votes?
• Our model
• How to select a false-name-limiting method?
• How to evaluate the election outcome?
• Recap and future work
Model
• For each false-name-limiting method, take the individual vote distribution ρ as given
• Suppose votes are drawn i.i.d.
[Figure: an example ρ, probability vs. # of votes cast (0 to 5)]
Model
• Single-peaked preferences (here: two alternatives, B and C)
• Supporters (n_B, n_C) → false-name-limiting method (each supporter's number of votes drawn from ρ) → votes cast (V_B, V_C) → observed totals (v_B, v_C)
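A minimal sketch of this generative model (not from the talk; the particular ρ and the supporter counts below are made up for illustration): each supporter independently draws a number of votes from ρ, and only the totals are observed.

```python
import random

# Illustrative individual vote distribution rho: Pr[# of votes a supporter casts].
# These probabilities are assumptions of this sketch, not values from the talk.
rho = {0: 0.2, 1: 0.6, 2: 0.15, 3: 0.05}

def total_votes(n_supporters, rho):
    """Total votes cast by n_supporters, each drawing its count i.i.d. from rho."""
    counts, weights = zip(*rho.items())
    return sum(random.choices(counts, weights=weights, k=n_supporters))

# The supporter counts n_B, n_C are hidden; only the vote totals v_B, v_C are seen.
n_B, n_C = 520, 480
v_B, v_C = total_votes(n_B, rho), total_votes(n_C, rho)
print(v_B, v_C)
```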
Outline
• Background and motivation: Why study elections in which we expect false-name votes?
• Our model
• How to select a false-name-limiting method?
• How to evaluate the election outcome?
• Recap and future work
Example
• Is the choice always obvious?
• Individual vote distribution for 2010 U.S. midterm Congressional elections:
[Figures: the actual (in-person) distribution, percent of eligible voters vs. votes cast (0 to 5), next to a hypothetical (online) distribution with votes cast ranging up to 1000]
Problem statement
• Supporters: n_B > n_C
• Two false-name-limiting methods with distributions ρ_1 and ρ_2: which gives the better chance of the correct outcome?
• Compare Pr[correct | ρ_1] vs. Pr[correct | ρ_2], where Pr[correct] = Pr[V_B > V_C].
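To make the comparison concrete, Pr[correct | ρ] = Pr[V_B > V_C] can be estimated by simulation. A sketch (the two distributions ρ_1, ρ_2 and the supporter counts are made up; it reuses the total_votes helper from the model sketch above):

```python
def prob_correct(n_B, n_C, rho, trials=20_000):
    """Monte Carlo estimate of Pr[V_B > V_C] when each vote count is drawn i.i.d. from rho."""
    wins = sum(total_votes(n_B, rho) > total_votes(n_C, rho) for _ in range(trials))
    return wins / trials

rho_1 = {0: 0.3, 1: 0.7}                    # assumed: most people cast 0 or 1 vote
rho_2 = {0: 0.1, 1: 0.5, 2: 0.2, 5: 0.2}    # assumed: heavier tail, some cast many votes
print(prob_correct(520, 480, rho_1), prob_correct(520, 480, rho_2))
```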
Our results
• We show which of ρ_1 and ρ_2 is preferable as elections grow large
• Setting: a sequence of growing supporter profiles (n_B, n_C) where:
  1. n_B − n_C ∈ o(n) (elections are "close")
  2. n_B − n_C ∈ ω(1) (but not "dead even")
Selecting a false-name-limiting method
Theorem 1. Suppose μ_1/σ_1 > μ_2/σ_2. Then eventually Pr[correct | ρ_1] > Pr[correct | ρ_2].
"For large enough elections, the ratio of mean to standard deviation is all that matters."
Selecting a false-name-limiting method
Intuition:
• The distributions of the vote totals approach Gaussians.
• Pr[correct] = Pr[V_B > V_C] = Pr[V_B − V_C > 0], which approaches Φ( μ(n_B − n_C) / (σ√n) ), where n = n_B + n_C.
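A quick numeric check of that approximation (a sketch; μ, σ, and the supporter counts are assumed values, and Φ is the standard normal CDF):

```python
from math import sqrt
from statistics import NormalDist

mu, sigma = 0.9, 0.8      # assumed mean and std dev of rho
n_B, n_C = 520, 480       # assumed supporter counts
n = n_B + n_C

# Pr[correct] is approximately Phi( mu * (n_B - n_C) / (sigma * sqrt(n)) )
print(round(NormalDist().cdf(mu * (n_B - n_C) / (sigma * sqrt(n))), 3))
```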
Question 1 Recap
• Supporters: n_B > n_C; two candidate methods with distributions ρ_1, ρ_2, means μ_1, μ_2, and standard deviations σ_1, σ_2.
• Takeaway: choose the method with the highest ratio μ/σ!
• Inspiration for new methods?
Outline
• Background and motivation: Why study elections in which we expect false-name votes?
• Our model
• How to select a false-name-limiting method?
• How to evaluate the election outcome?
• Recap and future work
Analyzing election results
• Observe votes v_B > v_C
• One approach: Bayesian. Prior Pr[n_B, n_C]; evidence (v_B, v_C); posterior Pr[n_B, n_C | v_B, v_C]. Requires a prior, which may be costly or impossible to obtain, biased, or open to manipulation.
• Our approach: statistical hypothesis testing
Statistical hypothesis testing
• Observed: v_B > v_C (votes drawn from ρ). "Test statistic": γ̂.
• Desired conclusion: n_B > n_C.
• Null hypothesis: n_B = n_C.
• "p-value": Pr[γ ≥ γ̂ | null].
Statistical hypothesis testing
• Observed: v_B > v_C. Desired conclusion: n_B > n_C. Null hypothesis: n_B = n_C.
• p-value > .05: the observed γ̂ is not unlikely under the null hypothesis, so "accept" the null.
• p-value < .05: the observed γ̂ is unlikely under the null hypothesis, so reject the null.
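For one specific null n_B = n_C = n0, the p-value Pr[γ ≥ γ̂ | null] can be estimated by simulation. A sketch (ρ, n0, and the use of the simple difference statistic here are assumptions of the example, not part of the talk):

```python
import random

def total_votes(n, rho):
    counts, weights = zip(*rho.items())
    return sum(random.choices(counts, weights=weights, k=n))

def p_value(gamma_hat, n0, rho, stat, trials=2_000):
    """Estimate Pr[stat(V_B, V_C) >= gamma_hat] when both alternatives have n0 supporters."""
    hits = 0
    for _ in range(trials):
        v_B, v_C = total_votes(n0, rho), total_votes(n0, rho)
        if stat(v_B, v_C) >= gamma_hat:
            hits += 1
    return hits / trials

rho = {0: 0.2, 1: 0.6, 2: 0.15, 3: 0.05}       # assumed distribution
diff = lambda v_B, v_C: v_B - v_C              # the "difference rule" statistic
print(p_value(diff(92, 80), n0=100, rho=rho, stat=diff))
```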
Complication
The null hypothesis n_B = n_C could mean n_B = n_C = 1, 2, 3, 4, ... We can compute a p-value for each one.
• Reject: the p-value is below R for every null (max-p < R).
• "Accept": the p-value is above R for every null (min-p > R).
• Otherwise: unclear.
[Figure: p-value as a function of n_B in each of the three cases]
Our statistical test
Procedure:
1. Select significance level R (e.g., 0.05).
2. Observe votes v_B > v_C.
3. Compute γ̂.
4. If the max p-value over all nulls n_B = n_C is < R, reject.
5. If the min p-value over all nulls n_B = n_C is > R, don't reject.
6. Else, it is inconclusive whether to reject or not.
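A sketch of steps 4 to 6, scanning the p-value over a grid of candidate nulls (it reuses the p_value routine above; the grid and the default significance level are implementation choices for the sketch):

```python
def decide(gamma_hat, rho, stat, R=0.05, nulls=range(10, 501, 10)):
    """Apply the max/min p-value rule over a grid of null hypotheses n_B = n_C = n0."""
    ps = [p_value(gamma_hat, n0, rho, stat) for n0 in nulls]
    if max(ps) < R:
        return "reject the null: confident that n_B > n_C"
    if min(ps) > R:
        return "don't reject: outcome is inconclusive"
    return "unclear: the answer depends on which null is considered"
```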
Example and picking a test statistic
• Supporters n_B (?), n_C (?) → false-name-limiting method M (distribution ρ) → observed votes v_B = 92, v_C = 80.
• γ(v_B, v_C) = ?
Selecting a test statistic
Observed: v_B = 92, v_C = 80 (total v = 172).
• Difference rule: γ̂ = v_B − v_C = 12
• Percent rule: γ̂ = (v_B − v_C) / v = 12/172 ≈ 0.07
• General form: γ̂ = (v_B − v_C) / v^β  (the "adjusted margin of victory")
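The three rules above are instances of one formula; this snippet simply evaluates it for the running example (v_B = 92, v_C = 80, so v = 172):

```python
def adjusted_margin(v_B, v_C, beta):
    """Adjusted margin of victory: (v_B - v_C) / v**beta, where v is the total vote count."""
    return (v_B - v_C) / (v_B + v_C) ** beta

for beta in (0.0, 0.5, 1.0):
    print(beta, round(adjusted_margin(92, 80, beta), 3))
# beta = 0.0 -> 12.0    (difference rule)
# beta = 0.5 -> ~0.915
# beta = 1.0 -> ~0.070  (percent rule)
```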
Test statistics that fail
Theorem 2. Let the adjusted margin of victory be γ = (v_B − v_C) / v^β. Then:
1. For any β < 0.5, max-p = ½: we can never be sure to reject. (Type 2 errors)
2. For any β > 0.5, min-p = 0: we can never be sure to "accept". (Type 1 errors)
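A back-of-the-envelope illustration of part 1 (a normal approximation, not the paper's argument; σ and the observed difference are assumed values): under the null n_B = n_C = n0, the difference V_B − V_C has standard deviation about σ√(2·n0), so a fixed observed difference looks less and less surprising as n0 grows, and its p-value climbs toward ½.

```python
from math import sqrt
from statistics import NormalDist

sigma = 0.8          # assumed std dev of rho
gamma_hat = 12       # observed difference v_B - v_C from the running example

for n0 in (10, 100, 10_000, 1_000_000):
    p = 1 - NormalDist(0, sigma * sqrt(2 * n0)).cdf(gamma_hat)
    print(n0, round(p, 3))   # tends toward 0.5: the max p-value over the nulls is 1/2
```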
Test statistics for an election
[Figure: p-value as a function of the null n_B = n_C]
The "right" test statistic
Theorem 3. Let the adjusted margin of victory be γ = (v_B − v_C) / v^0.5. Then:
1. For a large enough γ̂, we will reject. (Declare the outcome "correct".)
2. For a small enough γ̂, we will not reject. (Declare the outcome "inconclusive".)
Test statistics for an election
[Figure: p-value as a function of the null n_B = n_C]
We can usually tell whether to reject or not
Use this test!
1. Select significance level R (e.g., 0.05).
2. Observe votes v_B > v_C.
3. Compute γ̂ = (v_B − v_C) / v^0.5.
4. If the max p-value over all nulls n_B = n_C is < R, reject: high confidence.
5. If the min p-value over all nulls n_B = n_C is > R, don't reject: low confidence.
6. Else, it is inconclusive whether to reject or not. (Rare!)
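Putting the pieces together for the running example (v_B = 92, v_C = 80), a self-contained sketch: ρ, the grid of nulls, and the Monte Carlo trial count are all assumptions made for illustration, not values from the talk.

```python
import random

rho = {0: 0.2, 1: 0.6, 2: 0.15, 3: 0.05}   # assumed vote distribution of the method

def total_votes(n):
    counts, weights = zip(*rho.items())
    return sum(random.choices(counts, weights=weights, k=n))

def gamma(v_B, v_C):
    """Adjusted margin of victory with beta = 0.5."""
    return (v_B - v_C) / (v_B + v_C) ** 0.5

def p_value(gamma_hat, n0, trials=2_000):
    """Estimate Pr[gamma >= gamma_hat] under the null n_B = n_C = n0."""
    hits = sum(gamma(total_votes(n0), total_votes(n0)) >= gamma_hat for _ in range(trials))
    return hits / trials

R = 0.05
gamma_hat = gamma(92, 80)
ps = [p_value(gamma_hat, n0) for n0 in range(100, 1001, 100)]   # coarse grid of nulls
if max(ps) < R:
    print("reject: high confidence the observed winner has more supporters")
elif min(ps) > R:
    print("don't reject: low confidence in the outcome")
else:
    print("inconclusive")
```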
Outline
• Background and motivation: Why study elections in which we expect false-name votes?
• Our model
• How to select a false-name-limiting method?
• How to evaluate the election outcome?
• Recap and future work
Summary
• Model: take ρ as given, draw votes i.i.d.
• How to select a false-name-limiting method? A: Pick the method with the highest μ/σ.
• How to evaluate the election outcome? A: Statistical significance test with γ = (v_B − v_C) / v^0.5, using the max p-value and min p-value over the null hypotheses.
Future Work
• Single-peaked preferences (done)
• Application to real-world problems
• Other models or weaker assumptions
• How to actually produce the distributions ρ?
  – Experimentally
  – Model agents and utilities
Thanks!