Model: Timing and modes of persuasion

Timing:
▶ Sender designs an information policy.
▶ Nature draws θ ∼ f and the signals (s_i)_{i=1}^n.
▶ Each R_i observes s_i and chooses d_i ∈ {0, 1}.

We consider two classes of information policies:
▶ Individual policy (π_i)_{i=1}^n: π_i : {H, L} → Δ(S_i), ∀i.
▶ General policy π: π : {H, L}^n → Δ(∏_{i=1}^n S_i).
Model: Modes of persuasion

Without loss, we focus on direct obedient policies:
▶ S_i = {0, 1} for each R_i.
▶ R_i follows her action recommendation d̂_i ∈ {0, 1}.
▶ d̂ = (d̂_1, ..., d̂_n).

Individual policy (π_i)_{i=1}^n: π_i : {H, L} → Δ({0, 1}), ∀i, i.e., the pair (π_i(H), π_i(L)) for each i, where π_i(θ_i) is the probability of an approval recommendation to R_i in state θ_i.
General policy π: π : {H, L}^n → Δ({0, 1}^n), i.e., the family (π(·|θ))_{θ ∈ {H,L}^n}.

We allow for any policy that is the limit of full-support policies. We solve for the Sender-optimal policy.
Roadmap
1. Model
2. General persuasion
3. Individual persuasion
General persuasion

R_i evaluates her payoff conditional on her vote being pivotal:
▶ d̂^a: all voters receive an approval recommendation.

R_i's posterior belief of being in the high state is
Pr(θ_i = H | d̂^a) = Σ_{θ ∈ Θ_H^i} f(θ) π(d̂^a | θ) / Σ_{θ ∈ {H,L}^n} f(θ) π(d̂^a | θ),
where Θ_H^i = {θ : θ_i = H}.

R_i obeys an approval recommendation iff
Pr(θ_i = H | d̂^a) ⩾ ℓ_i / (1 + ℓ_i).
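Where the cutoff comes from (my gloss; the exact payoff normalization is on the earlier model slides): if approving a high-state project pays R_i a payoff of 1, approving a low-state project pays −ℓ_i, and rejection pays 0, then obeying d̂^a requires
Pr(θ_i = H | d̂^a) · 1 − (1 − Pr(θ_i = H | d̂^a)) · ℓ_i ⩾ 0,
which rearranges to Pr(θ_i = H | d̂^a) ⩾ ℓ_i / (1 + ℓ_i).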
General persuasion

Sender's problem is:
max_{π(d̂^a|θ) ⩾ 0}  Σ_{θ ∈ {H,L}^n} f(θ) π(d̂^a | θ)
s.t.  π(d̂^a | θ) − 1 ⩽ 0, ∀θ,
      ℓ_i Σ_{θ ∈ Θ_L^i} f(θ) π(d̂^a | θ) − Σ_{θ ∈ Θ_H^i} f(θ) π(d̂^a | θ) ⩽ 0, ∀i.
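This is a finite linear program, so it can be solved off the shelf. Below is a minimal numerical sketch (mine, not part of the talk), assuming numpy and scipy are available; the inputs are the independent-states parameters from the appendix slide (Pr(θ_i = H) = 19/20, thresholds (41, 40, 20)), used only as an illustration. Since LP optima need not be unique, the solver may return a different optimal policy than the one reported there, with the same value.

```python
# Solve the general-persuasion LP: choose pi(d_hat_a | theta) in [0, 1] for every state
# profile theta to maximize the probability of unanimous approval, subject to each voter's
# obedience (IC) constraint.  Example inputs are from the appendix slide.
from itertools import product
import numpy as np
from scipy.optimize import linprog

p_H = 19 / 20                      # marginal probability of the high state (appendix example)
ell = [41.0, 40.0, 20.0]           # thresholds of doubt, voters ordered from strictest
profiles = list(product("HL", repeat=len(ell)))
f = {th: np.prod([p_H if s == "H" else 1 - p_H for s in th]) for th in profiles}

c = np.array([-f[th] for th in profiles])      # linprog minimizes, so negate the objective

# Voter i's IC:  ell_i * sum_{theta_i = L} f*pi  -  sum_{theta_i = H} f*pi  <= 0.
A_ub = [[ell[i] * f[th] if th[i] == "L" else -f[th] for th in profiles]
        for i in range(len(ell))]
b_ub = [0.0] * len(ell)

# The bound pi(d_hat_a | theta) <= 1 is imposed through `bounds` instead of explicit rows.
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
              bounds=[(0, 1)] * len(profiles), method="highs")
for th, x in zip(profiles, res.x):
    print("".join(th), round(float(x), 4))
print("Sender's value:", round(-res.fun, 4))
```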
Optimal policy under perfect correlation

Only (H..H) and (L..L) occur with positive probability.
Due to perfect correlation, all voters share the same belief Pr(θ_i = H | d̂^a).
R_1 imposes the highest cutoff on this belief:
Pr(θ_1 = H | d̂^a) ⩾ ℓ_1 / (1 + ℓ_1).
This is as if Sender were facing R_1 alone.

Proposition. Suppose voters' states are perfectly correlated. The unique optimal policy is
π(d̂^a | H..H) = 1,  π(d̂^a | L..L) = f(H..H) / (f(L..L) ℓ_1).
Only R_1's IC binds.
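A one-line check of the formula (my derivation, not on the slide): with only (H..H) and (L..L) possible, R_1's IC from the previous program reads f(H..H) π(d̂^a | H..H) ⩾ ℓ_1 f(L..L) π(d̂^a | L..L). Setting π(d̂^a | H..H) = 1 and letting the constraint bind gives π(d̂^a | L..L) = f(H..H) / (f(L..L) ℓ_1), a valid probability whenever f(H..H) ⩽ ℓ_1 f(L..L).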
Imperfect correlation: Strictest voters' ICs bind

Proposition. In any optimal policy, the IC constraints of a subgroup of the strictest voters bind, i.e., the IC binds for i ∈ {1, ..., i′} for some i′ ⩾ 1.

Under unanimity, a binding IC is equivalent to a zero payoff.
Examine the dual of the linear programming problem.
The dual problem

Sender's (primal) problem is:
max_{π(d̂^a|θ) ⩾ 0}  Σ_{θ ∈ {H,L}^n} f(θ) π(d̂^a | θ)
s.t.  π(d̂^a | θ) − 1 ⩽ 0, ∀θ,
      ℓ_i Σ_{θ ∈ Θ_L^i} f(θ) π(d̂^a | θ) − Σ_{θ ∈ Θ_H^i} f(θ) π(d̂^a | θ) ⩽ 0, ∀i.

Let γ_θ and μ_i be the dual variables on these two families of constraints. The dual problem is:
min_{γ_θ ⩾ 0, μ_i ⩾ 0}  Σ_{θ ∈ {H,L}^n} γ_θ
s.t.  γ_θ ⩾ f(θ) (1 + Σ_{i: θ_i = H} μ_i − Σ_{i: θ_i = L} μ_i ℓ_i), ∀θ.
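How the dual constraint is read off (my gloss): for each profile θ, collect the coefficient of π(d̂^a | θ) across the primal. The objective contributes f(θ); the bound π(d̂^a | θ) ⩽ 1 contributes γ_θ; voter i's IC contributes μ_i ℓ_i f(θ) if θ_i = L and −μ_i f(θ) if θ_i = H. Dual feasibility requires γ_θ + Σ_{i: θ_i = L} μ_i ℓ_i f(θ) − Σ_{i: θ_i = H} μ_i f(θ) ⩾ f(θ), which rearranges to the constraint above. By complementary slackness, μ_i > 0 only if R_i's IC binds, which is what ties the dual variables to the binding-IC proposition.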
Imperfect correlation: The strictest voters' ICs bind

Proposition. In any optimal policy, the IC constraints of a subgroup of the strictest voters bind, i.e., the IC binds for i ∈ {1, ..., i′} for some i′ ⩾ 1.

Under unanimity, a binding IC is equivalent to a zero payoff.
Examine the dual of the linear programming problem.
Think of each IC as a "resource constraint."
Granting surplus to a tough voter is more expensive than granting it to a lenient one.
Roadmap
1. Model
2. General persuasion
3. Individual persuasion
Individual persuasion

Sender designs (π_i(H), π_i(L)) for each R_i.

Let Pr(θ_i = H | R_{−i} approve) denote the probability that θ_i = H conditional on all other voters approving:
Pr(θ_i = H | R_{−i} approve) = Σ_{θ ∈ Θ_H^i} f(θ) ∏_{j ≠ i} π_j(θ_j) / Σ_{θ ∈ {H,L}^n} f(θ) ∏_{j ≠ i} π_j(θ_j).

R_i obeys an approval recommendation if:
Pr(θ_i = H | R_{−i} approve) π_i(H) − ℓ_i Pr(θ_i = L | R_{−i} approve) π_i(L) ⩾ 0.
Individual persuasion

R_i's approval IC can be rewritten as:
Pr(θ_i = H | R_{−i} approve) ⩾ ℓ_i π_i(L) / (ℓ_i π_i(L) + π_i(H)).

Sender chooses (π_i(H), π_i(L))_{i=1}^n to maximize his payoff,
Σ_θ f(θ) ∏_i π_i(θ_i),
subject to the approval ICs.
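The rearrangement in one step (my algebra): substituting Pr(θ_i = L | R_{−i} approve) = 1 − Pr(θ_i = H | R_{−i} approve) into the previous inequality and collecting terms gives Pr(θ_i = H | R_{−i} approve) (π_i(H) + ℓ_i π_i(L)) ⩾ ℓ_i π_i(L), which is the cutoff form above.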
Optimal policy under perfect correlation

Proposition. Suppose voters' states are perfectly correlated. Any optimal policy is of the form: π_i(H) = 1 for all i, and (π_1(L), ..., π_n(L)) ∈ [0, 1]^n such that ∏_i π_i(L) = f(H..H) / (f(L..L) ℓ_1).
Example 1: Binding ICs for all voters

The distribution over state profiles is:
f(HHH) = 9/50, f(HHL) = 1/20, f(HLL) = 2/25, f(LLL) = 43/100
(each mixed-profile probability applies to every permutation of that profile, so the eight profile probabilities sum to one).
The thresholds of doubt are: (ℓ_1, ℓ_2, ℓ_3) = (41, 40, 39).
The voters are relatively homogeneous in their thresholds of doubt.
The optimal policy is: (π_1(L), π_2(L), π_3(L)) = (0.071, 0.073, 0.075).
All three IC constraints bind.
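A small self-contained check (my code, not from the talk) that the reported policy satisfies all three approval ICs approximately with equality. The distribution is read as exchangeable, as noted above, and the residual slack of order 10⁻³ reflects the three-decimal rounding of the reported π_i(L).

```python
# Compute each voter's IC slack under the Example 1 policy, with pi_i(H) = 1 and the
# reported pi_i(L).  A slack of (exactly) zero means the obedience constraint binds.
from itertools import product
from math import prod

f = dict(zip(product("HL", repeat=3),
             [0.18, 0.05, 0.05, 0.08, 0.05, 0.08, 0.08, 0.43]))   # exchangeable profile probs
ell = [41.0, 40.0, 39.0]
piL = [0.071, 0.073, 0.075]                                       # reported policy (rounded)

for i in range(3):
    hi = lo = 0.0
    for th, p in f.items():
        w = prod(1.0 if s == "H" else piL[j] for j, s in enumerate(th) if j != i)
        if th[i] == "H":
            hi += p * w        # mass on "own state is H", weighted by others' approval
        else:
            lo += p * w        # mass on "own state is L", weighted by others' approval
    print(f"voter {i + 1}: IC slack = {hi - ell[i] * piL[i] * lo:+.4f}")
```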
Example 2: Rubber-stamping behavior

The distribution over state profiles is the same as in Example 1:
f(HHH) = 9/50, f(HHL) = 1/20, f(HLL) = 2/25, f(LLL) = 43/100.
The thresholds of doubt are: (ℓ_1, ℓ_2, ℓ_3) = (41, 40, 2).
R_3 is now much more lenient than before.
The optimal policy is: (π_1(L), π_2(L), π_3(L)) = (0.038, 0.039, 1).
R_3 rubber-stamps the others' approval decisions and obtains a positive payoff.
Example 3: Truthful recommendation

The distribution over state profiles is:
f(HHH) = 6/25, f(HHL) = 1/250, f(HLL) = 7/750, f(LLL) = 18/25.
The thresholds of doubt are: (ℓ_1, ℓ_2, ℓ_3) = (41, 40, 39).
The correlation is stronger than in Example 1.
The optimal policy is: (π_1(L), π_2(L), π_3(L)) = (0, 0.606, 0.644).
R_1 learns her state perfectly. The only slack IC is R_1's.
Monotonicity of persuasion

Proposition. There exists an optimal policy in which the approval probability in the low state weakly decreases in the threshold of doubt: π_i(L) ⩽ π_{i+1}(L) for all i ∈ {1, ..., n − 1}. Moreover, in any optimal policy in which R_i's IC binds, π_i(L) > π_j(L) for all j ∈ {1, 2, ..., i − 1}.

More demanding voters' policies are more informative.
Voters are divided into three subgroups:
▶ perfectly informed: π_i(L) = 0
▶ partially manipulated: π_i(L) ∈ (0, 1)
▶ rubber-stampers: π_i(L) = 1
When do some voters learn their states perfectly?

▶ ω ∈ {G, B} with Pr(ω = G) = p_0.
▶ Conditional on ω, Pr(H | G) = Pr(L | B) = λ, with λ ∈ [1/2, 1].
▶ ℓ_i = ℓ for all i.

[Figure: the λ-axis from 1/2 to 1 is partitioned by the cutoffs ℓ/(ℓ + 1) and λ* into three regions. Truthful revelation to some voters occurs only in the middle region, with the remaining voters rubber-stamping. In the two outer regions there is no truthful revelation: in one, a single partially informed voter has π_1(L) ∈ (0, 1) while the rest rubber-stamp; in the other, π_i(L) = π_j(L) ∈ (0, 1) for all i, j.]
Concluding remarks

We explore group persuasion under unanimity rule with affiliated payoff states and heterogeneous thresholds of doubt.
We compare two modes of persuasion: general and individual persuasion.
General persuasion makes the strictest voters indifferent.
Individual persuasion divides the group into perfectly informed voters, partially informed voters, and rubber-stampers.
Under non-unanimous rules, general persuasion leads to certain approval, while individual persuasion does not.
Future work:
▶ Non-unanimous rules under individual persuasion.
▶ Communication among voters.
▶ Sequential persuasion.
Thank you!
Independent states under general persuasion

Three voters' states are drawn independently. Each voter's state is H with probability 19/20. The threshold profile is (ℓ_1, ℓ_2, ℓ_3) = (41, 40, 20).

One optimal policy is:
π(d̂^a | θ) = 1 for θ ∈ {HHH, HHL},
π(d̂^a | LHH) = 820/1639,  π(d̂^a | HLH) = 840/1639,
π(d̂^a | θ) = 0 for θ ∈ {HLL, LLH, LHL, LLL}.

R_1's and R_2's IC constraints bind. R_3's does not.
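A short self-contained check (my code) of the policy above: with these numbers, R_1's and R_2's obedience constraints hold with equality and R_3's is strictly slack.

```python
# Verify the appendix policy for independent states: slack_i = sum_{theta_i=H} f*pi
# - ell_i * sum_{theta_i=L} f*pi should be zero for voters 1 and 2 and positive for voter 3.
from itertools import product
from math import prod

p = 19 / 20
ell = [41, 40, 20]
profiles = list(product("HL", repeat=3))
f = {th: prod(p if s == "H" else 1 - p for s in th) for th in profiles}

pi = {th: 0.0 for th in profiles}
pi["H", "H", "H"] = 1.0
pi["H", "H", "L"] = 1.0
pi["L", "H", "H"] = 820 / 1639
pi["H", "L", "H"] = 840 / 1639

for i in range(3):
    hi = sum(f[th] * pi[th] for th in profiles if th[i] == "H")
    lo = sum(f[th] * pi[th] for th in profiles if th[i] == "L")
    print(f"voter {i + 1}: IC slack = {hi - ell[i] * lo:+.6f}")
```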
k < n votes: Independent general persuasion

Each voter's recommendation is drawn independently conditional on the state profile.
We can construct a certain-approval policy as the limit of a sequence of full-support policies:
▶ In state profiles with k high-state voters, the high-state voters are recommended to approve and the low-state voters are recommended to reject.
▶ In all other state profiles, all voters are recommended to approve.
When R_i receives an approval recommendation and conditions on being pivotal (exactly k − 1 of the others approving), the only profiles consistent with that event are those with exactly k high-state voters, in which approval recommendations go only to high-state voters; hence she believes her state is high.
k < n votes: Certain approval under IGP

Proposition. Under independent general persuasion, Sender's payoff is one.

This strengthens the previous result by showing that Sender achieves certain approval even when constrained to independent general persuasion. The voters impose no check on Sender when Sender is allowed to condition on the entire state profile.
Roadmap
1. Model
2. General persuasion
3. Individual persuasion
4. Extensions:
▶ Public and sequential persuasion
▶ Non-unanimous rule