Naive Bayesian Learning in Social Networks
Jerry Anunrojwong (Harvard), joint with Nat Sothanaphan (MIT), EC'18
Social Learning
- The state of the world is unknown to the agents.
- Rule: agents can only talk to their neighbors.
- Prior works: Bayesian learning; naive learning.
Bayesian Learning (Rational)
- Beliefs are distributions; agents are perfectly rational and Bayesian.
- Pro: weighs confidence in beliefs.
- Cons: requires very sophisticated Bayesian reasoning, and the network structure must be common knowledge. ("I need to subtract other people's beliefs from yours. But how? I need superhuman reasoning and knowledge.")
Naive Learning (DeGroot)
- Beliefs are scalars; update beliefs by taking a (weighted) average of neighbors' beliefs. ("I don't need to know beyond my neighbors!")
- Pros: simple and intuitive belief update rule; only need to know neighbors.
- Con: no notion of confidence in beliefs.
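The DeGroot rule above can be sketched in a few lines. This is a minimal illustration, not code from the paper; the particular row-stochastic weight matrix used below (each agent averaging uniformly over itself and its neighbors) is an assumed example.

```python
import numpy as np

def degroot_consensus(W, beliefs, steps=200):
    """Iterate the DeGroot update b <- W @ b, where W is row-stochastic:
    each agent repeatedly takes a weighted average of its neighbors'
    scalar beliefs. With a connected, aperiodic W, all entries converge
    to a common consensus value."""
    b = np.asarray(beliefs, dtype=float)
    for _ in range(steps):
        b = W @ b
    return b

# Path network 1 - 2 - 3; each agent averages itself and its neighbors.
W = np.array([[1/2, 1/2, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/2, 1/2]])
consensus = degroot_consensus(W, [0.0, 1.0, 2.0])
```

The consensus is the average of initial beliefs weighted by each agent's influence (the left principal eigenvector of W), which is how eigenvector-centrality-like weights arise in DeGroot learning.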
Question: how can we combine the pros of naive and Bayesian learning?

Naive Bayesian Learning
- Beliefs are distributions; agents use Bayes' rule → weighs confidence in beliefs (Bayesian).
- Agents treat neighbors as independent → simple and intuitive belief update rule (naive).
- Belief update rule depends only on neighbors → only need to know neighbors (naive).
Naive Bayesian Learning (Our paper)
- Beliefs are distributions. Update beliefs by Bayes' rule, naively assuming that neighbors are independent information sources.
- My mental model: the unknown state of the world generates independent signals (my signal and my neighbors' signals); starting from a common prior, each signal yields a posterior (my belief and my neighbors' beliefs).
- My update rule, each time step:
  1. I have access to my and my neighbors' beliefs.
  2. I infer my and my neighbors' signals, assuming their beliefs arise from my mental model.
  3. I update my belief from the common prior by conditioning on my and my neighbors' inferred signals.
Naive Bayesian Update Rule: Example
- (Figure: agents 1, 2, 3 on a path; beliefs at t=0 → inferred signals → beliefs at t=1, then beliefs at t=1 → inferred signals → beliefs at t=2.)
- Copies of signals "flow" through the network; the mental model assumes beliefs are "fresh".
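The three-step update rule above can be sketched for a finite state space. This is my own illustrative implementation under stated assumptions, not the paper's code: I take inferring a signal from a belief to mean extracting the likelihood ratio belief/prior, so each agent's new belief is the common prior times the product of own and neighbors' inferred likelihoods (treated as independent).

```python
import numpy as np

def naive_bayesian_step(beliefs, prior, adj):
    """One synchronous naive Bayesian update.

    beliefs: (n, k) array; row i is agent i's belief over k states
    prior:   (k,) common prior over the states
    adj:     (n, n) 0/1 symmetric adjacency matrix, no self-loops
    """
    # Infer signals: each belief is explained as prior * likelihood,
    # so the inferred signal likelihood is proportional to belief / prior.
    lik = beliefs / prior
    new = np.empty_like(beliefs)
    for i in range(adj.shape[0]):
        nbrs = np.flatnonzero(adj[i])
        # Condition the common prior on own + neighbors' inferred
        # signals, naively treated as independent, then normalize.
        post = prior * lik[i] * np.prod(lik[nbrs], axis=0)
        new[i] = post / post.sum()
    return new

# Path 1 - 2 - 3, two states, uniform prior; repeated updates drive
# every belief toward a point mass on one state.
prior = np.array([0.5, 0.5])
beliefs = np.array([[0.8, 0.2],
                    [0.5, 0.5],
                    [0.3, 0.7]])
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])
for _ in range(60):
    beliefs = naive_bayesian_step(beliefs, prior, adj)
```

Because copies of signals keep flowing and are re-counted as independent, log-likelihood differences compound geometrically, which is why beliefs concentrate on a single state.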
Main Result (Informal)
We analytically characterize the consensus, and the formula for the consensus says:
  centrally located + confident beliefs ⇒ influence on consensus
- "Centrally located" ← naive learning: eigenvector centrality, the principal eigenvector of the adjacency matrix; an agent is central if it connects to other central agents. Eigenvector centralities also appear in DeGroot learning, but from different dynamics.
- "Confident beliefs" ← Bayesian learning.
Main Result (Formal)
- Definition: the weighted log-likelihood function ℓ(θ), for each state θ, is a centrality-weighted average of the "confidence of beliefs at θ": how much agent i's initial belief believes in θ compared to the common-prior baseline.
- Theorem: every agent's belief converges to the point distribution at the maximizer of ℓ(θ); the consensus is the θ that maximizes ℓ(θ).
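The slide's formula image did not survive extraction. The following is a reconstruction consistent with the verbal description above (centrality-weighted average of log belief-to-prior ratios); the notation $v_i$ for agent $i$'s eigenvector centrality, $p_i$ for agent $i$'s initial belief, and $p_0$ for the common prior is assumed, not taken from the slides.

```latex
\ell(\theta) \;=\; \sum_{i=1}^{n} v_i \,\log \frac{p_i(\theta)}{p_0(\theta)},
\qquad
\theta^{*} \;=\; \arg\max_{\theta}\, \ell(\theta).
```

Here $\log\!\big(p_i(\theta)/p_0(\theta)\big)$ is the "confidence of beliefs at $\theta$" (how much agent $i$ believes in $\theta$ relative to the prior baseline), and $v_i$ supplies the centrality weighting.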
Understanding the Main Result Intuitively
- Agents take a lot of signals as independent → beliefs converge to a point.
- Initial beliefs come from independent signals → "confident beliefs" = "informative signals".
Example: Gaussian Beliefs
- Agent i's initial belief is Gaussian; interpretation: a scalar belief μ_i held with confidence τ_i.
- A scenario: at the beginning, each agent i receives an independent signal.
- Consensus: agent i's influence in the consensus is increasing in both centrality and signal precision; centrally located + informative signals ⇒ influence on consensus.
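In the Gaussian case, the consensus described above can be computed directly as a centrality- and precision-weighted average of the agents' means. This is a sketch under that reading of the slide; the exact closed form and normalization are assumptions, not copied from the paper.

```python
import numpy as np

def gaussian_consensus(adj, mu, tau):
    """Consensus mean for Gaussian initial beliefs N(mu_i, 1/tau_i):
    a weighted average of the means, where agent i's influence is
    proportional to (eigenvector centrality v_i) * (precision tau_i)."""
    # Eigenvector centrality: principal eigenvector of the (symmetric)
    # adjacency matrix, taken with nonnegative entries.
    eigvals, eigvecs = np.linalg.eigh(adj)
    v = np.abs(eigvecs[:, np.argmax(eigvals)])
    weights = v * tau                      # influence_i ∝ v_i * tau_i
    return weights @ mu / weights.sum()

# Path 1 - 2 - 3: with equal precisions, centrality alone sets the weights.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
mu = np.array([0.0, 1.0, 2.0])
equal = gaussian_consensus(adj, mu, np.ones(3))
# Raising agent 1's precision pulls the consensus toward mu_1 = 0.
skewed = gaussian_consensus(adj, mu, np.array([4.0, 1.0, 1.0]))
```

The comparison between `equal` and `skewed` illustrates the slide's point: a more informative (higher-precision) signal gives an agent more influence, holding network position fixed.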
Policy Implication I: how to seed opinion leaders
- Learning quality = precision of the consensus, viewed as a random variable.
- Learning quality is high unless an agent is central but poorly informed. ("I am a centrally located leader, and I am confident enough to dump my uninformed belief on you all.")
- If social planners want to seed opinion leaders, they must make those leaders well informed. ("ELSE you get this: isolated minions!")
Policy Implication II: how to solve clustered seeding
- Information loss from clustered seeding occurs in their model (BBCM) but not in ours.
- Key point: their model has no notion of "confidence in beliefs".
- (Figure: our model yields weights ⅓, ⅓, ⅓ — optimal information aggregation; BBCM yields ~0, ~½, ~½ — the middle agent is "blocked".)
Conclusion
- We propose a model that combines the pros of naive and Bayesian learning.
- Consensus = maximizer of the weighted log-likelihood function.
- Centrally located + confident beliefs ⇒ influence on consensus.
- Two policy implications: how to seed opinion leaders; how to solve clustered seeding.
Gaussian Beliefs: Quality of Learning
- The consensus is a random variable; its precision Q captures learning quality.
- Comparative statics: learning quality is good unless v_k is large and τ_k is small (a central but poorly informed agent).
- Policy implication: if social planners want to seed opinion leaders, they must make those leaders well informed.