Local non-Bayesian social learning with stubborn agents



1. Local non-Bayesian social learning with stubborn agents
   Daniel Vial, Vijay Subramanian
   ECE Department, University of Michigan
   Outline: Introduction · Model · Learning outcome · Adversarial setting · Related work

2. Motivation
   Social learning in the presence of malicious agents.
   Most prominent example: fake news on social networks [Shearer, Gottfried 2017] [Shearer 2018] [Allcott, Gentzkow 2017].

3. Overview
   Salient features:
   1. Simultaneous consumption/discussion of news
   2. Legitimate news partially reveals the "truth"
   3. Fake news more likely in "echo chambers"
   We analyze a model incorporating these features:
   1. Agents receive signals/share beliefs about the true state θ
   2. Regular agents: signals = noisy observations of θ
   3. Stubborn agents: signals uncorrelated with θ; they ignore others' beliefs
   Main questions:
   - Do stubborn agents prevent regular agents from learning θ?
   - How can stubborn agents maximize their influence?

4. Learning model (basic ingredients)
   - True state θ ∈ (0, 1); (regular) agents A; stubborn agents/bots B
   - Signals at time t: s_t(i) ~ Bernoulli(θ) for i ∈ A, s_t(i) = 0 for i ∈ B
   - Beliefs at time t: Beta(α_t(i), β_t(i)) for i ∈ A ∪ B
   - If j → i in the graph, i observes α_{t−1}(j), β_{t−1}(j) at time t; each i ∈ B has only a self-loop

5. Learning model (belief updates)
   How should agent i use its signal s_t(i) together with the neighbor parameters {α_{t−1}(j), β_{t−1}(j) : j → i}?
   We adopt a non-Bayesian model similar to [Jadbabaie et al. 2012]: a Bayesian update using the signal, then averaging with neighbors in the graph:
   \[
   \alpha_t(i) = (1-\eta)\bigl(\alpha_{t-1}(i) + s_t(i)\bigr) + \frac{\eta}{d_{\mathrm{in}}(i)} \sum_{j \in A \cup B :\, j \to i} \alpha_{t-1}(j)
   \]
   \[
   \beta_t(i) = (1-\eta)\bigl(\beta_{t-1}(i) + 1 - s_t(i)\bigr) + \frac{\eta}{d_{\mathrm{in}}(i)} \sum_{j \in A \cup B :\, j \to i} \beta_{t-1}(j)
   \]
   Quantity of interest:
   \[
   \theta_t(i) = \mathbb{E}[\mathrm{Beta}(\alpha_t(i), \beta_t(i))] = \frac{\alpha_t(i)}{\alpha_t(i) + \beta_t(i)}
   \]
   (View this as a summary statistic of i's belief/opinion at time t.)
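A minimal simulation sketch of one update round, assuming in_neighbors[i] lists the j with an edge j → i (bots carry only a self-loop) and that bots report the constant signal 0; all names are illustrative, not from the authors' code.

```python
import numpy as np

def update_beliefs(alpha, beta, in_neighbors, is_bot, theta, eta, rng):
    """One round of the non-Bayesian update for all nodes (agents and bots)."""
    n = len(alpha)
    # Signals: Bernoulli(theta) for regular agents, constant 0 for bots.
    s = np.where(is_bot, 0, rng.binomial(1, theta, size=n))
    new_alpha, new_beta = np.empty(n), np.empty(n)
    for i in range(n):
        nbrs = in_neighbors[i]
        # (eta / d_in(i)) * sum over in-neighbors = eta * mean over in-neighbors
        avg_a = sum(alpha[j] for j in nbrs) / len(nbrs)
        avg_b = sum(beta[j] for j in nbrs) / len(nbrs)
        new_alpha[i] = (1 - eta) * (alpha[i] + s[i]) + eta * avg_a
        new_beta[i] = (1 - eta) * (beta[i] + 1 - s[i]) + eta * avg_b
    return new_alpha, new_beta

# Summary statistic: theta_t = new_alpha / (new_alpha + new_beta)
```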

6. Learning horizon
   As the learning horizon (i.e., the number of belief updates) grows:
   - agents receive more unbiased observations,
   - the influence of bots spreads.
   The learning horizon thus plays an important but non-obvious role. Since a finite horizon on a fixed graph is difficult to analyze, we will consider:
   - a sequence {G_n}_{n∈N} of random graphs, where G_n has n agents,
   - a horizon T_n ∈ N for G_n (finite for each finite n).

7. Graph model
   1. Realize degrees {d_out(i), d_in^A(i), d_in^B(i)}_{i=1}^n satisfying
      \[
      d_{\mathrm{out}}(i) \in \mathbb{N}, \quad d^A_{\mathrm{in}}(i) \in \mathbb{N}, \quad d^B_{\mathrm{in}}(i) \in \mathbb{Z}_+, \quad \sum_{i=1}^n d_{\mathrm{out}}(i) = \sum_{i=1}^n d^A_{\mathrm{in}}(i) \ \text{a.s.}
      \]
   2. From {d_out(i), d_in^A(i)}_{i=1}^n, construct a sub-graph on the nodes A = {1, …, n} via the directed configuration model [Chen, Olvera-Cravioto 2013]
   3. Connect d_in^B(i) bots (each with only a self-loop) to each i ∈ A
   Here the bot connections {d_in^B(i)}_{i=1}^n are given; later, we consider optimal connections. (A construction sketch follows below.)
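A sketch of steps 1-3, assuming the standard stub-matching implementation of the directed configuration model (multi-edges and self-loops may occur); variable names are illustrative.

```python
import numpy as np

def build_graph(d_out, d_in_A, d_in_B, rng):
    """Directed configuration model among agents, then bot attachment."""
    n = len(d_out)
    assert sum(d_out) == sum(d_in_A)  # stub counts must match
    # Step 2: pair out-stubs with in-stubs uniformly at random.
    out_stubs = np.repeat(np.arange(n), d_out)
    in_stubs = np.repeat(np.arange(n), d_in_A)
    rng.shuffle(out_stubs)
    in_neighbors = [[] for _ in range(n)]
    for j, i in zip(out_stubs, in_stubs):
        in_neighbors[i].append(j)          # edge j -> i
    # Step 3: attach d_in_B[i] bots (self-loop only) to each agent i.
    bot_id = n
    for i in range(n):
        for _ in range(d_in_B[i]):
            in_neighbors[i].append(bot_id)
            in_neighbors.append([bot_id])  # bot observes only itself
            bot_id += 1
    return in_neighbors
```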

8. Assumptions
   Key random variable: the "density" of (regular) agents, measured as
   \[
   \tilde{p}_n = \sum_{i=1}^n \underbrace{\frac{d^A_{\mathrm{in}}(i)}{d^A_{\mathrm{in}}(i) + d^B_{\mathrm{in}}(i)}}_{\text{fraction of in-neighbors trying to learn}} \times \underbrace{\frac{d_{\mathrm{out}}(i)}{\sum_{j=1}^n d_{\mathrm{out}}(j)}}_{\text{sample w.r.t. out-degree distribution}}
   \]
   Assumption 1 (for belief convergence):
   - lim_{n→∞} P(|p̃_n − p_n| > δ_n) = 0 for some {p_n}_{n∈N}, {δ_n}_{n∈N} ⊂ (0, 1) with lim_{n→∞} δ_n = 0
   - lim_{n→∞} T_n = ∞
   Assumption 2 (for branching process approximation):
   - sparse degrees (finite mean/variance) with high probability
   - T_n = O(log n) ⇒ guarantees θ_{T_n}(i) depends on o(n) other agents ("local" learning)
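A direct transcription of the p̃_n formula above into code (the function name is ours), usable with the degree sequences from the graph model:

```python
import numpy as np

def p_tilde(d_out, d_in_A, d_in_B):
    """Density of regular agents, sampled w.r.t. the out-degree distribution."""
    d_out = np.asarray(d_out, dtype=float)
    frac_learning = np.asarray(d_in_A) / (np.asarray(d_in_A) + np.asarray(d_in_B))
    weights = d_out / d_out.sum()  # out-degree distribution
    return float(frac_learning @ weights)
```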

9. Main result
   Theorem. Given the assumptions, we have for i* ~ Uniform({1, …, n}):
   \[
   \theta_{T_n}(i^*) \xrightarrow[n\to\infty]{P}
   \begin{cases}
   \theta, & T_n(1-p_n) \xrightarrow[n\to\infty]{} 0 \\
   \theta(1-e^{-K\eta})/(K\eta), & T_n(1-p_n) \xrightarrow[n\to\infty]{} K \in (0,\infty) \\
   0, & T_n(1-p_n) \xrightarrow[n\to\infty]{} \infty
   \end{cases}
   \]
   [Figure: illustration, assuming T_n and p_n are related as T_n ∝ (1 − p_n)^{−C}.]
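A small helper (ours, not the authors') evaluating the theorem's three regimes; K encodes lim T_n(1 − p_n), with np.inf for the divergent case:

```python
import numpy as np

def limiting_belief(theta, eta, K):
    """Limiting belief of a uniformly random agent, per the main theorem."""
    if K == 0:
        return theta                     # agents learn theta
    if np.isinf(K):
        return 0.0                       # bots dominate; agents "forget"
    return theta * (1 - np.exp(-K * eta)) / (K * eta)

# e.g. limiting_belief(0.7, 0.5, 2.0) ≈ 0.44 -- between learning and forgetting
```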

10. Remarks on main result
    Again assuming T_n and p_n are related as T_n ∝ (1 − p_n)^{−C}:
    1. A phase transition occurs (a small change near C ≈ 1 ⇒ a big change in belief)
    2. For fixed p_n, agents initially (at small T_n) learn, then later (at large T_n) forget!
    3. For fixed T_n ∝ (1 − p_n)^{−1}, bots experience "diminishing returns"
    4. When T_n(1 − p_n) → K ∈ (0, ∞), the limiting belief is θ(1 − e^{−Kη})/(Kη):
       - as η → 0, agents ignore the network and the belief → θ,
       - as η → 1, the belief → θ(1 − e^{−K})/K (not → 0, a "discontinuity").
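A quick check of the η → 0 claim in remark 4 (a standard Taylor expansion, ours, not from the slides):

```latex
\lim_{\eta \to 0} \theta \, \frac{1 - e^{-K\eta}}{K\eta}
  = \theta \, \lim_{\eta \to 0} \frac{K\eta - \tfrac{1}{2}(K\eta)^2 + O(\eta^3)}{K\eta}
  = \theta .
```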

11. Special case
    If p_n → p < 1 (i.e., the bots are non-vanishing), a stronger result holds:
    Theorem. Suppose p_n → p ∈ (0, 1), so that θ_{T_n}(i*) → 0 in probability. Then, under slightly stronger assumptions, and for any ε > 0,
    \[
    |\{ i \in A : \theta_{T_n}(i) > \varepsilon \}| = o(n) \quad \text{with high probability as } n \to \infty .
    \]
    "Slightly stronger assumptions":
    - T_n = Ω(log n) (instead of just T_n → ∞)
    - minimum rates of convergence for the "with high probability" statements

12. Key ideas of proof (1/2)
    Recall the parameter updates:
    \[
    \alpha_t(i) = (1-\eta)\bigl(\alpha_{t-1}(i) + s_t(i)\bigr) + \frac{\eta}{d_{\mathrm{in}}(i)} \sum_{j \in A \cup B :\, j \to i} \alpha_{t-1}(j) \tag{1}
    \]
    \[
    \beta_t(i) = (1-\eta)\bigl(\beta_{t-1}(i) + 1 - s_t(i)\bigr) + \frac{\eta}{d_{\mathrm{in}}(i)} \sum_{j \in A \cup B :\, j \to i} \beta_{t-1}(j) \tag{2}
    \]
    Assume α_0(j) = β_0(j) = o(T_n) for all j, and define:
    - P = column-normalized adjacency matrix
    - e_i = unit vector in the i-th direction
    Then iterating (1)-(2) yields
    \[
    \theta_{T_n}(i) = \frac{1}{T_n} \sum_{\tau=0}^{T_n-1} s_{T_n-\tau} \bigl(\eta P + (1-\eta) I\bigr)^{\tau} e_i + o(1)
    \]
    Interpretation: take a Uniform({1, …, T_n})-length lazy random walk from i, then sample the signal of the node reached.
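A Monte Carlo sketch of this interpretation (ours): walk backward along edges from i for a uniformly drawn number of lazy steps, then sample the reached node's signal. Bots are absorbing since their only in-neighbor is themselves.

```python
import numpy as np

def lrw_belief_estimate(i, in_neighbors, is_bot, theta, eta, T_n, rng,
                        n_samples=10_000):
    """Estimate theta_{T_n}(i) via the lazy-random-walk representation."""
    total = 0.0
    for _ in range(n_samples):
        v = i
        for _ in range(rng.integers(1, T_n + 1)):  # uniform walk length
            if rng.random() < eta:                 # lazy step w.p. eta
                v = rng.choice(in_neighbors[v])    # move to a uniform in-neighbor
        # Signal of the reached node: Bernoulli(theta) at agents, 0 at bots.
        total += 0.0 if is_bot[v] else rng.binomial(1, theta)
    return total / n_samples
```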

13. Key ideas of proof (2/2)
    - Previous slide: interpret θ_{T_n}(i) in terms of a lazy random walk (LRW)
    - Bots are absorbing states of this LRW (owing to their self-loops)
    - To analyze beliefs, analyze absorption probabilities
    - The LRW and the breadth-first-search graph construction can be carried out simultaneously
    - By T_n = O(log n) and sparsity, the LRW explores a tree-like sub-graph before the horizon
    - This reduces a random process on a random graph to a much simpler process (simultaneous construction of the tree / computation of absorption probabilities)

14. Formulation
    Previously, {d_out(i), d_in^A(i), d_in^B(i)}_{i=1}^n was assumed given. Now suppose {d_out(i), d_in^A(i)}_{i=1}^n is given and an adversary chooses {d_in^B(i)}_{i=1}^n.
    By the main result, the adversary (with budget b ∈ N) should solve
    \[
    \min_{\{d^B_{\mathrm{in}}(i)\}_{i=1}^n \in \mathbb{Z}_+^n} \; \sum_{i=1}^n \frac{d^A_{\mathrm{in}}(i)}{d^A_{\mathrm{in}}(i) + d^B_{\mathrm{in}}(i)} \cdot \frac{d_{\mathrm{out}}(i)}{\sum_{j=1}^n d_{\mathrm{out}}(j)} \quad \text{s.t.} \quad \sum_{i=1}^n d^B_{\mathrm{in}}(i) \le b
    \]
    (the objective is the key random variable p̃_n shown previously).
    This is an integer program (IP), so we devise an approximation scheme.

15. Approximation scheme
    Independently attach each bot to the i-th agent with probability proportional to
    \[
    \max\Bigl\{ d^A_{\mathrm{in}}(i) \Bigl( \sqrt{\tfrac{\lambda^* d_{\mathrm{out}}(i)}{d^A_{\mathrm{in}}(i)}} - 1 \Bigr),\, 0 \Bigr\} \tag{3}
    \]
    where (3) solves the LP relaxation of the IP and λ* > 0 is efficiently computable.
    Intuition: bots want to connect to the i-th agent only if d_out(i)/d_in^A(i) ≥ 1/λ*, i.e., only if i is influential (d_out(i) large) and susceptible to influence (d_in^A(i) small).
    Theorem. For any δ > 0, the scheme gives a (2 + δ)-approximation with high probability, i.e.,
    \[
    \lim_{n\to\infty} \mathbb{P}\left( \frac{\text{objective for approximation scheme}}{\text{objective for optimal scheme}} > 2 + \delta \right) = 0 .
    \]
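A sketch of the scheme (ours), assuming the form of (3) reconstructed above: bisect on λ* so the LP solution exhausts the budget b, then normalize into attachment probabilities. The bisection bounds and function names are illustrative.

```python
import numpy as np

def attachment_probabilities(d_out, d_in_A, b, iters=100):
    """Per-agent attachment probabilities from the LP relaxation."""
    d_out, d_A = np.asarray(d_out, float), np.asarray(d_in_A, float)

    def spend(lam):
        # Total budget used by the LP solution x_i = max(sqrt(lam*d_out*d_A) - d_A, 0),
        # which equals d_A * (sqrt(lam * d_out / d_A) - 1) when positive.
        return np.maximum(np.sqrt(lam * d_out * d_A) - d_A, 0.0).sum()

    lo, hi = 0.0, 1.0
    while spend(hi) < b:                 # grow upper bound until budget is reachable
        hi *= 2.0
    for _ in range(iters):               # bisect on the budget constraint
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if spend(mid) < b else (lo, mid)
    x = np.maximum(np.sqrt(hi * d_out * d_A) - d_A, 0.0)
    return x / x.sum()

# Usage: probs = attachment_probabilities(d_out, d_in_A, b)
#        targets = rng.choice(len(d_out), size=b, p=probs)
```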

16. Empirical performance
    For real social networks, our approximation scheme outperforms heuristics, even those that use network structure. (Networks from SNAP Datasets: Stanford Large Network Dataset Collection.)
    Ultimately, this yields new insights into the vulnerabilities of social networks.

17. Most similar models in the literature
    [Azzimonti, Fernandes 2018]
    - (Almost) the same belief update (minor differences in bot behavior)
    - Only empirical results (but allows for a richer model, e.g., a time-varying graph)
    [Jadbabaie et al. 2012]
    - Communicate distributions, not parameters, i.e.,
      \[
      \mu_t(i) = \eta_{ii}\, \mathrm{BU}(\mu_{t-1}(i), s_t(i)) + \sum_{j \ne i} \eta_{ji}\, \mu_{t-1}(j)
      \]
      where the μ terms are distributions, Σ_j η_ji = 1, and BU = "Bayesian update"
    - Richer belief update, but stronger assumptions:
      1. Fixed, strongly-connected graph
      2. Infinite horizon
      3. No stubborn agents
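A minimal sketch (ours) of this distribution-level update on a discrete state space, to contrast with the parameter-sharing update used in our model; the weight matrix and likelihood function are illustrative assumptions.

```python
import numpy as np

def jadbabaie_step(mu, weights, likelihood, signals):
    """One [Jadbabaie et al. 2012]-style round over m candidate states.

    mu:        (n, m) array; row i is agent i's distribution over states
    weights:   (n, n) array; weights[j, i] = eta_ji, columns summing to 1
    likelihood: signal -> (m,) array of P(signal | state) per state
    """
    n, _ = mu.shape
    new_mu = np.empty_like(mu)
    for i in range(n):
        bayes = mu[i] * likelihood(signals[i])  # Bayesian update BU
        bayes /= bayes.sum()
        new_mu[i] = weights[i, i] * bayes + sum(
            weights[j, i] * mu[j] for j in range(n) if j != i
        )
    return new_mu
```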
