Latent variable models for social networks
• Latent variable models allow for heterogeneity of nodes in social networks
• Each node (actor) has a latent variable z_i
• The probability of forming an edge between two nodes is independent of all other node pairs given the values of the latent variables: P(Y | Z) = ∏_{i≠j} P(y_ij | z_i, z_j)
• Ideally, the latent variables should provide an interpretable representation
(Continuous) latent space model
• Motivation: homophily or assortative mixing
• Probability of an edge between two nodes increases as the characteristics of the nodes become more similar
• Represent nodes in an unobserved (latent) space of characteristics, or "social space"
• Small distance between two nodes in latent space → high probability of an edge between them
• Induces transitivity: observing edges (i, j) and (j, k) suggests that i and k are not too far apart in latent space → more likely to also have an edge
(Continuous) latent space model
• (Continuous) latent space model (LSM) proposed by Hoff et al. (2002)
• Each node has a latent position z_i ∈ ℝ^d
• Probabilities of forming edges depend on distances between latent positions
• Define pairwise affinities α_ij = α − ‖z_i − z_j‖₂
Latent space model: generative process
1. Sample node positions in latent space
2. Compute affinities between all pairs of nodes
3. Sample edges between all pairs of nodes
Figure due to P. D. Hoff, Modeling homophily and stochastic equivalence in symmetric relational data, NIPS 2008
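To make the three steps concrete, below is a minimal NumPy sketch of this generative process. It assumes a 2-D latent space, a standard normal prior on positions, a single intercept α, and a logistic link; these specific choices (and the function name) are illustrative rather than dictated by the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_lsm_network(n, d=2, alpha=1.0):
    """Sketch of the latent space model generative process (Hoff et al., 2002)."""
    # 1. Sample node positions in latent space (standard normal prior, illustrative)
    Z = rng.normal(size=(n, d))
    # 2. Compute affinities: alpha minus pairwise Euclidean distances
    dists = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    affinity = alpha - dists
    # 3. Sample edges independently for each pair with a logistic link
    prob = 1.0 / (1.0 + np.exp(-affinity))
    Y = rng.random((n, n)) < prob
    Y = np.triu(Y, k=1)        # keep upper triangle: undirected graph, no self-loops
    Y = Y | Y.T
    return Z, Y.astype(int)

Z, Y = sample_lsm_network(n=20)
```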
Advantages and disadvantages of the latent space model
• Advantages
• Visual and interpretable spatial representation of the network
• Models homophily (assortative mixing) well via transitivity
• Disadvantages
• A 2-D latent space representation often may not offer enough degrees of freedom
• Cannot model disassortative mixing (people preferring to associate with people who have different characteristics)
Stochastic block model (SBM)
• First formalized by Holland et al. (1983)
• Also known as the multi-class Erdős–Rényi model
• Each node has a categorical latent variable z_i ∈ {1, …, K} denoting its class or group
• Probabilities of forming edges depend on the class memberships of the nodes (K × K matrix W)
• Groups often interpreted as functional roles in social networks
Stochastic equivalence and block models
• Stochastic equivalence: generalization of structural equivalence
• Group members have identical probabilities of forming edges to members of other groups
• Can model both assortative and disassortative mixing
Figure due to P. D. Hoff, Modeling homophily and stochastic equivalence in symmetric relational data, NIPS 2008
Stochastic equivalence vs community detection
Original graph vs. blockmodel: nodes can be stochastically equivalent without being densely connected
Figure due to Goldenberg et al. (2009), A Survey of Statistical Network Models, Foundations and Trends
Stochastic blockmodel: latent representation
Each node (Alice, Bob, Claire) is assigned to exactly one latent group (UCSD, UCI, UCLA): the membership matrix has a single 1 in each row
Reordering the matrix to show the inferred block structure Kemp, Charles, et al. "Learning systems of concepts with an infinite relational model." AAAI. Vol. 3. 2006.
Model structure
Latent groups Z; interaction matrix W (probability of an edge from block k to block k′)
Kemp, Charles, et al. "Learning systems of concepts with an infinite relational model." AAAI. Vol. 3. 2006.
Stochastic block model generative process
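As a minimal sketch of this generative process in NumPy: sample a class for each node, then sample each edge with the probability given by the block matrix. The class proportions `pi`, the matrix `W`, and the function name are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sbm(n, pi, W):
    """Sketch of the SBM generative process.
    pi: length-K vector of class proportions; W: K x K edge probability matrix."""
    # 1. Sample a latent class z_i for each node
    z = rng.choice(len(pi), size=n, p=pi)
    # 2. Sample each (directed) edge with probability W[z_i, z_j]
    P = W[z[:, None], z[None, :]]
    Y = (rng.random((n, n)) < P).astype(int)
    np.fill_diagonal(Y, 0)     # no self-loops
    return z, Y

# Example: two assortative blocks
z, Y = sample_sbm(n=30, pi=[0.5, 0.5], W=np.array([[0.3, 0.02], [0.02, 0.3]]))
```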
Stochastic block model: latent representation
Each node (Alice, Bob, Claire) is assigned to only one latent group (Running, Dancing, Fishing): the membership matrix has a single 1 per row
Not always an appropriate assumption
Mixed membership stochastic blockmodel (MMSB)
Nodes represented by distributions over latent groups (roles), e.g. membership vectors over (Running, Dancing, Fishing): Alice (0.4, 0.4, 0.2), Bob (0.5, 0.5), Claire (0.1, 0.9)
Airoldi et al. (2008)
Mixed membership stochastic blockmodel (MMSB)
Airoldi et al. (2008)
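For concreteness, here is a rough NumPy sketch of the MMSB generative process of Airoldi et al. (2008): per-node membership vectors drawn from a Dirichlet, and for each ordered pair a sender role and a receiver role drawn from those vectors, which index into the block matrix. The parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mmsb(n, alpha, W):
    """Sketch of the MMSB generative process (Airoldi et al., 2008).
    alpha: length-K Dirichlet parameter; W: K x K block interaction matrix."""
    K = len(alpha)
    # Each node gets a mixed membership vector over the K roles
    pi = rng.dirichlet(alpha, size=n)          # n x K
    Y = np.zeros((n, n), dtype=int)
    for p in range(n):
        for q in range(n):
            if p == q:
                continue
            # Sender and receiver each pick a role for this particular interaction
            z_pq = rng.choice(K, p=pi[p])
            z_qp = rng.choice(K, p=pi[q])
            Y[p, q] = rng.random() < W[z_pq, z_qp]
    return pi, Y

pi, Y = sample_mmsb(n=20, alpha=[0.5, 0.5, 0.5], W=np.eye(3) * 0.3 + 0.02)
```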
Latent feature models
Mixed membership implies a kind of "conservation of (probability) mass" constraint: if you like cycling more, you must like running less, to sum to one
Miller, Griffiths, Jordan (2009)
Latent feature models
Nodes represented by a binary vector of latent features: Z is a binary matrix with rows (Alice, Bob, Claire) and columns (Cycling, Fishing, Running, Tango, Salsa, Waltz)
Miller, Griffiths, Jordan (2009)
Latent feature models
• Latent Feature Relational Model (LFRM) (Miller, Griffiths, Jordan, 2009) likelihood model:
• "If I have feature k, and you have feature l, add W_kl to the log-odds of the probability that we interact"
• Can include terms for network density, covariates, popularity, …, as in the p2 model
(A small code sketch of this likelihood follows below.)
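The sketch below expresses that statement directly, assuming a logistic link on z_i W z_jᵀ plus a bias term standing in for the density term; the toy Z and W values are made up for illustration.

```python
import numpy as np

def lfrm_edge_prob(Z, W, bias=0.0):
    """Sketch of the LFRM likelihood: each pair of active features (k, l)
    contributes W[k, l] to the log-odds of an edge (Miller et al., 2009)."""
    logits = Z @ W @ Z.T + bias          # bias plays the role of a density term
    return 1.0 / (1.0 + np.exp(-logits))  # diagonal (self-edge) entries can be ignored

# Toy example: 3 nodes with 2 binary features each (rows of Z), illustrative W
Z = np.array([[1, 0],
              [1, 1],
              [0, 1]])
W = np.array([[ 2.0, -1.0],
              [-1.0,  1.5]])
P = lfrm_edge_prob(Z, W, bias=-1.0)      # 3 x 3 matrix of edge probabilities
```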
Outline
• Mathematical representations of social networks and generative models
• Introduction to the generative approach
• Connections to sociological principles
• Fitting generative social network models to data
• Example application scenarios
• Model selection and evaluation
• Recent developments in generative social network models
• Dynamic social network models
Application 1: Facebook wall posts
• Network of wall posts on Facebook collected by Viswanath et al. (2009)
• Nodes: Facebook users
• Edges: directed edge from i to j if i posts on j's Facebook wall
• What model should we use?
• (Continuous) latent space and latent feature models do not handle directed graphs in a straightforward manner
• Wall posts might not be transitive, unlike friendships
• Stochastic block model might not be a bad choice as a starting point
Model structure
Latent groups Z; interaction matrix W (probability of an edge from block k to block k′)
Kemp, Charles, et al. "Learning systems of concepts with an infinite relational model." AAAI. Vol. 3. 2006.
Fitting the stochastic block model
• A priori block model: assume that the class (role) of each node is given by some other variable
• Only need to estimate W_kk′, the probability that a node in class k connects to a node in class k′, for all k, k′
• Likelihood: L(W) = ∏_{k,k′} W_kk′^{m_kk′} (1 − W_kk′)^{n_kk′ − m_kk′}, where m_kk′ is the number of actual edges and n_kk′ the number of possible edges in block (k, k′)
• Maximum-likelihood estimate (MLE): Ŵ_kk′ = m_kk′ / n_kk′
(A code sketch of this estimate follows below.)
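A minimal sketch of this MLE for a directed a priori block model, assuming the adjacency matrix has a zero diagonal (no self-loops); the function and variable names are illustrative.

```python
import numpy as np

def sbm_mle(Y, z, K):
    """MLE of the block probability matrix W for an a priori (directed) SBM:
    observed edges in block (k, k') divided by possible edges in block (k, k')."""
    W_hat = np.zeros((K, K))
    for k in range(K):
        for l in range(K):
            rows = (z == k)
            cols = (z == l)
            actual = Y[np.ix_(rows, cols)].sum()
            possible = rows.sum() * cols.sum()
            if k == l:                    # exclude self-loops within a block
                possible -= rows.sum()
            W_hat[k, l] = actual / max(possible, 1)   # empty blocks get 0
    return W_hat
```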
Estimating latent classes
• Latent classes (roles) are unknown in this data set
• First estimate the latent classes z, then use the MLE for W
• MLE over latent classes is intractable: ~K^n possible latent class vectors
• Spectral clustering techniques have been shown to accurately estimate latent classes
• Use singular vectors of the (possibly transformed) adjacency matrix to estimate classes
• Many variants with differing theoretical guarantees
Spectral clustering for directed SBMs
1. Compute the singular value decomposition A = U Σ Vᵀ
2. Retain only the first K columns of U and V and the first K rows and columns of Σ
3. Define the coordinate-scaled singular vector matrix Ẑ = [U Σ^(1/2), V Σ^(1/2)]
4. Run k-means clustering on the rows of Ẑ to return the estimate ẑ of the latent classes
Scales to networks with thousands of nodes! (A code sketch follows below.)
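One possible implementation of these four steps using SciPy's truncated SVD and scikit-learn's k-means; the exact scaling and post-processing differ across variants, so treat this as one reasonable instantiation rather than the definitive recipe.

```python
import numpy as np
from scipy.sparse.linalg import svds
from sklearn.cluster import KMeans

def spectral_cluster_directed(A, K):
    """Sketch of spectral clustering for a directed SBM: truncated SVD of the
    adjacency matrix, scale the singular vectors, then k-means on the
    concatenated left/right embeddings."""
    # Steps 1-2: truncated SVD keeping the top K singular values/vectors
    U, s, Vt = svds(A.astype(float), k=K)
    # Step 3: coordinate-scaled singular vectors [U Sigma^(1/2), V Sigma^(1/2)]
    scale = np.sqrt(s)
    X = np.hstack([U * scale, Vt.T * scale])
    # Step 4: k-means on the rows gives the estimated latent classes
    return KMeans(n_clusters=K, n_init=10).fit_predict(X)
```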
Demo of SBM on Facebook wall post network
Application 2: social network of bottlenose dolphin interactions
• Data collected by marine biologists observing interactions between 62 bottlenose dolphins
• Introduced to the network science community by Lusseau and Newman (2004)
• Nodes: dolphins
• Edges: undirected relations denoting frequent interactions between dolphins
• What model should we use?
• Social interactions here are in a group setting, so lots of transitivity may be expected
• Interactions associated with physical proximity
• Use latent space model to estimate latent positions
(Continuous) latent space model
• (Continuous) latent space model (LSM) proposed by Hoff et al. (2002)
• Each node has a latent position z_i ∈ ℝ^d
• Probabilities of forming edges depend on distances between latent positions
• Define pairwise affinities α_ij = α − ‖z_i − z_j‖₂
• P(y_ij = 1 | z_i, z_j) = exp(α_ij) / (1 + exp(α_ij)), independently for each pair i ≠ j
Estimation for the latent space model
• Maximum-likelihood estimation
• Log-likelihood is concave in terms of the pairwise distance matrix D, but not in the latent positions Z
• First find the MLE in terms of D, then use multi-dimensional scaling (MDS) to get an initialization for Z
• Faster approach: replace D with shortest-path distances in the graph, then use MDS (see the sketch below)
• Use non-linear optimization to find the MLE for Z
• Latent space dimension often set to 2 to allow visualization using a scatter plot
Scales to ~1000 nodes
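A sketch of the faster initialization described above, assuming SciPy's shortest-path routine and scikit-learn's MDS are available; the handling of disconnected pairs is an ad hoc choice, and the result is only a starting point for optimizing the actual likelihood.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.manifold import MDS

def lsm_initial_positions(Y, d=2):
    """Sketch: use graph shortest-path distances as a stand-in for latent
    distances, then embed them with multi-dimensional scaling (MDS)."""
    D = shortest_path(Y, method='D', unweighted=True, directed=False)
    D[np.isinf(D)] = D[np.isfinite(D)].max() + 1   # crude fix for disconnected pairs
    mds = MDS(n_components=d, dissimilarity='precomputed', random_state=0)
    Z0 = mds.fit_transform(D)
    return Z0   # starting point for non-linear optimization of the true likelihood
```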
Demo of latent space model on dolphin network
Bayesian inference
• As a Bayesian, all you have to do is write down your prior beliefs, write down your likelihood, and apply Bayes' rule
Elements of Bayesian inference
p(θ | x) = p(x | θ) p(θ) / p(x)   (posterior = likelihood × prior / marginal likelihood)
The marginal likelihood (a.k.a. model evidence) is a normalization constant that does not depend on the value of θ. It is the probability of the data under the model, marginalizing over all possible θ's.
The full posterior distribution can be very useful
The mode (MAP estimate) is unrepresentative of the distribution
MAP estimate can result in overfitting
Markov chain Monte Carlo
• Goal: approximate/summarize a distribution, e.g. the posterior, with a set of samples
• Idea: use a Markov chain to simulate the distribution and draw samples
Gibbs sampling
• Sampling from a complicated distribution, such as a Bayesian posterior, can be hard
• Often, sampling one variable at a time, given all the others, is much easier
• Graphical models: the graph structure gives us the Markov blanket
Gibbs sampling
• Update variables one at a time by drawing from their conditional distributions
• In each iteration, sweep through and update all of the variables, in any order
Gibbs sampling: step-by-step illustration (figures over several slides)
Gibbs sampling for SBM
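One way to set up such a sampler, as a rough sketch: conjugate Beta(a, b) priors on the block probabilities, a uniform prior over class labels, and alternating updates of each node's class and of W. A real implementation would add burn-in, thinning, and convergence checks, and could instead collapse W out; the names and defaults here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_sbm(Y, K, n_iters=200, a=1.0, b=1.0):
    """Sketch of a blocked Gibbs sampler for a directed SBM with Beta(a, b)
    priors on block probabilities and a uniform prior over class labels."""
    n = Y.shape[0]
    z = rng.integers(K, size=n)
    W = np.full((K, K), 0.1)
    for _ in range(n_iters):
        # Resample each node's class from its full conditional given z_{-i} and W
        for i in range(n):
            logp = np.zeros(K)
            for k in range(K):
                p_out = W[k, z]            # probabilities of i's out-edges if z_i = k
                p_in = W[z, k]             # probabilities of i's in-edges if z_i = k
                ll = (Y[i] * np.log(p_out) + (1 - Y[i]) * np.log(1 - p_out)
                      + Y[:, i] * np.log(p_in) + (1 - Y[:, i]) * np.log(1 - p_in))
                ll[i] = 0.0                # exclude the (absent) self-loop
                logp[k] = ll.sum()
            probs = np.exp(logp - logp.max())
            z[i] = rng.choice(K, p=probs / probs.sum())
        # Resample W from its Beta full conditional given the current labels
        for k in range(K):
            for l in range(K):
                rows, cols = z == k, z == l
                edges = Y[np.ix_(rows, cols)].sum()
                possible = rows.sum() * cols.sum() - (rows.sum() if k == l else 0)
                W[k, l] = rng.beta(a + edges, b + possible - edges)
    return z, W
```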
Variational inference
• Key idea:
• Approximate the distribution of interest p(z) with another distribution q(z)
• Make q(z) tractable to work with
• Solve an optimization problem to make q(z) as similar to p(z) as possible, e.g. in KL-divergence
Variational inference: fitting q to approximate p (illustration over several slides)
Reverse KL vs. forward KL
• Reverse KL, KL(q‖p): blows up if p is small and q isn't; under-estimates the support
• Forward KL, KL(p‖q): blows up if q is small and p isn't; over-estimates the support
Figures due to Kevin Murphy (2012), Machine Learning: A Probabilistic Perspective
KL-divergence as an objective function for variational inference
• Minimizing the KL is equivalent to maximizing the evidence lower bound (ELBO): E_q[log p(x, z)] ("fit the data well") + H[q] ("be flat")
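For completeness, the identity behind this equivalence: the KL divergence from q to the posterior differs from the negative ELBO only by the log evidence, which does not depend on q.

```latex
\begin{aligned}
\mathrm{KL}\big(q(z)\,\|\,p(z\mid x)\big)
  &= \mathbb{E}_{q}\!\left[\log q(z) - \log p(z\mid x)\right] \\
  &= \mathbb{E}_{q}\!\left[\log q(z) - \log p(x, z)\right] + \log p(x) \\
  &= -\underbrace{\Big(\mathbb{E}_{q}[\log p(x, z)] + \mathcal{H}[q]\Big)}_{\text{ELBO}} + \log p(x).
\end{aligned}
```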
Mean field variational inference
• We still need to compute expectations over z
• However, we have gained the option to restrict q(z) to make these expectations tractable
• The mean field approach uses a fully factorized q(z) = ∏_i q_i(z_i)
• The entropy term decomposes nicely: H[q] = Σ_i H[q_i]
Mean field algorithm
• Until converged:
• For each factor i, select variational parameters such that q_i(z_i) ∝ exp( E_{q_{-i}}[ log p(x, z) ] )
• Each update monotonically improves the ELBO, so the algorithm must converge
Deriving mean field updates for your model
• Write down the mean field equation explicitly: log q_i(z_i) = E_{q_{-i}}[ log p(x, z) ] + const
• Simplify and apply the expectation
• Manipulate it until you can recognize it as the log-pdf of a known distribution (hopefully)
• Reinstate the normalizing constant
Mean field vs. Gibbs sampling
• Both mean field and Gibbs sampling iteratively update one variable given the rest
• Mean field stores an entire distribution for each variable, while Gibbs sampling draws from one
Pros and cons vs. Gibbs sampling
• Pros:
• Deterministic algorithm; typically converges faster
• Stores an analytic representation of the distribution, not just samples
• Non-approximate parallel algorithms
• Stochastic algorithms can scale to very large data sets
• No issues with checking convergence
• Cons:
• Will never converge to the true distribution, unlike Gibbs sampling
• Dense representation can mean more communication for parallel algorithms
• Harder to derive update equations
Variational inference algorithm for MMSB (variational EM)
• Compute maximum-likelihood estimates for the interaction parameters W_kk′
• Assume a fully factorized variational distribution for the mixed membership vectors and cluster assignments
• Until converged:
• For each node: compute the variational discrete distributions over its latent z_{p→q} and z_{q→p} assignments, and the variational Dirichlet distribution over its mixed membership vector
• Maximum-likelihood update for W
(A rough sketch of this update loop follows below.)
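A rough, unoptimized sketch of that update loop, assuming a symmetric Dirichlet(α) prior and standard MMSB mean-field updates; the schedule, initialization, and W update here are illustrative and differ in details from Airoldi et al. (2008).

```python
import numpy as np
from scipy.special import digamma

def mmsb_variational_em(Y, K, alpha=0.1, n_iters=50, rng=None):
    """Rough sketch of variational EM for the MMSB: q factorizes into a
    Dirichlet(gamma_p) per membership vector and discrete phi's for the
    per-pair sender/receiver role indicators."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = Y.shape[0]
    gamma = np.full((n, K), alpha) + rng.random((n, K))   # Dirichlet parameters
    phi_s = np.full((n, n, K), 1.0 / K)                    # q(z_{p->q}), sender roles
    phi_r = np.full((n, n, K), 1.0 / K)                    # q(z_{q->p}), receiver roles
    idx = np.arange(n)
    phi_s[idx, idx] = 0.0                                  # no self-pairs
    phi_r[idx, idx] = 0.0
    W = np.full((K, K), 0.1)
    for _ in range(n_iters):
        Elog = digamma(gamma) - digamma(gamma.sum(1, keepdims=True))
        for p in range(n):
            for q in range(n):
                if p == q:
                    continue
                # log-likelihood of edge y_pq for each (sender role k, receiver role l)
                logB = Y[p, q] * np.log(W) + (1 - Y[p, q]) * np.log(1 - W)   # K x K
                # update sender's role distribution given the receiver's, and vice versa
                s = Elog[p] + logB @ phi_r[p, q]
                phi_s[p, q] = np.exp(s - s.max()); phi_s[p, q] /= phi_s[p, q].sum()
                r = Elog[q] + logB.T @ phi_s[p, q]
                phi_r[p, q] = np.exp(r - r.max()); phi_r[p, q] /= phi_r[p, q].sum()
        # variational Dirichlet update for the mixed membership vectors
        gamma = alpha + phi_s.sum(axis=1) + phi_r.sum(axis=0)
        # maximum-likelihood update for the block interaction matrix W
        num = np.einsum('pqk,pql,pq->kl', phi_s, phi_r, Y)
        den = np.einsum('pqk,pql->kl', phi_s, phi_r)
        W = np.clip(num / np.maximum(den, 1e-10), 1e-6, 1 - 1e-6)
    return gamma, W
```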
Application of MMSB to Sampson's Monastery
• Sampson (1968) studied friendship relationships between novice monks
• Identified several factions
• Blockmodel appropriate?
• Conflicts occurred: two monks expelled, others left
Airoldi, E. M., Blei, D. M., Fienberg, S. E., & Xing, E. P. (2009). Mixed membership stochastic blockmodels. In Advances in Neural Information Processing Systems (pp. 33-40).
Application of MMSB to Sampson's Monastery
Estimated blockmodel
Airoldi, E. M., Blei, D. M., Fienberg, S. E., & Xing, E. P. (2009). Mixed membership stochastic blockmodels. In Advances in Neural Information Processing Systems (pp. 33-40).
Application of MMSB to Sampson's Monastery
Estimated blockmodel (least coherent group highlighted)
Airoldi, E. M., Blei, D. M., Fienberg, S. E., & Xing, E. P. (2009). Mixed membership stochastic blockmodels. In Advances in Neural Information Processing Systems (pp. 33-40).
Application of MMSB to Sampson's Monastery
Estimated mixed membership vectors (posterior mean)
Airoldi, E. M., Blei, D. M., Fienberg, S. E., & Xing, E. P. (2009). Mixed membership stochastic blockmodels. In Advances in Neural Information Processing Systems (pp. 33-40).
Application of MMSB to Sampson's Monastery
Estimated mixed membership vectors (posterior mean); expelled monks highlighted
Airoldi, E. M., Blei, D. M., Fienberg, S. E., & Xing, E. P. (2009). Mixed membership stochastic blockmodels. In Advances in Neural Information Processing Systems (pp. 33-40).
Application of MMSB to Sampson's Monastery
Estimated mixed membership vectors (posterior mean); wavering not captured
Airoldi, E. M., Blei, D. M., Fienberg, S. E., & Xing, E. P. (2009). Mixed membership stochastic blockmodels. In Advances in Neural Information Processing Systems (pp. 33-40).
Application of MMSB to Sampson's Monastery
Original network (whom do you like?) vs. summary of the network (using the estimated membership vectors π)
Airoldi, E. M., Blei, D. M., Fienberg, S. E., & Xing, E. P. (2009). Mixed membership stochastic blockmodels. In Advances in Neural Information Processing Systems (pp. 33-40).