Online Model-Free Influence Maximization with Persistence

  1. Online Model-Free Influence Maximization with Persistence. Paul Lagrée, Olivier Cappé, Bogdan Cautis, Silviu Maniu. LRI, Univ. Paris-Sud, CNRS, LIMSI & Univ. Paris Saclay. May 9, 2017.

  2. Background & Motivations

  3. Classic Influence Maximization [Kempe et al., 2003]. An important problem in social networks, with applications in marketing and computational advertising. Objective: given a promotion budget, maximize the influence spread in the social network (word-of-mouth effect). Select k seeds (influencers) in the social graph, given a graph G = (V, E) and a propagation model. Edges correspond to follow relations, friendships, etc. in the social network.

  4. IM Optimization Problem. Denoting by S(I) the influence cascade starting from a set of seeds I, the IM objective is to solve the following problem: $\arg\max_{I \subseteq V,\ |I| = L} \mathbb{E}\left[S(I)\right]$.

  5. Independent Cascade Model. To each edge (u, v) ∈ E, a probability p(u, v) is associated:
    1. at time 0, activate the seeds;
    2. when a node u is activated at time t, influence is propagated at time t + 1 to each neighbour v, independently with probability p(u, v);
    3. once a node is activated, it cannot be deactivated or activated again.
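
As an illustration, here is a minimal Python sketch of one IC cascade. The adjacency structure `succ` (a dict mapping each node to its outgoing `(neighbour, probability)` pairs) and the function name `simulate_ic` are assumptions of this sketch, not code from the talk.

```python
import random

def simulate_ic(succ, seeds):
    """Simulate one Independent Cascade started from `seeds`.

    succ: dict mapping node -> list of (neighbour, probability) pairs.
    Returns the set of nodes activated during the cascade.
    """
    activated = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for u in frontier:
            for v, p in succ.get(u, []):
                # A newly activated node gets a single chance to activate
                # each inactive neighbour, with probability p(u, v).
                if v not in activated and random.random() < p:
                    activated.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return activated
```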

  6.–10. Independent Cascade Model – Example. [Figure, animated across five slides: step-by-step propagation of a cascade on a small example graph whose edge probabilities range from 0.05 to 0.8.]

  11. Approximate IM Algorithms.
    1. Computing the expected spread: Monte Carlo simulations.
    2. Solving IM: the greedy approximation algorithm.
  Multiple algorithms and estimators: TIM / TIM+ [Tang et al., 2014], IMM [Tang et al., 2015], SSA [Nguyen et al., 2016], PMC [Ohsaka et al., 2014], ...
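
The classic offline pipeline can be sketched as follows: a Monte Carlo spread estimator plugged into greedy seed selection. This is a plain illustration, without the lazy-evaluation or sketching tricks of TIM / IMM / SSA / PMC; `simulate_ic` is the hypothetical cascade simulator sketched above, and `estimate_spread` / `greedy_im` are names chosen here.

```python
def estimate_spread(succ, seeds, n_sim=1000):
    """Monte Carlo estimate of the expected spread of a seed set."""
    return sum(len(simulate_ic(succ, seeds)) for _ in range(n_sim)) / n_sim

def greedy_im(succ, k, n_sim=1000):
    """Greedy (1 - 1/e)-approximation: repeatedly add the node with the
    largest marginal gain in estimated spread.  Assumes every node
    appears as a key of `succ`."""
    seeds = set()
    for _ in range(k):
        base = estimate_spread(succ, seeds, n_sim) if seeds else 0.0
        best, best_gain = None, -1.0
        for v in succ:
            if v in seeds:
                continue
            gain = estimate_spread(succ, seeds | {v}, n_sim) - base
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.add(best)
    return seeds
```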

  12. Online Influence Maximization

  13. Online Influence Maximization. We only know the social graph, not the edge probabilities. The problem was introduced by [Lei et al., 2015] for the IC model.
    1. At trial n, select a set of k seeds.
    2. The diffusion happens; observe the activated nodes and the edge activation attempts.
    3. Return to step 1 until the budget is consumed.

  14. Online Influence Maximization with Persistence. OIMP Problem [Lei et al., 2015]. Given a budget N, the objective of online influence maximization with persistence is to solve the following optimization problem:
  $$\arg\max_{I_n \subseteq V,\ |I_n| = L,\ \forall\, 1 \le n \le N} \ \mathbb{E}\left[\,\Big|\bigcup_{1 \le n \le N} S(I_n)\Big|\,\right].$$
  A node can be activated several times at different trials, but it will yield reward only once.
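
To make the persistence constraint concrete, the following sketch tallies the cumulative OIMP reward over N trials: a node contributes only the first time it is activated. The seed-selection policy `choose_seeds` is a placeholder, and `simulate_ic` (the sketch above) stands in for the unknown environment.

```python
def run_oimp(succ, choose_seeds, N, L):
    """Accumulate the OIMP reward over N trials: a node counts only the
    first time it is activated across the whole campaign."""
    influenced = set()                          # nodes activated at least once
    rewards = []
    for n in range(1, N + 1):
        seeds = choose_seeds(n, influenced, L)  # any policy returning L seeds
        spread = simulate_ic(succ, seeds)       # observed feedback: activated nodes
        new_nodes = spread - influenced         # persistence: reward each node once
        influenced |= new_nodes
        rewards.append(len(new_nodes))
    return influenced, rewards
```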

  15. Motivations. A campaign with several steps: different posts carrying a single underlying message.
    - People may relay the information several times, but "adopting" the concept rewards only once (e.g. politics).
    - Brand fanatics, e.g., Star Wars, Apple, etc.
    - Social advertisement in users' feeds (e.g. Twitter / Facebook): people may share or like the content several times across the campaign.

  16. Online Model-Free Influence Maximization with Persistence

  17. Setting. In the following, we work in the persistent setting, with:
    - no assumption regarding the diffusion model;
    - simple feedback: the set of activated nodes.
  Simple, realistic, and targeting short horizons.

  18. Model. To simplify the problem, we consider the corresponding reduced graph of depth-1 trees, with two kinds of nodes: experts (the candidate seeds) and basic nodes. Hypothesis: empty intersection between the experts (their sets of basic nodes are disjoint). New problem: estimating the missing mass of each expert, that is, the expected number of nodes that can still be reached from a given seed.

  19. Missing Mass. Following the work of [Bubeck et al., 2013]. Missing mass:
  $$R_n := \sum_{u \in A} \mathbb{1}\Big\{u \notin \bigcup_{i=1}^{n} S_i\Big\}\, p(u).$$
  It corresponds to the potential of the expert. Missing mass estimator (known as the Good-Turing estimator):
  $$\hat{R}_n := \sum_{u \in A} \frac{U_n(u)}{n},$$
  where $U_n(u)$ is the indicator equal to 1 if u has been sampled exactly once. The estimator is the fraction of hapaxes!
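
In code, the Good-Turing estimate for one expert is simply the number of nodes seen in exactly one of its n observed cascades, divided by n. A minimal sketch, assuming the feedback is stored as a list of activated-node sets per cascade:

```python
from collections import Counter

def good_turing(cascades):
    """Good-Turing estimate of the missing mass of one expert.

    cascades: list of sets, the nodes activated in each of the n
    cascades initiated at this expert.  The estimate is the number of
    hapaxes (nodes seen in exactly one cascade) divided by n.
    """
    n = len(cascades)
    counts = Counter(u for cascade in cascades for u in cascade)
    hapaxes = sum(1 for c in counts.values() if c == 1)
    return hapaxes / n
```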

  20. Confidence Bounds. Estimator bias:
  $$\mathbb{E}[R_n] - \mathbb{E}[\hat{R}_n] \in \left[-\frac{\sum_{u \in A} p(u)}{n},\ 0\right].$$
  Theorem. With probability at least $1 - \delta$, denoting $\lambda := \sum_{u \in A} p(u)$ and
  $$\beta_n := (1 + \sqrt{2})\sqrt{\frac{\lambda \log(4/\delta)}{n}} + \frac{1}{3n}\log\frac{4}{\delta},$$
  the following holds: $-\beta_n - \frac{\lambda}{n} \le R_n - \hat{R}_n \le \beta_n$.
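
A direct transcription of the bound's width, assuming λ = Σ_{u∈A} p(u) is given (in the algorithm below it is replaced by an estimate):

```python
import math

def beta(lam, n, delta):
    """Width beta_n of the confidence interval in the theorem above."""
    return ((1 + math.sqrt(2)) * math.sqrt(lam * math.log(4 / delta) / n)
            + math.log(4 / delta) / (3 * n))
```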

  21. Algorithm. A UCB-like algorithm: at round t, we play the expert k with the largest index
  $$b_k(t) := \hat{R}_k(t) + (1 + \sqrt{2})\sqrt{\frac{\hat{\lambda}_k(t)\log(4t)}{N_k(t)}} + \frac{\log(4t)}{3 N_k(t)},$$
  where $N_k(t)$ denotes the number of times expert k has been played up to round t.
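
A minimal sketch of how this index could be computed and used, reusing the `good_turing` function sketched above; estimating λ̂_k(t) by the average observed cascade size is an assumption of this sketch, not necessarily the authors' exact choice.

```python
import math

def gt_ucb_index(cascades, t):
    """Optimistic index b_k(t) of one expert from its observed cascades.

    N_k(t) = len(cascades); the Good-Turing estimate reuses good_turing()
    above, and lambda_hat is approximated by the average cascade size.
    """
    n_k = len(cascades)
    r_hat = good_turing(cascades)
    lam_hat = sum(len(s) for s in cascades) / n_k
    log_term = math.log(4 * t)
    return (r_hat + (1 + math.sqrt(2)) * math.sqrt(lam_hat * log_term / n_k)
            + log_term / (3 * n_k))

def select_expert(history, t):
    """history: dict expert -> list of its observed cascades.
    Play the expert with the largest index (each expert is assumed to
    have been played at least once)."""
    return max(history, key=lambda k: gt_ucb_index(history[k], t))
```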

  22. Optimism in the Face of Uncertainty

  23. Experiments

  24. Execution time (DBLP). [Plot: running time in seconds (log scale, 10^-4 to 10^4) versus trial (100–500) for Oracle, EG-CB, GT-UCB, Random and MaxDegree, on DBLP (WC, L = 1).]

  25. Growth of spreads (DBLP). [Plots: influence spread versus trial (100–500) for Oracle, EG-CB, GT-UCB, Random and MaxDegree, on DBLP under the WC, TV and LT weight models, L = 5.]

  26. Thank you.

  27. References.
    David Kempe, Jon Kleinberg and Éva Tardos (2003). Maximizing the Spread of Influence Through a Social Network. Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 137–146.
    Siyu Lei, Silviu Maniu, Luyi Mo, Reynold Cheng and Pierre Senellart (2015). Online Influence Maximization. SIGKDD.
    Sébastien Bubeck, Damien Ernst and Aurélien Garivier (2013). Optimal Discovery with Probabilistic Expert Advice: Finite Time Analysis and Macroscopic Optimality. Journal of Machine Learning Research, 601–623.
    Wei Chen, Yajun Wang, Yang Yuan and Qinshi Wang (2016). Combinatorial Multi-armed Bandit and Its Extension to Probabilistically Triggered Arms. Journal of Machine Learning Research, 1746–1778.
    Sharan Vaswani, V.S. Lakshmanan and Mark Schmidt (2015). Influence Maximization with Bandits. NIPS Workshop.
