Learning Cascaded Influence under Partial Monitoring
  1. Learning Cascaded Influence under Partial Monitoring
     Jiaqi Ma (1), Jie Zhang (2), Jie Tang (3)
     (1) Dept. of Automation, Tsinghua University
     (2) Dept. of Physics, Tsinghua University
     (3) Dept. of Computer Science, Tsinghua University
     ASONAM, 2016

  2. Outline
     1 Motivation: Social Influence; Cascaded Indirect Influence
     2 Challenges
     3 Problem Formulation
     4 Algorithm
     5 Experiments: Datasets; Experiments on Normalized Regrets; Experiments on Application Improvement
     6 Conclusion

  3. Outline (current section: Motivation, Social Influence)

  4. Social Influence
     Social influence is the phenomenon that people's opinions, emotions, or behaviors are affected by others.
     Applications: viral marketing, propaganda, advertising promotion, ...

  5. Outline (current section: Motivation, Cascaded Indirect Influence)

  6. Cascaded Indirect Influence
     Social influence between non-adjacent users in the social network.
     Figure: a direct influence network and the cascaded influence from a source s to a target t, with edge influence weights (0.1 to 0.8) along the intermediate paths.
     Applications: friend recommendation, link prediction, ...

  7. Challenges
     - Information about non-adjacent users is rare.
     - The number of potential paths between two users is exponentially large.
     - Most previous works infer the direct influence from cascade data, which is partial, sparse, and dynamic.

  8. Cascaded Indirect Influence
     Given a dynamic influence network $G_t = (V, E, W_t)$:
     - Direct influence of an edge $e$ at time $t$: $w_{e,t} = \sum_i e^{-(t-\tau_i)/\delta}$
     - Influence of a path $p_i$ from $u$ to $v$: $I_t(p_i) = \prod_{e \in p_i} w_{e,t}$
     - Probability that $v$ is activated by $u$ indirectly: $I_t = 1 - \prod_{i=0}^{N} (1 - I_t(p_i)) = \sum_{i=0}^{N} I_t(p_i) + o(I_t(p_i))$
     Omit the high-order terms of $I_t(p_i)$ and take the top-$k$ terms of the first-order $I_t(p_i)$.
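
As a concrete illustration of these quantities, here is a minimal Python sketch (not the authors' code) that computes the direct influence of an edge, the influence of a single path, and the first-order approximation; the data layout (an edge represented by its list of activation timestamps, a path as a list of edges) is an assumption made only for this example.

```python
# Minimal sketch of the quantities on this slide (illustrative, not the authors' code).
# Assumed layout: an edge is given by the list `taus` of its past activation
# timestamps; a path is a list of such edges.
import math

def edge_weight(taus, t, delta):
    """Direct influence w_{e,t} = sum_i exp(-(t - tau_i) / delta)."""
    return sum(math.exp(-(t - tau) / delta) for tau in taus if tau <= t)

def path_influence(path, t, delta):
    """Influence of one path p: I_t(p) = product of w_{e,t} over its edges."""
    score = 1.0
    for taus in path:
        score *= edge_weight(taus, t, delta)
    return score

def first_order_influence(paths, t, delta):
    """First-order approximation: sum of I_t(p_i), dropping higher-order terms."""
    return sum(path_influence(p, t, delta) for p in paths)
```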

  9. Cascaded Indirect Influence
     Definition (Cascaded Indirect Influence). The cascaded indirect influence from $u$ to $v$ is defined as the sum of the top-$k$ influence scores among all the paths in $\mathcal{P}$:
     $I_t = \max_{Q \subset \mathcal{P}} \sum_{p_i \in Q} I_t(p_i)$, subject to $|Q| = k$.
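
A short sketch of this definition, assuming the per-path scores $I_t(p_i)$ have already been computed (for example with a helper like `path_influence` from the sketch above):

```python
# Cascaded indirect influence as defined on this slide: the sum of the k largest
# per-path influence scores among all candidate paths in P.
import heapq

def cascaded_indirect_influence(path_scores, k):
    return sum(heapq.nlargest(k, path_scores))

# Example: scores [0.12, 0.05, 0.30, 0.07] with k = 2 give 0.30 + 0.12 = 0.42.
```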

  10. Partial Monitoring Setting
      The number of intermediate paths is exponentially large, so it is intractable to learn the indirect influence from all paths. The task is therefore cast as an online learning problem under partial monitoring:
      $\min_{\text{decision}} \; \frac{1}{T}\Big(\max_{\substack{Q \subset \mathcal{P} \\ |Q| = k}} \sum_{t=1}^{T}\sum_{p_i \in Q} I_t(p_i) \;-\; \sum_{t=1}^{T}\hat{I}_t(D_t)\Big)$

  11. Partial Monitoring Setting: Regret
      Problem: $\min_{\text{decision}} \; \frac{1}{T}\Big(\max_{\substack{Q \subset \mathcal{P} \\ |Q| = k}} \sum_{t=1}^{T}\sum_{p_i \in Q} I_t(p_i) \;-\; \sum_{t=1}^{T}\hat{I}_t(D_t)\Big)$
      The quantity inside the parentheses is the regret; divided by $T$, it is the normalized regret.
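
To make the objective concrete, a hedged sketch of the normalized regret follows; it assumes an offline evaluation setting where the score of every candidate path at every round is available, which the learner itself does not see under partial monitoring.

```python
# Normalized regret as on this slide: (best fixed set Q of k paths in hindsight
# minus what the learner actually collected), divided by T. Illustrative only.
import heapq

def normalized_regret(path_scores_per_round, collected_per_round, k):
    """path_scores_per_round[t][i]: I_t(p_i) for candidate path i at round t.
    collected_per_round[t]: influence obtained by the learner's decision D_t."""
    T = len(collected_per_round)
    num_paths = len(path_scores_per_round[0])
    # Best fixed Q in hindsight: rank paths by their total score over all rounds.
    totals = [sum(scores[i] for scores in path_scores_per_round) for i in range(num_paths)]
    best = sum(heapq.nlargest(k, totals))
    return (best - sum(collected_per_round)) / T
```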

  12. Regret
      What matters is the growth rate of the regret: our goal is a cumulative regret that grows sublinearly in T, so that the normalized regret vanishes as T grows.

  13. Algorithm – E-EXP3

  14. Algorithm – E-EXP3

  15. Algorithm – E-EXP3: Example
      Figure: worked example with top k = 2. (a) The original network from s to t with all edge weights unknown ('?'). (b) Exploration: the edge bandit reveals a subset of the edge weights. (c) Exploitation: the revealed weights give cascaded influence estimates (0.123 and 0.18). E-EXP3 combines exploration and exploitation.
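
The exact E-EXP3 pseudocode is given in the paper; as a rough illustration of the exploration/exploitation mechanism it builds on, here is a minimal vanilla EXP3 sketch. It is not the authors' algorithm: E-EXP3 works with edge-level bandit feedback and selects k paths per round, whereas this sketch pulls a single arm.

```python
# Vanilla EXP3 sketch: uniform exploration mixed in with coefficient gamma,
# exponential weight updates for exploitation. Illustrative only.
import math
import random

def exp3(num_arms, T, gamma, reward_fn):
    """reward_fn(t, arm) returns the observed reward in [0, 1] for the pulled arm."""
    weights = [1.0] * num_arms
    for t in range(T):
        total = sum(weights)
        probs = [(1 - gamma) * w / total + gamma / num_arms for w in weights]
        arm = random.choices(range(num_arms), weights=probs)[0]
        reward = reward_fn(t, arm)
        estimate = reward / probs[arm]          # importance-weighted estimate
        weights[arm] *= math.exp(gamma * estimate / num_arms)
    return weights
```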

  16. Algorithm – Theoretical Analysis
      Parameters:
      - mixing coefficient: $\gamma = \sqrt{\dfrac{|\mathcal{C}| \ln N}{(e-1)\,T}}$
      - learning rate: $\eta = \dfrac{1}{K}\sqrt{\dfrac{\ln N}{(e-1)\,|\mathcal{C}|\,T}}$
      Regret upper bound: $2K\sqrt{(e-1)\,T\,|\mathcal{C}|\ln N}$
      More proof details: http://www.jiaqima.me/papers/learning-cascaded-influence.pdf
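
For reference, the closed forms above can be evaluated directly; the snippet below is a sketch based on the reconstruction of this slide's formulas (T is the horizon, N the number of actions, |C| the size of the set C used in the analysis, and K the number of selected paths).

```python
# Evaluate the parameter choices and regret bound from this slide (as reconstructed).
import math

def e_exp3_parameters(T, N, C_size, K):
    e = math.e
    gamma = math.sqrt(C_size * math.log(N) / ((e - 1) * T))      # mixing coefficient
    eta = gamma / (K * C_size)                                   # learning rate, = gamma / (K |C|)
    regret_bound = 2 * K * math.sqrt((e - 1) * T * C_size * math.log(N))  # grows as sqrt(T)
    return gamma, eta, regret_bound
```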

  17. Algorithm – RE-EXP3
      Algorithm 1: Preprocessing schedule of RE-EXP3
      Input: preprocessing rounds T_p; γ, K, |C|
      Output: η
      1: η ← γ / (K |C|)
      2: G ← ∅
      3: foreach t in range(T_p) do
      4:     choose D_t with E-EXP3
      5:     G ← G ∪ { g'_{i,t} : i ∈ D_t }
      6: η ← η × min{ 1 / (mean(G) + 3 var(G)), 1 }
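
A hedged Python sketch of this preprocessing schedule follows; `run_e_exp3_round` is a hypothetical helper standing in for one round of E-EXP3 that returns the gain estimates g'_{i,t} of the paths in the chosen decision D_t.

```python
# Sketch of Algorithm 1 (preprocessing schedule of RE-EXP3). `run_e_exp3_round`
# is a hypothetical stand-in for choosing D_t with E-EXP3 and returning the
# gain estimates g'_{i,t} for i in D_t.
import statistics

def re_exp3_preprocess(T_p, gamma, K, C_size, run_e_exp3_round):
    eta = gamma / (K * C_size)          # line 1
    G = []                              # line 2
    for t in range(T_p):                # lines 3-5: run E-EXP3 for T_p rounds
        G.extend(run_e_exp3_round(t))
    # line 6: rescale eta by the spread of the observed gain estimates
    scale = statistics.mean(G) + 3 * statistics.variance(G)
    return eta * min(1.0 / scale, 1.0)
```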

  18. Outline (current section: Experiments, Datasets)

  19. Experiments: Datasets
      Synthetic networks: 2,000 vertices; edge generation probability 0.01; edge weights drawn from U[0, 0.3] or U[0.6, 1]; 60,000 time stamps.
      Weibo: 1,776,950 users; 308,739,489 following relationships; 23,755,810 retweets; 100 time stamps.
      AMiner: 231,728 papers; 269,508 authors; 347,735 citation relationships; 44 time stamps.

  20. Outline (current section: Experiments, Experiments on Normalized Regrets)

  21. Experiments on Normalized Regrets (Synthetic)
      Figure: Normalized Regret on synthetic data over 60,000 time stamps, (a) NR and (b) RNR, comparing RE-EXP3, E-EXP3, P-EXP3, CUCB, and Random.

  22. Experiments on Normalized Regrets (Weibo & AMiner)
      Figure: Average Normalized Regret (RNR) on real social networks (1,500 pairs of users): (a) Weibo, 100 time stamps; (b) AMiner, 44 time stamps; methods compared: RE-EXP3, E-EXP3, P-EXP3, CUCB, and Random.

  23. Outline (current section: Experiments, Experiments on Application Improvement)

  24. Experiments on Application Improvement (Weibo)

      Table: Application Improvement - Logistic Regression
      Methods    Accuracy  Precision  Recall  F1 score
      PF         0.55      0.58       0.45    0.51
      P-EXP3     0.57      0.58       0.55    0.57
      E-EXP3     0.59      0.61       0.55    0.58
      RE-EXP3    0.64      0.65       0.63    0.64
      FO         0.70      0.77       0.60    0.68

      Table: Application Improvement - SVM
      Methods    Accuracy  Precision  Recall  F1 score
      PF         0.58      0.57       0.72    0.63
      P-EXP3     0.56      0.58       0.53    0.55
      E-EXP3     0.58      0.60       0.55    0.57
      RE-EXP3    0.63      0.65       0.61    0.63
      FO         0.70      0.77       0.57    0.66

  25. Conclusion
      - Formalized a novel problem of cascaded indirect influence based on the IC model.
      - Proposed two online learning algorithms (E-EXP3 and RE-EXP3) in the partial monitoring setting.
      - Theoretically proved that E-EXP3 has a cumulative regret bound of O(√T).
      - Compared the algorithms with three baseline methods on both synthetic and real networks (Weibo and AMiner).
      - Applied the learned cascaded influence to help behavior prediction.
