  1. PREFERENTIAL ATTACHMENT GRAPHS ARE SOMEWHERE-DENSE. Jan Dreier, Philipp Kuinke, Peter Rossmanith. TACO 2018, RWTH Aachen University.

  2. MOTIVATION

  3. Sparsity [Figure: the hierarchy of sparse graph classes, from nowhere dense through locally bounded expansion, locally excluding a minor, bounded expansion, excluding a (topological) minor, locally bounded treewidth, excluding a minor, bounded genus, planar, bounded treewidth, bounded degree, outerplanar, bounded treedepth, forests, star forests, linear forests. Image by Felix Reidl.]

  4.–7. Sparsity: r-shallow topological minors [Figure: illustration of an r-shallow topological minor]. $G\,\widetilde{\nabla}\,r$ denotes the set of all r-shallow topological minors of $G$, and $\omega(G\,\widetilde{\nabla}\,r) = \max_{H \in G\,\widetilde{\nabla}\,r} \omega(H)$ (clique size).

  8.–10. Sparsity
  Definition (Nowhere-dense). A graph class $\mathcal{G}$ is nowhere-dense if there exists a function $f$ such that for all $r$ and all $G \in \mathcal{G}$, $\omega(G\,\widetilde{\nabla}\,r) \le f(r)$.
  Definition (Somewhere-dense). A graph class $\mathcal{G}$ is somewhere-dense if for all functions $f$ there exist an $r$ and a $G \in \mathcal{G}$ such that $\omega(G\,\widetilde{\nabla}\,r) > f(r)$.
  $\mathcal{G}$ is not nowhere-dense ⇔ $\mathcal{G}$ is somewhere-dense.
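To see that somewhere-dense is not the same as "many edges", here is a standard textbook example (my addition, not from the slides): the class of exact 1-subdivisions of cliques has average degree below 4, yet it is somewhere-dense under the definition above.

```latex
% Standard example: low edge density does not imply nowhere-dense.
% Let G_n be the exact 1-subdivision of K_n (every edge replaced by a path of length 2).
% G_n has n + binom(n,2) vertices and 2*binom(n,2) edges, so its average degree is below 4.
% Still, K_n is a 1-shallow topological minor of G_n:
\[
  K_n \in G_n\,\widetilde{\nabla}\,1
  \quad\Longrightarrow\quad
  \omega\bigl(G_n\,\widetilde{\nabla}\,1\bigr) \ge n ,
\]
% so no single function f can bound the clique size of 1-shallow topological minors
% over the whole class {G_n : n >= 1}, which is therefore somewhere-dense.
```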

  11.–17. Typical Properties of Complex Networks
  • low diameter (small world property)
  • locally dense, but globally sparse
  • heavy tail degree distribution
  • clustering
  • community structure
  • scale freeness

  18. Random Graphs. Random graph models with the goal of modeling real-world data: • mathematically analyzable • generation of arbitrarily large instances

  19. Sparse in the limit. Definition (a.a.s. nowhere-dense): A random graph model $\mathcal{G}$ is a.a.s. nowhere-dense if there exists a function $f$ such that for all $r$, $\lim_{n\to\infty} P[\omega(G_n\,\widetilde{\nabla}\,r) \le f(r)] = 1$, where $G_n$ is a random variable modeling a graph with $n$ vertices randomly drawn from $\mathcal{G}$.

  20. Sparse in the limit. Definition (a.a.s. somewhere-dense): A random graph model $\mathcal{G}$ is a.a.s. somewhere-dense if for all functions $f$ there exists an $r$ such that $\lim_{n\to\infty} P[\omega(G_n\,\widetilde{\nabla}\,r) > f(r)] = 1$, where $G_n$ is a random variable modeling a graph with $n$ vertices randomly drawn from $\mathcal{G}$.

  21.–27. Sparse in the limit (not as clear cut)
  Assume you have a random graph on n vertices, such that it is with probability p complete and with probability 1 − p empty:
  1. p = 1/n → a.a.s. nowhere-dense
  2. p = 1 − 1/n → a.a.s. somewhere-dense
  3. p = 1/2 → neither!
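A short check of why the three cases fall where they do (my own derivation, not spelled out on the slides): if the graph is complete then $\omega(G_n\,\widetilde{\nabla}\,r) = n$ for every $r$, and if it is empty the clique size is 1, so everything reduces to the limit of $p_n$.

```latex
% For any fixed f and r, and any n > f(r):
\[
  P\bigl[\omega(G_n\,\widetilde{\nabla}\,r) \le f(r)\bigr] = 1 - p_n ,
  \qquad
  P\bigl[\omega(G_n\,\widetilde{\nabla}\,r) > f(r)\bigr] = p_n .
\]
% p_n = 1/n:      1 - p_n -> 1, so the model is a.a.s. nowhere-dense.
% p_n = 1 - 1/n:  p_n -> 1,     so the model is a.a.s. somewhere-dense.
% p_n = 1/2:      both limits equal 1/2, so neither definition applies.
```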

  28. Preferential attachment graphs: “the rich get richer”, “preferential attachment”, “Barabási–Albert graphs”. Start with some small fixed graph. Add vertices. Connect them to m vertices with a probability proportional to their degrees. Interesting properties: • power law degree distribution • scale free
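A minimal sketch of the generation process described on this slide, in Python. The function name pa_graph, the clique seed graph, and the repeated-endpoint list used for degree-proportional sampling are my own choices (the talk only says "some small fixed graph"), and this variant avoids multi-edges, which not every preferential attachment model does.

```python
import random

def pa_graph(n, m, seed=None):
    """Sketch of a preferential attachment graph on vertices 1..n.

    Starts from a small fixed graph (here: a clique on m + 1 vertices) and adds
    vertices m+2, ..., n, connecting each new vertex to m distinct existing
    vertices chosen with probability proportional to their current degrees.
    Returns the edge list.
    """
    rng = random.Random(seed)
    edges = []
    # 'ends' lists every edge endpoint once, so a uniform element of 'ends'
    # is a vertex drawn with probability proportional to its degree.
    ends = []

    # Seed graph: a clique on the first m + 1 vertices.
    for u in range(1, m + 2):
        for v in range(u + 1, m + 2):
            edges.append((u, v))
            ends.extend((u, v))

    # Preferential attachment steps.
    for v in range(m + 2, n + 1):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(ends))   # degree-proportional choice
        for u in targets:
            edges.append((u, v))
            ends.extend((u, v))
    return edges

if __name__ == "__main__":
    es = pa_graph(n=100, m=2, seed=0)
    deg = {}
    for u, v in es:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    print("vertices:", len(deg), "edges:", len(es))
    print("max degree:", max(deg.values()))
```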

  29.–30. Preferential attachment graphs, m = 2, n = 100 [Figure: a sample preferential attachment graph]. Expected degrees: $E[d^n_m(v_i)] \sim m\sqrt{n/i}$.
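A small simulation sketch (mine, not from the talk) of the expected-degree scaling above. It tracks only the degree of one early vertex under a simplified urn dynamics, where each new edge hits the vertex with probability d/(2m(j−1)), and compares the empirical mean with $m\sqrt{n/i}$; the function name simulate_degree and the simplification of ignoring within-step degree updates are my own.

```python
import random

def simulate_degree(i, n, m, trials=500, seed=0):
    """Empirical mean of d^n_m(v_i) under simplified urn dynamics.

    Vertex v_i appears at time i with degree m; every later vertex v_j adds m
    edges, and each edge hits v_i with probability d/(2*m*(j-1)), where d is
    v_i's degree at the start of step j.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        d = m
        for j in range(i + 1, n + 1):
            p = d / (2 * m * (j - 1))   # attachment probability per edge
            for _ in range(m):
                if rng.random() < p:
                    d += 1
            # degree updates within a single step are ignored for simplicity
        total += d
    return total / trials

if __name__ == "__main__":
    i, n, m = 5, 2000, 2
    emp = simulate_degree(i, n, m)
    pred = m * (n / i) ** 0.5
    print(f"empirical E[d^n_m(v_{i})] ~ {emp:.1f},  m*sqrt(n/i) = {pred:.1f}")
```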

  31. TAIL BOUNDS

  32.–35. Degree Concentrations
  • Tail bounds exist for the number of vertices with degree d. [Bollobás et al. 2001]
  • Via martingales + Azuma–Hoeffding inequality
  • Does not work for large d (i.e. of order √n)
  • But we need high-degree vertices!
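For reference, the concentration tool the slide refers to; this is the standard statement of the Azuma–Hoeffding inequality (my addition, not a result from the talk).

```latex
% Azuma-Hoeffding inequality (standard form).
% If X_0, X_1, ..., X_n is a martingale with bounded differences
% |X_k - X_{k-1}| <= c_k for all k, then for every lambda > 0:
\[
  P\bigl[\,|X_n - X_0| \ge \lambda\,\bigr]
  \;\le\;
  2\exp\!\left(-\frac{\lambda^{2}}{2\sum_{k=1}^{n} c_k^{2}}\right).
\]
% In degree-sequence arguments of this kind, X_k is typically the Doob martingale
% of the number of degree-d vertices, obtained by exposing the edges one at a time.
```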

  36.–40. Concentration of a single vertex
  No vertex is sharply concentrated!
  $P[d^n_1(v_t) = 1] = \prod_{i=t}^{n}\left(1 - \frac{1}{2i-1}\right) \ge \frac{1}{n}$
  We cannot hope for general exponential bounds.
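A quick numeric check of the product above (my own snippet, not from the slides): the probability that v_t keeps degree 1 is only polynomially small, which is exactly why exponential tail bounds for a single vertex are out of reach. The function name is mine; the formula is the one on the slide, evaluated for m = 1 and t ≥ 2.

```python
from math import prod

def prob_degree_stays_one(t, n):
    """P[d^n_1(v_t) = 1] = prod_{i=t}^{n} (1 - 1/(2i - 1)), for m = 1 and t >= 2."""
    return prod(1 - 1 / (2 * i - 1) for i in range(t, n + 1))

if __name__ == "__main__":
    n = 10_000
    for t in (2, 10, 100):
        p = prob_degree_stays_one(t, n)
        # the probability stays far above exponentially small values
        print(f"t = {t:4d}:  P = {p:.3e}   (1/n = {1 / n:.1e})")
```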

  41. Concentration of a single vertex [Plot: distribution of $d^{10000}_1(v_1)$; x-axis k from 0 to 500, y-axis p up to 0.02.]

  42. Concentration of a single vertex [Plot: distribution of $d^{10000}_1(v_1)$ conditioned on $d^{100}_1(v_1) = 18$.]

  43. Concentration of a single vertex [Plot: distribution of $d^{10000}_1(v_1)$ conditioned on $d^{1000}_1(v_1) = 56$.]

  44. The rich stay rich. Theorem: Let $0 < \varepsilon \le 1/40$, $t, m, n \in \mathbb{N}$, $t > \frac{1}{\varepsilon^{6}}$ and $S \subseteq \{v_1, \ldots, v_t\}$. Then $P\Big[(1-\varepsilon)\sqrt{\tfrac{n}{t}}\, d^t_m(S) < d^n_m(S) < (1+\varepsilon)\sqrt{\tfrac{n}{t}}\, d^t_m(S) \text{ for all } n \ge t\Big] \ge 1 - \ln(15t)\, e^{-\varepsilon^{O(1)}\, d^t_m(S)}$.

  45.–48. The rich stay rich
  Theorem (the approximate version): Let $\varepsilon \ge 0$, $t, m, n \in \mathbb{N}$, and $S \subseteq \{v_1, \ldots, v_t\}$: $P\big[(1-\varepsilon)\,E[d^n_m(S)] < d^n_m(S) < (1+\varepsilon)\,E[d^n_m(S)] \;\big|\; d^t_m(S)\big] \ge 1 - e^{-\varepsilon\, d^t_m(S)}$
  • The rich stay rich
  • At first there is chaos
  • If we have information for t we can better predict n > t
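A rough Monte Carlo illustration of the last bullet (my own sketch, using the same simplified urn dynamics as the earlier degree snippet, not code from the talk): the raw degree $d^n_m(v_i)$ fluctuates a lot across runs, but once we know the degree at time t, the ratio $d^n_m(v_i) / \big(\sqrt{n/t}\, d^t_m(v_i)\big)$ concentrates around 1. The names degree_at and spread, and the parameter choices, are assumptions for the demo.

```python
import random

def degree_at(times, i, m, rng):
    """Degree of vertex v_i at each time in `times`, under simplified urn
    dynamics: at step j, each of the m new edges hits v_i with probability
    d/(2*m*(j-1)), where d is v_i's degree at the start of step j."""
    out, d = {}, m              # v_i appears at time i with degree m
    for j in range(i + 1, max(times) + 1):
        p = d / (2 * m * (j - 1))
        for _ in range(m):
            if rng.random() < p:
                d += 1
        if j in times:
            out[j] = d
    return out

def spread(xs):
    """Return (5th percentile / median, 95th percentile / median)."""
    xs = sorted(xs)
    k = len(xs)
    med = xs[k // 2]
    return xs[k // 20] / med, xs[-1 - k // 20] / med

if __name__ == "__main__":
    rng = random.Random(1)
    i, t, n, m, trials = 3, 500, 20_000, 2, 300
    plain, conditioned = [], []
    for _ in range(trials):
        d = degree_at({t, n}, i, m, rng)
        plain.append(d[n])                                  # raw d^n_m(v_i)
        conditioned.append(d[n] / ((n / t) ** 0.5 * d[t]))  # rescaled by time-t info
    lo1, hi1 = spread(plain)
    lo2, hi2 = spread(conditioned)
    print(f"raw d^n:               5th/median = {lo1:.2f}, 95th/median = {hi1:.2f}")
    print(f"d^n / (sqrt(n/t)*d^t): 5th/median = {lo2:.2f}, 95th/median = {hi2:.2f}")
```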

  49.–51. Proving the theorem [proof-sketch figures]

  52. SOMEWHERE-DENSE
