PREFERENTIAL ATTACHMENT GRAPHS ARE SOMEWHERE-DENSE
Jan Dreier, Philipp Kuinke, Peter Rossmanith
TACO 2018, RWTH Aachen University
MOTIVATION
Sparsity
[Diagram: the hierarchy of sparse graph classes, from linear forests, star forests, forests, bounded treedepth, bounded treewidth, outerplanar, planar, bounded genus, bounded degree, excluding a minor, locally bounded treewidth, bounded expansion, excluding a topological minor, locally excluding a minor and locally bounded expansion, up to nowhere dense. Image by Felix Reidl.]
Sparsity: r-shallow topological minor
G ▽̃ r: the set of all r-shallow topological minors of G.
ω(G ▽̃ r) = max_{H ∈ G ▽̃ r} ω(H)   (clique size)
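For reference, one common way to formalize this notion (following the standard sparsity literature; the exact depth convention could differ slightly from the talk's):

\[
H \in G\,\widetilde{\nabla}\,r
\quad\Longleftrightarrow\quad
\text{some } (\le 2r)\text{-subdivision of } H \text{ is a subgraph of } G,
\]

i.e. the edges of H are realized in G by internally vertex-disjoint paths of length at most 2r + 1.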
Sparsity
Definition (Nowhere-dense). A graph class 𝒢 is nowhere-dense if there exists a function f such that for all r and all G ∈ 𝒢, ω(G ▽̃ r) ≤ f(r).
Definition (Somewhere-dense). A graph class 𝒢 is somewhere-dense if for all functions f there exist an r and a G ∈ 𝒢 such that ω(G ▽̃ r) > f(r).
𝒢 is not nowhere-dense ⇔ 𝒢 is somewhere-dense
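Two standard examples, not from the slides, that make the dichotomy concrete:

\[
\text{max-degree-}3 \text{ graphs:}\quad \omega(G\,\widetilde{\nabla}\,r) \le 4 \ \text{for all } r
\quad(\text{a branch vertex of a subdivided } K_k \text{ needs degree } k-1),
\]
\[
\{\,1\text{-subdivisions of } K_n : n \in \mathbb{N}\,\}:\quad K_n \in G\,\widetilde{\nabla}\,1,
\ \text{so } \omega(G\,\widetilde{\nabla}\,1) \text{ is unbounded although each member is } 2\text{-degenerate}.
\]

The first class is nowhere-dense with f(r) = 4; for the second, no f(1) can exist, so it is somewhere-dense despite being very sparse edge-wise.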
Typical Properties of Complex Networks
• low diameter (small-world property)
• locally dense, but globally sparse
• heavy-tailed degree distribution
• clustering
• community structure
• scale-freeness
(A quick empirical check of several of these is sketched below.)
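The sketch below uses the networkx library and its built-in Barabási–Albert generator (a variant of the model discussed later); the parameters n = 2000 and m = 2 are arbitrary choices, and the printed numbers will vary with them.

import networkx as nx

# generate a preferential attachment graph (networkx's variant of the model)
G = nx.barabasi_albert_graph(2000, 2, seed=0)

# heavy-tailed degree distribution: a few hubs, many low-degree vertices
degrees = sorted((d for _, d in G.degree()), reverse=True)
print("max degree:", degrees[0], " median degree:", degrees[len(degrees) // 2])

# globally sparse: the number of edges is linear in n
print("edges per vertex:", G.number_of_edges() / G.number_of_nodes())

# small-world property: the diameter grows very slowly with n
print("diameter:", nx.diameter(G))

# local clustering
print("average clustering:", nx.average_clustering(G))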
Random Graphs
Random graphs with the goal of modeling real-world data:
• mathematically analyzable
• generation of arbitrarily many instances
Sparse in the limit
Definition (a.a.s. nowhere-dense). A random graph model 𝒢 is a.a.s. nowhere-dense if there exists a function f such that for all r,
lim_{n→∞} P[ω(G_n ▽̃ r) ≤ f(r)] = 1,
where G_n is a random variable modeling a graph with n vertices randomly drawn from 𝒢.
Sparse in the limit
Definition (a.a.s. somewhere-dense). A random graph model 𝒢 is a.a.s. somewhere-dense if for all functions f there exists an r such that
lim_{n→∞} P[ω(G_n ▽̃ r) > f(r)] = 1,
where G_n is a random variable modeling a graph with n vertices randomly drawn from 𝒢.
Sparse in the limit (not as clear-cut)
Assume you have a random graph on n vertices that is complete with probability p and empty with probability 1 − p:
1. p = 1/n → a.a.s. nowhere-dense
2. p = 1 − 1/n → a.a.s. somewhere-dense
3. p = 1/2 → neither!
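A short check of the three cases against the definitions above:

\[
\begin{aligned}
p = \tfrac1n:\ & \Pr[G_n \text{ empty}] = 1-\tfrac1n \to 1,\ \text{so } \Pr[\omega(G_n\,\widetilde{\nabla}\,r) \le 1] \to 1 \text{ for every } r
\ \Rightarrow\ \text{a.a.s. nowhere-dense (take } f \equiv 1\text{)}.\\
p = 1-\tfrac1n:\ & \Pr[G_n \text{ complete}] \to 1 \text{ and } \omega(G_n\,\widetilde{\nabla}\,0) = n > f(0) \text{ for large } n, \text{ for every } f
\ \Rightarrow\ \text{a.a.s. somewhere-dense}.\\
p = \tfrac12:\ & \text{both events keep probability } \tfrac12 \text{ for all } n, \text{ so neither limit equals } 1
\ \Rightarrow\ \text{neither}.
\end{aligned}
\]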
Preferential attachment graphs
"The rich get richer", "preferential attachment", "Barabási–Albert graphs"
Start with some small fixed graph. Add vertices one by one; connect each new vertex to m existing vertices, chosen with probability proportional to their degrees.
Interesting properties:
• power-law degree distribution
• scale free
(A minimal sketch of the generation process follows below.)
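A minimal sketch of this generation process, assuming one common variant (edges added one at a time, multi-edges and self-loops allowed, as in the Bollobás–Riordan formalization); it is not the paper's exact construction, and the function name and parameters below are illustrative choices. The endpoint-of-a-random-edge trick samples a vertex with probability proportional to its degree.

import random

def preferential_attachment(n, m, seed=None):
    rng = random.Random(seed)
    # start with a small fixed graph: a clique on m + 1 vertices
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    degree = [m] * (m + 1)
    for t in range(m + 1, n):
        degree.append(0)
        for _ in range(m):
            # a uniform endpoint of a uniform edge is a vertex chosen
            # with probability proportional to its current degree
            u = rng.choice(rng.choice(edges))
            edges.append((t, u))
            degree[t] += 1
            degree[u] += 1
    return edges, degree

edges, degree = preferential_attachment(n=100, m=2, seed=0)
print(sorted(degree, reverse=True)[:10])   # the earliest vertices become hubs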
Preferential attachment graphs
[Drawing of a sampled graph with m = 2, n = 100.]
E[d_m^n(v_i)] ∼ m·√(n/i)
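A standard mean-field heuristic (not the paper's argument) for where the √(n/i) growth comes from: at time t the total degree is about 2mt and each new vertex spends m half-edges, so

\[
\frac{\mathrm d}{\mathrm dt}\,\mathbb{E}\bigl[d_m^t(v_i)\bigr]
\;\approx\; m\cdot\frac{\mathbb{E}[d_m^t(v_i)]}{2mt}
\;=\;\frac{\mathbb{E}[d_m^t(v_i)]}{2t},
\qquad
\mathbb{E}\bigl[d_m^i(v_i)\bigr] = m
\;\Longrightarrow\;
\mathbb{E}\bigl[d_m^n(v_i)\bigr] \approx m\sqrt{n/i}.
\]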
TAIL BOUNDS
Degree Concentrations
• Tail bounds exist for the number of vertices with degree d. [Bollobás et al. 2001]
• via martingales + the Azuma–Hoeffding inequality
• does not work for large d (i.e., of order √n)
• but we need high-degree vertices!
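For reference, the Azuma–Hoeffding inequality behind this style of argument (standard statement, not specific to the paper): if (X_0, ..., X_n) is a martingale with |X_k − X_{k−1}| ≤ c_k for all k, then

\[
\Pr\bigl[\,|X_n - X_0| \ge \lambda\,\bigr] \;\le\; 2\exp\!\left(-\frac{\lambda^2}{2\sum_{k=1}^{n} c_k^2}\right).
\]

With increments c_k = O(1) over n steps this only controls deviations of order √(n log n), which is roughly why it says little about the counts of vertices with degree around √n, because there are very few of them.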
Concentration of a single vertex
No vertex is sharply concentrated!
P[d_1^n(v_t) = 1] = ∏_{i=t}^{n} (1 − 1/(2i−1)) ≥ 1/n
We cannot hope for general exponential bounds.
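A quick estimate, assuming the product form reconstructed above, of why this probability is only polynomially small:

\[
\prod_{i=t}^{n}\Bigl(1-\frac{1}{2i-1}\Bigr)
\;=\;\exp\Bigl(-\tfrac12\ln\tfrac{n}{t} + O(1)\Bigr)
\;=\;\Theta\bigl(\sqrt{t/n}\bigr),
\]

so for fixed t the event that v_t never gains another edge has probability n^{-O(1)}, and no failure probability that is exponential in the degree can hold for a single vertex.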
Concentration of a single vertex
[Plots of probability p against degree k (0–500):
• distribution of d_1^10000(v_1);
• distribution of d_1^10000(v_1) conditioned on d_1^100(v_1) = 18;
• distribution of d_1^10000(v_1) conditioned on d_1^1000(v_1) = 56.]
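A rough way to reproduce plots like the unconditioned one, assuming m = 1 and the Bollobás–Riordan convention (the graph starts as a single vertex with a self-loop, and each new vertex attaches to an existing vertex, or to itself, with probability proportional to degree). The helper name, the run count 300 and the bucket width 25 are arbitrary choices; only deg(v_1) needs to be tracked.

import random
from collections import Counter

def degree_of_v1(n, rng):
    d1, total = 2, 2              # v_1 starts with a self-loop; total degree = 2
    for i in range(2, n + 1):
        # the new vertex attaches to v_1 with probability d1 / (2i - 1)
        if rng.random() < d1 / (total + 1):
            d1 += 1
        total += 2                # every step adds one edge, i.e. total degree 2
    return d1

rng = random.Random(0)
samples = [degree_of_v1(10_000, rng) for _ in range(300)]
hist = Counter(d // 25 * 25 for d in samples)    # coarse histogram buckets
for bucket in sorted(hist):
    print(f"{bucket:4d}-{bucket + 24:4d}: {hist[bucket]}")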
The rich stay rich
Theorem. Let 0 < ε ≤ 1/40, let t, m, n ∈ ℕ with t > 1/ε^6, and let S ⊆ {v_1, ..., v_t}. Then
P[ (1−ε)·√(n/t)·d_m^t(S) < d_m^n(S) < (1+ε)·√(n/t)·d_m^t(S)  for all n ≥ t ] ≥ 1 − ln(15t)·e^{−ε^{O(1)}·d_m^t(S)}.
The rich stay rich
Theorem (the approximate version). Let ε ≥ 0, t, m, n ∈ ℕ, and S ⊆ {v_1, ..., v_t}:
P[ (1−ε)·E[d_m^n(S)] < d_m^n(S) < (1+ε)·E[d_m^n(S)] ] ≥ 1 − e^{−ε·d_m^t(S)}
• The rich stay rich.
• At first there is chaos.
• If we have information at time t, we can better predict time n > t.
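How the two versions fit together, as a heuristic remark building on the mean-field estimate from earlier: given the degree of S at time t, the expected degree at a later time n scales like

\[
\mathbb{E}\bigl[d_m^n(S) \,\big|\, d_m^t(S)\bigr] \;\approx\; \sqrt{n/t}\;\, d_m^t(S),
\]

which is where the √(n/t) concentration window in the exact theorem comes from.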
Proving the theorem
SOMEWHERE-DENSE