  1. Persisting randomness in randomly growing discrete structures: graphs and search trees. R. Grübel, Leibniz Universität Hannover. Paris, AofA 2014.

  2. Examples: Coin tossing vs. Pólya urn
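
The contrast behind this slide can be reproduced in a few lines. A minimal simulation sketch (the function names are illustrative, and the urn is assumed to start with one white and one black ball; the talk's exact variant may differ): the fraction of heads in fair coin tossing settles near 1/2 in every run, while the white-ball fraction of the Pólya urn settles at a different, random value in each run.

    import random

    def coin_fraction(n, p=0.5, rng=random):
        # fraction of heads in n i.i.d. tosses of a p-coin
        return sum(rng.random() < p for _ in range(n)) / n

    def polya_fraction(n, rng=random):
        # fraction of white balls after n Polya-urn draws,
        # started from one white and one black ball
        white, total = 1, 2
        for _ in range(n):
            if rng.random() < white / total:
                white += 1
            total += 1
        return white / total

    random.seed(0)
    print([round(coin_fraction(10_000), 3) for _ in range(5)])
    # -> five values clustered around 0.5: the early tosses wash out
    print([round(polya_fraction(10_000), 3) for _ in range(5)])
    # -> five values spread over (0, 1): the early draws persist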

  3. Persistence of randomness: What is it? In words: the influence of early values may or may not go away in the long run. Formally, we have a sequence X = (X_n)_{n∈N} of random variables on some probability space (Ω, A, P), and the tail σ-field

         T(X) := ⋂_{n=1}^∞ σ({ X_m : m ≥ n })

     may or may not be P-trivial, in the sense that P(A) = 0 or P(A) = 1 for all A ∈ T(X). Kolmogorov's zero-one law: no persisting randomness in i.i.d. sequences.
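
The Pólya urn of slide 2 is the standard example in which triviality fails. A sketch of the computation (again assuming the urn starts with one white and one black ball and adds one ball of the drawn colour each step): any fixed colour sequence of n draws containing k white draws has probability

         k! (n−k)! / (n+1)!,

so the number W_n of white balls after n draws satisfies

         P(W_n = k + 1) = C(n, k) · k! (n−k)! / (n+1)! = 1/(n+1),    k = 0, ..., n,

i.e. W_n is uniform on {1, ..., n+1}. The white fraction W_n/(n+2) is a bounded martingale and hence converges almost surely; the uniform marginals force the limit U to be Uniform(0, 1). Since U = lim_m W_m/(m+2) is measurable with respect to σ(W_m : m ≥ n) for every n, it lies in T(X) and is nondegenerate, so the tail σ-field is not trivial.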

  4. Tail σ-fields (and topologies) via boundary theory
     • Let F be a combinatorial family, F_n the objects of size n.
     • Let X be a Markov chain with P(X_n ∈ F_n) = 1, n ∈ N.
     • Say that (y_n)_{n∈N} with y_n ∈ F_n, n ∈ N, converges iff

           ( P(X_1 = x_1, ..., X_l = x_l | X_n = y_n) )_{n∈N}

       converges for all fixed l ∈ N, x_1 ∈ F_1, ..., x_l ∈ F_l.
     • This leads to a compactification of F with boundary ∂F.
     Theorem (Doob, Dynkin, ...)
     (a) X_n → X_∞ almost surely with P(X_∞ ∈ ∂F) = 1.
     (b) X_∞ generates T(X).
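
To see the convergence criterion in action, the Pólya urn of slide 2 serves as a toy Markov chain (our stand-in for illustration; the talk's chains are graph-valued), with X_n = W_n the number of white balls after n draws. The sketch below computes P(W_1 = 2 | W_n = k_n) exactly by forward dynamic programming and lets k_n follow a ray k_n ≈ t·n; the conditional probabilities converge, identifying one boundary point for each t ∈ (0, 1).

    def urn_joint(n):
        # dist[(w1, w)] = P(W_1 = w1, W_n = w) for the Polya urn started
        # from one white and one black ball; after m draws the urn holds
        # m + 2 balls, W_m of them white
        dist = {(2, 2): 0.5, (1, 1): 0.5}       # state after the first draw
        for m in range(1, n):
            total = m + 2                        # balls before draw m + 1
            new = {}
            for (w1, w), pr in dist.items():
                new[(w1, w + 1)] = new.get((w1, w + 1), 0.0) + pr * w / total
                new[(w1, w)] = new.get((w1, w), 0.0) + pr * (1 - w / total)
            dist = new
        return dist

    t0 = 0.3                                     # boundary point to aim at
    for n in (10, 100, 1000):
        dist = urn_joint(n)
        k = max(2, round(t0 * n))                # ray k_n ~ t0 * n
        white_first = dist.get((2, k), 0.0)
        black_first = dist.get((1, k), 0.0)
        print(n, round(white_first / (white_first + black_first), 4))
    # P(W_1 = 2 | W_n = k_n) approaches t0; by exchangeability the exact
    # value is (k_n - 1) / n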

  5. Graph limits
     • Let F = G be the family of finite simple graphs.
     • Let V(G) and E(G) be the vertices and edges of G ∈ G.
     • For G, H ∈ G let Γ(H, G) be the set of injective functions φ : V(H) → V(G).
     • Let T(H, G) be the set of all φ ∈ Γ(H, G) that satisfy

           {i, j} ∈ E(H)  ⇔  {φ(i), φ(j)} ∈ E(G).

     • Let t(H, G) := #T(H, G) / #Γ(H, G).
     In words: sample #V(H) vertices from V(G) without replacement; then t(H, G) is the probability that the induced subgraph is isomorphic to H.
     The subgraph sampling topology: (G_n)_{n∈N} ⊂ G converges iff (t(H, G_n))_{n∈N} converges for all H ∈ G. (Aldous, Lovász, ...)
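
A direct brute-force implementation of t(H, G), feasible for small graphs only, may help fix the definition. The graph encoding below (a vertex list plus a set of frozenset edges) is our choice, not the talk's.

    from itertools import combinations, permutations

    def t(H, G):
        # share of injections phi: V(H) -> V(G) under which edges and
        # non-edges of H map to edges and non-edges of G,
        # i.e. #T(H, G) / #Gamma(H, G)
        (VH, EH), (VG, EG) = H, G
        hits = total = 0
        for image in permutations(VG, len(VH)):   # all injective maps
            phi = dict(zip(VH, image))
            total += 1
            if all((frozenset((i, j)) in EH) == (frozenset((phi[i], phi[j])) in EG)
                   for i, j in combinations(VH, 2)):
                hits += 1
        return hits / total

    K2 = ([1, 2], {frozenset((1, 2))})                        # a single edge
    P3 = ([1, 2, 3], {frozenset((1, 2)), frozenset((2, 3))})  # path 1 - 2 - 3
    print(t(K2, P3))   # 4 of the 6 injections hit an edge -> 0.666...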

  6. A structural view
     • Let F be a family of bounded functions f : F → R.
     • Embed F into R^F via evaluation, y ↦ (f ↦ f(y)).
     • Doob-Martin: F = { K(x, ·) : x ∈ F }, where

           K(x, y) = P(X_n = y | X_m = x) / P(X_n = y),     x ∈ F_m, y ∈ F_n,

       is the Martin kernel.
     • Subgraph sampling: F = { t(H, ·) : H ∈ G }.
     General questions for a given graph model of the Markovian type, growing one node at a time:
     (a) Are these topologies the same?
     (b) What is the respective tail σ-field?
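
A step that the slides leave implicit connects this embedding to the convergence criterion of slide 4 (a standard identity, by Bayes' rule and then the Markov property):

         K(x, y) = P(X_n = y | X_m = x) / P(X_n = y) = P(X_m = x | X_n = y) / P(X_m = x),

         P(X_1 = x_1, ..., X_l = x_l | X_n = y_n) = P(X_1 = x_1, ..., X_l = x_l) · K(x_l, y_n).

Hence convergence of the conditional probabilities in slide 4 is exactly pointwise convergence of the functions K(·, y_n) on F, which is why the Doob-Martin compactification arises from the evaluation embedding applied to F = { K(x, ·) : x ∈ F }.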

  7. Graphs: The Erdős-Rényi model
     Model description, as a Markov chain that grows by one node at a time, with parameter p, 0 < p < 1:
     • X^ER_1 is the single element of G_1.
     • To move from X^ER_n to X^ER_{n+1},
       – add the node n + 1,
       – add the edges {i, n+1}, i ∈ {1, ..., n}, independently and with probability p,
       – randomly relabel V(X^ER_{n+1}).
     Theorem. For X^ER = (X^ER_n)_{n∈N} the Doob-Martin and the graph testing topology coincide. In particular, T(X^ER) is trivial.
     – If we omit the relabelling, then the Markov chain is of the complete memory type, and the Doob-Martin boundary is the projective limit ('the sequence is the limit').
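
A short simulation sketch of this chain (names illustrative, encoding as in the t(H, G) example). The random relabelling step is what distinguishes this chain from the complete-memory variant mentioned at the end of the slide; the empirical edge density stays near p either way.

    import random

    def er_step(edges, n, p, rng=random):
        # one transition: vertex n + 1 arrives, attaches to each old vertex
        # independently with probability p, then all n + 1 labels are
        # permuted uniformly at random
        grown = set(edges)
        for i in range(1, n + 1):
            if rng.random() < p:
                grown.add(frozenset((i, n + 1)))
        labels = list(range(1, n + 2))
        rng.shuffle(labels)
        sigma = dict(zip(range(1, n + 2), labels))
        return {frozenset(sigma[v] for v in e) for e in grown}

    random.seed(1)
    edges, n = set(), 1                    # X_1: one vertex, no edges
    while n < 100:
        edges = er_step(edges, n, p=0.5)
        n += 1
    print(len(edges) / (n * (n - 1) / 2))  # edge density, close to p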

  8. Graphs: The uniform attachment model
     • Again, X^ua_1 is the single element of G_1.
     • Construct X^ua_{n+1} from X^ua_n by adding all edges {i, j} ⊂ [n+1] not yet in X^ua_n, independently and with probability 1/(n+1).
     Let 1_{i,j}, 1 ≤ i < j, be the edge indicator functions.
     Theorem. In the Doob-Martin topology associated with X^ua, convergence of a sequence of graphs is equivalent to the pointwise convergence of all edge indicator functions. Further, T(X^ua) is trivial.
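
The following sketch simulates the chain and tracks the edge indicators among the first four vertices (encoding and names as before, our choice). Along the chain itself the indicators are monotone, since edges are never removed; and because an absent edge keeps receiving independent chances with probabilities 1/(n+1) summing to infinity, each indicator eventually locks at 1.

    import random
    from itertools import combinations

    def ua_step(edges, n, rng=random):
        # one transition: on the vertex set {1, ..., n + 1}, every edge
        # not yet present is added independently with probability 1/(n + 1)
        grown = set(edges)
        for i, j in combinations(range(1, n + 2), 2):
            e = frozenset((i, j))
            if e not in grown and rng.random() < 1 / (n + 1):
                grown.add(e)
        return grown

    random.seed(2)
    edges, n = set(), 1
    for checkpoint in (10, 50, 200):
        while n < checkpoint:
            edges = ua_step(edges, n)
            n += 1
        present = sorted(tuple(sorted(e)) for e in edges if max(e) <= 4)
        print(checkpoint, present)   # indicators among vertices 1..4 only
                                     # switch on, never off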
