Building Random Trees from Blocks
Mohan Gopaladesikan, Department of Statistics, Purdue University
18th September 2012
Joint work with Dr. Hosam Mahmoud (Department of Statistics, The George Washington University) and Dr. Mark Daniel Ward.


  1. Sketch of Proof. We color the leaves white (W) and the internal nodes blue (B). This will enable us to use the powerful theory of Pólya urns. Suppose $T_i$ has $\ell_i$ leaves (and consequently it has $t - \ell_i$ internal nodes). Let $\Lambda_C$ be the number of leaves in a randomly chosen block, i.e., $\Lambda_C$ has probability mass $P(\Lambda_C = \ell) = \sum_j p_j$, where the sum is taken over all $j$ such that block $T_j$ has $\ell$ leaves.

  2-5. Sketch of Proof. For example:

Figure: A collection of building blocks of size 4, with probabilities 1/3 and 2/3, respectively.

$P(\Lambda_C = 2) = 1/3$ and $P(\Lambda_C = 3) = 2/3$.

The replacement matrix for this urn, with rows and columns indexed by (W, B):
$$A = \begin{pmatrix} \Lambda_C - 1 & t - \Lambda_C + 1 \\ \Lambda_C & t - \Lambda_C \end{pmatrix}.$$
Every row sum of this matrix is the constant $t$. Balanced urns!

  6-8. Sketch of Proof. For balanced urns it is shown by [Athreya 1968] that
$$\frac{W_n}{n} \xrightarrow{a.s.} \lambda_1 v_1,$$
where $\lambda_1$ is the principal (largest real) eigenvalue of the average of the replacement matrix, and $(v_1, v_2)$ is the corresponding left eigenvector of $E[A]$. Calculating these for our blocks-tree problem gives us $\lambda_1 = t$ and $\lambda_2 = -1$, and hence
$$\frac{W_n}{n} \xrightarrow{a.s.} \frac{t}{t+1}\,E[\Lambda_C].$$
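As a quick sanity check on the eigenvalue computation, the following sketch forms $E[A]$ exactly for the example collection ($t = 4$, with $\Lambda_C = 2$ w.p. $1/3$ and $3$ w.p. $2/3$) and recovers $\lambda_1 = t$, $\lambda_2 = -1$, and the almost-sure limit $\lambda_1 v_1 = t\,E[\Lambda_C]/(t+1)$. The code is our own illustration, not part of the talk.

```python
from fractions import Fraction

t = 4
# Leaf count of a random block: 2 leaves w.p. 1/3, 3 leaves w.p. 2/3.
E_Lam = Fraction(1, 3) * 2 + Fraction(2, 3) * 3          # E[Lambda_C] = 8/3

# Mean replacement matrix E[A], rows/columns indexed (W, B).
EA = [[E_Lam - 1, t - E_Lam + 1],
      [E_Lam,     t - E_Lam]]
assert all(sum(row) == t for row in EA)                  # balanced urn

# Eigenvalues of a 2x2 matrix from trace and determinant:
# lambda^2 - tr*lambda + det = 0.
tr = EA[0][0] + EA[1][1]
det = EA[0][0] * EA[1][1] - EA[0][1] * EA[1][0]
disc = tr * tr - 4 * det
sqrt_disc = Fraction(5)                                  # disc = 25 here
assert sqrt_disc * sqrt_disc == disc
lam1 = (tr + sqrt_disc) / 2
lam2 = (tr - sqrt_disc) / 2

# Left eigenvector (v1, v2) for lam1, normalized so v1 + v2 = 1:
# v1*EA[0][0] + v2*EA[1][0] = lam1*v1  =>  v2/v1 = (lam1 - EA[0][0])/EA[1][0]
ratio = (lam1 - EA[0][0]) / EA[1][0]
v1 = 1 / (1 + ratio)
limit = lam1 * v1                                        # a.s. limit of W_n / n
```

The exact arithmetic confirms the slide's claim that the limit equals $t\,E[\Lambda_C]/(t+1) = 32/15$ for this collection.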

  9. Sketch of Proof. Further, we also notice that $\Re\lambda_2 < \frac{1}{2}\lambda_1$. [Smythe 1996] showed that
$$\frac{W_n - \lambda_1 v_1 n}{\sqrt{n}} \xrightarrow{D} N(0, \sigma^2)$$
for some variance $\sigma^2$, and states that $\sigma^2$ is generally hard to compute; we, however, will obtain the exact variance of $W_n$.

  10-16. Sketch of Proof. Let $I_n^{(W)}$ be the indicator of the event that a white ball is picked at the $n$th draw. Then
$$W_n = W_{n-1} - I_n^{(W)} + \Lambda_C,$$
$$E[W_n \mid W_{n-1}] = W_{n-1} - E\big[I_n^{(W)} \mid W_{n-1}\big] + E[\Lambda_C] = W_{n-1} - \frac{W_{n-1}}{t(n-1)} + E[\Lambda_C],$$
$$E[W_n] = \frac{t(n-1)-1}{t(n-1)}\,E[W_{n-1}] + E[\Lambda_C], \quad \text{for } n \ge 2.$$
Solving this recurrence, we get
$$E[W_n] = \frac{t\,E[\Lambda_C]}{t+1}\,n + O\big(n^{-1/t}\big).$$
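The mean recurrence can be iterated directly. A small sketch (our own check, using the example collection's values $t = 4$ and $E[\Lambda_C] = 8/3$) confirms that $E[W_n]$ settles onto the line with slope $t\,E[\Lambda_C]/(t+1) = 32/15$, with the correction already tiny by $n = 1000$:

```python
t = 4
E_Lam = 8 / 3                      # E[Lambda_C] for the example collection
slope = t * E_Lam / (t + 1)        # predicted linear growth rate, 32/15

EW = E_Lam                         # E[W_1]: the first block's leaf count
for n in range(2, 1001):
    EW = (t * (n - 1) - 1) / (t * (n - 1)) * EW + E_Lam

# The O(n^{-1/t}) correction is already small at n = 1000.
error = abs(EW - slope * 1000)
```

One can verify by substitution that $E[W_n] - \frac{t E[\Lambda_C]}{t+1} n$ satisfies a purely multiplicative recurrence with factors $1 - \frac{1}{t(n-1)}$, which is where the $O(n^{-1/t})$ rate comes from.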

  17-21. Sketch of Proof. The second moment:
$$W_n^2 = W_{n-1}^2 + I_n^{(W)} + \Lambda_C^2 + 2\Lambda_C W_{n-1} - 2\Lambda_C I_n^{(W)} - 2 W_{n-1} I_n^{(W)}.$$
Taking expectations, the second moment is obtained in the same fashion as the first moment. Subtracting the square of the mean, we get the variance:
$$\mathrm{Var}[W_n] = E[W_n^2] - E[W_n]^2 \sim tn\left(\frac{\mathrm{Var}[\Lambda_C]}{t+2} + \frac{E[\Lambda_C]\big(t+1-E[\Lambda_C]\big)}{(1+t)^2(2+t)}\right) + O\big(n^{1-\epsilon}\big) =: \sigma_C^2\, n.$$
Hence the theorem follows!
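To check the variance asymptotics, one can iterate exact first- and second-moment recurrences that follow from $W_n = W_{n-1} - I_n^{(W)} + \Lambda_C$ (the recurrences below are our own derivation, using independence of $\Lambda_C$ from the draw, not formulas from the talk) and compare $\mathrm{Var}[W_n]/n$ against $\sigma_C^2$ for the example collection:

```python
t = 4
E_Lam, E_Lam2 = 8 / 3, 22 / 3            # E[Lambda_C], E[Lambda_C^2] for the example
var_Lam = E_Lam2 - E_Lam ** 2            # = 2/9

# sigma_C^2 as reconstructed above.
sigma2 = t * var_Lam / (t + 2) \
    + t * E_Lam * (t + 1 - E_Lam) / ((1 + t) ** 2 * (2 + t))

EW, EW2 = E_Lam, E_Lam2                  # moments of W_1 = Lambda_C
N = 4000
for n in range(2, N + 1):
    q = 1 / (t * (n - 1))                # E[I_n | F_{n-1}] = q * W_{n-1}
    # second moment first, since it uses the old first moment
    EW2 = (1 - 2 * q) * EW2 + (q + 2 * E_Lam * (1 - q)) * EW + E_Lam2
    EW = (1 - q) * EW + E_Lam

var_slope = (EW2 - EW ** 2) / N          # should approach sigma_C^2
```

For this collection $\sigma_C^2 = 212/675 \approx 0.314$; the iterated variance slope agrees, which also cross-checks the reconstruction against the known recursive-tree case ($t = 1$, $\Lambda_C \equiv 1$, giving the classical $n/12$).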

  22-23. Depth
Definition. The depth of a node is the length of the shortest path from the node to the root.

Theorem. Let $D_n$ be the depth of the root of the $n$th inserted block in a random tree built from the building blocks $T_1, \ldots, T_k$, which are selected at each step with probabilities $p_1, \ldots, p_k$. Let $E[\Delta_C]$ be the average depth of a node in the given collection, and $\mathrm{Var}[\Delta_C]$ be the variance of that depth. Then
$$\frac{D_n - \big(E[\Delta_C]+1\big)\ln n}{\sqrt{\ln n}} \xrightarrow{D} N\Big(0,\ \mathrm{Var}[\Delta_C] + \big(E[\Delta_C]+1\big)^2\Big).$$

  24-30. Sketch of Proof. At step $n$, the newcomer can join any of the $n-1$ existing blocks. The root of the $n$th block inherits the depth of one of the existing blocks, adjusted by the depth of the node it chooses as parent within that block, plus an extra 1. We call the block to which the parent belongs the "parent block"; it can be any member $i$ of the collection, with probability $p_i$. Let $\delta_n$ be the random depth at which the $n$th parent node appears in its own block. The variables $\delta_1, \delta_2, \ldots$ are equidistributed: each $\delta_n$ has the same distribution as $\Delta_C$ (representing the depth of a parent node), which is completely determined by the structure of the trees in the collection.

  31-36. For example:

Figure: A collection of building blocks of size 4, with probabilities 1/3 and 2/3, respectively.

$$\Delta_C = \begin{cases} 0, & \text{with probability } 3/12; \\ 1, & \text{with probability } 7/12; \\ 2, & \text{with probability } 2/12. \end{cases}$$

For each $i \in \{1, \ldots, n-1\}$,
$$D_n = D_i + \delta_n + 1 \quad \text{with probability } 1/(n-1).$$
Hence the moment generating function of $D_n$ conditioned on $\mathcal F_{n-1}$, the sigma-field generated by the first $n-1$ insertions, would be
$$E\big[e^{D_n u} \mid \mathcal F_{n-1}\big] = \frac{1}{n-1}\sum_{i=1}^{n-1} E\big[e^{(D_i + \delta_n + 1)u} \,\big|\, \mathcal F_{n-1}\big] = \frac{E\big[e^{(\delta_n+1)u}\big]}{n-1}\sum_{i=1}^{n-1} e^{D_i u}.$$
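The figure itself is lost in this transcript, but the stated distributions pin the two blocks down. In the sketch below the shapes are reconstructed, not copied from the figure: $T_1$ (probability 1/3) is a root with one child that has two children, and $T_2$ (probability 2/3) is a star with three children. The code recomputes $\Lambda_C$, $\Delta_C$, and the total-path-length variable $\chi_C$ used later in the talk, and they match every value stated on the slides:

```python
from fractions import Fraction
from collections import defaultdict

# Reconstructed example blocks (size t = 4) as in-block parent-pointer
# lists: entry i is the parent of node i, None marks the block root.
blocks = [([None, 0, 1, 1], Fraction(1, 3)),   # T1: root -> child -> 2 grandchildren
          ([None, 0, 0, 0], Fraction(2, 3))]   # T2: star with three children

def depths(parents):
    d = []
    for p in parents:
        d.append(0 if p is None else d[p] + 1)
    return d

lam = defaultdict(Fraction)    # P(Lambda_C = number of leaves)
delta = defaultdict(Fraction)  # P(Delta_C = depth of a uniform node)
chi = defaultdict(Fraction)    # P(chi_C = total path length of the block)
for parents, p in blocks:
    d = depths(parents)
    n_leaves = sum(1 for i in range(len(parents)) if i not in parents)
    lam[n_leaves] += p
    chi[sum(d)] += p
    for dv in d:                              # node uniform within the block
        delta[dv] += p * Fraction(1, len(parents))
```

All three distributions come out consistent, which is a good internal check that the reconstructed shapes are the ones in the lost figure.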

  37-41. Sketch of Proof. Taking expectation again,
$$\phi_{D_n}(u) = \frac{e^u\,\psi_C(u)}{n-1}\sum_{i=1}^{n-1}\phi_{D_i}(u),$$
valid for $n \ge 2$, with the initial condition $\phi_{D_1}(u) = 1$ (here $\psi_C$ denotes the moment generating function of $\Delta_C$). Solving this full-history recurrence, after appropriate centering and scaling we get
$$E\left[\exp\left(u\,\frac{D_n - \big(E[\Delta_C]+1\big)\ln n}{\sqrt{\ln n}}\right)\right] \to e^{\frac{1}{2}\left(\mathrm{Var}[\Delta_C] + (E[\Delta_C]+1)^2\right)u^2}.$$
Therefore we have the theorem:
$$\frac{D_n - \big(E[\Delta_C]+1\big)\ln n}{\sqrt{\ln n}} \xrightarrow{D} N\Big(0,\ \mathrm{Var}[\Delta_C] + \big(E[\Delta_C]+1\big)^2\Big).$$
Remark. The expressions for the mean and variance are valid even if $t = t(n)$ grows with $n$. However, in the asymptotic derivation of the central limit theorem we have to keep $t$ relatively small compared to $n$.
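Differentiating the conditional MGF relation at $u = 0$ gives the mean recurrence $E[D_n] = \frac{1}{n-1}\sum_{i<n} E[D_i] + E[\Delta_C] + 1$, and a short induction shows its solution is exactly $(E[\Delta_C]+1)H_{n-1}$ (harmonic number), consistent with the $\ln n$ centering in the theorem. A sketch checking this numerically for the example collection ($E[\Delta_C] = 11/12$); the closed form is our own verification, not stated on the slides:

```python
E_delta = 11 / 12                 # E[Delta_C] for the example collection
c = E_delta + 1                   # centering constant in the CLT

ED_sum = 0.0                      # running sum E[D_1] + ... + E[D_{n-1}]
ED = 0.0                          # E[D_1] = 0: the first root sits at depth 0
H = 0.0                           # harmonic number H_{n-1}
max_err = 0.0
for n in range(2, 2001):
    ED_sum += ED
    H += 1 / (n - 1)              # now H = H_{n-1}
    ED = ED_sum / (n - 1) + c     # mean recurrence from the MGF relation
    max_err = max(max_err, abs(ED - c * H))
```

The recurrence and the closed form $(E[\Delta_C]+1)H_{n-1}$ agree to floating-point precision for every $n$ tested.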

  42-43. Total path length
Definition. The total path length of a tree is the sum of the depths of all the nodes.

Theorem. Let $X_n$ be the total path length of a tree built from the blocks of a collection $C$. Then there is an absolutely integrable random variable $X$ such that
$$\frac{X_n}{n} - \big(t + E[\chi_C]\big)H_n + t$$
converges to $X$, both in $L^2$ and almost surely (here $H_n$ denotes the $n$th harmonic number).

  44-46. Sketch of Proof. Let $x_i$ be the total path length of a block $T_i$. Let $\chi_C$ be a discrete random variable that assumes the value $x_i$ with probability $\sum_j p_j$, where the sum is taken over every block $T_j$ with total path length $x_i$. For example:

Figure: A collection of building blocks of size 4, with probabilities 1/3 and 2/3, respectively.

$P(\chi_C = 5) = 1/3$ and $P(\chi_C = 3) = 2/3$.

  47-51. Sketch of Proof. Let $X_n$ be the total path length of the entire tree $\mathcal T_n$ built from the first $n$ block insertions. If the $n$th block is adjoined to a node $v \in \mathcal T_{n-1}$ at depth $\tilde D(v)$ in the tree $\mathcal T_{n-1}$, then each node in the last inserted block appears at distance equal to $\tilde D(v) + 1$, plus its own depth in the last block. So we have the following stochastic recurrence for $n \ge 2$:
$$E[X_n \mid \mathcal F_{n-1}] = X_{n-1} + \frac{1}{t(n-1)}\sum_{v \in \mathcal T_{n-1}} t\big(\tilde D(v) + 1\big) + E[\chi_C \mid \mathcal F_{n-1}] = X_{n-1} + \frac{X_{n-1}}{n-1} + t + E[\chi_C].$$
Taking expectations again,
$$E[X_n] = \frac{n}{n-1}\,E[X_{n-1}] + t + E[\chi_C].$$
Solving, we get
$$E[X_n] = \big(t + E[\chi_C]\big)\,n H_n - nt \sim \big(t + E[\chi_C]\big)\,n \ln n.$$
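The closed form follows by induction on the recurrence. A sketch verifying it in exact arithmetic for the example collection ($t = 4$, $E[\chi_C] = 5\cdot\frac{1}{3} + 3\cdot\frac{2}{3} = 11/3$); the code is our own check:

```python
from fractions import Fraction

t = 4
E_chi = Fraction(11, 3)           # E[chi_C] for the example collection

EX = E_chi                        # E[X_1]: one block's own path length
H = Fraction(1)                   # harmonic number H_1
ok = EX == (t + E_chi) * 1 * H - 1 * t   # closed form at n = 1
for n in range(2, 200):
    EX = Fraction(n, n - 1) * EX + t + E_chi   # mean recurrence
    H += Fraction(1, n)                        # H_n
    ok = ok and EX == (t + E_chi) * n * H - n * t
```

The identity $E[X_n] = (t + E[\chi_C])\,n H_n - nt$ holds exactly at every $n$, not just asymptotically.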

  52-54. Sketch of Proof. Let $\check D_n$ be the depth of the node chosen as parent for the root of the $n$th block. Then we have the stochastic recurrence
$$X_n = X_{n-1} + t\big(\check D_n + 1\big) + \chi_C.$$
Squaring and taking double expectation, we get
$$E[X_n^2] = \frac{n+1}{n-1}\,E[X_{n-1}^2] + \frac{2n}{n-1}\big(E[\chi_C]+t\big)E[X_{n-1}] + t^2 E[\check D_n^2] + t^2 + 2t\,E[\chi_C] + E[\chi_C^2].$$

  55. Sketch of Proof. We develop a separate recurrence for $E[\check D_n^2]$; solving for $E[X_n^2]$, we get
$$\mathrm{Var}[X_n] \sim \Big(t^2 E[\Delta_C^2] + t^2\big(E[\Delta_C]\big)^2 + \mathrm{Var}[\chi_C] + 2t^2 + 3t^2 E[\Delta_C] + t\,E[\chi_C] + t\,E[\chi_C]\,E[\Delta_C] - \frac{\pi^2}{6}\big(E[\chi_C]+t\big)^2\Big)\, n^2.$$

  56-61. Sketch of Proof. We know the conditional relation
$$E[X_n \mid \mathcal F_{n-1}] = X_{n-1} + \frac{X_{n-1}}{n-1} + t + E[\chi_C].$$
Hence we see that
$$X_n^* = \frac{X_n}{n} - \big(t + E[\chi_C]\big)H_n + t$$
is a martingale. From the variance expression we see that
$$E\big[(X_n^*)^2\big] = c + o(1).$$
Hence we have $\sup_{n \ge 1} E\big[(X_n^*)^2\big] < \infty$, and the theorem follows from Doob's martingale convergence theorem.
Remark. The expressions for the mean and variance are valid even when $t = t(n)$ is no longer fixed but grows with $n$.

  62-63. Height
Definition. The height of the tree is the maximum depth among all the existing nodes:
$$H_n = \max_{v \in \mathcal T_n} D(v).$$

Theorem. Let $H_n$ be the height of a random tree built from the building blocks $T_1, \ldots, T_k$, which are selected at each step with probabilities $p_1, \ldots, p_k$. We then have
$$\frac{H_n}{\ln n} \xrightarrow{a.s.} e\big(E[\Delta_C] + 1\big).$$

  64-71. Sketch of Proof. Recall a standard recursive tree: it grows out of a root node in steps; at each step, a new node is added by choosing a parent node from the existing tree at random (all nodes equally likely). If $\hat H_n$ is the height of the recursive tree, [Pittel 1994] showed that
$$\frac{\hat H_n}{\ln n} \xrightarrow{a.s.} e.$$
Bursting argument: The blocks tree can be obtained from a recursive tree by "bursting" its nodes. Each node in the recursive tree is replaced with (bursts into) a building block, with block $T_i$ being chosen with probability $p_i$. Then each child of that node in the recursive tree independently chooses a parent in the parent block at random, with all nodes of that block equally likely (each may be taken as parent with probability $1/t$). When a child of a node bursts into a block, it is its root that gets joined to the randomly chosen parent in the parent block.
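The bursting construction is easy to express in code. In the sketch below the block shapes are the reconstructed example blocks (an assumption, since the figure is lost), and the parent choice exploits the fact that all blocks share size $t$, so "uniform block, then uniform node within it" is the same as "uniform node overall":

```python
import random

# Reconstructed example blocks as in-block parent-pointer lists
# (None marks the block root); weights give probabilities 1/3 and 2/3.
SHAPES = [[None, 0, 1, 1], [None, 0, 0, 0]]
WEIGHTS = [1, 2]
T = 4                                          # every block has t = 4 nodes

def blocks_tree(n, rng):
    """Grow a blocks tree of n blocks: the nth block's root attaches to a
    node chosen uniformly among all T*(n-1) existing nodes."""
    parent = []                                # global parent pointers
    for step in range(n):
        block = rng.choices(SHAPES, weights=WEIGHTS)[0]
        base = len(parent)
        attach = None if step == 0 else rng.randrange(base)
        parent.extend(attach if p is None else base + p for p in block)
    return parent

rng = random.Random(2012)
par = blocks_tree(100, rng)

def depth(v):
    d = 0
    while par[v] is not None:
        v = par[v]
        d += 1
    return d

height = max(depth(v) for v in range(len(par)))
```

Because every parent pointer refers to an earlier index, the construction is guaranteed acyclic, and the tree has exactly $tn$ nodes after $n$ block insertions.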
