BACK TO OUR TOPIC (REVENONS À NOS MOUTONS)

Let X be the number of isolated vertices in G(n, p), where p(n) = (log n + γ(n))/n and γ → −∞. Then E X = (1 + o(1)) e^{−γ} → ∞. What about E X(X − 1)?

E X(X − 1) = n(n − 1)(1 − p)^{2(n−1)−1} = (n(1 − p)^{n−1})² · (n − 1)/(n(1 − p)) = (1 + o(1))(E X)².

Thus, by the second moment method, Pr(X > 0) → 1.
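The first-moment prediction E X = n(1 − p)^{n−1} ≈ e^{−γ} can be checked numerically. A minimal sketch; the fixed values n = 300, γ = −2, the trial count, and the seed stand in for the limit γ → −∞ and are illustrative only:

```python
import math
import random

def isolated_count(n, p, rng):
    """Sample G(n, p) once and count its isolated vertices."""
    deg = [0] * n
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                deg[u] += 1
                deg[v] += 1
    return sum(1 for d in deg if d == 0)

n = 300
gamma = -2.0                         # stands in for the regime gamma -> -infinity
p = (math.log(n) + gamma) / n

rng = random.Random(0)
trials = 200
avg = sum(isolated_count(n, p, rng) for _ in range(trials)) / trials

# First-moment prediction: E X = n (1 - p)^(n-1), which is (1 + o(1)) e^{-gamma}.
exact = n * (1 - p) ** (n - 1)
print(avg, exact, math.exp(-gamma))
```

The empirical average, the exact first moment, and e^{−γ} should all be close for moderate n.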
THE ERDŐS-RÉNYI THEOREM IS FINALLY PROVED!

THEOREM (ERDŐS, RÉNYI '59). Let p(n) = (ln n + γ(n))/n. Then
(i) if γ → −∞, then aas G(n, p) contains isolated vertices (and so aas it is not connected);
(ii) if γ → ∞, then aas G(n, p) is connected (and so contains no isolated vertices).

Can we formulate (and prove) an even stronger result which relates connectivity to the absence of isolated vertices?
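Both regimes of the theorem can be probed empirically. A minimal simulation sketch; the choices n = 500, γ = ±4, the trial count, and the seed are arbitrary illustrative values:

```python
import math
import random

def connected_and_isolated(n, p, rng):
    """Sample G(n, p); return (is connected?, has an isolated vertex?)."""
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    # Depth-first search from vertex 0 to test connectivity.
    seen = [False] * n
    seen[0] = True
    stack = [0]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                stack.append(v)
    return all(seen), any(not adj[u] for u in range(n))

n = 500
rng = random.Random(6)
above = [connected_and_isolated(n, (math.log(n) + 4) / n, rng) for _ in range(20)]
below = [connected_and_isolated(n, (math.log(n) - 4) / n, rng) for _ in range(20)]
print(sum(c for c, _ in above), "of 20 samples connected above the threshold")
print(sum(i for _, i in below), "of 20 samples with isolated vertices below it")
```

Above the threshold nearly all samples come out connected; below it every sample has isolated vertices (E X ≈ e⁴ ≈ 55 of them).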
THE HITTING TIME

THE RANDOM GRAPH PROCESS. G(n, M) can be viewed as the (M + 1)-st stage of a Markov chain {G(n, M) : 0 ≤ M ≤ (n choose 2)}, in which we add the edges to a graph in a random order.

THE HITTING TIME. Let h_1 = min{M : δ(G(n, M)) ≥ 1} and h_conn = min{M : G(n, M) is connected}. Note that both h_1 and h_conn are random variables!

THEOREM (ERDŐS, RÉNYI; BOLLOBÁS). Aas h_1 = h_conn.
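The hitting-time theorem can be probed by simulation: run the process on a random permutation of the edges, recording the first time the minimum degree reaches 1 and the first time the graph becomes connected. A sketch using union-find; n = 100, the number of runs, and the seed are illustrative choices:

```python
import random

class DSU:
    """Union-find structure tracking the number of components."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.comps = n
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb
            self.comps -= 1

def hitting_times(n, rng):
    """Run one random graph process; return (h_1, h_conn)."""
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)]
    rng.shuffle(edges)
    deg = [0] * n
    isolated = n
    dsu = DSU(n)
    h1 = None
    for m, (u, v) in enumerate(edges, start=1):
        for x in (u, v):
            if deg[x] == 0:
                isolated -= 1
            deg[x] += 1
        dsu.union(u, v)
        if h1 is None and isolated == 0:
            h1 = m                      # minimum degree just reached 1
        if dsu.comps == 1:
            return h1, m                # graph just became connected
    return h1, None

rng = random.Random(1)
results = [hitting_times(100, rng) for _ in range(50)]
agree = sum(h1 == hc for h1, hc in results)
print(agree, "of 50 runs had h_1 == h_conn")
```

Note h_1 ≤ h_conn holds deterministically (a connected graph has no isolated vertices); the theorem says equality holds aas, and even at n = 100 most runs agree.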
{G(n, p) : 0 ≤ p ≤ 1}

THE RANDOM GRAPH PROCESS (FOR G(n, p)). G(n, p) can also be viewed as a stage of a Markov process {G(n, p) : 0 ≤ p ≤ 1}.

THE HITTING TIMES FOR G(n, p). We can define ĥ_1 = min{p : δ(G(n, p)) ≥ 1} and ĥ_conn = min{p : G(n, p) is connected}. As in the case of h_1 and h_conn, both ĥ_1 and ĥ_conn are random variables, but they take values in the interval [0, 1].

THE HITTING TIMES. However, the statement that aas h_1 = h_conn is clearly equivalent to the statement that aas ĥ_1 = ĥ_conn.
THE RANDOM GRAPH PROCESS: COUPLING

Since we can view G(n, M) as a stage of the random graph process, for M_1 ≤ M_2 we have G(n, M_1) ⊆ G(n, M_2), and we can make sense of this statement! In a similar way, for p_1 ≤ p_2 we have G(n, p_1) ⊆ G(n, p_2).
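The coupling for G(n, p) can be made concrete: give every edge an independent uniform label and let G(n, p) keep exactly the edges with label at most p, so p_1 ≤ p_2 yields nested graphs by construction. A minimal sketch (n = 30, the two values of p, and the seed are arbitrary):

```python
import random

def coupled_graphs(n, ps, rng):
    """One uniform label per edge; G(n, p) keeps the edges with label <= p.
    Since p1 <= p2 selects a superset of edges, the graphs are nested."""
    labels = {(u, v): rng.random() for u in range(n) for v in range(u + 1, n)}
    return [{e for e, x in labels.items() if x <= p} for p in ps]

rng = random.Random(2)
g1, g2 = coupled_graphs(30, [0.1, 0.3], rng)
print(g1 <= g2, len(g1), len(g2))   # containment holds by construction
```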
THE EVOLUTION OF THE RANDOM GRAPH

If M = o(√n), then aas G(n, M) consists of isolated vertices and isolated edges.
If M = o(n^{(k−1)/k}), then aas all components of G(n, M) are trees with at most k vertices.
If M = o(n), then aas all components of G(n, M) are trees of size o(log n).
THE SUBCRITICAL PHASE
THE CRITICAL PHASE
T HE SUPERCRITICAL PHASE
THE RIGHT SCALING

THEOREM (ERDŐS, RÉNYI '60). The "coagulation phase" takes place when M = (1/2 + o(1)) n. Thus, for instance, the largest component of G(n, 0.4999 n) aas has Θ(log n) vertices, while the size of the largest component of G(n, 0.5001 n) is aas Θ(n).

THEOREM (BOLLOBÁS '84, ŁUCZAK '90). The components start to merge when they are of size Θ(n^{2/3}); this happens when M = n/2 + Θ(n^{2/3}).
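The jump in the size of the largest component around M = n/2 is easy to see in simulation. A sketch with union-find; the values n = 1000, M = 0.25n (well below n/2) and M = n (well above), and the seed are illustrative:

```python
import random

def largest_component(n, m, rng):
    """Largest component of G(n, M): insert m uniformly chosen edges,
    merging components with union-find and tracking their sizes."""
    parent = list(range(n))
    size = [1] * n
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    all_edges = [(u, v) for u in range(n) for v in range(u + 1, n)]
    for u, v in rng.sample(all_edges, m):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            size[rv] += size[ru]
    return max(size[find(x)] for x in range(n))

rng = random.Random(5)
n = 1000
sub = largest_component(n, n // 4, rng)   # M = 0.25 n: subcritical, O(log n) components
sup = largest_component(n, n, rng)        # M = n: a giant component of size Theta(n)
print(sub, sup)
```

Below n/2 the largest component stays logarithmic in n; above it, a linear-size giant component appears.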
TRIANGLES

THEOREM (ERDŐS, RÉNYI '60). If np → 0, then aas G(n, p) contains no triangles. If np → ∞, then aas G(n, p) contains a triangle.

This can easily be proved using the first and second moment methods we mastered ten minutes ago.

PROBLEM. How fast does the probability Pr(G(n, p) ⊉ K_3) tend to 0 as np → ∞?
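A brute-force count illustrates the two regimes; E[#triangles] = C(n, 3) p³ vanishes when np → 0 and blows up when np → ∞. A sketch (n = 120, the two densities, the trial count, and the seed are illustrative choices):

```python
import itertools
import random

def triangle_count(n, p, rng):
    """Sample G(n, p) and count its triangles by checking all vertex triples."""
    adj = [[False] * n for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u][v] = adj[v][u] = True
    return sum(1 for a, b, c in itertools.combinations(range(n), 3)
               if adj[a][b] and adj[a][c] and adj[b][c])

rng = random.Random(3)
n = 120
# np -> 0 regime: p = 1/n^2, so E[#triangles] ~ C(n,3)/n^6 is negligible.
sparse = sum(triangle_count(n, 1 / n**2, rng) for _ in range(20))
# np -> infinity regime (here np = 10): E[#triangles] = C(n,3) p^3, about 160.
dense = triangle_count(n, 10 / n, rng)
print(sparse, dense)
```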
LARGE DEVIATION INEQUALITIES

Let us consider a partition P of the set of all edges of K_n into small sets.
LIPSCHITZ CONDITION

Take any graph parameter A and compute, for each part of the partition, its "Lipschitz constant": constants c_1, c_2, ..., c_k, where c_i bounds how much A can change when only the edges inside the i-th part are altered.
EXAMPLES

Consider the partition of the set of edges into (n choose 2) singletons.
(i) The independence number α has Lipschitz constants 1, since changing one edge cannot affect it by more than 1.
(ii) The chromatic number χ also has Lipschitz constants 1.
(iii) The number of triangles has Lipschitz constants n − 2.
(iv) The size of a maximum family of edge-disjoint triangles has Lipschitz constants 1.
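Example (iii) can be sanity-checked by brute force: flipping a single edge uv changes the triangle count by exactly the number of common neighbours of u and v, which is at most n − 2. A small sketch (the graph size, density, trial count, and seed are arbitrary):

```python
import itertools
import random

def triangles(n, edges):
    """Number of triangles in a graph given as a set of sorted vertex pairs."""
    return sum(1 for a, b, c in itertools.combinations(range(n), 3)
               if {(a, b), (a, c), (b, c)} <= edges)

rng = random.Random(4)
n = 8
all_pairs = list(itertools.combinations(range(n), 2))
max_change = 0
for _ in range(200):
    edges = {e for e in all_pairs if rng.random() < 0.5}
    e = rng.choice(all_pairs)
    # Flip a single edge and see how much the triangle count moves.
    change = abs(triangles(n, edges) - triangles(n, edges ^ {e}))
    max_change = max(max_change, change)
print("max change from one edge flip:", max_change, "(bound: n - 2 =", n - 2, ")")
```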
EXAMPLES

Consider the partition of the set of edges into n − 1 stars.
(i) The independence number α has Lipschitz constants 1, since changing edges incident to one vertex cannot affect it by more than 1.
(ii) The chromatic number χ also has Lipschitz constants 1.
(iii) The number of triangles has Lipschitz constants (n − 1 choose 2).
(iv) The size of a maximum family of vertex-disjoint triangles has Lipschitz constants 1.
AZUMA'S INEQUALITY

AZUMA'S INEQUALITY. Let P be a partition, A be a graph parameter, and c_1, ..., c_k denote Lipschitz constants for P and A. Consider the random variable X = A(G(n, p)) for some p. Then, for every t,

Pr(|X − E X| ≥ t) ≤ 2 exp(−t² / (2 Σ_i c_i²)).

In particular,

Pr(X = 0) ≤ 2 exp(−(E X)² / (2 Σ_i c_i²)).
THE INDEPENDENCE AND CHROMATIC NUMBERS

AZUMA'S INEQUALITY. Let P be a partition, A be a graph parameter, and c_1, ..., c_k denote Lipschitz constants for P and A. Consider the random variable X = A(G(n, p)) for some p. Then, for every t,

Pr(|X − E X| ≥ t) ≤ 2 exp(−t² / (2 Σ_i c_i²)).

Applying it to the star partition, we get the following result.

TIGHT CONCENTRATION RESULTS. Let γ(n) → ∞. Then, for every p,

Pr(|α(G(n, p)) − E α(G(n, p))| ≥ γ√n) → 0, and
Pr(|χ(G(n, p)) − E χ(G(n, p))| ≥ γ√n) → 0.
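The derivation behind this application is short: the star partition has k = n − 1 parts, each with Lipschitz constant c_i = 1 for both α and χ, so Σ_i c_i² = n − 1. Taking t = γ√n in Azuma's inequality, a sketch in LaTeX:

```latex
\Pr\bigl(|X - \mathbb{E}X| \ge \gamma\sqrt{n}\bigr)
  \le 2\exp\!\left(-\frac{\gamma^2 n}{2(n-1)}\right)
  \le 2e^{-\gamma^2/2} \longrightarrow 0
  \qquad \text{as } \gamma(n) \to \infty .
```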
TALAGRAND'S INEQUALITY

AZUMA'S INEQUALITY. Let P be a partition, A be a graph parameter, and c_1, ..., c_k denote Lipschitz constants for P and A. Consider the random variable X = A(G(n, p)) for some p. Then, for every t,

Pr(|X − E X| ≥ t) ≤ 2 exp(−t² / (2 Σ_{i=1}^k c_i²)).

OUR AIM. We want to replace the full sum Σ_{i=1}^k c_i² by a partial sum of the c_i².

TALAGRAND'S INEQUALITY.

Pr(|X − μ_X| ≥ t) ≤ 4 exp(−t² / (4w)),

where μ_X is the median of X and w = max_Λ Σ_{i∈Λ} c_i², the maximum taken over all certificates Λ for A.
CERTIFICATES

Take any graph parameter A and find the sets Λ of parts of the partition (with Lipschitz constants c_1, ..., c_k) which can certify that A ≥ r.