Randomized Algorithms
Lecture 3: "Occupancy, Moments and deviations, Randomized selection"

Sotiris Nikoletseas, Associate Professor
CEID - ETY Course, 2013-2014
1. Some basic inequalities (I)

(i) $\left(1 + \frac{1}{n}\right)^n \le e$

Proof: It is $1 + x \le e^x$ for all $x \ge 0$. For $x = \frac{1}{n}$, we get
$$\left(1 + \frac{1}{n}\right)^n \le \left(e^{1/n}\right)^n = e$$

(ii) $\left(1 - \frac{1}{n}\right)^{n-1} \ge \frac{1}{e}$

Proof: Since $1 - \frac{1}{n} = \frac{n-1}{n}$, it suffices that $\left(\frac{n}{n-1}\right)^{n-1} \le e$. But $\frac{n}{n-1} = 1 + \frac{1}{n-1}$, so it suffices that $\left(1 + \frac{1}{n-1}\right)^{n-1} \le e$, which is true by (i).
1. Some basic inequalities (II)

(iii) $n! \ge \left(\frac{n}{e}\right)^n$

Proof: It is obviously $\frac{n^n}{n!} \le \sum_{i=0}^{\infty} \frac{n^i}{i!}$. But $\sum_{i=0}^{\infty} \frac{n^i}{i!} = e^n$, from the Taylor expansion of $f(x) = e^x$. Thus $n! \ge \frac{n^n}{e^n} = \left(\frac{n}{e}\right)^n$.

(iv) For any $k \le n$: $\left(\frac{n}{k}\right)^k \le \binom{n}{k} \le \left(\frac{ne}{k}\right)^k$

Proof: Indeed, $k \le n \Rightarrow \frac{n}{k} \le \frac{n-1}{k-1}$. Inductively, $k \le n \Rightarrow \frac{n}{k} \le \frac{n-i}{k-i}$ for $1 \le i \le k-1$. Thus
$$\left(\frac{n}{k}\right)^k \le \frac{n}{k} \cdot \frac{n-1}{k-1} \cdots \frac{n-(k-1)}{k-(k-1)} = \binom{n}{k}$$
For the right inequality we obviously have $\binom{n}{k} \le \frac{n^k}{k!}$, and by (iii) it is $k! \ge \left(\frac{k}{e}\right)^k$.
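These four bounds are easy to sanity-check numerically. A minimal sketch in Python (not part of the lecture; the function name and the test values are my own):

```python
import math

def check_basic_inequalities(n: int, k: int) -> None:
    """Numerically verify inequalities (i)-(iv) for given n and k <= n."""
    assert 1 <= k <= n
    # (i)  (1 + 1/n)^n <= e
    assert (1 + 1 / n) ** n <= math.e
    # (ii) (1 - 1/n)^(n-1) >= 1/e
    assert (1 - 1 / n) ** (n - 1) >= 1 / math.e
    # (iii) n! >= (n/e)^n
    assert math.factorial(n) >= (n / math.e) ** n
    # (iv) (n/k)^k <= C(n,k) <= (ne/k)^k
    binom = math.comb(n, k)
    assert (n / k) ** k <= binom <= (n * math.e / k) ** k

for n in (2, 10, 100):
    for k in (1, n // 2 or 1, n):
        check_basic_inequalities(n, k)
print("all inequalities hold on the tested values")
```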
2. Preliminaries (i)

Boole's inequality (or union bound)

Let $E_1, E_2, \ldots, E_n$ be random events. Then
$$\Pr\left\{\bigcup_{i=1}^{n} E_i\right\} = \Pr\{E_1 \cup E_2 \cup \cdots \cup E_n\} \le \sum_{i=1}^{n} \Pr\{E_i\}$$
Note: If the events are disjoint, then we get equality.
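As a toy illustration (my own example, not from the slides): let $E_i$ be the event that the $i$-th of $n$ die rolls shows a six. The union bound gives $\Pr\{\bigcup_i E_i\} \le n/6$, while the exact value is $1 - (5/6)^n$; a quick simulation confirms the bound:

```python
import random

def union_bound_demo(n: int = 4, trials: int = 100_000) -> None:
    # Event E_i: the i-th of n die rolls shows a six.
    hits = sum(
        any(random.randint(1, 6) == 6 for _ in range(n))
        for _ in range(trials)
    )
    empirical = hits / trials  # estimate of Pr{E_1 u ... u E_n}
    bound = n / 6              # union bound: sum of Pr{E_i} = n * (1/6)
    print(f"empirical {empirical:.3f} <= union bound {bound:.3f}")

union_bound_demo()
```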
2. Preliminaries (ii)

Expectation (or Mean)

Let $X$ be a discrete random variable. Its expectation is:
$$\mu_X = E[X] = \sum_{x} x \cdot \Pr\{X = x\}$$
If $X$ is continuous with probability density function (pdf) $f(x)$, then $\mu_X = \int_{-\infty}^{\infty} x f(x)\,dx$.
2. Preliminaries (ii)

Expectation (or Mean)

Properties:
- $\forall X_i\ (i = 1, 2, \ldots, n)$: $E\left[\sum_{i=1}^{n} X_i\right] = \sum_{i=1}^{n} E[X_i]$. This important property is called "linearity of expectation"; note that it needs no independence assumption (see the sketch after this list).
- $E[cX] = cE[X]$, where $c$ is a constant.
- If $X, Y$ are stochastically independent, then $E[X \cdot Y] = E[X] \cdot E[Y]$.
- Let $f(X)$ be a real-valued function of $X$. Then $E[f(X)] = \sum_{x} f(x) \Pr\{X = x\}$.
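A minimal sketch showing that linearity of expectation holds even for fully dependent variables (the choice $Y = X$ is my own example):

```python
import random

# X is a die roll and Y = X (fully dependent on X).
# Still E[X + Y] = E[X] + E[Y] = 3.5 + 3.5 = 7.
random.seed(0)
N = 100_000
total = 0
for _ in range(N):
    x = random.randint(1, 6)
    y = x  # deliberately dependent on x
    total += x + y
print(f"empirical E[X+Y] = {total / N:.3f}, E[X] + E[Y] = {3.5 + 3.5}")
```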
2. Preliminaries (iii)

Markov's inequality

Theorem: Let $X$ be a non-negative random variable. Then, $\forall t > 0$:
$$\Pr\{X \ge t\} \le \frac{E[X]}{t}$$
Proof:
$$E[X] = \sum_{x} x \Pr\{X = x\} \ge \sum_{x \ge t} x \Pr\{X = x\} \ge \sum_{x \ge t} t \Pr\{X = x\} = t \sum_{x \ge t} \Pr\{X = x\} = t \Pr\{X \ge t\}$$
Note: Markov is a (rather weak) concentration inequality, e.g.
$\Pr\{X \ge 2E[X]\} \le \frac{1}{2}$, $\Pr\{X \ge 3E[X]\} \le \frac{1}{3}$, etc.
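A quick empirical check of Markov's bound; taking $X$ exponential with $E[X] = 1$ is my own choice, any non-negative distribution would do:

```python
import random

random.seed(0)
samples = [random.expovariate(1.0) for _ in range(200_000)]  # X >= 0, E[X] = 1
mean = sum(samples) / len(samples)
for c in (2, 3, 5):
    t = c * mean
    tail = sum(x >= t for x in samples) / len(samples)  # estimate of Pr{X >= t}
    print(f"Pr{{X >= {c}E[X]}} = {tail:.4f} <= 1/{c} = {1/c:.4f}")
```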
2. Preliminaries (iv)

Variance (or second moment)

Definition: $Var(X) = E[(X - \mu)^2]$, where $\mu = E[X]$; i.e., it measures (statistically) deviations from the mean.

Properties:
- $Var(X) = E[X^2] - E^2[X]$
- $Var(cX) = c^2 Var(X)$, where $c$ is a constant.
- If $X, Y$ are independent, then $Var(X + Y) = Var(X) + Var(Y)$ (see the check after this list).

Note: We call $\sigma = \sqrt{Var(X)}$ the standard deviation of $X$.
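A sketch checking the first and third properties on uniform samples (the distribution is my choice; $Var(\mathrm{Uniform}(0,1)) = 1/12$):

```python
import random
import statistics

random.seed(2)
N = 200_000
xs = [random.random() for _ in range(N)]  # Uniform(0,1), Var = 1/12
ys = [random.random() for _ in range(N)]  # independent copy
# Var(X) = E[X^2] - E[X]^2
var_x = statistics.fmean(x * x for x in xs) - statistics.fmean(xs) ** 2
# Var(X+Y) should be close to Var(X) + Var(Y) = 2/12 for independent X, Y
var_sum = statistics.variance([x + y for x, y in zip(xs, ys)])
print(f"Var(X)   = {var_x:.4f}  (exact 1/12 = {1/12:.4f})")
print(f"Var(X+Y) = {var_sum:.4f}  vs Var(X)+Var(Y) = {2/12:.4f}")
```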
2. Preliminaries (v)

Chebyshev's inequality

Theorem: Let $X$ be a r.v. with mean $\mu = E[X]$. It is:
$$\Pr\{|X - \mu| \ge t\} \le \frac{Var(X)}{t^2} \quad \forall t > 0$$
Proof: $\Pr\{|X - \mu| \ge t\} = \Pr\{(X - \mu)^2 \ge t^2\}$. From Markov's inequality:
$$\Pr\{(X - \mu)^2 \ge t^2\} \le \frac{E[(X - \mu)^2]}{t^2} = \frac{Var(X)}{t^2}$$
Note: Chebyshev's inequality provides stronger (than Markov's) concentration bounds, e.g.
$\Pr\{|X - \mu| \ge 2\sigma\} \le \frac{1}{4}$, $\Pr\{|X - \mu| \ge 3\sigma\} \le \frac{1}{9}$, etc.
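The same kind of experiment shows Chebyshev's bound in action (again the distribution is my choice; for the rate-1 exponential, both mean and variance equal 1):

```python
import random
import statistics

random.seed(1)
xs = [random.expovariate(1.0) for _ in range(200_000)]  # mean 1, variance 1
mu = statistics.fmean(xs)
sigma = statistics.stdev(xs)
for c in (2, 3, 4):
    tail = sum(abs(x - mu) >= c * sigma for x in xs) / len(xs)
    print(f"Pr{{|X-mu| >= {c}sigma}} = {tail:.4f} <= 1/{c}^2 = {1/c**2:.4f}")
```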
3. Occupancy - importance

- Occupancy procedures are actually stochastic processes (i.e., random processes in time). In particular, the occupancy process consists in placing balls randomly into bins, one at a time.
- Occupancy problems/processes are of fundamental importance to the analysis of randomized algorithms, e.g. for data structures (such as hash tables), routing, etc.
3. Occupancy - definition and basic questions

General occupancy process: we uniformly at random and independently place, one at a time, $m$ distinct objects ("balls") into $n$ distinct classes ("bins").

Basic questions:
- What is the maximum number of balls in any bin?
- How many balls are needed so that, with high probability, no bin remains empty?
- What is the number of empty bins?
- What is the number of bins with $k$ balls in them?

Note: in the next lecture we will study the coupon collector's problem, a variant of occupancy.
3. Occupancy - the case m = n

Let us randomly place $m = n$ balls into $n$ bins.

Question: What is the maximum number of balls in any bin?

Remark: Let us first estimate the expected number of balls in any bin. For any bin $i$ $(1 \le i \le n)$, let $X_i$ = number of balls in bin $i$. Clearly $X_i \sim B(m, \frac{1}{n})$ (binomial), so
$$E[X_i] = m \cdot \frac{1}{n} = n \cdot \frac{1}{n} = 1$$
We expect, however, this "mean" (expected) behaviour to be highly improbable, i.e., some bins get no balls at all while some bins get many balls.
3. Occupancy - the case m = n

Theorem 1. With probability at least $1 - \frac{1}{n}$, no bin gets more than $k^* = \frac{3 \ln n}{\ln \ln n}$ balls.

Proof: Let $E_j(k)$ be the event "bin $j$ gets $k$ or more balls". Because of symmetry, we first focus on a given bin (say bin 1). It is
$$\Pr\{\text{bin 1 gets exactly } i \text{ balls}\} = \binom{n}{i} \left(\frac{1}{n}\right)^i \left(1 - \frac{1}{n}\right)^{n-i}$$
since we have a binomial $B(n, \frac{1}{n})$. But
$$\binom{n}{i} \left(\frac{1}{n}\right)^i \left(1 - \frac{1}{n}\right)^{n-i} \le \binom{n}{i} \left(\frac{1}{n}\right)^i \le \left(\frac{ne}{i}\right)^i \left(\frac{1}{n}\right)^i = \left(\frac{e}{i}\right)^i$$
(from basic inequality (iv)). Thus
$$\Pr\{E_1(k)\} \le \sum_{i=k}^{n} \left(\frac{e}{i}\right)^i \le \left(\frac{e}{k}\right)^k \left[1 + \frac{e}{k} + \left(\frac{e}{k}\right)^2 + \cdots\right] = \left(\frac{e}{k}\right)^k \frac{1}{1 - \frac{e}{k}}$$
3. Occupancy - the case m = n ⌈ 3 ln n ⌉ Now, let k ∗ = . Then: ln ln n ( ) k ∗ ( e ) k ∗ Pr {E 1 ( k ∗ ) } ≤ 1 e k ∗ ≤ 2 1 − e k ∗ 3 ln n ln ln n k ∗ − e ≤ 2 ⇔ k ∗ ≤ 2 k ∗ − 2 e ⇔ 1 k ∗ since it suffices k ∗ ≤ 2 ⇔ 1 − e ⇔ k ∗ ≥ 2 e which is true. ( ) k ∗ ( e 1 − ln 3 − ln ln n +ln ln ln n ) k ∗ e But 2 = 2 3 ln n ln ln n e − ln ln n +ln ln ln n ) k ∗ ≤ 2 exp ( ( ) − 3 ln n + 6 ln n ln ln ln n ≤ 2 ln ln n 1 ≤ 2 exp( − 3 ln n + 0 . 5 ln n ) = 2 exp( − 2 . 5 ln n ) ≤ n 2 for n large enough. Sotiris Nikoletseas, Associate Professor Randomized Algorithms - Lecture 3 14 / 34
3. Occupancy - the case m = n

Thus,
$$\Pr\{\text{any bin gets more than } k^* \text{ balls}\} = \Pr\left\{\bigcup_{j=1}^{n} E_j(k^*)\right\} \le \sum_{j=1}^{n} \Pr\{E_j(k^*)\} = n \Pr\{E_1(k^*)\} \le n \cdot \frac{1}{n^2} = \frac{1}{n}$$
(by the union bound and symmetry). $\square$
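A simulation makes Theorem 1 concrete: the observed maximum load grows roughly like $\frac{\ln n}{\ln \ln n}$ and stays below $k^*$. A sketch (everything except the constant 3 from the theorem is my own scaffolding):

```python
import math
import random
from collections import Counter

def max_load(n: int) -> int:
    """Throw n balls into n bins uniformly at random; return the maximum load."""
    loads = Counter(random.randrange(n) for _ in range(n))
    return max(loads.values())

random.seed(42)
for n in (10**3, 10**4, 10**5):
    k_star = 3 * math.log(n) / math.log(math.log(n))
    observed = max(max_load(n) for _ in range(20))  # worst case over 20 runs
    print(f"n={n:>6}  observed max load {observed}  vs  k* = {k_star:.1f}")
```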
3. Occupancy - the case m = n log n

We showed that when $m = n$ the mean number of balls in any bin is 1, but the maximum can be as high as $k^* = \frac{3 \ln n}{\ln \ln n}$. The next theorem shows that when $m = n \log n$, the maximum number of balls in any bin is more or less the same as the expected number of balls in any bin.

Theorem 2. When $m = n \ln n$, then with probability $1 - o(1)$ every bin has $O(\log n)$ balls.
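A corresponding check for Theorem 2 (a sketch; the parameters are mine): with $m = n \ln n$ balls, the maximum load stays within a small constant factor of the mean load $\ln n$:

```python
import math
import random
from collections import Counter

random.seed(7)
for n in (10**3, 10**4):
    m = round(n * math.log(n))  # m = n ln n balls
    loads = Counter(random.randrange(n) for _ in range(m))
    print(f"n={n}: mean load {m / n:.1f}, max load {max(loads.values())}")
```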
3. Occupancy - the case m = n - An improvement

If at each iteration we randomly pick $d$ bins and throw the ball into the bin with the smallest number of balls, we can do much better than in Theorem 1:

Theorem 3. We place $m = n$ balls sequentially into $n$ bins as follows: for each ball, $d \ge 2$ bins are chosen uniformly at random (and independently), and the ball is placed in the least full of the $d$ bins (ties broken randomly). When all balls are placed, the maximum load of any bin is at most $\frac{\ln \ln n}{\ln d} + O(1)$, with probability $1 - o(1)$ (in other words, a more balanced distribution of balls is achieved).
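The "power of $d$ choices" of Theorem 3 is also easy to observe empirically. The sketch below is my own code (with ties broken by `min`'s first-match rule rather than randomly); comparing $d = 1$ with $d = 2, 3$ shows the dramatic drop in maximum load:

```python
import random

def max_load_d_choices(n: int, d: int) -> int:
    """Place n balls into n bins; each ball goes to the least full of d random bins."""
    loads = [0] * n
    for _ in range(n):
        candidates = [random.randrange(n) for _ in range(d)]
        best = min(candidates, key=lambda b: loads[b])  # ties: first candidate wins
        loads[best] += 1
    return max(loads)

random.seed(3)
n = 100_000
for d in (1, 2, 3):
    print(f"d={d}: max load {max_load_d_choices(n, d)}")
```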
3. Occupancy - tightness of Theorem 1

Theorem 1 shows that when $m = n$, the maximum load of any bin is $O\left(\frac{\ln n}{\ln \ln n}\right)$, with high probability. We now show that this result is tight:

Lemma 1: There is a $k = \Omega\left(\frac{\ln n}{\ln \ln n}\right)$ such that bin 1 has $k$ balls with probability at least $\frac{1}{\sqrt{n}}$.

Proof:
$$\Pr[k \text{ balls in bin 1}] = \binom{n}{k} \left(\frac{1}{n}\right)^k \left(1 - \frac{1}{n}\right)^{n-k} \ge \left(\frac{n}{k}\right)^k \left(\frac{1}{n}\right)^k \left(1 - \frac{1}{n}\right)^{n-k}$$
(from basic inequality (iv))
$$\ge \left(\frac{1}{k}\right)^k \left(1 - \frac{1}{n}\right)^{n} = \left(\frac{1}{k}\right)^k \left(1 - \frac{1}{n}\right)\left(1 - \frac{1}{n}\right)^{n-1} \ge \left(\frac{1}{k}\right)^k \cdot \frac{1}{2} \cdot \frac{1}{e} = \frac{1}{2e} \left(\frac{1}{k}\right)^k$$
(for $n \ge 2$, using basic inequality (ii)).
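A numeric check of the lower bound in Lemma 1 (a sketch; $n$ and the range of $k$ are my choices): the exact binomial probability dominates $\frac{1}{2e}\left(\frac{1}{k}\right)^k$, and for $k$ around $\frac{\ln n}{\ln \ln n}$ (about 5 for $n = 10^6$) it is still above $\frac{1}{\sqrt{n}}$:

```python
import math

n = 10**6
p = 1 / n
# exact Pr[bin 1 has exactly k balls] vs the lower bound (1/(2e)) * (1/k)^k
for k in (2, 3, 4, 5):
    exact = math.comb(n, k) * p**k * (1 - p) ** (n - k)
    lower = (1 / (2 * math.e)) * (1 / k) ** k
    print(f"k={k}: exact {exact:.2e} >= lower bound {lower:.2e},"
          f"  1/sqrt(n) = {n**-0.5:.2e}")
```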