compsci 514: algorithms for data science


  1. compsci 514: algorithms for data science
     Cameron Musco, University of Massachusetts Amherst. Fall 2019. Lecture 2.

  2. reminder
     By next Thursday 9/12:
     • Sign up for Piazza.
     • Pick a problem set group with 3 people and have one member email me the names of the members and a group name.
     • Fill out the Gradescope consent poll on Piazza and contact me via email if you don't consent.

  3. last time
     Last class we covered:
     • Linearity of expectation: E[X + Y] = E[X] + E[Y] always.
     • Linearity of variance: Var[X + Y] = Var[X] + Var[Y] if X and Y are independent.
     • Markov's inequality: a non-negative random variable with a small expectation is unlikely to be very large: Pr(X ≥ t) ≤ E[X]/t.
     • Talked about an application to estimating the size of a CAPTCHA database efficiently.
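
As a quick sanity check of Markov's bound, here is a minimal Python sketch (the choice of X as a sum of ten fair dice is ours, purely illustrative) comparing the empirical tail probability to E[X]/t:

```python
import random

# Empirical check of Markov's inequality: Pr(X >= t) <= E[X] / t
# for a non-negative X. Here X is the sum of 10 fair dice (E[X] = 35).
trials = 100_000
t = 45
exceed = 0
total = 0
for _ in range(trials):
    x = sum(random.randint(1, 6) for _ in range(10))
    total += x
    exceed += (x >= t)

print(f"estimated Pr(X >= {t}) = {exceed / trials:.4f}")    # around 0.04
print(f"Markov bound E[X]/t    = {total / trials / t:.4f}")  # around 0.78
```

The bound holds but can be very loose; the "simple twist" previewed above is what tightens it.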

  4. today
     Today: We'll see how a simple twist on Markov's inequality can give much stronger bounds.
     • Enough to prove a version of the law of large numbers.
     But first: Another example of how powerful linearity of expectation and Markov's inequality can be in randomized algorithm design.
     • Will learn about random hash functions, which are a key tool in randomized methods for data processing.

  5. hash tables
     Want to store a set of items from some finite but massive universe of items (e.g., images of a certain size, text documents, 128-bit IP addresses).
     Goal: support query(x) to check if x is in the set in O(1) time.
     Classic solution: hash tables.
     • Static hashing, since we won't worry about insertion and deletion today.

  6. hash tables
     • Hash function h : U → [n] maps elements from the universe to indices 1, …, n of an array.
     • Typically |U| ≫ n. Many elements map to the same index.
     • Collisions: when we insert m items into the hash table we may have to store multiple items in the same location (typically as a linked list).
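
To make the setup concrete, here is a minimal sketch of a static chained hash table (the class name is illustrative, and Python's built-in hash with a random salt is only a stand-in for a random h, since storing a truly random function over a huge universe U is infeasible):

```python
import random

class ChainedHashTable:
    """Static hash table with chaining: slot i holds a list of all
    stored items x with h(x) = i."""

    def __init__(self, n, items):
        self.n = n
        # Salted built-in hash as a stand-in for a random h: U -> [n].
        self.salt = random.getrandbits(64)
        self.slots = [[] for _ in range(n)]
        for x in items:
            self.slots[self._h(x)].append(x)

    def _h(self, x):
        return hash((self.salt, x)) % self.n

    def query(self, x):
        # O(c): scans the chain at x's slot, where c is its length.
        return x in self.slots[self._h(x)]

table = ChainedHashTable(8, ["cat", "dog", "emu"])
print(table.query("dog"), table.query("fox"))  # True False
```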

  7. collisions
     Query runtime: O(c) when the maximum number of collisions in a table entry is c (i.e., must traverse a linked list of size c).
     How can we bound c?
     • In the worst case could have c = m (all items hash to the same location).
     • Two approaches: 1) we assume the items inserted are chosen randomly from the universe U, or 2) the hash function is chosen randomly.

  8. random hash function
     Let h : U → [n] be a random hash function.
     • I.e., for x ∈ U, Pr(h(x) = i) = 1/n for all i = 1, …, n, and h(x), h(y) are independent for any two items x ≠ y.
     • Caveat: It is very expensive to represent and compute such a random function. We will see how a hash function computable in O(1) time can be used instead.
     Assuming we insert m elements into a hash table of size n, what is the expected total number of pairwise collisions?

  9. linearity of expectation
     Let C_{i,j} = 1 if items i and j collide (h(x_i) = h(x_j)), and 0 otherwise. The total number of pairwise collisions is C = Σ_{i,j} C_{i,j}, so:
     E[C] = Σ_{i,j} E[C_{i,j}]   (linearity of expectation).
     For any pair i, j: E[C_{i,j}] = Pr[C_{i,j} = 1] = Pr[h(x_i) = h(x_j)] = 1/n.
     E[C] = (m choose 2) · (1/n) = m(m − 1)/(2n).
     Identical to the CAPTCHA analysis from last class!
     x_i, x_j: pair of stored items, m: total number of stored items, n: hash table size, C: total pairwise collisions in table, h: random hash function.
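
A small simulation (our illustration, assuming a fully random h) agrees with the formula E[C] = m(m − 1)/(2n):

```python
import random
from itertools import combinations

def avg_pairwise_collisions(m, n, trials=2000):
    """Average number of colliding pairs when m items receive
    independent, uniformly random hash values in [n]."""
    total = 0
    for _ in range(trials):
        hashes = [random.randrange(n) for _ in range(m)]
        total += sum(hi == hj for hi, hj in combinations(hashes, 2))
    return total / trials

m, n = 20, 100
print(avg_pairwise_collisions(m, n))   # empirical average, close to below
print(m * (m - 1) / (2 * n))           # formula: E[C] = 1.9
```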

  10. collision free hashing
      E[C] = m(m − 1)/(2n).
      • For n = 4m² we have: E[C] = m(m − 1)/(8m²) ≤ 1/8.
      • Can you give a lower bound on the probability that we have no collisions, i.e., Pr[C = 0]?
      Apply Markov's inequality: Pr[C ≥ 1] ≤ E[C]/1 ≤ 1/8.
      Pr[C = 0] = 1 − Pr[C ≥ 1] ≥ 1 − 1/8 = 7/8.
      Pretty good... but we are using O(m²) space to store m items.
      m: total number of stored items, n: hash table size, C: total pairwise collisions in table.
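
A quick empirical check of the 7/8 bound (again our illustration, with simulated random hash values): with n = 4m² buckets, the observed collision free probability sits above the bound.

```python
import random

def prob_collision_free(m, trials=10_000):
    """Estimate Pr[C = 0] when m items hash into n = 4*m^2 buckets."""
    n = 4 * m * m
    free = 0
    for _ in range(trials):
        hashes = [random.randrange(n) for _ in range(m)]
        free += (len(set(hashes)) == m)   # all m hash values distinct
    return free / trials

print(prob_collision_free(20))  # typically near 0.89, above 7/8 = 0.875
```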

  11. two level hashing
      Two-Level Hashing: Want to preserve O(1) query time while using O(m) space.
      • For each bucket with s_i values, pick a collision free hash function mapping [s_i] → [s_i²].
      • Just showed: A random function is collision free with probability ≥ 7/8, so finding a collision free one only requires checking O(1) random functions in expectation.
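
A minimal sketch of the construction (assumptions ours: salted built-in hashes stand in for random hash functions, second-level tables have size s_i² as on this slide, and each attempt at a second-level table is collision free with constant probability, so retries are O(1) in expectation):

```python
import random

def salted_hash(salt, x, n):
    # Stand-in for a random hash function h : U -> [n].
    return hash((salt, x)) % n

def place_collision_free(bucket):
    """Retry random salts until the bucket's s items land in distinct
    slots of a table of size s^2 (O(1) expected attempts)."""
    size = max(1, len(bucket) ** 2)
    while True:
        salt = random.getrandbits(64)
        slots = [None] * size
        ok = True
        for x in bucket:
            i = salted_hash(salt, x, size)
            if slots[i] is not None:   # collision: try a new salt
                ok = False
                break
            slots[i] = x
        if ok:
            return salt, slots

def build_two_level(items):
    """Hash m items into n = m top-level buckets, then build a
    collision free second-level table for each bucket."""
    m = len(items)
    top_salt = random.getrandbits(64)
    buckets = [[] for _ in range(m)]
    for x in items:
        buckets[salted_hash(top_salt, x, m)].append(x)
    return top_salt, [place_collision_free(b) for b in buckets]

def query(structure, x):
    """O(1): evaluate the top-level hash, then the bucket's hash."""
    top_salt, tables = structure
    salt, slots = tables[salted_hash(top_salt, x, len(tables))]
    return slots[salted_hash(salt, x, len(slots))] == x

s = build_two_level(["a", "b", "c", "d"])
print(query(s, "c"), query(s, "z"))  # True False
```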

  12. space usage
      Query time for two level hashing is O(1): requires evaluating two hash functions. What is the expected space usage?
      Up to constants, space used is: E[S] = n + Σ_{i=1}^{n} E[s_i²].
      E[s_i²] = E[(Σ_{j=1}^{m} I_{h(x_j)=i})²] = E[Σ_{j,k} I_{h(x_j)=i} · I_{h(x_k)=i}] = Σ_{j,k} E[I_{h(x_j)=i} · I_{h(x_k)=i}].
      Collisions again!
      • For j = k: E[I_{h(x_j)=i} · I_{h(x_k)=i}] = E[(I_{h(x_j)=i})²] = Pr[h(x_j) = i] = 1/n.
      • For j ≠ k: E[I_{h(x_j)=i} · I_{h(x_k)=i}] = Pr[h(x_j) = i ∩ h(x_k) = i] = 1/n².
      x_j, x_k: stored items, n: hash table size, h: random hash function, S: space usage of two level hashing, s_i: # items stored in hash table at position i.

  13. space usage
      E[s_i²] = Σ_{j,k} E[I_{h(x_j)=i} · I_{h(x_k)=i}] = m · (1/n) + 2 · (m choose 2) · (1/n²) = m/n + m(m − 1)/n² ≤ 2   (if we set n = m).
      • For j = k: E[I_{h(x_j)=i} · I_{h(x_k)=i}] = 1/n.
      • For j ≠ k: E[I_{h(x_j)=i} · I_{h(x_k)=i}] = 1/n².
      Total expected space usage (if we set n = m):
      E[S] = n + Σ_{i=1}^{n} E[s_i²] ≤ n + n · 2 = 3n = 3m.
      Near optimal space with O(1) query time!
      x_j, x_k: stored items, m: # stored items, n: hash table size, h: random hash function, S: space usage of two level hashing, s_i: # items stored at position i.
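
An empirical check of this bound (our illustration): with n = m, the average of n + Σ s_i² over random hash assignments sits just under 3m.

```python
import random

def avg_space(m, trials=2000):
    """Average n + sum_i s_i^2 when m items hash into n = m buckets."""
    n = m
    total = 0
    for _ in range(trials):
        counts = [0] * n                     # counts[i] = s_i
        for _ in range(m):
            counts[random.randrange(n)] += 1
        total += n + sum(s * s for s in counts)
    return total / trials

m = 200
print(avg_space(m), "vs bound", 3 * m)  # expectation is exactly 3m - 1
```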

  14. something to think about
      What if we want to store a set and answer membership queries in O(1) time, but we allow a small probability of a false positive: query(x) says that x is in the set when in fact it isn't. Can we do better than O(m) space?
      Many applications:
      • Filter spam email addresses, phone numbers, suspect IPs, duplicate Tweets.
      • Quickly check if an item has been stored in a cache or is new.
      • Counting distinct elements (e.g., unique search queries).

  15. efficiently computable hash function
      So far: we have assumed a fully random hash function h(x) with Pr[h(x) = i] = 1/n for i ∈ 1, …, n and h(x), h(y) independent for x ≠ y.
      • To store a random hash function we have to store a table of x values and their hash values. This would take at least O(m) space and O(m) query time if we hash m values, making our whole quest for O(1) query time pointless!

  16. efficiently computable hash functions
      What properties did we use of the randomly chosen hash function?
      2-Universal Hash Function (low collision probability): a random hash function h : U → [n] is two universal if: Pr[h(x) = h(y)] ≤ 1/n.
      When h(x) and h(y) are chosen independently at random from [n], Pr[h(x) = h(y)] = 1/n.
      Exercise: Rework the two level hashing proof to show that this property is really all that is needed.
      Efficient alternative: Let p be a prime with p ≥ |U|. Choose random a, b ∈ [p] with a ≠ 0. Let: h(x) = (ax + b mod p) mod n.
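
A sketch of this family in code (assumptions ours: the universe is the integers 0, …, p − 1 with p = 2³¹ − 1, a Mersenne prime, chosen purely for illustration):

```python
import random

P = (1 << 31) - 1  # prime p >= |U|; here U = {0, ..., p - 1}

def make_hash(n):
    """Draw h(x) = ((a*x + b) mod p) mod n at random from the family."""
    a = random.randrange(1, P)  # random a in [p] with a != 0
    b = random.randrange(P)     # random b in [p]
    return lambda x: ((a * x + b) % P) % n

h = make_hash(1000)
print(h(42), h(43))  # any fixed pair x != y collides with prob. about 1/n
```

Storing h now takes only the two numbers a and b, and evaluating it is O(1), which is what rescues the two level hashing scheme.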
