randomized computation


  1. randomized computation Sometimes randomness helps in computation.

  2. randomized computation Augment our usual Turing machines with a read-only tape of random bits. [Figure: a Turing machine with states q_start, q_0, q_1, q_2, q_accept, a work tape, and a read-only random tape containing bits 0 1 1 0 0 0 1 1.]

  3. randomized computation The notion of acceptance changes. What does it mean for a TM M with access to random bits to decide a language L? One-sided error [two variants]: (1) x ∈ L ⇒ Pr[M accepts x] = 1 and x ∉ L ⇒ Pr[M accepts x] < 1/10; (2) x ∈ L ⇒ Pr[M accepts x] > 9/10 and x ∉ L ⇒ Pr[M accepts x] = 0. Two-sided error: x ∈ L ⇒ Pr[M accepts x] > 2/3 and x ∉ L ⇒ Pr[M accepts x] < 1/3. In both cases, we can make the error much smaller by repeating a few times.
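
The amplification mentioned in the last line can be made concrete. A minimal Python sketch (not from the slides; the noisy_decider below and its 3/4 success probability are invented for illustration) of shrinking two-sided error by repetition and majority vote:

```python
import random

def amplify(machine, x, reps=101):
    """Run a two-sided-error randomized decider `reps` times and take a
    majority vote; the error probability drops exponentially in `reps`."""
    votes = sum(machine(x) for _ in range(reps))
    return votes > reps // 2

# Toy stand-in for a randomized decider: answers correctly with probability 3/4.
def noisy_decider(x):
    correct = (x % 2 == 0)              # pretend the language is "even numbers"
    return correct if random.random() < 0.75 else not correct

print(amplify(noisy_decider, 42))       # True  (with overwhelming probability)
print(amplify(noisy_decider, 7))        # False (with overwhelming probability)
```

For the one-sided variants, repetition is even simpler: for variant 1, accept only if every run accepts; for variant 2, accept if any run accepts.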

  4. randomized computation For instance, the two-sided error variant of P is called BPP. BPP stands for "bounded-error probabilistic polynomial time." Derandomization problem: Can every efficient randomized algorithm be replaced by a deterministic one? We don't know the answer, though it is expected to be "yes." This is the question of P vs. BPP.

  5. randomized computation Recall the complexity class L = SPACE(log n). The randomized one-sided error variant of L is called RL. A language A is in RL if there is a randomized O(log n)-space Turing machine M [with one-way access to the random tape] such that: x ∈ A ⇒ Pr[M accepts x] ≥ ½ and x ∉ A ⇒ Pr[M accepts x] = 0. The question of whether L = RL is also open. We know that RL ⊆ NL ⊆ SPACE((log n)²); Nisan showed how to do this using a pseudo-random number generator for space-bounded machines.

  6. undirected connectivity Recall the language: DIRPATH = { ⟨G, s, t⟩ : G is a directed graph with a directed s → t path }. We saw that DIRPATH is NL-complete. Thus L = NL if and only if DIRPATH ∈ L. What about the language PATH = { ⟨G, s, t⟩ : G is an undirected graph with an s ↔ t path }?

  7. undirected connectivity What about the language PATH = { ⟨G, s, t⟩ : G is an undirected graph with an s ↔ t path }? Is PATH ∈ L?

  8. random walks Let's show that PATH ∈ RL. One step of a random walk: move to a uniformly random neighbor.
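
A one-line rendering of the step rule (the adjacency-list representation adj is my own choice, not from the slides):

```python
import random

def random_walk_step(adj, v):
    """One step of the random walk: move to a uniformly random neighbor of v.
    `adj` maps each vertex to a list of its neighbors."""
    return random.choice(adj[v])
```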

  9. random walks What we will show: If G = (V, E) is a graph with n vertices and m edges, then a random walker started at s ∈ V who takes 4mn steps will visit every vertex in the connected component of s with probability at least ½. In particular, if there is an s ↔ t path in G, she will visit t with probability at least ½.

  10. random walks What we will show: If G = (V, E) is a graph with n vertices and m edges, then a random walker started at s ∈ V who takes 4mn steps will visit every vertex in the connected component of s with probability at least ½. In particular, if there is an s ↔ t path in G, she will visit t with probability at least ½. RL algorithm: Start a random walker at s and use the random bits to simulate a random walk for 4mn steps (we can count this high in O(log n) space). If we visit t at any point, accept; otherwise, reject. If there is an s ↔ t path in G: Pr[accept ⟨G, s, t⟩] ≥ ½. If there is no s ↔ t path in G: Pr[accept ⟨G, s, t⟩] = 0.
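
A hedged Python sketch of this algorithm (the function name and input format are mine; note that the actual RL machine stores only O(log n) bits, the current vertex and a step counter, while this sketch keeps the whole graph in memory for convenience):

```python
import random

def undirected_path_randomized(adj, s, t):
    """Randomized test for an s-t path in an undirected graph: simulate a
    random walk from s for 4mn steps and accept iff t is ever visited."""
    n = len(adj)
    m = sum(len(nbrs) for nbrs in adj.values()) // 2    # number of edges
    v = s
    for _ in range(4 * m * n):
        if v == t:
            return True                  # accept as soon as t is visited
        if not adj[v]:
            return False                 # s is isolated; the walk cannot move
        v = random.choice(adj[v])        # one random-walk step
    return v == t

# Example (mine): a 4-cycle plus an isolated vertex 4.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0], 4: []}
print(undirected_path_randomized(adj, 0, 2))   # True with probability >= 1/2 (in practice ~1)
print(undirected_path_randomized(adj, 0, 4))   # always False
```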

  11. cover time of a graph Cover time of a graph = expected # of steps before all vertices are visited.

  12. cover time of a graph We will show that if G has n vertices and m edges, then the expected time before the random walk covers G (from any starting point) is at most 2mn. Then it must be that with probability at least ½, we cover G after 4mn steps. (This fact is called "Markov's inequality.")
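
Spelled out, the last step is one application of Markov's inequality (T_cov below denotes the cover time, a symbol the slides do not introduce):

```latex
\Pr[T_{\mathrm{cov}} > 4mn]
  \;\le\; \frac{\mathbb{E}[T_{\mathrm{cov}}]}{4mn}
  \;\le\; \frac{2mn}{4mn}
  \;=\; \frac{1}{2}.
```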

  13. heat dispersion on a graph

  14. spectral embedding [Figure: spectral embedding of a graph.]

  15. hitting times and commute times For vertices u, v ∈ V, the hitting time is H(u, v) = the expected time for a random walk started at u to hit v. The commute time between u and v is C(u, v) = H(u, v) + H(v, u).

  16. hitting times and commute times For vertices u, v ∈ V, the hitting time is H(u, v) = the expected time for a random walk started at u to hit v. The commute time between u and v is C(u, v) = H(u, v) + H(v, u). Claim: If {u, v} is an edge of G, then C(u, v) ≤ 2m.
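
Hitting and commute times are easy to estimate empirically. A small Monte Carlo sketch (the function and the example graph are mine, chosen so the claim can be checked numerically):

```python
import random

def estimate_hitting_time(adj, u, v, trials=5000):
    """Monte Carlo estimate of the hitting time H(u, v): the average number
    of random-walk steps needed to reach v when starting from u."""
    total = 0
    for _ in range(trials):
        x = u
        while x != v:
            x = random.choice(adj[x])    # one random-walk step
            total += 1
    return total / trials

# Example: a triangle 0-1-2 with a pendant vertex 3 attached to 2, so m = 4.
# For the edge {2, 3} the claim gives C(2, 3) <= 2m = 8 (here it is exactly 8).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
C = estimate_hitting_time(adj, 2, 3) + estimate_hitting_time(adj, 3, 2)
print("estimated commute time C(2, 3):", C)
```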

  17. hitting times and commute times Claim: If {u, v} is an edge of G, then C(u, v) ≤ 2m. Let's use the claim to show that the cover time of G is at most 2m(n − 1).

  18. hitting times and commute times Claim: If {u, v} is an edge of G, then C(u, v) ≤ 2m. Let's use the claim to show that the cover time of G is at most 2m(n − 1). Choose an arbitrary spanning tree of G.

  19. hitting times and commute times Claim: If {u, v} is an edge of G, then C(u, v) ≤ 2m. Let's use the claim to show that the cover time of G is at most 2m(n − 1). Choose an arbitrary spanning tree of G. Consider the "tour around the spanning tree." This is a closed walk s = u_0, u_1, u_2, …, u_{2n−2} = s, where {u_i, u_{i+1}} is an edge for each i and each tree edge appears exactly twice (once in each direction).

  20. hitting times and commute times Claim: If {u, v} is an edge of G, then C(u, v) ≤ 2m. Let's use the claim to show that the cover time of G is at most 2m(n − 1). Choose an arbitrary spanning tree of G. Consider the "tour around the spanning tree." Claim: The cover time of G is at most C(u_0, u_1) + C(u_1, u_2) + C(u_2, u_3) + ⋯ + C(u_{2n−3}, u_{2n−2}).

  21. hitting times and commute times Claim: If {u, v} is an edge of G, then C(u, v) ≤ 2m. Let's use the claim to show that the cover time of G is at most 2m(n − 1). Choose an arbitrary spanning tree of G. Consider the "tour around the spanning tree." Claim: The cover time of G is at most Σ_{{u,v} ∈ T} C(u, v) ≤ 2m(n − 1), where T is the spanning tree. (Summing the hitting times along the tour pairs the two traversals of each tree edge into one commute time, and each of the n − 1 commute times is at most 2m by the claim.)
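
The whole argument in one chain (with T the chosen spanning tree and u_0, …, u_{2n−2} the tour, assuming the claim C(u, v) ≤ 2m for edges):

```latex
\mathbb{E}[\text{cover time}]
  \;\le\; \sum_{i=0}^{2n-3} H(u_i, u_{i+1})
  \;=\; \sum_{\{u,v\} \in T} \bigl( H(u,v) + H(v,u) \bigr)
  \;=\; \sum_{\{u,v\} \in T} C(u,v)
  \;\le\; (n-1) \cdot 2m .
```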

  22. high school physics Claim: If {u, v} is an edge of G, then C(u, v) ≤ 2m.

  23. high school physics Claim: If {u, v} is an edge of G, then C(u, v) ≤ 2m. View the graph G as an electrical network with unit resistances on the edges. When a potential difference is applied between two nodes by an external source, current flows in the network in accordance with Kirchhoff's and Ohm's laws. K1: The total current flowing into a vertex equals the total current flowing out. K2: The sum of potential differences around any cycle is zero. Ohm: The current flowing along any edge {u, v} equals (potential(u) − potential(v)) / resistance(u, v). The effective resistance between u and v is defined as the potential difference required to send one unit of current from u to v.
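
Effective resistance is also easy to compute numerically. A sketch (not from the slides) using the standard fact that with unit resistances the effective resistance equals (e_u − e_v)ᵀ L⁺ (e_u − e_v), where L is the graph Laplacian and L⁺ its pseudoinverse:

```python
import numpy as np

def effective_resistance(n, edges, u, v):
    """Effective resistance between u and v in a graph with unit resistances,
    via the Moore-Penrose pseudoinverse of the graph Laplacian."""
    L = np.zeros((n, n))
    for a, b in edges:
        L[a, a] += 1
        L[b, b] += 1
        L[a, b] -= 1
        L[b, a] -= 1
    Lp = np.linalg.pinv(L)               # pseudoinverse of the Laplacian
    e = np.zeros(n)
    e[u], e[v] = 1.0, -1.0               # one unit of current in at u, out at v
    return float(e @ Lp @ e)             # resulting potential difference

# Example: a path 0-1-2 with unit resistors; resistance between the endpoints is 2.
print(effective_resistance(3, [(0, 1), (1, 2)], 0, 2))   # -> 2.0
```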

  24. presentation agenda Simple uses of randomness in computation [random]: 1. Checking matrix multiplication [Mahmoud] 2. Karp-Rabin matching algorithm [Stanislav]. Error-correcting codes [pseudo-random]: 1. Intro [Amr] 2. Reed-Solomon codes [Jie] 3. Expander codes [Ali].
