Randomness in Computing
Lecture 25

Last time
• Drunkard's walk
• Markov chains
• Randomized algorithm for 2SAT

Today
• Randomized algorithm for 3SAT
• Gambler's ruin
• Classification of Markov chains
• Stationary distributions

4/23/2020
Sofya Raskhodnikova; Randomness in Computing; based on slides by Baranasuriya et al.
Application: Algorithm for 3SAT

Recall: A 3CNF formula is an AND of clauses.
• Each clause is an OR of literals.
• Each literal is a Boolean variable or its negation.
• E.g., (x_1 ∨ x_2 ∨ x_3) ∧ (x_2 ∨ x_3 ∨ x_4) ∧ (x_1 ∨ x_3 ∨ x_4)

3SAT Problem (search version): Given a 3CNF formula, find a satisfying assignment if it is satisfiable.
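To make the later sketches concrete, here is one possible Python encoding of a 3CNF formula (my own choice, not from the lecture): each clause is a tuple of three nonzero integers, where +i stands for variable x_i and -i for its negation. The example formula below is illustrative only.

```python
# Hypothetical encoding: a formula is a list of clauses; a clause is a tuple of
# signed variable indices (+i for x_i, -i for its negation); variables are 1..n.

def clause_satisfied(clause, assignment):
    """assignment[i] is the Boolean value of variable x_i (index 0 is unused)."""
    return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

def formula_satisfied(formula, assignment):
    return all(clause_satisfied(c, assignment) for c in formula)

# Illustrative example (not the formula from the slide).
phi = [(1, 2, -3), (-1, 3, 4)]
assignment = [None, True, False, True, False]   # x1=T, x2=F, x3=T, x4=F
print(formula_satisfied(phi, assignment))       # True
```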
Application: Algorithm for 3SAT

• First try: the same algorithm as for 2SAT.

Input: a 3CNF formula φ on n variables; parameter R
1. Start with an arbitrary truth assignment, e.g., all 0's.
2. Repeat R times, terminating if φ is satisfied:
   a) Choose an arbitrary clause C that is not satisfied.
   b) Pick a uniformly random literal in C and flip its assignment.
3. If a satisfying assignment is found, return it.
4. Otherwise, return ``unsatisfiable''.

• We want to analyze the number of steps (iterations) necessary.
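A minimal sketch of this first-try algorithm, assuming the signed-integer clause encoding from the previous snippet; the function names are my own.

```python
import random

# Sketch of the first-try algorithm (same walk as for 2SAT), assuming clauses are
# tuples of signed variable indices: +i for x_i, -i for its negation.

def clause_satisfied(clause, assignment):
    return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

def local_search_3sat(formula, n, R):
    assignment = [False] * (n + 1)            # step 1: arbitrary start, e.g., all 0's
    for _ in range(R):                        # step 2: repeat R times
        unsat = [c for c in formula if not clause_satisfied(c, assignment)]
        if not unsat:
            return assignment                 # step 3: satisfying assignment found
        clause = unsat[0]                     # (a) an arbitrary unsatisfied clause C
        lit = random.choice(clause)           # (b) a uniformly random literal of C
        assignment[abs(lit)] = not assignment[abs(lit)]   # ... and flip its value
    return None                               # step 4: report ``unsatisfiable''
```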
Analysis: What should we change?

• Let S = a satisfying assignment of φ.
• A_i = the algorithm's assignment after i steps.
• X_i = number of variables that have the same value in A_i and S.
  When X_i = n, the algorithm terminates with a satisfying assignment.
• If X_i = 0, then X_{i+1} = 1:
  Pr[X_{i+1} = 1 | X_i = 0] = 1
• If X_i ∈ [1, n − 1], then A_i disagrees with S on 1 to 3 literals of the chosen unsatisfied clause C (at least one, since S satisfies C but A_i does not), so
  Pr[X_{i+1} = k + 1 | X_i = k] ≥ 1/3
  Pr[X_{i+1} = k − 1 | X_i = k] ≤ 2/3
• X_0, X_1, X_2, … is not necessarily a Markov chain.
Analysis: What should we change?

• Define a Markov chain Y_0, Y_1, Y_2, … :
  Y_0 = X_0
  Pr[Y_{i+1} = 1 | Y_i = 0] = 1
  Pr[Y_{i+1} = k + 1 | Y_i = k] = 1/3
  Pr[Y_{i+1} = k − 1 | Y_i = k] = 2/3
• It is a ``pessimistic version'' of the stochastic process X_0, X_1, X_2, … : the expected time to reach n is larger for Y_0, Y_1, Y_2, … than for X_0, X_1, X_2, …
Expected time to reach n

[Figure: Tipsy's walk on positions 0, …, k, …, n; from each interior position the walk moves toward 0 with probability 2/3 and toward n with probability 1/3.]

t_k = expected number of steps to reach position n, starting at position k

t_n = 0
t_0 = t_1 + 1
t_k = 1 + (2/3) t_{k−1} + (1/3) t_{k+1}   for k ∈ [1, n − 1]

Solving: t_k = 2^{n+2} − 2^{k+2} − 3(n − k)

Θ(2^n) steps on average. Not good.
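A quick numerical sanity check (mine, not part of the lecture): solve the recurrence for t_0, …, t_n as a linear system and compare with the closed form.

```python
import numpy as np

# Verify that t_k = 2^{n+2} - 2^{k+2} - 3(n - k) solves
#   t_n = 0,  t_0 = t_1 + 1,  t_k = 1 + (2/3) t_{k-1} + (1/3) t_{k+1}  for 1 <= k <= n-1.
n = 10
A = np.zeros((n + 1, n + 1))
b = np.zeros(n + 1)
A[0, 0], A[0, 1], b[0] = 1.0, -1.0, 1.0          # t_0 - t_1 = 1
A[n, n] = 1.0                                     # t_n = 0
for k in range(1, n):
    A[k, k], A[k, k - 1], A[k, k + 1] = 1.0, -2.0 / 3.0, -1.0 / 3.0
    b[k] = 1.0                                    # t_k - (2/3) t_{k-1} - (1/3) t_{k+1} = 1

t = np.linalg.solve(A, b)
closed_form = [2 ** (n + 2) - 2 ** (k + 2) - 3 * (n - k) for k in range(n + 1)]
print(np.allclose(t, closed_form))                # True
```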
Ideas

• The longer the algorithm runs, the more likely it is to move towards 0.
  Idea: Restart after a fixed number of steps.
• How do we get a better starting assignment?
  Idea: Choose one uniformly at random. What is the distribution of the number of variables that match S? With non-negligible probability, we significantly exceed n/2 matches.
Modified algorithm for 3SAT

Input: a 3CNF formula φ on n variables; parameter R
1. Repeat R times, terminating if φ is satisfied:
   a) Start with a uniformly random truth assignment.
   b) Repeat 3n times:
      i. Choose an arbitrary clause C that is not satisfied.
      ii. Pick a uniformly random literal in C and flip its assignment.
2. If a satisfying assignment is found, return it.
3. Otherwise, return ``unsatisfiable''.
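A sketch of the modified algorithm, again assuming the hypothetical signed-integer clause encoding used in the earlier snippets.

```python
import random

def clause_satisfied(clause, assignment):
    return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

def randomized_3sat(formula, n, R):
    for _ in range(R):                                    # step 1: R independent runs
        # (a) uniformly random truth assignment (index 0 is an unused dummy)
        assignment = [random.random() < 0.5 for _ in range(n + 1)]
        for _ in range(3 * n):                            # (b) at most 3n flips per run
            unsat = [c for c in formula if not clause_satisfied(c, assignment)]
            if not unsat:
                return assignment                         # step 2: found one
            lit = random.choice(unsat[0])                 # random literal of an unsatisfied clause
            assignment[abs(lit)] = not assignment[abs(lit)]
        if all(clause_satisfied(c, assignment) for c in formula):
            return assignment                             # satisfied on the very last flip
    return None                                           # step 3: report ``unsatisfiable''
```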
Analysis

Want to understand: the probability of reaching assignment S within 3n steps, starting from a random assignment.
• Let r be the probability that the Markov chain Y reaches state n within 3n steps, starting from a state that corresponds to a random assignment.
• Let r_k be the probability that the Markov chain Y reaches state n within 3n steps, starting from state n − k. Then
  r = Σ_{k=0}^{n} Pr[starting in state n − k] · r_k
• One way for Y to reach state n from state n − k is to move left k times and right 2k times in its first 3k moves, so
  r_k ≥ C(3k, k) · (2/3)^k · (1/3)^{2k}

[Figure: the walk on states 0, …, n, moving left with probability 2/3 and right with probability 1/3.]
Analysis: Bounding r_k

So far: r_k ≥ C(3k, k) · (2/3)^k · (1/3)^{2k}
• By Stirling's formula, m! = Θ( √m · (m/e)^m ).
• When k > 0,
  C(3k, k) = (3k)! / ( k! (2k)! ) = Θ( (1/√k) · 3^{3k} / 2^{2k} ),
  so r_k ≥ (c/√k) · (27/4)^k · (2/3)^k · (1/3)^{2k} = (c/√k) · (1/2)^k for some constant c > 0.
Analysis: Bounding r

So far: r is the probability that the Markov chain Y reaches state n within 3n steps, starting from a state that corresponds to a random assignment, and
  r = Σ_{k=0}^{n} Pr[starting in state n − k] · r_k,   where r_k = Ω( (1/√k) · (1/2)^k ) when k > 0.
• A uniformly random starting assignment matches S in each variable independently with probability 1/2, so Pr[starting in state n − k] = C(n, k) / 2^n.
• Therefore
  r ≥ (c/√n) · (1/2)^n · Σ_{k=0}^{n} C(n, k) (1/2)^k = (c/√n) · (1/2)^n · (3/2)^n = Ω( (1/√n) · (3/4)^n ).
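A small numerical check (mine, not from the slides) that the bound on r indeed behaves like (3/4)^n/√n: it plugs the binomial starting distribution and the lower bound on r_k into the sum above and compares with (3/4)^n/√n; the ratio stays bounded away from 0.

```python
from math import comb, sqrt

def r_k_lower(k):
    # r_k >= C(3k, k) (2/3)^k (1/3)^{2k};  r_0 = 1 (the walk already starts at state n)
    if k == 0:
        return 1.0
    return comb(3 * k, k) * (2 / 3) ** k * (1 / 3) ** (2 * k)

for n in (10, 20, 40):
    # Pr[starting in state n - k] = C(n, k) / 2^n for a uniformly random assignment
    r_lower = sum(comb(n, k) / 2 ** n * r_k_lower(k) for k in range(n + 1))
    benchmark = (3 / 4) ** n / sqrt(n)
    print(n, r_lower, benchmark, r_lower / benchmark)
```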
Analysis: final touches

• When φ is satisfiable, one run finds a satisfying assignment with probability at least r = Ω( (1/√n) · (3/4)^n ).
• The number of runs until a satisfying assignment is found is a geometric random variable with expectation at most 1/r = O( √n · (4/3)^n ).
• Each run uses 3n steps, so the expected number of steps is O( n^{3/2} · (4/3)^n ).
• As for 2SAT, we set R to 2b times the expected number of runs to get a Monte Carlo algorithm that fails with probability at most 2^{−b}.
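To make the final parameter setting concrete, here is a rough, assumption-laden calculation (mine): it reuses the lower bound on r from the previous snippet, bounds the expected number of runs by 1/r, and sets R = 2b·(1/r). Splitting the R runs into b blocks of 2/r runs each and applying Markov's inequality to each block bounds the failure probability by 2^{−b}.

```python
from math import comb, ceil

def r_lower_bound(n):
    def r_k(k):
        return 1.0 if k == 0 else comb(3 * k, k) * (2 / 3) ** k * (1 / 3) ** (2 * k)
    return sum(comb(n, k) / 2 ** n * r_k(k) for k in range(n + 1))

n, b = 30, 20
r = r_lower_bound(n)                    # one run succeeds with probability >= r
expected_runs = 1 / r                   # geometric random variable: expectation <= 1/r
R = ceil(2 * b * expected_runs)         # 2b times the (bound on the) expected number of runs
print(f"expected runs <= {expected_runs:.3g}, R = {R}, total steps <= {3 * n * R:.3g}")
print(f"failure probability <= {2.0 ** -b:.3g}")
```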
The Gambler's Ruin

• Player 1 vs. Player 2: with probability ½, Player 1 loses 1 dollar; with probability ½, Player 2 loses 1 dollar.
• Player 1's limit: ℓ_1 dollars. Player 2's limit: ℓ_2 dollars.
• State at time t: number of dollars won by Player 1 (could be negative).
• Find the probability that Player 1 wins ℓ_2 dollars before losing ℓ_1 dollars, and the expected time to finish the game.
Recall: Drunkard's walk

[Figure: Tipsy's walk on positions 0, …, k, …, n.]

• Pr[Tipsy goes home | he started at position k] = k/n
• The expected number of steps to finish the walk, starting at position k, is k(n − k).
Poll questions

• The probability that Player 1 wins ℓ_2 dollars before losing ℓ_1 dollars is
  A. ℓ_1/ℓ_2    B. 1/2    C. ℓ_2/(ℓ_1 + ℓ_2)    D. ℓ_1/(ℓ_1 + ℓ_2)
• The expected time to finish the game is
  A. ℓ_1(ℓ_2 − ℓ_1)    B. ℓ_1 ℓ_2    C. (ℓ_1 + ℓ_2)(ℓ_2 − ℓ_1)    D. ℓ_2^2
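A quick Monte Carlo check of both poll quantities (mine, not from the slides); the function name and the specific limits are arbitrary.

```python
import random

def gamblers_ruin(l1, l2, trials=100_000):
    """Estimate Pr[Player 1 wins l2 before losing l1] and the expected game length."""
    wins, total_time = 0, 0
    for _ in range(trials):
        pos, t = 0, 0                            # Player 1's current winnings
        while -l1 < pos < l2:
            pos += 1 if random.random() < 0.5 else -1
            t += 1
        wins += (pos == l2)
        total_time += t
    return wins / trials, total_time / trials

print(gamblers_ruin(3, 5))   # compare the estimates with your poll answers
```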
Classification of Markov chains

• We want to study Markov chains that ``mix'' well.
• We will define Markov chains that avoid certain problematic behaviors: irreducible and aperiodic chains, and eventually ergodic chains.
• A finite Markov chain is irreducible if its graph representation consists of one strongly connected component.
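One way to test irreducibility of a small finite chain (my own sketch, not from the lecture): check that every state can reach every other state in the directed graph of nonzero transitions.

```python
def reachable(P, start):
    """Set of states reachable from `start` in the graph of nonzero transitions."""
    n = len(P)
    seen, stack = {start}, [start]
    while stack:
        i = stack.pop()
        for j in range(n):
            if P[i][j] > 0 and j not in seen:
                seen.add(j)
                stack.append(j)
    return seen

def is_irreducible(P):
    n = len(P)
    return all(len(reachable(P, i)) == n for i in range(n))

# Two-state example: the states communicate in both directions, so the chain is irreducible.
P = [[0.5, 0.5],
     [0.9, 0.1]]
print(is_irreducible(P))   # True
```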
Periodicity

• Example: a Markov chain whose states are the integers and which moves to each neighboring state with probability ½. If the chain starts at 0, when can it be in an even-numbered state? (Only at even time steps, since every move changes the parity of the state.)
• A state j is periodic if there exists an integer Δ > 1 such that Pr[X_{t+s} = j | X_t = j] = 0 unless s is divisible by Δ; otherwise, it is aperiodic.
• A Markov chain is aperiodic if all its states are aperiodic.
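A sketch (mine, not from the lecture) that computes the period of each state of a small chain as gcd{ t ≥ 1 : P^t(i, i) > 0 }, truncating t at a modest bound, which suffices for examples of this size.

```python
from math import gcd

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def periods(P, t_max=50):
    n = len(P)
    d = [0] * n                 # gcd(0, t) = t, so 0 is a neutral starting value
    Pt = P                      # Pt holds P^t in the loop below
    for t in range(1, t_max + 1):
        for i in range(n):
            if Pt[i][i] > 0:
                d[i] = gcd(d[i], t)
        Pt = matmul(Pt, P)
    return d

# Random walk on a cycle of 4 states (a finite stand-in for the walk on the integers):
# a return to the start is possible only at even times, so every state has period 2.
P = [[0, 0.5, 0, 0.5],
     [0.5, 0, 0.5, 0],
     [0, 0.5, 0, 0.5],
     [0.5, 0, 0.5, 0]]
print(periods(P))   # [2, 2, 2, 2]
```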
Ergodicity

• Let the return probability r^t_{i,j} be the probability that, starting at state i, the first transition to state j occurs at time t:
  r^t_{i,j} = Pr[ X_t = j and X_s ≠ j for 1 ≤ s ≤ t − 1 | X_0 = i ]
• A state i is recurrent if Σ_{t≥1} r^t_{i,i} = 1 and transient if Σ_{t≥1} r^t_{i,i} < 1. A Markov chain is recurrent if every state in it is recurrent.
  – If a state i is recurrent, then once the chain visits i, it will return to i again and again.
• The hitting time h_{i,j} is the expected time to first reach state j from state i:  h_{i,j} = Σ_{t≥1} t · r^t_{i,j}.
• A recurrent state i is positive recurrent if h_{i,i} is finite.
  – In a finite Markov chain, all recurrent states are positive recurrent.
• A state is ergodic if it is aperiodic and positive recurrent.
• A Markov chain is ergodic if all its states are ergodic.
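For a finite irreducible chain, the hitting times to a fixed state j satisfy h_{i,j} = 1 + Σ_{k≠j} P(i,k) · h_{k,j} for every i (including i = j, which gives the expected return time). The sketch below (mine, using numpy) solves this system for a toy two-state chain; the finite value of h_{j,j} illustrates positive recurrence.

```python
import numpy as np

def hitting_times_to(P, j):
    """Return the vector h with h[i] = expected time to first reach state j from i
    (h[j] is the expected return time to j)."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    others = [i for i in range(n) if i != j]
    Q = P[np.ix_(others, others)]                     # transitions among states != j
    h_others = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    h = np.zeros(n)
    h[others] = h_others
    h[j] = 1 + P[j, others] @ h_others                # expected return time h_{j,j}
    return h

# Two-state example: h_{1,0} = 10/9 and h_{0,0} = 14/9, both finite.
P = [[0.5, 0.5],
     [0.9, 0.1]]
print(hitting_times_to(P, 0))
```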