Part 2: Boolean functions and Noise sensitivity

Basic Set up for Noise Sensitivity

• $x := (x_1, \dots, x_n)$, i.i.d. $\pm 1$ with probabilities $(1/2, 1/2)$
• $f : \{-1,1\}^n \to \{-1,1\}$ ($f$ is called a Boolean function)
• $x^\epsilon := (x^\epsilon_1, \dots, x^\epsilon_n)$, a small perturbation of $x$: each $x_i$ is independently flipped with probability $\epsilon$.

Question: Are $f(x)$ and $f(x^\epsilon)$ very likely to be the same (high correlation) or almost independent (low correlation)?

Of course, if $n$ and $f$ are fixed and $\epsilon$ is very small, then $f(x)$ and $f(x^\epsilon)$ are very likely to be the same. So we think of $\epsilon$ as fixed (and small) and then take $n$ to be very large.
Noise Sensitivity

Definition (Benjamini, Kalai, Schramm)
A sequence of Boolean functions $f_n : \{-1,1\}^n \to \{-1,1\}$ is called noise sensitive (NS) if for any fixed $\epsilon > 0$,
$$\lim_{n \to \infty} \mathbb{E}\big[f_n(x)\, f_n(x^\epsilon)\big] - \mathbb{E}\big[f_n(x)\big]^2 = 0.$$
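The quantity inside the limit can be estimated directly by simulation. Below is a minimal Monte Carlo sketch (my illustration, not from the talk; the function name and defaults are invented) for a given $f$, $n$ and $\epsilon$:

```python
import numpy as np

def noise_covariance(f, n, eps, trials=100_000, seed=0):
    """Monte Carlo estimate of E[f(x) f(x^eps)] - E[f(x)]^2.

    f maps a (trials, n) array of +/-1 bits to a length-`trials` array
    of +/-1 values. Each bit of x is independently flipped with
    probability eps, as in the setup above.
    """
    rng = rng = np.random.default_rng(seed)
    x = rng.choice([-1, 1], size=(trials, n))   # uniform i.i.d. +/-1 bits
    flips = rng.random((trials, n)) < eps       # flip mask: True w.p. eps
    x_eps = np.where(flips, -x, x)              # the eps-noised configuration
    fx, fxe = f(x), f(x_eps)
    return np.mean(fx * fxe) - np.mean(fx) ** 2
```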
3 examples

• $f_n(x_1, \dots, x_n) := x_1$ (Dictator). Not noise sensitive. (In fact noise stable.)
• $f_n(x_1, \dots, x_n) := \prod_{i=1}^n x_i$ (Parity). Noise sensitive.
• $f_n(x_1, \dots, x_n) := \mathrm{sign}(\sum_i x_i)$ (Majority Function, $n$ odd). Not noise sensitive. (In fact noise stable.)
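Feeding the three examples to the noise_covariance sketch above (again my illustration, not part of the talk) shows the contrast numerically. With the flip-with-probability-$\epsilon$ convention, the dictator covariance is exactly $1 - 2\epsilon$ and the parity covariance is $(1 - 2\epsilon)^n$:

```python
dictator = lambda x: x[:, 0]
parity   = lambda x: np.prod(x, axis=1)
majority = lambda x: np.sign(np.sum(x, axis=1))   # use odd n to avoid ties

for n in (11, 101, 1001):
    print(n,
          noise_covariance(dictator, n, 0.1),     # ~ 1 - 2*eps = 0.8 for every n
          noise_covariance(parity,   n, 0.1),     # ~ (1 - 2*eps)^n -> 0
          noise_covariance(majority, n, 0.1))     # stays bounded away from 0
```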
Percolation on the hexagonal lattice

Consider an $R \times R$ box and let $f_R : \{-1,1\}^{R^2} \to \{-1,1\}$ be the Boolean function given by:
• $+1$ if there is a left-right black crossing
• $-1$ otherwise
Are percolation crossing events noise sensitive?

[Figure: a percolation configuration $\omega$ and its $\epsilon$-noised version $\omega^\epsilon$]
Noise sensitivity of percolation

Theorem (Benjamini, Kalai & Schramm 1999)
Percolation crossings are noise sensitive.

Question: What happens if the amount of noise $\epsilon_R$ decreases to 0 with $R$? (If it decreases to 0 too quickly, one trivially has high correlation.)

Benjamini, Kalai & Schramm showed decorrelation (asymptotic independence) still occurs if $\epsilon_R \ge C/\log(R)$ for a sufficiently large $C$.

Question: What happens if the amount of noise $\epsilon = \epsilon_R$ decreases to 0 as $1/R^\alpha$ for a small $\alpha > 0$?
Pivotality and Influences (A key player)

Definition
For a Boolean function $f$ on $n$ variables and for $i \in \{1, 2, \dots, n\}$, the event that $i$ is pivotal is the event that changing the $i$th bit changes the output of the function.

The influence of the $i$th bit, $I_i(f)$, is the probability that $i$ is pivotal. (Also called the Banzhaf-Penrose index.)

[Photos: John Banzhaf and Lionel Penrose]
Pivotality and Influences: Examples

• $f_n(x_1, \dots, x_n) := x_1$ (Dictator). The first bit is always pivotal, so $I_1(f) = 1$. The other bits are never pivotal, so they have influence 0.
• $f_n(x_1, \dots, x_n) := \prod_{i=1}^n x_i$ (Parity). All the bits are always pivotal, so all have influence 1.
• $f_n(x_1, \dots, x_n) := \mathrm{sign}(\sum_i x_i)$ (Majority Function, $n$ odd). A bit is pivotal iff there is a tie among the other bits. Hence all influences are about $c/n^{1/2}$.
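For small $n$ the influences can be computed exactly by enumerating all $2^n$ inputs and flipping each bit in turn. This brute-force sketch (my own, not from the slides) reproduces the majority asymptotics:

```python
import itertools
import numpy as np

def influences(f, n):
    """Exact influences I_i(f) = P(i is pivotal), by enumerating all 2^n inputs."""
    counts = np.zeros(n)
    for bits in itertools.product([-1, 1], repeat=n):
        x = np.array(bits)
        for i in range(n):
            y = x.copy()
            y[i] = -y[i]                 # flip the i-th bit
            if f(x) != f(y):             # output changes: i is pivotal here
                counts[i] += 1
    return counts / 2 ** n

maj = lambda x: np.sign(x.sum())         # majority on one +/-1 vector, n odd
print(influences(maj, 9))                # each entry ~ 0.27, i.e. about c / sqrt(n)
```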
Influences are relevant for noise sensitivity

Theorem (Benjamini, Kalai & Schramm 1999)
If
$$\lim_{n \to \infty} \sum_i I_i(f_n)^2 = 0,$$
then $\{f_n\}$ is noise sensitive.

Remarks:
• The Parity function shows that this condition is not necessary.
• However, this condition is necessary for monotone (increasing) functions. (All examples other than Parity you have seen are monotone.)
• The majority functions just miss satisfying this condition and are an extremal sequence in many respects.

How do we get noise sensitivity of percolation crossings from this?
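Before turning to percolation, the remarks can be illustrated numerically with the influences sketch from the previous slide (again my illustration): dictator gives $\sum_i I_i^2 = 1$, parity gives $n$, and majority hovers near $2/\pi \approx 0.64$ — none of the three satisfies the condition, consistent with dictator and majority being noise stable and with parity showing the condition is not necessary.

```python
examples = [("dictator", lambda x: x[0]),
            ("parity",   lambda x: np.prod(x)),
            ("majority", lambda x: np.sign(x.sum()))]

for name, f in examples:
    for n in (5, 9, 13):
        I = influences(f, n)
        print(name, n, round((I ** 2).sum(), 4))   # 1, n, and ~2/pi respectively
```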
What is the probability of a hexagon being pivotal for percolation?

The probability of being pivotal is (away from the boundary) about $1/R^{5/4}$ (the 4-arm exponent!), yielding a sum of squared influences of about $R^2 \times 1/R^{5/2} = R^{-1/2}$, which goes to 0.
Quantitative Noise sensitivity of percolation

Recall BKS showed percolation crossings decorrelate under noise even if $\epsilon_R \to 0$, provided $\epsilon_R \ge C/\log(R)$ for a sufficiently large $C$.

Might we believe that percolation crossings can decorrelate even if $\epsilon_R$ is $1/R^\alpha$ for sufficiently small $\alpha$? If so, what is the largest $\alpha$ we could use?

Heuristic: Yes, and the largest $\alpha$ is $3/4$.
A heuristic for the noise sensitivity exponent for percolation

Might we believe that percolation crossings can decorrelate even if $\epsilon_R$ is $1/R^\alpha$? Heuristic: Yes, and the largest $\alpha$ is $3/4$.

• We have seen that the probability that a hexagon is pivotal is about $R^{-5/4}$.
• Hence the expected number of pivotal hexagons (among the $R^2$ hexagons) is about $R^2 \times R^{-5/4} = R^{3/4}$.
• Therefore (with $\epsilon_R = 1/R^\alpha$) the expected number of pivotal hexagons that we flip is $R^{3/4 - \alpha}$.
• If $\alpha > 3/4$, we don't flip a pivotal and things don't change. (This can be made rigorous relatively easily.)
• If $\alpha < 3/4$, we are likely to hit a pivotal and things should get "mixed up" and decorrelate. (This is much harder to make rigorous.)
Randomized algorithms approach

Definition
Let $f$ be a Boolean function. A randomized algorithm $A$ for $f$ examines the input bits one by one (where the choice of the next bit examined can be random and may depend on the values of the bits examined so far). Let $J \subseteq \{1, 2, \dots, n\}$ be the (random) set of bits examined by the algorithm. Define the revealment of $A$ to be
$$\delta_A := \sup\{\mathbb{P}(i \in J) : i \in \{1, 2, \dots, n\}\}.$$
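To make the definition concrete, here is a toy sketch (mine, not from the slides; OR is not one of the balanced examples above, it just keeps the bookkeeping easy) of a randomized algorithm with small revealment: query the bits in uniformly random order and stop as soon as the output is determined.

```python
import numpy as np

def or_algorithm(x, rng):
    """Decide whether any bit of x is +1, querying bits in uniform random order.
    Returns (value, list J of queried indices); stops at the first +1 seen."""
    J = []
    for i in rng.permutation(len(x)):
        J.append(i)
        if x[i] == 1:
            return 1, J
    return -1, J

def empirical_revealment(n=100, trials=20_000, seed=1):
    """Estimate delta_A = max_i P(i in J) over uniform +/-1 inputs."""
    rng = np.random.default_rng(seed)
    hits = np.zeros(n)
    for _ in range(trials):
        x = rng.choice([-1, 1], size=n)
        _, J = or_algorithm(x, rng)
        hits[J] += 1
    return hits.max() / trials

print(empirical_revealment())   # about 2/n = 0.02: ~2 queries, spread symmetrically
```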
Noise sensitivity and Randomized algorithms

Theorem (Schramm & S. 2010)
Let $\{f_n\}$ be a sequence of Boolean functions with $A_n$ being a randomized algorithm for $f_n$.
1. If $\lim_{n \to \infty} \delta_{A_n} = 0$, then $\{f_n\}$ is noise sensitive.
2. If $\delta_{A_n} \le C/n^\alpha$, then for all $\beta < \alpha/2$,
$$\lim_{n \to \infty} \mathbb{E}\big[f_n(x)\, f_n(x^{1/n^\beta})\big] - \mathbb{E}\big[f_n(x)\big]^2 = 0.$$
[Photo: Oded Schramm and J.S.]
What does this give for percolation?

Follow the interface from the bottom right corner to the top left corner. A hexagon $H$ near the center of the picture is examined only if the interface comes next to $H$ (the two-arm event), which has probability about $(1/R)^{1/4}$.

Hence we obtain decorrelation if $\epsilon_R$ is larger than $1/R^{1/8}$. This is a factor of 6 off from the $3/4$ conjecture.
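To spell out how the $1/8$ comes out of part 2 of the theorem (my reading of the arithmetic): the crossing function has $n = R^2$ input bits, so
$$\delta_{A_R} \approx R^{-1/4} = (R^2)^{-1/8} = n^{-1/8},$$
giving $\alpha = 1/8$, hence decorrelation for $\epsilon = n^{-\beta}$ with $\beta < 1/16$, i.e. $\epsilon_R = R^{-2\beta}$ with $2\beta < 1/8$.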
Part 3: The Fourier set-up (used in all three methods!)

The set of all functions $f : \{-1,1\}^n \to \mathbb{R}$ is a $2^n$-dimensional vector space with orthogonal basis $\{\chi_S\}_{S \subseteq \{1,\dots,n\}}$, where
$$\chi_S(x_1, \dots, x_n) := \prod_{i \in S} x_i.$$

We can then write
$$f = \sum_{S \subseteq \{1,\dots,n\}} \hat{f}(S)\, \chi_S.$$

It is elementary to check that (with the flip-with-probability-$\epsilon$ convention above, under which $\mathbb{E}[x_i x^\epsilon_i] = 1 - 2\epsilon$)
$$\mathbb{E}\big[f(x) f(x^\epsilon)\big] - \mathbb{E}\big[f(x)\big]^2 = \sum_{k=1}^n (1 - 2\epsilon)^k \sum_{|S|=k} \hat{f}(S)^2.$$

Noise sensitivity corresponds to the "Fourier weights" being concentrated on large $S$'s.
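A numerical sanity check of this identity (my own sketch): brute-force the Walsh-Fourier coefficients of majority on $n = 5$ bits and compare the right-hand side with a direct Monte Carlo estimate of the left-hand side.

```python
import itertools
import numpy as np

n, eps = 5, 0.2
inputs = np.array(list(itertools.product([-1, 1], repeat=n)))  # all 2^n inputs
f_vals = np.sign(inputs.sum(axis=1))                           # majority, n odd

# Right-hand side: sum over nonempty S of (1 - 2*eps)^|S| * f_hat(S)^2,
# where f_hat(S) = E[f(x) chi_S(x)] under the uniform measure.
rhs = 0.0
for k in range(1, n + 1):
    for S in itertools.combinations(range(n), k):
        chi_S = inputs[:, list(S)].prod(axis=1)
        f_hat = np.mean(f_vals * chi_S)
        rhs += (1 - 2 * eps) ** k * f_hat ** 2

# Left-hand side by Monte Carlo: E[f(x) f(x^eps)] - E[f(x)]^2 (E[f] = 0 here).
rng = np.random.default_rng(0)
x = rng.choice([-1, 1], size=(200_000, n))
x_eps = np.where(rng.random(x.shape) < eps, -x, x)
lhs = np.mean(np.sign(x.sum(axis=1)) * np.sign(x_eps.sum(axis=1)))
print(lhs, rhs)   # the two estimates should agree to Monte Carlo accuracy
```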