# Noise Sensitivity and Influence 1

Last week, Simon gave a junior probability seminar about noise sensitivity and its applications, in particular to dynamical critical percolation, so I thought I would write something about this topic based on his talk and some other sources.

The motivation is critical site percolation on the triangular lattice. This has critical probability $p_c=\frac12$, and, at this critical probability, almost surely there is no infinite component. But it turns out that there is ‘almost’ an infinite component, in the following technical sense. Dynamical site percolation is given by switching the state of each site according to independent Poisson processes with the same rate. Then, by construction, if we start at time 0 with the critical measure, then at any fixed time $T>0$ the configuration is distributed as critical site percolation, and hence almost surely has no infinite component. If, however, we consider the interval of times $[0,T]$, then almost surely there *is* a time when there is an infinite component, and indeed the set of such times has positive Hausdorff dimension.

## Definitions

Noise sensitivity addresses the question of how likely some function of a product measure is to change under small perturbations to the state. To give a precise definition, we take a function $f_n:\{0,1\}^n \rightarrow \{0,1\}$ on a product space, with product measure $\mu$. Then given $\omega\in\Omega_n=\{0,1\}^n$, define $\omega^\epsilon$ to be the outcome when each component of $\omega$ is independently resampled with probability $\epsilon$. Note that it matters quantitatively whether we consider ‘resampled’ or ‘swapped’, but it doesn’t affect any of the results qualitatively.
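As a minimal sketch of the resampling operation (the function name `resample` is my own):

```python
import random

def resample(omega, eps, rng=random):
    """Return omega^eps: each component is independently resampled
    (replaced by a fresh fair coin) with probability eps; otherwise kept."""
    return [rng.randint(0, 1) if rng.random() < eps else x for x in omega]

omega = [1, 0, 1, 1, 0, 0, 1, 0]
print(resample(omega, 0.0))  # eps = 0: nothing is resampled, output equals omega
```

For ‘swapped’ dynamics one would flip the chosen bits deterministically instead of tossing a fresh coin; under the uniform measure a resampled bit actually changes with probability $\epsilon/2$.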

Then the noise stability of $f$ at level $\epsilon$ is defined to be

$S_\epsilon(f):=\text{Cov}\left(f(\omega),f(\omega^\epsilon)\right),$

where $\omega \sim \mu$. Note that $\omega^\epsilon\sim \mu$ also, but obviously $\omega,\omega^\epsilon$ are not independent. Noise stability measures the magnitude of this dependence. The sequence of functions $(f_n)$ is said to be noise-sensitive if, for every fixed $\epsilon>0$, $S_\epsilon(f_n)\rightarrow 0$ as $n\rightarrow\infty$.
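The covariance can be estimated by Monte Carlo. A sketch under the uniform product measure (the function names are my own; majority stands in for a generic Boolean function, and is in fact the classic example of a noise-*stable* sequence, so the estimate stays bounded away from 0):

```python
import random

def noise_stability(f, n, eps, trials=10000, seed=1):
    """Monte Carlo estimate of S_eps(f) = Cov(f(omega), f(omega^eps))
    under the uniform product measure on {0,1}^n."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(trials):
        omega = [rng.randint(0, 1) for _ in range(n)]
        # resample each component independently with probability eps
        noisy = [rng.randint(0, 1) if rng.random() < eps else b for b in omega]
        xs.append(f(omega))
        ys.append(f(noisy))
    ex, ey = sum(xs) / trials, sum(ys) / trials
    return sum(x * y for x, y in zip(xs, ys)) / trials - ex * ey

majority = lambda w: 1 if 2 * sum(w) > len(w) else 0
print(noise_stability(majority, 51, 0.1))  # clearly positive for majority
```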

A related concept is influence. This should be viewed as a local version of noise sensitivity, where we consider the likelihood that the function will change as a result of swapping a single fixed component of the state. Perhaps abusing standard notation, define $\omega^{(i)}$ to be the result of swapping the ith component of $\omega$. Then the influence of the ith coordinate is

$I_i(f) := \mu(\{\omega\in\Omega: f(\omega)\ne f(\omega^{(i)})\}).$
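For small $n$, influences under the uniform measure can be computed exactly by enumeration. A minimal sketch (the ‘dictator’ function, which just reports its first coordinate, is chosen because its influences are obvious):

```python
from itertools import product

def influence(f, n, i):
    """Exact I_i(f) under the uniform measure on {0,1}^n:
    the probability that flipping component i changes f."""
    count = 0
    for w in product((0, 1), repeat=n):
        flipped = list(w)
        flipped[i] ^= 1
        if f(list(w)) != f(flipped):
            count += 1
    return count / 2 ** n

dictator = lambda w: w[0]
print([influence(dictator, 4, i) for i in range(4)])  # [1.0, 0.0, 0.0, 0.0]
```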

## Examples and Results

As in so many models, we are interested in the relationship between the local setting and the global setting. Benjamini, Kalai and Schramm (1999) showed in [1] that, in the case of the uniform product measure,

$\sum_{i\in[n]} I_i(f_n)^2 \stackrel{n\rightarrow\infty}\rightarrow 0,$

is a sufficient condition for noise-sensitivity of $(f_n)$. Note that the converse is trivially false, by considering the function $f(\omega)$ that records the parity of the number of 1s in $\omega$. Here, every influence is 1, since changing one component by definition changes the value of the function, so the sum of squared influences is $n$. But the function is noise-sensitive: resampling a proportion $\epsilon$ of the components changes a random number of them, and for large $n$ the parity of this number is close to uniform, so the value of the function changes with probability roughly 1/2.
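The parity example can be made fully explicit. The sketch below checks by enumeration that every influence is 1, and uses the exact covariance: after resampling, each bit differs from its original independently with probability $\epsilon/2$, the number of differing bits is odd with probability $\frac12(1-(1-\epsilon)^n)$, and the covariance works out to $\frac14(1-\epsilon)^n\rightarrow 0$ (function names are my own):

```python
from itertools import product

def parity(w):
    return sum(w) % 2

def influence(f, n, i):
    """Exact I_i(f) under the uniform measure, by enumeration."""
    count = 0
    for w in product((0, 1), repeat=n):
        flipped = list(w)
        flipped[i] ^= 1
        if f(list(w)) != f(flipped):
            count += 1
    return count / 2 ** n

def parity_stability(n, eps):
    """Exact Cov(f(omega), f(omega^eps)) for parity: each bit differs from
    its original with probability eps/2, independently, which gives
    covariance (1/4)(1-eps)^n."""
    return 0.25 * (1 - eps) ** n

n = 10
assert all(influence(parity, n, i) == 1.0 for i in range(n))  # sum of squares = n
print(parity_stability(n, 0.2), parity_stability(2 * n, 0.2))  # decays to 0 in n
```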

However, if we restrict to the class of monotone increasing functions, where changing a 0 to a 1 can never cause $f$ to decrease, the converse is true. Outside the setting of uniform product measure, many versions of this theorem remain open.

This perhaps seems counter-intuitive. After all, we would expect high global sensitivity to follow from high local sensitivity, yet the theorem says that *small* influences force noise sensitivity. I don’t have a good intuition for why this should be true. The original authors themselves describe this as ‘paradoxical’ in [1].

It is natural to ask which component has the largest influence, and in particular whether we can guarantee the existence of a component with influence at least some particular value. Obviously, if the Boolean function is constant, all the influences are zero, and other examples, such as the indicator function of a specific state, will also have very small influences. But such functions also have small variance (under the uniform product measure), so we should compare the influences to the variance. We can show that

$\text{Var}(f)\le \frac14 \sum_{i\in[n]}I_i(f),$

which is a (discrete) example of a Poincaré inequality, in which deviations of a function from its mean in $L^p$ are bounded in terms of the derivative of the function. Here, influence is a discrete analogue of the derivative.
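The inequality is easy to check numerically. A sketch verifying it for the majority function on 5 bits by exact enumeration (each influence here is $\binom{4}{2}/2^4 = 0.375$, since a bit is pivotal exactly when the other four split 2–2; function names are my own):

```python
from itertools import product

def variance_and_influences(f, n):
    """Exact Var(f) and (I_1, ..., I_n) under the uniform measure on {0,1}^n."""
    states = [list(w) for w in product((0, 1), repeat=n)]
    p = sum(f(w) for w in states) / 2 ** n
    influences = []
    for i in range(n):
        count = 0
        for w in states:
            flipped = w[:]
            flipped[i] ^= 1
            if f(w) != f(flipped):
                count += 1
        influences.append(count / 2 ** n)
    return p * (1 - p), influences

majority = lambda w: 1 if 2 * sum(w) > len(w) else 0
var, infl = variance_and_influences(majority, 5)
print(var, sum(infl) / 4)  # 0.25 <= 0.46875: the Poincare bound holds
```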

Proof: We observe first that if $\mathbb{P}(f(w)=1)=p$, then $\mathbb{E}[f(w)]=p$, and

$\text{Var}(f)=p(1-p)^2+(1-p)p^2=p(1-p)=\frac12 \mathbb{P}(f(w)\ne f(\bar w)),$

where $w,\bar w$ are independent realisations of the underlying measure. We move from $w$ to $\bar w$ one component at a time, examining the influence at each step. Let $v^{(i)}\in \{0,1\}^n$ be given by the first $i$ components of $\bar w$ and the final $(n-i)$ components of $w$. Then $v^{(0)}=w, v^{(n)}=\bar w$. Finally, define

$J:=\{i:w_i\ne \bar w_i\}.$

We also say that a state is $i$-pivotal if changing the $i$th component changes the value of the function. Note that if $f(w)\ne f(\bar w)$, then there exists some $0\le i<n$ such that $f(v^{(i)})\ne f(v^{(i+1)})$. In particular:

$\{f(w)\ne f(\bar w)\}\subset \{1\in J, v^{(0)}\text{ 1-pivotal}\}\cup$

$\{2\in J, v^{(1)}\text{ 2-pivotal}\}\cup\ldots\cup\{n\in J, v^{(n-1)}\text{ n-pivotal}\}.$

Note now that for any $i\in[n]$, $v^{(i-1)}$ has uniform product measure on $\Omega$, since it is an interpolation of sorts between independent realisations of the uniform product measure. Furthermore, since we haven’t interpolated at component $i$ yet, it is independent of the event $\{i \in J\}$. Thus

$\mathbb{P}(i\in J, v^{(i-1)}\text{ i-pivotal})=\frac12 I_i(f).$

Putting everything together with a union bound gives

$\text{Var}(f)=\frac12 \mathbb{P}(f(w)\ne f(\bar w))\le \frac14 \sum_{i\in [n]}I_i(f),$

as desired.
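The decomposition underlying the proof can be sanity-checked empirically: whenever $f(w)\ne f(\bar w)$, the interpolation path must change value at some step, and any such step must occur at a component where $w$ and $\bar w$ genuinely differ. A Monte Carlo sketch (0-indexed, with majority as a stand-in function; names are my own):

```python
import random

def check_interpolation(f, n, trials=2000, seed=0):
    """Check: if f(w) != f(wbar), then the path v^(0)=w, ..., v^(n)=wbar
    changes value at some step i, and every such step has w_i != wbar_i
    (so, in the proof's notation, i is in J and the state is pivotal there)."""
    rng = random.Random(seed)
    for _ in range(trials):
        w = [rng.randint(0, 1) for _ in range(n)]
        wbar = [rng.randint(0, 1) for _ in range(n)]
        if f(w) != f(wbar):
            # v^(i) takes the first i components from wbar, the rest from w
            path = [wbar[:i] + w[i:] for i in range(n + 1)]
            steps = [i for i in range(n) if f(path[i]) != f(path[i + 1])]
            assert steps and all(w[i] != wbar[i] for i in steps)
    return True

majority = lambda w: 1 if 2 * sum(w) > len(w) else 0
print(check_interpolation(majority, 7))  # True
```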

We might ask when this result is tight. It is certainly not tight if, during the interpolation procedure, the value of $f$ changes many times. Since a consequence of this Poincaré inequality is that there is some component $i$ such that $I_i(f)\ge \frac{4\text{Var}(f)}{n}$, we might conjecture that there is a better lower bound.

It turns out, as shown by Kahn, Kalai and Linial in [2], that we can add an extra logarithmic factor. Precisely, for some absolute constant $c>0$, there exists an $i$ such that:

$I_i(f)\ge c \text{Var}(f)\frac{\log n}{n}.$

Indeed, there’s an example which shows that this is optimal. Partition the $n$ components into groups, which we call tribes, of size roughly $\log_2 n - \log_2\log_2 n$. The function takes the value 1 if there is at least one tribe in which every component is 1. A particular component is then pivotal precisely when all the other components in its tribe are 1 (and no other tribe consists entirely of 1s). Thus

$I_i(f)\approx \left(\frac{1}{2}\right)^{\log n - \log\log n}= \frac{\log n}{n}.$
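For tribes small enough to enumerate, the influence can be computed exactly and compared with the pivotality description above: a bit is pivotal precisely when the rest of its tribe is all 1s and no other tribe is all 1s, giving $I_i = (\frac12)^{k-1}(1-2^{-k})^{m-1}$ for $m$ tribes of size $k$. A sketch (the choices $k=3$, $m=4$ are illustrative):

```python
from itertools import product

def tribes(w, k):
    """1 iff some consecutive block ('tribe') of k components is all 1s."""
    return int(any(all(w[j * k:(j + 1) * k]) for j in range(len(w) // k)))

def influence(f, n, i):
    """Exact I_i(f) under the uniform measure, by enumeration."""
    count = 0
    for w in product((0, 1), repeat=n):
        flipped = list(w)
        flipped[i] ^= 1
        if f(list(w)) != f(flipped):
            count += 1
    return count / 2 ** n

k, m = 3, 4                      # 4 tribes of size 3, so n = 12
n = k * m
exact = influence(lambda w: tribes(w, k), n, 0)
predicted = (1 / 2) ** (k - 1) * (1 - 2 ** -k) ** (m - 1)
print(exact, predicted)          # the two agree exactly
```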

One consequence of this is that, by BKS, we learn that the function is noise-sensitive. Is this clear directly? Note that if $w$ contains a tribe of 1s, then for large $n$ roughly an $\epsilon$-proportion of that tribe’s components get resampled in $w^\epsilon$, so with high probability at least one of them is flipped to 0. In particular, it is very unlikely that the same tribe will be a tribe of 1s in $w^\epsilon$. So heuristically, the presence of a tribe in $w^\epsilon$ feels almost independent of the presence of a tribe in $w$. But because of the (approximate) symmetry of the setup, it is much easier to work with influences, and use BKS.

I will continue the discussion of the Fourier techniques used to derive some of these results in a new post soon.

## References

[1] Benjamini, Kalai, Schramm (1999). Noise Sensitivity of Boolean Functions and Applications to Percolation. (arXiv)

[2] Kahn, Kalai, Linial (1988). The Influence of Variables on Boolean Functions. (link)

Garban, Steif. Noise Sensitivity of Boolean Functions and Percolation. (arXiv) I have borrowed heavily from these excellent notes. The discrete Poincaré inequality appears in Section 1.2, with an outline of the solution in Exercise 1.5.