Sharpness of Phase Transition in Percolation

On the flight to Romania a couple of weeks ago, I read this very nice paper by Duminil-Copin and Tassion in which, as a corollary to a more general result in a companion paper, they provide a short proof of a result about percolation on \mathbb{Z}^d. Mitch talked about this paper at our final Junior Probability Seminar of this term yesterday, so it seemed appropriate to write up something about this nice argument. I must confess I know relatively little about these problems, and in particular know nothing about how the original authors, Aizenman and Barsky (1987) and Menshikov (1986), approached this problem, but experts have said that this proof is substantially easier to digest.

Rather than reference the first paper repeatedly, I remark now that everything which follows comes from there.

We consider conventional bond percolation on the edges of \mathbb{Z}^d, for d\ge 2, and are interested in whether the origin percolates with positive probability, that is, whether the origin is contained in an infinite open component. As usual we define p_c=\sup\{p: \mathbb{P}_p(0\leftrightarrow\infty)=0\} to be the critical probability above which percolation happens with positive probability. Defining \theta(p)=\mathbb{P}_p(0\leftrightarrow\infty), we do not know whether \theta(p_c)=0 for some values of d, notably d=3.

If the origin is connected to infinity, it is by definition connected to the boundary of every box \Lambda_n:=[-n,n]^d. The first n steps of a path from the origin to \partial \Lambda_n form a self-avoiding walk of length n starting from 0, and the number of these grows at most as fast as (2d-1)^n. In particular, we know that p_c\ge \frac{1}{2d-1}, but also, for any p<\frac{1}{2d-1}, the probability \mathbb{P}_p[0\leftrightarrow \partial\Lambda_n] decays exponentially in n. We would expect this in fact to hold for all p<p_c, and this is the first thing the authors prove (call it Item 1). They also show that the percolation probability grows at least linearly above p_c, specifically (call it Item 2)

\theta(p)\ge \frac{p-p_c}{p(1-p_c)}.
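To spell out the counting bound (a union bound, implicit above): any open path from 0 to \partial\Lambda_n begins with a self-avoiding walk of length n from the origin, of which there are at most 2d(2d-1)^{n-1}, each fully open with probability p^n. Hence

\mathbb{P}_p[0\leftrightarrow \partial\Lambda_n]\le 2d(2d-1)^{n-1}p^n,

which indeed decays exponentially whenever p<\frac{1}{2d-1}.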

The proof here proceeds by considering the following function, defined on finite subsets S which contain the origin:

\varphi_p(S):= p\sum_{(x,y)\in\Delta S} \mathbb{P}_p[0\stackrel{S}{\leftrightarrow} x],\quad S\subset \mathbb{Z}^d.

In words, this is p times the expected number of boundary edges (x,y)\in\Delta S whose endpoint x is joined to the origin by an open path within S. So this gives a measure of how likely we are to escape S, and in particular an upper bound on the probability that an open path exists from 0 to outside S: any such path must first cross \Delta S at some open edge (x,y), having connected 0 to x within S. The authors then define the alternative critical probability
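As a sanity check (mine, not the paper's), take S=\{0\}: then \Delta S consists of the 2d edges at the origin, and trivially 0\stackrel{S}{\leftrightarrow}0, so

\varphi_p(\{0\})=2dp,

which is less than 1 for p<\frac{1}{2d}. So we immediately recover \tilde p_c\ge\frac{1}{2d}.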

\tilde p_c := \sup\{p\text{ s.t. }\exists\text{ finite }S\ni 0\text{ with }\varphi_p(S)<1\}.

They will show that \tilde p_c satisfies the statements of both Item 1 and Item 2. Item 1 for \tilde p_c implies \tilde p_c\le p_c (exponential decay below \tilde p_c forces \theta=0 there), and Item 2 for \tilde p_c implies p_c\le \tilde p_c (percolation happens above \tilde p_c), so this is exactly what we need.

They show Item 1 first. Fix p<\tilde p_c, take a finite set S\ni 0 for which \varphi_p(S)<1, and take some box \Lambda_L which strictly contains S. Now we consider the probability of escaping from a box of side length kL. The reason this definition of S works so nicely is that it makes it possible to split the event of escaping from \Lambda_{kL} into events involving disjoint sets of edges being open, so we can use independence.

We decompose an open path from 0 to \partial\Lambda_{kL} according to the first time it leaves S. We should be mindful that there might be many paths from 0 to this boundary. The way we are bounding means it doesn't matter if there are many suitable paths, but we may take each to spend a maximal number of steps in S, in the sense that whenever the path re-enters S, say at a vertex z, there is no open path from 0 to z contained in S. Let x be the vertex of \partial S from which the path leaves S for the first time. Then, for all vertices y later in the path, 0\stackrel{S}{\not\leftrightarrow}y.

Given any suitable path, now take y to be the vertex directly following x, hence (x,y)\in\Delta S. If we take \mathcal{C} to be the set of vertices z for which 0\stackrel{S}{\leftrightarrow}z, we can split the expression by conditioning on \mathcal{C} to obtain:

\mathbb{P}_p[0\leftrightarrow \partial \Lambda_{kL}]\le p \sum_{(x,y)\in\Delta S}\sum_{C\subset S} \mathbb{P}_p[0\stackrel{S}{\leftrightarrow} x,\mathcal{C}=C] \mathbb{P}_p[y\stackrel{\mathbb{Z}^d\backslash C}{\leftrightarrow}\partial\Lambda_{kL}].

Splitting based on C gives us independence between all of these sets of edges, but then we immediately forget about the particular C. Irrespective of the choice of y (recall y is adjacent to S, so y\in\Lambda_L), this final probability is bounded by \mathbb{P}_p[0\leftrightarrow \partial \Lambda_{(k-1)L}] by translation invariance, since y is at distance at least (k-1)L from \partial\Lambda_{kL}. Meanwhile the p and the first term can be summed over C and (x,y)\in\Delta S to give \varphi_p(S). They obtain:

\mathbb{P}_p[0\leftrightarrow \partial \Lambda_{kL}] \le \varphi_p(S)\,\mathbb{P}_p[0\leftrightarrow \partial \Lambda_{(k-1)L}] \le \varphi_p(S)^{k-1},

where the final inequality follows by induction on k, and clearly gives exponential decay as required.
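Item 1 is also easy to see numerically. Here is a minimal simulation sketch (my own, not from the paper), for d=2 with p=0.3<p_c=1/2, estimating \mathbb{P}_p[0\leftrightarrow\partial\Lambda_n] by sampling edge states lazily during a breadth-first search:

```python
import random
from collections import deque

def reaches_boundary(n, p, rng):
    """BFS from the origin through open edges of Z^2, checking whether
    0 is connected to the boundary of [-n, n]^2. Edge states are
    sampled lazily, so only explored edges are ever generated."""
    open_edge = {}
    def is_open(u, v):
        e = (min(u, v), max(u, v))  # canonical key for the undirected edge
        if e not in open_edge:
            open_edge[e] = rng.random() < p
        return open_edge[e]

    seen, queue = {(0, 0)}, deque([(0, 0)])
    while queue:
        x, y = queue.popleft()
        if max(abs(x), abs(y)) == n:
            return True  # reached the boundary of the box
        for u in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if u not in seen and is_open((x, y), u):
                seen.add(u)
                queue.append(u)
    return False

rng, trials = random.Random(0), 2000
for n in (5, 10, 20):
    hits = sum(reaches_boundary(n, 0.3, rng) for _ in range(trials))
    print(n, hits / trials)  # decays roughly geometrically in n
```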

For Item 2 we use Russo's formula. Here we have a slightly simpler setting than the most general version, since the event under consideration A_n=\{0\leftrightarrow \partial\Lambda_n\} is increasing with respect to adding edges, and is a function of only finitely many edges. We consider \frac{d}{dp}\mathbb{P}_p[A_n] under the coupling in which each edge is added independently, at time-dependent rate \frac{1}{1-t}. (We take this to be the rate precisely to avoid having to reparameterise exponentially between time and probability; below, t=p.)
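(To check this rate gives the right marginal: the probability that a given edge has not been added by time t is

\exp\left(-\int_0^t \frac{ds}{1-s}\right)=\exp\left(\log(1-t)\right)=1-t,

so at time t each edge is present independently with probability exactly t, as required.)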

Just for ease, we only consider the right-derivative at p. Then with \mathbb{P} as the law of the coupled process:

\frac{d}{dp}\mathbb{P}_p[A] \approx \frac{1}{\epsilon} \mathbb{P}[A\text{ holds at }p+\epsilon,\text{ but not at }p]

= \frac{1}{\epsilon} \sum_{e\in E}\mathbb{P}[e\text{ added between }p,p+\epsilon\text{ and }e\text{ completes event }A]

+ \frac{1}{\epsilon}\mathbb{P}[\text{two or more edges relevant to }A\text{ added between }p,p+\epsilon].

Since the number of edges whose states determine A is finite, this second term vanishes as \epsilon\rightarrow 0. For the first term, an edge e completes A precisely when A does not yet hold and e is pivotal, and on this event e is still closed at p, so it is added during [p,p+\epsilon] with probability roughly \frac{\epsilon}{1-p}, independently of the rest of the configuration. So the first term is

\frac{1}{\epsilon}\sum_{e\in E} \frac{\epsilon}{1-p}\, \mathbb{P}[A\text{ doesn't hold at }p\text{, and edge }e\text{ pivotal at }p].

Taking the limit in \epsilon in this example gives

\frac{d}{dp}\mathbb{P}_p[0\leftrightarrow \partial\Lambda_n] = \frac{1}{1-p} \sum_{e\in\Lambda_n} \mathbb{P}_p[e\text{ pivotal, }0\not\leftrightarrow \partial \Lambda_n].
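As a remark (mine, not needed below): since the event that e is pivotal depends only on the states of the other edges, \mathbb{P}_p[e\text{ pivotal, }0\not\leftrightarrow \partial \Lambda_n]=(1-p)\mathbb{P}_p[e\text{ pivotal}], so this is just the familiar form of Russo's formula,

\frac{d}{dp}\mathbb{P}_p[0\leftrightarrow \partial\Lambda_n] = \sum_{e} \mathbb{P}_p[e\text{ pivotal}].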

The argument then proceeds in a similar way to Item 1, decomposing \{0\not\leftrightarrow \partial \Lambda_n\} by conditioning on the set \mathcal{S} of vertices from which it is not possible to reach \partial \Lambda_n. This set is an excellent candidate to play the role of S, since on this event it contains 0 by definition. Once we have specified \mathcal{S}, we know which edges can be pivotal, namely those across the boundary of \mathcal{S}. Crucially, the event \{\mathcal{S}=S\} depends only on edges outside or on the boundary of S, so it is independent of the event \{0\stackrel{S}{\leftrightarrow}x\} whenever x\in\partial S. So applying this version of Russo gives

\frac{d}{dp}\mathbb{P}_p[0\leftrightarrow \partial\Lambda_n] = \frac{1}{1-p}\sum_{0\in S\subset \Lambda_n} \sum_{(x,y)\in\Delta S} \mathbb{P}_p[0\stackrel{S}{\leftrightarrow} x]\mathbb{P}_p[\mathcal{S}=S].

It is clear where \varphi_p(S) might turn up within the sum (after removing a factor of p), so for a bound we can take \inf_S \varphi_p(S) outside the sum, and arrive at

\frac{d}{dp}\mathbb{P}_p[0\leftrightarrow \partial\Lambda_n] \ge \frac{1}{p(1-p)}\inf_{0\in S\subset \Lambda_n}\varphi_p(S)\,\left(1-\mathbb{P}_p[0\leftrightarrow \partial \Lambda_n]\right).

It wasn’t immediately clear to me that this implied the given form of Item 2 (though it certainly is consistent). I think perhaps I was trying to be too complicated, thinking about Gronwall’s lemma, when in fact everything really follows from bounding \inf_S \varphi_p(S) below by 1 (since we have assumed p>\tilde p_c here), then integrating the differential inequality, with f(p):=\mathbb{P}_p[0\leftrightarrow\partial\Lambda_n]:

\frac{d}{dp}\left[ \frac{p}{1-p}f(p) \right] = \frac{p}{1-p}f'(p)+\frac{1}{(1-p)^2}f(p) \ge \frac{1}{(1-p)^2}.
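Integrating this from \tilde p_c to p, and discarding the non-negative term \frac{\tilde p_c}{1-\tilde p_c}f(\tilde p_c) at the lower limit, gives

\frac{p}{1-p}f(p)\ge \frac{p}{1-p}-\frac{\tilde p_c}{1-\tilde p_c},\qquad\text{i.e.}\qquad f(p)\ge\frac{p-\tilde p_c}{p(1-\tilde p_c)},

uniformly in n, and letting n\rightarrow\infty gives exactly the statement of Item 2.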

I include this not because it’s an interesting part of the argument (I don’t think it is really) but because I struggled to digest it first time round.

What is interesting is how well this choice to consider \varphi_p(S) works out. In both parts of the argument, sets which work well for splitting the crossing probabilities into disjoint edge events mesh nicely with considering this function after conditioning on sub- or super-criticality with respect to \tilde p_c.


Discontinuous Phase Transitions

Yesterday, Demeter Kiss from Cambridge gave a seminar in Oxford about a model for self-destructive percolation on \mathbb{Z}^2 that had implications for the (non-)existence of an infinite-parameter forest fire model on the same lattice. I enjoyed talking about this and his recent work on the related model of frozen percolation on \mathbb{Z}^2. Considering these models in the lattice setting presents a whole range of interesting geometric challenges that are not present in the mean-field case, which has mainly occupied my research direction so far.

The afternoon’s discussion included lots of open problems about percolation. Several of these are based around continuity of the phase transition, so I thought I would write a quick post about some simple examples of this, and one example where it does not hold.

A helpful base example is bond percolation on the lattice \mathbb{Z}^2. Here, we specify some probability p in [0,1], and we declare edges of the lattice open with probability p, independently of each other. We then consider the graph induced by the open edges. We say that percolation occurs if the origin is contained in an infinite open component. The terminology arises from the interpretation as fluid being added at the origin and flowing down open edges. We define \theta(p) to be the probability that the origin is in an infinite component when the parameter is p. By translation-invariance, we can get some sort of 0-1 law, to conclude that there is an infinite component somewhere in the system with probability either 0 or 1, depending on whether \theta(p) is positive or zero. Indeed, we can further show that if it is positive, then with probability 1 there is a unique infinite component.

We define the critical probability p_c:= \inf\{p:\theta(p)>0\}. A question worth asking is then: what is \theta(p_c)? In some examples we can find p_c, but we cannot prove that \theta(p) is continuous around p_c. In the case of \mathbb{Z}^2, continuity is known, as is the value p_c=1/2, from work of Kesten. See below for a plot of \theta(p) in this setting (obtained from this blog, though possibly originating elsewhere).

[Figure: plot of the percolation probability \theta(p) for bond percolation on \mathbb{Z}^2]

The aim is to find an example where we do not have such a continuous phase transition. The original work on frozen percolation took place on trees, and one of Kiss’s results confirms that these show qualitatively different phenomena to the same process on the lattice. In some sense, trees lie halfway between a lattice and a mean-field model, since there is often some independence when we look down the tree from a given generation, if it is well-defined to use such language.

Anyway, first we consider percolation on an infinite regular rooted k-ary tree. This means we have a root, which has k children, each of which in turn has k children, and so on. As before we consider bond percolation with parameter p. In this setting, we have a language to describe the resulting open component of the root. The offspring distribution of any vertex in the open component is given by Bin(k,p) independently of everything else, so we can view this component as the realisation of a Galton-Watson tree with this offspring distribution. This distribution has finite mean kp, and so we can state explicitly when the survival probability is positive. This happens when the mean is greater than 1, i.e. p>1/k.
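For the record, the pgf (probability generating function, of which more below) of this Bin(k,p) offspring distribution is

f(x)=(1-p+px)^k,

with mean f'(1)=kp, matching the claim above.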

For our actual example, we will consider the survival probability, but the technicalities are easier to explain if we look at the extinction probability, now using the language of branching processes. Suppose the offspring distribution has pgf given by

f(x)=p_0+p_1x+p_2x^2+\ldots.

Then the extinction probability q satisfies f(q)=q. I want to pause to consider what happens if this equation has multiple solutions. Indeed, in most interesting cases it will have multiple solutions, since f(1) is always 1 for a non-defective offspring distribution. It is typically cited that the extinction probability q is the smallest solution to this equation in [0,1]. I want to discuss why that is the case.

To approach this, we have to consider what extinction means. It is the increasing limit of the events {we are extinct after n generations}. Let the probabilities of these events be q_n, so q_0=0. Then by a straightforward coupling argument, we must have

0=q_0\le q_1\le q_2 \le\ldots\le q:= \lim q_n \le 1.

But, by the same generating function argument as before, q_{n+1}=f(q_n)\ge q_n. So if we split [0,1] into the region A where f(x)\ge x and the region B where f(x)<x, all the (q_n) must lie in the former, and since A is closed, their limit must lie in A also. Note that if f(x) intersects x many times, then A is not necessarily connected. In the diagram below, in moving from q_n to q_{n+1} we might jump across part of B.

[Figure: iterating q_{n+1}=f(q_n) against the diagonal]

This would be bad, as we are trying to prove that q is the right boundary of the connected component of A containing 0. But it cannot happen, as f is monotonic: if z is a root of f(x)=x lying between the hypothesised q_n<q_{n+1}, then q_{n+1}=f(q_n)\le f(z)=z<q_{n+1}, a contradiction.
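The iteration above is also easy to run numerically. A quick sketch (mine, not from the post's sources), using the Bin(3,p) offspring distribution from earlier, for which the critical value is p=1/3:

```python
def extinction_probability(f, tol=1e-12, max_iter=10**6):
    """Iterate q_{n+1} = f(q_n) from q_0 = 0. The increasing limit is the
    smallest fixed point of f in [0, 1], i.e. the extinction probability."""
    q = 0.0
    for _ in range(max_iter):
        q_next = f(q)
        if q_next - q < tol:
            return q_next
        q = q_next
    return q

# Bin(3, p) offspring distribution has pgf f(x) = (1 - p + p*x)^3.
for p in (0.30, 0.40, 0.60):
    q = extinction_probability(lambda x: (1 - p + p * x) ** 3)
    print(f"p = {p:.2f}: extinction probability q = {q:.6f}")
# p = 0.30 gives q = 1 (subcritical); p > 1/3 gives q < 1.
```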

Ok, so now we are ready to consider our counterexample to continuity over the percolation threshold. See references for a link to the original source of this example. We have to choose a slightly more complicated event than mere survival or extinction. We consider bond percolation as before on the infinite ternary tree, where every vertex has precisely 3 offspring. Our percolation event is now that the root is the root of an infinite binary tree. That is, the root has at least two children, each of which have at least two children, each of which, and so on.

If we set this probability equal to q, and the probability of an edge being open equal to p, then we have the recurrence:

q=3p^2(1-p)q^2+p^3[3q^2(1-q)+q^3].

The first term corresponds to the root having exactly two open edges to offspring (both of those children must themselves root infinite binary trees), and the second to the root having all three open edges to offspring (at least two of the three children must root infinite binary trees). Expanding, the right-hand side simplifies to 3p^2q^2-2p^3q^3, and we end up with

q\left[2p^3q^2-3p^2q+1\right]=0.

We are therefore interested in roots of the quadratic lying between 0 and 1. The discriminant can be evaluated as

\Delta=p^3(9p-8),

and so there are no real roots when p<8/9, meaning q=0 is the only solution and the probability in question is zero. But when p=8/9, we have a repeated root at q=27/32, which is obviously not zero! So the probability jumps from 0 to 27/32 at the critical parameter: a discontinuous phase transition.
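A quick numerical check of this jump (my own sketch; I take the larger root of the quadratic, which is the relevant fixed point here, since the probabilities of rooting a binary tree of depth n decrease from 1 to the largest fixed point):

```python
import math

def binary_in_ternary_prob(p):
    """Probability q that the root of Bernoulli(p) bond percolation on the
    ternary tree roots an infinite binary tree: the largest root in [0, 1]
    of 2 p^3 q^2 - 3 p^2 q + 1 = 0, or 0 if there is no real root."""
    disc = 9 * p**4 - 8 * p**3  # discriminant, equal to p^3 (9p - 8)
    if disc < 0:
        return 0.0
    return (3 * p**2 + math.sqrt(disc)) / (4 * p**3)

for p in (0.85, 8/9, 0.90, 0.95, 1.00):
    print(f"p = {p:.4f}: q = {binary_in_ternary_prob(p):.5f}")
# q jumps from 0 to 27/32 = 0.84375 at p = 8/9: a discontinuous transition.
```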

This equation is qualitatively different to the previous one for the extinction probability of a Galton-Watson tree. There, we had a quadratic, with one root fixed at 1. As we varied p, the other root moved continuously from greater than one to less than one, so it passed through 1, giving continuity at the critical probability. Here, we have a cubic, with one root fixed at 0, and the remaining roots are those of the quadratic factor 2p^3q^2-3p^2q+1. For small p this quadratic has complex roots, meaning its local minimum lies above the x-axis. As p reaches the critical value, the local minimum touches the x-axis, and suddenly we have a repeated root, not at zero.

I would like to have a neat probabilistic heuristic for this result, without having to make reference to generating functions. At the moment, the best I can come up with is to say that the original problem is simple, in the sense that the critical probability is as small as it could be while still making sense in expectation. To be concrete, when the mean of the offspring generation is less than 1, the expected size of the nth generation tends to zero, so there certainly could not be positive probability of having an infinite component.

Whereas in the binary tree example, we only require p=2/3 to have, in expectation, the right number of open edges (3p=2 there) to theoretically allow an infinite binary tree. If we think of percolation as a dynamic process by coupling in p, essentially as we move from p=2/3 to p=8/9 we need to add enough edges near the origin to be able to take advantage of the high density of edges available far from the origin. The probability of this working given you start from n vertices grows much faster (as n grows) than in the original problem, so you might expect a faster transition.

This is so content-free I’m reluctant even to call it a heuristic. I would be very interested to hear of any more convincing argument for this phenomenon!

REFERENCES

Dekking, Pakes – On family trees and subtrees of simple branching processes (link)
