On the flight to Romania a couple of weeks ago, I read this very nice paper by Duminil-Copin and Tassion in which, as a corollary to a more general result in a companion paper, they provide a short proof of a result about percolation on $\mathbb{Z}^d$. Mitch talked about this paper at our final Junior Probability Seminar of this term yesterday, so it seemed appropriate to write up something about this nice argument. I must confess I know relatively little about these problems, and in particular know nothing about how the original authors, Aizenman and Barsky (1987) and Menshikov (1986), approached this problem, but experts have said that this new proof is substantially easier to digest.
Rather than reference the first paper repeatedly, I remark now that everything which follows comes from there.
We consider conventional bond percolation on the edges of $\mathbb{Z}^d$, for $d\ge 2$, and are interested in whether the origin percolates with positive probability, that is, whether zero is contained in an infinite open component. As usual we define
$$p_c := \inf\{p\in[0,1] : \mathbb{P}_p(0\leftrightarrow\infty)>0\}$$
to be the critical probability above which percolation happens with positive probability. Defining $\theta(p):=\mathbb{P}_p(0\leftrightarrow\infty)$, we do not know whether $\theta(p_c)=0$ for some values of d, notably d=3.
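None of what follows needs a computer, but just to fix ideas, here is a minimal Monte Carlo sketch (my own illustration, not from the paper) for $d=2$, where famously $p_c=\tfrac12$. It estimates the probability that the origin is joined by open edges to the boundary of the box $[-n,n]^2$, which is exactly the finite-volume quantity the argument below is phrased in terms of; the function names are invented for this sketch.

```python
import random
from collections import deque

def origin_reaches_boundary(n, p, rng=random):
    """One sample of bond percolation on the box [-n, n]^2: each nearest-
    neighbour edge is open independently with probability p (sampled lazily),
    and we BFS through open edges to see whether the origin reaches the
    boundary of the box."""
    edge_state = {}  # edge -> open?, keyed by the sorted pair of endpoints

    def is_open(u, v):
        key = (u, v) if u < v else (v, u)
        if key not in edge_state:
            edge_state[key] = rng.random() < p
        return edge_state[key]

    seen = {(0, 0)}
    queue = deque([(0, 0)])
    while queue:
        x, y = queue.popleft()
        if max(abs(x), abs(y)) == n:   # hit the boundary of [-n, n]^2
            return True
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            w = (x + dx, y + dy)
            if w not in seen and is_open((x, y), w):
                seen.add(w)
                queue.append(w)
    return False

def estimate_escape_probability(n, p, samples=1000):
    """Monte Carlo estimate of P_p(0 <-> boundary of the box), for d = 2."""
    return sum(origin_reaches_boundary(n, p) for _ in range(samples)) / samples

if __name__ == "__main__":
    for p in (0.3, 0.45, 0.5, 0.55, 0.7):   # recall p_c = 1/2 when d = 2
        print(p, estimate_escape_probability(20, p))
```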
If the origin is connected to infinity, it is by definition connected to the boundary $\partial\Lambda_n$ of every box $\Lambda_n:=[-n,n]^d$. The number of distinct paths from the origin to $\partial\Lambda_n$ is bounded by the number of self-avoiding walks on the lattice of length n starting from 0, which grows at most as fast as $2d(2d-1)^{n-1}$. In particular, we know that $p_c\ge\frac{1}{2d-1}>0$, but also, for any $p<\frac{1}{2d-1}$, the probability $\mathbb{P}_p(0\leftrightarrow\partial\Lambda_n)$ decays exponentially in n. We would expect this in fact to hold for all $p<p_c$, and this is something that the authors prove, called Item 1. They also show that the percolation probability grows at least linearly beyond $p_c$, specifically (called Item 2)
$$\theta(p)\ \ge\ \frac{p-p_c}{p(1-p_c)},\qquad p>p_c.$$
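As an aside, the union bound behind that exponential decay claim for small p is simply
$$\mathbb{P}_p\big(0\leftrightarrow\partial\Lambda_n\big)\ \le\ \sum_{\substack{\gamma\ \text{SAW of length } n\\ \text{started from } 0}}\mathbb{P}_p\big(\gamma\text{ open}\big)\ \le\ 2d(2d-1)^{n-1}\,p^n,$$
since from any open path from 0 to $\partial\Lambda_n$ we can extract an open self-avoiding path from 0 to $\partial\Lambda_n$, and its first n steps form an open self-avoiding walk of length n from 0. This decays geometrically as soon as $p(2d-1)<1$; the content of Item 1 is that the decay in fact persists all the way up to $p_c$.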
The proof here proceeds by considering the following function of finite subsets S which contain the origin:
$$\varphi_p(S)\ :=\ p\sum_{(x,y)\in\Delta S}\mathbb{P}_p\big(0\overset{S}{\longleftrightarrow}x\big),$$
where $\Delta S$ is the set of edges $(x,y)$ with $x\in S$ and $y\notin S$, and $0\overset{S}{\longleftrightarrow}x$ means that 0 is joined to x by an open path using only vertices of S. In words, this gives the expected number of open edges across the boundary of S whose inner endpoint is connected to the origin by an open path within S. So this gives a measure of how likely we are to escape S, and in particular, by Markov's inequality, an upper bound on the probability that an open path exists from 0 to outside S. The authors then define the alternative critical probability
$$\tilde p_c\ :=\ \sup\big\{p\in[0,1]\,:\,\varphi_p(S)<1\text{ for some finite } S\ni 0\big\}.$$
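As a toy example (mine, not from the paper): take $S=\{0\}$. Then $\Delta S$ consists of the $2d$ edges incident to the origin, and $0\overset{S}{\longleftrightarrow}0$ holds trivially, so
$$\varphi_p(\{0\})\ =\ p\sum_{(0,y)\in\Delta\{0\}}\mathbb{P}_p\big(0\overset{\{0\}}{\longleftrightarrow}0\big)\ =\ 2dp,$$
which is less than 1 exactly when $p<\frac{1}{2d}$, so already $\tilde p_c\ge\frac{1}{2d}$.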
They will show that $\tilde p_c$ satisfies the statements of both Item 1 and Item 2 (that is, with $\tilde p_c$ in place of $p_c$). Item 2 for $\tilde p_c$ implies $p_c\le\tilde p_c$, and Item 1 for $\tilde p_c$ implies $p_c\ge\tilde p_c$, so this is exactly what we need.
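In symbols, this is just
$$p<\tilde p_c\ \overset{\text{Item 1}}{\Longrightarrow}\ \mathbb{P}_p\big(0\leftrightarrow\partial\Lambda_n\big)\to 0\ \Longrightarrow\ \theta(p)=0\ \Longrightarrow\ p_c\ge\tilde p_c,$$
$$p>\tilde p_c\ \overset{\text{Item 2}}{\Longrightarrow}\ \theta(p)\ \ge\ \frac{p-\tilde p_c}{p(1-\tilde p_c)}\ >\ 0\ \Longrightarrow\ p_c\le\tilde p_c.$$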
They show Item 1 first. Fix $p<\tilde p_c$, consider a finite set $S\ni 0$ for which $\varphi_p(S)<1$, and take some box $\Lambda_L$ which strictly contains S (say $S\subseteq\Lambda_{L-1}$). Now we consider the probability of escaping from a box of size kL. The reason why considering this definition of S works really nicely is that it makes it possible to split this event of escaping from $\Lambda_{kL}$ into events involving various disjoint subsets of edges being open, so we can use independence.
We decompose the path from 0 to $\partial\Lambda_{kL}$ based on the first time it leaves S. We are mindful that there might be lots of paths from 0 to this boundary. The way we are bounding means it doesn't matter if we have lots of suitable paths, but they should all spend a maximal number of steps in S, in the sense that whenever the path re-enters S, say at vertex z, there is no open path from 0 to z contained in S. Let the vertex of S from which we leave for the first time be x. Then, for all vertices y later in the path, $0\overset{S}{\nleftrightarrow}y$. So under any suitable path, now take y to be the vertex directly following x, hence $y\notin S$ and $(x,y)\in\Delta S$. If we take $\mathcal{C}$ to be the set of vertices z for which $0\overset{S}{\longleftrightarrow}z$, then y is joined to $\partial\Lambda_{kL}$ by an open path avoiding $\mathcal{C}$, and we can split the expression based on the realisation of $\mathcal{C}$ to obtain:
$$\mathbb{P}_p\big(0\leftrightarrow\partial\Lambda_{kL}\big)\ \le\ \sum_{\substack{C\subseteq S\\ 0\in C}}\ \sum_{\substack{(x,y)\in\Delta S\\ x\in C}}\mathbb{P}_p\Big(\{\mathcal{C}=C\}\cap\{(x,y)\text{ open}\}\cap\{y\leftrightarrow\partial\Lambda_{kL}\text{ off } C\}\Big),$$
where "$y\leftrightarrow\partial\Lambda_{kL}$ off $C$" means the connection uses no vertex of C.
Splitting based on $\mathcal{C}$ gives us independence between all of these sets of edges (so each summand factorises as $\mathbb{P}_p(\mathcal{C}=C)\cdot p\cdot\mathbb{P}_p(y\leftrightarrow\partial\Lambda_{kL}\text{ off } C)$), but then we immediately forget about it. Irrespective of the choice of y (recall $y\in\Lambda_L$, so y is at distance at least $(k-1)L$ from $\partial\Lambda_{kL}$), this final probability is definitely bounded by $\mathbb{P}_p\big(0\leftrightarrow\partial\Lambda_{(k-1)L}\big)$, using translation invariance, while the p and the first term can be summed over C to give $\varphi_p(S)$. They obtain:
$$\mathbb{P}_p\big(0\leftrightarrow\partial\Lambda_{kL}\big)\ \le\ \varphi_p(S)\,\mathbb{P}_p\big(0\leftrightarrow\partial\Lambda_{(k-1)L}\big)\ \le\ \varphi_p(S)^{k-1},$$
where the final relation holds by induction, and, since $\varphi_p(S)<1$, clearly gives exponential decay as required.
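To spell out the summation over C (this is just a swap of the order of summation, but it is where the definition of $\varphi_p(S)$ earns its keep):
$$p\sum_{\substack{C\subseteq S\\ 0\in C}}\ \sum_{\substack{(x,y)\in\Delta S\\ x\in C}}\mathbb{P}_p\big(\mathcal{C}=C\big)\ =\ p\sum_{(x,y)\in\Delta S}\ \sum_{\substack{C\subseteq S\\ 0,x\in C}}\mathbb{P}_p\big(\mathcal{C}=C\big)\ =\ p\sum_{(x,y)\in\Delta S}\mathbb{P}_p\big(0\overset{S}{\longleftrightarrow}x\big)\ =\ \varphi_p(S),$$
since the events $\{\mathcal{C}=C\}$ over sets C containing both 0 and x partition the event $\{0\overset{S}{\longleftrightarrow}x\}$.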
For Item 2 we use Russo's formula. Here we have a slightly simpler example than the most general version, since the event under consideration, $A=\{0\leftrightarrow\partial\Lambda_n\}$, is increasing with respect to adding edges. It is also a function of a finite number of edges. Then we can consider $\mathbb{P}_{p+\delta}(A)-\mathbb{P}_p(A)$ under the coupling which opens each edge independently at the first point of a Poisson process with (locally) rate $\frac{1}{1-t}$. (We take this to be the rate to avoid having to reparameterise exponentially between time and probability: at time t each edge is open with probability exactly t, so here t=p.)
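Just to check that claim: under this coupling an edge is open by time t with probability
$$1-\exp\Big(-\int_0^t\frac{\mathrm{d}s}{1-s}\Big)\ =\ 1-\exp\big(\log(1-t)\big)\ =\ t,$$
so at time t the configuration of open edges does indeed have law $\mathbb{P}_t$.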
Just for ease, we only consider the right-derivative at p. Then, with $\bar{\mathbb{P}}$ as the law of the coupled process:
$$\mathbb{P}_{p+\delta}(A)-\mathbb{P}_p(A)\ =\ \sum_e\bar{\mathbb{P}}\big(\text{only } e\text{ among the relevant edges opens during }(p,p+\delta],\ e\text{ pivotal and } A\text{ fails at time } p\big)\ +\ \bar{\mathbb{P}}\big(\text{two or more relevant edges open during }(p,p+\delta],\ A\text{ occurs at } p+\delta\text{ but not at } p\big).$$
Since the number of edges whose states determine A is finite, this second term is $O(\delta^2)$, so vanishes after dividing by $\delta$ and letting $\delta\downarrow 0$. So
$$\frac{\mathbb{P}_{p+\delta}(A)-\mathbb{P}_p(A)}{\delta}\ =\ \frac{1}{1-p}\sum_e\mathbb{P}_p\big(e\text{ pivotal for } A,\ A\text{ does not occur}\big)\ +\ o(1).$$
Taking the limit $\delta\downarrow 0$ in this example gives
$$\frac{\mathrm{d}}{\mathrm{d}p}\mathbb{P}_p\big(0\leftrightarrow\partial\Lambda_n\big)\ =\ \frac{1}{1-p}\sum_e\mathbb{P}_p\big(e\text{ pivotal for }\{0\leftrightarrow\partial\Lambda_n\},\ 0\not\leftrightarrow\partial\Lambda_n\big).$$
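As an aside (mine, not the paper's), this is the usual statement of Russo's formula in disguise: for an increasing event, pivotality of e together with the failure of A forces e to be closed, and whether e is pivotal does not depend on the state of e itself, so
$$\mathbb{P}_p\big(e\text{ pivotal for } A,\ A\text{ does not occur}\big)\ =\ \mathbb{P}_p\big(e\text{ pivotal},\ e\text{ closed}\big)\ =\ (1-p)\,\mathbb{P}_p\big(e\text{ pivotal for } A\big),$$
and dividing by $1-p$ recovers the familiar $\frac{\mathrm{d}}{\mathrm{d}p}\mathbb{P}_p(A)=\sum_e\mathbb{P}_p(e\text{ pivotal for } A)$.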
The argument then proceeds in a similar way to Item 1, decomposing by conditioning on the set $\mathcal{S}$ of vertices from which it is not possible to get to $\partial\Lambda_n$ along open edges. In particular, this set is an excellent candidate to view as S, since on the event $\{0\not\leftrightarrow\partial\Lambda_n\}$ it contains 0 by definition. Once we have specified $\mathcal{S}=S$, we know which edges might be pivotal, namely those across the boundary of S. Crucially, the event $\{\mathcal{S}=S\}$ only depends on those edges between the boundary of S and $\partial\Lambda_n$ (that is, edges with at least one endpoint outside S), so it is independent of the event $\{0\overset{S}{\longleftrightarrow}x\}$ whenever $x\in S$. So applying this version of Russo gives
$$\frac{\mathrm{d}}{\mathrm{d}p}\mathbb{P}_p\big(0\leftrightarrow\partial\Lambda_n\big)\ =\ \frac{1}{1-p}\sum_{\substack{S\subseteq\Lambda_n\\ 0\in S}}\ \sum_{(x,y)\in\Delta S}\mathbb{P}_p\big(0\overset{S}{\longleftrightarrow}x\big)\,\mathbb{P}_p\big(\mathcal{S}=S\big).$$
It is clear where $\varphi_p(S)$ might turn up within the sum (after removing a factor of p), so for a bound we can take $\inf_{0\in S\subseteq\Lambda_n}\varphi_p(S)$ outside the sum, and arrive at
$$\frac{\mathrm{d}}{\mathrm{d}p}\mathbb{P}_p\big(0\leftrightarrow\partial\Lambda_n\big)\ \ge\ \frac{1}{p(1-p)}\,\inf_{0\in S\subseteq\Lambda_n}\varphi_p(S)\,\Big(1-\mathbb{P}_p\big(0\leftrightarrow\partial\Lambda_n\big)\Big).$$
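Spelling out that step (my working): grouping the inner sum into $\varphi_p(S)$ and using the fact that the events $\{\mathcal{S}=S\}$ over sets S containing 0 partition $\{0\not\leftrightarrow\partial\Lambda_n\}$,
$$\frac{1}{1-p}\sum_{\substack{S\subseteq\Lambda_n\\ 0\in S}}\ \sum_{(x,y)\in\Delta S}\mathbb{P}_p\big(0\overset{S}{\longleftrightarrow}x\big)\,\mathbb{P}_p\big(\mathcal{S}=S\big)\ =\ \frac{1}{p(1-p)}\sum_{\substack{S\subseteq\Lambda_n\\ 0\in S}}\varphi_p(S)\,\mathbb{P}_p\big(\mathcal{S}=S\big)\ \ge\ \frac{\inf_{0\in S\subseteq\Lambda_n}\varphi_p(S)}{p(1-p)}\,\mathbb{P}_p\big(0\not\leftrightarrow\partial\Lambda_n\big),$$
which is exactly the displayed differential inequality.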
It wasn't immediately clear to me that this implied the given form of Item 2 (though it is certainly consistent). I think perhaps I was trying to be too complicated and thinking about Gronwall's lemma, when in fact everything really follows from bounding $\inf_{0\in S\subseteq\Lambda_n}\varphi_p(S)$ below by 1 (since we have assumed $p>\tilde p_c$ here), then integrating the differential inequality
$$\frac{\mathrm{d}}{\mathrm{d}p}\mathbb{P}_p\big(0\leftrightarrow\partial\Lambda_n\big)\ \ge\ \frac{1}{p(1-p)}\Big(1-\mathbb{P}_p\big(0\leftrightarrow\partial\Lambda_n\big)\Big),\qquad p\in(\tilde p_c,1).$$
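Concretely, writing $f(p):=\mathbb{P}_p(0\leftrightarrow\partial\Lambda_n)$ and using $\frac{1}{p(1-p)}=\frac{1}{p}+\frac{1}{1-p}$, the inequality reads $\frac{f'(p)}{1-f(p)}\ge\frac{1}{p(1-p)}$. Integrating from $\tilde p_c$ to p, and throwing away the term $-\log(1-f(\tilde p_c))\ge 0$, gives
$$\log\frac{1}{1-f(p)}\ \ge\ \log\frac{p}{1-p}-\log\frac{\tilde p_c}{1-\tilde p_c}\quad\Longrightarrow\quad 1-f(p)\ \le\ \frac{\tilde p_c(1-p)}{p(1-\tilde p_c)}\quad\Longrightarrow\quad f(p)\ \ge\ \frac{p-\tilde p_c}{p(1-\tilde p_c)},$$
and letting $n\to\infty$ turns this into the statement of Item 2 with $\tilde p_c$ in place of $p_c$.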
I include this not because it’s an interesting part of the argument (I don’t think it is really) but because I struggled to digest it first time round.
What is interesting is how well this choice to consider $\varphi_p(S)$ works out. In both parts of the argument, the sets which work well for splitting the crossing probabilities into disjoint edge events mesh nicely with considering this function after conditioning on sub- or super-criticality with respect to $\tilde p_c$.