DGFF 3 – Gibbs-Markov property for entropic repulsion

In the previous post, we saw that it isn’t much extra effort to define the DGFF with non-zero boundary conditions, by adding onto the zero-BC DGFF the unique (deterministic) harmonic function which extends the boundary values into the domain. We also saw how a Gibbs-Markov property applies, whereby the values taken by the field on some sub-region A\subset D depend on the values taken on D\backslash A only through values taken on \partial A.

In this post, we look at how this property and some other methods are applied by Deuschel [1] to study the probability that the DGFF on a large box in \mathbb{Z}^d is positive ‘everywhere’. This event can be interpreted in a couple of ways, both of which are referred to there as entropic repulsion. Everything which follows is taken, directly or in paraphrase, from [1]. I have tried to phrase this in a way which avoids repeating most of the calculations, instead focusing on the methods and the motivation for using them.

Fix dimension d\ge 2 throughout. We let P^0_N be the law of the DGFF on V_N:=[-N,N]^d\subset \mathbb{Z}^d with zero boundary conditions. Then for any subset A\subset \mathbb{Z}^d, in an intuitively-clear abuse of notation, we let

\Omega^+(A):= \{ h_x\ge 0, x\in A\},

be the event that some random field h takes only non-negative values on A. The goal is to determine P^0_N ( \Omega^+(V_N)). But for the purposes of this post, we will focus on showing bounds on the probability that the field is non-negative on a thin annulus near the boundary of V_N, since this is a self-contained step in the argument which contains a blog-friendly number of ideas.
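
Before continuing, it may help to see the object concretely. The following is a minimal sketch (mine, not from [1]) of sampling the zero-boundary DGFF on a small two-dimensional box, under the assumed normalization in which the conditional variance given the neighbours is one (consistent with what is used later in this post); the precision matrix of the interior heights is then I-\frac{1}{2d}A, whose inverse is the Green’s function of SRW killed on exiting V_N.

```python
import numpy as np

def sample_dgff_2d(N, rng=None):
    """Sample a zero-boundary DGFF on the interior of V_N = [-N, N]^2.

    Assumes the normalization in which, conditionally on its neighbours,
    each height is normal with variance one; the precision matrix of the
    interior heights is then I - (1/2d) * adjacency, and the covariance is
    the Green's function of SRW killed on exiting V_N.
    """
    rng = np.random.default_rng() if rng is None else rng
    side = 2 * N - 1                      # interior coordinates -N+1, ..., N-1
    pts = [(i, j) for i in range(side) for j in range(side)]
    idx = {p: k for k, p in enumerate(pts)}
    Q = np.eye(len(pts))
    for (i, j), k in idx.items():
        for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nb = (i + di, j + dj)
            if nb in idx:                 # neighbours outside are boundary, pinned at 0
                Q[k, idx[nb]] = -1.0 / 4  # -1/(2d) with d = 2
    cov = np.linalg.inv(Q)                # Green's function of the killed walk
    h = rng.multivariate_normal(np.zeros(len(pts)), cov)
    return h.reshape(side, side)

# Even on a small box, the all-non-negative event is already rare:
rng = np.random.default_rng(0)
samples = [sample_dgff_2d(N=4, rng=rng) for _ in range(200)]
print("fraction non-negative everywhere:", np.mean([(h >= 0).all() for h in samples]))
```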

We set (L_N) to be a sequence of integers greater than one (to avoid dividing by zero in the statement), for which \frac{L_N}{N}\rightarrow 0. We now define for each N, the annulus

W_N = \{v\in V_N: L_N\le d_{\mathbb{Z}^d}(v, V_N^c)\le 2L_N \}

with width L_N, set a distance L_N inside the box V_N. We aim to control P^0_N (\Omega^+(W_N)). This forms the middle step of Deuschel’s Propositions 2.5 and 2.9, which discuss P^0_N(\Omega^+(V_{N-L_N})). Clearly there is the upper bound

P^0_N(\Omega^+(V_{N-L_N})) \le P^0_N(\Omega^+(W_N)) (1)

and a lower bound on P^0_N(\Omega^+(V_{N-L_N})) is obtained in the second proposition by considering the box as a union of annuli, then combining the bounds on each annulus using the FKG inequality.

Upper bound via odds and evens

Modulo the reduction step (1), this is Proposition 2.5:

\limsup_{N\rightarrow \infty} \frac{L_N}{N^{d-1} \log L_N} \log P^0_N(\Omega^+(W_N)) < 0. (2)

This gives an upper bound on the probability of the form L_N^{-CN^{d-1}/L_N}, though as with all LDP estimates, the form given at (2) is more instructive.

Morally, the reason why it is unlikely that the field should be non-negative everywhere within the annulus is that the distribution at each location is centred, and even though any pair of values is positively correlated, this correlation is not strong enough to prevent the event from being unlikely. But this is hard to corral into an upper bound argument directly. In many circumstances, we want to prove upper bounds for complicated multivariate systems by projecting to get an unlikely event for a one-dimensional random variable, or for a family of independent variables, even if we have to throw away some probability. We have plenty of tools for tail probabilities in both of these settings. Since the DGFF is Gaussian, a one-dimensional RV that is a linear combination (eg the sum) of all the field heights is a natural candidate. But in this case we would throw away too much probability, since the only way to dominate \Omega^+(W_N) using the sum is to demand that

\sum_{x\in W_N}h^N_x\ge 0, (3)

which obviously has probability 1/2 by symmetry.

So Deuschel splits W_N into W_N^o,W_N^e, where the former includes all vertices with odd total parity in W_N and the latter includes all the vertices with even total parity in the interior of W_N. (Recall that \mathbb{Z}^d is bipartite in exactly this fashion.) The idea is to condition on h^N\big|_{W^o_N}. But each even vertex is entirely surrounded by odd vertices. So by the Gibbs-Markov property, conditional on the odd vertices, the values of the field at the even vertices are independent. Indeed, if for each v\in W_N^e we define \bar h_v to be the average of its neighbours (which is measurable w.r.t. the sigma-algebra generated by the odd vertices), then

\{h_v: v\in W_N^e \,\big|\, \sigma(h_w: w\in W_N^o)\},

is a collection of independent normals, where h_v has mean \bar h_v and variance one.
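
This conditional structure is just the usual formula for conditioning a Gaussian vector, and it is easy to check numerically. In the sketch below (again mine, under the same normalization assumption as above), the precision matrix restricted to the even block is the identity because even interior vertices are never adjacent to each other, so conditionally on the odd vertices the even heights are independent with variance one and mean -Q_{eo}h_o, which is exactly the neighbour average (with boundary neighbours counted as zero).

```python
import numpy as np

def check_odd_even_structure(N=5):
    """For the zero-boundary DGFF precision matrix Q = I - (1/4) * adjacency
    on the interior of [-N, N]^2, verify that the even-even block of Q is the
    identity, so that conditionally on the odd sites the even heights are
    independent N(neighbour average, 1)."""
    side = 2 * N - 1
    pts = [(i, j) for i in range(side) for j in range(side)]
    idx = {p: k for k, p in enumerate(pts)}
    Q = np.eye(len(pts))
    for (i, j), k in idx.items():
        for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nb = (i + di, j + dj)
            if nb in idx:
                Q[k, idx[nb]] = -0.25
    even = [k for (i, j), k in idx.items() if (i + j) % 2 == 0]
    odd = [k for (i, j), k in idx.items() if (i + j) % 2 == 1]
    Q_ee = Q[np.ix_(even, even)]
    Q_eo = Q[np.ix_(even, odd)]
    assert np.allclose(Q_ee, np.eye(len(even)))       # conditional covariance = identity
    h_odd = np.random.default_rng(0).normal(size=len(odd))
    cond_mean = -Q_eo @ h_odd                         # = (1/4) * sum of odd neighbours
    return cond_mean

check_odd_even_structure()
```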

To start finding bounds, we fix some threshold m=m_N\gg 1 to be determined later, and consider the odd-measurable event A_N that at most half of the even vertices v have \bar h_v\ge m. So A_N^c\cap \Omega^+(W_N) says that all the odd vertices are non-negative and many are quite large. This certainly feels like a low-probability event, and unlike at (3), we might be able to obtain good tail bounds by projection into one dimension.

In the other case, on A_N there are a large number of even vertices with conditional mean at most m, and so, using the conditional independence, we can bound the probability that all of these are non-negative by the product

(1-\varphi(m))^{\frac12 |W_N^e|}. (4)

Note that for this upper bound, we can completely ignore the other even vertices (those with conditional mean greater than m).
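
To spell out where (4) comes from, reading \varphi(m) as the upper tail \mathbb{P}(Z\ge m) of a standard normal Z (the interpretation under which the bound is correct), and writing \Phi for the standard normal distribution function: for an even vertex with \bar h_v\le m,

P^0_N\left( h_v\ge 0 \,\big|\, \sigma(h_w: w\in W_N^o)\right) = \Phi(\bar h_v) \le \Phi(m) = 1-\varphi(m),

and on A_N there are at least \frac12 |W_N^e| such vertices, so multiplying these conditionally independent factors together gives (4).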

So we’ll go back to A_N^c \cap \Omega^+(W_N). For computations, the easiest one-dimensional variable to work with is probably the mean of the \bar h_vs across v\in W_N^e, since on A_N^c\cap \Omega^+(W_N) this is at least \frac{m}{2}. Rather than focus on the calculations themselves involving

\bar S^e_N:= \frac{1}{|W_N^e|} \sum\limits_{v\in W_N^e} \bar h_v,

let us remark that it is certainly normal and centered, and so there are many methods to bound its tail, for example

P^0_N \left( \bar S^e_N \ge \frac{m}{2} \right) \le \exp\left( \frac{-m^2}{8\mathrm{Var}(\bar S^e_N)} \right), (5)

as used by Deuschel; this just follows from an easy comparison argument within the integral of the pdf. We can tackle the variance using the Green’s function for the random walk (recall the first post in this set). But before that, it’s worth making an observation which is general and useful, namely that \bar S^e_N is the expectation of

S^e_N:= \frac{1}{|W_N^e|} \sum\limits_{v\in W_N^e} h_v

conditional on the odd vertices. Directly from the law of total variance, the variance of any random variable X is always at least the variance of \mathbb{E}[X|Y].
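
Concretely, with X=S^e_N and Y=(h_w: w\in W_N^o),

\mathrm{Var}(X) = \mathbb{E}\left[ \mathrm{Var}(X\,|\,Y)\right] + \mathrm{Var}\left( \mathbb{E}[X\,|\,Y]\right) \ge \mathrm{Var}\left( \mathbb{E}[X\,|\,Y]\right) = \mathrm{Var}(\bar S^e_N).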

So in this case, we can replace \mathrm{Var}(\bar S^e_N) in (5) with \mathrm{Var}(S^e_N), which can be controlled via the Green’s function calculation.

Finally, we choose m_N so that the probability at (4) matches the probability at (5) in scale, and this choice leads directly to (2).

In summary, we decomposed the event that everything is non-negative into two parts: either there are lots of unlikely local events in the field between an even vertex and its odd neighbours, or the field has to be atypically large at the odd sites. Tuning the parameter m_N allows us to control both of these probabilities in the sense required.

Lower bound via a sparse sub-lattice

To get a lower bound on the probability that the field is non-negative on the annulus, we need to exploit the positive correlations in the field. We use a similar idea to the upper bound. If we know the field is positive and fairly large in many places, then it is increasingly likely that it is positive everywhere. The question is how many places to choose?

We are going to consider a sub-lattice that lives in a slightly larger region than W_N itself, and condition the field to be larger than m=m_N everywhere on this lattice. We want the lattice to be sparse enough that even if we ignore positive correlations, the chance of this happening is not too small. But we also want the lattice to be dense enough that, conditional on this event, the chance that the field is actually non-negative everywhere in W_N is not too small either.

To achieve this, Deuschel chooses a sub-lattice of spacing \lfloor\epsilon L_N^{2/d}\rfloor, and sets \Lambda_N(\epsilon) to be the intersection of this with the annulus with radii [N-\frac{5}{2}L_N, N-\frac{1}{2}L_N], to ensure it lives in a slightly larger region than W_N itself. The scaling of this sub-lattice density is such that when a random walk is started at any v\in W_N, the probability that the RW hits \Lambda_N(\epsilon) before \partial V_N is asymptotically in (0,1). (Ie, not asymptotically zero or one – this requires some definitely non-trivial calculations.) In particular, for an appropriate (ie small enough) choice of \epsilon, this probability is at least 1/2 for all v\in W_N. This means that after conditioning on the event B_N:=\{h_v\ge m : v\in \Lambda_N(\epsilon)\}, the conditional expectation of h_w is at least \frac{m}{2} for all w\in W_N\backslash \Lambda_N(\epsilon). Again this uses the Gibbs-Markov property and the Gaussian nature of the field. In particular, this conditioning means we are left with the DGFF on V_N\backslash \Lambda_N(\epsilon), ie with boundary \partial V_N\cup \Lambda_N(\epsilon), and then by linearity, the mean at non-boundary points is given by the harmonic extension, which is linear (and so increasing) in the boundary values.
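
As a toy illustration of this harmonic-extension step (my own sketch, with made-up sizes rather than Deuschel’s actual \Lambda_N(\epsilon)): by the Gibbs-Markov property, the conditional mean of the field given its values on a sub-lattice solves the discrete Dirichlet problem with boundary data zero on \partial V_N and the conditioned values on the sub-lattice, so it can be computed by relaxation.

```python
import numpy as np

def conditional_mean(N=20, m=3.0, mesh=4, n_iter=5000):
    """Harmonic extension on a (2N+1) x (2N+1) box: boundary data 0 on the outer
    boundary and m on a sparse sub-lattice (a stand-in for Lambda_N(eps)).
    By the Gibbs-Markov property this is the conditional mean of the zero-boundary
    DGFF given that the field equals m on the sub-lattice."""
    side = 2 * N + 1
    mean = np.zeros((side, side))
    fixed = np.zeros((side, side), dtype=bool)
    fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True    # outer boundary = 0
    for i in range(mesh, side - 1, mesh):                             # sparse sub-lattice = m
        for j in range(mesh, side - 1, mesh):
            mean[i, j], fixed[i, j] = m, True
    for _ in range(n_iter):                                           # Jacobi iteration for the
        nb_avg = 0.25 * (np.roll(mean, 1, 0) + np.roll(mean, -1, 0)   # discrete Dirichlet problem
                         + np.roll(mean, 1, 1) + np.roll(mean, -1, 1))
        mean = np.where(fixed, mean, nb_avg)
    return mean

mean = conditional_mean()
print("smallest conditional mean away from the outer boundary:", mean[2:-2, 2:-2].min())
```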

At this point, the route through the calculations is fairly clear. Since we are aiming for a lower bound on the probability of the event \Omega^+(W_N), it’s enough to find a lower bound on P^0_N(\Omega^+(W_N)\cap B_N).

Now, by positive correlation (or, formally, the FKG inequality) we can bound P^0_N(B_N) from below by the product of the probabilities that the field exceeds the threshold at each individual site in \Lambda_N(\epsilon). Since the value of the field at each site is normal with variance at least 1 (by definition), this is straightforward.

Finally, we treat P^0_N(\Omega^+(W_N) \,\big|\, B_N). We’ve established that, conditional on B_N, the mean at each point of W_N\backslash \Lambda_N(\epsilon) is at least \frac{m}{2}, and we can bound the variance above too. Again, this is a conditional variance, and so is at most the corresponding original variance, which is bounded above by \sigma_N^2:=\mathrm{Var}(h^N_0). (The fact that the variance is maximised at the centre is intuitively clear when phrased in terms of occupation times, but the proof is non-obvious, or at least non-obvious to me.)

Since the events \{h_v^N\ge 0\}, for v\in W_N\backslash \Lambda_N(\epsilon), are increasing, and hence positively correlated with each other and with B_N, we can bound the conditional probability, given B_N, that they all hold from below by the product of the conditional probabilities that each holds. But having established that the conditional mean is at least \frac{m_N}{2} for each v, and that the variance is uniformly bounded above (including in N), this gives an easy tail bound of the form we require.
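
Schematically (this is the shape of the bound rather than Deuschel’s exact statement): conditional on B_N, each such h_v has mean at least \frac{m_N}{2} and variance at most \sigma_N^2, so by the same Gaussian tail bound as at (5),

P^0_N\left( h_v < 0 \,\big|\, B_N\right) \le \exp\left( \frac{-m_N^2}{8\sigma_N^2}\right),

and the conditional product bound then gives

P^0_N\left( \Omega^+(W_N) \,\big|\, B_N\right) \ge \left( 1- \exp\left( \frac{-m_N^2}{8\sigma_N^2}\right)\right)^{|W_N\backslash \Lambda_N(\epsilon)|}.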

Again it just remains to choose the sequence of thresholds m_N to maximise the lower bound on the probability that we’ve found in this way. In both cases, it turns out that taking m_N= \sqrt{C\log N} is sensible, and this is linked to the scaling of the maximum of the DGFF, which we will explore in the future.

References

[1] – J-D Deuschel, Entropic Repulsion of the Lattice Free Field, II. The 0-Boundary Case. Available at ProjectEuclid.


Discontinuous Phase Transitions

Yesterday, Demeter Kiss from Cambridge gave a seminar in Oxford about a model for self-destructive percolation on \mathbb{Z}^2 that had implications for the (non-)existence of an infinite-parameter forest fire model on the same lattice. I enjoyed talking to him about this and his recent work on the related model of frozen percolation on \mathbb{Z}^2. Considering these models in the lattice setting presents a whole range of interesting geometric challenges that are not present in the mean-field case that has mainly occupied my research so far.

The afternoon’s discussion included lots of open problems about percolation. Several of these are based around continuity of the phase transition, so I thought I would write a quick post about some simple examples of this, and one example where it does not hold.

A helpful base example is bond percolation on the lattice \mathbb{Z}^2. Here, we specify some probability p in [0,1], and we declare edges of the lattice open with probability p, independently of each other. We then consider the graph induced by the open edges. We say that percolation occurs if the origin is contained in an infinite open component. The terminology arises from the interpretation as fluid being added at the origin and flowing down open edges. We define \theta(p) to be the probability that the origin is in an infinite component when the parameter is p. By translation-invariance, we can get some sort of 0-1 law, to conclude that there is an infinite component somewhere in the system with probability either 0 or 1, depending on whether \theta(p) is positive or zero. Indeed, we can further show that if it is positive, then with probability 1 there is a unique infinite component.
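
For intuition, \theta(p) can be approximated numerically by a finite-volume proxy: the probability that the origin’s open cluster reaches the boundary of a large box. A quick sketch (purely illustrative, and not the source of the plot below):

```python
import random
from collections import deque

def origin_reaches_boundary(p, L, rng):
    """One sample of bond percolation on [-L, L]^2: does the open cluster of the
    origin reach the boundary of the box? (A finite-volume proxy for theta(p).)"""
    open_edge = {}                               # lazily sampled edge states
    def is_open(u, v):
        key = (min(u, v), max(u, v))
        if key not in open_edge:
            open_edge[key] = rng.random() < p
        return open_edge[key]
    seen, queue = {(0, 0)}, deque([(0, 0)])
    while queue:
        x, y = queue.popleft()
        if abs(x) == L or abs(y) == L:
            return True                          # reached the boundary of the box
        for nb in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if nb not in seen and is_open((x, y), nb):
                seen.add(nb)
                queue.append(nb)
    return False

rng = random.Random(0)
for p in [0.4, 0.5, 0.6]:
    est = sum(origin_reaches_boundary(p, L=40, rng=rng) for _ in range(200)) / 200
    print(p, est)    # small below p_c = 1/2, clearly positive above
```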

We define the critical probability p_c:= \inf\{p: \theta(p)>0\}. A question worth asking is then, what is \theta(p_c)? In some examples, we can find p_c, but we cannot prove that \theta(p) is continuous around p_c. In the case of \mathbb{Z}^2 continuity is known, and it is known from work of Kesten that p_c=1/2. See below for a plot of \theta(p) in this setting (obtained from this blog, though possibly originating elsewhere).

[Plot: the percolation probability \theta(p) for bond percolation on \mathbb{Z}^2.]

The aim is to find an example where we do not have such a continuous phase transition. The original work on frozen percolation took place on trees, and one of Kiss’s results confirms that these show qualitatively different phenomena to the same process on the lattice. In some sense, trees lie halfway between a lattice and a mean-field model, since there is often some independence when we look down the tree from a given generation, if it is well-defined to use such language.

Anyway, first we consider percolation on an infinite regular rooted k-ary tree. This means we have a root, which has k children, each of which in turn has k children, and so on. As before we consider bond percolation with parameter p. In this setting, we have a language to describe the resulting open component of the root. The offspring distribution of any vertex in the open component is given by Bin(k,p) independently of everything else, so we can view this component as the realisation of a Galton-Watson tree with this offspring distribution. This distribution has mean kp, and so we can state explicitly when the survival probability is positive: this happens precisely when the mean is greater than 1, ie p>1/k.

For our actual example, we will consider the survival probability, but the technicalities are easier to explain if we look at the extinction probability, now using the language of branching processes. Suppose the offspring distribution has pgf given by

f(x)=p_0+p_1x+p_2x^2+\ldots.

Then the extinction probability q satisfies f(q)=q. I want to pause to consider what happens if this equation has multiple solutions. Indeed, in most interesting cases it will have multiple solutions, since f(1) will always be 1 if it is a non-defective offspring distribution. It is typically cited that the extinction probability q is the smallest solution to this equation in [0,1]. I want to discuss why that is the case.

To approach this, we have to consider what extinction means. It is the increasing limit of the events {we are extinct after n generations}. Let the probabilities of these events be q_n, so q_0=0. Then by a straightforward coupling argument, we must have

0=q_0\le q_1\le q_2 \le\ldots\le q:= \lim q_n \le 1.

But, by the same generating function argument as before, q_{n+1}=f(q_n)\ge q_n. So if we split [0,1] into regions A where f(x)\ge x and B where f(x)<x, all the (q_n)s must occur in the former, and so since it is closed, their limit must be in A also. Note that if f(x) intersects x lots of times, then region A is not necessarily connected. In the diagram below, in moving from q_n to q_{n+1} we might jump across part of B.

[Diagram: iterating q_{n+1}=f(q_n) against the line y=x.]

This is bad, as we are trying to prove that q is the right-hand boundary of the connected component of A containing 0. But in fact such a jump cannot happen, as f is monotonic. For if z is a root of f(x)=x lying between the hypothesised q_n<q_{n+1}, then q_{n+1}=f(q_n)\le f(z)=z<q_{n+1}, a contradiction.
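
This monotone iteration is also the obvious way to compute q numerically; here is a quick sketch (my own illustration) for the Bin(k,p) offspring distribution from the k-ary tree example, iterating q_{n+1}=f(q_n) from q_0=0.

```python
def extinction_probability(k, p, n_iter=2000):
    """Smallest fixed point in [0,1] of the pgf f(x) = (1 - p + p*x)^k,
    found by iterating q_{n+1} = f(q_n) from q_0 = 0 as in the text."""
    q = 0.0
    for _ in range(n_iter):
        q = (1 - p + p * q) ** k
    return q

for p in [0.2, 1/3, 0.4, 0.6]:
    # extinction is certain for p <= 1/k = 1/3 (convergence is slow at criticality)
    print(p, extinction_probability(k=3, p=p))
```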

Ok, so now we are ready to consider our counterexample to continuity over the percolation threshold. See references for a link to the original source of this example. We have to choose a slightly more complicated event than mere survival or extinction. We consider bond percolation as before on the infinite ternary tree, where every vertex has precisely 3 offspring. Our percolation event is now that the root is the root of an infinite binary tree. That is, the root has at least two children, each of which have at least two children, each of which, and so on.

If we set this probability equal to q, and the probability of an edge being open equal to p, then we have the recurrence:

q=3p^2(1-p)q^2+p^3[3q^2(1-q)+q^3].

The first term corresponds to the root having two open edges to offspring, and the second to the root having all three open edges to offspring. After manipulating, we end up with

q\left[2p^3q^2-3p^2q+1\right]=0.

We are therefore interested in roots of the quadratic lying between 0 and 1. The discriminant can be evaluated as

\Delta=p^3(9p-8),

and so there are no real roots where p<8/9. But when p=8/9, we have a repeated root at q=27/32, which is obviously not zero!
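
A quick numerical check of this jump (my own sketch, not from the reference): the probability of a depth-n open binary subtree at the root satisfies q_{n+1}=g(q_n) with q_0=1, where g is the right-hand side of the recurrence above, and these probabilities decrease to q. Iterating shows q(p) staying at 0 below p=8/9 and jumping to at least 27/32 above it.

```python
def q_binary_subtree(p, n_iter=5000):
    """P(root of the ternary tree roots an infinite open binary subtree):
    decreasing limit of the depth-n probabilities
    q_{n+1} = 3p^2(1-p) q_n^2 + p^3 (3 q_n^2 (1 - q_n) + q_n^3),  q_0 = 1."""
    q = 1.0
    for _ in range(n_iter):
        q = 3 * p**2 * (1 - p) * q**2 + p**3 * (3 * q**2 * (1 - q) + q**3)
    return q

for p in [0.85, 0.88, 0.89, 0.95]:
    print(p, round(q_binary_subtree(p), 4))
# q stays at 0 up to p = 8/9 ~ 0.889 and is at least 27/32 = 0.84375 beyond it.
```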

This equation is qualitatively different to the previous one for the extinction probability of a Galton-Watson tree. There, we had a quadratic, with one root at 1. As we varied p, the other root moved continuously from greater than one to less than one, so it passed through 1, giving continuity at the critical probability. Here, we have a cubic, with the trivial root now at q=0. But the other roots are complex for small p, meaning that the local minimum of the cubic lies above the x-axis. As p reaches the critical value, the local minimum touches and then passes below the x-axis, and suddenly we have a repeated root, not at zero.

I would like to have a neat probabilistic heuristic for this result, without having to make reference to generating functions. At the moment, the best I can come up with is to say that the original problem is simple, in the sense that the critical probability is as small as it could be while still making sense in expectation. To be concrete, when the mean of the offspring distribution is less than 1, the expected size of the nth generation tends to zero, so there certainly could not be positive probability of having an infinite component.

Whereas in the binary tree example, we only require p=2/3 to have, in expectation, the right number of open edges to theoretically allow an infinite binary tree. If we think of percolation as a dynamic process by coupling in p, essentially as we move from p=2/3 to p=8/9 we need to add enough edges near the origin to be able to take advantage of the high density of edges available far from the origin. The probability of this working given you start from n vertices grows much faster (as n grows) than in the original problem, so you might expect a faster transition.

This is so content-free I’m reluctant even to call it a heuristic. I would be very interested to hear of any more convincing argument for this phenomenon!

REFERENCES

Dekking, Pakes – On family trees and subtrees of simple branching processes (link)


Random Interlacements

In this post, I want to talk about another recently-introduced model that’s generating a lot of interest in probability theory, Sznitman’s model of random interlacements. We also want to see, at least heuristically, how this relates to more familiar models.

We fix our attention on a lattice, which we assume to be \mathbb Z ^d. We are interested in the union of an infinite collection of simple random walks on the lattice. The most sensible thing to consider is not a collection of random walks from a random set of starting points, but rather a family of trajectories, that is, doubly-infinite random walks defined on times (-\infty,\infty). We will want this family to have some obvious properties, such as translation invariance, in order to make analysis possible and ideally obtain some 0-1 laws. The natural thing to do is then to choose the trajectories through a Poisson Point Process. The tricky part will be finding an intensity measure that has all the properties we want, and gives trajectories that genuinely do look like SRWs, and, most importantly, have a union that is neither too sparse nor too dense. For example, it wouldn’t be very interesting if with high probability every point appeared in the union…

For reasons we will mention shortly, we are interested in the complement of the union of the trajectories. We call this the vacant set. We will find an intensity which we can freely scale by some parameter u\in\mathbb{R}^+, which will give us a threshold for the complement to contain an infinite component. This is in the same sense as the phase transition for Bernoulli percolation. That is, there is a critical value u^* say, such that for u<u^* the vacant set contains an infinite component (or percolates) almost surely, and almost surely it does not when u>u^*. A later result of Teixeira shows that, as in percolation, this infinite component is unique.

Let us first recall why it is not interesting to consider this process for d=1 or 2. On \mathbb{Z}, a single SRW almost surely hits every integer point, since it visits arbitrarily large and arbitrarily small integers. For d=2, the SRW is recurrent, and so consists of a countably infinite sequence of excursions from (0,0). Note that the probability that an excursion from 0 hits some point (x,y) is non-zero, as it is at least 2^{-2(|x|+|y|)} for example. Therefore, almost surely the SRW hits (x,y), and so almost surely it hits every point.

Therefore it is only for d\ge 3 that we start seeing interesting effects. It is worth mentioning at this point some of the problems that motivated considering this model. First is the disconnection time of a discrete cylinder by a simple random walk. For example, Sznitman considers the random walk on \mathbb{Z}\times (\mathbb{Z}/N\mathbb{Z})^d. Obviously, it is more interesting to consider how long it takes a (1-dimensional in the natural sense) path to disconnect a d\ge 3 dimensional set than a 2-dimensional one, as the latter is given just by the first time the path self-intersects.

More generally, we might be interested in random walks up to some time an order of magnitude smaller than the cover time. Recall the cover time is the time to hit each point of the set. For example, for the random walk on the d-dimensional torus (\mathbb{Z}/N\mathbb{Z})^d the cover time (as discussed in Markov Chains and Mixing Times posts) is of order N^d \log N, but the log N represents in some sense only the ‘final few’ vertices. So we should ask what the set of unhit vertices looks like at time N^d. And it turns out that for large N, the structure of this vacant set is related to the vacant set in the random interlacement model, in a local sense.

Anyway, the main question to ask is: what should the intensity measure be?

We patch it together locally. Start with the observation that transience of the random walk means almost surely a trajectory spends only finitely many steps in a fixed finite set K. So we index all the trajectories which hit K by the first time they hit K. Given that a trajectory hits K, it is clear what the conditional distribution of this hitting point should be. Recall that SRW on \mathbb{Z}^d is reversible, so we consider the SRW backwards from this hitting time. Then the probability that the hitting point is x (on the boundary of K) is proportional to the probability that a SRW started from x goes to infinity without hitting K again. So once we’ve settled on the distribution of the hitting point x, it is clear how to construct all the trajectories through K. We pick x on the boundary of K according to this distribution, and take the union of an SRW starting from x conditioned not to hit K again, and an SRW starting from x with no conditioning. These correspond to the trajectory before and after the hitting time, respectively.

In fact, it turns out that this is enough. Suppose we demand that the intensity measure of the set of trajectories whose hitting point is x is equal to the probability that a SRW started from x goes to infinity without hitting K again (rather than merely proportional to it). Sznitman proves that there is a unique measure on the set of trajectories that restricts to this measure for every choice of K. Furthermore, the Poisson Point Process with this globally-defined intensity, unsurprisingly, restricts to a PPP with the intensity specific to K.

We have not so far said anything about trajectories which miss this set K. Note that under any sensible intensity with the translation-invariance property, the intensity measure of the trajectories which hit K must be positive, since we can cover \mathbb{Z}^d with countably many copies of K. So the number of trajectories hitting K is a Poisson random variable.

Recall how we defined the probability that the hitting point of K was some point x on the boundary. The sum of these probabilities over x is called the capacity of K. It follows that, up to the scaling by u, this is the parameter of the Poisson random variable. Ie, the probability that no trajectory passes through K is:

\exp(-u\mathrm{cap}(K)),

recalling that u is the free parameter in the intensity. This is the most convenient framework through which to start analysing the probability that there is an infinite connected set which is hit by no trajectory.
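
To make this concrete, here is a rough Monte Carlo sketch (mine): for each x in K, estimate the probability that a SRW from x never returns to K, using exit from a large ball as a crude stand-in for ‘escapes to infinity’; summing these gives an approximation to \mathrm{cap}(K), and then \exp(-u\,\mathrm{cap}(K)) approximates the probability that K is entirely vacant.

```python
import math
import random

def escape_probability(x, K, R=20, n_samples=500, rng=None):
    """Estimate P_x(SRW in Z^3 leaves the ball of radius R before returning to K).
    With R finite this slightly overestimates the true escape-to-infinity probability."""
    rng = rng if rng is not None else random.Random(0)
    d = len(x)
    escapes = 0
    for _ in range(n_samples):
        pos = list(x)
        while True:
            i = rng.randrange(d)
            pos[i] += rng.choice((-1, 1))
            if tuple(pos) in K:
                break                                  # returned to K
            if sum(c * c for c in pos) > R * R:
                escapes += 1                           # treat as escaped to infinity
                break
    return escapes / n_samples

K = {(i, j, k) for i in range(2) for j in range(2) for k in range(2)}   # a 2x2x2 cube in Z^3
cap = sum(escape_probability(x, K) for x in K)
u = 1.0
print("cap(K) approx", cap, "   P(K vacant) approx", math.exp(-u * cap))
```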

We conclude by summarising Sznitman’s Remark 1.2, explaining why it is preferable to work with the space of trajectories rather than the space of paths. Note that if we are working with paths, and we want translation invariance, then this requires translation invariance of the distribution of starting points as well, so it is in fact a stronger condition. Note then that either the intensity of starting at 0 is zero, in which case there are no trajectories at all, or it is positive, in which case the set of starting points looks like Bernoulli site percolation.

However, the results about capacity would still hold if there were a measure that restricted satisfactorily. And so the capacity of K would still be the measure of paths hitting K, which would be at least the measure of paths started in K. By translation invariance, this grows linearly with |K|. But the capacity grows at most as fast as the size of the set of boundary points of K, which will be an order of magnitude smaller when K is, for example, a large ball.

REFERENCES

This was mainly based on

Sznitman – Vacant Set of Random Interlacements and Percolation (0704.2560)

Also

Sznitman – Random Walks on Discrete Cylinders and Random Interlacements (0805.4516)

Teixeira – On the Uniqueness of the Infinite Cluster of the Vacant Set of Random Interlacements (0805.4106)

and some useful slides by the same author (teixeira.pdf)