I’m at UBC this month for the PIMS probability summer school. One of the long courses is being given by Marek Biskup about the Discrete Gaussian Free Field (notes and outline here) so this seems like a good moment to revive the sequence of posts about the DGFF. Here’s DGFF1, DGFF2, DGFF3 from November.

The first draft of this post was about the maximum of the DGFF in a large box $V_N := [0,N]^2 \cap \mathbb{Z}^2$, and also about the Green’s function $G^{V_N}(x,y)$, which specifies the covariance structure of the DGFF. This first draft also became too long, so I’m splitting it into two somewhat shorter ones. As we’ll see, some understanding and standard estimates of the Green’s function are enough to say quite a bit about the maximum. In this first post, we’ll explore some ‘low-hanging fruit’ concerning the Green’s function, as defined through a simple random walk, which are useful, but rarely explained in the DGFF literature.

**Symmetry of Green’s function**

We start with one of these low-hanging fruit. If $G^{V_N}$ is to be a covariance matrix, it has to be symmetric. In the first post, showing that the definition of the DGFF as a random field with given Hamiltonian is equivalent to its definition as a centred Gaussian field with covariance matrix $G^{V_N}$ certainly can be viewed as a proof of symmetry. However, it would be satisfying if there was a direct argument in the language of the definition of the Green’s function.

To make this self-contained, recall the random walk definition of $G^{V_N}(x,y)$. Let $(S_n)_{n\ge 0}$ be simple random walk on $\mathbb{Z}^2$, and let $\mathbb{P}_x$ (and $\mathbb{E}_x$) denote probability (and expectation) for the random walk started at $x$. As usual, let $\tau_y := \min\{n\ge 0 : S_n = y\}$ and $\tau_A := \min\{n \ge 0 : S_n \in A\}$ denote the hitting time of a vertex y or a set A respectively. Then

$$G^{V_N}(x,y) := \mathbb{E}_x\left[\sum_{n=0}^{\tau_{\partial V_N}-1} \mathbf{1}\{S_n = y\}\right].$$

That is, $G^{V_N}(x,y)$ is the expected number of visits to y by a random walk from x, before it exits $V_N$.
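This visit-count description can be turned into a quick numerical sketch (my own, not from the course notes). For the walk killed on exiting the box, the matrix of expected visit counts is exactly the fundamental matrix $(I-Q)^{-1}$, where Q is the one-step transition matrix restricted to the interior, since $\sum_m Q^m(x,y) = \sum_m \mathbb{P}_x(S_m = y,\, \tau_{\partial} > m)$. The box size below is an arbitrary choice.

```python
import numpy as np

# Interior of the box [-N, N]^2; the walk is killed on reaching the boundary.
N = 5
interior = [(i, j) for i in range(-N + 1, N) for j in range(-N + 1, N)]
idx = {v: k for k, v in enumerate(interior)}
n = len(interior)

# Q[u, v] = P(one SRW step takes u to v) for interior u, v only;
# steps that land on the boundary are simply lost (the walk is killed there).
Q = np.zeros((n, n))
for (i, j), u in idx.items():
    for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        if (i + di, j + dj) in idx:
            Q[u, idx[(i + di, j + dj)]] = 0.25

# G[x, y] = expected visits to y before exiting, started from x:
# G = sum_m Q^m = (I - Q)^{-1}, since Q is strictly substochastic here.
G = np.linalg.inv(np.eye(n) - Q)

print(G[idx[(0, 0)], idx[(0, 0)]])  # expected visits to 0 by a walk from 0
```

Every diagonal entry is at least 1, since the visit at time zero always counts.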

Let’s drop the superscript $V_N$ for now, as everything should hold for a more general finite subset D of the lattice. I don’t think it’s immediately obvious at the level of Markov chains why $G^D(x,y) = G^D(y,x)$. In particular, it’s not the case that

$$\mathbb{P}_x\left(\tau_y < \tau_{\partial D}\right) = \mathbb{P}_y\left(\tau_x < \tau_{\partial D}\right),$$

and it feels that we can’t map between paths $x \to y$ and $y \to x$ in a way that preserves the number of visits to y and x, respectively. However, we can argue that for any m

$$\mathbb{P}_x\left(S_m = y,\, \tau_{\partial D} > m\right) = \mathbb{P}_y\left(S_m = x,\, \tau_{\partial D} > m\right),$$

by looking at the suitable paths of $(S_0, S_1, \ldots, S_m)$. That is, if we have a path $x = z_0, z_1, \ldots, z_m = y$ that stays within D, then the probability of seeing this path starting from x and its reverse direction starting from y are equal. Why? Because

$$\mathbb{P}_x\left(S_i = z_i,\ 0 \le i \le m\right) = \frac{1}{\deg(z_0)} \cdot \frac{1}{\deg(z_1)} \cdots \frac{1}{\deg(z_{m-1})},$$

and

$$\mathbb{P}_y\left(S_i = z_{m-i},\ 0 \le i \le m\right) = \frac{1}{\deg(z_m)} \cdot \frac{1}{\deg(z_{m-1})} \cdots \frac{1}{\deg(z_1)}.$$

Since $D \subseteq \mathbb{Z}^2$, and x,y are in the *interior* of D, we must have $\deg(z_0) = \deg(z_m) = 4$, and so these two expressions are equal. Summing over all such two-way paths, and then over all m, gives the result.

**Fixing one argument**

We now focus on $G^D(\cdot, y)$, where the second argument y is fixed. This is the solution to the Poisson equation

$$\Delta G^D(\cdot, y) = -\delta_y(\cdot), \qquad G^D(x,y) = 0, \quad x \in \partial D,$$

where $\Delta f(x) := \frac14 \sum_{x' \sim x} f(x') - f(x)$ is the discrete Laplacian. To see this, we can use a standard hitting probability argument (as here) with the Markov property. This function is harmonic in $D \setminus \{y\}$, and since we know

$$G^D(y,y) = \frac{1}{\mathbb{P}_y\left(\tau_{\partial D} < \tau_y^+\right)},$$

where $\tau_y^+ := \min\{n \ge 1 : S_n = y\}$ is the first return time to y, this uniquely specifies $G^D(\cdot, y)$. Anyway, since harmonic functions achieve their maxima at the boundary, we have $G^D(x,y) \le G^D(y,y)$ for all $x \in D$. We can also see this from the SRW definition as

$$G^D(x,y) = \mathbb{P}_x\left(\tau_y < \tau_{\partial D}\right) G^D(y,y) \le G^D(y,y).$$
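The Poisson equation can be checked numerically too. The following sketch (my own construction, with an arbitrary box size) builds the Green’s matrix as the fundamental matrix of the killed walk, then verifies that $\frac14\sum_{x' \sim x} G(x',y) - G(x,y) = -\delta_{xy}$ on the interior, with absent neighbours (the boundary) contributing zero.

```python
import numpy as np

# Green's matrix of SRW on [-N, N]^2, killed on hitting the boundary.
N = 4
interior = [(i, j) for i in range(-N + 1, N) for j in range(-N + 1, N)]
idx = {v: k for k, v in enumerate(interior)}
steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
Q = np.zeros((len(interior), len(interior)))
for (i, j), u in idx.items():
    for di, dj in steps:
        if (i + di, j + dj) in idx:
            Q[u, idx[(i + di, j + dj)]] = 0.25
G = np.linalg.inv(np.eye(len(interior)) - Q)

# Discrete Laplacian of G(., y) for a fixed y; boundary values of G are zero,
# so neighbours outside the interior simply drop out of the sum.
y = idx[(1, 2)]
lap = np.zeros(len(interior))
for (i, j), u in idx.items():
    nbrs = sum(G[idx[(i + di, j + dj)], y]
               for di, dj in steps if (i + di, j + dj) in idx)
    lap[u] = nbrs / 4.0 - G[u, y]
```

The vector `lap` comes out as minus the indicator of y, and the column $G(\cdot, y)$ is indeed maximised at y, as the maximum principle predicts.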

**Changing the domain**

Now we want to consider nested domains $D \subseteq E$, and compare $G^D$ and $G^E$ on DxD. The idea is that for SRW started from $x \in D$, we have $\tau_{\partial D} \le \tau_{\partial E}$, since one boundary is contained within the other. From this, we get

$$G^D(x,y) \le G^E(x,y), \qquad x, y \in D,$$

and we will use the particular case y=x.

For example, if $D = V_N = [0,N]^2$, the box with width N, then the box with width 2N centred on x contains the whole of $V_N$, for any $x \in V_N$. So, if we set $\overline{G}^{2N} := G^{[-N,N]^2}(0,0)$, then with reference to the diagram, we have, by translation invariance,

$$G^{V_N}(x,x) \le \overline{G}^{2N}, \qquad \text{for all } x \in V_N.$$

As we’ll see when we study the maximum of the DGFF on $V_N$, uniform control over the pointwise variance $\mathrm{Var}(h_x) = G^{V_N}(x,x)$ will be a useful tool.
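Here is a quick numerical confirmation of this domain monotonicity (again my own sketch; the box sizes are arbitrary), comparing the Green’s matrices of nested square boxes on the smaller domain:

```python
import numpy as np

def green_box(hw):
    """Green's matrix of SRW on [-hw, hw]^2, killed at the boundary."""
    interior = [(i, j) for i in range(-hw + 1, hw) for j in range(-hw + 1, hw)]
    idx = {v: k for k, v in enumerate(interior)}
    Q = np.zeros((len(interior), len(interior)))
    for (i, j), u in idx.items():
        for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            if (i + di, j + dj) in idx:
                Q[u, idx[(i + di, j + dj)]] = 0.25
    return idx, np.linalg.inv(np.eye(len(interior)) - Q)

# D = [-3, 3]^2 nested inside E = [-6, 6]^2: expect G^D <= G^E on D x D.
idx_D, G_D = green_box(3)
idx_E, G_E = green_box(6)
violations = sum(G_D[idx_D[x], idx_D[y]] > G_E[idx_E[x], idx_E[y]] + 1e-12
                 for x in idx_D for y in idx_D)
print(violations)  # 0 expected
```

The larger box gives the walk strictly more room before it is killed, so every entry of the Green’s matrix can only increase.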

**Maximising the Green’s function**

The idea of bounding $G^D(x,x)$ via a larger box centred on x is clever and useful. But a more direct approach would be to find the value of x that maximises $G^D(x,x)$. We would conjecture that when $D$ has a central vertex, then this is the maximiser.

We can prove this directly from the definition of the Green’s function in terms of random walk occupation times. Let’s assume we are working with the box $D = [-N,N]^2$, so that 0 is the central vertex. Again, since

$$G^D(x,x) = \frac{1}{\mathbb{P}_x\left(\tau_{\partial D} < \tau_x^+\right)}, \qquad (*)$$

it would suffice to show that this escape probability is minimised when x=0. This feels right, since 0 is furthest from the boundary. Other points are closer to the boundary in some directions but further in others, so we can’t simply condition on the maximum distance from its start point achieved by an excursion of SRW (we’re vertex-transitive, so these excursions look the same from all starting points), as even allowing for the four possible rotations, for an excursion of diameter slightly larger than N, starting at the centre is maximally bad.

However, intuitively it does feel as if being closer to the boundary makes you more likely to escape earlier. In fact, with a bit more care, we can couple the SRW started from 0 and the SRW started from $r = (r_1, r_2) \ne 0$ such that the latter always exits first. For convenience we’ll assume also that $r_1, r_2$ are both even.

I couldn’t find any reference to this, so I don’t know whether it’s well-known or not. The following argument involves projecting onto each axis, and doing separate couplings for transitions in the x-direction and transitions in the y-direction. We assume WLOG that r is in the upper-right quadrant, as shown. Then, let $(S_m)_{m \ge 0}$ be SRW started from 0, and we will construct $(R_m)_{m\ge 0}$ on the same probability space as $(S_m)$ as follows. For every m, we set the increment $R_{m+1} - R_m$ to be $\pm\left(S_{m+1} - S_m\right)$. It remains to specify the sign, which will be determined by the direction of the S-increment, and a pair of stopping times. The marginal law of $(R_m)$ is therefore again that of an SRW, started from r. Temporarily, we use the unusual notation $S_m = \left(S^{(1)}_m, S^{(2)}_m\right)$ and $R_m = \left(R^{(1)}_m, R^{(2)}_m\right)$ for the coordinates of $S_m$ and $R_m$.

So, if $S_{m+1} - S_m = \pm e_1$, ie S moves left or right, then we set

$$R_{m+1} - R_m = \begin{cases} -\left(S_{m+1} - S_m\right) & m < \sigma_1, \\ S_{m+1} - S_m & m \ge \sigma_1, \end{cases} \qquad (\dagger)$$

where $\sigma_1 := \min\left\{m \ge 0 : S^{(1)}_m = R^{(1)}_m\right\}$. That is, $R^{(1)}$ moves in the opposing direction to $S^{(1)}$ until the first time when they are equal (hence the parity requirement), and then they move together. In other words, until time $\sigma_1$ the lazy random walk $R^{(1)}$ is the mirror image of the lazy random walk $S^{(1)}$ in the line $x = r_1/2$, and thereafter it follows the same trajectory. At time $\sigma_1$ both projections sit at $r_1/2$ (recall the parity condition on r), and so $(\dagger)$ is the mechanism for $m < \sigma_1$, while for $m \ge \sigma_1$ we take $R_{m+1} - R_m = S_{m+1} - S_m$.

WLOG assume that $r_1 > 0$ (if $r_1 = 0$, then $\sigma_1 = 0$ and the x-increments always agree). Then suppose $S^{(1)}_m \in \{-N, +N\}$ and such m is minimal. Then by construction, if $m \ge \sigma_1$, then $R^{(1)}_m \in \{-N,+N\}$ also. If $m < \sigma_1$, then we must have $S^{(1)}_m = -N$, and so since $R^{(1)}$‘s trajectory is a mirror image of $S^{(1)}$‘s, in fact $R^{(1)}_m = r_1 + N > N$, so $R^{(1)}$ hit +N first. In both cases, we see that $R^{(1)}$ hits $\{-N, +N\}$ at the same time as or before $S^{(1)}$.

Similarly, if $S_{m+1} - S_m = \pm e_2$, ie S moves up or down, then we set

$$R_{m+1} - R_m = \begin{cases} -\left(S_{m+1} - S_m\right) & m < \sigma_2, \\ S_{m+1} - S_m & m \ge \sigma_2, \end{cases}$$

with corresponding definition of the stopping time $\sigma_2 := \min\left\{m \ge 0 : S^{(2)}_m = R^{(2)}_m\right\}$.

This completes the coupling, and by considering $\tau^S_{\partial D} = \tau^{S^{(1)}}_{\{-N,+N\}} \wedge \tau^{S^{(2)}}_{\{-N,+N\}}$ (and similarly for R), we have shown that the exit time for the walk started from zero dominates the exit time for the walk started from r. Recall that so far we are in the case where the box has a central vertex and r has even coordinates.
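The coupling can also be simulated directly. The sketch below (my own implementation of the construction above, for a small box and the particular choice $r = (2,2)$) mirrors each coordinate of R against the corresponding coordinate of S until they coalesce, and then checks that R exits the box no later than S on every sampled trajectory:

```python
import random

random.seed(0)
N, r = 5, (2, 2)  # box [-N, N]^2; start points 0 and r, with r_1, r_2 even

def coupled_exit_times():
    """Run one coupled pair (S from 0, R from r); return their exit times."""
    S, R = [0, 0], list(r)
    met = [False, False]   # has coordinate i of R coalesced with S's yet?
    tS = tR = None
    for m in range(1, 10**6):
        axis = random.randrange(2)      # lazy projections: each axis moves
        step = random.choice([-1, 1])   # with probability 1/2 per step
        S[axis] += step
        R[axis] += step if met[axis] else -step  # mirror until coalescence
        if not met[axis] and R[axis] == S[axis]:
            met[axis] = True
        if tS is None and N in (abs(S[0]), abs(S[1])):
            tS = m
        if tR is None and N in (abs(R[0]), abs(R[1])):
            tR = m
        if tS is not None and tR is not None:
            return tS, tR

trials = [coupled_exit_times() for _ in range(500)]
always_dominated = all(tR <= tS for tS, tR in trials)
print(always_dominated)
```

Since the domination is almost sure under the coupling, not just in distribution, it holds on every run, not merely on average.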

This exit time comparison isn’t exactly what we need to compare $G(0,0)$ and $G(r,r)$. It’s worth remarking at this stage that if all we cared about was the Green’s function on the integer line $[-N, N]$, we would have an easier argument, as by the harmonic property of $G^{[-N,N]}(\cdot, y)$,

$$G^{[-N,N]}(x,y) = \frac{(N + x)(N - y)}{N}, \qquad -N \le x \le y \le N,$$

and so $G^{[-N,N]}(x,x) \le N = G^{[-N,N]}(0,0)$ follows by symmetry. To lift from 1D to 2D directly, we need a bit more than this. It’s possible that the coordinate projections of S return to 0 more often than those of R return to $r_1, r_2$, but never at the same time. Fortunately, the coupling we defined slightly earlier does give us a bit more control.
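As a cross-check of the one-dimensional claim (my own sketch), the Green’s function of SRW on $\{-N,\dots,N\}$ killed at $\pm N$ can be computed via the fundamental matrix and compared with the explicit tent-shaped formula $G(x,y) = (N + \min(x,y))(N - \max(x,y))/N$:

```python
import numpy as np

# Green's function of SRW on {-N, ..., N}, killed at -N and +N.
N = 6
sites = list(range(-N + 1, N))
idx = {v: k for k, v in enumerate(sites)}
Q = np.zeros((len(sites), len(sites)))
for v, k in idx.items():
    for w in (v - 1, v + 1):
        if w in idx:
            Q[k, idx[w]] = 0.5
G = np.linalg.inv(np.eye(len(sites)) - Q)

# Compare against the closed form (N + min(x,y)) (N - max(x,y)) / N.
err = max(abs(G[idx[x], idx[y]] - (N + min(x, y)) * (N - max(x, y)) / N)
          for x in sites for y in sites)
print(err)
```

In particular the diagonal $G(x,x) = (N+x)(N-x)/N$ is maximised at $x = 0$, where it equals N.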

Let $\sigma_1, \sigma_2$, as before, be the first times that $S^{(1)}, S^{(2)}$ hit $r_1/2, r_2/2$ respectively. Under this coupling, for any m,

$$\left\{S_m = 0,\; m < \sigma_1 \wedge \sigma_2\right\} = \left\{R_m = r,\; m < \sigma_1 \wedge \sigma_2\right\},$$

since these events are literally equal. Since we showed that $\tau^R_{\partial D} \le \tau^S_{\partial D}$ almost surely, we can further deduce

$$\mathbb{P}\left(R_m = r,\, \tau^R_{\partial D} > m,\, m < \sigma_1 \wedge \sigma_2\right) \le \mathbb{P}\left(S_m = 0,\, \tau^S_{\partial D} > m,\, m < \sigma_1 \wedge \sigma_2\right).$$

To address the corresponding events for which $m \ge \sigma_1 \wedge \sigma_2$, we apply the strong Markov property at $\sigma_1 \wedge \sigma_2$, to obtain a (projected) SRW $\hat S$ started from $r_1/2$, and let $\hat\tau_0, \hat\tau_{r_1}$ be the hitting times of $0, r_1$ respectively and $\hat\tau_{\{-N,+N\}}$ the exit time. It will now suffice to prove that

$$\mathbb{P}_{r_1/2}\left(\hat S_m = 0,\; \hat\tau_{\{-N,+N\}} > m\right) \ge \mathbb{P}_{r_1/2}\left(\hat S_m = r_1,\; \hat\tau_{\{-N,+N\}} > m\right), \qquad (**)$$

as then we can apply the law of total probability and sum over the values of $\sigma_1 \wedge \sigma_2$ and m.

To prove this result, we consider the following bijection between trajectories of length m from $r_1/2$ to $\{0, r_1\}$. We decompose the trajectories into excursions away from $r_1/2$, and then a final meander from $r_1/2$ to $\{0, r_1\}$ that stays on the same side of $r_1/2$. We construct the new trajectory by preserving all the initial excursions, but reversing all the steps of the final meander. So if the original trajectory ended up at 0, the image ends up at $r_1$, and vice versa. Trivially, the initial excursions in the image only hit $\{-N,+N\}$ if the excursions in the original trajectory did this too. But it’s also easy to see, by a similar argument to the coupling at the start of this section, that if the original trajectory ends at $r_1$ and does not hit $\{-N,+N\}$, then neither does the image. However, the converse is not true. So we conclude (**), and thus

$$\mathbb{P}_0\left(S_m = 0,\, \tau_{\partial D} > m\right) \ge \mathbb{P}_r\left(S_m = r,\, \tau_{\partial D} > m\right)$$

for all m, by combining everything we have seen so far. And so, summing over m, we can now lift this to a statement about G itself, namely $G(0,0) \ge G(r,r)$, by considering both coordinates separately.
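Finally, a numerical confirmation of the conclusion (my own check, for a small box): the diagonal of the Green’s matrix on $[-N,N]^2$ is maximised at the central vertex.

```python
import numpy as np

# Diagonal G(x, x) of the Green's matrix for SRW on [-N, N]^2.
N = 5
interior = [(i, j) for i in range(-N + 1, N) for j in range(-N + 1, N)]
idx = {v: k for k, v in enumerate(interior)}
Q = np.zeros((len(interior), len(interior)))
for (i, j), u in idx.items():
    for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        if (i + di, j + dj) in idx:
            Q[u, idx[(i + di, j + dj)]] = 0.25
G = np.linalg.inv(np.eye(len(interior)) - Q)

# In the DGFF interpretation, G(x, x) is the pointwise variance Var(h_x).
variances = {v: G[k, k] for v, k in idx.items()}
print(max(variances, key=variances.get))  # the maximiser of G(x, x)
```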

The remaining cases for r require a little more care over the definition of $\sigma_1, \sigma_2$, though the same projection argument works, for fundamentally the same reason. (Note that in the above argument, if $r_1$ is odd and $S^{(1)}_m = \frac{r_1 - 1}{2}$, then in fact $R^{(1)}_m = \frac{r_1 + 1}{2}$, and so it’s not hard to convince yourself that a sensible adjustment to the stopping time $\sigma_1$ will allow a corresponding result with $r_1$ odd.) The case of a box with an even number of vertices along each side (so no central vertex) is harder, since in one dimension there are then *two* median sites, and it’s clear by symmetry that we can’t couple walks from them such that RW from one always exits at least as early as RW from the other. However, the distributions of exit times started from these two sites are the same (by symmetry), and so although we can’t find a coupling, we can use similar stopping times to obtain a result in probability.

In the next post, we’ll see how to apply this uniform bound on $G^{V_N}(x,x)$ to control the maximum of the DGFF on $V_N$. In particular, we will address how the positive correlations of the DGFF influence the behaviour of the maximum, by comparison with the setting of independent Gaussians at each site.