In the previous post, we defined the Discrete Gaussian Free Field, and offered some motivation via the discrete random walk bridge. In particular, when the increments of the random walk are chosen to be Gaussian, many natural calculations are straightforward, since Gaussian processes are well-behaved under conditioning and under linear transformations.
Non-zero boundary conditions
In the definition of the DGFF given last time, we demanded that $h\equiv 0$ on $\partial D$. But the model is perfectly well-defined under more general boundary conditions.
It’s helpful to recall again the situation with random walk and Brownian bridge. If we want a Brownian motion which passes through (0,0) and (1,s), we could repeat one construction for Brownian bridge, by taking a standard Brownian motion and conditioning (modulo probability zero technicalities) on passing through level s at time 1. But alternatively, we could set

$W_t := B_t + st, \qquad t\in[0,1],$

where $(B_t)$ is a standard Brownian bridge from 0 to 0.
That is, a Brownian bridge with drift can be obtained from a centred Brownian bridge by a linear transformation, and so certainly remains a Gaussian process. And exactly the same holds for a discrete Gaussian bridge: if we want non-zero values at the endpoints, we can obtain this distribution by taking the standard centred bridge and applying a linear transformation.
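As a quick numerical sketch of this transformation (the variable names and the particular construction of the centred bridge, by subtracting the linear interpolation of the walk's endpoint, are my choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100     # number of steps in the discrete bridge
s = 2.0     # desired value at the right endpoint

# Centred discrete Gaussian bridge: take a walk with N(0,1) increments
# and subtract the linear interpolation of its endpoint, so that the
# result is pinned to 0 at times 0 and N.
walk = np.concatenate([[0.0], np.cumsum(rng.standard_normal(N))])
t = np.arange(N + 1)
bridge = walk - (t / N) * walk[N]       # equals 0 at both endpoints

# Bridge from 0 to s: add the deterministic linear drift (t/N) * s.
bridge_s = bridge + (t / N) * s
```

The linear transformation leaves the process Gaussian; only the mean changes.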
We can see how this works directly at the level of density functions. If we take a centred Gaussian bridge $(Z_0,\ldots,Z_N)$ with $Z_0=Z_N=0$, then the density of $(Z_1,\ldots,Z_{N-1})$ is proportional to

$\exp\left(-\frac{1}{2}\sum_{i=1}^N (z_i-z_{i-1})^2\right), \qquad z_0=z_N=0.$
So rewriting $z_i = y_i - \frac{is}{N}$ (where we might want $y_0=0,\,y_N=s$ to fit the previous example), the sum within the exponent rearranges as

$\sum_{i=1}^N (y_i-y_{i-1})^2 \,-\, \frac{2s}{N}(y_N-y_0) \,+\, \frac{s^2}{N}.$
So when the values at the endpoints are fixed, this middle term is a constant, as is the final term, and thus the density of the linearly transformed bridge has exactly the same form as the original one.
In two or more dimensions, the analogue of adding a linear function is to add a harmonic function. First, some notation. Let $u:\partial D\to\mathbb{R}$ be any function on $\partial D$. Then there is a unique harmonic extension of $u$, for which $\Delta u = 0$ everywhere on $D^\circ$, the interior of the domain. Recall that $\Delta$ is the discrete graph Laplacian defined up to a constant by

$(\Delta u)(x) = \sum_{y\sim x}\left(u(y)-u(x)\right).$
If we want the field instead to have boundary values $u$, it’s enough to replace $h$ with $h+u$. Then, in the density for the DGFF ( (1) in the previous post), the term in the exponential becomes (ignoring the constant $\frac{1}{4d}$)

$\sum_{x\sim y}\left[(h(x)-h(y))^2 + (u(x)-u(y))^2 + 2(h(x)-h(y))(u(x)-u(y))\right].$
For each $x\in D^\circ$, on taking this sum over its neighbours $y\sim x$, the final term vanishes (since $u$ is harmonic, and $h\equiv 0$ on $\partial D$), while the second term is just a constant. So the density of the transformed field, which we’ll call $h^u$, is proportional to (after removing the constant arising from the second term above)

$\exp\left(-\frac{1}{4d}\sum_{x\sim y}\left(h^u(x)-h^u(y)\right)^2\right).$
So $h^u = h+u$ satisfies the conditions for the DGFF on D with non-zero boundary conditions $u$.
Harmonic functions and RW – a quick review
Like the covariances in the DGFF, harmonic functions on D are related to simple random walk on D stopped on $\partial D$. (I’m not claiming a direct connection right now.) We can define the harmonic extension of $u$ to an interior point x by taking $\mathbb{P}_x$ to be the law of SRW $(X_n)$ started from x, and then setting

$u(x) := \mathbb{E}_x\left[u(X_\tau)\right],$

where $\tau$ is the first time that the random walk hits the boundary $\partial D$.
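A minimal Monte Carlo sketch of this definition (the box domain and the boundary data $u(x,y)=x$, which is itself discrete harmonic, are my choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10          # domain: interior of the box {0,...,N}^2 in Z^2

def u(x, y):
    # Boundary data: the x-coordinate, which is discrete harmonic,
    # so its harmonic extension at (x0, y0) is exactly x0.
    return float(x)

def harmonic_via_srw(x0, y0, samples=5000):
    """Estimate the harmonic extension at (x0, y0) by averaging the
    boundary data at the point where SRW from (x0, y0) first exits."""
    steps = ((-1, 0), (1, 0), (0, -1), (0, 1))
    total = 0.0
    for _ in range(samples):
        x, y = x0, y0
        while 0 < x < N and 0 < y < N:          # still in the interior
            dx, dy = steps[rng.integers(4)]
            x, y = x + dx, y + dy
        total += u(x, y)
    return total / samples

est = harmonic_via_srw(5, 5)                    # exact answer is 5.0
```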
Inverse temperature – a quick remark
In the original definition of the density of the DGFF, there is the option to add a constant $\beta$ within the exponential term, so the density is proportional to

$\exp\left(-\frac{\beta}{4d}\sum_{x\sim y}\left(h(x)-h(y)\right)^2\right).$
With zero boundary conditions, the effect of this is straightforward, as varying $\beta$ just rescales the values taken by the field. But with non-zero boundary conditions, the effect is instead to vary the magnitude of the fluctuations of the values of the field around the (unique) harmonic function on the domain with those BCs. In particular, as $\beta\to\infty$, the field is ‘reluctant to be far from harmonic’, and so concentrates on the harmonic extension of the boundary values.
This parameter $\beta$ is called the inverse temperature. So low temperature corresponds to high $\beta$, and high stability, which fits some physical intuition.
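The rescaling claim for zero boundary conditions can be checked exactly in one dimension, where the precision matrix of the zero-boundary chain is the tridiagonal Dirichlet Laplacian (a sketch; the chain length and the value of $\beta$ are illustrative choices):

```python
import numpy as np

N = 20
# Precision matrix of the zero-boundary discrete Gaussian bridge at beta = 1:
# the tridiagonal Dirichlet Laplacian on the N - 1 interior points.
Q1 = 2 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)

beta = 4.0
cov1 = np.linalg.inv(Q1)             # covariance at beta = 1
cov_beta = np.linalg.inv(beta * Q1)  # covariance at inverse temperature beta

# Raising beta shrinks the covariance by exactly 1/beta, i.e. the field
# is the beta = 1 field rescaled by beta^{-1/2}.
scaling_ok = np.allclose(cov_beta, cov1 / beta)
```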
A Markov property
For a discrete (Gaussian) random walk, the Markov property says that conditional on a given value at a given time, the trajectory of the process before this time is independent of the trajectory afterwards. The discrete Gaussian bridge is similar. Suppose we have as before a centred Gaussian bridge $(Z_0,\ldots,Z_N)$, and condition that $Z_k = z$, for some $0<k<N$, and $z\in\mathbb{R}$. With this conditioning, the density (3) splits as a product

$\exp\left(-\frac{1}{2}\sum_{i=1}^k (z_i-z_{i-1})^2\right)\cdot \exp\left(-\frac{1}{2}\sum_{i=k+1}^N (z_i-z_{i-1})^2\right).$
Therefore, with this conditioning, the discrete Gaussian bridge splits into a pair of independent discrete Gaussian bridges with drift. (The same would hold if the original process had drift too.)
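This splitting can also be checked directly from the bridge covariance $\mathrm{Cov}(Z_i,Z_j)=\frac{i(N-j)}{N}$ for $i\le j$: by the Gaussian conditioning formula, the conditional covariance across the pinned time vanishes identically (the particular times below are illustrative):

```python
N, i, k, j = 20, 5, 10, 15          # times i < k < j along the bridge

def cov(a, b):
    """Covariance of the centred discrete Gaussian bridge on {0, ..., N}:
    Cov(Z_a, Z_b) = a * (N - b) / N for a <= b."""
    a, b = min(a, b), max(a, b)
    return a * (N - b) / N

# Gaussian conditioning formula for the conditional covariance of
# (Z_i, Z_j) given Z_k.  It vanishes identically, i.e. the two halves
# of the bridge are conditionally independent given the pinned value.
cond_cov = cov(i, j) - cov(i, k) * cov(k, j) / cov(k, k)
```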
The situation for the DGFF is similar, though rather than focusing on the conditioning event, it makes sense to start by specifying the sub-domain of interest. Let $A\subset D$, and take $B = D\setminus A^\circ$. So in particular $\partial A\subseteq B$.
Then we have that conditional on $h\big|_{\partial A}$, the restricted fields $h\big|_{A^\circ}$ and $h\big|_B$ are independent. Furthermore, $h\big|_A$ has the distribution of the DGFF on A, with boundary condition given by $h\big|_{\partial A}$. As in the discrete bridge, this follows just by splitting the density. Every gradient term corresponds to an edge in the underlying graph that lies either entirely inside A or entirely inside B. This holds for a general class of Gibbs models where the Hamiltonian depends only on the sum of some function of the heights (taken to be constant in this ‘free’ model) and the sum of some function of their nearest-neighbour gradients.
One additional and useful interpretation is that if we only care about the field on the restricted region A, the dependence of $h\big|_A$ on $h\big|_B$ comes only through $h\big|_{\partial A}$. But more than that, it comes only through the (random) harmonic function which extends the (random) values taken on the boundary of A to the whole of A. So, if $h^A$ is an independent DGFF on A with zero boundary conditions, we can construct the DGFF from its value on $\partial A$ via

$h\big|_A \stackrel{d}{=} h^A + \varphi,$

where $\varphi$ is the unique harmonic extension of the (random) values taken by $h$ on $\partial A$ to the whole of A.
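In one dimension this decomposition can be verified exactly: the harmonic extension is just linear interpolation of the two boundary values, and the two covariance contributions add up to the covariance of the full field (the specific times below are illustrative):

```python
N, a, b = 20, 5, 15     # field on {0,...,N}; sub-domain A = {a,...,b}
i, j = 8, 12            # two interior points of A

def cov_bridge(p, q, left, right):
    """Covariance of a zero-boundary discrete Gaussian bridge on {left..right}."""
    p, q = min(p, q), max(p, q)
    return (p - left) * (right - q) / (right - left)

# Covariance of the full field (a bridge on {0..N}) at i, j.
full = cov_bridge(i, j, 0, N)

# Covariance of the independent zero-boundary field h^A on A.
inner = cov_bridge(i, j, a, b)

# Covariance contributed by the harmonic (here: linear) extension of the
# random boundary values h(a), h(b).
wa_i, wb_i = (b - i) / (b - a), (i - a) / (b - a)
wa_j, wb_j = (b - j) / (b - a), (j - a) / (b - a)
phi_cov = (wa_i * wa_j * cov_bridge(a, a, 0, N)
           + wb_i * wb_j * cov_bridge(b, b, 0, N)
           + (wa_i * wb_j + wb_i * wa_j) * cov_bridge(a, b, 0, N))

# Gibbs-Markov decomposition: the two contributions add up exactly.
decomposition_ok = abs(full - (inner + phi_cov)) < 1e-12
```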
This Markov property is crucial to much of the analysis to come. There are several choices of the restricted domain which come up repeatedly. In the next post we’ll look at how much one can deduce by taking A to be the even vertices in D (recalling that every integer lattice is bipartite), and then taking A to be a finer sublattice within D. We’ll use this to get some good bounds on the probability that the DGFF is positive on the whole of D. Perhaps later we’ll look at a ring decomposition of $\mathbb{Z}^d$ consisting of annuli spreading out from a fixed origin. Then the distribution of the field at this origin can be considered, via the final idea discussed above, as the limit of an infinite sequence of random harmonic functions given by the values taken by the field at increasingly large radii from the origin. Defining the DGFF on the whole lattice depends on the existence or otherwise of this local limit.