SLE Revision 4: The Gaussian Free Field and SLE4

I couldn’t resist breaking the order of my revision notes in order that the title might be self-referential. Anyway, it’s the night before my exam on Conformal Invariance and Randomness, and I’m practising writing this in case of an essay question about the Gaussian Free Field and its relation to the SLE objects discussed in the course.

What is a Gaussian Free Field?

The most natural definition is too technical for this context. Instead, recall that we could very informally consider a Poisson random measure to have the form of a series of Poisson random variables placed at each point in the domain, weighted infinitesimally so that the integral over a region gives a Poisson random variable with mean proportional to the measure of the region, and so that disjoint regions are independent. Here we do a similar thing, only with infinitesimal centred Gaussians. We have to specify the covariance structure.

We define the Green’s function on a domain D, which has a resonance with PDE theory, by (up to a normalising constant):

G_D(x,y)=\lim_{\epsilon\rightarrow 0}\frac{1}{|B(y,\epsilon)|}\mathbb{E}[\text{time spent in }B(y,\epsilon)\text{ by BM started at }x\text{, stopped at }T_D]

We want the covariance structure of the hypothetical infinitesimal Gaussians to be given by \mathbb{E}(g(x)g(y))=G_D(x,y). So, formally, we define (\Gamma(A),\,A\subset D\text{ open}) by declaring (\Gamma(A_1),\ldots,\Gamma(A_n)) to be a centred jointly Gaussian vector with covariances \mathbb{E}(\Gamma(A_i)\Gamma(A_j))=\int_{A_i\times A_j}G_D(x,y)\,dx\,dy.
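Since this definition is entirely finite-dimensional, it can at least be sanity-checked numerically. The following is a minimal sketch of my own (not from the course; the squares, sample sizes and seed are arbitrary choices): it Monte Carlo-estimates the covariance integrals for a few disjoint squares A_i in the upper half-plane, using the explicit half-plane Green’s function quoted later in this post, and then samples the corresponding centred Gaussian vector.

# Monte Carlo sketch of the finite-dimensional definition of the GFF:
# sample (Gamma(A_1), ..., Gamma(A_n)) for disjoint squares A_i in H,
# with covariance int_{A_i x A_j} G_H(x,y) dx dy, where
# G_H(x,y) = log|x - conj(y)| - log|x - y|.
import numpy as np

rng = np.random.default_rng(0)

def green_H(x, y):
    # Green's function of the upper half-plane at complex points x, y
    return np.log(np.abs(x - np.conj(y))) - np.log(np.abs(x - y))

# disjoint axis-aligned squares: (lower-left corner, side length)
squares = [(0.0 + 0.5j, 0.4), (1.0 + 0.5j, 0.4), (0.0 + 1.5j, 0.4)]

def uniform_in_square(corner, side, n):
    return corner + side * (rng.random(n) + 1j * rng.random(n))

n_mc = 200_000  # Monte Carlo points per pair of squares
C = np.zeros((len(squares), len(squares)))
for i, (ci, si) in enumerate(squares):
    for j, (cj, sj) in enumerate(squares):
        x = uniform_in_square(ci, si, n_mc)
        y = uniform_in_square(cj, sj, n_mc)
        # area_i * area_j * average of G over A_i x A_j
        C[i, j] = (si ** 2) * (sj ** 2) * np.mean(green_H(x, y))

C = (C + C.T) / 2  # symmetrise away the Monte Carlo noise
sample = rng.multivariate_normal(np.zeros(len(squares)), C)
print(np.round(C, 4))
print(sample)  # one realisation of (Gamma(A_1), Gamma(A_2), Gamma(A_3))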

The good news is that we have the nice expression G_{\mathbb{U}}(0,x)=\log\frac{1}{|x|}, and the Green’s functions are conformally invariant in the sense that G_{\phi(D)}(\phi(x),\phi(y))=G_D(x,y), which follows directly from conformal invariance of Brownian motion.
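As a quick worked example of how this conformal invariance gets used: the Möbius automorphism of the disc sending x to 0 upgrades the formula at 0 to a formula for a general pair of points,

\phi(z)=\frac{z-x}{1-\bar{x}z},\qquad G_{\mathbb{U}}(x,y)=G_{\mathbb{U}}(\phi(x),\phi(y))=G_{\mathbb{U}}(0,\phi(y))=\log\left|\frac{1-\bar{x}y}{y-x}\right|.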

The bad news is that existence is not clear. The following construction provides some motivation, though. We have a so-called excursion measure for BMs in a domain D. There isn’t time to discuss this now: it is infinite, and invariant under translations of the boundary (assuming the boundary is \mathbb{R}\subset \bar{\mathbb{H}}, which is fine after taking a conformal map). Then take a Poisson Point Process on the set of Brownian excursions with intensity given by this measure. Now fix a function f on the boundary \partial D, and define \Gamma_f(A) to be the sum, over excursions in the PPP which pass through A, of the value of f at the excursion’s starting point, weighted by the time the excursion spends in A. We have a universality relation given by the central limit theorem: if we define h(x) to be (in a pointwise limit) the expected value of this quantity, and we take n independent copies, we have:

\frac{1}{\sqrt{n}}\left[\Gamma_f^1(A)+\ldots+\Gamma_f^n(A)-n\int_A h(x)\,dx\right]\rightarrow\Gamma(A)

where this limiting random variable is Gaussian.

For now though, we assume existence without full proof.

SLE_4

We consider chordal SLE_\kappa, which has the form of a curve \gamma[0,\infty) from 0 to \infty in \mathbb{H}. With (g_t) the usual Loewner maps, consider \tilde{X}_t=X_t-W_t:=g_t(x)-\sqrt{\kappa}\beta_t for some fixed x\in\mathbb{H}. We are interested in the evolution of \theta_t:=\arg\tilde{X}_t. Note that on the (almost sure, for \kappa\leq 4) event that x does not lie on the curve, \theta_t converges almost surely either to 0 or to \pi, depending on whether the curve passes to the left or to the right (respectively) of x.

By Loewner’s DE for the upper half-plane and Ito’s formula:

d\tilde{X}_t=\frac{2}{\tilde{X}_t}\,dt-\sqrt{\kappa}\,d\beta_t,\quad d\log\tilde{X}_t=\left(2-\frac{\kappa}{2}\right)\frac{dt}{\tilde{X}_t^2}-\frac{\sqrt{\kappa}}{\tilde{X}_t}\,d\beta_t
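For completeness, here is the Ito step spelled out. The chordal Loewner equation \partial_t g_t(x)=\frac{2}{g_t(x)-W_t} gives the drift of \tilde{X}_t, and then

d\log\tilde{X}_t=\frac{d\tilde{X}_t}{\tilde{X}_t}-\frac{d[\tilde{X}]_t}{2\tilde{X}_t^2}=\frac{2\,dt}{\tilde{X}_t^2}-\frac{\sqrt{\kappa}}{\tilde{X}_t}\,d\beta_t-\frac{\kappa}{2}\frac{dt}{\tilde{X}_t^2},

and collecting the dt terms gives the second expression above. Taking imaginary parts of d\log\tilde{X}_t is what produces the equation for \theta_t below.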

So, when \kappa=4, the dt term vanishes, which gives that \log\tilde{X}_t is a local martingale, and so

d\theta_t=-\Im\left(\frac{2}{\tilde{X}_t}\right)d\beta_t

is a true martingale, since \theta is bounded. Note that

\theta_t=\mathbb{E}[\pi\,\mathbf{1}(\gamma\text{ passes to the right of }x)\,|\,\mathcal{F}_t]

Note also that

\mathbb{P}(\text{BM started at }x\text{ hits }\gamma[0,t]\cup\mathbb{R}\text{ to the left of }\gamma(t)\,|\,\gamma[0,t])=\frac{\theta_t}{\pi},

since under the map z\mapsto g_t(z)-W_t the part of the boundary to the left of the tip \gamma(t) is sent to (-\infty,0], and the harmonic measure of (-\infty,0] seen from a point z\in\mathbb{H} is \arg(z)/\pi.
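This martingale property is easy to check numerically. Here is a rough Monte Carlo sketch of my own (not from the course; the starting point, horizon and step size are arbitrary, and the horizon is kept small so the Euler scheme stays comfortably inside \mathbb{H}): we run the SDE for \tilde{X}_t with \kappa=4 by Euler-Maruyama and compare the empirical mean of \theta_T with \theta_0.

# Check that theta_t = arg(X~_t) is (approximately) a martingale for kappa = 4,
# where dX~_t = (2 / X~_t) dt - sqrt(kappa) dbeta_t.
import numpy as np

rng = np.random.default_rng(1)

kappa = 4.0
x0 = 1.0 + 1.0j        # fixed point x in the upper half-plane
T, dt = 0.2, 1e-4      # short horizon, small Euler step
n_paths = 20_000

X = np.full(n_paths, x0, dtype=complex)
for _ in range(int(T / dt)):
    dB = rng.normal(0.0, np.sqrt(dt), n_paths)   # real Brownian increments
    X += (2.0 / X) * dt - np.sqrt(kappa) * dB

theta_T = np.angle(X)   # arg of X~_T, which lies in (0, pi) for points still in H
print("theta_0      =", np.angle(x0))
print("mean theta_T =", theta_T.mean(), "+/-", theta_T.std() / np.sqrt(n_paths))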

SLE_4 and the Gaussian Free Field on H

We will show that this chordal SLE_4 induces a conformal-Markov-type property for Gaussian Free Fields constructed on the slit domains. Precisely, we will show that if \Gamma_T is a GFF on H_T=\mathbb{H}\backslash\gamma[0,T], then \Gamma_T+c\,h_T(\cdot)\stackrel{d}{=}\Gamma_0+c\,h_0(\cdot), where c is a constant to be determined, and h_t(x)=\theta_t(x) in keeping with the lecturer’s notation!

It will suffice to check that, for every fixed test function p with compact support, \Gamma_T(p)+c(h_T(p)-h_0(p)) is a centred Gaussian with variance \int dx\,dy\,G_{\mathbb{H}}(x,y)p(x)p(y).

First, applying Ito and conformal invariance of the Green’s functions under the maps g_t,

dG_{H_t}(x,y)=cd[h(x),h(y)]_t

The details are not particularly illuminating, but they exploit the fact that the Green’s function on \mathbb{H} has the reasonably nice form G_{\mathbb{H}}(x,y)=\log\left|\frac{x-\bar{y}}{x-y}\right|, together with the identity G_{H_t}(x,y)=G_{\mathbb{H}}(\tilde{X}_t(x),\tilde{X}_t(y)). We are also being extremely lax with constants, but we have plenty of freedom there.
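For concreteness, the covariation side of this identity comes straight from the SDE for \theta_t above, since every point is driven by the same Brownian motion \beta:

d[h(x),h(y)]_t=\Im\left(\frac{2}{\tilde{X}_t(x)}\right)\Im\left(\frac{2}{\tilde{X}_t(y)}\right)dt,

and the content of the claim is that, up to the lax constant, this matches the Ito expansion of G_{H_t}(x,y)=G_{\mathbb{H}}(\tilde{X}_t(x),\tilde{X}_t(y)) using the explicit form above.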

After applying Ito and some (for now unjustified) Fubini:

dh_t(p)=\left(\int c\,p(x)\,\Im\left(\frac{1}{\tilde{X}_t(x)}\right)dx\right)d\beta_t

and so, as we would expect (since each h_t(x) is), this is a local martingale. We now deploy Dubins-Schwarz:

h_T(p)-h_0(p)\stackrel{d}{=}B_{\sigma(T)} for B a Brownian motion, where

\sigma(T)=\int_0^T dt\left(\int c\,p(x)\,\Im\left(\frac{1}{\tilde{X}_t(x)}\right)dx\right)^2
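For reference, the form of Dubins-Schwarz being used here: any continuous local martingale M with M_0=0 can be written as M_t=B_{[M]_t} for some Brownian motion B (possibly on an enlarged probability space). In our case M_t=h_t(p)-h_0(p) and [M]_T=\sigma(T).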

So, conditionally on (h_t(p),\,t\in[0,T]), we want to make up the difference to the distribution of \Gamma_0(p). Add to h_T(p)-h_0(p) an independent random variable distributed as N(0,s-\sigma(T)), where

s=\int dx\,dy\,p(x)p(y)G_{\mathbb{H}}(x,y)=\mathrm{Var}\,\Gamma_0(p).

Then

s-\sigma(T)=\int p(x)p(y)\left[G_{\mathbb{H}}(x,y)-c\int_0^T dt\,\Im\left(\frac{1}{\tilde{X}_t(x)}\right)\Im\left(\frac{1}{\tilde{X}_t(y)}\right)\right]dx\,dy=\int p(x)p(y)G_{H_T}(x,y)\,dx\,dy, as desired.
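Heuristically, then (treating \sigma(T) as given, as in the Dubins-Schwarz step above), conditionally on the curve \Gamma_T(p) supplies exactly the independent N(0,s-\sigma(T)) ingredient we asked for, and the variances add up:

\mathrm{Var}\left(\Gamma_T(p)+c(h_T(p)-h_0(p))\right)=(s-\sigma(T))+\sigma(T)=s=\mathrm{Var}\,\Gamma_0(p),

which is exactly the statement we set out to check for the test function p.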

Why is this important?

This is important, or at least interesting, because we can use it to reverse-engineer the SLE. Informally, we let T\rightarrow\infty in the previous result. This says that taking a GFF on the domain left by removing the whole of the SLE curve (whatever that means), and then adding \pi at points to the left of the curve (this being the limit \lim_T h_T), gives the same object in distribution as a standard GFF on the upper half-plane plus the argument function (all up to the lax constant c). It is reasonable to conjecture that a GFF on a disconnected domain has the same structure as taking independent GFFs in each component, and this gives an interesting invariance condition on GFFs. It can also be observed (Schramm-Sheffield) that SLE_4 arises by reversing the argument: take a suitably conditioned GFF on \mathbb{H} and look for the interface between the regions where it is ‘large’ and where it is ‘small’ (obviously this is a ludicrous simplification). This interface is then, in a suitable limit, SLE_4.


SLE revision 2: Loewner’s Differential Equation

Last time I set up the geometric notions of probability that will be needed to proceed with the course material. Now we consider the deterministic differential equation due to Loewner (1923) which he used to make progress on the Bieberbach Conjecture, but which will also underpin the construction of SLE. This proof is adapted for this specific case from the slightly more general argument in Duren’s Univalent Functions (Section 3.3). Because in that setting the result concerns an infinite domain, readers should beware that though I am using identical notation, in about half the cases my functions are the inverses, and my sets the complements, of what they are in Duren.

To explain the construction, as with so many things, a picture speaks a thousand words. Unfortunately I have neither the software nor, right now, the time to produce the necessary diagrams, so the following will have to suffice.

Consider a deterministic simple curve in the unit disc, (\gamma(t): t\in[0,\infty)), started from a point of the boundary and avoiding 0. Removing initial segments of the curve gives the nested simply-connected regions:

U_t:=\mathbb{U}\backslash \gamma[0,t].

Then define, as in the previous post, the unique conformal map

f_t: U_t\rightarrow \mathbb{U} such that f_t(0)=0,\ f_t'(0)\in\mathbb{R}^+,

and furthermore set \xi_t to be the image of \gamma(t) under this map. (Note that though the conformal map is not defined on the boundary, it must extend continuously).

f_t'(0) is increasing.

Very informally, this derivative records how much magnification is required at the origin to open the slit domain out into the full disc. Extending the path will demand further magnification. More rigorously, set:

g_t=f_t^{-1}:\mathbb{U}\rightarrow U_t.

Then the (g_t) are injective functions from the unit disc to itself which preserve the origin, so Schwarz’s lemma applies. They are clearly not rotations, so

|g_t'(0)|<1.

By the inverse function theorem, |f_t'(0)|>1 (*). Now, given t>s, we can decompose:

f_t=\tilde{f}\circ f_s,

where \tilde{f}=f_t\circ g_s maps the slit disc f_s(U_t) onto \mathbb{U} and fixes 0, so it has this useful property (*) also. Applying the chain rule and noting that f_s(0)=0, we get f_t'(0)=\tilde{f}'(0)f_s'(0), and hence

|f_t'(0)|>|f_s'(0)|.

This means we are free to demand that the curve has time parameter such that

|f_t'(0)|=e^t.

A reminder of the statement of Schwarz’s Lemma: if f:\mathbb{U}\rightarrow\mathbb{U} is holomorphic with f(0)=0, then |f(z)|\leq|z| for all z\in\mathbb{U} and |f'(0)|\leq 1, with equality (at some z\neq 0, or in the derivative bound) precisely when f is a rotation.

SLE revision 1: Properties of Random Sets

Prof. Werner’s excellent Part III course ‘Topics in Conformal Invariance and Randomness’ has recently finished, and I’ve been doing some revision. The course begins with a general discussion of some of the ideas useful in demanding some form of regularity for random paths or random sets in a domain. For example, for continuous-time processes we can define a Markovian property: this is both easy and natural, mainly because the state space, assuming it is \mathbb{R}^d, is homogeneous, a luxury we do not have in, say, the unit disc. In two dimensions, things are particularly tractable because of the identification with the complex plane, and from this we develop the Schramm-Loewner evolution and examine its properties. In particular, SLEs with certain parameters arise as scaling limits of discrete processes, with wide-ranging applications. In this first note, we motivate and explain some properties that we might wish random sets to have.

A conformal map is an invertible holomorphic map between domains in the complex plane; it preserves angles. Riemann’s mapping theorem states that there exists a conformal map from any non-empty, simply connected domain other than \mathbb{C} itself to the open unit disc. We have some freedom to control one point, and (for sufficiently nice domains) the boundary is mapped to the boundary.

Conformal Invariance: Given a simply connected domain D and a conformal map \phi:D\rightarrow\mathbb{U}, a process \mathcal{B}^D defined on domains D is conformally invariant if

\phi(\mathcal{B}^D)\stackrel{d}{=}\mathcal{B}^\mathbb{U}.

This says that the law of the process is preserved under the transformation.

The notation chosen is deliberate. The best example is Brownian paths: take B a Brownian motion started at 0, and T^D the exit time of domain D, then set \mathcal{B}^D=\{B_t,\,t\leq T^D\}, the path in D. Informally, conformal invariance, for all domains D containing 0, follows because BM is isotropic: that is, the direction taken after a time t, whatever that means, is uniformly distributed. Modulo Markov technicalities (and a time change, which does not affect the path viewed as a set), this property is preserved under a conformal map, because such maps preserve angles.
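Here is a rough Monte Carlo illustration of my own (not from the course; the point a, step size and sample sizes are arbitrary): the exit point of a Brownian motion started at 0 is uniform on the circle, so pushing a uniform boundary angle through the Möbius map sending 0 to a should reproduce the exit distribution of a Brownian motion started at a.

# Conformal invariance of the Brownian path in the unit disc, tested via
# exit distributions: BM from a, versus the push-forward under
# phi(z) = (z + a) / (1 + a z)  (a real) of the uniform exit law of BM from 0.
import numpy as np

rng = np.random.default_rng(2)
a = 0.5
n = 10_000
dt = 1e-3

# (1) exit angles of Brownian motion started at a, by direct simulation
z = np.full(n, a, dtype=complex)
alive = np.ones(n, dtype=bool)
exit_angle = np.zeros(n)
while alive.any():
    step = rng.normal(0.0, np.sqrt(dt), (2, alive.sum()))
    z[alive] += step[0] + 1j * step[1]
    just_exited = alive.copy()
    just_exited[alive] = np.abs(z[alive]) >= 1.0
    exit_angle[just_exited] = np.angle(z[just_exited])
    alive &= ~just_exited

# (2) uniform exit angle from 0, pushed through the Mobius map sending 0 to a
theta = rng.uniform(-np.pi, np.pi, n)
w = np.exp(1j * theta)
pushed_angle = np.angle((w + a) / (1 + a * w))

# the two samples should agree in distribution; compare a crude statistic
# (both means should be close to Re(a) = 0.5, by harmonicity of Re z)
print(np.mean(np.cos(exit_angle)), np.mean(np.cos(pushed_angle)))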

Conformal Restriction: This is essentially the same as conformal invariance, but in the special case where one of the domains is contained in the other. Although this is less general, by viewing everything in terms of laws of processes in the larger domain, we can in fact state an identity for a single conditioned process, rather than for two effectively unrelated processes. We assume the reference domain is the unit disc.

Concretely, we can consider a random set K in the unit disc with law P^K, and for a simply connected subset U\subset\mathbb{U} containing 0 and with 1 on its boundary, define the conformal map \phi_U:U\rightarrow \mathbb{U} that fixes 0 and 1. Then set P_U^K to be the law of \phi_U^{-1}(K), which gives a law for random sets in U. We say K satisfies conformal restriction if:

P_U^K=P^K(\,\cdot\mid K\subset U)

Observe that applying \phi_U to both sides of the definition gives conformal invariance for this pair of domains.
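Spelled out with the notation above: if K' is distributed as K conditioned on the event \{K\subset U\}, then the restriction property says

\phi_U(K')\stackrel{d}{=}K,

which is precisely conformal invariance for the pair of domains (U,\mathbb{U}).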