Real Trees – Root Growth and Regrafting

Two weeks ago in our reading group meeting, Raphael told us about Chapter Five which introduces root growth and regrafting. One of the points of establishing the Gromov-Hausdorff topology in this book was to provide a more natural setting for a discussion of tree-valued processes. Indeed in what follows, one can imagine how to start the construction of a similar process for the excursions which can be used to encode real trees, involving cutting off sub-excursions above one-sided local minima, then glueing them back in elsewhere. But taking account of the equivalence structure will be challenging, and it is much nicer to be able to describe cutting a tree in two by removing a single point without having to worry about quotient maps.

We have seen in Chapter Two an example of a process defined on the family of rooted trees with n labelled vertices which has the uniform rooted tree as an invariant distribution. Given a rooted tree with root p, we choose uniformly at random a vertex p’ in [n] to be the new root. Then if p’=p we do nothing, otherwise we remove the unique first edge in the path from p’ to p, giving two trees. Adding an edge from p to p’ completes the step and gives a new tree with p’ as root. We might want to take a metric limit of these processes as n grows and see whether we end up with a stationary real tree-valued process whose marginals are the BCRT.
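
As a sanity check on the discrete dynamics, here is a minimal sketch of one step of this chain, with the tree stored as a parent map (a hypothetical representation; the function name is mine, not the book’s):

```python
import random

def root_regraft_step(parent, root):
    """One step of the discrete root-growth-and-regrafting chain.

    `parent` maps each non-root vertex in {1,...,n} to its parent; the root
    has no entry.  A uniform vertex p' is chosen; the first edge on the path
    from p' to the root is cut, and the old root is attached below p'.
    """
    n = len(parent) + 1
    new_root = random.randrange(1, n + 1)
    if new_root == root:
        return parent, root
    del parent[new_root]       # cut the first edge on the path new_root -> root
    parent[root] = new_root    # regraft: the old root now hangs below the new root
    return parent, new_root

# Example: start from the path 1 - 2 - 3 - 4 rooted at 1 and run a few steps.
parent, root = {2: 1, 3: 2, 4: 3}, 1
for _ in range(5):
    parent, root = root_regraft_step(parent, root)
```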

To see non-trivial limiting behaviour, it is most interesting to consider the evolution of a particular subtree (which includes the root) through this process. If the vertex chosen for cutting lies in our observed subtree, then the subtree undergoes a prune and regraft operation. On the other hand, if the vertex chosen for cutting does not lie in the subtree, then we do not see any effect of the pruning, except the addition of a new vertex below the original root, which becomes the new root. So essentially, from the point of view of our observed subtree, the root is growing.

Now we can think about interpreting the dynamics of a natural limit process acting on real trees. The key idea is that we don’t change the set on which the tree is supported much, but instead just change the metric. In particular, we will keep the original tree, and add on length at unit rate. Of course, where this length gets added on entirely determines the metric structure of the tree, but that doesn’t stop us giving a simple ‘name’ for the extra length. If we consider a process X^T starting from a particular finite subtree T, then at time t, the tree X^T_t has vertex set T \coprod (0,t]. (Finite subtree here means that it has finite total length.)

Root regrafting should happen at a rate proportional to the total length of the current observed tree. This is reasonable since after all it is supported within a larger tree, so in the discrete case the probability of a prune-regrafting event happening within a given observed subtree is proportional to the number of vertices in that subtree, which scales naturally as length in the real tree limit. It turns out that to get unit rate root growth with \Theta(1) rate prune-regrafting, we should consider subtrees of size \sqrt{n} within a host tree of size n as n\rightarrow\infty. We also rescale the lengths by \frac{1}{\sqrt{n}}, and time by \sqrt{n} so we actually see prune-regraft events.

Furthermore, if the subtree is pruned, the location of the pruning is chosen uniformly by length of the current observed subtree. So we can view the pruning process as being driven by a Poisson point process with intensity given by the instantaneous length measure of the tree, which at time t has vertex set T\coprod (0,t]. It will turn out to be consistent that there is a ‘piecewise isometry’ for want of a better phrase between the metric (and thus length measure) on X^T_t and the canonical induced measure on T\coprod (0,t], so we can describe the instances and locations of the pruning events via a pair of PPPs. The first is supported on T \times [0,\infty), and the second on \{(t,x): 0 \le x \le t\}, since we only ‘notice’ pruning at the point labelled x if the pruning happens at some time t after x was created.
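
A small simulation sketch of the times and locations of these pruning events, under my reading of the dynamics (unit-rate root growth, prune rate equal to the current total length, proportionality constant taken to be 1; the representation and names are mine):

```python
import math
import random

def pruning_events(L0, t_max, rng=random):
    """Times and locations of prune-regraft events for an observed tree of
    initial total length L0, growing at unit rate and pruned at rate L0 + t.

    Locations are uniform on the current length measure: ('T', u) means a point
    at length-coordinate u in the original tree T, and ('grown', x) a point
    x in (0, t] on the grown part.
    """
    events, t = [], 0.0
    while True:
        e = rng.expovariate(1.0)
        # Next arrival of an inhomogeneous PP with rate L0 + s:
        # solve (L0 + t) * s + s^2 / 2 = e for the waiting time s.
        s = -(L0 + t) + math.sqrt((L0 + t) ** 2 + 2 * e)
        t += s
        if t > t_max:
            return events
        u = rng.uniform(0.0, L0 + t)
        events.append((t, ('T', u) if u < L0 else ('grown', u - L0)))
```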

If we start from a compact tree T, then the total intensity of this pair is finite up to any finite time t, and so we have a countable sequence \tau_0=0<\tau_1<\tau_2<\ldots of times for pruning events. It is easy to describe (but a bit messy to notate) the evolution of the metric between these pruning times. Essentially the distance between any pair of points in the observed tree at time \tau_m with root \rho_{\tau_m} is constant between times \tau_m,\tau_{m+1}, and new points are added so that the distance between \rho_{\tau_m} and any new point a\in(\tau_m,\tau_{m+1}] is a-\tau_m, and everything else follows from straightforward consideration of geodesics.

When a pruning event happens at point x_m at time \tau_m, distances are preserved within the subtree above x_m in X^T_{\tau_m -}, and within the rest of the tree. Again, an expression for the cross distances is straightforward but requires a volume of notation not ideally suited to this medium.

The natural thing to consider is the coupled processes started from different subtrees (again both must contain the original root) of the same host tree. Say T^1,T^2\le T; then it is relatively easy to check that X^{T^1}_t,X^{T^2}_t \le X^T_t \,\forall t, when we drive the processes by consistently coupled Poisson processes. Furthermore, it is genuinely obvious that the Hausdorff distance between X^{T^1}_t,X^{T^2}_t, here viewed as compact subsets of (X^T_t, d^T_t), remains constant during the root growth phase.

Less obvious but more important is that the Hausdorff distance decreases during regrafting events. Suppose that just before a regrafting event, the two subtrees are T’ and T”, and the Hausdorff distance between them is \epsilon. This Hausdorff distance is with respect to the metric on the whole tree T. [Actually this is a mild abuse of notation – I’m now taking T to be the whole tree just before the regraft, rather than the tree at time 0.]

So for any a\in T', we can choose b\in T'' such that d_T(a,b)\le \epsilon. This is preserved under the regraft unless the pruning point lies on the geodesic segment (in T) between a and b. But in that case, the distance between a and the pruning point is again at most \epsilon, and so after the regrafting, a is at most \epsilon away from the new root, which is in both subtrees, and in particular the regrafted version of T”.

This is obviously a useful first step on the path to proving any kind of convergence result. There are some technicalities which we have skipped over. It is fairly natural that this leads to a Markov process when the original tree is finite, but it is less clear how to define these dynamics when the total tree length is infinite, as we don’t want regrafting events to be happening continuously unless we can bound their net effect in some sense.

Last week, Franz showed us how to introduce the BCRT into matters. Specifically, that the BCRT is the unique stationary distribution for this process. After a bit more work, the previous result says that for convergence properties it doesn’t matter too much what tree we start from, so it is fine to start from a single point. Then the cut points and growth mechanism correspond very well to the Poisson line-breaking construction of the BCRT. With another ‘grand coupling’ we can indeed construct them simultaneously. Furthermore, we can show weak convergence of the discrete-world Markov chain tree algorithm to the process with these root growth with regrafting dynamics.

It does seem slightly counter-intuitive that a process defined on the whole of the discrete tree converges to a process defined through subtrees. Evans remarks in the introduction to the chapter that this is a consequence of having limits described as compact real trees. In the limit, almost all vertices are close to leaves, so in a Hausdorff sense, considering only \sqrt{n} of the vertices (i.e. a subtree) doesn’t really make any difference after rescaling edge lengths. I feel I don’t understand exactly why it’s ok to take the limits in this order, but I can see why this might work after more checking.

Tomorrow, we will have our last session, probably discussing subtree prune-and-regraft, where the regrafting does not necessarily happen at the root.

Poisson Random Measures

[This is a companion to the previous post. They explore different aspects of the same problem which I have been thinking about from a research point of view. So that they can be read independently, there has inevitably been some overlap.]

As I explained in passing previously, Poisson Random Measures have come up in my current research project. Indeed, the context where they have appeared seems like a very good motivation for considering the construction and some properties of PRMs.

We begin not with a Poisson variable, but with a standard Erdos-Renyi random graph G(n,\frac{c}{n}). The local limit of a component in this random graph is given by a Galton-Watson branching process with Poisson(c) offspring distribution. Recall that a local limit is a description of what the structure looks like near a given (or random) vertex. Since the vertices in G(n,p) are exchangeable, the choice of root matters less. Anyway, the number of neighbours in the graph of our root is given by Bin(n-1,c/n). Suppose that the root v_0 has k neighbours. Then if we are just interested in determining the vertices in the component, we can ignore the possibility of further edges between these neighbours. So if we pick one of the neighbours of the root, say v_1, and count the number of neighbours of this vertex that we haven’t already considered, this is distributed as Bin(n-1-k,c/n), since we discount the root and the k neighbours of the root.

Then, as n grows large, Bin(n-1,c/n) converges in distribution to Po(c). Outside a very unlikely event, whose probability we can control if necessary, so does Bin(n-1-k,c/n). Indeed if we consider a set of K vertices which are already connected in some way, then the distribution of the number of neighbours of one of them which we haven’t already considered is still Po(c) in the limit.
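
A quick sketch of this exploration, revealing only edges to still-unseen vertices, so that each offspring count is exactly the binomial described above (the function name is mine):

```python
import random
from collections import deque

def explore_component(n, c, start=0):
    """Breadth-first exploration of the component of `start` in G(n, c/n).

    Only edges from the current vertex to still-unseen vertices are revealed,
    so the k-th offspring count is Bin(#unseen, c/n), which is approximately
    Po(c) when n is large.  Returns the list of offspring counts.
    """
    p = c / n
    unseen = set(range(n)) - {start}
    queue, offspring = deque([start]), []
    while queue:
        queue.popleft()
        children = {v for v in unseen if random.random() < p}
        offspring.append(len(children))
        unseen -= children
        queue.extend(children)
    return offspring

# Offspring counts over many runs have mean close to c, e.g.:
# counts = explore_component(10**5, 0.8)
```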

Now we consider what happens if we declare the graph to be inhomogeneous. The simplest possible way to achieve this is to specify two types of vertices, say type A and type B. Then we specify the proportion of vertices of each type, and the probability that there is an edge between two vertices of given types. This is best given by a symmetric matrix. So for example, if we wanted a random bipartite graph, we could achieve this as described by setting all the diagonal entries of the matrix to be zero.

So does the local limit extend to this setting? Yes, unsurprisingly it does. To be concrete, let’s say that the proportions of types A and B are a and b respectively, and the probabilities of having edges between vertices of various types are given by P=(p_{ij}/n)_{i,j\in\{A,B\}}. So we can proceed exactly as before, only now we have to count how many type A neighbours and how many type B neighbours we see at all stages. We have to specify the type of our starting vertex. Suppose for now that it is type A. Then the number of type A neighbours is distributed as

\text{Bin}(an,p_{AA}/n)\stackrel{d}{\rightarrow}\text{Po}(ap_{AA}),

and similarly the limiting number of type B neighbours is \sim \text{Po}(bp_{AB}). Crucially, this is independent of the number of type A neighbours. The argument extends naturally to later generations, and the result is exactly a multitype Galton-Watson process as defined in the previous post.
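
In code, the limiting offspring law of a single vertex is just a pair of independent Poisson counts (a small sketch; the kernel and type proportions are passed in explicitly, and the names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def two_type_offspring(vertex_type, a, b, p):
    """Limiting offspring counts (type A, type B) of a vertex of a given type.

    `p` is the symmetric kernel, keyed by sorted type pairs; a type-A vertex has
    Po(a * p_AA) type-A children and Po(b * p_AB) type-B children, independently,
    and symmetrically for a type-B vertex.
    """
    lam_A = a * p[tuple(sorted((vertex_type, 'A')))]
    lam_B = b * p[tuple(sorted((vertex_type, 'B')))]
    return rng.poisson(lam_A), rng.poisson(lam_B)

# e.g. two_type_offspring('A', a=0.5, b=0.5,
#                         p={('A', 'A'): 2.0, ('A', 'B'): 1.0, ('B', 'B'): 0.5})
```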

My motivating model is the forest fire. Here, components get burned when they become large, and are reduced to singletons. It is therefore natural to talk about the ‘age’ of a vertex, that is, how long has elapsed since it was last burned. If we are interested in the forest fire process at some fixed time T>1, that is, once burning has started, then we can describe it as an inhomogeneous random graph, given that we know the ages of the vertices.

For, given two vertices with ages s and t, where WLOG s<t, we know that the older vertex could not have been joined to the other vertex between times T-t and T-s. Why? Well, if it had, then it too would have been burned at time T-s when the other vertex was burned. So the only possibility is that they might have been joined by an edge between times T-s and T. Since each edge arrives at rate 1/n, the probability that this happens is 1-e^{-s/n}\approx \frac{s}{n}. Indeed, in general the probability that two vertices of ages s and t are joined at time T is \frac{s\wedge t}{n}.

Again at fixed time T>1, the sequence of ages of the vertices converges weakly to some fixed distribution (which depends on T) as the number of vertices grows to infinity. We can then recover the graph structure by assigning ages according to this distribution, then growing the inhomogeneous random graph with the kernel as described. The question is: when we look for a local limit, how do we describe the offspring distribution?

Note that in the limit, components will be burned continuously, so the distribution of possible ages is continuous (with an atom at T for those vertices which have never been burned). So if we try to calculate the distribution of the number of neighbours of age exactly s, we are going to be doomed, because with probability 1 there is no vertex of age exactly s anywhere!

The answer is that the offspring distribution is given by a Poisson Random Measure. You can think of this as a Poisson Point Process where the intensity is non-constant. For example, let us consider how many neighbours we expect to have with ages in [s,s+ds]. Let us suppose the age of our root is t>s+ds for now. Assuming the density of ages, f(\cdot), is positive and continuous, the number of vertices with these ages in the system is roughly nf(s)ds, and so the number of neighbours with this property is roughly \text{Bin}(nf(s)ds,\frac{s}{n}). In particular, this does have a Poisson limit. We need to be careful about whether this Poisson limit is preserved by the approximation. In fact this is fine. Let’s assume WLOG that f is increasing at s. Then the number of age [s,s+ds] neighbours can be stochastically bounded between \text{Bin}(nf(s)ds,\frac{s}{n}) and \text{Bin}(nf(s+ds)ds,\frac{s+ds}{n}). As n grows, these converge in distribution to two Poisson random variables, and then we can let ds go to zero. Note that for full formalism, we may need to account for the large deviations event that the number of age s vertices in the system is noticeably different from its expectation. Whether this is necessary depends on whether the ages are assigned deterministically, or drawn IID from f.
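
A sketch of sampling this offspring law by discretising the age axis (the density f, the horizon T and the grid are my choices for illustration): over disjoint small intervals the counts are independent Poissons with mean f(s)(s\wedge t)ds, where t is the age of the root.

```python
import numpy as np

rng = np.random.default_rng(1)

def age_offspring(t_root, f, T, bins=2000):
    """Ages of the neighbours of a root of age t_root, in the local limit.

    The number of neighbours with ages in [s, s + ds] is Poisson with mean
    f(s) * min(s, t_root) * ds, independently over disjoint intervals, so we
    discretise [0, T] into bins and draw one Poisson count per bin.
    """
    edges = np.linspace(0.0, T, bins + 1)
    mids, ds = (edges[:-1] + edges[1:]) / 2, T / bins
    mean = f(mids) * np.minimum(mids, t_root) * ds
    counts = rng.poisson(mean)
    return np.repeat(mids, counts)

# Example with uniform ages on [0, 1]:
# ages = age_offspring(t_root=0.7, f=lambda s: np.ones_like(s), T=1.0)
```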

One important result to be drawn from this example is that the number of offspring from disjoint type sets, say [s_1,s_2], [t_1,t_2] are independent, for the same reason as in the two-type setting, namely that the underlying binomial variables are independent. We are, after all, testing different sets of vertices! The other is that the number of neighbours with ages in some range is Poisson. Notice that these two results are consistent. The number of neighbours with ages in the set [s_1,s_2]\cup [t_1,t_2] is given by the sum of two independent Poisson RVs, and hence is Poisson itself. The parameter of the sum RV is given by the sum of the original parameters.

These are essentially all the ingredients required for the definition of a Poisson Random Measure. Note that the set of offspring is a measure on the space of ages, or types. (Obviously, this isn’t a probability measure.) We take a general space E, with sigma algebra \mathcal{E}, and an underlying measure \mu on E. We want a distribution \nu for measures on E, such that for each Borel set A\in\mathcal{E}, \nu(A), which is random because \nu is, is distributed as \text{Po}(\mu(A)), and furthermore, for disjoint A,B\in\mathcal{E}, the random variables \nu(A),\nu(B) are independent.

If M=\mu(E)<\infty, then constructing such a random measure is not too hard using a thinning property. We know that \nu(E)\stackrel{d}{=}\text{Po}(M), and so if we sample a Poisson(M) number of RVs with distribution given by \frac{\mu(\cdot)}{M}, we get precisely the desired PRM. Proving this is the unique distribution with this property is best done using properties of the Laplace transform, which uniquely defines the law of a random measure in the same manner that the moment generating function defines the law of a random variable. Here the argument is a function, rather than a single variable for the MGF, reflecting the fact that the space of measures is a lot ‘bigger’ than the reals, where a random variable is supported. We can extend this construction for sigma-finite spaces, that is some countable union of finite spaces.
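
The finite-mass construction translates directly into a couple of lines (a sketch; `sample_point` stands for any sampler from the normalised measure \mu/M):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_prm(M, sample_point):
    """Poisson random measure with finite intensity measure mu, where M = mu(E).

    Draw N ~ Po(M) and then N i.i.d. atoms from mu / M; the returned list of
    atoms represents nu as a sum of delta masses, so nu(A) ~ Po(mu(A)) with
    independence over disjoint sets A.
    """
    N = rng.poisson(M)
    return [sample_point() for _ in range(N)]

# Example: mu = Lebesgue measure on [0, 3], so M = 3 and atoms are uniform on [0, 3].
atoms = sample_prm(3.0, lambda: rng.uniform(0.0, 3.0))
nu_01 = sum(1 for x in atoms if x <= 1.0)   # distributed Po(1)
```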

One nice result about Poisson random measures concerns the expectation of functions evaluated at such a random measure. Recall that some function f evaluated at the measure \sum \delta_{x_i} is given by \sum f(x_i). Then, subject to mild conditions on f, the expectation

\mathbb{E}\nu (f)=\mu(f).

Note that when f=1_A, this is precisely one of the defining properties of the PRM. So by a monotone class result, it is not surprising that this holds more generally. Anyway, I’m currently trying to use results like these to get some control over what the structure of these branching processes looks like, even when the type space is continuous, as in the random graph with specified ages.


Characterisations of Geometric Random Graphs

Continuing the LMS-EPSRC summer school on Random Graphs, Geometry and Asymptotic Structure, we’ve now had three of the five lectures by Mathew Penrose on Geometric Random Graphs.

The basic idea is that instead of viewing a graph entirely abstractly, we now place the vertices in the plane, or some other real space. In many network situations, we would expect connectivity to depend somehow on distance. Agents or sites which are close together might be considered more likely to have the sort of relationship indicated by being connected with an edge. In the model discussed in this course, this dependence is deterministic. We have some parameter r, and once we have chosen the location of all the vertices, we connect a pair of vertices if the distance between them is less than r.

For the purposes of this, we work in a compact space [0,1]^d, and we are interested in the limit as the number of vertices n grows to infinity. To avoid the graph getting too connected, as in the standard random graph model, we take r to be a decreasing function of n. Anyway, we place the n points into the unit hypercube uniformly at random, and then the edges are specified by the adjacency rule above. In general, because r_n will be o(1), we won’t have to worry too much about boundary effects. The proportion of vertices within r_n of the boundary of the cube will be o(1). For some results, this is a genuine problem, in which case it may be easier to work on the torus.
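
For concreteness, here is a minimal sketch of the model just described, with r_n chosen on the thermodynamic scale nr_n^d = const (the parameter values are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(3)

def geometric_graph(n, r, d=2):
    """Random geometric graph: n uniform points in [0,1]^d, edge iff distance < r."""
    pts = rng.random((n, d))
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    i, j = np.where(np.triu(dists < r, k=1))          # each pair counted once
    return pts, list(zip(i.tolist(), j.tolist()))

# Thermodynamic scaling n * r_n^d = c:
n, c, d = 500, 2.0, 2
pts, edges = geometric_graph(n, (c / n) ** (1.0 / d), d)
```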

In G(n,p), the order of np in the limit determines the qualitative structure of the graph. This is the expected degree of a given fixed vertex. In this geometric model, the relevant parameter is nr_n^d, where d is the dimension of the hypercube. If this parameter tends to 0, we say the graph is sparse, and dense if it tends to infinity. The intermediate case is called a thermodynamic limit. Note that the definition of sparse here is slightly different from G(n,p).

Much of the content of the first three lectures has been verifying that the distributions of various quantities in the graph, for example the total number of edges, are asymptotically Poisson. Although sometimes arguments are applicable over a broad spectrum, we also sometimes have to use different calculations for different scaling windows. For example, it is possible to show convergence to a Poisson distribution for the number of edges in the sparse case, from which we get an asymptotic normal approximation almost for free. In the denser regimes, the argument is somewhat more technical, with some substantial moment calculations.

A useful tool in these calculations is a set of bounds derived via Stein’s method for sums of ‘almost independent’ random variables. For example, the presence or absence of an edge between two pairs of vertices are independent in this setting if the pairs are disjoint, and the dependence is still only mild if they share a vertex. An effective description is via a so-called dependency graph, where we view the random variables as the vertices of a graph, with an edge between them if there is some dependence. This description doesn’t have any power in itself, but it does provide a concise notation for what would otherwise be very complicated, and we are able to show versions of the Poisson approximation (binomials converge to Poisson) and the CLT via these bounds that are exactly as required for this purpose.

In particular, we are able to show that if E_n is the total number of edges, under a broad set of scaling regimes, if \lambda_n is the expected total number of edges, then d_{TV}(E_n,\mathrm{Po}(\lambda_n))\rightarrow 0, as n grows. This convergence in total variation distance is as strong a result as one could hope for, and when the sequence of \lambda_n is O(1), we can derive a normal approximation as well.

At this point it is worth discussing an alternative specification of the model. Recall that for a standard homogeneous random graph, we have the choice of G(n,m) and G(n,p) as definitions. G(n,m) is the finer measure, and G(n,p) can be viewed as a weighted mix of G(n,m). We can’t replicate this directly in the geometric setting because the edges and non-edges are a deterministic function of the vertex locations. What we can randomise is the number of vertices. Since we are placing the vertices uniformly at random, it makes sense to consider as an alternative a Poisson Point Process with intensity n. The number of vertices we get overall will be distributed as Po(n), which is concentrated near n, in the same manner as G(n,c/n).

As in G(n,p), this is a less basic model because it is a mixture of the fixed-vertex models. Let’s see how we would go about extending the total variation convergence result to this slightly different setting without requiring a more general version of the Poisson Approximation Lemma. To avoid having to define everything again, we add a ‘ to indicate that we are talking about the Poisson Point Process case. Writing d(.,.) for total variation distance, the result we have is:

\lim_{n\rightarrow\infty} d(E_n,\mathrm{Po}(\lambda_n))=0.

We want to show that

\lim_{n\rightarrow\infty}d(E_n',\mathrm{Po}(\lambda_n'))=0,

which we can decompose in terms of expectations in the original model by conditioning on N_n

\leq \lim_{n\rightarrow\infty}\mathbb{E}\Big[\mathbb{E}[d(E_{N_n},\mathrm{Po}(\lambda_n')) | N_n]\Big],

where the outer expectation is over N_n. The observation here is that the number of points given by the Poisson process induces a measure on distributions, the overwhelming majority of which look quite like Poisson distributions with parameter n. The reason we have an inequality is that we are applying the triangle inequality in the sum giving total variation distance:

d(X,Y)=\sum_{k\geq 0}|\mathbb{P}(X=k)-\mathbb{P}(Y=k)|.

From this, we use the triangle inequality again:

\lim_{n\rightarrow\infty} \mathbb{E}\Big[\mathbb{E}[d(E_{N_n},\mathrm{Po}(\lambda_{N_n})) | N_n]\Big]

+\lim_{n\rightarrow\infty}\mathbb{E}\Big[\mathbb{E}[d(\mathrm{Po}(\lambda_{N_n}),\mathrm{Po}(\lambda_n')) | N_n]\Big].

Then, by a large deviations argument, we have that for any \epsilon>0, \mathbb{P}(|N_n-n|\geq \epsilon n)\rightarrow 0 exponentially in n. Also, total variation distance is, by definition, bounded above by 1. In the first term, the inner conditioning on N_n is irrelevant, and we have that E_{N_n} converges to the Poisson distribution for any fixed N_n\in (n(1-\epsilon),n(1+\epsilon)). Furthermore, we showed in the proof of the non-PPP result that this convergence is uniform in this interval. (This is not surprising – the upper bound is some well-behaved polynomial in 1/n.) So with probability 1-e^{-\Theta(n)}, N_n is in the region where this convergence happens, and elsewhere the expected TV distance is bounded above by 1, so the overall expectation tends to 0. With a similar LD argument, for the second term it suffices to prove that when \lambda\rightarrow\mu, we must have d(\mathrm{Po}(\lambda),\mathrm{Po}(\mu))\rightarrow 0. This is ‘obviously’ true. Formally, it is probably easiest to couple the distributions \mathrm{Bin}(n,\lambda/n),\mathrm{Bin}(n,\mu/n) in the obvious way, and carry the convergence of TV distance as the parameter varies through the convergence in n.
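
The continuity of \lambda\mapsto \mathrm{Po}(\lambda) in total variation used in this last step is easy to check numerically (a sketch, using the convention for d(\cdot,\cdot) above, i.e. without a factor of 1/2):

```python
from math import exp

def d_poisson(lam, mu, kmax=500):
    """d(Po(lam), Po(mu)) = sum_k |P(X = k) - P(Y = k)|, truncated at kmax."""
    p, q, total = exp(-lam), exp(-mu), 0.0
    for k in range(kmax + 1):
        total += abs(p - q)
        p *= lam / (k + 1)     # Poisson pmf updated recursively to avoid overflow
        q *= mu / (k + 1)
    return total

# d_poisson(10.0, 10.5) is already small, and d_poisson(10.0, 10.05) is smaller still.
```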

That all sounded a little bit painful, but is really just the obvious thing to do with each term – it’s only the language that’s long-winded!

Anyway, I’m looking forward to seeing how the course develops. In particular, when you split the space into small blocks, the connectivity properties resemble those of (site) percolation, so I wonder whether there will be concrete parallels. Also, after reading about some recent results concerning the metric structure of the critical components in the standard random graph process, it will be interesting to see how these compare to the limit of a random graph process which comes equipped with metric structure for free!

Random Interlacements

In this post, I want to talk about another recently-introduced model that’s generating a lot of interest in probability theory, Sznitman’s model of random interlacements. We also want to see, at least heuristically, how this relates to more familiar models.

We fix our attention on a lattice, which we assume to be \mathbb{Z}^d. We are interested in the union of an infinite collection of simple random walks on the lattice. The most sensible thing to consider is not a collection of random walks started from a random set of starting points, but rather a family of trajectories, that is, doubly-infinite random walk paths defined on times (-\infty,\infty). We will want this family to have some obvious properties, such as translation invariance, in order to make analysis possible and ideally obtain some 0-1 laws. The natural thing to do is then to choose the trajectories through a Poisson Point Process. The tricky part will be finding an intensity measure that has all the properties we want, and gives trajectories that genuinely do look like SRWs, and, most importantly, have a union that is neither too sparse nor too dense. For example, it wouldn’t be very interesting if with high probability every point appeared in the union…

For reasons we will mention shortly, we are interested in the complement of the union of the trajectories. We call this the vacant set. We will find an intensity which we can freely scale by some parameter u\in\mathbb{R}^+, which will give us a threshold for the complement to contain an infinite component. This is in the same sense as the phase transition for Bernoulli percolation. That is, there is a critical value u^* say, such that for u<u^* the vacant set contains an infinite component (or percolates) almost surely, and almost surely it does not when u>u^*. A later result of Teixeira shows that, as in percolation, this infinite component is unique.

Let us first recall why it is not interesting to consider this process for d=1 or 2. On \mathbb{Z}, a single SRW almost surely hits every integer point, since it visits arbitrarily large and arbitrarily small integers. For d=2, the SRW is recurrent, and so consists of a countably infinite sequence of excursions from (0,0). Note that the probability that an excursion from 0 hits some point (x,y) is non-zero, as it is at least 2^{-2(|x|+|y|)} for example. Therefore, the SRW hits (x,y) almost surely, and so almost surely it hits every point.

Therefore it is only for d\geq 3 that we start seeing interesting effects. It is worth mentioning at this point some of the problems that motivated considering this model. First is the disconnection time of a discrete cylinder by a simple random walk. For example, Sznitman considers the random walk on \mathbb{Z}\times (\mathbb{Z}/N\mathbb{Z})^d. Obviously, it is more interesting to consider how long it takes a (1-dimensional in the natural sense) path to disconnect a d\geq 3 dimensional set than a 2-dimensional one, as the latter is given just by the first time the path self-intersects.

More generally, we might be interested in random walks up to some time an order of magnitude smaller than the cover time. Recall the cover time is the time to hit each point of the set. For example, for the random walk on the d-dimensional torus (\mathbb{Z}/N\mathbb{Z})^d the cover time (as discussed in Markov Chains and Mixing Times posts) is N^d \log N, but the log N represents in some sense only the ‘final few’ vertices. So we should ask what the set of unhit vertices looks like at time N^d. And it turns out that for large N, the structure of this vacant set is related to the vacant set in the random interlacement model, in a local sense.

Anyway, the main question to ask is: what should the intensity measure be?

We patch it together locally. Start with the observation that transience of the random walk means almost surely a trajectory spends only finitely many steps in a fixed finite set K. So we index all the trajectories which hit K by the first time they hit K. Given that a trajectory hits K, it is clear what the conditional distribution of this hitting point should be. Recall that SRW on \mathbb{Z}^d is reversible, so we consider the SRW backwards from this hitting time. Then the probability that the hitting point is x (on the boundary of K) is proportional to the probability that a SRW started from x goes to infinity without hitting K again. So once we’ve settled on the distribution of the hitting point x, it is clear how to construct all the trajectories through K. We pick x on the boundary of K according to this distribution, and take the union of an SRW starting from x conditioned not to hit K again, and an SRW starting from x with no conditioning. These correspond to the trajectory before and after the hitting time, respectively.

In fact, it turns out that this is enough. Suppose we demand that the probability that the hitting point is x is equal to the probability that a SRW started from x goes to infinity without hitting K again (rather than merely proportional to it). Sznitman proves that there is a unique measure on the set of trajectories that restricts to this measure for every choice of K. Furthermore, the Poisson Point Process with the globally-defined intensity unsurprisingly restricts to a PPP with the intensity specific to K.

We have not so far said anything about trajectories which miss this set K. Note that under any sensible intensity with the translation-invariance property, the intensity measure of the trajectories which hit K must be positive, since we can cover \mathbb{Z}^d with countably many copies of K. So the number of trajectories hitting K is a Poisson random variable.

Recall how we defined the probability that the hitting point of K was some point x on the boundary. The sum of these probabilities is called the capacity of K. It follows that this is the parameter of the Poisson random variable. That is, the probability that no trajectory passes through K is:

\exp(-u\mathrm{cap}(K)),

recalling that u is the free parameter in the intensity. This is the most convenient framework through which to start analysing the probability that there is an infinite connected set which is hit by no trajectory.
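
A crude Monte Carlo sketch of this formula in \mathbb{Z}^3: estimate each escape probability by running a SRW for a long but finite number of steps (a proxy for ‘never returns’, reasonable since the walk is transient), sum them to get cap(K), and plug into the exponential. The set K, the cutoffs and all names here are my own choices:

```python
import math
import random

def escape_prob(x, K, steps=1000, trials=500, dim=3):
    """Estimate P(SRW from x in Z^dim does not return to K within `steps` steps)."""
    Kset, hits = set(K), 0
    for _ in range(trials):
        pos, returned = list(x), False
        for _ in range(steps):
            i = random.randrange(dim)
            pos[i] += random.choice((-1, 1))
            if tuple(pos) in Kset:
                returned = True
                break
        hits += not returned
    return hits / trials

K = [(i, j, k) for i in range(2) for j in range(2) for k in range(2)]  # a 2x2x2 box
cap_K = sum(escape_prob(x, K) for x in K)          # Monte Carlo capacity estimate
u = 1.0
prob_K_vacant = math.exp(-u * cap_K)               # P(no trajectory hits K)
```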

We conclude by summarising Sznitman’s Remark 1.2, explaining why it is preferable to work with the space of trajectories rather than the space of paths. Note that if we are working with paths, and we want translation invariance, then this restricts to translation invariance of the distribution of starting points as well, so it is in fact a stronger condition. Note then that either the intensity of starting at 0 is zero, in which case there are no trajectories at all, or it is positive, in which case the set of starting points looks like Bernoulli site percolation.

However, the results about capacity would still hold if there were a measure that restricted satisfactorily. And so the capacity of K would still be the measure of paths hitting K, which would be at least the probability that the path was started in K. But by translation invariance, this grows linearly with |K|. But capacity grows at most as fast as the size of the set of boundary points of K, which will be an order of magnitude smaller when K is, for example, a large ball.

REFERENCES

This was mainly based on

Sznitman – Vacant Set of Random Interlacements and Percolation (0704.2560)

Also

Sznitman – Random Walks on Discrete Cylinders and Random Interlacements (0805.4516)

Teixeira – On the Uniqueness of the Infinite Cluster of the Vacant Set of Random Interlacements (0805.4106)

and some useful slides by the same author (teixeira.pdf)

Brownian Excursions and Local Time

I’ve been spending a fair bit of time this week reading and thinking about the limits of various combinatorial objects, in particular letting the number of vertices tend to \infty in models of random graphs with various constraints. Perhaps predictably, like so many continuous stochastic objects, yet again the limiting ‘things’ turn out to be closely linked to Brownian Motion. As a result, I’ve ended up reading a bit about the notion of local time, and thought it was sufficiently elegant even by itself to justify a quick post.

Local Time

In general, we might be interested in calculating a stochastic integral like

\int_0^t f(B_s)ds.

Note that, except in some highly non-interesting cases, this is a random variable. Our high school understanding of Riemann integration encourages thinking of this as a ‘pathwise’ integral along the path evolving in time. But of course, that’s orthogonal to the approach we start thinking about when we are introduced to the Lebesgue integral. There we think about potential values of the integrand, and weight their contribution by the (Lebesgue) measure of the subset of the domain in which they appear.

Can we do the same for the stochastic integral? That is, can we find a measure which records how long the Brownian Motion spends at a point x? This measure will not be deterministic – effectively the stochastic behaviour of BM will be encoded through the measure rather than the argument of the function.

The answer is yes, and the measure in question is referred to as local time. More formally, we want

\int_0^t f(B_s)ds=\int_\mathbb{R}f(x)L(t,x)dx. (*)

where the local time L(t,x) is a random process, increasing for fixed x. Informally, one could take

\partial_t L(t,x) \propto 1(B_t=x)

but clearly in practice that won’t do at all for a definition, and so instead we use (*). In the usual way, if we want (*) to hold for all reasonably nice functions f, it suffices to check it for the indicator functions of Borel sets. L(t,\cdot) is therefore often referred to as the occupation density, while L(\cdot,x) is the local time at x.
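
A one-path sanity check of (*), approximating L(t,x) by the empirical occupation density of a discretised Brownian path (purely illustrative; the test function f, the step size and the binning are my choices):

```python
import numpy as np

rng = np.random.default_rng(4)

def occupation_density_check(t=1.0, dt=1e-4, bins=200, f=np.cos):
    """Compare both sides of (*) along one simulated Brownian path.

    L(t, x) is approximated by the empirical occupation density: time spent in
    each spatial bin divided by the bin width.  Both sides are computed from
    the same discretised path, so they should nearly agree.
    """
    n = int(t / dt)
    B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
    lhs = np.sum(f(B)) * dt                       # \int_0^t f(B_s) ds
    counts, edges = np.histogram(B, bins=bins)
    dx = edges[1] - edges[0]
    L = counts * dt / dx                          # occupation density per bin
    mids = (edges[:-1] + edges[1:]) / 2
    rhs = np.sum(f(mids) * L) * dx                # \int f(x) L(t, x) dx
    return lhs, rhs
```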

Local Time as natural index for Excursions

An excursion, for example of Brownian Motion, is a segment of the path that has zero value only at its endpoints. Alternatively, it is a maximal open interval of time such that the path is away from 0. We want to specify the measure on these excursions. Here are some obvious difficulties.

By Blumenthal’s 0-1 law, BM started from zero hits zero infinitely often in any time interval [0,\epsilon], so in the same way that there is no first positive rational, there is no first excursion. We could pick the excursion in progress at a fixed time t, but this is little better. Firstly, the resultant measure is size-biased by the length of the excursion, and more importantly, the proximity of t to the origin may be significant unless we know of some memorylessness type of property to excursions.

Local time allows us to solve these problems. We restrict attention to L_t:=L(t,0), the occupation density of 0. Let’s think about some advantages of indexing excursions by local time rather than by the start time:

  • The key observation is that local time remains constant on excursions. That is, if we are avoiding 0, the local time at 0 cannot grow because the BM spends no time there!
  • If we use start time, then we have a countably infinite number of small excursions accumulating close to 0, i.e. with very small start time. However, local time increases rapidly when there are lots of small excursions. Remember, lots of small excursions means that the BM hits 0 lots of times. So local time grows quickly through the annoying bits, and effectively provides a size-biasing for excursions that allows us to ignore the effects of the ‘Blumenthal excursions’ near time 0.
  • When indexed by local time, excursions might be Markovian, in the sense that subsequent excursions (and in particular their lengths) are independent of past excursions. This is certainly not the case if you index by start time! If an excursion starts at time t and has length u, then the ‘next’ excursion, in as much as that makes sense, must surely start at time t+u.

We know there are only countably many excursions, hence there are only countably many local times which pertain to an excursion. This motivates considering the set of excursions as a Poisson Point Process on local time. Once you’ve had this idea, everything follows quite nicely. Working out the distribution of the constant rate (which is a measure on the set of excursions) remains, but essentially we now have a sensible framework for tracking the process of excursions, and from this we can reconstruct the original Brownian Motion.

SLE Revision 4: The Gaussian Free Field and SLE4

I couldn’t resist breaking the order of my revision notes in order that the title might be self-referential. Anyway, it’s the night before my exam on Conformal Invariance and Randomness, and I’m practising writing this in case of an essay question about the Gaussian Free Field and its relation to the SLE objects discussed in the course.

What is a Gaussian Free Field?

The most natural definition is too technical for this context. Instead, recall that we could very informally consider a Poisson random measure to have the form of a series of Poisson random variables placed at each point in the domain, weighted infinitesimally so that the integrals over an area give a Poisson random variable with mean proportional to the measure of the area, and so that different areas are independent. Here we do a similar thing only for infinitesimal centred Gaussians. We have to specify the covariance structure.

We define the Green’s function on a domain D, which has a resonance with PDE theory, by:

G_D(x,y)=\lim_{\epsilon\rightarrow 0}\mathbb{E}[\text{time spent in }B(y,\epsilon)\text{ by BM started at }x\text{ stopped at }T_D]

We want the covariance structure of the hypothetical infinitesimal Gaussians to be given by \mathbb{E}(g(x)g(y))=G_D(x,y). So formally, we define (\Gamma(A),A\subset D) for open A by requiring (\Gamma(A_1),\ldots,\Gamma(A_n)) to be a centred Gaussian random vector with covariances \mathbb{E}(\Gamma(A_i)\Gamma(A_j))=\int_{A_i\times A_j}G_D(x,y)\,dx\,dy.
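
On a finite grid this prescription becomes concrete: the discrete Gaussian free field with zero boundary values has covariance given by the inverse Dirichlet Laplacian, i.e. the Green’s function of SRW killed on leaving the box. A sketch (constants and normalisation glossed over, as in the prose; the grid size is my choice):

```python
import numpy as np

rng = np.random.default_rng(5)

def discrete_gff(m=30):
    """Sample a discrete Gaussian free field on an m x m grid, zero on the boundary.

    The covariance is the inverse of the Dirichlet graph Laplacian.  Returns an
    (m+2) x (m+2) array including the zero boundary layer.
    """
    N = m * m
    idx = lambda i, j: i * m + j
    Lap = np.zeros((N, N))
    for i in range(m):
        for j in range(m):
            Lap[idx(i, j), idx(i, j)] = 4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < m and 0 <= nj < m:
                    Lap[idx(i, j), idx(ni, nj)] = -1.0
    R = np.linalg.cholesky(Lap)                        # Lap = R @ R.T
    h = np.linalg.solve(R.T, rng.standard_normal(N))   # covariance = Lap^{-1}
    field = np.zeros((m + 2, m + 2))
    field[1:-1, 1:-1] = h.reshape(m, m)
    return field
```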

The good news is that we have a nice expression G_U(0,x)=\log\frac{1}{|x|}, and the Green’s functions are conformally invariant in the sense that G_{\phi(D)}(\phi(x),\phi(y))=G_D(x,y), following directly from the conformal invariance of Brownian Motion.

The bad news is that the existence is not clear. The motivation for this is the following though. We have a so-called excursion measure for BMs in a domain D. There isn’t time to discuss this now: it is infinite, and invariant under translations of the boundary (assuming the boundary is \mathbb{R}\subset \bar{\mathbb{H}}, which is fine after taking a conformal map). Then take a Poisson Point Process on the set of Brownian excursions with this measure. Now define a function f on the boundary \partial D of the domain, and define \Gamma_f(A) to be the sum of the values of f at the starting points of BMs in the PPP passing through A, weighted by the time spent in A. We have a universality relation given by the central limit theorem: if we define h to be (in a point limit) the expected value of this variable, and we take n independent copies, we have:

\frac{1}{\sqrt{n}}\left[\Gamma_f^1(A)+\ldots+\Gamma_f^n(A)-n\int_Ah(x)dx\right]\rightarrow\Gamma(A)

where this limiting random variable is Gaussian.

For now though, we assume existence without full proof.

SLE_4

We consider chordal SLE_\kappa, which has the form of a curve \gamma[0,\infty] from 0 to \infty in \mathbb{H}. With g_t the regularising maps as normal, consider \tilde{X}_t=X_t-W_t:=g_t(x)-\sqrt{\kappa}\beta_t for some fixed x. We are interested in the evolution of the argument of \tilde{X}_t. Note that conditional on the (almost sure for \kappa\le 4) event that x does not lie on the curve, \arg\tilde{X}_t will converge either to 0 or \pi almost surely, depending on whether the curve passes to the left or the right (respectively) of x.

By Loewner’s DE for the upper half-plane and Ito’s formula:

d\tilde{X}_t=\frac{2\,dt}{\tilde{X}_t}+\sqrt{\kappa}\,d\beta_t,\quad d\log\tilde{X}_t=\left(2-\frac{\kappa}{2}\right)\frac{dt}{\tilde{X}_t^2}+\frac{\sqrt{\kappa}}{\tilde{X}_t}d\beta_t
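
For completeness, the Itô step giving the second equation in this display is just (a sketch, keeping the sign conventions of the line above, and using d\langle\tilde{X}\rangle_t=\kappa\,dt):

d\log\tilde{X}_t=\frac{d\tilde{X}_t}{\tilde{X}_t}-\frac{1}{2}\frac{d\langle\tilde{X}\rangle_t}{\tilde{X}_t^2}=\frac{2\,dt}{\tilde{X}_t^2}+\frac{\sqrt{\kappa}}{\tilde{X}_t}d\beta_t-\frac{\kappa}{2}\frac{dt}{\tilde{X}_t^2}=\left(2-\frac{\kappa}{2}\right)\frac{dt}{\tilde{X}_t^2}+\frac{\sqrt{\kappa}}{\tilde{X}_t}d\beta_t.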

So, when \kappa=4, the dt term vanishes, which gives that \log\tilde{X}_t is a local martingale, and so

d\theta_t=\Im\left(\frac{2}{\tilde{X}_t}\right)d\beta_t

is a true martingale since it is bounded. Note that

\theta_t=\mathbb{E}[\pi\,\mathbf{1}(x\text{ on right of }\gamma)\,|\,\mathcal{F}_t]

Note that also:

\mathbb{P}(\text{BM started at }x\text{ hits }\gamma[0,t]\cup\mathbb{R}\text{ to the right of }\gamma(t)\,|\,\gamma[0,t])=\frac{\theta_t}{\pi}.

SLE_4 and the Gaussian Free Field on H

We will show that this chordal SLE_4 induces a conformal Markov type of property in Gaussian Free Fields constructed on the slit-domain. Precisely, we will show that if \Gamma_T is a GFF on H_T=\mathbb{H}\backslash\gamma[0,T], then \Gamma_T+ch_T(\cdot)=\Gamma_0+ch_0(\cdot), where c is a constant to be determined, and h_t(x)=\theta_t(x) in keeping with the lecturer’s notation!

It will suffice to check that for all fixed p with compact support \Gamma_T(p)+c(h_T(p)-h_0(p)) is a centred Gaussian with variance \int dxdyG_H(x,y)p(x)p(y).

First, applying Ito and conformal invariance of the Green’s functions under the maps g_t,

dG_{H_t}(x,y)=cd[h(x),h(y)]_t

The details are not particularly illuminating, but exploit the fact that the Green’s function on \mathbb{H} has a reasonably nice form \log\left|\frac{x-\bar{y}}{x-y}\right|. We are also being extremely lax with constants, but we have plenty of freedom there.

After applying Ito and some (for now unjustified) Fubini:

dh_t(p)=\left(\int c\,p(x)\,\Im\Big(\frac{1}{\tilde{X}_t(x)}\Big)dx\right)d\beta_t

and so, as we would have suspected (since h_t(x) was), this is a local martingale. We now deploy Dubins-Schwarz:

h_T(p)-h_0(p)\stackrel{d}{=}B_{\sigma(T)} for B an independent BM, and

\sigma(T)=\int_0^T\left(\int c\,p(x)\,\Im\Big(\frac{1}{\tilde{X}_t(x)}\Big)dx\right)^2dt

So conditional on (h_t(p),\,t\in[0,T]), we want to make up the difference to \Gamma_0. Add to h_T(p)-h_0(p) an independent random variable distributed as N(0,s-\sigma(T)), where

s=\int p(x)p(y)G(x,y)\,dx\,dy=\mathrm{Var}\,\Gamma_0(p).

Then

s-\sigma(T)=\int p(x)p(y)\left[G(x,y)-c\int_0^T\Im\Big(\frac{1}{\tilde{X}_t(x)}\Big)\Im\Big(\frac{1}{\tilde{X}_t(y)}\Big)dt\right]dx\,dy=\int p(x)p(y)G_{H_T}(x,y)\,dx\,dy, as desired.

Why is this important?

This is important, or at least interesting, because we can use it to reverse engineer the SLE. Informally, we let T\rightarrow\infty in the previous result. This states that taking a GFF in the domain left by removing the whole of the SLE curve (whatever that means) and then adding \pi at points on the left of the curve (this addition being the limit \lim_T h_T) is the same as a normal GFF on the upper half plane added to the argument function. It is reasonable to conjecture that a GFF on a non-connected domain has the same structure as taking independent GFFs in each component, and this gives an interesting invariance condition on GFFs. It can also be observed (Schramm-Sheffield) that SLE_4 arises by reversing the argument – take an appropriately conditioned GFF on \mathbb{H} and look for the interface between it being ‘large’ and ‘small’ (obviously this is a ludicrous simplification). This interface is then, under a suitable limit, SLE_4.