# Convex ordering on Galton-Watson trees

This blog was recently revived, via a post about convex ordering and its relevance to the problem of sampling with and without replacement that forms part of the potpourri of related results all sometimes referred to as Hoeffding’s inequality.

The previous post had been lying almost-complete but dormant for over two years. I revisited it because of a short but open-ended question about trees posed in our research group meeting by Serte Donderwinkel, one of our high-flying doctoral students.

Simplified Question:

For a Galton-Watson tree, can one obtain upper bounds in probability on the height of the tree, uniformly across all offspring distributions with mean $\mu$?

Note that in this setting, it is helpful to have in mind the classical notation $(Z_0,Z_1,Z_2,\ldots)$ for a Galton-Watson process, where typically $Z_0=1$, and $Z_{n+1}$ is the sum of $Z_n$ IID copies of the offspring distribution. Then we have

$\mathbb{P}(\mathrm{height}(\mathcal{T}) < k) = \mathbb{P}(Z_k=0).$

1) Subcritical case. When $\mu<1$, we certainly have $\mathbb{P}(Z_k>0)\le \mathbb{E}[Z_k]=\mu^k$.

Furthermore, if we’re studying all such offspring distributions, this is the best possible upper bound, by considering the offspring distribution given by $Z_1=1$ with probability $\mu$ and zero otherwise.

2) Critical and supercritical cases. When $\mu\ge 1$, it is possible that the height is infinite with probability one: consider, for example, the offspring distribution identically equal to one.

So neither case is especially interesting for now.
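Still, the subcritical bound is easy to check numerically. Here is a quick Monte Carlo sketch, with an arbitrarily chosen offspring distribution on $\{0,2\}$ of mean 0.8 (the helper name `survives_to` is mine):

```python
import random

def survives_to(k, offspring):
    """Simulate k generations of a Galton-Watson process from Z_0 = 1,
    returning True iff Z_k > 0."""
    z = 1
    for _ in range(k):
        if z == 0:
            return False
        z = sum(offspring() for _ in range(z))   # Z_{n+1}: sum of Z_n IID copies
    return z > 0

random.seed(0)
mu, k, trials = 0.8, 10, 20000
offspring = lambda: 2 if random.random() < 0.4 else 0   # mean 0.8, subcritical

est = sum(survives_to(k, offspring) for _ in range(trials)) / trials
# Markov: P(Z_k > 0) <= E[Z_k] = mu^k (about 0.107 here); est should sit below.
assert 0.0 < est <= mu ** k
```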

Refined question:

What if instead we aren’t trying to obtain a bound uniformly across all offspring distributions with given mean $\mu$, but instead across a subset $\mathcal{X}$ of these distributions? How do we determine which distribution in $\mathcal{X}$ maximises the probability of reaching height k?

This is the question Serte was asking in our group meeting, in the setting where $\mu=1+o(1)$ and the height k has a particular scaling. Also, as far as I understand, the approach outlined in this post didn’t provide strong enough bounds in this particular context. Happily, Serte has recently tied up all the corners of this project concerning the supercritical Galton-Watson forest, and interested readers can find her preprint here on the arXiv.

Nonetheless the interpretation via convex ordering feels perfect for a blog post, rather than being lost forever.

Convex ordering for offspring distributions

The main observation is that if two offspring distributions X and Y satisfy $X\le_{cx} Y$ (which, recall, means that the means are the same but X is more concentrated), then a number of distributions associated with the Galton-Watson trees for X and Y also satisfy convex ordering relations.

As a warm-up, and because it was the original genesis, we first study heights. We will use the notation

$(Z_0^X,Z_1^X,Z_2^X,\ldots), (Z_0^Y,Z_1^Y,Z_2^Y,\ldots),$

to denote the two Galton-Watson processes. We shall compare $\mathbb{P}(Z^X_k=0)$ and $\mathbb{P}(Z^Y_k=0)$. If we write $\delta_0(\cdot)$ for the function defined on the non-negative integers such that

$\delta_0(0)=1,\quad \delta_0(n)=0,\,n\ge 1$,

it holds that $\delta_0(\cdot)$ is convex. In particular, if $X\le_{cx}Y$, then $\mathbb{E}[\delta_0(X)]\le \mathbb{E}[\delta_0(Y)]$, which exactly says that

$\mathbb{P}(Z^X_1 = 0)\le \mathbb{P}(Z^Y_1 = 0).$

We can then prove that $\mathbb{P}(Z^X_k=0)\le \mathbb{P}(Z^Y_k=0)$ by induction on $k\ge 1$. Note that $\mathbb{P}(Z^X_k=0)^n$ is a convex function of n, regardless of the value of this probability, and so we have

$\mathbb{P}(Z^X_{k+1}=0) = \mathbb{E}\left[ (\mathbb{P}(Z^X_k=0))^X\right] \le \mathbb{E}\left[(\mathbb{P}(Z^X_k=0))^Y\right].$

By the induction hypothesis, this final quantity is at most

$\mathbb{E}\left[(\mathbb{P}(Z^Y_k=0))^Y\right] = \mathbb{P}(Z^Y_{k+1}=0)$.

In conclusion, we have shown that $\mathbb{P}(Z^X_k=0)\le \mathbb{P}(Z^Y_k=0)$ holds for all k, and thus

$\mathrm{height}(\mathcal{T}^X) \ge_{st} \mathrm{height}(\mathcal{T}^Y).$

To return to the original context, suppose we have a large class of offspring distributions $\mathcal{Y}$ and a subclass $\mathcal{X}\subseteq \mathcal{Y}$ such that for all $Y\in\mathcal{Y}$, there exists $X\in \mathcal{X}$ such that $X\le_{cx} Y$. Then one can obtain uniform bounds on the heights of Galton-Watson trees with offspring distributions drawn from $\mathcal{Y}$ by checking those generated from distributions in $\mathcal{X}$ (which is particularly convenient if, for example, $\mathcal{X}$ is finite).
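The recursion $\mathbb{P}(Z_{k+1}=0)=f(\mathbb{P}(Z_k=0))$, with f the offspring generating function, makes this ordering easy to check numerically. A sketch for one illustrative pair $X\le_{cx}Y$ of mean-1 distributions (the function name is mine):

```python
def prob_extinct_by(k, pmf):
    """P(Z_k = 0) for a Galton-Watson process whose offspring pmf is
    pmf[n] = P(offspring = n), via q_{j+1} = f(q_j) with f the pgf."""
    q = 0.0   # q_0 = P(Z_0 = 0) = 0, since Z_0 = 1
    for _ in range(k):
        q = sum(p * q ** n for n, p in enumerate(pmf))
    return q

# X = 1 a.s. and Y uniform on {0, 2}: both have mean 1, and X <=_cx Y.
pmf_X = [0.0, 1.0]
pmf_Y = [0.5, 0.0, 0.5]

for k in range(1, 25):
    # P(Z^X_k = 0) <= P(Z^Y_k = 0): the X-tree's height dominates.
    assert prob_extinct_by(k, pmf_X) <= prob_extinct_by(k, pmf_Y)
```

Here the X-tree is the infinite path, so $\mathbb{P}(Z^X_k=0)=0$ for every k, while the Y-tree dies out by generation k with positive probability.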

Convex ordering of generation sizes

The above argument solves the original problem, but brushes over the natural question: is it true that $Z^X_k \le_{cx} Z^Y_k$?

The answer is yes. Here’s a proof.

This follows from the following general statement:

Lemma 1: Suppose $X\le_{cx} Y$ are non-negative valued RVs and the non-negative integer valued RVs $M,N$ also satisfy $M \le_{cx} N$. Then

$X_1+\ldots+X_M \le_{cx} Y_1+\ldots+Y_N,$

where $X_1,X_2,\ldots$ are IID copies of X and, independently, $Y_1,Y_2,\ldots$ are IID copies of Y.

Lemma 2: Suppose $W_1\le_{cx}Z_1$ and $W_2\le_{cx} Z_2$, and the four random variables are independent. Then $W_1+W_2\le_{cx}Z_1+Z_2$.

Proof of Lemma 2: First, note that for any random variable Z and convex function f

$\mathbb{E}\left[f(Z+x)\right]$ is a convex function of x.

(Indeed, this holds since “f(z+x) is convex” holds for every z, and any definition of convex will pass to the expectation.)

Now we can attack the lemma directly, we may write

$\mathbb{E}\left[ f(W_1+W_2)\right]=\mathbb{E}\left[\, \mathbb{E}[f(W_1+W_2) \mid W_2 ] \,\right] \le \mathbb{E}\left[\, \mathbb{E}[f(W_1+Z_2)\mid Z_2 ] \, \right].$

But then for any $z_2$, we know $f(\cdot+z_2)$ is convex, so $\mathbb{E}[f(W_1+z_2)]\le \mathbb{E}[f(Z_1+z_2)]$, and it follows that

$\mathbb{E}\left[ f(W_1+W_2)\right]\le \mathbb{E} \left[ f(Z_1+Z_2)\right],$

which proves the lemma.

Corollary 3: When $W_1,\ldots,W_m, Z_1,\ldots,Z_m$ are independent, and satisfy $W_i \le_{cx} Z_i$, then we have $W_1+\ldots+W_m\le_{cx} Z_1+\ldots+Z_m$.
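For finite discrete distributions, Lemma 2 and Corollary 3 can be verified numerically via the standard characterisation that $X\le_{cx}Y$ holds iff the means agree and $\mathbb{E}(X-t)_+\le \mathbb{E}(Y-t)_+$ for all t (for discrete distributions it suffices to check t on a grid covering the supports). A sketch, with made-up distributions and helper names of my own:

```python
from itertools import product

def convolve(p, q):
    """PMF of the sum of two independent distributions given as {value: prob}."""
    out = {}
    for (x, px), (y, py) in product(p.items(), q.items()):
        out[x + y] = out.get(x + y, 0.0) + px * py
    return out

def cx_leq(p, q, ts):
    """Test p <=_cx q: equal means plus E(X - t)_+ <= E(Y - t)_+ on a grid."""
    mean = lambda d: sum(x * px for x, px in d.items())
    stop_loss = lambda d, t: sum(max(x - t, 0.0) * px for x, px in d.items())
    if abs(mean(p) - mean(q)) > 1e-12:
        return False
    return all(stop_loss(p, t) <= stop_loss(q, t) + 1e-12 for t in ts)

W = {1: 1.0}            # point mass: as concentrated as a mean-1 law can be
Z = {0: 0.5, 2: 0.5}    # same mean, more spread out, so W <=_cx Z
ts = [t / 2 for t in range(-8, 9)]
assert cx_leq(W, Z, ts)
assert cx_leq(convolve(W, W), convolve(Z, Z), ts)   # an instance of Lemma 2
```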

Proof of Lemma 1: Note that

$\mathbb{E}\left[ f(X_1+\ldots+X_M)\mid M=n\right] \le \mathbb{E}\left[ f(Y_1+\ldots+Y_N)\mid N=n\right],$

follows from Corollary 3. So a useful question to consider is whether $\mathbb{E}\left[f(Y_1+\ldots+Y_n)\right]$ is a convex function of n.

Denote this quantity by $F(n)$. To check convexity of a function defined on the integers, it suffices to verify that $F(n+1)-F(n)\ge F(n)-F(n-1)$.

There is a canonical coupling between the RVs used to define all of $F(n-1),F(n),F(n+1)$, but it will be convenient to adjust the coupling, and write:

$F(n+1)-F(n)= \mathbb{E}\left[ f(Y_1+\ldots+Y_n + Y^*) - f(Y_1+\ldots+Y_n)\right],$

$F(n)-F(n-1)=\mathbb{E}\left[f(Y_1+\ldots+Y_{n-1}+Y^*) - f(Y_1+\ldots+Y_{n-1})\right],$

where $Y^*$ is a further independent copy of $Y$. But note that for any choice $C\ge c$ and $y\in \mathbb{R}$,

$f(C+y) - f(C) - f(c+y) + f(c)\ge 0.$ (*)

(Essentially, this says that the ‘chord’ of f on the interval $[c,C+y]$ lies above the chord on the interval $[C,c+y]$ or $[c+y,C]$, which some people choose to call Karamata’s inequality, but which I think is more helpful to think of as part of the visual definition of convexity.)

In any case, setting $y=Y^*, c=Y_1+\ldots+Y_{n-1}, C=Y_1+\ldots+Y_n$ and taking expectations, we obtain

$\mathbb{E}\left[ f(Y_1+\ldots+Y_n+Y^*) - f(Y_1+\ldots+Y_n)\right.$

$\left.- f(Y_1+\ldots+Y_{n-1}+Y^*) + f(Y_1+\ldots+Y_{n-1})\right]\ge 0,$

as required. So $F(n)$ is convex. We may now finish off as

$\mathbb{E}\left[ f(X_1+\ldots+X_M)\right] = \mathbb{E}\left[ \,\mathbb{E}[f(X_1+\ldots+X_M)\mid M]\,\right] \le \mathbb{E}\left[\, \mathbb{E}[f(Y_1+\ldots+Y_M)\mid M]\,\right] = \mathbb{E}[F(M)]\le \mathbb{E}[F(N)] = \mathbb{E}[f(Y_1+\ldots+Y_N)],$

completing the proof of Lemma 1.
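Lemma 1 can also be checked exactly on small examples, by computing the pmf of the random sum as a mixture of n-fold convolutions and then comparing stop-loss transforms $\mathbb{E}(\cdot-t)_+$ (which, together with equal means, characterise $\le_{cx}$ for discrete laws). A sketch with names of my own:

```python
from itertools import product

def compound_pmf(index_pmf, step_pmf):
    """PMF of X_1 + ... + X_M, where index_pmf[n] = P(M = n) and the X_i are
    IID with pmf step_pmf (a {value: prob} dict)."""
    out, conv = {}, {0: 1.0}      # conv holds the n-fold convolution of step_pmf
    for p_n in index_pmf:
        for x, px in conv.items():
            out[x] = out.get(x, 0.0) + p_n * px
        nxt = {}
        for (x, px), (y, py) in product(conv.items(), step_pmf.items()):
            nxt[x + y] = nxt.get(x + y, 0.0) + px * py
        conv = nxt
    return out

# X = 1 a.s. <=_cx Y uniform on {0,2}, and M = 1 a.s. <=_cx N uniform on {0,2}.
S_X = compound_pmf([0.0, 1.0], {1: 1.0})
S_Y = compound_pmf([0.5, 0.0, 0.5], {0: 0.5, 2: 0.5})

mean = lambda d: sum(x * p for x, p in d.items())
stop_loss = lambda d, t: sum(max(x - t, 0.0) * p for x, p in d.items())
assert mean(S_X) == mean(S_Y) == 1.0
# Stop-loss transforms ordered on a grid: consistent with S_X <=_cx S_Y.
assert all(stop_loss(S_X, t / 2) <= stop_loss(S_Y, t / 2) + 1e-12
           for t in range(-10, 11))
```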

Final comments

• The analysis in this post is not sufficient to study the total population sizes of two Galton-Watson trees generated by X and Y. Note that in Lemma 2, it is important that the random variables are independent. Otherwise, we could take $\mathbb{E}[X]=\mathbb{E}[Y]=0$ with $X\le_{cx}Y$, and couple the two copies of the larger variable as $(Y,-Y)$: certainly it should not hold that $X_1+X_2 \le_{cx} Y + (-Y) = 0$. So for total population size, since the generation sizes $(Z^X_k,\,k\ge 1)$ are not independent, an alternative approach would be required.
• A further characterisation of convex ordering is given by Strassen’s theorem [Str65], which is touched on in the previous post, and to which I may return in a future post on this topic. This may be a more promising avenue for establishing a convex ordering result on total population size.
• Lemma 1 requires that X,Y are non-negative. Note that during the argument we set $y=Y^*, c=Y_1+\ldots+Y_{n-1}, C=Y_1+\ldots+Y_n$, and when we relax the non-negative support condition, it is no longer guaranteed that $C\ge c$, which is crucial for the step which follows.
• In a recent article in ECP addressing Lemma 1 by a different method, Berard and Juillet [BJ20] provide a simple example showing that the non-negativity assumption is genuinely necessary. Consider the random index $\tau\in \{0,2\}$, taking each value with equal probability, so that $1\le_{cx} \tau$. Taking $(S_m)$ to be simple random walk on $\mathbb{Z}$, whose increments certainly take negative values, we do not have $S_1\le_{cx}S_{\tau}$.
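This counterexample can be verified with exact arithmetic: $S_\tau$ takes the value 0 with probability $\tfrac12+\tfrac12\cdot\tfrac12=\tfrac34$ and $\pm 2$ with probability $\tfrac18$ each, and the convex function $f(x)=|x|$ separates the two laws. A sketch:

```python
from fractions import Fraction as F

# One step of SRW: S_1 = +-1 with equal probability.
S1 = {-1: F(1, 2), 1: F(1, 2)}
# S_tau with tau uniform on {0, 2}: S_0 = 0, and S_2 is -2, 0, 2 w.p. 1/4, 1/2, 1/4.
Stau = {-2: F(1, 8), 0: F(3, 4), 2: F(1, 8)}

mean = lambda d: sum(x * p for x, p in d.items())
assert mean(S1) == 0 and mean(Stau) == 0
# f(x) = |x| is convex, yet E|S_1| = 1 > 1/2 = E|S_tau|: no convex ordering.
assert sum(abs(x) * p for x, p in S1.items()) == 1
assert sum(abs(x) * p for x, p in Stau.items()) == F(1, 2)
```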

References

[BJ20] – Berard, Juillet – A coupling proof of convex ordering for compound distributions, 2020

[Str65] – Strassen – The existence of probability measures with given marginals, 1965


# Lecture 3 – Couplings, comparing distributions

I am aiming to write a short post about each lecture in my ongoing course on Random Graphs. Details and logistics for the course can be found here.

In this third lecture, we made our first foray into the scaling regime for G(n,p) which will be the main focus of the course, namely the sparse regime when $p=\frac{\lambda}{n}$. The goal for today was to give a self-contained proof of the result that in the subcritical setting $\lambda<1$, there is no giant component, that is, a component supported on a positive proportion of the vertices, with high probability as $n\rightarrow\infty$.

More formally, we showed that the proportion of vertices contained within the largest component of $G(n,\frac{\lambda}{n})$ vanishes in probability:

$\frac{1}{n} \left| L_1\left(G\left(n,\frac{\lambda}{n}\right)\right) \right| \stackrel{\mathbb{P}}\longrightarrow 0.$

The argument for this result involves an exploration process of a component of the graph. This notion will be developed more formally in future lectures, aiming for good approximation rather than bounding arguments.

But for now, the key observation is that when we ‘explore’ the component of a uniformly chosen vertex $v\in[n]$ outwards from v, at all times the number of ‘children’ of the current vertex which haven’t already been considered is ‘at most’ $\mathrm{Bin}(n-1,\frac{\lambda}{n})$. For example, if we already know that eleven vertices, including the current one w, are in C(v), then the number of new vertices added to consideration because they are directly connected to w has conditional distribution $\mathrm{Bin}(n-11,\frac{\lambda}{n})$.

Firstly, we want to formalise the notion that this is ‘less than’ $\mathrm{Bin}(n,\frac{\lambda}{n})$, and also that, so long as we don’t replace 11 by a linear function of n, $\mathrm{Bin}(n-11,\frac{\lambda}{n})\stackrel{d}\approx \mathrm{Poisson}(\lambda)$.
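The second approximation can be quantified numerically. Here is a sketch computing the total variation distance between $\mathrm{Bin}(n-11,\frac{\lambda}{n})$ and $\mathrm{Poisson}(\lambda)$ by direct summation (the function name and truncation level are my own choices for this illustration):

```python
import math

def tv_binom_poisson(n, lam, kmax=60):
    """d_TV between Bin(n - 11, lam/n) and Poisson(lam), summing pointwise
    differences up to kmax (both tails are negligible beyond it here)."""
    m, p = n - 11, lam / n
    b = lambda j: math.comb(m, j) * p ** j * (1 - p) ** (m - j)
    q = lambda j: math.exp(-lam) * lam ** j / math.factorial(j)
    return 0.5 * sum(abs(b(j) - q(j)) for j in range(kmax + 1))

lam = 0.9
d_small, d_large = tv_binom_poisson(100, lam), tv_binom_poisson(10000, lam)
assert d_large < d_small < 0.1    # the distance shrinks as n grows
```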

Couplings to compare distributions

A coupling of two random variables (or distributions) X and Y is a realisation $(\hat X,\hat Y)$ on the same probability space with the correct marginals, that is

$\hat X\stackrel{d}=X,\quad \hat Y\stackrel{d}=Y.$

We saw earlier in the course that we could couple G(n,p) and G(n,q) by simulating both from the same family of uniform random variables, and it’s helpful to think of this in general: ‘constructing the distributions from the same source of randomness’.

Couplings are a useful notion to digest at this point, as they embody a general trend in discrete probability theory. Wherever possible, we try to do as much as we can with the random objects before starting any calculations. Think about the connectivity property of G(n,p) as discussed in the previous lecture. This can be expressed directly as a function of p in terms of a large sum, but showing it is an increasing function of p is essentially impossible by computation, whereas this is very straightforward using the coupling.

We will now review how to use couplings to compare distributions.

For a real-valued random variable X, with distribution function $F_X$, we always have the option to couple with a uniform U[0,1] random variable. That is, when $U\sim U[0,1]$, we have $F_X^{-1}(U)\stackrel{d}= X$, where the inverse of the distribution function is defined (in the non-obvious case of atoms) as

$F_X^{-1}(u)=\inf\left\{ x\in\mathbb{R}\,:\, F_X(x)\ge u\right\}.$

Note that when the value taken by U increases, so does the value taken by $F_X^{-1}(U)$. This coupling can be used simultaneously on two random variables X and Y, as $(F_X^{-1}(U),F_Y^{-1}(U))$, to generate a coupling of X and Y.
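A small sketch of this quantile coupling for two discrete distributions I've made up; note how feeding the same u into both quantile functions makes both coordinates non-decreasing in u:

```python
def quantile(pmf, u):
    """F^{-1}(u) = inf{x : F(x) >= u} for a discrete distribution, given as a
    list of (value, probability) pairs sorted by value."""
    c = 0.0
    for x, p in pmf:
        c += p
        if c >= u:
            return x
    return pmf[-1][0]   # guard against rounding when u is close to 1

X = [(0, 0.3), (1, 0.7)]             # Bernoulli(0.7)
Y = [(0, 0.1), (1, 0.5), (2, 0.4)]

# The SAME uniform drives both marginals, giving a monotone coupling.
coupled = [(quantile(X, u), quantile(Y, u)) for u in (0.05, 0.2, 0.5, 0.9)]
assert coupled == [(0, 0), (0, 1), (1, 1), (1, 2)]
```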

The total variation distance between two probability measures is

$d_{\mathrm{TV}}(\mu,\nu):= \sup_{A}|\mu(A)-\nu(A)|$,

with supremum taken over all events in the joint support S of $\mu,\nu$. This is particularly clear in the case of discrete measures, as then

$d_{\mathrm{TV}}(\mu,\nu)=\frac12 \sum_{x\in S} \left| \mu\left(\{x\}\right) - \nu\left(\{x\}\right) \right|.$

(Think of the difference in heights between the bars, when you plot $\mu,\nu$ simultaneously as a bar graph…)

The total variation distance records how well we can couple two distributions, if we want them to be equal as often as possible. It is therefore a bad measure of distance between distributions with different support. For example, the distributions $\delta_0$ and $\delta_{1/n}$ are distance 1 apart (the maximum) for all values of n. Similarly, the uniform distribution on [0,1] and the uniform distribution on $\{0,1/n,2/n,\ldots, (n-1)/n, 1\}$ are also distance 1 apart.

When there is more overlap, the following result is useful.

Proposition: Any coupling $(\hat X,\hat Y)$ of $X\sim \mu,\,Y\sim \nu$ satisfies $\mathbb{P}(\hat X=\hat Y)\le 1-d_{\mathrm{TV}}(\mu,\nu)$, and there exists a coupling such that equality is achieved.
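For discrete measures, a coupling achieving equality can be constructed explicitly: sample both coordinates together from the overlap $\min(\mu,\nu)$ when possible, and otherwise sample the two normalised excesses independently. A sketch (one such construction, not the only one; the function name is mine):

```python
def maximal_coupling(mu, nu):
    """Return (joint pmf {(x, y): p}, d_TV) realising P(X = Y) = 1 - d_TV:
    diagonal mass comes from the overlap min(mu, nu), off-diagonal mass
    from the product of the normalised excesses."""
    support = set(mu) | set(nu)
    overlap = {x: min(mu.get(x, 0.0), nu.get(x, 0.0)) for x in support}
    w = sum(overlap.values())                      # w = 1 - d_TV
    joint = {(x, x): p for x, p in overlap.items() if p > 0}
    for x in support:
        px = mu.get(x, 0.0) - overlap[x]           # excess of mu over nu
        for y in support:
            py = nu.get(y, 0.0) - overlap[y]       # excess of nu over mu
            if px > 0 and py > 0:
                joint[(x, y)] = joint.get((x, y), 0.0) + px * py / (1 - w)
    return joint, 1 - w

mu = {0: 0.5, 1: 0.5}
nu = {0: 0.25, 1: 0.25, 2: 0.5}
joint, dtv = maximal_coupling(mu, nu)
assert abs(dtv - 0.5) < 1e-12
# P(X = Y) attains the bound 1 - d_TV, and the first marginal is correct.
assert abs(sum(p for (x, y), p in joint.items() if x == y) - (1 - dtv)) < 1e-12
for z in mu:
    assert abs(sum(p for (x, _), p in joint.items() if x == z) - mu[z]) < 1e-12
```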

# DGFF 4 – Properties of the Green’s function

I’m at UBC this month for the PIMS probability summer school. One of the long courses is being given by Marek Biskup about the Discrete Gaussian Free Field (notes and outline here) so this seems like a good moment to revive the sequence of posts about the DGFF. Here’s DGFF1, DGFF2, DGFF3 from November.

The first draft of this post was about the maximum of the DGFF in a large box $V_N$, and also about the Green’s function $G^{V_N}(x,y)$, which specifies the covariance structure of the DGFF. This first draft also became too long, so I’m splitting it into two somewhat shorter ones. As we’ll see, some understanding and standard estimates of the Green’s function are enough to say quite a bit about the maximum. In this first post, we’ll explore some ‘low-hanging fruit’ concerning the Green’s function, as defined through a simple random walk, which are useful, but rarely explained in the DGFF literature.

Symmetry of Green’s function

We start with one of these low-hanging fruit. If $G^{V_N}$ is to be a covariance matrix, it has to be symmetric. In the first post, showing that the definition of the DGFF as a random field with given Hamiltonian is equivalent to $\mathcal{N}(0,G^{V_N})$ certainly can be viewed as a proof of symmetry. However, it would be satisfying if there was a direct argument in the language of the definition of the Green’s function.

To make this self-contained, recall the random walk definition of $G^{V_N}(x,y)$. Let $(S_m)_{m\ge 0}$ be simple random walk on $V_N$, and $\mathbb{P}_x,\,\mathbb{E}_x$ denote starting the random walk at $x\in V_N$. As usual, let $\tau_y,\,\tau_A$ denote the hitting time of a vertex y or a set A respectively. Then

$G^{V_N}(x,y):= \mathbb{E}_x \left[ \sum_{m=0}^{\tau_{\partial V_N}}1_{(S_m=y) }\right].$

That is, $G^{V_N}(x,y)$ is the expected number of visits to y by a random walk from x, before it exits $V_N$.

Let’s drop the superscript for now, as everything should hold for a more general subset of the lattice. I don’t think it’s immediately obvious at the level of Markov chains why G(x,y)=G(y,x). In particular, it’s not the case that

$\mathbb{P}_x(\tau_y < \tau_{D^c}) = \mathbb{P}_y(\tau_x <\tau_{D^c}),$

and it feels that we can’t map between paths $x \to \partial D$ and $y\to \partial D$ in a way that preserves the number of visits to y and x, respectively. However, we can argue that for any m

$\mathbb{P}_x(S_m=y, \tau_{D^c}>m) = \mathbb{P}_y(S_m=x, \tau_{D^c}>m),$

by looking at the suitable paths of $(S_m)$. That is, if we have a path $x=S_0,S_1,\ldots,S_m=y$ that stays within D, then the probability of seeing this path starting from x and its reverse direction starting from y are equal. Why? Because

$\mathbb{P}_x(S_0=x,S_1=v_1,\ldots,S_{m-1}=v_{m-1},S_m=y) = \prod_{\ell=0}^{m-1} \frac{1}{\mathrm{deg}(v_\ell)},$

and

$\mathbb{P}_y(S_0=y,S_1=v_{m-1},\ldots,S_{m-1}=v_1, S_m=x) = \prod_{\ell=0}^{m-1} \frac{1}{\mathrm{deg}(v_{m-\ell})} = \prod_{\ell=1}^m \frac{1}{\mathrm{deg}(v_\ell)}.$

Since $D\subset \mathbb{Z}^d$ and x,y are in the interior of D, we must have $\mathrm{deg}(x)=\mathrm{deg}(y)$, and so these two expressions are equal. Summing over all such two-way paths, and then over all m, gives the result.
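This symmetry is easy to confirm numerically by summing m-step probabilities of the killed walk, which is one way of computing $G^D$ directly from the definition. A sketch on a small two-dimensional box, truncating the sum at a finite horizon (names and parameters are mine):

```python
def green_killed(width, steps=400):
    """Approximate G^D(x, y) on the interior D = {1,...,width}^2 of a box,
    by summing m-step probabilities of SRW killed on leaving D."""
    interior = [(i, j) for i in range(1, width + 1) for j in range(1, width + 1)]
    G = {x: {y: 0.0 for y in interior} for x in interior}
    for x in interior:
        dist = {x: 1.0}              # sub-probability law of the killed walk
        for _ in range(steps):
            for y, p in dist.items():
                G[x][y] += p
            nxt = {}
            for (i, j), p in dist.items():
                for v in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                    if v in G:       # mass stepping outside D is killed
                        nxt[v] = nxt.get(v, 0.0) + p / 4
            dist = nxt
    return G

G = green_killed(3)
for x in G:
    for y in G:
        assert abs(G[x][y] - G[y][x]) < 1e-9     # symmetry: G(x,y) = G(y,x)
        assert G[y][y] >= G[x][y] - 1e-12        # diagonal terms dominate
```

The truncation error decays geometrically (the killed walk loses mass at every visit near the boundary), so a few hundred steps suffice on a box this small.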

Fixing one argument

We now focus on $G^D(\cdot,y)$, where the second argument is fixed. This is the solution to the Poisson equation

$\Delta G^D(\cdot,y) = -\delta_y(\cdot),\quad G^D(x,y)=0,\; \forall x\in \partial D.$

To see this, one can use a standard hitting probability argument (as here) with the Markov property. This is harmonic in $D\backslash \{y\}$, and since we know

$G^D(y,y)= \frac{1}{\mathbb{P}_y(\text{RW hits }\partial D\text{ before returning to }y)},$

this uniquely specifies $G^D(\cdot,y)$. Anyway, since harmonic functions achieve their maxima at the boundary, we have $G(y,y)\ge G(x,y)$ for all $x\in D$. We can also see this from the SRW definition as

$G(x,y)=G(y,x) = \mathbb{P}_y (\tau_x < \tau_{\partial D} ) G(x,x) \le G(x,x).$

Changing the domain

Now we want to consider nested domains $D\subset E$, and compare $G^D(\cdot,\cdot)$ and $G^E(\cdot,\cdot)$ on $D\times D$. The idea is that for SRW started from $x\in D$, we have $\tau_{\partial D}\le \tau_{\partial E}$, since one boundary is contained within the other. From this, we get

$G^D(x,y)\le G^E(x,y),\quad \forall x,y\in D,$

and we will use the particular case y=x.

For example, if $x\in V_N$, the box with width N, then the box with width 2N centred on x contains the whole of $V_N$. So, if we set $\bar {V}_{2N}:= [-N,N]^d$, then we have

$G^{V_N}(x,x)\le G^{\bar{V}_{2N}}(0,0),\quad x\in V_N.$

As we’ll see when we study the maximum of the DGFF on $V_N$, uniform control over the pointwise variance will be a useful tool.

Maximising the Green’s function

The idea of bounding $G^{V_N}(x,x)$ by $G^{\bar V_{2N}}(0,0)$ for any $x\in V_N$ is clever and useful. But a more direct approach would be to find the value of x that maximises $G^{V_N}(x,x)$. We would conjecture that when $V_N$ has a central vertex, then this is the maximiser.

We can prove this directly from the definition of the Green’s function in terms of random walk occupation times. Let’s assume we are working with $\bar{V}_N$ for even N, so that 0 is the central vertex. Again, since

$G^D(x,x)=\frac{1}{\mathbb{P}_x(\text{RW hits }\partial D\text{ before returning to }x)},$ (*)

it would suffice to show that this probability is minimised when x=0. This feels right, since 0 is furthest from the boundary. But the comparison isn’t immediate: other points are closer to the boundary in some directions and further in others, and we can’t simply condition on the maximum distance from its start point achieved by an excursion of the SRW (by vertex-transitivity, these excursions look the same from all starting points), since even allowing for the four possible rotations, for an excursion of diameter slightly larger than N, starting at the centre is maximally bad.

However, intuitively it does feel as if being closer to the boundary makes you more likely to escape earlier. In fact, with a bit more care, we can couple the SRW started from 0 and the SRW started from $r=(r^x,r^y)\ne 0$ such that the latter always exits at least as early. For convenience we’ll assume also that $r^x,r^y$ are both even.

I couldn’t find any reference to this, so I don’t know whether it’s well-known or not. The following argument involves projecting onto each axis, and doing separate couplings for transitions in the x-direction and transitions in the y-direction. We assume WLOG that r is in the upper-right quadrant. Then, let $0=S_0,S_1,S_2,\ldots$ be SRW started from 0, and we will construct $r=R_0,R_1,R_2,\ldots$ on the same probability space as $(S_m)_{m\ge 0}$ as follows. For every m, we set the increment $R_{m+1}-R_m$ to be $\pm(S_{m+1}-S_m)$. It remains to specify the sign, which will be determined by the direction of the S-increment, and a pair of stopping times. The marginal is therefore again an SRW, started from r. Temporarily, we use the unusual notation $S_m= (S^x_m,S^y_m)$ for the coordinates of $S_m$.

So, if $S_{m+1}-S_m=(1,0), (-1,0)$, ie S moves left or right, then we set

$R_{m+1}-R_m = \begin{cases} -(S_{m+1}-S_m) &\quad \text{if }m< T^x,\\ S_{m+1}-S_m &\quad \text{if }m\ge T^x.\end{cases}$ (*)

where $T^x:= \min\{m\,:\, R^x_m=S^x_m\}$. That is, $R^x$ moves in the opposing direction to $S^x$ until the first time when they are equal (hence the parity requirement), and then they move together. WLOG assume that $r^x>0$. Then suppose $S^x_m=\pm N$ and such m is minimal. By construction, if $m\ge T^x$, then $R^x_m=\pm N$ also. If $m< T^x$, then we must have $S^x_m=-N$, and so since $R^x$’s trajectory is a mirror image of $S^x$’s, in fact $R^x_m = N+r^x>N$, so $R^x$ hit $+N$ first. In both cases, we see that $R^x$ hits $\pm N$ at the same time as or before $S^x$.

In other words, the (lazy) projected walk $R^x$ is the mirror image of $S^x$ in $r^x/2$ until the first time they meet, after which the two projections move together. At some time, it may happen that $S^x_m= R^x_m=r^x/2$ (recall the parity condition on r); this is exactly the time $T^x$. So (*) is the mechanism for $m< T^x$, and then for $m\ge T^x$, we take $R^x_m=S^x_m$.

Similarly, if $S_{m+1}-S_m =(0,1), (0,-1)$, ie S moves up or down, then we set

$R_{m+1}-R_m = \begin{cases} -(S_{m+1}-S_m)&\quad \text{if }m< T^y,\\ S_{m+1}-S_m&\quad \text{if }m\ge T^y,\end{cases}$

with corresponding definition of the stopping time $T^y$.

This completes the coupling, and by considering $T^x\wedge T^y$, we have shown that the exit time for the walk started from zero dominates the exit time for the walk started from r. Recall that so far we are in the case where the box has even width and $r=(r^x,r^y)$ has even coordinates.

This exit time comparison isn’t exactly what we need to compare $G^{V_N}(0,0)$ and $G^{V_N}(r,r)$. It’s worth remarking at this stage that if all we cared about was the Green’s function on the integer line [-N,N], we would have an easier argument, as by the harmonic property of $G(\cdot,y)$

$G^{[-N,N]}(0,r)=\frac{N-r}{N}G^{[-N,N]}(0,0),$

$G^{[-N,N]}(r,0) = \frac{N}{N+r}G^{[-N,N]}(r,r),$

and so $G(0,0)>G(r,r)$ follows by symmetry. To lift from 1D to 2D directly, we need a bit more than this. It’s possible that S returns in both x- and y- coordinates more often than R, but never at the same time. Fortunately, the coupling we defined slightly earlier does give us a bit more control.
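The two one-dimensional identities above can be checked numerically against a direct killed-walk computation of the Green's function (a sketch; function name and truncation horizon are my own, with the horizon chosen so the geometric tail is negligible):

```python
def green_1d(N, steps=2000):
    """G^{[-N,N]}(x, y): expected visits to y by SRW from x, killed at +-N,
    approximated by truncating the sum over path lengths."""
    sites = range(-N + 1, N)
    G = {x: {y: 0.0 for y in sites} for x in sites}
    for x in sites:
        dist = {x: 1.0}
        for _ in range(steps):
            for y, p in dist.items():
                G[x][y] += p
            nxt = {}
            for y, p in dist.items():
                for v in (y - 1, y + 1):
                    if -N < v < N:
                        nxt[v] = nxt.get(v, 0.0) + p / 2
            dist = nxt
    return G

N, r = 6, 2                  # r even, matching the parity condition above
G = green_1d(N)
assert abs(G[0][r] - (N - r) / N * G[0][0]) < 1e-6   # G(0,r) = ((N-r)/N) G(0,0)
assert abs(G[r][0] - N / (N + r) * G[r][r]) < 1e-6   # G(r,0) = (N/(N+r)) G(r,r)
assert G[0][0] > G[r][r]                             # so G(0,0) > G(r,r)
```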

Let $\tau^x(S), \tau^x(R)$ be the first times that $S^x, R^x$ hit $\pm N$. Under this coupling, for any $m\ge 0$

$\mathbb{P}(S^x_m=0,\; m< T^x\wedge \tau^x(S)) = \mathbb{P}(R^x_m=r^x,\; m< T^x\wedge \tau^x(S)),$

since these events are literally equal. Since we showed that $\tau^x(R)\le \tau^x(S)$ almost surely, we can further deduce

$\mathbb{P}(S^x_m=0,\;m< T^x\wedge \tau^x(S))\ge \mathbb{P}(S^x_m=0,\;m< T^x\wedge \tau^x(R))$

$=\mathbb{P}(R^x_m=r^x,\; m< T^x\wedge \tau^x(R)).$

To address the corresponding events for which $m\ge T^x$, we apply the strong Markov property at $T^x$, to obtain SRW $Z_m$ started from r/2, and let $\tau_{-N},\tau_{+N}$ be the hitting times of $-N,+N$ respectively and $\tau_{\pm N}=\tau_{-N}\wedge \tau_{+N}$. It will now suffice to prove that

$\mathbb{P}(Z_m=0, m< \tau_{\pm N}) \ge \mathbb{P}(Z_m=r,m<\tau_{\pm N}),$ (**)

as then we can apply the law of total probability and sum over values of $T^x$ and $m\ge 0$.

To prove this result, we consider the following bijection between trajectories of length m from r/2 to {0,r}. We decompose the trajectories into excursions away from r/2, and then a final meander from r/2 to {0,r} that stays on the same side of r/2. We construct the new trajectory by preserving all the initial excursions, but reversing all the steps of the final meander. So if the original trajectory ended up at 0, the image ends up at r. Trivially, the initial excursions in the image only hit $\pm N$ if the excursions in the original trajectory did this too. But it’s also easy to see, by a similar argument to the coupling at the start of this section, that if the original trajectory ends at r and does not hit $\pm N$, then neither does the image. However, the converse is not true. So we conclude (**), and thus

$\mathbb{P}(S_m^x=0,\; m<\tau^x(S)) \ge \mathbb{P}(R_m^x=r^x,\; m<\tau^x(R))$

for all m by combining everything we have seen so far. And so we can now lift to a statement about $S_m$ itself, that is considering both coordinates separately.

The remaining cases for r require a little more care over the definition of $T^x$, though the same projection argument works, for fundamentally the same reason. (Note that in the above argument, if $S^x_m=-N$ and $m< T^x$, then in fact $R^x_m\ge N+2$, and so it’s not hard to convince yourself that a sensible adjustment to the stopping time will allow a corresponding result with $R^x_m\ge N+1$ in the odd $r^x$ case.) The case for N odd is harder, since in one dimension there are two median sites, and it’s clear by symmetry that we can’t couple them such that RW from one always exits at least as early as RW from the other. However, the distributions of exit times started from these two sites are the same (by symmetry), and so although we can’t find a coupling, we can use similar stopping times to obtain a result in probability.

In the next post, we’ll see how to apply this uniform bound on $G^{V_N}(x,x)$ to control the maximum of the DGFF on $V_N$. In particular, we address how the positive correlations of DGFF influence the behaviour of the maximum by comparison with independent Gaussians at each site.

# Skorohod embedding

Background

Suppose we are given a standard Brownian motion $(B_t)$, and a stopping time T. Then, so long as T satisfies one of the regularity conditions under which the Optional Stopping Theorem applies, we know that $\mathbb{E}[B_T]=0$. (See here for a less formal introduction to OST.) Furthermore, since $B_t^2-t$ is a martingale, $\mathbb{E}[B_T^2]=\mathbb{E}[T]$, so if the latter is finite, so is the former.

Now, using the strong Markov property of Brownian motion, we can come up with a sequence of stopping times $0=T_0, T_1, T_2,\ldots$ such that the increments $T_k-T_{k-1}$ are IID with the same distribution as T. Then $0,B_{T_1},B_{T_2},\ldots$ is a centered random walk. By taking T to be the hitting time of $\{-1,+1\}$, it is easy to see that we can embed simple random walk in a Brownian motion using this approach.

Embedding simple random walk in Brownian motion.

The Skorohod embedding question asks: can all centered random walks be constructed in this fashion, by stopping Brownian motion at a sequence of stopping times? With the strong Markov property, the question immediately reduces to whether every centered finite-variance distribution X can be expressed as $B_T$ for some integrable stopping time T.

The answer to this question is yes, and much of what follows is drawn from, or at least prompted by, Obloj’s survey paper, which details the problem and the rich history of the many approaches to its solution over the past seventy years.

Applications and related things

The relationship between random walks and Brownian motion is a rich one. Donsker’s invariance principle asserts that Brownian motion appears as the scaling limit of a random walk. Indeed, one can construct Brownian motion itself as the limit of a sequence of consistent random walks with normal increments on an increasingly dense set of times. Furthermore, random walks are martingales, and we know that continuous, local martingales can be expressed as a (stochastically) time-changed Brownian motion, from the Dubins-Schwarz theorem.

The Skorohod embedding theorem can be used to prove results about random walks with general distribution by proving the corresponding result for Brownian motion, and checking that the construction of the sequence of stopping times has the right properties to allow the result to be carried back to the original setting. It obviously also gives a coupling between an individual random walk and a Brownian motion which may be useful in some contexts, as well as a coupling between any pair of random walks. This is useful in proving results for random walks which are much easier for special cases of the distribution. For example, when the increments are Gaussian, or when there are combinatorial approaches to a problem about simple random walk. At the moment no aspect of this blog schedule is guaranteed, but I plan to talk about the law of the iterated logarithm shortly, whose proof is approachable in both of these settings, as well as for Brownian motion, and Skorohod embedding provides the route to the general proof.

At the end, we will briefly compare some other ways to couple a random walk and a Brownian motion.

Adding extra randomness

One thing we could do is sample a copy of X independently from the Brownian motion, then declare $T= \tau_{X}:= \inf\{t\ge 0: B_t=X\}$, the hitting time of (random value) X. But recall that unfortunately $\tau_x$ has infinite expectation for all non-zero x, so this doesn’t fit the conditions required to use OST.

Skorohod’s original method is described in Section 3.1 of Obloj’s notes linked above. The method is roughly to pair up positive values taken by X appropriately with negative values taken by X in a clever way. If we have a positive value b and a negative value a, then $\tau_{a,b}$, the first hitting time of $\mathbb{R}\backslash (a,b)$ is integrable. Then we choose one of these positive-negative pairs according to the projection of the distribution of X onto the pairings, and let T be the hitting time of this pair of values. The probability of hitting b conditional on hitting {a,b} is easy to compute (it’s $\frac{-a}{b-a}$) so we need to have chosen our pairs so that the ‘probability’ of hitting b (ie the density) comes out right. In particular, this method has to start from continuous distributions X, and treat atoms in the distribution of X separately.

The case where the distribution X is symmetric (that is $X\stackrel{d}=-X$) is particularly clear, as then the pairs should be $(-x,x)$.

However, it feels like there is enough randomness in Brownian motion already, and subsequent authors showed that indeed it wasn’t necessary to introduce extra randomness to provide a solution.

One might ask whether it’s possible to generate the distribution on the set of pairs (as above) out of the Brownian motion itself, but independently from all the hitting times. It feels like it might be possible to make the distribution on the pairs measurable with respect to

$\mathcal{F}_{0+} = \bigcap\limits_{t>0} \mathcal{F}_t,$

the sigma-algebra of events determined by limiting behaviour as $t\rightarrow 0$ (which is independent of hitting times). But of course, unfortunately $\mathcal{F}_{0+}$ has a zero-one law, so it’s not possible to embed non-trivial distributions there.

Dubins solution

The exemplar for solutions without extra randomness is due to Dubins, shortly after Skorohod’s original argument. The idea is to express the distribution X as the almost sure limit of a martingale. We first use the hitting time of a pair of points to ‘decide’ whether we will end up positive or negative, and then given this information look at the hitting time (after this first time) of two subsequent points to ‘decide’ which of four regions of the real interval we end up in.

I’m going to use different notation to Obloj, corresponding more closely with how I ended up thinking about this method. We let

$a_+:= \mathbb{E}[X \,|\, X>0], \quad a_- := \mathbb{E}[X\,|\, X<0],$ (*)

and take $T_1 = \tau_{\{a_-,a_+\}}$. We need to check that

$\mathbb{P}\left( B_{T_1}=a_+\right) = \mathbb{P}\left(X>0\right),$

for this to have a chance of working. But we know that

$\mathbb{P}\left( B_{T_1}=a_+\right) = \frac{-a_-}{a_+-a_-},$

and we can also attack the other side using (*) and the fact that $\mathbb{E}[X]=0$, using the law of total expectation:

$0=\mathbb{E}[X]=\mathbb{E}[X\,|\, X>0] \mathbb{P}(X>0) + \mathbb{E}[X\,|\,X<0]\mathbb{P}(X<0) = a_+ \mathbb{P}(X>0) + a_- \left(1-\mathbb{P}(X>0) \right),$

$\Rightarrow\quad \mathbb{P}(X>0)=\frac{-a_-}{a_+-a_-}.$
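As a concrete sanity check on this first stage, here is a short computation with a hypothetical centred two-point distribution (X=2 with probability 1/3, X=-1 with probability 2/3), verifying that the gambler's-ruin probability $\frac{-a_-}{a_+-a_-}$ really does match $\mathbb{P}(X>0)$:

```python
from fractions import Fraction

# Hypothetical centred distribution: X = 2 w.p. 1/3, X = -1 w.p. 2/3.
dist = {Fraction(2): Fraction(1, 3), Fraction(-1): Fraction(2, 3)}
assert sum(x * p for x, p in dist.items()) == 0  # E[X] = 0

def cond_mean(dist, pred):
    """E[X | pred(X)] for a finite distribution."""
    mass = sum(p for x, p in dist.items() if pred(x))
    return sum(x * p for x, p in dist.items() if pred(x)) / mass

a_plus = cond_mean(dist, lambda x: x > 0)   # E[X | X > 0]
a_minus = cond_mean(dist, lambda x: x < 0)  # E[X | X < 0]

# Gambler's ruin from 0: P(B hits a_+ before a_-) = -a_- / (a_+ - a_-),
# which must equal P(X > 0) for the first stage of the embedding to work.
hit_prob = -a_minus / (a_plus - a_minus)
p_positive = sum(p for x, p in dist.items() if x > 0)
print(hit_prob, p_positive)  # both 1/3
```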

Now we define

$a_{++}=\mathbb{E}[X \,|\, X>a_+],\quad a_{+-}=\mathbb{E}[X\,|\, 0<X\le a_+],$

and similarly $a_{-+},a_{--}$. So then, conditional on $B_{T_1}=a_+$, we take

$T_2:= \inf\left\{t\ge T_1: B_t\not\in (a_{+-},a_{++}) \right\},$

and similarly conditional on $B_{T_1}=a_-$. By an identical argument to the one we have just deployed, we have $\mathbb{E}\left[B_{T_2} \,|\,\mathcal{F}_{T_1} \right] = B_{T_1}$ almost surely. So, although the $a_{+-+}$ notation now starts to get very unwieldy, it’s clear we can keep going in this way to get a sequence of stopping times $0=T_0,T_1,T_2,\ldots$ where $B_{T_n}$ determines which of the $2^n$ regions of the real line any limit $\lim_{m\rightarrow\infty} B_{T_m}$ should lie in.

A bit of work is required to check that the almost sure limit $T_n\rightarrow T$ is almost surely finite, but once we have this, it is clear that $B_{T_n}\rightarrow B_T$ almost surely, and $B_T$ has the distribution required.

Komlos, Major, Tusnady coupling

We want to know how close we can make this coupling between a centered random walk with variance 1, and a standard Brownian motion. Here, ‘close’ means uniformly close in probability. For large times, the typical difference between one of the stopping times $T_k$ (for $k\le n$) in the Skorohod embedding and its expectation (recall $\mathbb{E}[T_k]=k$) is of order $\sqrt{n}$. So, constructing the random walk $S_0,S_1,S_2,\ldots$ from the Brownian motion via Skorohod embedding leads to

$\left |S_k - B_k \right| = \Theta(n^{1/4}),$

for most values of $k\le n$. Strassen (1966) shows that the true scale of the maximum

$\max_{k\le n} \left| S_k - B_k \right|$

is slightly larger than this, with some extra powers of $\log n$ and $\log\log n$ as one would expect.
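To get a feel for the $n^{1/4}$ scale, here is a rough simulation sketch (a discretised Brownian path with hitting times of $\pm 1$, so only approximate, and emphatically not the KMT construction):

```python
import math
import random

random.seed(1)

# Illustration only: embed a walk with +-1 steps in a discretised Brownian
# path via Skorohod embedding, then measure max_{k <= n} |S_k - B_k|.
dt = 0.001   # grid step for the Brownian path (discretisation, so approximate)
n = 200      # number of embedded walk steps

B = [0.0]    # Brownian path sampled at times 0, dt, 2*dt, ...
S = [0]      # embedded walk S_0, S_1, ...
last = 0.0   # value of B at the previous stopping time
while len(S) <= n:
    B.append(B[-1] + math.sqrt(dt) * random.gauss(0.0, 1.0))
    if abs(B[-1] - last) >= 1:   # hitting time of {last - 1, last + 1}
        S.append(S[-1] + (1 if B[-1] > last else -1))
        last = B[-1]             # small overshoot ignored in this sketch
# extend the path to time n so that B_k is defined for every k <= n
while (len(B) - 1) * dt < n:
    B.append(B[-1] + math.sqrt(dt) * random.gauss(0.0, 1.0))

disc = max(abs(S[k] - B[round(k / dt)]) for k in range(n + 1))
print(disc)  # typically a few units, i.e. the n^{1/4} scale (n^{1/4} ~ 3.8 here)
```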

The Komlos-Major-Tusnady coupling is a way to do a lot better than this, in the setting where the distribution of the increments has a finite MGF near 0. Then, there exists a coupling of the random walk and the Brownian motion such that

$\max_{k\le n}\left|S_k- B_k\right| = O(\log n).$

That is, there exists C such that

$\left[\max_{k\le n} \left |S_k-B_k\right| - C\log n\right] \vee 0$

is a tight family of distributions, indeed with uniform exponential tail. To avoid digressing infinitely far from my original plan to discuss the proof of the law of iterated logarithm for general distributions, I’ll stop here. I found it hard to find much coverage of the KMT result apart from the challenging original paper, and many versions expressed in the language of empirical processes, which are similar to random walks in many ways relevant to convergence and this coupling, but not for Skorohod embedding. So, here is a link to some slides from a talk by Chatterjee which I found helpful in getting a sense of the history, and some of the modern approaches to this type of normal approximation problem.

# Birthday Coincidences and Poisson Approximations

This morning, Facebook was extremely keen to remind me via every available medium that four of my friends celebrate their birthday today. My first thought was that I hope they all enjoy their day, and my second thought was to ask what the chance of this was. I have about 200 Facebook friends, and so this struck me as an unlikely occurrence. But this problem has form, and it felt worthwhile to try some calculations to see if my intuition was well-founded.

Siméon Denis Poisson celebrated his 234th birthday on 21st June this year.

The classical birthday problem

The starting point is the question: how many friends do you have to have before you expect to start seeing anyone sharing a birthday? There are a ridiculous number of articles about this on the web already, so I will say little, except that I don’t want to call this the ‘birthday paradox’, because it’s not a paradox at all. At best it might be counter-intuitive, but then the moral should be to change our intuition for this type of problem.

Throughout, let’s discount February 29th, as this doesn’t add much. So then, to guarantee having a shared pair of birthdays, you need to have 366 friends. But if you have a mere 23 friends, then the probability of having some pair that share a birthday is slightly greater than a half. The disparity between these two numbers leads to the counter-intuition. Some people might find it helpful to think that instead of counting friends, we should instead be counting pairs of friends, but I don’t personally find this especially helpful.

For me, thinking about the calculation in very slightly more generality is helpful. Here, and throughout, let’s instead take N to be the number of days in a year, and K the number of friends, or kids in the class if you prefer. Then, as usual, it is easier to calculate the probability that no two share a birthday (that is, that all the birthdays are distinct) than the probability that some two share a birthday. We could think of the number of ways to pick the set of birthdays, or we could look at the kids one-at-a-time, and demand that their birthday is not one of those we’ve already seen. Naturally, we get the same answer, that is

$\frac{^N P_K}{N^K} = 1\cdot \frac{N-1}{N}\cdot\ldots\cdot \frac{N-K+1}{N}.$

We’ve assumed here that all birthdates are equally likely. We’ll come back to this assumption right at the end. For now, let’s assume that both N and K are large, and we’ll try to decide roughly how large K has to be in relation to N for this answer to be away from 0 and 1. If we pair opposite terms up, we might approximate this by

$(\frac{N-\frac{K}{2}}{N})^K = (1-\frac{K}{2N})^K\approx e^{-K^2/2N}.$

In fact, AM-GM says that this is an overestimate, and a bit more care can be used to show that this is a good approximation to first order. So we see that if $K=\Theta(\sqrt{N})$ for large N, we get a non-trivial limit.
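Both the exact product and the approximation are easy to check directly; a minimal computation:

```python
import math

N, K = 365, 23

# Exact probability that all K birthdays are distinct: (N P_K) / N^K
p_distinct = 1.0
for i in range(K):
    p_distinct *= (N - i) / N

p_shared = 1 - p_distinct
print(round(p_shared, 4))  # 0.5073 -- just over a half at K = 23

# First-order approximation of p_distinct from pairing terms: exp(-K^2 / 2N)
print(math.exp(-K**2 / (2 * N)))  # ~0.484, approximating p_distinct ~ 0.4927
```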

Challenges for four-way shared birthdays

So the original problem I posed is harder, because there isn’t (unless I’m missing something) a natural way to choose birthdays one-at-a-time, or describe the set of suitable birthday sets. There are two major obstacles in a calculation such as this. Firstly, the overlap of people, that is we might have five or more birthdays overlapping; secondly, the overlap of days, that is we might have several days with four (or more) birthdays. We’ll end up worrying more about the second situation.

We start by eliminating both problems, by asking for the probability that exactly four friends are born on January 1st. The general form of this probability is $\frac{\binom{K}{4} }{N^4} \cdot (\frac{N-1}{N})^{K-4}$. Now, if $K\ll N$, this final term should not be significant. Removing this is not exactly the same as specifying the probability that at least four birthdays are on January 1st. But in fact this removal turns a lower bound (because {exactly four}<{at least four}) into an upper (in fact a union) bound. So if the factor being removed is very close to one, we can use whichever expression is more convenient.

In the real life case of N=365, K=200, this term is not negligible. But accounting for this, we get that the probability of exactly four birthdays on 1st January is ~0.0021. Our upper bound on the probability of at least four is ~0.0036.
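These numbers follow directly from the formula above; for the record:

```python
from math import comb

N, K = 365, 200

# P(exactly four of K friends born on a fixed day)
p_exact = comb(K, 4) / N**4 * ((N - 1) / N) ** (K - 4)
# Dropping the final factor gives a union (upper) bound on "at least four"
p_upper = comb(K, 4) / N**4

print(round(p_exact, 4), round(p_upper, 4))  # 0.0021 0.0036
```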

But now that we know the probability for a given day, can we calculate $(1-0.0021)^{365}$ to estimate the probability that we never have four-overlap? When we did our previous iterative calculation, we were using independence of the different kids’ birthdays. But the event that we have four-overlap on January 1st is not quite independent of the event that we have four-overlap on January 2nd. Why? Well if we know at least four people were born on January 1st, there are fewer people left (potentially) to be born on January 2nd. But maybe this dependence is mild enough that we can ignore it?

We can, however, use some moment estimates. The expected number of days with four-overlap is $365\cdot 0.0021 \approx 0.77$. So the probability that there is at least one day with four-overlap is at most ~0.77.

But we really want a lower bound. So, maybe we can borrow exactly the second-moment argument we tried (there for isolated vertices in the random graph) in the previous post? Here, the probability that both January 1st and January 2nd are four-overlapping is

$\frac{\binom{K}{4}\binom{K-4}{4}}{N^8}\cdot (\frac{N-2}{N})^{K-8}\approx 4.3\times 10^{-6}.$

From this, we can evaluate the expectation of the square of the number of days with four-overlap, and thus find that the variance is ~0.74. So we use Chebyshev, calling this number of days #D for now:

$\mathbb{P}(\# D=0)\le \mathbb{P}(|\#D - \mathbb{E}\# D|^2 \ge (\mathbb{E}\# D)^2 ) \le \frac{\mathrm{Var} \# D}{(\mathbb{E} \#D)^2}.$

In our case, this unfortunately gives us an upper bound greater than 1 on this probability, and thus a lower bound of zero on the probability that there is at least one day with four-overlap. Which isn’t especially interesting…
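The moment calculations above can be reproduced in a few lines (the same formulas, nothing new):

```python
from math import comb

N, K = 365, 200
# probability one fixed day has exactly four birthdays
p1 = comb(K, 4) / N**4 * ((N - 1) / N) ** (K - 4)
# probability two fixed days each have exactly four birthdays
p2 = comb(K, 4) * comb(K - 4, 4) / N**8 * ((N - 2) / N) ** (K - 8)

mean = N * p1                        # E[#D]
second = N * p1 + N * (N - 1) * p2   # E[(#D)^2]
var = second - mean**2

cheb = var / mean**2                 # Chebyshev upper bound on P(#D = 0)
print(mean, var, cheb)  # mean ~ 0.78, var ~ 0.74, bound > 1 (so uninformative)
```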

Fairly recently, I spoke about the Lovasz Local Lemma, which can be used to find lower bounds on the probabilities of intersections of events, many of which are independent (in a particular precise sense). Perhaps this might be useful here? The natural choice of ‘bad event’ is that particular 4-sets of people share a birthday. There are $\binom{K}{4}$ such events, and each is independent of the collection of $\binom{K-4}{4}$ disjoint events. Thus we can consider using LLL if $e\cdot (\binom{K}{4}-\binom{K-4}{4})\cdot 0.0021 \le 1$. Unfortunately, this difference of binomial coefficients is large in our example, and so in fact the LHS has order $10^3$.

Random number of friends – coupling to a Poisson Process

All of these methods failed because without independence we had to use estimates which were really not tight at all. But we can re-introduce independence if we remove the constraints on the model. Suppose instead of demanding I have K friends, I instead demand that I have a random number of friends, with distribution Poisson(K). Now it is reasonable to assume that the number of friends born on any given day is Poisson(K/365), independently for each day.

If we end up having exactly K friends with this random choice, then the distribution of the number of 4-overlap days is exactly the same as in the original setup. However, crucially, if we end up having at most K friends with this random choice, the distribution of the number of 4-overlap days is stochastically dominated by the original distribution. So instead let’s assume we have Poisson(L) friends, where L<K, and see how well we can do. For definiteness, we’ll go back to N=365, K=200 now. Let’s write X for the distribution of birthdays in the original model, and $\Xi$ for the distribution of birthdays in the model with a random number of friends.

Then

$\mathbb{P}(\exists \ge 4\text{-overlap in }\Xi) = 1- \mathbb{P}(\mathrm{Po}(L/365)\le 3)^{365}.$ (*)

Now we can write the crucial domination relation as

$\mathbb{P}(\exists \ge 4\text{-overlap in }X)\ge \mathbb{P}( \exists \ge 4\text{-overlap in }\Xi \,|\, |\Xi|\le 200),$

and then use an inequality version of the law of total probability to bound further as

$\ge \frac{ \mathbb{P}(\exists \ge 4\text{-overlap in }\Xi) - \mathbb{P}(|\Xi|>200)}{\mathbb{P}(|\Xi|\le 200)}.$

This is a function of L, and in principle we could find its maximum, perhaps as $N\rightarrow\infty$. Here, though, let’s just take L=365/2 and see what happens. For (*) we get ~0.472.

To estimate $\mathbb{P}(\mathrm{Po}(365/2)>200)$, observe that this event corresponds to 1.4 standard deviations above the mean, so we can approximate using quantiles of the normal distribution, via the CLT. (Obviously this isn’t completely precise, but it could be made precise if we really wanted.) I looked up a table, and this probability is, conveniently for calculations, roughly 0.1. Thus we obtain a lower bound of $\frac{0.472-0.1}{0.9}$. Allowing for the fairly weak estimates at various points, we still get a lower bound of around 0.4. Which is good, because it shows that my intuition wasn’t right, but that I was in the right ball-park for it being a ‘middle-probability event’.
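Putting the whole estimate together numerically, computing the Poisson tails exactly by direct summation of log-probabilities (the helper `pois_cdf` below is ad hoc, not a library routine) rather than via the normal-approximation table:

```python
from math import exp, lgamma, log

N, K, L = 365, 200, 365 / 2

def pois_cdf(lam, k):
    """P(Poisson(lam) <= k), summed via log-probabilities for stability."""
    return sum(exp(i * log(lam) - lam - lgamma(i + 1)) for i in range(k + 1))

# (*): P(some day has >= 4 birthdays) in the Poissonised model
star = 1 - pois_cdf(L / N, 3) ** N
# P(|Xi| > 200): the Poisson(L) number of friends exceeds K
tail = 1 - pois_cdf(L, K)
# lower bound from the domination / total probability argument
bound = (star - tail) / (1 - tail)
print(round(star, 3), round(tail, 3), round(bound, 3))
```

The exact tail comes out slightly below the 0.1 read off the normal table, so the resulting lower bound is again a little above 0.4, consistent with the estimate in the text.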

Remarks and References

– The reason for doing the upper bound for the probability of exact 4-overlap is that the same argument for at-least-4-overlap would have given an upper bound of 1. However, this Poisson Process coupling is also a much better method for obtaining an upper bound on either event.

– Birthdays are not uniformly distributed through the year. The deviation is strong enough that even from the set of birth frequencies (rather than the sequence of birth frequencies), we can reject a null hypothesis of uniformity. Early September is pretty close to the maximum. Two comments: 1) this is the time of year where small variations in birth date have a big effect on education, especially in primary school; 2) we are 37 weeks into the year…

– It is known that 187 friends is the first time the probability of having at-least-4-overlap is greater than 1/2. You can find the full sequence on OEIS as A014088. I used to have about 650 Facebook friends, before I decided that I’d prefer instead the pleasant surprise of finding out what old acquaintances were up to when I next spoke to them. In this case, the median of the distribution of the largest number sharing a birthday would be seven.

– Eric Weisstein’s article on Mathworld is, in my opinion, the best resource for a mathematician on the first two pages of Google hits by at least two orders of magnitude. In the notation of this article, we were calculating $P_4(n=200,d=365)$. There are also some good general asymptotics, or at least recipes for asymptotics, in equations (17) and (18).

– The paper Methods for Studying Coincidences by Diaconis and Mosteller is, as one might expect, extremely readable, and summarises many results and applications, including several generalisations.

# Sharpness of Phase Transition in Percolation

On the flight to Romania a couple of weeks ago, I read this very nice paper by Duminil-Copin and Tassion in which, as a corollary to a more general result in a companion paper, they provide a short proof of a result about percolation on $\mathbb{Z}^d$. Mitch talked about this paper at our final Junior Probability Seminar of this term yesterday, so it seemed appropriate to write up something about this nice argument. I must confess I know relatively little about these problems, and in particular know nothing about how the original authors, Aizenman and Barsky (1987), and Menshikov (1986), approached this problem, but experts have said that this is substantially easier to digest.

Rather than reference the first paper repeatedly, I remark now that everything which follows comes from there.

We consider conventional bond percolation on the edges of $\mathbb{Z}^d$, for $d\ge 2$, and are interested in whether the origin percolates with positive probability. That is, whether zero is contained in an infinite component. As usual we define $p_c=\sup\{p: \mathbb{P}_p(0\leftrightarrow\infty)=0\}$ to be the critical probability above which percolation happens with positive probability. Defining $\theta(p)=\mathbb{P}_p(0\leftrightarrow\infty)$, we do not know whether $\theta(p_c)=0$ for some values of d, notably d=3.

If the origin is connected to infinity, it is by definition connected to the boundary of every $\Lambda_n:=[-n,n]^d$. The number of distinct paths from the origin to $\partial \Lambda_n$ is bounded by the number of self-avoiding walks on the lattice of length n starting from 0, which grows at most as fast as $(2d-1)^n$. In particular, we know that $p_c\ge \frac{1}{2d-1}$, but also, for any $p<\frac{1}{2d-1}$, the probability $\mathbb{P}_p[0\leftrightarrow \partial\Lambda_n]$ decays exponentially in n. We would expect this in fact to hold for all $p<p_c$, and this is something that the authors prove, called Item 1. They also show that the percolation probability grows at least linearly beyond $p_c$, specifically (called Item 2)

$\theta(p)\ge \frac{p-p_c}{p(1-p_c)}.$

The proof here proceeds by considering the function of subsets S which contain the origin:

$\varphi_p(S):= p\sum_{(x,y)\in\Delta S} \mathbb{P}_p[0\stackrel{S}{\leftrightarrow} x],\quad S\subset \mathbb{Z}^d$.

In words, this gives the expected number of edges across the boundary which are connected to the origin by a path within S. So this gives a measure of how likely we are to escape S, and in particular, an upper bound on the probability that an open path exists from 0 to outside S. The authors then define the alternative critical probability

$\tilde p_c := \sup\{p\text{ s.t. }\exists\text{ finite }S\ni 0\text{ with }\varphi_p(S)<1\}.$

They will show that $\tilde p_c$ satisfies the statements of both Item 1 and Item 2. Item 2 for $\tilde p_c$ implies $\tilde p_c\le p_c$, and Item 1 for $\tilde p_c$ implies $p_c\le \tilde p_c$, so this is exactly what we need.
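To see the definition of $\varphi_p$ in action, here is an exact brute-force evaluation for the small box $S=\Lambda_1\subset\mathbb{Z}^2$, enumerating all $2^{12}$ configurations of the edges inside S. This is purely illustrative and not part of the paper's argument:

```python
from itertools import product

# Exact evaluation of phi_p(S) for S = Lambda_1 = {-1,0,1}^2 in Z^2,
# summing over all 2^12 states of the 12 edges inside S.
S = [(x, y) for x in (-1, 0, 1) for y in (-1, 0, 1)]
inside, boundary = [], []
for v in S:
    for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        w = (v[0] + d[0], v[1] + d[1])
        if w in S:
            if v < w:
                inside.append((v, w))   # each internal edge recorded once
        else:
            boundary.append((v, w))     # (x, y) in Delta S: x in S, y outside

def phi(p):
    total = 0.0
    for state in product((0, 1), repeat=len(inside)):
        prob = 1.0
        for bit in state:
            prob *= p if bit else 1 - p
        open_edges = [e for bit, e in zip(state, inside) if bit]
        # connected component of the origin inside S under this configuration
        comp, stack = {(0, 0)}, [(0, 0)]
        while stack:
            v = stack.pop()
            for a, b in open_edges:
                for u, w in ((a, b), (b, a)):
                    if u == v and w not in comp:
                        comp.add(w)
                        stack.append(w)
        # accumulate E[number of boundary edges connected to 0 within S]
        total += prob * sum(1 for x, _ in boundary if x in comp)
    return p * total

print(phi(0.25) < 1 < phi(0.6))  # True: Lambda_1 already separates the regimes
```

So even this tiny choice of S certifies, via Item 1, exponential decay of the crossing probabilities at p=0.25.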

They show Item 1 first. We take a set S for which $\varphi_p(S)<1$ (such a set exists whenever $p<\tilde p_c$), and some box $\Lambda_L$ which strictly contains S. Now we consider the probability of escaping from a box of size kL. The reason why this definition of S works so nicely is that it makes it possible to split the event of escaping from $\Lambda_{kL}$ into events involving disjoint sets of edges being open, so we can use independence.

We decompose the path from 0 to $\partial\Lambda_{kL}$ based on the first time it leaves S. We are mindful that there might be lots of paths from 0 to this boundary. The way we are bounding means it doesn’t matter if we have lots of suitable paths, but they should all spend a maximal number of steps in S, in the sense that whenever the path re-enters S, say to vertex z, there is no open path from 0 to z contained in S. Let the vertex on $\partial S$ we leave from for the first time be x. Then, for all vertices y later in the path, $0\stackrel{S}{\not\leftrightarrow}y$.

So under any suitable path, now take y to be the vertex directly following x, hence $(x,y)\in\Delta S$. If we take $\mathcal{C}$ to be the set of vertices z for which $0\stackrel{S}{\leftrightarrow}z$, we can split the expression based on S to obtain:

$\mathbb{P}_p[0\leftrightarrow \partial \Lambda_{kL}]\le p \sum_{(x,y)\in\Delta S}\sum_{C\subset S} \mathbb{P}_p[0\stackrel{S}{\leftrightarrow} x,\mathcal{C}=C] \mathbb{P}_p[y\stackrel{\mathbb{Z}^d\backslash C}{\leftrightarrow}\partial\Lambda_{kL}].$

Splitting based on C gives us independence between all of these sets of edges, but then we immediately forget about it. Irrespective of the choice of y (recall $y\in \Lambda_L$), this final probability is certainly bounded by $\mathbb{P}_p[0\leftrightarrow \partial \Lambda_{(k-1)L}]$, while the p and the first term can be summed over C and $(x,y)\in\Delta S$ to give $\varphi_p(S)$. They obtain:

$\mathbb{P}_p[0\leftrightarrow \partial \Lambda_{kL}] \le \varphi_p(S)\mathbb{P}_p[0\leftrightarrow \partial \Lambda_{(k-1)L}] \le \varphi_p(S)^{k-1},$

where the final relation holds by induction, and clearly gives exponential decay as required.

For Item 2 we use Russo’s formula. Here we have a slightly simpler example than the most general version, since the event under consideration $A_n=\{0\leftrightarrow \partial\Lambda_n\}$ is increasing with respect to adding edges. It is also a function of a finite number of edges. Then we can consider $\frac{d}{dp}\mathbb{P}_p[A]$ under the coupling which adds each edge independently as a Poisson process with (locally) rate $\frac{1}{1-t}$. (We take this to be the rate to avoid having to reparameterise exponentially between time and probability. Here t=p.)

Just for ease, we only consider the right-derivative at p. Then with $\mathbb{P}$ as the law of the coupled process:

$\frac{d}{dp}\mathbb{P}_p[A] \approx \frac{1}{\epsilon} \mathbb{P}[A\text{ holds at }p+\epsilon,\text{ but not at }p]$

$= \frac{1}{\epsilon} \sum_{e\in E}\mathbb{P}[e\text{ added between }p,p+\epsilon\text{ and }e\text{ completes event }A]$

$+ \frac{1}{\epsilon}\mathbb{P}[\text{two or more edges relevant to }A\text{ added}].$

Since the number of edges whose states determine A is finite, this second term vanishes as $\epsilon\rightarrow 0$. So

$=\frac{1}{\epsilon}\sum \frac{\epsilon}{1-p} \mathbb{P}(A \text{ doesn't hold at p, and edge }e\text{ pivotal at p}).$

Taking the limit in $\epsilon$ in this example gives

$\frac{d}{dp}\mathbb{P}_p[0\leftrightarrow \partial\Lambda_n] = \frac{1}{1-p} \sum_{e\in\Lambda_n} \mathbb{P}_p[e\text{ pivotal, }0\not\leftrightarrow \partial \Lambda_n].$

The argument then proceeds in a similar way to Item 1, decomposing $\{0\not\leftrightarrow \partial \Lambda_n\}$ by conditioning on the set of vertices $\mathcal{S}$ from which it is not possible to get to $\partial \Lambda_n$. In particular, this set is an excellent candidate to view as S, since on this event it contains 0 by definition. Once we have specified $\mathcal{S}$ we know which edges might be pivotal, namely those across the boundary of $\mathcal{S}$. Crucially, the event $\{\mathcal{S}=S\}$ only depends on those edges between the boundary of S and $\partial \Lambda_n$, so is independent of the event $\{0\stackrel{S}{\leftrightarrow}x\}$ whenever $x\in\partial \mathcal{S}$. So applying this version of Russo gives

$\frac{d}{dp}\mathbb{P}_p[0\leftrightarrow \partial\Lambda_n] = \frac{1}{1-p}\sum_{0\in S\subset \Lambda_n} \sum_{(x,y)\in\Delta S} \mathbb{P}_p[0\stackrel{S}{\leftrightarrow} x]\mathbb{P}_p[\mathcal{S}=S].$

It is clear where $\varphi_p(S)$ might turn up within the sum (after removing a factor of p), so for a bound we can take $\inf_S \varphi_p(S)$ outside the sum, and arrive at

$\frac{d}{dp}\mathbb{P}_p[0\leftrightarrow \partial\Lambda_n] \ge \frac{1}{p(1-p)}\inf_{0\in S\subset \Lambda_n} \varphi_p(S)\left(1-\mathbb{P}_p[0\leftrightarrow \partial \Lambda_n]\right).$

It wasn’t immediately clear to me that this implied the given form of Item 2 (though it certainly is consistent). I think perhaps I was trying to be too complicated and thinking about Gronwall’s lemma when in fact everything really follows from bounding $\inf \varphi_p(S)$ below by 1 (since we have assumed $p>\tilde p_c$ here), then integrating the differential inequality

$\frac{d}{dp}\left[ \frac{p}{1-p}f(p) \right] = \frac{p}{1-p}f'(p)+\frac{1}{(1-p)^2}f(p) \ge \frac{1}{(1-p)^2}.$
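To spell out that integration: write $f(p)=\mathbb{P}_p[0\leftrightarrow\partial\Lambda_n]$ and integrate the differential inequality from $\tilde p_c$ to p, giving

$\frac{p}{1-p}f(p) - \frac{\tilde p_c}{1-\tilde p_c}f(\tilde p_c) \ge \int_{\tilde p_c}^p \frac{\mathrm{d}s}{(1-s)^2} = \frac{1}{1-p}-\frac{1}{1-\tilde p_c} = \frac{p-\tilde p_c}{(1-p)(1-\tilde p_c)}.$

Discarding the non-negative term $\frac{\tilde p_c}{1-\tilde p_c}f(\tilde p_c)$ and multiplying through by $\frac{1-p}{p}$ gives

$f(p)\ge \frac{p-\tilde p_c}{p(1-\tilde p_c)},$

which is exactly Item 2 with $\tilde p_c$ in place of $p_c$.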

I include this not because it’s an interesting part of the argument (I don’t think it is really) but because I struggled to digest it first time round.

What is interesting is how well this choice to consider $\varphi_p(S)$ works out. In both parts of the argument, sets which work well for splitting the crossing probabilities into disjoint edge events mesh nicely with considering this function after conditioning on sub- or super-criticality with respect to $\tilde p_c$.

# Real Trees – Root Growth and Regrafting

Two weeks ago in our reading group meeting, Raphael told us about Chapter Five which introduces root growth and regrafting. One of the points of establishing the Gromov-Hausdorff topology in this book was to provide a more natural setting for a discussion of tree-valued processes. Indeed in what follows, one can imagine how to start the construction of a similar process for the excursions which can be used to encode real trees, involving cutting off sub-excursions above one-sided local minima, then glueing them back in elsewhere. But taking account of the equivalence structure will be challenging, and it is much nicer to be able to describe cutting a tree in two by removing a single point without having to worry about quotient maps.

We have seen in Chapter Two an example of a process defined on the family of rooted trees with n labelled vertices which has the uniform rooted tree as an invariant distribution. Given a rooted tree with root p, we choose uniformly at random a vertex p’ in [n] to be the new root. Then if p’=p we do nothing, otherwise we remove the unique first edge in the path from p’ to p, giving two trees. Adding an edge from p to p’ completes the step and gives a new tree with p’ as root. We might want to take a metric limit of these processes as n grows and see whether we end up with a stationary real tree-valued process whose marginals are the BCRT.
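This chain is easy to simulate. Here is a hypothetical Python sketch storing the rooted tree as a parent map (pointers towards the root); the cut-and-regraft step then amounts to exactly two pointer updates:

```python
import random

random.seed(0)

# One step of the chain described above, on vertex set {0, ..., n-1}.
# The tree is stored as a map from each vertex to its parent (root -> None).
def regraft_step(parent, root):
    new_root = random.randrange(len(parent))
    if new_root == root:
        return parent, root
    updated = dict(parent)
    # remove the first edge on the path from new_root to the old root...
    updated[new_root] = None
    # ...and add an edge from the old root to new_root, the new root
    updated[root] = new_root
    return updated, new_root

# Start from a path 0 - 1 - ... - 9 rooted at 0, run some steps, and check
# that the parent map still describes a rooted tree each time.
n = 10
parent, root = {0: None, **{i: i - 1 for i in range(1, n)}}, 0
for _ in range(100):
    parent, root = regraft_step(parent, root)
    assert parent[root] is None
    for v in parent:                 # every vertex reaches the root, no cycles
        seen = set()
        while v is not None:
            assert v not in seen
            seen.add(v)
            v = parent[v]
print("ok")
```

The two-update form works because, after cutting the edge above the new root, every remaining parent pointer already points towards the correct root in its component.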

To see non-trivial limiting behaviour, it is most interesting to consider the evolution of a particular subtree (which includes the root) through this process. If the vertex chosen for cutting lies in our observed subtree, then the subtree undergoes a prune and regraft operation. On the other hand, if the vertex chosen for cutting does not lie in the subtree, then we do not see any effect of the pruning, except the addition of a new vertex below the original root, which becomes the new root. So essentially, from the point of view of our observed subtree, the root is growing.

Now we can think about interpreting the dynamics of a natural limit process acting on real trees. The key idea is that we don’t change the set on which the tree is supported much, but instead just change the metric. In particular, we will keep the original tree, and add on length at unit rate. Of course, where this length gets added on entirely determines the metric structure of the tree, but that doesn’t stop us giving a simple ‘name’ for the extra length. If we consider a process $X^T$ starting from a particular finite subtree T, then at time t, the tree $X^T_t$ has vertex set $T \coprod (0,t]$. (Finite subtree here means that it has finite total length.)

Root regrafting should happen at a rate proportional to the total length of the current observed tree. This is reasonable since after all it is supported within a larger tree, so in the discrete case the probability of a prune-regrafting event happening within a given observed subtree is proportional to the number of vertices in that subtree, which scales naturally as length in the real tree limit. It turns out that to get unit rate root growth with $\Theta(1)$ rate prune-regrafting, we should consider subtrees of size $\sqrt{n}$ within a host tree of size n as $n\rightarrow\infty$. We also rescale the lengths by $\frac{1}{\sqrt{n}}$, and time by $\sqrt{n}$ so we actually see prune-regraft events.

Furthermore, if the subtree is pruned, the location of the pruning is chosen uniformly by length of the current observed subtree. So we can view the pruning process as being driven by a Poisson point process with intensity given by the instantaneous length measure of the tree, which at time t has vertex set $T\coprod (0,t]$. It will turn out to be consistent that there is a ‘piecewise isometry’ for want of a better phrase between the metric (and thus length measure) on $X^T_t$ and the canonical induced measure on $T\coprod (0,t]$, so we can describe the instances and locations of the pruning events via a pair of PPPs. The first is supported on $T \times [0,\infty)$, and the second on $\{(t,x): 0 \le x \le t\}$, since we only ‘notice’ pruning at the point labelled x if the pruning happens at some time t after x was created.

If we start from a compact tree T, then the total intensity of this pair is finite up to some time t, and so we have a countable sequence $\tau_0=0<\tau_1<\tau_2<\ldots$ of times for pruning events. It is easy to describe (but a bit messy to notate) the evolution of the metric between these pruning times. Essentially the distance between any pair of points in the observed tree at time $\tau_m$ with root $\rho_{\tau_m}$ is constant between times $\tau_m,\tau_{m+1}$, and new points are added so that the distance between $\rho_{\tau_m}$ and any new point $a\in(\tau_m,\tau_{m+1}]$ is $a-\tau_m$, and everything else follows from straightforward consideration of geodesics.

When a pruning event happens at point $x_m$ at time $\tau_m$, distances are preserved within the subtree above $x_m$ in $X^T_{\tau_m -}$, and within the rest of the tree. Again, an expression for the cross distances is straightforward but requires a volume of notation not ideally suited to this medium.

The natural thing to consider is the coupled processes started from different subtrees (again both must contain the original root) of the same host tree. Say $T^1,T^2\le T$, then it is relatively easy to check that $X^{T^1}_t,X^{T^2}_t \le X^T_t \,\forall t$, when we drive the processes by consistent coupled Poisson processes. Furthermore, it is genuinely obvious that the Hausdorff distance between $X^{T^1}_t,X^{T^2}_t$, here viewed as compact subsets of $(X^T_t, d^T_t)$ remains constant during root growth phase.

Less obvious but more important is that the Hausdorff distance decreases during regrafting events. Suppose that just before a regrafting event, the two subtrees are T’ and T”, and the Hausdorff distance between them is $\epsilon$. This Hausdorff distance is with respect to the metric on the whole tree T. [Actually this is a mild abuse of notation – I’m now taking T to be the whole tree just before the regraft, rather than the tree at time 0.]

So for any $a\in T'$, we can choose $b\in T''$ such that $d_T(a,b)\le \epsilon$. This is preserved under the regraft unless the pruning point lies on the geodesic segment (in T) between a and b. But in that case, the distance between a and the pruning point is again at most $\epsilon$, and so after the regrafting, a is at most $\epsilon$ away from the new root, which is in both subtrees, and in particular the regrafted version of T”.

This is obviously a useful first step on the path to proving any kind of convergence result. There are some technicalities which we have skipped over. It is fairly natural that this leads to a Markov process when the original tree is finite, but it is less clear how to define these dynamics when the total tree length is infinite, as we don’t want regrafting events to be happening continuously unless we can bound their net effect in some sense.

Last week, Franz showed us how to introduce the BCRT into matters. Specifically, that BCRT is the unique stationary distribution for this process. After a bit more work, the previous result says that for convergence properties it doesn’t matter too much what tree we start from, so it is fine to start from a single point. Then, the cut points and growth mechanism corresponds very well to the Poisson line-breaking construction of the BCRT. With another ‘grand coupling’ we can indeed construct them simultaneously. Furthermore, we can show weak convergence of the discrete-world Markov chain tree algorithm to the process with these root growth with re-grafting dynamics.

It does seem slightly counter-intuitive that a process defined on the whole of the discrete tree converges to a process defined through subtrees. Evans remarks in the introduction to the chapter that this is a consequence of having limits described as compact real trees. Then, in the limit, almost all vertices are close to leaves, so in a Hausdorff sense, considering only $\sqrt{n}$ of the vertices (ie a subtree) doesn’t really make any difference after rescaling edge lengths. I feel I don’t understand exactly why it’s ok to take the limits in this order, but I can see why this might work after more checking.

Tomorrow, we will have our last session, probably discussing subtree prune-and-regraft, where the regrafting does not necessarily happen at the root.

# The Yule Process

The second problem sheet for classes on the Applied Probability course this term features a long question about the Yule process. This is probably the simplest example of a birth process. It’s named for the British statistician George Udny Yule, though some sources prefer to call it the Yule-Furry process for the American physicist Wendell Furry, who used it as a model of a radioactive reaction.

The model is straightforward. At any time there is some number of individuals in the population, and each individual gives birth to an offspring at constant rate $\lambda$, independently from the rest of the population. After a birth has happened, the parent and child evolve independently. In the notation of general birth processes, the birth rate when there are n individuals is $\lambda_n=\lambda n$.

Note that if we start with two or more individuals, the sizes of the two or more families of descendants evolve as a continuous-time Polya’s urn. The arrivals process speeds up with time, but the jump chain is exactly Polya’s urn. Unsurprisingly, the Yule process can be found embedded in preferential attachment models, and other processes which are based around Polya’s urn with extra information.

This is a discrete, random version of exponential growth. Since the geometric distribution is the discrete analogue of the exponential distribution, we probably shouldn’t be surprised to learn that the geometric distribution is indeed the distribution of the process at some fixed time t, when it is started from a single original ancestor. This is all we care about, since the numbers of descendants of the different original ancestors are independent. In general, the distribution of the population size at some fixed time will be negative binomial, that is, a sum of IID geometric distributions.

The standard method here is to proceed using generating functions. Conditioning on the first splitting time gives two independent copies of the original process over a shorter time-scale. One derives an ODE in time for the generating function evaluated at any particular value z. This can be solved uniquely for each z, and patching together gives the generating function of the distribution at any specific time t, which can be seen to coincide with the corresponding generating function of the geometric distribution with parameter $e^{-\lambda t}$.
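This claim is easy to check by simulation. The following minimal sketch (parameter values arbitrary) builds the process from its inter-birth times and compares the empirical mean at time t with the geometric mean $e^{\lambda t}$:

```python
import random
import math

def yule_population(lam, t, rng):
    # Population size at time t, started from a single individual:
    # the inter-birth times satisfy T_{n+1} - T_n ~ Exp(lam * n).
    n, time = 1, 0.0
    while True:
        time += rng.expovariate(lam * n)
        if time > t:
            return n
        n += 1

rng = random.Random(0)
lam, t, trials = 1.0, 1.0, 100_000
samples = [yule_population(lam, t, rng) for _ in range(trials)]
# Geometric with success probability p = e^{-lam t}, hence mean e^{lam t}.
print(sum(samples) / trials)  # approximately e = 2.718...
```

The empirical probability of the population still being 1 should likewise be close to $e^{-\lambda t}$.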

So we were trying to decide whether there might be a more heuristic argument for this geometric distribution. The method we came up with is not immediate, but does justify the geometric distribution in a couple of steps. First, we say that the birth times are $T_2,T_3,\ldots$, so between times $[T_n,T_{n+1})$ there are n individuals, with $T_1:=0$ for concreteness. Then by construction of the birth process, $T_{n+1}-T_n\stackrel{d}{=}\mathrm{Exp}(\lambda n)$.

We now look at these ‘inter-birth times’ backwards, starting from $T_{n+1}$. Note that $\mathrm{Exp}(\lambda n)$ is the distribution of the time for the first of n IID $\mathrm{Exp}(\lambda)$ clocks to ring. But then, looking backwards, the next inter-birth time is thus distributed as the time for the first of (n-1) IID $\mathrm{Exp}(\lambda)$ clocks to ring. So by memorylessness of the exponential distribution (discussed at great length on the first problem sheet), we can actually take these (n-1) clocks to be exactly those of the original n clocks which did not ring first. Continuing this argument, we can show that the first (in the original time direction) inter-birth time corresponds to the time spent waiting for the final clock to ring. Rewriting this observation formally:

$T_{n+1}\stackrel{d}{=}\max\{X_i : X_1,\ldots,X_n\stackrel{\text{iid}}{\sim}\mathrm{Exp}(\lambda)\}.$ (*)

To return to justifying the geometric form of the distribution, we need to clarify the easiest relationship between the population size at a fixed time and these birth times. As we are aiming for the geometric distribution, the probability of the event $\{X_t>n\}$ will be most useful. Clearly this event is the same as $\{T_{n+1}\le t\}$, and from the description involving maxima of IID exponentials, this is easy to compute as $(1-e^{-\lambda t})^n$, which is exactly what we want.
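The identity (*) and the closed form above are easy to sanity-check numerically; a quick Monte Carlo sketch (parameter values arbitrary) comparing the law of $T_{n+1}$ with the maximum of n IID exponentials:

```python
import random
import math

rng = random.Random(1)
lam, n, t, trials = 2.0, 5, 0.5, 200_000

def sample_T(n):
    # T_{n+1}: sum of independent inter-birth times Exp(lam * k), k = 1..n
    return sum(rng.expovariate(lam * k) for k in range(1, n + 1))

def sample_M(n):
    # maximum of n IID Exp(lam) variables
    return max(rng.expovariate(lam) for _ in range(n))

p_T = sum(sample_T(n) <= t for _ in range(trials)) / trials
p_M = sum(sample_M(n) <= t for _ in range(trials)) / trials
print(p_T, p_M, (1 - math.exp(-lam * t)) ** n)  # all approximately equal
```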

There are two interesting couplings hidden in these constructions. On closer inspection they turn out to be essentially the same from two different perspectives.

We have specified the distribution of $T_n$ at (*). Look at this distribution on the right hand side. There is a very natural way to couple these distributions for all n, namely to take some infinite sequence $X_1,X_2,\ldots$ of IID $\mathrm{Exp}(\lambda)$ random variables, then use initial sequences of these to generate each of the $T_n$s as described in (*).

Does this coupling correspond to the use of these IID RVs in the birth process? Well, in fact it doesn’t. Examining the argument, we can see that $X_1$ gives a different inter-birth time for each value of n in the correspondence proposed. Even more concretely, in the birth process, almost surely $T_{n+1}>T_n$ for each n. This is not true if we take the canonical coupling of (*). Here, if $X_n<\max\{X_1,\ldots,X_{n-1}\}$, which happens with high probability for large n, we have $T_{n+1}=T_n$ in the process of running maxima.

Perhaps more interestingly, we might observe that this birth process gives a coupling of the geometric distributions. If we want to recover the standard parameterisation of the geometric distribution, we should reparameterise time. [And thus generate an essentially inevitable temptation to make some joke about now having a Yule Log process.]

Let’s consider what the standard coupling might be. For a binomial random variable, either on [n] or some more exotic set, as in percolation, we can couple across all values of the parameter by constructing a family of independent uniform random variables, and returning a 1 if $U_i>1-p$ and so on, where p is the parameter of a specific binomial realisation.
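In code, this Bernoulli building block is a one-liner; a single uniform couples the indicator across every parameter value at once (the particular values of p below are arbitrary):

```python
import random

# Couple Bernoulli(p) across all parameters p with ONE uniform U:
# return 1 exactly when U > 1 - p.
rng = random.Random(7)
U = rng.random()
bernoulli = {p: int(U > 1 - p) for p in (0.1, 0.3, 0.5, 0.7, 0.9)}
# The coupling is monotone: the indicator can only switch on as p increases.
print(bernoulli)
```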

We can do exactly the same here. A geometric distribution can be justified as the first success in a sequence of Bernoulli trials, so again we can replace the relevant Bernoulli distribution with a uniform distribution. Take $U_1,U_2,\ldots$ to be IID U[0,1] random variables. Then, we have:

$X_t\stackrel{d}{=}\bar X_t:= \max\{n: U_1,\ldots,U_{n-1}\ge e^{-\lambda t}\}.$

The equality in distribution holds for any particular value of t by construction. But it certainly doesn’t hold uniformly in t. Note that if we define $\bar X_t$ as a process, then typically the jumps of this process will be greater than 1, which is forbidden in the Yule process.
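A sketch of this uniform coupling viewed as a process in t (the seed and the grid of times are arbitrary choices); note that $\bar X_t$ equals the first index n with $U_n < e^{-\lambda t}$:

```python
import random
import math

# Grand coupling of the geometric marginals via one IID uniform sequence.
rng = random.Random(3)
lam = 1.0
U = [rng.random() for _ in range(10_000)]

def bar_X(t):
    # bar_X(t) = max{n : U_1, ..., U_{n-1} >= e^{-lam t}}
    #          = first n with U_n < e^{-lam t}  (1-indexed)
    threshold = math.exp(-lam * t)
    n = 1
    while U[n - 1] >= threshold:
        n += 1
    return n

path = [bar_X(0.1 * k) for k in range(1, 60)]
# Non-decreasing in t, but the jumps can exceed 1,
# which cannot happen for the Yule process itself.
print(path[:10])
```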

So, we have seen that this Yule process, even though its distribution at a fixed time has a standard form, provides a coupling of such distributions that is perhaps slightly surprising.

# Skorohod Representation Theorem

Continuing the theme of revising theory in the convergence of random processes that I shouldn’t have forgotten so rapidly, today we consider the Skorohod Representation Theorem. Recall from the standard discussion of the different modes of convergence of random variables that almost sure convergence is among the strongest since it implies convergence in probability and thus convergence in distribution. (But not convergence in $L_1$. For example, take U uniform on [0,1], and $X_n=n\mathbf{1}_{\{U<\frac{1}{n}\}}$.)

Almost sure convergence is therefore in some sense the most useful form of convergence to have. However, it comes with a strong prerequisite, that the random variables be defined on the same probability space, which is not required for convergence in distribution. Indeed, one can set up weak versions of convergence in distribution which do not even require the convergents to be random variables. The Skorohod representation theorem gives a partial converse to this result. It states some conditions under which random variables which converge in distribution can be coupled on some larger probability space to obtain almost sure convergence.
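On the real line there is a particularly concrete version of such a coupling, the quantile coupling: realise every distribution from the same uniform via its inverse CDF. This standard fact is not part of the proof below; the approximating sequence here is a hypothetical illustrative choice.

```python
import random

# Quantile coupling sketch: X_n = F_n^{-1}(U), X = F^{-1}(U) for one shared U.
# Illustration: X_n uniform on {0, 1/n, ..., (n-1)/n} converges weakly to
# U[0,1], and under the coupling X_n = floor(nU)/n -> U pointwise.
rng = random.Random(42)
U = rng.random()
X = U                                           # quantile transform of U[0,1] is the identity
X_n = [int(n * U) / n for n in range(1, 2001)]  # quantile transform of the discrete law
print(max(abs(x - X) for x in X_n[999:]))       # at most 1/1000: a.s. convergence
```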

Skorohod’s original proof dealt with convergence of distributions defined on complete, separable metric spaces (Polish spaces). The version discussed here is from Chapter 5 of Billingsley [1], and assumes the limiting distribution has separable support. More recent authors have considered stronger convergence conditions (convergence in total variation or Wasserstein distance, for example) with weaker topological requirements, and convergence of random variables defined in non-metrizable spaces.

Theorem (Skorohod representation theorem):Â Suppose that distributions $P_n\Rightarrow P$, where P is a distribution with separable support. Then we can define a probability space $(\Omega,\mathcal{F},\mathbb{P})$ and random variables $X,(X_n)_{n\ge 1}$ on this space such that the laws of $X,X_n$ are $P,P_n$ respectively and $X_n(\omega)\rightarrow X(\omega)$ for all $\omega\in\Omega$.

NB. We are proving ‘sure convergence’ rather than merely almost sure convergence! It is not surprising that this is possible, since changing the value of all the $X_n$s on a set with measure zero doesn’t affect the conditions for convergence in distribution.

Applications: Before going through the Billingsley proof, we consider one simple application of this result. Let S be a separable metric space containing the support of X, and g a continuous function $S\rightarrow S'$. Then

$X_n\stackrel{a.s.}{\rightarrow}X\quad\Rightarrow\quad g(X_n)\stackrel{a.s.}{\rightarrow}g(X).$

So, by applying the Skorohod representation theorem once, and the result that almost sure convergence implies convergence in distribution, we have shown that

$X_n\stackrel{d}{\rightarrow}X\quad\Rightarrow\quad g(X_n)\stackrel{d}{\rightarrow}g(X),$

subject to these conditions on the space supporting X. And we have avoided the need to be careful about exactly which class of functions determine convergence in distribution, as would be required for a direct argument.

Proof (from [1]): Unsurprisingly, the idea is to construct realisations of the $(X_n)$ from a realisation of X. We take X, and a partition of the support of X into small measurable sets, chosen so that the probability of lying in a particular set is almost the same for $X_n$ as for X, for large n. Then, the $X_n$ are constructed so that for large n, with limitingly high probability $X_n$ lies in the same small set as X.

Constructing the partition is the first step. For each $x\in S:=\mathrm{supp}(X)$, there must be some radius $r_x\in(\frac{\epsilon}{4},\frac{\epsilon}{2})$ such that $P(\partial B(x,r_x))=0$, since at most countably many radii can give a boundary of positive measure. This is where we use separability. Since every point in the space is within $\frac{\epsilon}{4}$ of some element of a countable sequence of elements of the space, we can take a countable subset of these open balls $B(x,r_x)$ which cover the space. Furthermore, we can take a finite subset of the balls which cover all of the space apart from a set of measure at most $\epsilon$. We want the sets to be disjoint, and we can achieve this by removing the intersections inductively in the obvious way. We end up with a collection $B_0,B_1,\ldots,B_k$, where $B_0$ is the leftover space, such that

• $P(B_0)<\epsilon$
• $P(\partial B_i)=0,\quad i=0,1,\ldots,k$
• $\mathrm{diam}(B_i)<\epsilon,\quad i=1,\ldots,k$.

Now suppose for each m, we take such a partition $B^m_0,B^m_1,\ldots,B^m_{k_m}$, for which $\epsilon_m=\frac{1}{2^m}$. Unsurprisingly, this scaling of $\epsilon$ is chosen so as to use Borel-Cantelli at the end. Then, from convergence in distribution, there exists an integer $N_m$ such that for $n\ge N_m$, we have

$P_n(B^m_i)\ge (1-\epsilon_m)P(B^m_i),\quad i=0,1,\ldots,k_m.$ (*)

Now, for $N_m\le n < N_{m+1}$, for each $B^m_i$ with non-zero probability under P, take $Y_{n,i}$ to be independent random variables with law $P_n(\cdot | B^m_i)$, the restriction of $P_n$ onto the set. Now take $\xi\sim U[0,1]$ independent of everything so far. Now we make concrete the heuristic for constructing $X_n$ from X. We define:

$X_n=\sum_{i=0}^{k_m}\mathbf{1}_{\{\xi\le 1-\epsilon_m, X\in B^m_i\}} Y_{n,i} + \mathbf{1}_{\{\xi>1-\epsilon_m\}}Z_n.$

We haven’t defined $Z_n$ yet. But, from (*), there is a unique distribution such that taking $Z_n$ to be independent of everything so far, with this distribution, we have $\mathcal{L}(X_n)=P_n$. Note that by iteratively defining random variables which are independent of everything previously defined, our resulting probability space $\Omega$ will be a large product space.

Note that $\xi$ controls whether the $X_n$ follow the law we have good control over, and we also want to avoid the set $B^m_0$. So define $E_m:=\{X\not \in B^m_0, \xi\le 1-\epsilon_m\}$. Then $P(E_m^c)<2\epsilon_m=2^{-(m-1)}$, and so by Borel-Cantelli, with probability 1, $E_m$ holds for all m larger than some threshold. Call this event $E:=\liminf_m E_m$; on E, the construction forces $X_n$ into the same small set as X for all large n, so $X_n \rightarrow X$. So we have almost sure convergence. But we can easily convert this to sure convergence by setting $X_n\equiv X$ on the null event where convergence fails, as modifying on a set of measure zero does not affect the distributions.

Omissions:

• Obviously, I have omitted the exact construction of the distribution of $Z_n$. This can be reconstructed from (*) very easily, but requires more notation than is ideal for this medium.
• It is necessary to remove any sets $B^m_i$ with zero measure under P for the conditioning to make sense. These can be added to $B^m_0$ without changing any of the required conditions.
• We haven’t dealt with any $X_n$ for $n<N_1$; these finitely many terms can be given any random variables with the correct laws, as they do not affect convergence.

The natural question to ask is what happens if we remove the restriction that the space be separable. There are indeed counterexamples to the existence of a Skorohod representation. The clearest example I’ve found so far is supported on (0,1) with a metric inducing the discrete topology. If time allows, I will explain this construction in a post shortly.

References

[1] – Billingsley – Convergence of Probability Measures, 2nd edition (1999)

# Coupling from the Past

In a long series of previous posts I have talked about mixing times for Markov chains. We consider how long it takes for the distribution of a particular Markov chain to approach equilibrium. We are particularly interested in the asymptotics as some parameter of the model, such as the size of the state space, grows to infinity.

But why are we interested in the underlying problem? The idea of Markov Chain Monte Carlo methods is to sample from an intractable distribution by instead sampling from a Markov chain which approximates the distribution well at large times. A distribution might be intractable because it is computationally demanding to work out the normalising constant, or it might be distributed uniformly on a complicated combinatorial set. If, however, the distribution is the equilibrium distribution of some Markov chain, then we know how to at least sample from a distribution which is close to the one we want. But we need to know how long to run the process. We will typically tolerate some small error in approximating the distribution (whether we measure this in terms of total variation distance or some other metric doesn’t really matter at this heuristic level), but we need to know how it scales. If we double the size of the system, do we need to double the number of iterations of the chain, or square it? This is really important if we are going to use this for large real-world models with finite computing power!

Sometimes though, an approximation is not enough. If we want an exact sample from the equilibrium distribution, Markov chains typically will not help us, as it is only in very artificial examples that the distribution after some finite time is actually the equilibrium distribution. One thing that we might use is a stationary time, which is a stopping time T for which $X_T\stackrel{d}{=}\pi$. Note that there is one trivial way to do this. We can sample Y from distribution $\pi$ before starting the process, then stop X at the first time T for which $X_T=Y$. But this is no help really, as we need to have Y in the first place!

So we are really interested in less trivial stationary times. Perhaps the best example is the top-to-random shuffle. Here we are given a pack of labelled cards, WLOG initially in descending order. At each step we move the top card in the pile to a randomly-chosen location in the pile (which includes back onto the top). Then it turns out that the first time we move the card originally at the bottom from the top to somewhere is a strong stationary time. This is fairly natural, as by this time, every card has been involved in at least one randomising event.

Anyway, so this gives a somewhat artificial way to sample from the uniform distribution on a pack of cards. This strong stationary time is almost surely finite, with distribution given by the coupon collector problem, for which the expectation grows as $n\log n$, where n is the number of cards.
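This is easy to check by simulation. The following sketch (deck size and trial count arbitrary) tracks the card that starts at the bottom and compares the mean of the stationary time with the coupon-collector value, roughly $n\log n$ (more precisely $nH_{n-1}+1$):

```python
import random

def top_to_random_stationary_time(n, rng):
    # Shuffle top-to-random until the card that started at the bottom
    # is itself moved from the top: the strong stationary time above.
    deck = list(range(n))              # card n-1 starts at the bottom
    t = 0
    while True:
        t += 1
        top = deck.pop(0)
        deck.insert(rng.randrange(n), top)   # uniform over the n slots
        if top == n - 1:
            return t

rng = random.Random(1)
n, trials = 10, 5000
mean = sum(top_to_random_stationary_time(n, rng) for _ in range(trials)) / trials
print(mean)  # close to n * H_{n-1} + 1, roughly n log n
```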

The problem with this method is that it is not easy in general to come up with a non-contrived stationary time such as this one. The idea of coupling from the past, discussed by some previous authors but introduced in this context by Propp and Wilson in the mid ’90s, is another method to achieve perfect sampling from the equilibrium distribution of a Markov chain. The idea here is to work backwards rather than forwards. The rest of this post, which discusses this idea, is based on the talk given at the Junior Probability Seminar by Irene, and on the chapter in the Levin, Peres, Wilmer book.

The key to the construction is a coupling of the transitions of a Markov chain. In the setting of a simple random walk, we have by construction a coupling of the transitions. It doesn’t matter which state we are at: we toss a coin to decide whether to move up or down, and we can do this without reference to our current position. Levin, Peres and Wilmer call this a random mapping representation in general, and it is yet another concept that is less scary than its definition might suggest.

Given a transition matrix P on state space S, such a representation is a function

$\phi: S\times[0,1]\rightarrow S,\text{ s.t. }\mathbb{P}(\phi(i,U)=j)=p_{ij},$

where U is a U(0,1) random variable independent of choice of i. In particular, once we have the random value of u, we can consider $\phi(i,u)$ as i varies, to obtain a random map $S\rightarrow S$. Crucially, this map is not necessarily a bijection.

Note first that there are many possibilities for constructing the representation $\phi$. For some chains and some representations, in particular random walks on vertex-transitive graphs (such as SRW, though for now we are restricting attention to finite state spaces), it is possible to choose $\phi$ so that it always gives a bijection, but it is also always possible to choose it so that there is some probability it doesn’t give a bijection.
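For instance, for SRW on the n-cycle one natural representation moves every state in the same direction, so each realised map is a bijection (a hypothetical sketch, not from the post):

```python
# Random mapping representation for SRW on the n-cycle: a single uniform u
# decides "step up or step down" simultaneously for every state.
def srw_cycle_map(n, u):
    step = 1 if u < 0.5 else -1
    return lambda i: (i + step) % n

phi = srw_cycle_map(5, 0.3)
print([phi(i) for i in range(5)])  # [1, 2, 3, 4, 0] -- a bijection
```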

Let $U_1,U_2,\ldots$ be an IID sequence of U[0,1] random variables, and write $\phi_i$ for the random map induced by $U_i$. Then consider the sequence of iterated maps:

$\phi_1, \phi_1\circ \phi_2, \ldots, \phi_1\circ\ldots\circ\phi_n,$

and let T be the (random) smallest time such that the image of $\phi_1\circ\ldots\circ \phi_T$ is a single state. Ie, as we go backwards in time through the maps $\phi_i$, we are gradually losing various states, corresponding to the maps not being bijections. Since the state space is finite, and the probability of not being a bijection is positive, it can be shown that T is almost surely finite. The claim then is that

$Y=\text{Im}(\phi_1\circ\ldots\circ \phi_T)$

is distributed as the equilibrium distribution of the chain. We finish by proving this.

Proof:Â Since the algorithm terminates after finite time almost surely, given any $\epsilon>0$, we can choose N such that the probability the algorithm stops in at most N steps is greater than $1-\epsilon$.

Now run the Markov chain from time -N, started in the equilibrium distribution, with the transition from time -t to -(t-1) given by the random mapping driven by $U_t$. Thus at time 0, the distribution of the chain is still the equilibrium distribution. But if we condition on the event $\{T\le N\}$, then $X_0=\phi_1\circ \ldots \circ\phi_N(X_{-N})=Y$ regardless of the initial value. So $\mathbb{P}(X_0\ne Y)<\epsilon$, and hence the result follows, since $\epsilon>0$ was arbitrary.
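The whole algorithm can be sketched in a few lines for a small finite chain. The two-state transition matrix and the inverse-CDF form of $\phi$ below are illustrative choices, not from the post:

```python
import random

def random_map(P, u):
    # Random mapping representation: from state i, use the single uniform u
    # against the cumulative probabilities of row i of P.
    def phi(i):
        total = 0.0
        for j, p in enumerate(P[i]):
            total += p
            if u < total:
                return j
        return len(P[i]) - 1
    return phi

def cftp(P, rng):
    # Coupling from the past: extend the sequence of maps further into the
    # past until the composition phi_1 o ... o phi_T has a one-point image.
    us = []                          # us[k] drives phi_{k+1}; reused each pass
    while True:
        us.append(rng.random())
        states = set(range(len(P)))
        for u in reversed(us):       # apply the deepest-past map first
            phi = random_map(P, u)
            states = {phi(s) for s in states}
        if len(states) == 1:
            return states.pop()

# Hypothetical two-state chain with stationary distribution (4/7, 3/7).
P = [[0.7, 0.3], [0.4, 0.6]]
rng = random.Random(0)
freq = sum(cftp(P, rng) == 0 for _ in range(20_000)) / 20_000
print(freq)  # approximately 4/7 = 0.571...
```

Crucially, the same uniforms `us` are reused on every pass: redrawing them each time would bias the sample.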

What makes this easier than strong stationary times is that we don’t have to be clever to come up with the stopping time. It is however still important to know how long on average it takes to run the algorithm. At the end of her talk, Irene showed how to adapt this algorithm to deal with Probabilistic Cellular Automata. Roughly speaking, these are a sequence of infinite strings of 0s and 1s. The value of some element is determined randomly as a function of the values in the row underneath, say the element directly underneath and the two either side. In that setting, if you start with a finite subsequence and couple from the past by looking down to lower rows, each time you drop down a row you consider one further element, so in fact the coupling from the past algorithm has to eliminate possibilities fast enough to make up for this, if we want to terminate almost surely in finite time.

Here’s a link to the paper which discusses this in fuller detail.