# Generating uniform trees

A long time ago, I wrote quite a few things about uniform trees. That is, a uniform choice from the $n^{n-2}$ unrooted trees with vertex set [n]. This enumeration, normally called Cayley’s formula, has several elegant arguments, including the classical Prufer bijection. But making a uniform choice from a large set is awkward, and so we seek more probabilistic methods to sample such a tree, which might also give insight into the structure of a ‘typical’ uniform tree.

In another historic post, I talked about the Aldous-Broder algorithm. Here’s a quick summary. We run a random walk on the complete graph $K_n$ started from a uniformly-chosen vertex. Every time we arrive at a vertex we haven’t visited before, we record the edge just traversed. Eventually we have visited all n vertices, so have recorded n-1 edges. It’s easy enough to convince yourself that these n-1 edges form a tree (how could there be a cycle?) and a bit more complicated to decide that the distribution of this tree is uniform.

It’s worth noting that this algorithm works to construct a uniform spanning tree on any connected base graph.
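As a concrete sketch (in Python; the function name and setup are mine, not from the original post), the walk-and-record procedure on $K_n$ looks like this:

```python
import random

def aldous_broder(n, seed=None):
    """Uniform spanning tree of K_n via the Aldous-Broder walk: run a
    random walk from a uniform start, and record the edge traversed on
    the first visit to each new vertex."""
    rng = random.Random(seed)
    current = rng.randrange(n)
    visited = {current}
    edges = []
    while len(visited) < n:
        # step to a uniform neighbour (in K_n: any other vertex)
        nxt = rng.choice([v for v in range(n) if v != current])
        if nxt not in visited:          # first-entrance edge
            visited.add(nxt)
            edges.append((current, nxt))
        current = nxt
    return edges

tree = aldous_broder(10, seed=1)
print(len(tree))   # 9 edges: a spanning tree on 10 vertices
```

To run it on a general connected base graph, only the neighbour-selection step changes: pick uniformly among the current vertex's neighbours in that graph.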

This post is about a few alternative constructions and interpretations of the uniform random tree. The first construction uses a Galton-Watson process. We take a Galton-Watson process where the offspring distribution is Poisson(1), and condition that the total population size is n. The resulting random tree has a root but no labels, however if we assign labels in [n] uniformly at random, the resulting rooted tree has the uniform distribution among rooted trees on [n].
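A minimal rejection sampler for this construction (a sketch — the `poisson1` helper and function names are mine; since the conditioning event has only polynomially decaying probability in the critical case, rejection is workable for moderate n):

```python
import math, random

def poisson1(rng):
    """Poisson(1) sample via Knuth's product method."""
    L = math.exp(-1.0)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

def conditioned_gw(n, seed=None):
    """Run Poisson(1) Galton-Watson trees until one has total progeny
    exactly n; return the tree as a parent list (vertex 0 is the root).
    Labelling the vertices uniformly at random with [n] then gives a
    uniform rooted labelled tree, per the claim above."""
    rng = random.Random(seed)
    while True:
        parent = [-1]           # vertex 0 is the root
        queue = [0]
        ok = True
        while queue and ok:
            v = queue.pop()
            for _ in range(poisson1(rng)):
                if len(parent) == n:    # overshoot: reject this attempt
                    ok = False
                    break
                parent.append(v)
                queue.append(len(parent) - 1)
        if ok and len(parent) == n:
            return parent

tree = conditioned_gw(8, seed=2)
print(len(tree))   # 8
```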

Proof

This is all about moving from ordered trees to non-ordered trees. That is, when setting up a Galton-Watson tree, we distinguish between the following two trees, drawn extremely roughly in Paint:

That is, it matters which of the first-generation vertices have three children. Anyway, for such a (rooted) ordered tree T with n vertices, the probability that the Galton-Watson process ends up equal to T is

$\mathbb{P}(GW = T) = \prod_{v\in T} \frac{e^{-1}}{C(v)!} = e^{-n} \prod_{v\in T}\frac{1}{C(v)!},$

where $C(v)$ is the number of children of a vertex $v\in T$. Then, since $\mathbb{P}( |GW|=n )$ is a function of n, we find

$\mathbb{P}(GW=T \,\big|\, |GW|=n) = f(n)\prod_{v\in T} \frac{1}{C(v)!},$

where f(n) is a function of n alone (ie depends on T only through its size n).

But given an unordered rooted tree t, labelled by [n], there are $\prod_{v \in t} C(v)!$ ordered trees associated to t in the natural way. Furthermore, if we take the Poisson Galton-Watson tree conditioned to have total population size n, and label uniformly at random with [n], we obtain any one of these ordered trees with probability $\frac{f(n)}{n!} \prod_{v\in t} \frac{1}{C(v)!}$. So the probability that we have t after we forget about the ordering is $\frac{f(n)}{n!}$, which is a function of n alone, and so the distribution is uniform among the set of rooted unordered trees labelled by [n], exactly as required.

Heuristic for Poisson offspring distribution

In this proof, the fact that $\mathbb{P}(C(v)=k)\propto \frac{1}{k!}$ exactly balances the number of orderings of the k children explains why Poisson(1) works out. Indeed, you can see in the proof that Poisson(c) works equally well, though when $c\ne 1$, the event we are conditioning on (namely that the total population size is n) has probability decaying exponentially in n, whereas for c=1, the branching process is critical, and the probability decays polynomially.

We can provide independent motivation though, from the Aldous-Broder construction. Both the conditioned Galton-Watson construction and the A-B algorithm supply the tree with a root, so we’ll keep that, and look at the distribution of the degree of the root as constructed by A-B. Let $\rho=v_1,v_2,v_3,\ldots$ be the vertices of $[n]$, ordered by their discovery during the construction. Then $\rho$ is definitely connected by an edge to $v_2$, but thereafter it follows by an elementary check that the probability that $\rho$ is connected to $v_m$ is $\frac{1}{n-1}$, independently across all m. In other words, the distribution of the degree of $\rho$ in the tree as constructed by A-B is

$1+ \mathrm{Bin}\left(n-2,\frac{1}{n-1}\right) \approx 1+\mathrm{Poisson}(1).$
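This approximation can be checked numerically against the exact pmfs (a sketch; no simulation involved):

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

n = 2000
p = 1.0 / (n - 1)
for k in range(6):
    # degree of the root is 1 + k with these probabilities
    b = binom_pmf(k, n - 2, p)
    po = math.exp(-1.0) / math.factorial(k)
    print(k, round(b, 5), round(po, 5))
```

The two columns agree to roughly $1/n$ accuracy, as the Poisson approximation to the binomial suggests.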

Now, in the Galton-Watson process, conditioning the tree to have fixed, large size changes the offspring distribution of the root. Conveniently though, in a limiting sense it’s the same change as conditioning the tree to have size at least n. Since these events are monotone in n, it’s possible to take a limit of the conditioning events, and interpret the result as the Galton-Watson tree conditioned to survive. It’s a beautiful result that this interpretation can be formalised as a local limit. The limiting spine decomposition consists of an infinite spine, where the offspring distribution is a size-biased version of the original offspring distribution (and so in particular, always has at least one child) and where non-spine vertices have the original distribution.

In particular, the number of offspring of the root is size-biased, and it is well-known and not hard to check that size-biasing Poisson(c) gives 1+Poisson(c). So in fact we have, in an appropriate limiting sense in both objects, a match between the degree distribution of the root in the uniform tree, and in the conditioned Galton-Watson tree.

This isn’t supposed to justify why a conditioned Galton-Watson tree is relevant a priori (especially the unconditional independence of degrees), but it does explain why Poisson offspring distributions are relevant.

Construction via G(N,p) and the random cluster model

The main reason uniform trees were important to my thesis was their appearance in the Erdos-Renyi random graph G(N,p). The probability that vertices {1, …, n} form a tree component in G(N,p) with some particular structure is

$p^{n-1} (1-p)^{\binom{n}{2}-(n-1)} \times (1-p)^{n(N-n)}.$

Here, the first two terms give the probability that the graph structure on {1, …, n} is correct, and the final term gives the probability of the (independent) event that these vertices are not connected to anything else in the graph. In particular, this has no dependence on the tree structure chosen on [n] (for example, whether it should be a path or a star – both examples of trees). So the conditional distribution is uniform among all trees.
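We can verify this conditional uniformity exactly on a tiny example (a sketch of my own: N=5, with {0,1,2} playing the role of [n] and an arbitrary p=1/3, enumerating all $2^{10}$ graphs with exact arithmetic):

```python
from fractions import Fraction
from itertools import combinations

N = 5
p = Fraction(1, 3)
all_edges = list(combinations(range(N), 2))      # 10 possible edges
S = {0, 1, 2}
boundary = [e for e in all_edges if len(set(e) & S) == 1]

def graph_prob(es):
    return p**len(es) * (1 - p)**(len(all_edges) - len(es))

# the three labelled trees on {0,1,2}
trees = [{(0, 1), (1, 2)}, {(0, 1), (0, 2)}, {(0, 2), (1, 2)}]

probs = []
for t in trees:
    total = Fraction(0)
    for r in range(len(all_edges) + 1):
        for es in combinations(all_edges, r):
            es = set(es)
            if {e for e in es if set(e) <= S} != t:
                continue            # wrong structure inside S
            if any(e in es for e in boundary):
                continue            # S would not be a full component
            total += graph_prob(es)
    probs.append(total)

print(probs[0] == probs[1] == probs[2])   # True: all trees equally likely
```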

If we work in some limiting regime, where $pn\rightarrow 0$ (for example if n is fixed and $p=\frac{1}{N}\rightarrow 0$), then we can get away asymptotically with less strong conditioning. Suppose we condition instead just that [n] form a component. Now, there are more ways to form a connected graph with one cycle on [n] than there are trees on [n], but the former all require an extra edge, and so the probability that a given one such tree-with-extra-edge appears as the restriction to [n] in G(N,p) is asymptotically negligible compared to the probability that the restriction to [n] of G(N,p) is a tree. Naturally, the local limit of components in G(N,c/N) is a Poisson(c) Galton-Watson branching process, and so this is all consistent with the original construction.

One slightly unsatisfying aspect to this construction is that we have to embed the tree of size [n] within a much larger graph on [N] to see uniform trees. We can’t choose a scaling p=p(n) such that G(n,p) itself concentrates on trees. To guarantee connectivity with high probability, we need to take $p> \frac{\log n}{n}$, but by this threshold, the graph has (many) cycles with high probability.

At this PIMS summer school in Vancouver, one of the courses is focusing on lattice spin models, including the random cluster model, which we now briefly define. We start with some underlying graph G. From a physical motivation, we might take G to be $\mathbb{Z}^d$ or some finite subset of it, or a d-ary tree, or the complete graph $K_N$. As in classical bond percolation (note G(N,p) is bond percolation on $K_N$), a random subset of the edges of G are included, or declared open. The probability of a given configuration w, with e open edges is proportional to

$p^e (1-p)^{|E(G)| - e} q^{k(w)},$ (*)

where the edge-weight is $p\in(0,1)$ as usual, the cluster weight is $q\in (0,\infty)$, and $k(w)$ counts the number of connected components in configuration w. When q=1, we recover classical bond percolation (including G(N,p)), while for q>1, this cluster-reweighting favours having more components, and q<1 favours fewer components. Note that in the case $q\ne 1$, the normalising constant (or partition function) of (*) is generally intractable to calculate explicitly.

As in the Erdos-Renyi graph, consider fixing the underlying graph G, and taking $p\rightarrow 0$, but also taking $\frac{q}{p}\rightarrow 0$. So the resulting graph asymptotically ‘wants to have as few edges as possible, but really wants to have as few components as possible’. In particular: 1) all spanning trees of G are equally likely; 2) any configuration with more than one component has asymptotically negligible probability relative to any tree; 3) any graph with a cycle has #components + #edges greater than that of a tree, and so also has asymptotically negligible probability relative to any tree.

In other words, the limit of the distribution is the uniform spanning tree of G, and so this (like Aldous-Broder) is a substantial generalisation, which constructs the uniform random tree in the special case where $G=K_n$.
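Here is an exact check of this limiting heuristic on $G=K_4$ (a sketch of my own, with p=1/100 and q=p², so both p and q/p are small):

```python
from fractions import Fraction
from itertools import combinations

p = Fraction(1, 100)
q = p * p                               # so q/p is also small
E = list(combinations(range(4), 2))     # the 6 edges of K_4

def n_components(es):
    parent = list(range(4))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    c = 4
    for u, v in es:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            c -= 1
    return c

# random cluster weight of each of the 2^6 configurations
weights = {}
for r in range(7):
    for es in combinations(E, r):
        weights[es] = p**r * (1 - p)**(6 - r) * q**n_components(es)

trees = [es for es in weights if len(es) == 3 and n_components(es) == 1]
print(len(trees))                          # 16 = 4^{4-2}, by Cayley
print(len({weights[t] for t in trees}))    # 1: spanning trees equally weighted
tree_mass = sum(weights[t] for t in trees) / sum(weights.values())
print(float(tree_mass))                    # close to 1
```

Every spanning tree has 3 edges and 1 component, so their weights are exactly equal for any p and q; making p and q/p small is what pushes the total mass onto the trees.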

# Azuma-Hoeffding Inequality

It’s (probably) my last Michaelmas term in Oxford, at least for the time being, and so also the last time giving tutorials on either of the probability courses that students take in their first two years. This time, I’m teaching the second years, and as usual the aim of the majority of the first half of the course is to acquire as sophisticated an understanding as possible of the Central Limit Theorem. I feel a key step is appreciating that the CLT tells you about the correct scaling for the deviations from the mean of these partial sums of IID random variables. The fact that these deviations on this correct scaling converge in law to a normal distribution, irrespective (apart from mild conditions) of the underlying distribution, is interesting, but should be viewed as a secondary, bonus, property.

Emphasising the scaling of deviations in CLT motivates the next sections of this (or any) course. We develop tools like Markov’s inequality to control the probability that a random variable is much larger than its expectation, and experiment with applying this to various functions of the random variable to get stronger bounds. When the moment generating function exists, this is an excellent choice for this analysis. We end up with a so-called Chernoff bound. For example, we might consider the probability that when we toss N coins, at least a proportion ¾ are Heads. A Chernoff bound says that this probability decays exponentially in N.

One direction to take is to ask how to control precisely the parameter of this exponential decay, which leads to Cramer’s theorem and the basis of the theory of Large Deviations. An alternative direction is to observe that the signed difference between the partial sums of independent random variables and their means is an example of a martingale, albeit not a very interesting one, since in general the increments of a martingale need not be independent. So we might ask: under what circumstances can we show exponential tail bounds on the deviation of a martingale from its mean (that is, its initial value) at a fixed (perhaps large) time?

Azuma-Hoeffding inequality

The following result was derived and used by various authors in the 60s, including Azuma and Hoeffding (separately), but also others.

Let $X_0,X_1,X_2,\ldots$ be a martingale with respect to some filtration $(\mathcal{F}_i)$, and assume that the absolute value of each increment $|X_i-X_{i-1}|$ is bounded almost surely by some $c_i<\infty$. Then, recalling that $\mathbb{E}[X_n|\mathcal{F}_0]=X_0$, we have

$\mathbb{P}(X_n \ge X_0+t) \le \exp\left( -\frac{t^2}{2\sum_{i=1}^n c_i^2}\right).$
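As a sanity check (a sketch of my own), we can compare the bound with exact tail probabilities for the simplest bounded-increment martingale, simple random walk with every $c_i=1$:

```python
import math

def srw_tail(n, t):
    """Exact P(S_n >= t) for S_n a sum of n fair +/-1 steps."""
    return sum(math.comb(n, h) for h in range(n + 1) if 2 * h - n >= t) / 2**n

def azuma_bound(n, t):
    return math.exp(-t * t / (2.0 * n))    # all c_i = 1

n = 100
for t in [10, 20, 30]:
    print(t, srw_tail(n, t) <= azuma_bound(n, t))   # True each time
```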

Proof

We apply a Chernoff argument to each increment. First, observe that for Y a distribution supported on [-1,1] with mean zero, by convexity $\mathbb{E}[e^{tY}]$ is maximised by taking Y equal to +1 and -1 each with probability ½. Thus

$\mathbb{E}[e^{tY}]\le \frac12 e^t + \frac 12 e^{-t}=\cosh(t) \le e^{t^2/2},$

where the final inequality follows by directly comparing the Taylor series.

We’ll use this shortly. Before that, we start the usual argument for a Chernoff bound on $X_n-X_0$.

$\mathbb{P}(X_n-X_0\ge t) = \mathbb{P}(e^{\theta(X_n-X_0)}\ge e^{\theta t})\le e^{-\theta t} \mathbb{E}[e^{\theta(X_n-X_0)}]$

$= e^{-\theta t} \mathbb{E}[\mathbb{E}[e^{\theta((X_n-X_{n-1}) +X_{n-1}-X_0)} | \mathcal{F}_{n-1}]]$

$= e^{-\theta t} \mathbb{E}[e^{\theta(X_{n-1}-X_0)} \mathbb{E}[e^{\theta(X_n-X_{n-1})}|\mathcal{F}_{n-1}] ],$

and our preliminary result allows us to control this inner expectation

$\le e^{-\theta t} e^{\theta^2c_n^2/2} \mathbb{E}[e^{\theta(X_{n-1}-X_0)}].$

So now we can apply this inductively to obtain

$\mathbb{P}(X_n-X_0\ge t) \le e^{-\theta t+ \frac{\theta^2}{2} \sum_{i=1}^n c_i^2}.$

Finally, as usual in such an argument, we need to choose a sensible value of the free parameter $\theta$, and naturally we want to choose it to make this RHS as small as possible, which is achieved when $\theta = \frac{t}{\sum_{i=1}^n c_i^2}$, and leads exactly to the statement of the inequality.

Applications

Unsurprisingly, we can easily apply this to the process of partial sums of IID random variables with mean zero and bounded support, to recover a Chernoff bound.

A more interesting example involves revealing the state (ie open or closed) of the edges of an Erdos-Renyi graph one at a time. We need to examine some quantitative property of the graph which can’t ever be heavily influenced by the presence or non-presence of a single given edge. The size of the largest clique, or the largest cut, are good examples. Adding or removing an edge can change these quantities by at most one.

So if we order the edges, and let the filtration $\mathcal{F}_k$ be generated by the state of the first k edges in this ordering, then $X_k=\mathbb{E}[\text{max cut}| \mathcal{F}_k]$ is a martingale. (A martingale constructed backwards in this fashion by conditioning a final state on a filtration is sometimes called a Doob martingale.) Using A-H on this shows that the deviations from the mean are of order $\sqrt{N}$, where N is the size of the graph. In the sparse case, it can be justified fairly easily that the maximum cut has size $\Theta(N)$, since for example there will always be some positive proportion of isolated vertices. However, accurate asymptotics for the mean of this quantity seem (at least after a brief search of the literature – please do correct me if this is wrong!) to be unknown. So this might be an example of the curious situation where we can control the deviations around the mean better than the mean itself!
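On a toy scale we can compute such a Doob martingale exactly and check the bounded-increments hypothesis (a sketch of my own on G(4,1/2), revealing the 6 possible edges of $K_4$ one at a time):

```python
from itertools import product

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def max_cut(state):
    """Max cut of the graph whose i-th edge is present iff state[i] == 1,
    by brute force over the 2^4 bipartitions."""
    best = 0
    for sides in product([0, 1], repeat=4):
        cut = sum(s for s, (u, v) in zip(state, edges) if sides[u] != sides[v])
        best = max(best, cut)
    return best

def doob(prefix):
    """X_k = E[max cut | states of the first k edges], edges open w.p. 1/2."""
    m = len(edges) - len(prefix)
    vals = [max_cut(tuple(prefix) + rest) for rest in product([0, 1], repeat=m)]
    return sum(vals) / len(vals)

# the martingale has bounded increments: each reveal moves it by at most 1
for state in product([0, 1], repeat=6):
    for k in range(6):
        assert abs(doob(state[:k + 1]) - doob(state[:k])) <= 1
print("all increments bounded by 1")
```

The assertion is exactly the Lipschitz property that A-H requires: changing one edge changes the max cut by at most one, so revealing one edge moves the conditional expectation by at most one.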

Beyond bounded increments

One observation we might make about the proof is that it is tight only if all the increments $X_i-X_{i-1}$ are supported on $\{-c_i,+c_i\}$, which is stronger than demanding that the absolute value is bounded. If in fact we have $X_i-X_{i-1}\in[-d_i,c_i]$ almost surely, then, with a more detailed preliminary lemma, we can have instead a bound of $\exp\left( -\frac{2t^2}{\sum_{i=1}^n (c_i+d_i)^2} \right)$.

While it isn’t a problem in these examples, in many settings the restriction to bounded increments is likely to be the obstacle to applying A-H. Indeed, in the technical corner of my current research problem, this is exactly the challenge I faced. Fortunately, at least in principle, all is not necessarily lost. We might, for example, be able to establish bounds $(c_i)$ as described, such that the probability that any $|X_i-X_{i-1}|$ exceeds its $c_i$ is very small. You could then construct a coupled process $(Y_i)$ that is equal to $X_i$ whenever the increments are within the given range, and something else otherwise. For Y to fit the conditions of A-H, the challenge is to ensure we can do this such that the increments remain bounded (ie the ‘something else’ also has to be within $[-c_i,c_i]$) and that Y remains a martingale. The total probability of a deviation of X is then bounded above by the probability of Y experiencing that deviation, plus the probability of X and Y decoupling. Saying anything about the latter probability is hard in general without knowing more about the dependence structure of X itself.

# Enumerating Forests

I’ve just got back from a visit to Budapest University of Technology, where it was very pleasant to be invited to give a talk, as well as continuing the discussion of our research programme with Balazs. My talk concerned a limit for the exploration process of an Erdos-Renyi random graph conditioned to have no cycles. Watch this space (hopefully very soon) for a fully rigorous account of this. In any case, my timings were not as slick as I would like, and I had to miss out a chunk I’d planned to say about a result of Britikov concerning enumerating unrooted forests. It therefore feels like an excellent time to write something again, and explain this paper, which you might be able to find here, if you have appropriate journal rights.

We are interested in calculating $a_{n,m}$, the number of forests with vertex set [n] consisting of m unrooted trees. Recall that if we were interested in rooted trees, we could appeal to Prufer codes to show that there are $m n^{n-m-1}$ such forests, and indeed results of Pitman give a coalescent/fragmentation scheme as m varies between 1 and n-1. It seems that there is no neat combinatorial re-interpretation of the unrooted case though, so Britikov uses an analytic method.

We know that

$a_{n,m}= \frac{n!}{m!} \sum_{\substack{k_1+\ldots+k_m=n\\ k_i\ge 1}} \prod_{j=1}^m \frac{k_j^{k_j-2}}{k_j!}.$

To see this, observe that the $k_j$s correspond to the sizes of the m trees in the forest; $\frac{n!}{\prod k_j!}$ gives the multinomial number of ways to assign vertices to the trees; given the labels for a tree of size $k_j$, there are $k_j^{k_j-2}$ ways to make up the tree itself; and $\frac{1}{m!}$ accounts for the fact that the trees have no order.
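This formula is easy to sanity-check by brute force for small n (a sketch of my own: n=5, enumerating all $2^{10}$ edge subsets of $K_5$):

```python
from fractions import Fraction
from itertools import combinations
from math import factorial

n = 5
E = list(combinations(range(n), 2))

def is_forest(es):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in es:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

# direct count: a forest with r edges has m = n - r trees
counts = {m: 0 for m in range(1, n + 1)}
for r in range(len(E) + 1):
    for es in combinations(E, r):
        if is_forest(es):
            counts[n - r] += 1

def trees_on(k):                     # Cayley: k^{k-2} labelled trees
    return 1 if k == 1 else k**(k - 2)

def a(n, m):
    """The displayed sum over compositions k_1 + ... + k_m = n."""
    total = Fraction(0)
    for bars in combinations(range(1, n), m - 1):
        cuts = (0,) + bars + (n,)
        term = Fraction(1)
        for i in range(m):
            k = cuts[i + 1] - cuts[i]
            term *= Fraction(trees_on(k), factorial(k))
        total += term
    return total * factorial(n) / factorial(m)

print([counts[m] for m in range(1, 6)])
print([int(a(5, m)) for m in range(1, 6)])   # the two lists agree
```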

What we would really like to do is to take the uniform distribution on the set of all labelled trees, then simulate m IID copies of this distribution, and condition the union to contain precisely n vertices. But obviously this is an infinite set, so we cannot choose uniformly from it. Instead, we can tilt so that large trees are unlikely. In particular, for each x we define

$\mathbb{P}(\xi=k) \propto \frac{k^{k-2} x^k}{k!}$,

and define the normalising constant

$B(x):= \sum_{k\ge 1} \frac{k^{k-2}x^k}{k!},$

whenever it exists. It turns out that $x\le e^{-1}$ is precisely the condition for $B(x)<\infty$. Note now that if $\xi_1,\xi_2,\ldots$ are IID copies of $\xi$, then

$\mathbb{P}(\xi_1+\ldots+\xi_m=n) = \frac{x^n}{B(x)^m} \sum_{k_1+\ldots + k_m=n} \prod_{j=1}^m \frac{k_j^{k_j-2}}{k_j!},$

and so we obtain

$a_{n,m}= \frac{n!}{m!} \frac{B(x)^m}{x^n} \mathbb{P}(\xi_1+\ldots + \xi_m=n).$

So asymptotics for $a_{n,m}$ might follow from laws of large numbers for this distribution $\xi$.

So far, we haven’t said anything about how to choose this value x. But observe that if you want to have lots of trees in the forest, then the individual trees should generally be small, so we take x small to tilt away from a preference for large trees. It turns out that there is a similar interpretation of criticality for forests as for general graphs, and taking x equal to 1/e, the radius of convergence, works well in this setting. If you want even fewer trees, there is no option to take x larger than 1/e, but instead one can use large deviations machinery rather than law of large numbers asymptotics.

We will be interested in asymptotics of the characteristic function of $\xi$ for x=1/e. In particular $\mathbb{E}[e^{it\xi}]=\frac{B(xe^{it})}{B(x)}$, and it will be enough to clarify the behaviour of this as $t\rightarrow 0$. It’s easier to work with a related analytic function

$\theta(x)=\sum_{k\ge 1} \frac{k^{k-1}x^k}{k!},$

which is related to B by $\theta(x)=xB'(x)$. What now feels like a long time ago I wrote a masters’ thesis on the subject of multiplicative coalescence, and this function shows up as the generating function of the solutions to Smoluchowski’s equations with monodisperse initial conditions, which are themselves closely related to the Borel distributions. In any case, several of the early papers on this topic made progress by establishing that the radius of convergence is 1/e, and that $\theta(x)e^{-\theta(x)}=x$ everywhere where $|x|\le 1/e$. We want to consider x=1/e, for which $\theta=1$.

Note that $\mathbb{E}\xi = \frac{\theta(x)}{B(x)}$, so we will make progress by relating $B(x),\theta(x)$ in two ways. One way involves playing around with contour integrals in a fashion that is clear in print, but involves quite a lot of notation. The second way is the Renyi relation which asserts that $\theta(x)=B(x)+\frac{\theta(x)^2}{2}$. We will briefly give a combinatorial proof. Observe that after multiplying through by factorials and interpreting the square of a generating function, this is equivalent to

$k^{k-1} = k^{k-2} + \frac12 \sum_{\substack{l+m=k\\l,m\ge 1}} l^{l-1}m^{m-1}\binom{k}{l},$

for all k. As we might expect from the appearance of this equality, we can prove it using a bijection on trees. Obviously on the LHS we have the size of the set of rooted trees on [k]. Now consider the set of pairs of disjoint rooted trees with vertex set [k]. This second term on the RHS is clearly the size of this set. Given an element of this set, join up the two roots, and choose whichever root was not initially in the same tree as 1 to be the new root. We claim this gives a bijection between this set, and the set of rooted trees on [k], for which 1 is not the root. Given the latter, the only pair of trees that leads to the right rooted tree on [k] under this mapping is given by cutting off the unique edge incident to the root that separates the root and vertex 1. In particular, since there is a canonical bijection between rooted trees for which 1 is the root, and unrooted trees (!), we can conclude the Renyi relation.
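The identity behind this bijection is easy to verify directly for small k (a sketch of my own):

```python
from math import comb

def lhs(k):
    return k**(k - 1)

def rhs(k):
    base = 1 if k == 1 else k**(k - 2)
    pairs = sum(l**(l - 1) * (k - l)**(k - l - 1) * comb(k, l)
                for l in range(1, k))
    return base + pairs // 2    # the sum is symmetric in l and k-l, so even

for k in range(1, 9):
    print(k, lhs(k) == rhs(k))  # True for every k
```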

The Renyi relation now gives $\mathbb{E}\xi = \frac{\theta(x)}{B(x)}=2$ when x=1/e. If we wanted, we could show that the variance is infinite, which is not completely surprising, as the parameter x lies on the radius of convergence of the generating function.
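These values are easy to confirm numerically (a sketch of my own: truncating the series at x=1/e, working with logs to avoid overflow; convergence of the θ series is slow since its terms decay like $k^{-3/2}$):

```python
import math

x_log = -1.0                         # log of x = 1/e

def theta_term(k):                   # k^{k-1} x^k / k!
    return math.exp((k - 1) * math.log(k) + k * x_log - math.lgamma(k + 1))

def B_term(k):                       # k^{k-2} x^k / k!
    return math.exp((k - 2) * math.log(k) + k * x_log - math.lgamma(k + 1))

K = 200000
theta = sum(theta_term(k) for k in range(1, K))
B = sum(B_term(k) for k in range(1, K))
# expect theta close to 1, B close to 1/2, and E[xi] = theta/B close to 2
print(round(theta, 3), round(B, 3), round(theta / B, 3))
```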

Now, playing around with contour integrals, and being careful about which branches to take, leads to the asymptotic as $t\rightarrow 0$

$\mathbb{E}[ e^{it\xi}] = 1+2it + \frac{2}{3}i |2t|^{3/2} (i\mathrm{sign}(t))^{3/2} + o(|t|^{3/2}).$

So from this, we can show that the characteristic function of the rescaled centred partial sum $\frac{\xi_1+\ldots+\xi_N-2N}{bN^{2/3}}$ converges to $\exp(-|t|^{3/2}\exp(\frac{i\pi}{4}\mathrm{sign} t))$, where $b= (32/9)^{1/3}$ is a constant arising out of the previous step.

We recognise this as the characteristic function of the stable distribution with parameters 3/2 and -1. In particular, we know now that $\xi$ is in the domain of attraction for a stable-3/2 distribution. If we wanted a version of the central limit theorem for such partial sums, we could have that, but since we care about the partial sums of the $\xi_i$s taking a specific value, rather than a range of values on the scale of the fluctuations, we actually need a local limit theorem.

To make this clear, let’s return to the simplest example of the CLT, with some random variables with mean $\mu$ and variance $\sigma^2<\infty$. Then the partial sums satisfy

$\mathbb{P}(\mu N + a\sigma\sqrt{N} \le S_N \le \mu N+b\sigma\sqrt{N}) \rightarrow \int_a^b f_{\mathcal N}(x)dx,$

as $N\rightarrow\infty$. But what about the probability of $S_N$ taking a particular value m that lies between $\mu N+a\sigma \sqrt{N}$ and $\mu N + b\sigma \sqrt{N}$? If the underlying distribution were continuous, this would be uncontroversial – considering the probability of lying in a range that is smaller than the scale of the CLT can be shown in a similar way to the CLT itself. A local limit theorem asserts that when the underlying distribution is supported on some lattice, most naturally the integers, then these probabilities are in the limit roughly the same whenever m is close to $\mu N+a\sigma\sqrt{N}$.

In this setting, a result of Ibragimov and Linnik that I have struggled to find anywhere in print (especially in English) gives us local limit theory for integer-supported distributions in the domain of attraction of a stable distribution. Taking p( ) to be the density of this distribution, we obtain

$bm^{2/3}\mathbb{P}(\xi_1+\ldots+\xi_m=n) - p(\frac{n-2m}{b m^{2/3}}) \rightarrow 0$

as $n\rightarrow\infty$, uniformly on any set of m for which $z= \frac{n-2m}{bm^{2/3}}$ is bounded. Conveniently, the two occurrences of b cancel, and Britikov obtains

$a_{n,m} = (1+o(1)) \frac{\sqrt{2\pi} n^{n-1/6}}{2^{n-m}(n-m)!} p\left(\frac{n-2m}{n^{2/3}}\right),$

uniformly in the same sense as before.

# Poisson Random Measures

[This is a companion to the previous post. They explore different aspects of the same problem which I have been thinking about from a research point of view. So that they can be read independently, there has inevitably been some overlap.]

As I explained in passing previously, Poisson Random Measures have come up in my current research project. Indeed, the context where they have appeared seems like a very good motivation for considering the construction and some properties of PRMs.

We begin not with a Poisson variable, but with a standard Erdos-Renyi random graph $G(n,\frac{c}{n})$. The local limit of a component in this random graph is given by a Galton-Watson branching process with Poisson(c) offspring distribution. Recall that a local limit is a description of what the structure looks like near a given (or random) vertex. Since the vertices in G(n,p) are exchangeable, the choice of root matters little. Anyway, the number of neighbours of our root in the graph is given by Bin(n-1,c/n). Suppose that the root $v_0$ has k neighbours. Then if we are just interested in determining the vertices in the component, we can ignore the possibility of further edges between these neighbours. So if we pick one of the neighbours of the root, say $v_1$, and count the number of neighbours of this vertex that we haven’t already considered, this is distributed as Bin(n-1-k,c/n), since we discount the root and the k neighbours of the root.

Then, as n grows large, Bin(n-1,c/n) converges in distribution to Po(c), and, except on a very unlikely event whose probability we can control if necessary, so does Bin(n-1-k,c/n). Indeed if we consider a set of K vertices which are already connected in some way, then the distribution of the number of neighbours of one of them which we haven’t already considered is still Po(c) in the limit.

Now we consider what happens if we declare the graph to be inhomogeneous. The simplest possible way to achieve this is to specify two types of vertices, say type A and type B. Then we specify the proportion of vertices of each type, and the probability that there is an edge between two vertices of given types. This is best given by a symmetric matrix. So for example, if we wanted a random bipartite graph, we could achieve this as described by setting all the diagonal entries of the matrix to be zero.

So does the local limit extend to this setting? Yes, unsurprisingly it does. To be concrete, let’s say that the proportions of types A and B are a and b respectively, and the probabilities of having edges between vertices of various types are given by $P=(p_{ij}/n)_{i,j\in\{A,B\}}$. So we can proceed exactly as before, only now we have to count how many type A neighbours and how many type B neighbours we see at all stages. We have to specify the type of our starting vertex. Suppose for now that it is type A. Then the number of type A neighbours is distributed as

$\text{Bin}(an,p_{AA}/n)\stackrel{d}{\rightarrow}\text{Po}(ap_{AA})$,

and similarly the limiting number of type B neighbours is $\sim \text{Po}(bp_{AB})$. Crucially, this is independent of the number of type A neighbours. The argument extends naturally to later generations, and the result is exactly a multitype Galton-Watson process as defined in the previous post.

My motivating model is the forest fire. Here, components get burned when they are large and reduced to singletons. It is therefore natural to talk about the ‘age’ of a vertex, that is, how long has elapsed since it was last burned. If we are interested in the forest fire process at some fixed time T>1, that is, once burning has started, then we can describe it as an inhomogeneous random graph, given that we know the ages of the vertices.

For, given two vertices with ages s and t, where WLOG s<t, we know that the older vertex could not have been joined to the other vertex between times T-t and T-s. Why? Well, if it had, then it too would have been burned at time T-s when the other vertex was burned. So the only possibility is that they might have been joined by an edge between times T-s and T. Since each edge arrives at rate 1/n, the probability that this happens is $1-e^{-s/n}\approx \frac{s}{n}$. Indeed, in general the probability that two vertices of ages s and t are joined at time T is $\frac{s\wedge t}{n}$.

Again at fixed time T>1, the sequence of ages of the vertices converges weakly to some fixed distribution (which depends on T) as the number of vertices grows to infinity. We can then recover the graph structure by assigning ages according to this distribution, then growing the inhomogeneous random graph with the kernel as described. The question is: when we look for a local limit, how to do we describe the offspring distribution?

Note that in the limit, components will be burned continuously, so the distribution of possible ages is continuous (with an atom at T for those vertices which have never been burned). So if we try to calculate the distribution of the number of neighbours of age exactly s, we are going to be doomed, because with probability 1 there is no vertex of age s anywhere!

The answer is that the offspring distribution is given by a Poisson Random Measure. You can think of this as a Poisson Point Process where the intensity is non-constant. For example, let us consider how many neighbours we expect to have with ages in [s,s+ds]. Let us suppose the age of our root is t>s+ds for now. Assuming the distribution of ages, $f(\cdot)$, is positive and continuous, the number of vertices with these ages in the system is roughly nf(s)ds, and so the number of neighbours with this property is roughly $\text{Bin}(nf(s)ds,\frac{s}{n})$. In particular, this does have a Poisson limit. We need to be careful about whether this Poisson limit is preserved by the approximation. In fact this is fine. Let’s assume WLOG that f is increasing at s. Then the number of age-[s,s+ds] neighbours can be stochastically bounded between $\text{Bin}(nf(s)ds,\frac{s}{n})$ and $\text{Bin}(nf(s+ds)ds,\frac{s+ds}{n})$. As n grows, these converge in distribution to two Poisson random variables, and then we can let ds go to zero. Note that for full formalism, we may need to account for the large deviations event that the number of age-s vertices in the system is noticeably different from its expectation. Whether this is necessary depends on whether the ages are assigned deterministically, or drawn IID from f.

One important result to be drawn from this example is that the numbers of offspring from disjoint type sets, say $[s_1,s_2], [t_1,t_2]$, are independent, for the same reason as in the two-type setting, namely that the underlying binomial variables are independent. We are, after all, testing different sets of vertices! The other is that the number of neighbours with ages in some range is Poisson. Notice that these two results are consistent. The number of neighbours with ages in the set $[s_1,s_2]\cup [t_1,t_2]$ is given by the sum of two independent Poisson RVs, and hence is Poisson itself. The parameter of the sum RV is given by the sum of the original parameters.

These are essentially all the ingredients required for the definition of a Poisson Random Measure. Note that the set of offspring is a measure on the space of ages, or types. (Obviously, this isn’t a probability measure.) We take a general space E, with sigma algebra $\mathcal{E}$, and an underlying measure $\mu$ on E. We want a distribution for random measures $\nu$ on E, such that for each measurable set $A\in\mathcal{E}$, the quantity $\nu(A)$, which is random because $\nu$ is, is distributed as $\text{Po}(\mu(A))$, and furthermore, for disjoint $A,B\in\mathcal{E}$, the random variables $\nu(A),\nu(B)$ are independent.

If $M=\mu(E)<\infty$, then constructing such a random measure is not too hard using a thinning property. We know that $\nu(E)\stackrel{d}{=}\text{Po}(M)$, and so if we sample a Poisson(M) number of points with distribution given by $\frac{\mu(\cdot)}{M}$, we get precisely the desired PRM. Proving this is the unique distribution with this property is best done using properties of the Laplace transform, which uniquely defines the law of a random measure in the same manner that the moment generating function defines the law of a random variable. Here the argument is a function, rather than the single variable of the MGF, reflecting the fact that the space of measures is a lot ‘bigger’ than the reals, where a random variable is supported. We can extend this construction to sigma-finite measure spaces, that is, where E is a countable union of sets of finite measure.
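The thinning construction can be sketched in a few lines. The intensity $\mu(dx)=4x\,dx$ on $E=[0,1]$ is an arbitrary illustrative choice with $M=\mu(E)=2$, so $\mu/M$ has CDF $x^2$ and we can sample points by inverse CDF. The sketch checks that the counts in $A=[0,\frac12]$ (where $\mu(A)=\frac12$) look Poisson, with matching mean and variance, and that counts in disjoint sets are uncorrelated.

```python
import random, math

random.seed(0)

M = 2.0  # mu(E) for the illustrative intensity mu(dx) = 4x dx on [0,1]

def poisson(lam):
    # Knuth's inversion sampler, fine for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def sample_prm():
    # Po(M) points, each drawn from mu/M via inverse CDF sqrt(u)
    return [math.sqrt(random.random()) for _ in range(poisson(M))]

trials = 20000
counts_A, counts_B = [], []   # A = [0,1/2], mu(A) = 1/2; B = (1/2,1]
for _ in range(trials):
    pts = sample_prm()
    a = sum(1 for x in pts if x <= 0.5)
    counts_A.append(a)
    counts_B.append(len(pts) - a)

mean_A = sum(counts_A) / trials
var_A = sum(c * c for c in counts_A) / trials - mean_A ** 2
mean_B = sum(counts_B) / trials
cov = sum(a * b for a, b in zip(counts_A, counts_B)) / trials - mean_A * mean_B
print(mean_A, var_A, cov)  # roughly 0.5, 0.5, 0: Poisson counts, independent
```

Mean and variance both close to $\mu(A)$ is the Poisson signature; the near-zero covariance reflects independence over disjoint sets.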

One nice result about Poisson random measures concerns the expectation of functions integrated against such a random measure. Recall that a function f integrated against the measure $\sum_i \delta_{x_i}$ gives $\sum_i f(x_i)$. Then, subject to mild conditions on f, the expectation satisfies

$\mathbb{E}\nu (f)=\mu(f).$

Note that when $f=1_A$, this is precisely one of the definitions of the PRM. So by a monotone class result, it is not surprising that this holds more generally. Anyway, I’m currently trying to use results like these to get some control over what the structure of these branching processes looks like, even when the type space is continuous, as in the random graph with specified ages.
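As a sanity check of this identity, here is a small Monte Carlo sketch, reusing the same illustrative intensity $\mu(dx)=4x\,dx$ on [0,1] and taking $f(x)=x^2$, for which $\mu(f)=\int_0^1 x^2\cdot 4x\,dx=1$.

```python
import random, math

random.seed(1)

M = 2.0  # mu(E) for the illustrative intensity mu(dx) = 4x dx on [0,1]

def poisson(lam):
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def nu_f():
    # one realisation of nu(f) = sum_i f(x_i), with f(x) = x^2
    total = 0.0
    for _ in range(poisson(M)):
        x = math.sqrt(random.random())  # inverse CDF for mu/M
        total += x * x
    return total

trials = 20000
estimate = sum(nu_f() for _ in range(trials)) / trials
print(estimate)  # should be close to mu(f) = 1
```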

# Dispersion in Social Networks

This post is based on a paper that appeared a couple of weeks ago on the Computer Science section of arXiv. You can find it here. I’m going to write a few things about the ideas behind the paper, and avoid pretty much entirely the convincing data the authors present, as well as questions of implementing the algorithms discussed.

The setting is a social network, which we can describe as a graph. Nodes stand for people, and an edge represents that the two associated people have some social connection. This paper focuses on edges corresponding to friendship in the Facebook graph.

A key empirical feature of the graph topology of such social networks, as compared to most mathematical models of random graphs, is the prevalence of short cycles, and so-called clustering. Loosely speaking, in an Erdos-Renyi random graph, each potential edge appears in the graph independently of the rest of the configuration. But this does not accord well with our experience of our own Facebook friend circle. In general, knowledge that A is friends with both B and C increases the likelihood that B and C are themselves friends. This effect appears somewhat more in other models, such as Preferential Attachment and the Configuration Model, but that is really more a consequence of the degree sequence being less concentrated.

The reason for this phenomenon appearing in social networks is clear. People meet other people generally by sharing common activities, whether that be choice of school, job or hobbies. The question of how readily people choose to add others on Facebook is a worthwhile one, but not something I have the time or the sociological credibility to consider! In any case, it is not a controversial idea that for some typical activity, it is entirely possible that almost all the participants will end up as friends, leading to a large (almost-) ‘clique’ in the graph. Recall a clique is a copy of a complete graph embedded in a larger graph – that is, a set of nodes all of which are pairwise connected.

We could think of much of the structure of this sort of network as being generated in the following way. Suppose we were able to perform the very unlikely-sounding task of listing every conceivable activity or shared attribute that might engender a friendship. Each activity corresponds to a set of people. We then construct a graph on the set of people by declaring that a pair of nodes are connected by an edge precisely if the people corresponding to these nodes can both be found in some activity set.

Another way of thinking about this setup is to consider a bipartite graph, with people as one class of vertices, and activities as the other. Predictably, we join a person to an activity if they engage in that activity. The edges within the class of people are then induced by the bipartite edges. Obviously, under this interpretation, we could equally well construct a graph on the set of activities. Here, two activities would be joined if there is a person who does them both. Graphs formed in this way can be called Intersection Graphs, and there is lots of interest in investigating various models of Random Intersection Graphs.

The question addressed by the authors of the paper can be summarised as follows. A social network graph tells us whether two people are ‘friends’, but it does not directly tell us how close their relationship is. It is certainly an interesting question to ask whether the (local) network topology can give us a more quantitative measure of the strength of a friendship.

As the authors explain, a first approach might be to consider how many mutual friends two people have. (We consider only pairs of people who are actually friends. It seems reasonable to take this as a prerequisite for a strong relationship among people who do actually use Facebook.) However, this can fail because of the way these social networks organise themselves around shared attributes and activities. The size of one of these cliques (which are termed social foci in parts of the literature) is not especially likely to be well correlated with the strengths of the friendships within the clique. In particular, the clique corresponding to someone’s workplace is likely to grow in size over time, especially once people reach an age where, on average, they change jobs much less. So it seems likely that, according to a naive examination of the number of mutual friends, we would predict that a person’s strongest friend is someone they work with, who perhaps by chance also does some other activity with that person.

The authors phrase this problem slightly differently. They examine algorithms for establishing a person’s spouse or long-term partner with good accuracy from only the local network structure.

Heuristically we might expect that a husband knows many of his wife’s work colleagues, and vice versa. Not all of these ties might be so strong that they actually lead to friendship, in the Boolean sense of Facebook, but we might expect that some noticeable proportion have this property. Naturally, there will be cliques to which both belong. One or more of these might be the reason they met in the first place, and others (eg parents at children’s schools) might have developed over the course of their relationship. However, as we’ve explained, this doesn’t narrow things down much.

(We need not be constrained by this heteronormative scenario. However, as the authors point out in a footnote, there are challenges in collecting data because of the large number of ironic relationship listings on Facebook, mainly among the undergraduate and younger community. This problem is particularly obstructive in the case of same-sex marriage, owing to the smaller numbers of genuine pairings, and larger numbers of false listings for this setting.)

The crucial observation is that if we look at the couple’s mutual friends, we expect to see large parts of the most important cliques from both husband and wife’s lives. Among these mutual friends, there will be some overlap, that is cliques of which both are an integral member. But among the rest, there will be a natural partition into friends who really originate from the husband, and friends who were introduced via the wife. So the induced graph on these mutual friends is likely to split into three classes of vertices, with very poor connectivity between two of them.

This is, up to sorting out scaling and so on, precisely the definition of dispersion, introduced by the authors. The dispersion between two vertices is high if the induced graph on their mutual neighbourhood has poor connectivity. Modulo exact choice of definition, they then exhibit data showing that this is indeed a good metric for determining marriages from the network topology, with success rate of around 50% over a wide range of users.
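Modulo the authors’ exact normalisation, a simplified version of dispersion is easy to compute: for an edge (u,v), count pairs of common neighbours that are neither adjacent nor share another common neighbour inside the mutual neighbourhood. The toy network below is hypothetical data, not from the paper: a ‘couple’ h,w whose mutual friends split into two cliques plus one shared friend, contrasted with an edge (h,1) whose mutual friends all sit inside one clique.

```python
from itertools import combinations

def dispersion(adj, u, v):
    # simplified dispersion: pairs of common neighbours of (u, v) that are
    # not adjacent and have no common neighbour within the mutual
    # neighbourhood (the paper's normalisation is omitted)
    common = adj[u] & adj[v]
    disp = 0
    for s, t in combinations(common, 2):
        if t in adj[s]:
            continue                      # directly connected
        if adj[s] & adj[t] & common:
            continue                      # linked within the neighbourhood
        disp += 1
    return disp

# Toy network: h--w married; h's clique {1,2,3} and w's clique {4,5,6}
# all know both members of the couple; 7 is a shared friend of h and w.
edges = [("h", "w")] + [("h", x) for x in "1234567"] + \
        [("w", x) for x in "1234567"] + \
        [("1", "2"), ("1", "3"), ("2", "3"),
         ("4", "5"), ("4", "6"), ("5", "6")]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

print(dispersion(adj, "h", "w"))  # high: mutual friends span two cliques
print(dispersion(adj, "h", "1"))  # zero: mutual friends form one clique
```

The spouse edge scores highly precisely because the mutual neighbourhood decomposes into poorly connected pieces, while the within-clique edge scores zero.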

# Persistent Hubs

This post is based on the paper “Existence of a persistent hub in the convex preferential attachment model” which appeared on arXiv last week. It can be found here. My aim is to explain (again) the model; the application-based motivation for the result; and a heuristic for the result in a special case. In particular, I want to stress the relationship between PA models and urns.

The preferential attachment model attempts to describe the formation of large complex networks. It is constructed dynamically: vertices are introduced one at a time. For each new vertex, we have to choose which existing vertices to join to. This choice is random and reinforced. That is, the new vertex is more likely to join to an existing vertex with high degree than to an existing vertex with degree 1. It’s clear why this might correspond well to the evolution of, say, the world wide web. New webpages are much more likely to link to an established site, eg Wikipedia or Google, than to a uniformly randomly chosen page.

The model is motivated also by a desire to fit a common property of real-world networks that is not exhibited by, among others, the Erdos-Renyi random graph model. In such a network, we expect a few nodes to have much greater connectivity than average. In a sense these so-called hubs drive connectivity of the system. This makes sense in practice. If you are travelling by train around the South-East of England, it is very likely you will pass through at least one of Reading, East Croydon, or about five major termini in London. It would be absurd for every station to be of equal significance to the network. By contrast, the typical vertex degree in the sparse Erdos-Renyi model is O(1), and has a limiting Poisson distribution, with a super-exponential tail.

So, this paper addresses the following question. We know that the PA model, when set up right, has power-law tails for the degree distribution, and so has a largest degree that is an order of magnitude larger than the average degree. Let’s call this the ‘hub’ for now. But the model is dynamic, so we should ask how this hub changes in time as we add extra vertices. In particular, is it the case that one vertex should grow so large that it remains as the dominant hub forever? This paper answers this question in the affirmative, for a certain class of preferential attachment schemes.

We assign a weighting system to possible degrees, that is, a function from $\mathbb{N}$ to $\mathbb{R}^+$. In the case of proportional PA, this function could be f(n)=n. In general, we assume it is convex. Note that the more convex this weight function is, the stronger the preference a new vertex feels towards existing dominant vertices. Part of the author’s proof is a formalisation of this heuristic, which provides machinery allowing us to treat essentially only the case f(n)=n. I will discuss only this case from now on.

I want to focus on the fact that we have another model which describes aspects of the degree evolution rather well. We consider some finite fixed collection of vertices at some time, and consider the evolution of their degrees. We will be interested in limiting properties, so the exact time doesn’t matter too much. We look instead at the jump chain, ie those times when one of the degrees changes. This happens when a new vertex joins to one of the chosen vertices. Given that the new vertex has joined one of the chosen vertices, the choice of which of the chosen vertices is still size-biased proportional to the current degrees. In other words, the jump chain of this degree sequence is precisely Polya’s Urn.

This is a powerful observation, as it allows us to make comments about the limiting behaviour of finite quantities almost instantly. In particular, we know that from any starting arrangement, Polya’s Urn converges almost surely. This is useful to the question of persistence for the following reason.

Recall that in the case of two colours, starting with one of each, the proportions converge to the uniform distribution. We should view this as a special case of the Dirichlet distribution, which is supported on partitions of [0,1] into k intervals. In particular, for any fixed k, the probability that two of the intervals have the same size is 0, as the distribution is continuous. So, since the convergence of the proportions in Polya’s Urn is almost sure, with probability one all of the proportions are eventually within $\epsilon>0$ of their limits, and so taking epsilon small enough given the limit (which we are allowed to do), we can show that the colour which is largest in the limit is eventually the largest at finite times.
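The uniform limit from the (1,1) start is easy to check by simulation; this sketch runs the urn for a fixed large number of steps as a stand-in for the limit, and compares the empirical mean and variance of the final proportion with those of the uniform distribution.

```python
import random

random.seed(2)

# Polya's urn from (1 red, 1 blue): draw a ball uniformly, return it
# together with one extra ball of the same colour.
def urn_proportion(steps):
    red, total = 1, 2
    for _ in range(steps):
        if random.random() < red / total:
            red += 1
        total += 1
    return red / total

trials = 2000
props = [urn_proportion(1000) for _ in range(trials)]
mean = sum(props) / trials
var = sum(p * p for p in props) / trials - mean ** 2
print(mean, var)  # uniform limit: mean 1/2, variance 1/12
```

In fact by exchangeability the proportion after n steps is exactly uniform on $\{\frac{1}{n+2},\ldots,\frac{n+1}{n+2}\}$, so the finite-horizon proxy is honest here.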

Unfortunately, we can’t mesh these finite-dimensional observations together particularly nicely. What we require instead is a result showing that if a vertex has large enough degree, then it can never be overtaken by any new vertex. This is proved via a direct calculation of the probability that a new vertex ‘catches up’ with a pre-existing vertex of some specified size.

That calculation is nice and not too complicated, but has slightly too many stages and factorial approximations to consider reproducing or summarising here. Instead, I offer the following heuristic for a bound on the probability that a new vertex will catch up with a pre-existing vertex of degree k. Let’s root ourselves in the urn interpretation for convenience.

If the initial configuration is (k,1), corresponding to k red balls and 1 blue, we should consider instead the proportion of red balls, which is obviously $\frac{k}{k+1}$. Crucially (for proving convergence results if nothing else), this is a martingale, and it is clearly bounded within [0,1]. So the expectation of the limiting proportion is also $\frac{k}{k+1}$. Let us consider the stopping time T at which the number of red balls is equal to the number of blue balls. We decompose the expectation by conditioning on whether T is finite.

$\frac{k}{k+1}=\mathbb{E}X_\infty=\mathbb{E}[X_\infty|T<\infty]\mathbb{P}(T<\infty)+\mathbb{E}[X_\infty|T=\infty]\mathbb{P}(T=\infty)$

$\leq \tfrac12 \mathbb{P}(T<\infty) + \mathbb{P}(T=\infty),$

using that $\mathbb{E}[X_\infty|T<\infty]=\tfrac12$, by restarting the martingale from $X_T=\tfrac12$, and that $X_\infty\leq 1$ regardless of the conditioning. Since $\mathbb{P}(T=\infty)=1-\mathbb{P}(T<\infty)$, rearranging gives

$\mathbb{P}(T<\infty) \leq \frac{2}{k+1}.$

This bound is not summable over k, whereas we really want summability so we can use some kind of Borel-Cantelli argument. Indeed, Galashin gets a bound of $O(k^{-3/2})$ for this quantity. We should stress where we have lost information. We have used the estimate $\mathbb{E}[X_\infty|T=\infty]\leq 1$, which is very weak. This is unsurprising. After all, the probability of the event $\{T=\infty\}$ is large, so conditioning on it shouldn’t really shift the limit much. The conditioned process is repelled from 1/2, but that is of little relevance when starting from $\frac{k}{k+1}$. It seems likely this conditional expectation is in fact $\frac{k}{k+1}+O(k^{-3/2})$, from which the result would follow.
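We can test the bound numerically. The sketch below estimates $\mathbb{P}(T<\infty)$ by running the urn to a finite horizon, which only underestimates the true probability, and compares against $\frac{2}{k+1}$; the values of k, the horizon and the number of trials are arbitrary choices.

```python
import random

random.seed(3)

# Urn started at (k red, 1 blue); T is the first time the colours are
# level. A finite horizon gives a lower estimate of P(T < infinity).
def catches_up(k, horizon):
    red, blue = k, 1
    for _ in range(horizon):
        if blue >= red:
            return True
        if random.random() < red / (red + blue):
            red += 1
        else:
            blue += 1
    return blue >= red

def estimate(k, trials=1000, horizon=2000):
    return sum(catches_up(k, horizon) for _ in range(trials)) / trials

for k in (3, 10):
    print(k, estimate(k), 2 / (k + 1))  # estimate should sit below bound
```

Consistent with Galashin's $O(k^{-3/2})$ result, the empirical probabilities sit well below the crude martingale bound.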

# Critical Components in Erdos-Renyi

In various previous posts, I’ve talked about the phase transition in the Erdos-Renyi random graph process. Recall the definition of the process. Here we will use the Gilbert model G(n,p), where we have n vertices, and between any pair of vertices we add an edge, independently of other pairs, with probability p. We are interested in the sparse scaling, where the typical vertex has degree O(1) in n, so p=c/n for a constant c>0, and we assume throughout that n is large. We could alternatively have considered the Erdos-Renyi model proper, where we choose uniformly at random from the set of graphs with n vertices and some fixed number of edges. Almost all the results presented here work equally well in that setting.

As proved by Erdos and Renyi, the typical component structure of such a graph changes noticeably around the threshold c=1. Below this, in the subcritical regime, all the components are small, meaning of size $O(\log n)$. Above this, in the supercritical regime, there is a single giant component on some non-zero proportion of the vertices. The rest of the graph looks subcritical. The case c=1 exhibits a phase transition between these qualitatively different behaviours. They proved that here, the largest component is with high probability $O(n^{2/3})$. It seems that they thought this result held whenever c=1-o(1), but it turns out that this is not the case. In this post, I will discuss some aspects of behaviour around criticality, and the tools needed to treat them.

The first question to address is this: how many components of size $n^{2/3}$ are there? It might be plausible that there is a single such component, as for the subsequent giant component. It might also be plausible that there are $n^{1/3}$ such components, so that O(n) vertices lie on critical components. Then it would be clear how we transition out of criticality into supercriticality – all the vertices on critical components coalesce to form the new giant component.

In fact neither of these is correct. The answer is that for every integer k>0, with high probability the k-th largest component is on the size scale $n^{2/3}$. This is potentially a confusing statement. It sounds as though there are infinitely many such components, but of course for any particular value of n this cannot be the case. We should think of there being $\omega(1)$ components, but $o(n^b)$ for any b>0.
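This is easy to probe by simulation. The sketch below approximates critical G(n,1/n) by throwing down n/2 uniformly chosen edges (possibly with loops and repeats, which does not affect the orders of magnitude, matching the loop-allowing convention used later in this post), and uses a union-find structure to read off the largest component sizes.

```python
import random

random.seed(4)

def largest_components(n, top=3):
    # union-find over n vertices with n/2 uniform edges: roughly G(n, 1/n)
    parent = list(range(n))
    size = [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for _ in range(n // 2):
        a, b = find(random.randrange(n)), find(random.randrange(n))
        if a != b:
            if size[a] < size[b]:
                a, b = b, a
            parent[b] = a                   # union by size
            size[a] += size[b]

    comps = sorted((size[v] for v in range(n) if find(v) == v), reverse=True)
    return comps[:top]

n = 10_000
print(largest_components(n), n ** (2 / 3))  # top few sizes vs the n^(2/3) scale
```

Repeating the experiment shows the first few component sizes all fluctuating on the same $n^{2/3}$ scale, rather than one dominating the rest.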

The easiest way to see this is by a duality argument, as we have discussed previously for the supercritical phase. If we remove a component of size O(n^2/3), then what remains is a random graph with n-O(n^2/3) vertices, and edge probability the same as originally. It might make sense to rewrite this probability 1/n as

$\frac{1}{n-O(n^{2/3})}\cdot \frac{n-O(n^{2/3})}{n}=\frac{1-O(n^{-1/3})}{n-O(n^{2/3})}.$

Since $n-O(n^{2/3})\sim n$, the final numerator is basically the same as

$1-O\left(\left[n-O(n^{2/3})\right]^{-1/3}\right).$

Although we have no concrete reasoning, it seems at least plausible that this should look similar in structure to G(n,1/n). In particular, there should be another component of size

$O\left([n-O(n^{2/3})]^{2/3}\right)=O(n^{2/3})$.

In fact, the formal proof of this proceeds by an identical argument, only using the exploration process. Because I’ve described this several times before, I’ll be brief. We track how far we have gone through each component in a depth-first walk. In both the supercritical and subcritical cases, when we rescale correctly we get a random path which is basically deterministic in the limit (in n). For exactly the same reasons that CLT fluctuations become visible in partial sums of RVs with expectation zero, we start seeing interesting effects at criticality.

The important question is the order of rescaling to choose. At each stage of the exploration process, the number of vertices added to the stack is binomial. We want to distinguish between components of size $O(n^{2/3})$ so we should look at the exploration process at time $sn^{2/3}$. The drift of the exploration process is given by the expectation of a binomial random variable minus one (since we remove the current vertex from the stack as we finish exploring it). This is given by

$\left[n-sn^{2/3}\right]\cdot \frac{1}{n}-1=-sn^{-1/3}.$

Note that this is the drift in one time-step. The accumulated drift over $n^{2/3}$ time-steps will accordingly be of order $sn^{1/3}$. So, if we rescale time by $n^{2/3}$ and space by $n^{1/3}$, we should get a nice stochastic process. Specifically, if Z is the exploration process, then we obtain:

$\frac{1}{n^{1/3}}Z^{(n)}_{sn^{2/3}} \rightarrow_d W_s,$

where W is a Brownian motion with inhomogeneous drift -s at time s. The net effect of such a drift at a fixed positive time is given by integrating up to that time, and hence we might say the process has quadratic drift, or is parabolic.
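The parabolic drift can be seen numerically. In this sketch the binomial offspring counts are approximated by Poisson((n-t)/n), and the $O(n^{1/3})$ stack correction is ignored; the empirical mean of $Z_{sn^{2/3}}$ should track the accumulated drift $-t^2/2n=-\frac{s^2}{2}n^{1/3}$. The parameter values are illustrative.

```python
import random, math

random.seed(5)

def poisson(lam):
    # Knuth's inversion sampler; lam is at most 1 here
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def walk(n, t):
    # exploration walk: each step adds (offspring - 1), offspring roughly
    # Bin(n - i, 1/n), approximated here by Poisson((n - i)/n)
    z = 0
    for i in range(t):
        z += poisson((n - i) / n) - 1
    return z

n = 400
t = int(5 * n ** (2 / 3))           # time s*n^(2/3) with s = 5
trials = 200
mean_z = sum(walk(n, t) for _ in range(trials)) / trials
print(mean_z, -t * t / (2 * n))     # empirical mean vs parabolic drift
```

The agreement of the two printed numbers is the discrete shadow of the $-\frac{s^2}{2}$ drift term of the limiting process.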

We should remark that our binomial expectation is not entirely correct. We have discounted the $sn^{2/3}$ vertices that have already been explored, but we have not accounted for the vertices currently in the stack, which we should also be avoiding. However, we now have a heuristic for the approximate number of these. The number of vertices in the stack should be $O(n^{1/3})$ at all times, and so in particular will always be an order of magnitude smaller than the number of vertices already considered. Therefore, they won’t affect this drift term, though this must be accounted for in any formal proof of convergence. On the subject of which, the mode of convergence is, unsurprisingly, weak convergence uniformly on compact sets. That is, for any fixed S, the convergence holds weakly for the random functions up to time $Sn^{2/3}$.

Note that this process will tend to minus infinity almost surely. Component sizes are given by excursions above the running minimum. The process given by the height of the original process above the running minimum is called the reflected process. Essentially, we construct the reflected process by keeping the same generator when the current value is positive, and forcing the process up when it is at zero. There are various ways to construct this more formally, including as the scaling limit of simple random walks conditioned to stay non-negative.

The cute part of the result is that it holds equally well in a so-called critical window either side of the critical probability 1/n. When the probability is $\frac{1+tn^{-1/3}}{n}$, for any $t\in \mathbb{R}$, the same argument holds. Now the drift at time s is t-s, though everything else still holds.

This result was established by Aldous in [1], and gives a mechanism for calculating distributions of component sizes and so on through this critical window.

In particular, we are now in a position to answer the original question regarding how many such components there were. The key idea is that because whenever we exhaust a component in the exploration process, we choose a new vertex uniformly at random, we are effectively choosing a component according to the size-biased distribution. Roughly speaking, the largest components will show up near the beginning. Note that a critical $O(n^{2/3})$ component will not necessarily be exactly the first component in the exploration process, but the components that are explored before this will take up sufficiently few vertices that they won’t show up in the scaling of the limit.

In any case, the reflected Brownian motion ‘goes on forever’, and the drift is eventually very negative, so there cannot be arbitrarily wide excursions. Hence the limit process has infinitely many excursions, corresponding to arbitrarily many components on the critical scale.

If we care about the number of cycles, we can treat this also via the exploration process. Note that in any depth-first search we are necessarily only interested in a spanning tree of the host graph. Anyway, when we are exploring a vertex, there could be extra edges to other vertices in the stack, but not to vertices we’ve already finished exploring (otherwise the edge would have been exposed then). So the expected number of excess edges at a vertex is proportional to the height of the exploration process at that vertex, and hence the overall expected number of excess edges, conditional on the exploration process, is the area under the curve. This carries over perfectly well into the stochastic process limit. It is then a calculation to verify that the area under the curve is almost surely infinite, and thus that we expect there to be infinitely many cycles in a critical random graph.

REFERENCES

[1] Aldous, D. – Brownian excursions, critical random graphs and the multiplicative coalescent. Ann. Probab. 25 (1997).

# Recent Research Activity

I’ve spent this week in Luminy, near Marseille, attending a summer school run by ALEA, the organisation of French probabilists. We’ve been staying in CIRM, a dedicated maths research conference centre at the edges of the calanques, the area of mountains and jagged coastal inlets between Marseille and Cassis. The walking possibilities have been excellent, as have the courses and lectures, on a range of topics in probability theory.

Anyway, the time here has been an excellent moment to reflect on my research progress, and try to come up with the sort of fresh ideas that are perhaps slightly inhibited by sitting at a desk with an endless supply of paper on which to try calculations. When I get back, I have to submit a first-year report, so at least for a little while I will have to suppress the desire to make further progress and instead diligently assemble the progress I have made.

The Model

I’ve defined some of these processes in past posts, but I see no harm in doing so again. We take the standard Erdos-Renyi random graph process, where edges are added one-at-a-time uniformly at random between n vertices, and amend it by adding a deletion mechanism. The aim is to arrive at a process which looks in equilibrium more like the critical random graph than either the subcritical or supercritical regimes, where the components are very small, and dominated by one giant component respectively. Rath, Toth and others have studied the process where each vertex is hit by lightning at uniform rate. When this happens, we delete all the edges in the component containing that vertex. Naturally, big components will be hit by lightning more often than small components, and so this acts as a mechanism to prevent the formation of giant components, if scaled correctly.

We take a different approach. We observe that criticality in the original random graph process is denoted by the first appearance of a giant component, but also by the first appearance of a) lots of cycles, and b) large cycles. In particular, it is very unlikely that a giant component could form without containing any cycles. We will therefore use the appearance of a cycle to trigger some form of deletion mechanism.

Our final goal is to treat the so-called ‘Cycle Deletion’ model. Here, whenever a cycle appears, we delete all the edges in that cycle immediately. There are several challenges in treating this model, because the rate at which cycles emerge in a tree is a function of the tree structure. The trees in this model will not be Uniform Spanning Trees (though it is very possible that they will be ‘almost USTs’ in some sense – we need to investigate this further) so it will be hard to make nice statements about the rates. For the standard random graph process, if we are only interested in the sizes of the components, we are actually allowed to ignore the graph structure entirely. The component sizes evolve as a discrete, stochastic version of the multiplicative coalescent (sometimes called a Marcus-Lushnikov process). We would like a deletion mechanism that has a nice interpretation as a fragmentation operation in the same sense. The rate at which a component fragments will be quadratic in the size of the component, since there are $O(k^2)$ possible edges between k vertices forming a component, and adding any of precisely these will create a cycle.

I’ve talked previously about how to overcome the problems with the tree structure in Cycle Deletion with the so-called Uniform Cycle Deleting model. In any case, as a starting point we might consider the Cycle-Induced Forest Fire model. Here, whenever a cycle appears, we delete all the edges, including the new one, in the whole component which contains the cycle.

We suspect this model may resemble the critical random graph at all times. The main characteristic of G(n,1/n) is that the largest component has size $O(n^{2/3})$, and indeed there are arbitrarily many components of this size, with high probability in the limit. Since CIFF is recurrent for any fixed n, meaning that it will visit any state infinitely often (rather than tending to infinity or similar), we should ask how large the largest component typically is under the equilibrium distribution. Our aim is to prove that it is $O(n^{2/3})$. We might suspect that the typical size of the largest component will be greater in the Cycle Deletion model, since each fragmentation event is less severe there, removing fewer edges.
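To get a feel for this, here is a toy simulation of the CIFF jump chain acting on component sizes alone. It ignores the underlying tree structure, which is a genuine approximation: in the forest, a within-component endpoint pair only creates a cycle if the endpoints are not already adjacent. So this is a heuristic sketch of the dynamics, not the model itself, and the parameters are arbitrary.

```python
import random

random.seed(6)

def ciff_largest(n, steps):
    # comp[v] = label of v's component; members maps labels to vertex lists
    comp = list(range(n))
    members = {i: [i] for i in range(n)}
    for _ in range(steps):
        u, v = random.randrange(n), random.randrange(n)
        cu, cv = comp[u], comp[v]
        if cu == cv:
            # within-component edge stands in for a cycle: burn the
            # component, shattering it into singletons
            for w in members[cu]:
                members[w] = [w]
                comp[w] = w
        else:
            # cross-component edge: merge smaller into larger
            if len(members[cu]) < len(members[cv]):
                cu, cv = cv, cu
            for w in members.pop(cv):
                comp[w] = cu
                members[cu].append(w)
    return max(len(m) for m in members.values())

n = 2000
print(ciff_largest(n, 20 * n), n ** (2 / 3))  # conjecturally the same scale
```

Fragmentation bites at rate $(k/n)^2$ for a component of size k, which is exactly the quadratic mechanism described above, and in runs of this sketch it visibly suppresses anything approaching a giant component.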

An Upper Bound

The nice thing about Markov chains is that they have an ergodic property, which means that if you run them for long enough, the proportion of time spent in any state is given by the stationary probability of being in that state. It doesn’t matter whether or not you start in equilibrium, since it will converge anyway. Thus it is meaningful to talk about properties like the average number of isolated vertices as a time-average as well as an average with respect to some distribution.

This quantity is the key to an upper bound. We can equally talk about the average change in the number of isolated vertices in a time-step. This will increase when a component fragments, and will decrease when an isolated vertex coalesces with another component. In particular, the largest possible decrease in the number of isolated vertices in a single time-step is 2, corresponding to an edge appearing between two isolated vertices.

Suppose that with probability $\Theta(1)$ there is a component of size $n^\alpha$ for some $\alpha>2/3$. Then such a component makes a contribution to the expected change in the number of isolated vertices of

$\Theta(1) n^\alpha \left(\frac{n^\alpha}{n}\right)^2.$ (*)

Where does this come from? Well, we are tracking the contributions from the event that the largest component is of this size and that it fragments, giving $n^\alpha$ new isolated vertices. So the $\Theta(1)$ accounts for the probability that there is such a component to begin with. Then, conditional on that, the probability that it gets fragmented in the next time-step is the probability that both ends of the next edge added lie in that component. Since the edge is chosen uniformly at random, this probability is $\left(\frac{n^\alpha}{n}\right)^2$. Note that this is under a slightly odd definition of an edge, that allows loops. Basically, I don’t want to have lots of correction terms involving $\binom{n}{2}$ floating around, but it would make no difference to the orders of magnitude if we were to include them.

So, this is only one contribution to the typical rate of gain of isolated vertices. Now note that if $\alpha>2/3$, then this expression is $\gg 1$. This is bad, since the negative contribution to this expected flux in the number of isolated vertices is O(1). So this suggests that over time, the number of isolated vertices will keep growing. This is obviously ridiculous, since a) we are in equilibrium, so the expected flux should be 0, and b) the number of isolated vertices cannot exceed n, for clear reasons.

This gives us an upper bound of $n^{2/3}$ for the typical scale of the largest component. We can come up with a similar argument for the cycle deleting model. The most helpful thing to track there is the number of edges in the graph. Note that since the graph is at all times a forest on n vertices, the number of edges is equal to n minus the number of (tree) components. We use the fact that the typical fragmentation of a component of size k creates $O(\sqrt{k})$ new components. It is possible to argue via isolated vertices here too, but the estimates are harder, or at least less present in the literature.

Lower Bounds?

The problem with lower bounds is that it is entirely possible that the flux in the number of isolated vertices is not driven by typical behaviour. Suppose for example we had a different rule. We begin a random graph process, and the first time we see a cycle in a component of size larger than $n^{2/3}$, we delete all the edges in the whole graph. Then we will see a sequence of random graph processes starting with the empty graph and stopped at some point close to criticality (in fact, with high probability in the *critical window*), and these will all be glued together. So then, most of the time the process will look subcritical, but the gains in isolated vertices will occur only during the critical periods, which are only an asymptotically small proportion of the time.

At the moment, my approach to the lower bound is instead to prove that the upper bound is tight. I mean this in the following sense. Suppose we wanted to be sure that (*) was in fact equal to the average rate of gain of isolated vertices. We would have to check the following:

• That the total contributions from all other components were similar or smaller than from the component(s) of size roughly $n^{\alpha}$.
• That there were only a few components of size $n^{\alpha}$. In particular, the estimate would be wrong if there were $n^\epsilon$ such components for any $\epsilon>0$.
• That it cannot be the case that for example, some small proportion of the time there is a component of size roughly $n^{\alpha+\epsilon}$, and over a large enough time these make a greater contribution to the average gain in isolated vertices.

A nice way to re-interpret this is to consider some special vertex and track the size of its component in time. It will be involved in repeated fragmentations over the course of time, so it is meaningful to talk about the distribution of the size of the component containing the vertex when it is fragmented. Our aim is to show that this distribution is concentrated on the scaling $O(n^\alpha)$.

So this has turned out to be fairly hard. Rather than try to explain some of the ideas I’ve employed in attempting to overcome this, I will finish by giving one reason why it is hard.

We have seen that the component sizes in random graphs evolve as the multiplicative coalescent, but at a fixed moment in time, we can derive good estimates from an analogy with branching processes. We might like to do that here. If we know what the system looks like most of the time, we might try to ‘grow’ a multiplicative coalescent, viewing it like a branching process, with distribution given by the typical distribution. The problem is that when I do this, I find that the expectation of the offspring distribution is $\Theta(1)$. This looks fine, since 1 is the threshold for extinction with probability 1. However, throughout the analysis, I have only been paying attention to the exponent of n in all the time and size estimates. For example, I view $n^\alpha$ and $n^\alpha \log n$ as the same. This is a problem: when I say the expectation is $\Theta(1)$, I am really saying it is $\sim n^0$, which means it could be $\frac{1}{\log n}$ or $\log n$. Of course, there is a massive difference between these, since a branching process with offspring mean $\log n$ grows rapidly, while one with mean $\frac{1}{\log n}$ dies out almost immediately.

So, this approach appears doomed in its current form. I have some other ideas, but a bit more background may be required before going into those. I’m going to be rather busy with teaching on my return to the office, so unfortunately it is possible that there may be many posts about second year probability and third year applied probability before anything more about CIFF.

# The Configuration Model

In the past, I’ve talked about limitations of the Erdos-Renyi model of homogeneous random graphs for applications to real-world networks. In a previous post, I discussed a dynamic model, the Preferential Attachment mechanism, that ‘grows’ a graph by adding edges from new vertices preferentially to existing vertices with high degree. The purpose of this adjustment is to ensure that the distribution of the degrees is not concentrated around some fixed value (which would be c in G(n,c/n)) but rather exhibits a power-law tail, as observed in many genuine examples.

In this post, we introduce some aspects of the configuration model, which achieves this property more directly. This idea probably first arose in the guise of regular graphs. Recall a regular graph has all degrees equal. How would we construct a random d-regular graph on a large number of vertices?

What we probably want to do is to choose uniformly at random from the set of such graphs, but it is not clear even how large this set is, let alone how one would order its elements to make it possible to make this uniform choice. Instead, we try the following. Assign to each vertex d so-called stubs, which will end up being ‘half-edges’. We then choose two stubs uniformly at random, and glue them together. More formally, we construct an edge between the host vertices, and then delete the chosen stubs. We then continue.
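The stub-gluing procedure is straightforward to simulate. Here is a minimal sketch in Python (function and variable names are my own); the key observation is that shuffling the stub list and reading it off in consecutive pairs produces exactly a uniform pairing of the stubs, which is equivalent to gluing two uniformly chosen stubs at a time:

```python
import random

def configuration_model(degrees):
    """Glue stubs together uniformly at random.

    Returns a list of edges; loops and multi-edges are possible,
    as discussed below. Assumes sum(degrees) is even.
    """
    # One stub per unit of degree, labelled by its host vertex.
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    # A uniform shuffle, read off in consecutive pairs, is a uniform
    # perfect matching of the stubs.
    random.shuffle(stubs)
    return [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]

# For example, a random 3-regular multigraph on 4 vertices:
edges = configuration_model([3, 3, 3, 3])
```

By construction the output always has $\frac{1}{2}\sum d_i$ edges and realises the prescribed degree sequence, counting loops twice.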

The construction makes no reference to the distribution of stubs, so we are free to choose this as we please. We could for example specify some deterministic sequence of degrees which approximates a power law, or we could sample a random sequence of degrees in some way. So long as we have a sequence of stub set sizes before we start building the edges of the graph, we will be able to use the above algorithm.

So what might go wrong? There seem to me to be three potential problems that might arise with this construction.

Firstly, there might be a stub left over, if the sum of the stub set sizes is odd. Recall that in a graph the sum of the degrees is twice the sum of the number of edges, and so in particular the sum of the degrees should be even. But this is a small problem. When the degree sequence is deterministic we can demand that it have even sum, and if it is random, we will typically be working in a large N regime, and so deleting the solitary stub, if such a thing exists, will not affect the sort of properties of the graph we are likely to be interested in.

The second and third objections are perhaps more serious. If we glue together stubs naively, we might end up with loops, that is, edges that ‘begin’ and ‘end’ at the same vertex. These are not allowed in the standard definition of a graph. Alternatively, we might end up with more than one edge between the same pair of vertices.

Our overall aim is that this mechanism gives a convenient way of simulating the uniform distribution on simple graphs with a given degree sequence. At present we have the uniform distribution on potential multigraphs, with a weighting of 1/k! for every multi-edge with multiplicity k, and a weighting of 1/2 for every loop. The latter can be seen because there is an initial probability proportional to $d(v_i)d(v_j)$ that vertices $v_i$ and $v_j$ will be joined, whereas the probability that $v_i$ receives a loop is proportional (with the same constant) to $d(v_i)^2$. The multi-edge weighting justification is similar.

However, conditional on getting a simple graph, the distribution is uniform on the set of simple graphs with that degree sequence. So it remains to investigate the probability that a graph generated in this way is simple. So long as this probability does not tend to 0 as n grows, we will probably be happy.
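This conditioning can be realised by rejection sampling: generate pairings until the resulting multigraph happens to be simple. A sketch (names are my own; it assumes the simplicity probability is bounded away from 0, so the loop terminates quickly):

```python
import random

def is_simple(edges):
    """Check for loops and repeated edges (as unordered pairs)."""
    seen = set()
    for u, v in edges:
        if u == v:  # loop
            return False
        e = (min(u, v), max(u, v))
        if e in seen:  # multi-edge
            return False
        seen.add(e)
    return True

def sample_simple_graph(degrees, max_tries=10_000):
    """Repeatedly pair stubs uniformly at random; conditional on
    simplicity, the output is uniform over simple graphs with the
    given degree sequence."""
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    for _ in range(max_tries):
        random.shuffle(stubs)
        edges = [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]
        if is_simple(edges):
            return edges
    raise RuntimeError("simplicity seems too unlikely for this sequence")
```

The expected number of rejections is the reciprocal of the simplicity probability, which is why the asymptotics discussed next matter for this approach.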

The strongest results on this topic are due to Janson. First observe that if the sum of the degrees grows faster than the number of vertices n, then with high probability we fail to get a graph without loops. Heuristically, note that on the first pass, we are taking two picks from the set of vertices, biased by the number of stubs. By Cauchy-Schwarz, the Rearrangement Inequality, or just intuition, the probability of getting the same vertex twice is greater than if we picked uniformly from the set of vertices without biasing. So the probability of getting no loop on the first pass is $\le 1-\frac{1}{n}$. Take some function a(n) that grows faster than n, but slower than the sum of the degrees. Then after a(n) passes, the degree distribution is still roughly the same. In particular, the sum of the degrees is still an order of magnitude greater than n. So we obtain:

$\mathbb{P}(\text{no loops})\leq (1-\frac{1}{n})^{a(n)}\approx e^{-\frac{a(n)}{n}}\rightarrow 0.$

So, since isolated vertices have no effect on the simplicity or otherwise, we assume the sum of the degrees is $\Theta(n)$. Then, Janson shows that the further condition

$\sum_{i=1}^n d_i^2=O(n),$

is essentially necessary and sufficient for simplicity. We can see why this might be true by looking at the probability that the first edge added is a loop, which is roughly

$\frac{d_1^2+d_2^2+\ldots+d_n^2}{2(\sum d_i)^2}.$

We have to consider $O(\sum d_i)$ edges, so if the above expression is much larger than $\frac{1}{\sum d_i}$, we can perform a similar exponential estimate to show that the probability that there are no loops is o(1). The technical part is showing that this probability doesn’t change dramatically as the first few stubs disappear.

Note that in both cases, considering only loops suffices to determine the threshold for simplicity. Although loop appearance looks like a weaker obstruction than multiplicity of edges, in fact they have the same threshold. It should also be pointed out that, as for uniform random forests, an alternative approach is simply to count the number of simple graphs and multigraphs with a given degree sequence. Good asymptotics can then be found for the probability of simplicity.

In the case of G(n,c/n), we were particularly interested in the emergence of the giant component at time c=1. While first-moment methods can be very effective in demonstrating such results, a branching process local limit representation is probably the easiest heuristic for this phase transition.

So long as the degree sequences converge in a natural way, we can apply a similar approach to this configuration model. Concretely, we assume that the proportion of vertices with degree i converges to $\lambda_i$ in the limit. Although the algebra might push through, we should be aware that this means we are not explicitly controlling how many vertices have degree growing with n, e.g. $\Theta(n^{1/2})$. For now, assume the $\lambda_i$s sum to 1, so they specify the probability distribution of the degree of a vertex chosen uniformly at random.

So we start at a vertex, and look at its neighbours. The expected number of neighbours of this root vertex is $\sum i\lambda_i$. Thereafter, when we consider a child vertex, based on how the stubs are paired up (and in particular the fact that the order of the operations does not matter – the partner of a given stub is chosen uniformly at random), we are really choosing a stub uniformly at random. This corresponds to choosing a vertex at random, biased by the number of stubs available. The quantity of interest is how many additional stubs (other than the one that led to the vertex) are attached to this vertex. We assume we don’t need to worry too much about repeating vertices, in a similar way to G(n,c/n). So the expected number of additional stubs is

$\frac{1}{\sum i\lambda_i}\sum i\lambda_i(i-1).$

For an infinite component, we required the expectation to be > 1, which is equivalent to

$\sum \lambda_i i(i-2)>0.$

This was proven by Molloy and Reed (95), then with fewer conditions by Janson (07). The latter also shows how to use this construction to derive the giant component for G(n,c/n) result.
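As a sanity check (mine, not from either paper), if the degrees are asymptotically Poisson(c), as in G(n,c/n), then $\sum \lambda_i i(i-2)=\mathbb{E}[D(D-2)]=c^2-c$, so the criterion is positive exactly when $c>1$, recovering the usual giant component threshold:

```python
import math

def molloy_reed(c, imax=100):
    """Compute sum_i lambda_i * i * (i - 2) for Poisson(c) weights
    lambda_i, building the pmf iteratively to avoid huge factorials."""
    pmf = math.exp(-c)  # P(D = 0)
    total = 0.0
    for i in range(imax):
        total += pmf * i * (i - 2)
        pmf *= c / (i + 1)  # P(D = i + 1) from P(D = i)
    return total

# Closed form: E[D(D-2)] = E[D^2] - 2 E[D] = (c + c^2) - 2c = c(c - 1),
# which is positive precisely when c > 1.
```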

REFERENCES

Janson – A New Approach to the Giant Component Problem

Molloy, Reed – A Critical Point for Random Graphs with a Given Degree Sequence

Janson – The Probability That a Random Multigraph Is Simple

# Preferential Attachment Models

I’ve just read a really interesting paper by Peter Morters and Maren Eckhoff that made me feel I should look up some of the background and write a quick post. I may get onto some of the results in the paper at the end of this post, but I want to start by saying a bit about the model itself. I’ve spoken about this briefly in a previous post about several descriptions of complex networks, but I think it’s worth having a second attempt.

We seek a model for random graphs that gives a distribution exhibiting some of the properties of the sort of complex networks seen in the real world. In particular, whereas the degree distribution of the Erdos-Renyi random graph is asymptotically Poisson, and so concentrated with exponential tails, data indicates that a better model for most applications would have power-law tails for this degree distribution.

Albert and Barabasi propose growing such a graph via a so-called preferential attachment scheme. We start with some small possibly empty graph, and add new vertices one at a time. For each new vertex, we add exactly M edges between the new vertex and the vertices already present. The choice of these M other vertices is given by weighting by the degree of the (pre-existing) vertices. That is, vertices with large degree are more likely to be joined to new vertices. This is obviously designed to replicate some of the behaviour seen in say the formation of the internet, where new sites are more likely to link to established and popular sites (Google, Youtube and so on) than a uniformly chosen site.

This model has a couple of problems. Firstly, it is not immediately obvious how to start it. Obviously we need M vertices present for the PA dynamics to start working. In fact, whether one starts with an empty graph or a complete graph on M vertices makes little difference to the large n behaviour. Trickier is the question of multiple edges, which may emerge if we define the PA dynamics in the natural way, that is for each of the M edges in turn. Overcoming this is likely to be annoying.
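A naive simulation of these dynamics, which sidesteps the multiple-edge problem by simply sampling the M endpoints with replacement (so this is a sketch of the flavour of the model, not the careful Bollobas / Riordan formalisation; the seed graph and names are my own choices):

```python
import random

def preferential_attachment(n, M, seed=None):
    """Grow a multigraph on n vertices: each new vertex sends M edges
    to existing vertices chosen in proportion to their current degree."""
    rng = random.Random(seed)
    start = M + 1
    # Seed graph: a complete graph on the first M + 1 vertices.
    edges = [(i, j) for i in range(start) for j in range(i)]
    # Each vertex appears in this list once per unit of degree, so a
    # uniform pick from it is exactly a degree-biased vertex choice.
    stub_list = [v for e in edges for v in e]
    for new in range(start, n):
        targets = [rng.choice(stub_list) for _ in range(M)]
        for t in targets:
            edges.append((new, t))
            stub_list += [new, t]
    return edges
```

Maintaining the degree-biased choice as a flat list of vertex copies is the same trick as the stub list in the configuration model.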

Bollobas and Riordan do indeed overcome these possible problems in a formal way, and prove that a version of this model does indeed have power-law decay of the degree distribution, with exponent equal to 3. The model in the Eckhoff / Morters paper instead joins new vertex (n+1) to old vertex m with probability:

$\frac{f(\text{in-degree of }m)}{n},$

where f is some function, which for now we assume has the form $f(k)=\gamma k+\beta$. Since the vertices are constructed one at a time, it is well-defined to orient these edges from new to old vertices, hence this notion of in-degree makes sense.

It was not obvious to me that this model was more general than the Bollobas/Riordan model, but we will explain this in a little while. First I want to explain why the Bollobas/Riordan model has power law tails, and how one goes about finding the exponent of this decay, since this was presented as obvious in most of the texts I read yet is definitely an important little calculation.

So let’s begin with the Bollobas/Riordan model. It makes sense to think of the process in terms of time t, so there are t – M vertices in the graph. But if t is large, this is essentially equal to t. We want to track the evolution of the degree of some fixed vertex v_i, the ith vertex to be formed. Say this degree is d(t) at time t. Then the total number of edges in the graph at time t is roughly tM. Therefore, the probability that a new vertex gets joined to vertex v is roughly $\frac{Md}{2Mt}$, where the M appears in the numerator because there are M fresh edges available. Note that we have ignored the possibility of trying to connect multiple edges from the new vertex to v, so this holds provided d is substantially smaller than t. With the boundary condition $d(i)=M$, this leads to the simple ODE

$\dot{d}=\frac{d}{2t}\quad \Rightarrow\quad d=M(\frac{t}{i})^{1/2}.$

To me at least it was not immediately clear why this implied that the tail of the degree distribution had exponent 3. The calculation works as follows. Let D be the degree of a vertex at large time t, chosen uniformly at random.

$d_i\propto (\frac{t}{i})^{1/2}$

$\Rightarrow\quad \mathbb{P}(D\geq d)=\frac{1}{t}|\{i:(\frac{t}{i})^{1/2}\geq d\}|=\frac{1}{t}|\{i:i\leq \frac{t}{d^2}\}|=\frac{1}{d^2}.$

So the tail $\mathbb{P}(D\geq d)$ decays like $d^{-2}$, and hence the mass function $\mathbb{P}(D=d)$ decays like $d^{-3}$, which is what is meant by exponent 3.
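This calculation can be checked numerically without simulating the graph at all: plug the heuristic degrees $d_i=(t/i)^{1/2}$ into an empirical tail (a deterministic check of the computation above, with the constant M suppressed):

```python
t = 10 ** 5
# Heuristic degree of vertex i at time t, up to the constant M.
degrees = [(t / i) ** 0.5 for i in range(1, t + 1)]

def tail(d):
    """Proportion of the t vertices with heuristic degree >= d."""
    return sum(1 for x in degrees if x >= d) / t

# tail(d) should be close to 1 / d^2, i.e. a power law with
# exponent 3 for the mass function.
```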

Now we consider the Eckhoff / Morters model. The main difference here is that instead of assuming that each new vertex comes with a fixed number of edges, instead the new vertex joins to each existing vertex independently with probability proportional to the degree of the existing vertex. More precisely, they assume that edges are directed from new vertices to old vertices, and then each new vertex n+1 is joined to vertex m<n+1 with probability $\frac{f(\text{indegree of }m\text{ at time }n)}{n}\wedge 1$, where $f(k)=\gamma k +\beta$, for $\gamma\in[0,1), \beta>0$.

I was stuck for a long time before I read carefully enough the assertion that $\beta>0$. Of course, if this doesn’t hold, then the graph won’t grow fast enough. For if $\beta=0$, then f is linear through the origin, and we can lift the statement about the evolution of the degree of a vertex to a statement about the evolution of the total number of edges. Note that each edge contributes exactly one to the total in-degree. So we obtain

$\dot{E}=\frac{\gamma E}{t}\quad\Rightarrow E(t)\propto t^\gamma.$

In particular, this is much less than t, so the majority of vertices have small degree. The reason is fairly clear in fact: since the preferential attachment mechanism depends only on in-degree, and the in-degree of a new vertex is zero by construction, if f(0)=0 there is no way for a new vertex ever to receive an edge. So all the edges in the graph for large t will be incident to a vertex that had positive in-degree in the time 0 configuration. Hence we need $\beta>0$ for the model to be meaningful. Note that this means we effectively have an Erdos-Renyi type mechanism AND a preferential attachment evolution: for each new vertex, we add roughly $\beta$ edges to existing vertices chosen uniformly at random (rather than by a PA method), and also some assigned via PA. A previous paper by Dereich and Morters shows that the asymptotic degree distribution has a power-law tail with exponent

$\tau:=\frac{\gamma+1}{\gamma}.$

Note that $\gamma=\frac12$ gives the same exponent (3) as the Bollobas / Riordan model.

We can apply a similar ODE approximation as above to estimate the likely large time behaviour of the number of edges:

$E'=\frac{\gamma E + \beta t}{t}.$

So since $E'\geq \beta$, we have $E\geq \beta t$; defining $F(t)=E(t)/t$, we get:

$tF'(t)=\beta-(1-\gamma)F(t)$        (1)

Noting that F’ is positive when $F< \frac{\beta}{1-\gamma}$ and negative when $F>\frac{\beta}{1-\gamma}$ suggests that for large t, this is an equilibrium point for F and hence $E(t)\approx \frac{\beta t}{1-\gamma}$. Obviously, this is highly non-rigorous, as F’ can be very small and still satisfy the relation (1), so it is not clear that the ‘equilibrium’ for F is stable. Furthermore, one needs to check that the binomial variables that supply the randomness to this model are sufficiently concentrated that this approximation by expectation is reasonable.
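One can at least check the heuristic numerically: a crude Euler integration of $E'=(\gamma E+\beta t)/t$ does settle at slope $\frac{\beta}{1-\gamma}$ (of course, this checks the ODE, not the concentration of the underlying random process):

```python
def edge_slope(gamma, beta, t_max=10 ** 6):
    """Euler-integrate E'(t) = (gamma * E + beta * t) / t from t = 1
    with unit steps, and return E(t_max) / t_max."""
    E, t = 0.0, 1.0
    while t < t_max:
        E += (gamma * E + beta * t) / t  # step size dt = 1
        t += 1.0
    return E / t

# Expect edge_slope(gamma, beta) to be close to beta / (1 - gamma).
```

The discrete recursion here is $E_{t+1}=E_t(1+\gamma/t)+\beta$, whose linear-growth rate solves $a=a\gamma+\beta$, i.e. $a=\frac{\beta}{1-\gamma}$, matching the equilibrium for F.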

Nonetheless, as a heuristic this is not completely unsatisfactory, and it leads to the conclusion that E(t) is a linear function of t, and so the distribution of the out-degrees for vertices formed at large times t is asymptotically Poisson, with parameter

$\lambda =\frac{\beta\gamma}{1-\gamma}+\beta=\frac{\beta}{1-\gamma}$.

Note that this is the same situation as in Erdos-Renyi. In particular, it shows that all the power tail behaviour comes from the in-degrees. In a way this is unsurprising, as these evolve in time, whereas the out-degree of vertex t does not change after time t. Dereich and Morters formalise this heuristic with martingale analysis.

The reason we are interested in this type of model is that it better reflects networks seen in real life. Some of these networks are organic, and so it is natural to consider some form of random destructive mechanism, for example lightning, that kills a vertex and all its edges. We have to compare this sort of mechanism, which chooses a vertex uniformly at random, against a targeted attack, which deletes the vertices with largest degree. Note that in Erdos-Renyi, the largest degree is not much larger than the typical degree, because the degree distribution is asymptotically Poisson. We might imagine that this is not the case in some natural networks. For example, if one wanted to destroy the UK power network, it would make more sense to target a small number of sub-stations serving large cities than, say, some individual houses. However, a random attack on a single vertex is unlikely to make much difference, since the most likely outcome by far is that we lose only a single house etc.

In the Eckhoff / Morters model, the oldest vertices by construction have roughly the largest degrees, so it is clear what targeting the most significant $\epsilon n$ vertices means. They then show that these vertices include all the vertices that give the power-law behaviour. In particular, if you remove all of these vertices and, obviously, the edges incident to them, you are left with a graph with an exponential tail in the asymptotic degree distribution, with largest degree on the order of log n. It was shown in a previous paper that this type of network is not vulnerable to random removal of nodes. Perhaps most interestingly, these authors now prove that after removing the most significant $\epsilon n$ vertices, the network IS vulnerable to random removal of nodes, leading to the conclusion that it is preferable to experience a random attack followed by a targeted attack than vice versa!

In a future (possibly distant) post, I want to say some slightly more concrete things about how these processes link to combinatorial stochastic processes I understand slightly better, in particular urn models. I might also discuss the configuration model, an alternative approach to generating complex random networks.