Kernels of critical graph components

This post is motivated by G(N,p), the classical Erdős–Rényi random graph, and specifically by its critical window, where p=p(N)=\frac{1}{N}(1+\lambda N^{-1/3}).

We start with the following observation, which makes no restriction on p. Suppose a component of G(N,p) is a tree. Then the graph geometry of this component is that of a uniform random tree on the appropriate number of vertices. This is deliberately informal. To be formal, we’d have to say “condition on a particular subset of vertices forming a tree-component” and so on. But the formality is broadly irrelevant: at the level of metric scaling limits, if we want to describe the structure of a tree component, it doesn’t matter whether it has \log N or \frac{1}{7}N vertices, because in both cases the tree structure is uniform. The only thing that changes is the scaling factor.

In general, when V vertices form a connected component of a graph with E edges, we define the excess to be E-V+1. Since a connected graph on V vertices has at least V-1 edges, the excess is non-negative, and it is zero precisely when the component is a tree. I’m reluctant to say that the excess counts the number of cycles in the component, but certainly it quantifies the amount of cyclic structure present. We will sometimes, in a mild abuse of terminology, talk about excess edges. But note that for a connected component with positive excess, there is a priori no way to select which edges are the excess edges. In a graph process, or when there is some underlying exploration of the component, there might sometimes be a canonical way to classify the excess edges, though it’s worth remarking that the risk of size-biasing errors is always extremely high in this sort of situation.
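
As a quick sanity check on these definitions, here is a minimal sketch (my own illustration, using the networkx library, which is not part of the post) that samples G(N,p) in the critical window and computes the excess of each component:

```python
import networkx as nx

def component_excesses(N, lam):
    """Sample G(N, p) at p = (1 + lam * N^(-1/3)) / N and return the
    excess E - V + 1 of each connected component, largest first."""
    p = (1 + lam * N ** (-1 / 3)) / N
    G = nx.gnp_random_graph(N, p)
    comps = sorted(nx.connected_components(G), key=len, reverse=True)
    return [G.subgraph(c).number_of_edges() - len(c) + 1 for c in comps]

# for N around 10^5, the largest components have size of order N^(2/3)
# and O(1) excess, while almost all small components are trees
print(component_excesses(100_000, 0.0)[:5])
```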

Returning to the random graph process, as so often there are big changes around criticality. In the subcritical regime, the components are small, and most of them, even the largest with high probability, are trees. In the supercritical regime, the giant component has excess \Theta(N), which is qualitatively very different.

It feels like every talk I’ve ever given has begun with an exposition of Aldous’s seminal paper [Al97] giving a distributional scaling limit of the sizes of critical components in the critical window, and a relation between the process on this time-scale and the multiplicative coalescent. And it remains relevant here, because the breadth-first exploration process can also be used to track the number of excess edges.

In a breadth-first exploration, we have a stack of vertices we are waiting to explore. We pick one and look at its neighbours within the rest of the graph, that is, excluding the vertices we have already fully explored, and also excluding the other vertices in the stack. That’s the easiest way to track the total component size. But we can simultaneously track how many times we would have joined to a neighbour within the stack, each of which corresponds to an excess edge, and Aldous derives a joint distributional scaling limit for the sizes of the critical components and their excesses. (Note that in this case there is a canonical notion of excess edge, but it depends not just on the graph structure, but also on the extra randomness of the ordering within the breadth-first search.)
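
To make the bookkeeping concrete, here is a sketch of such an exploration, assuming the graph is stored as an adjacency dict of sets (my own representation). The count of edges back into the stack recovers exactly the excess E-V+1 of the component:

```python
from collections import deque

def explore(adj, start):
    """Breadth-first exploration of the component containing `start`,
    returning its size and the number of surplus (excess) edges,
    i.e. edges discovered between two vertices already on the stack."""
    explored, on_stack = set(), {start}
    stack = deque([start])
    size = surplus = 0
    while stack:
        v = stack.popleft()
        explored.add(v)
        size += 1
        for w in adj[v]:
            if w in explored:
                continue                # this edge was counted earlier
            elif w in on_stack:
                surplus += 1            # joins two vertices on the stack
            else:
                on_stack.add(w)         # a tree edge: w is newly discovered
                stack.append(w)
    return size, surplus

# e.g. a 4-cycle has size 4 and surplus 4 - 4 + 1 = 1:
print(explore({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}, 0))
```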

Roughly speaking, we consider the reflected exploration process, and its scaling limit, which is a reflected parabolically-drifting Brownian motion (though the details of this are not important at this level of exposition, except that it’s a well-behaved non-negative process that hits zero often). The component sizes are given by the widths of the excursions above zero, scaled up by a factor of N^{2/3}. Then, conditional on the shape of the excursion, the excess is Poisson with parameter the area under the excursion, with no rescaling. That is, a critical component has \Theta(1) excess.

So, with Aldous’s result in the background, when we ask about the metric structure of these critical components, we are really asking: “what does a uniformly-chosen connected component with fixed excess look like when the number of vertices grows?”

I’ll try to keep notation light, but let’s say T(n,k) is a uniform choice from connected graphs on n vertices with excess k.

[Note, the separation of N and n is deliberate, because in the critical window, the connected components have size n = \Theta(N^{2/3}), so I want to distinguish the two problems.]

In this post, we will mainly address the question: “what does the cycle structure of T(n,k) look like for large n?” When k=0, we have a uniform tree, and the convergence of this to the Brownian CRT is now well-known [CRT2, LeGall]. We hope for results with a similar flavour for positive excess k.

2-cores and kernels

First, we have to give a precise statement of what it means to study just the cycle structure of a connected component. From now on I will assume we are always working with a connected graph.

There are several equivalent definitions of the 2-core C(G) of a graph G:

  • When the excess is positive, there are some cycles. The 2-core is the union of all edges which form part of some cycle, and any edges which lie on a path between two edges which both form part of some cycle.
  • C(G) is the maximal induced subgraph where all degrees are at least two.
  • If you remove all the leaves from the graph, then all the leaves from the remaining graph, and continue until no leaves remain, the 2-core is the graph you arrive at. (A code sketch of this procedure follows the list.)
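
Here is that leaf-stripping procedure as a minimal code sketch, again with my own adjacency-dict representation:

```python
def two_core(adj):
    """Repeatedly strip vertices of degree <= 1 until none remain;
    what is left is the 2-core (empty if the graph is a forest).
    The input adjacency dict is copied, not modified."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    leaves = [v for v, nbrs in adj.items() if len(nbrs) <= 1]
    while leaves:
        v = leaves.pop()
        for w in adj.pop(v):
            adj[w].discard(v)
            if len(adj[w]) == 1:        # w has just become a leaf
                leaves.append(w)
    return adj

# a triangle with a pendant path: the path is stripped, the triangle stays
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
print(sorted(two_core(g)))              # [0, 1, 2]
```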

It’s very helpful to think of the overall structure of the graph as consisting of the 2-core, with pendant trees ‘hanging off’ the 2-core. That is, we can view every vertex of the 2-core as the root of a (possibly size 1) tree. This is particularly clear if we remove all the edges of the 2-core from the graph. What remains is a forest, with one tree for each vertex of the 2-core.

In general, the k-core is the maximal induced subgraph where all degrees are at least k. The core is generally taken to be something rather different. For this post (and any immediate sequels) I will never refer to the k-core for k>2, and certainly not to the traditional core. So I write ‘core’ for ‘2-core’.

As you can see in the diagram, the core consists of lots of paths, and topologically, the lengths of these paths are redundant. So we will often consider instead the kernel, K(G), which is constructed by taking the core and contracting all the paths between vertices of degree greater than 2. The resulting graph has minimum degree at least three. So far we’ve made no comment about the simplicity of the original graphs, but certainly the kernel need not be simple. It will regularly have loops and multiple edges. The kernel of the graph and core in the previous diagram is therefore this:
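
In code, the contraction is mechanical: from each branch vertex (degree at least 3), follow each incident path through the degree-2 vertices until the next branch vertex. A sketch, assuming the core is a simple graph stored as an adjacency dict and that k\ge 2, so branch vertices exist:

```python
def kernel_edges(core):
    """Contract the maximal paths of degree-2 vertices in a 2-core.
    Returns kernel edges as (u, v, path_length) triples, so loops
    (u == v) and multiple edges come out as they should."""
    branch = {v for v, nbrs in core.items() if len(nbrs) >= 3}
    edges, used = [], set()             # `used` records directed first steps
    for v in branch:
        for first in core[v]:
            if (v, first) in used:
                continue                # already walked from the other end
            prev, cur, length = v, first, 1
            while cur not in branch:    # walk through the degree-2 vertices
                a, b = core[cur]
                prev, cur = cur, (b if a == prev else a)
                length += 1
            used.update({(v, first), (cur, prev)})
            edges.append((v, cur, length))
    return edges

# a 'theta' core: vertices 0 and 1 joined by three paths of lengths 1, 2, 3
theta = {0: {1, 2, 3}, 1: {0, 2, 4}, 2: {0, 1}, 3: {0, 4}, 4: {1, 3}}
print(kernel_edges(theta))              # three parallel kernel edges
```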

Kernels of critical components

To recap, we can deconstruct a connected graph as follows. It has a kernel, and each edge of the kernel corresponds to a path of some length in the core. The rest of the graph consists of trees hanging off the core vertices.

For now, we ask about the distribution of the kernel of T(n,k). You might notice that the case k=1 is slightly awkward: when the core consists of a single cycle, it’s somewhat ambiguous how to define the kernel. Everything we do is easily fixable for k=1, but rather than carrying separate cases, we handle only the case k\ge 2.

We first observe that fixing k doesn’t determine the number of vertices or edges in the kernel. For example, both of the following pictures could correspond to k=3:

However, with high probability the kernel is 3-regular, which suddenly makes the previous post relevant. As I said earlier, it can introduce size-biasing errors to add the excess edges one-at-a-time, but these should be constant factor errors, not scaling errors. So imagine the core of a large graph with excess k=2. For the sake of argument, assume the kernel has the dumbbell / handcuffs shape. Now add an extra edge somewhere. It’s asymptotically very unlikely that this is incident to one of the two vertices with degree three in the core. Note it would need to be incident to both to generate the right-hand picture above. Instead, the core will gain two new vertices of degree three.

Roughly equivalently, once the size of the core is fixed (and large), we have to make a uniform choice from connected graphs of this size in which every vertex has degree at least 2, and only \Theta(1) vertices have degree 3 or higher. But the sum of the degrees is determined, because the excess is fixed. If there are n vertices in the core, then there are a factor \Theta(n) more graphs in which all vertices have degree 2 or 3 than graphs in which some vertex has degree at least 4. Let’s state this formally.

Proposition: The kernel of a uniform connected graph with n vertices and excess k\ge 2 is, with high probability as n\rightarrow\infty, 3-regular.

This is proved rather more formally as part of Theorem 7 of [JKLP], essentially as a corollary of some very comprehensive generating function setup; and in [LPW] with a more direct computation.

In the previous post, we introduced the configuration model as a method for constructing regular graphs (or indeed graphs with any fixed degree sequence). We observed that, conditional on the event that the resulting graph is simple, it is uniformly distributed among simple graphs with that degree sequence. When the graph is allowed to be a multigraph, the law is no longer uniform. However, as remarked in (1.1) of [JKLP], for most applications the configuration model measure on multigraphs is the most natural one.

Given a 3-regular labelled multigraph H with 2(k-1) vertices and 3(k-1) edges, and letting K be the multigraph generated by the configuration model with these parameters, we have

\mathbb{P}\left( K \equiv H \right) \propto \left(2^{t(H)} \prod_{e\in E(H)} \mathrm{mult}(e)! \right)^{-1},

where t(H) is the number of loops in H, and \mathrm{mult}(e) is the multiplicity of an edge e. This might seem counter-intuitive at first, because it looks as if we are biasing against graphs with multiple edges, when perhaps our intuition is that, since there are more ways to form a set of multiple edges, we should bias in favour of them.

I think it’s most helpful to look at a diagram of a multigraph as shown, and ask how to assign stubs to edges. At a vertex of degree three with three distinct neighbours, all stub assignments are different, that is 3!=6 possibilities. At a double edge, however, we care which stubs match with which stubs, but we don’t care about the order of the two parallel edges. Alternatively: there are three choices of how to divide each endpoint’s stubs into (2 for the multi-edge, 1 for the rest), and then two choices for how to match up the multi-edge stubs, ie 18 in total = 36/2, a discount factor of 2!=2.
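
The weight formula is easy to test by simulation. The smallest 3-regular case is k=2, with two vertices of degree 3: the only multigraphs are the triple edge (weight \propto 1/3!=1/6) and a loop at each vertex joined by a single edge (weight \propto 1/2^2=1/4), so the formula predicts probabilities 2/5 and 3/5. A sketch, with my own helper names:

```python
import random
from collections import Counter

def configuration_model(degrees):
    """One draw from the configuration model: pair up the labelled
    half-edges (stubs) uniformly at random."""
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    random.shuffle(stubs)
    return [tuple(sorted(stubs[i:i + 2])) for i in range(0, len(stubs), 2)]

counts = Counter()
for _ in range(100_000):
    edges = configuration_model([3, 3])
    counts["triple edge" if edges.count((0, 1)) == 3 else "two loops"] += 1
print({g: c / 100_000 for g, c in counts.items()})   # ~0.4 and ~0.6
```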

We mention this because, in fact, K(T(n,k)) converges in distribution to this configuration model measure. Once you know that K(T(n,k)) is with high probability 3-regular, it’s probably easiest to think about the core again; indeed, you might as well condition on its total size and its number of degree-3 vertices. It’s then not hard to convince yourself that a uniform choice of core induces a uniform choice of kernel. Again, let’s state that as a proposition.

Proposition: For any 3-regular labelled multigraph H with 2(k-1) vertices and 3(k-1) edges as before,

\lim_{n\rightarrow\infty}\mathbb{P}\left( K(T(n,k)) \equiv H \right) \propto \left(2^{t(H)} \prod_{e\in E(H)} \mathrm{mult}(e)! \right)^{-1}.

As we said before, the kernel describes the topology of the core. To reconstruct the graph, we need to know the lengths in the core, and then how to glue pendant trees onto the core. But this final stage depends on k only through the total length of paths in the core. Given that information, it’s a combinatorial problem, and while I’m not claiming it’s easy, it’s essentially the same as for the case with k=1, and is worth treating separately.

It is worth clarifying a couple of things first though. Even the outline of methods above relies on the fact that the size of the core diverges as n grows. Again, the heuristic is that, up to size-biasing errors, T(n,k) looks like a uniform tree with some uniformly-chosen extra edges. But distances in T(n,k) scale like n^{1/2} (and thus distances in critical components of G(N,p) scale like N^{1/3}). And the core will be roughly the set of edges on paths between the endpoints of those uniformly-chosen extra edges, and so will also have total length \Theta(n^{1/2}).
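
The n^{1/2} scaling of distances is easy to observe numerically, at least in the tree case k=0. A sketch (my own code), sampling a uniform labelled tree from a random Prüfer sequence and measuring a typical rescaled distance:

```python
import random
from collections import deque
from heapq import heapify, heappop, heappush

def uniform_tree(n):
    """Uniform labelled tree on {0,...,n-1}, decoded from a uniformly
    random Prüfer sequence, returned as an adjacency dict."""
    seq = [random.randrange(n) for _ in range(n - 2)]
    deg = [1] * n
    for v in seq:
        deg[v] += 1
    leaves = [v for v in range(n) if deg[v] == 1]
    heapify(leaves)
    adj = {v: set() for v in range(n)}
    for v in seq:
        u = heappop(leaves)             # smallest current leaf
        adj[u].add(v); adj[v].add(u)
        deg[v] -= 1
        if deg[v] == 1:
            heappush(leaves, v)
    a, b = heappop(leaves), heappop(leaves)
    adj[a].add(b); adj[b].add(a)
    return adj

def distance(adj, a, b):
    """Graph distance by breadth-first search."""
    dist, queue = {a: 0}, deque([a])
    while queue:
        v = queue.popleft()
        if v == b:
            return dist[v]
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)

# the distance between two fixed (hence exchangeable) vertices,
# rescaled by sqrt(n), is of order 1:
n = 10_000
print(distance(uniform_tree(n), 0, 1) / n ** 0.5)
```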

Once you have conditioned on the kernel structure, and the (large) number of internal vertices on paths in the core (ie the length of the core), it is natural that the assignment of the degree-2 vertices to core paths / kernel edges is uniform. A consequence of this is that if you record (Y_1,\ldots,Y_m) the lengths of paths in the core, where m=3(k-1), then

\frac{(Y_1,\ldots,Y_m)}{\sum Y_i} \stackrel{d}\rightarrow \mathrm{Dirichlet}(1,1,\ldots,1).

This is stated formally as Corollary 7 b) of [ABG09]. It’s worth noting that this confirms that the lengths of the core paths are bounded in probability away from zero after the appropriate rescaling. In seeking a metric scaling limit, this is convenient, as it means there’s no danger that two of the degree-3 vertices end up in ‘the same place’ in the scaling limit object.
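
Under the uniform-assignment description, conditional on the kernel and the total core length, the vector of path lengths is a uniformly random composition, which can be sampled by stars and bars; the normalised vector is then approximately Dirichlet(1,\ldots,1), i.e. uniform on the simplex. A sketch (my own code, with m=3 corresponding to k=2):

```python
import random

def uniform_core_lengths(total, m):
    """A uniformly random composition of `total` into m positive parts,
    via stars and bars: choose m-1 of the total-1 gaps as cut points."""
    cuts = sorted(random.sample(range(1, total), m - 1))
    pts = [0] + cuts + [total]
    return [b - a for a, b in zip(pts, pts[1:])]

# the normalised lengths are approximately Dirichlet(1, 1, 1),
# i.e. uniform on the 2-simplex, once `total` is large:
ys = uniform_core_lengths(10_000, 3)
print([y / 10_000 for y in ys])
```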

To recap, the only missing ingredients for a complete limiting metric description of T(n,k) are now 1) a distributional limit of the total core length; 2) some appropriate description of the set of pendant trees, conditional on the size of the pendant forest. [ABG09] show the first of these. As remarked before, all the content of the second is encoded in the unicyclic (k=1) case, which I have written about before, albeit slightly sketchily, here. (Note that in that post we get around size-biasing by counting a slightly different object, namely unicyclic graphs with an identified cyclic edge.)

However, [ABG09] also propose an alternative construction, which you can think of as glueing CRTs directly onto the stubs of the kernel (with the same distribution as before). The proof that this construction works isn’t as painful as one might fear, and allows a lot of the other metric distributional results to be read off as corollaries.

References

[ABG09] – Addario-Berry, Broutin, Goldschmidt – Critical random graphs: limiting constructions and distributional properties

[CRT2] – Aldous – The continuum random tree: II

[Al97] – Aldous – Brownian excursions, critical random graphs and the multiplicative coalescent

[JKLP] – Janson, Knuth, Łuczak, Pittel – The birth of the giant component

[LeGall] – Le Gall – Random trees and applications

[LPW] – Łuczak, Pittel, Wierman – The structure of a random graph at the point of the phase transition

 


Real Trees – Root Growth and Regrafting

Two weeks ago in our reading group meeting, Raphael told us about Chapter Five which introduces root growth and regrafting. One of the points of establishing the Gromov-Hausdorff topology in this book was to provide a more natural setting for a discussion of tree-valued processes. Indeed in what follows, one can imagine how to start the construction of a similar process for the excursions which can be used to encode real trees, involving cutting off sub-excursions above one-sided local minima, then glueing them back in elsewhere. But taking account of the equivalence structure will be challenging, and it is much nicer to be able to describe cutting a tree in two by removing a single point without having to worry about quotient maps.

We have seen in Chapter Two an example of a process defined on the family of rooted trees with n labelled vertices which has the uniform rooted tree as an invariant distribution. Given a rooted tree with root p, we choose a vertex p' uniformly at random in [n] to be the new root. Then if p'=p we do nothing; otherwise we remove the first edge on the path from p' to p, giving two trees. Adding an edge from p to p' completes the step and gives a new tree with root p'. We might want to take a metric limit of these processes as n grows, and see whether we end up with a stationary real-tree-valued process whose marginals are the BCRT (the Brownian continuum random tree).
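
This chain is pleasant to implement: storing the tree as a parent map, the cut-and-regraft step is just two pointer updates. A minimal sketch (my own code and representation, not the book's):

```python
import random

def regraft_step(parent, root):
    """One step of the root-regrafting chain on rooted labelled trees.
    `parent[v]` is the neighbour of v on the path to the root, with
    parent[root] = None. Cutting the first edge from q towards the
    root and re-attaching the old root to q needs no other changes:
    every other vertex still points along its path towards a root."""
    q = random.randrange(len(parent))
    if q == root:
        return root                  # p' = p: do nothing
    parent[q] = None                 # cut the edge above q
    parent[root] = q                 # add the edge from the old root to q
    return q                         # q is the new root

# run the chain on a path 0 - 1 - 2 - 3 rooted at 0
parent, root = {0: None, 1: 0, 2: 1, 3: 2}, 0
for _ in range(5):
    root = regraft_step(parent, root)
```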

To see non-trivial limiting behaviour, it is most interesting to consider the evolution of a particular subtree (which includes the root) through this process. If the vertex chosen for cutting lies in our observed subtree, then the subtree undergoes a prune and regraft operation. On the other hand, if the vertex chosen for cutting does not lie in the subtree, then we do not see any effect of the pruning, except the addition of a new vertex below the original root, which becomes the new root. So essentially, from the point of view of our observed subtree, the root is growing.

Now we can think about interpreting the dynamics of a natural limit process acting on real trees. The key idea is that we don’t change the set on which the tree is supported much, but instead just change the metric. In particular, we will keep the original tree, and add on length at unit rate. Of course, where this length gets added on entirely determines the metric structure of the tree, but that doesn’t stop us giving a simple ‘name’ for the extra length. If we consider a process X^T starting from a particular finite subtree T, then at time t, the tree X^T_t has vertex set T \coprod (0,t]. (Finite subtree here means that it has finite total length.)

Root regrafting should happen at a rate proportional to the total length of the current observed tree. This is reasonable since, after all, the observed tree is supported within a larger tree, so in the discrete case the probability of a prune-regraft event happening within a given observed subtree is proportional to the number of vertices in that subtree, which scales naturally as length in the real-tree limit. It turns out that to get unit-rate root growth with \Theta(1)-rate prune-regrafting, we should consider subtrees of size \sqrt{n} within a host tree of size n as n\rightarrow\infty. We also rescale the lengths by \frac{1}{\sqrt{n}}, and time by \sqrt{n}, so that we actually see prune-regraft events.

Furthermore, if the subtree is pruned, the location of the pruning is chosen uniformly by length of the current observed subtree. So we can view the pruning process as being driven by a Poisson point process with intensity given by the instantaneous length measure of the tree, which at time t has vertex set T\coprod (0,t]. It will turn out to be consistent that there is a ‘piecewise isometry’ for want of a better phrase between the metric (and thus length measure) on X^T_t and the canonical induced measure on T\coprod (0,t], so we can describe the instances and locations of the pruning events via a pair of PPPs. The first is supported on T \times [0,\infty), and the second on \{(t,x): 0 \le x \le t\}, since we only ‘notice’ pruning at the point labelled x if the pruning happens at some time t after x was created.

If we start from a compact tree T, then the total intensity of this pair is finite up to any fixed time t, and so we have an increasing sequence \tau_0=0<\tau_1<\tau_2<\ldots of pruning times. It is easy to describe (but a bit messy to notate) the evolution of the metric between these pruning times. Essentially, the distance between any pair of points in the observed tree at time \tau_m with root \rho_{\tau_m} is constant between times \tau_m and \tau_{m+1}; new points are added so that the distance between \rho_{\tau_m} and any new point a\in(\tau_m,\tau_{m+1}] is a-\tau_m, and everything else follows from straightforward consideration of geodesics.
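
For simulation purposes, the pruning times seen by the observed tree form an inhomogeneous Poisson process whose rate at time t is the current total length \ell_0+t (taking the proportionality constant to be 1, which is my assumption here). Inverting the cumulative intensity \Lambda(t)=\ell_0 t+t^2/2 against standard exponentials gives the event times:

```python
import math
import random

def pruning_times(length0, horizon):
    """Event times of a Poisson process with rate l(t) = length0 + t
    (root growth at unit rate), by inverting the cumulative intensity
    L(t) = length0 * t + t^2 / 2 against standard exponentials."""
    times, t = [], 0.0
    while True:
        e = random.expovariate(1.0)
        rate = length0 + t
        # solve rate * s + s^2 / 2 = e for the waiting time s > 0
        s = math.sqrt(rate * rate + 2 * e) - rate
        t += s
        if t > horizon:
            return times
        times.append(t)

print(pruning_times(1.0, 5.0))   # tau_1 < tau_2 < ... up to time 5
```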

When a pruning event happens at point x_m at time \tau_m, distances are preserved within the subtree above x_m in X^T_{\tau_m -}, and within the rest of the tree. Again, an expression for the cross distances is straightforward but requires a volume of notation not ideally suited to this medium.

The natural thing to consider is the coupled processes started from different subtrees (both of which must contain the original root) of the same host tree. Say T^1,T^2\le T; then it is relatively easy to check that X^{T^1}_t,X^{T^2}_t \le X^T_t\,\forall t, when we drive the processes by consistently coupled Poisson processes. Furthermore, it is genuinely obvious that the Hausdorff distance between X^{T^1}_t and X^{T^2}_t, here viewed as compact subsets of (X^T_t, d^T_t), remains constant during the root growth phases.

Less obvious but more important is that the Hausdorff distance decreases during regrafting events. Suppose that just before a regrafting event, the two subtrees are T' and T'', and the Hausdorff distance between them is \epsilon. This Hausdorff distance is with respect to the metric on the whole tree T. [Actually this is a mild abuse of notation – I’m now taking T to be the whole tree just before the regraft, rather than the tree at time 0.]

So for any a\in T', we can choose b\in T'' such that d_T(a,b)\le \epsilon. This is preserved under the regraft unless the pruning point lies on the geodesic segment (in T) between a and b. But in that case, the distance between a and the pruning point is again at most \epsilon, and so after the regrafting, a is at most \epsilon away from the new root, which lies in both subtrees, and in particular in the regrafted version of T''.

This is obviously a useful first step on the path to proving any kind of convergence result. There are some technicalities which we have skipped over. It is fairly natural that this leads to a Markov process when the original tree is finite, but it is less clear how to define these dynamics when the total tree length is infinite, as we don’t want regrafting events to be happening continuously unless we can bound their net effect in some sense.

Last week, Franz showed us how to bring the BCRT into matters: specifically, that the BCRT is the unique stationary distribution for this process. After a bit more work, the previous result says that for convergence purposes it doesn’t matter too much which tree we start from, so it is fine to start from a single point. Then the cut points and growth mechanism correspond very well to the Poisson line-breaking construction of the BCRT. With another ‘grand coupling’ we can indeed construct them simultaneously. Furthermore, we can show weak convergence of the discrete-world Markov chain tree algorithm to the process with these root growth with regrafting dynamics.

It does seem slightly counter-intuitive that a process defined on the whole of the discrete tree converges to a process defined through subtrees. Evans remarks in the introduction to the chapter that this is a consequence of describing the limits as compact real trees. In the limit, almost all vertices are close to leaves, so in a Hausdorff sense, considering only \sqrt{n} of the vertices (ie a subtree) doesn’t really make any difference after rescaling edge lengths. I feel I don’t understand exactly why it’s ok to take the limits in this order, but I can see why this might work after more checking.

Tomorrow, we will have our last session, probably discussing subtree prune-and-regraft, where the regrafting does not necessarily happen at the root.