I am aiming to write a short post about each lecture in my ongoing course on Random Graphs. Details and logistics for the course can be found here.
By this point of the course, we’ve studied several aspects of the Erdős-Rényi random graph $G(n,p)$, especially in the sparse setting $p=\lambda/n$. We’ve also taken a lengthy detour to revise Galton-Watson trees, with a particular focus on the case of Poisson offspring distribution.
This is deliberate. Note that a given vertex $v$ of $G(n,\lambda/n)$ has some number of neighbours distributed as $\mathrm{Bin}(n-1,\tfrac{\lambda}{n}) \approx \mathrm{Poisson}(\lambda)$, and the same approximation remains valid as we explore the graph (for example in a breadth-first fashion) either until we have seen a large number of vertices, or until some ultra-pathological event happens, such as a vertex having degree $n/3$.

In any case, we are motivated by the notion that the local structure of $G(n,\lambda/n)$ is well-approximated by the Galton-Watson tree with $\mathrm{Poisson}(\lambda)$ offspring, and in this lecture and the next we try to make this notion precise, and discuss some consequences when we can show that this form of convergence occurs.
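To flesh this approximation out (a standard computation, not spelled out in the lecture): for fixed $k$, as $n \to \infty$,

\[ \mathbb{P}\left( \mathrm{Bin}\left(n-1, \tfrac{\lambda}{n}\right) = k \right) = \binom{n-1}{k} \left( \frac{\lambda}{n} \right)^k \left( 1 - \frac{\lambda}{n} \right)^{n-1-k} \longrightarrow e^{-\lambda} \frac{\lambda^k}{k!}, \]

using $\binom{n-1}{k} \sim n^k/k!$ and $(1-\lambda/n)^{n-1-k} \to e^{-\lambda}$.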
Deterministic graphs
Throughout, we will be interested in rooted graphs, since by definition we have to choose a root vertex whose local neighbourhood is to be studied. Usually, we will study a sequence of rooted graphs $(G_n, \rho_n)$, where the vertex set of $G_n$ is $[n]$, or certainly increasing in $n$ (as in the first example).

For some rooted graph $(G, \rho)$, we say such a sequence $(G_n, \rho_n)$ converges to $(G, \rho)$ locally if for all radii $r \ge 1$, we have

\[ B_r^{G_n}(\rho_n) \simeq B_r^G(\rho), \quad \text{for all } n \ge N(r), \]

where $B_r^G(\rho)$ denotes the subgraph of $G$ induced by the vertices within graph-distance $r$ of $\rho$, viewed as a graph rooted at $\rho$, and $\simeq$ denotes isomorphism of rooted graphs. In words, the neighbourhood around $\rho_n$ in $G_n$ is the same up to radius $r$ as the neighbourhood around $\rho$ in $G$, so long as $n$ is large enough (for given $r$).
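As a quick illustration of checking this definition in practice, here is a minimal sketch using networkx (my own, not from the lecture; the helper name rooted_balls_agree and the truncation depths are choices of the sketch):

```python
import networkx as nx
from networkx.algorithms import isomorphism as iso

def rooted_balls_agree(G1, root1, G2, root2, r):
    """Check whether B_r^{G1}(root1) and B_r^{G2}(root2) agree as
    rooted graphs: extract the radius-r balls, mark the roots, and
    test isomorphism respecting the root marks."""
    B1 = nx.ego_graph(G1, root1, radius=r).copy()
    B2 = nx.ego_graph(G2, root2, radius=r).copy()
    B1.nodes[root1]["is_root"] = True
    B2.nodes[root2]["is_root"] = True
    return nx.is_isomorphic(
        B1, B2, node_match=iso.categorical_node_match("is_root", False)
    )

# The binary tree of depth n, rooted at its root (node 0), agrees with
# a much deeper binary tree out to radius r whenever n >= r; truncating
# the "infinite" tree at depth r + 5 does not affect its radius-r ball.
n, r = 8, 3
assert rooted_balls_agree(nx.balanced_tree(2, n), 0,
                          nx.balanced_tree(2, r + 5), 0, r)
```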
This is best illustrated by an example, such as $T_n$, the binary tree to depth $n$. If we take $\rho_n$ to be the usual root, then the trees are nested, and converge locally to the infinite binary tree $T_\infty$. Slightly less obviously, if we take $\rho_n$ to be one of the leaves, then the trees are still nested (up to labelling, ie in the sense of isomorphisms of rooted trees), and converge locally to the canopy tree, defined by a copy of $\mathbb{Z}_{\ge 0}$ with nearest-neighbour edges, and where each vertex $k \ge 1$ is connected to the root of a disjoint copy of $T_{k-1}$.
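The canopy tree is perhaps easiest to absorb by building it. Here is a hedged sketch (the function name and labelling scheme are mine) constructing a finite truncation, with a sanity check that its balls around the base of the spine match those around a leaf of a deep binary tree:

```python
import networkx as nx

def truncated_canopy_tree(m):
    """The canopy tree truncated at spine vertex m: spine vertices
    s0, ..., sm with nearest-neighbour edges, where each sk (k >= 1)
    is joined to the root of a disjoint binary tree of depth k - 1."""
    G = nx.path_graph([f"s{k}" for k in range(m + 1)])
    for k in range(1, m + 1):
        T = nx.balanced_tree(2, k - 1)  # binary tree of depth k - 1, rooted at 0
        T = nx.relabel_nodes(T, {v: f"t{k}_{v}" for v in T})
        G.add_nodes_from(T.nodes())  # the depth-0 tree has one node and no edges
        G.add_edges_from(T.edges())
        G.add_edge(f"s{k}", f"t{k}_0")
    return G

# Sanity check: the radius-r ball around s0 matches the radius-r ball
# around a leaf of a deep binary tree. (Nodes of balanced_tree are
# numbered level by level, so the largest label is a bottom leaf.)
r = 4
T = nx.balanced_tree(2, 10)
assert nx.is_isomorphic(
    nx.ego_graph(T, max(T), radius=r),
    nx.ego_graph(truncated_canopy_tree(8), "s0", radius=r),
)
```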
Things get more interesting when the root is chosen randomly, for example uniformly at random, as this encodes more global information about the graphs $G_n$. When the $G_n$ are vertex-transitive, and we only care about rooted graphs up to isomorphism, it doesn’t matter how we choose the root.
Otherwise, we say that $G_n$ converges in the local weak sense to $(G, \rho)$ if, for all $r \ge 1$ and for all rooted graphs $(H, \rho_H)$,

\[ \mathbb{P}\left( B_r^{G_n}(\rho_n) \simeq (H, \rho_H) \right) \longrightarrow \mathbb{P}\left( B_r^G(\rho) \simeq (H, \rho_H) \right), \]

as $n \to \infty$.
Alternatively, one can phrase this as a result about convergence of rooted-graph-valued distributions.
A simple non-transitive example is $P_n$, the path of length $n$. Then the $r$-neighbourhood of a vertex is isomorphic to $B_r^{\mathbb{Z}}(0)$, a path of length $2r$ rooted at its midpoint, unless that vertex is within graph-distance $(r-1)$ of one of the leaves of $P_n$. As $n \to \infty$, the proportion of such vertices vanishes, and so

\[ \mathbb{P}\left( B_r^{P_n}(\rho_n) \simeq B_r^{\mathbb{Z}}(0) \right) \longrightarrow 1, \]

from which we conclude the unsurprising result that $P_n$ converges in the local weak sense to $(\mathbb{Z}, 0)$. (Which is vertex-transitive, so it doesn’t matter where we select the root.)
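To make the vanishing proportion explicit: $P_n$ has $n+1$ vertices, of which exactly $2r$ lie within distance $(r-1)$ of an endpoint (once $n \ge 2r$), so

\[ \mathbb{P}\left( B_r^{P_n}(\rho_n) \simeq B_r^{\mathbb{Z}}(0) \right) = 1 - \frac{2r}{n+1} \longrightarrow 1. \]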
The binary trees offer a slightly richer perspective. Let $L_n$ be the set of leaves of $T_n$, and we claim that when $\rho_n$ is chosen uniformly from the vertices of $T_n$, then $d_{T_n}(\rho_n, L_n)$ converges in distribution. Indeed,

\[ \mathbb{P}\left( d_{T_n}(\rho_n, L_n) = k \right) = \frac{2^{n-k}}{2^{n+1} - 1}, \]

whenever $0 \le k \le n$, and so the given distance converges in distribution to the Geometric distribution with parameter $1/2$ supported on $\{0, 1, 2, \dots\}$.
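Spelling out the count behind this: $T_n$ has $2^{n+1}-1$ vertices, of which the $2^{n-k}$ vertices at depth $n-k$ lie at distance exactly $k$ from the leaf set, and for each fixed $k$,

\[ \frac{2^{n-k}}{2^{n+1} - 1} \xrightarrow[n \to \infty]{} 2^{-(k+1)} = \left( \frac{1}{2} \right)^k \cdot \frac{1}{2}, \]

which is precisely the Geometric($1/2$) mass function on $\{0, 1, 2, \dots\}$.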
This induces a random local weak limit, namely the canopy tree, rooted at one of the spine vertices we denoted by $k \in \mathbb{Z}_{\ge 0}$, with the choice of this vertex given by Geometric($1/2$).
Notation
Such limits, and the random graph versions to follow, are sometimes referred to as Benjamini-Schramm convergence, after the authors of the paper [LINK] in which this notion was first introduced in earnest. Various subsequent authors use local weak convergence and Benjamini-Schramm convergence to distinguish a general distribution on roots from a uniform measure on roots (or some similar distinction). For the purposes of this course, I followed the notation in Volume II of van der Hofstad’s notes exactly, and did not use the latter phrase.
A note on finiteness and tightness
Note at this point that we have not said anything about the class of random rooted graphs which may appear as a limit. However, it can be seen that any such limit must be locally finite if it is to arise as the local weak limit of a sequence of finite graphs.
In this course, we won’t think of metrizing the space of rooted graphs, but it’s worth mentioning that a tightness criterion for local weak convergence is that the degrees of the roots are uniformly integrable across the sequence, meaning that very large degrees do not contribute significantly to the expected degree in the limit. This induces probabilistic bounds on the largest degree to be seen within the radius-$r$ neighbourhood (bearing in mind that size-biasing appears everywhere when studying degrees), and tightness follows.
Random graphs – quenched versus annealed settings
In the previous example, all the randomness came from the choice of root. We want to return to a setting where the graph itself is random as well, though we will restrict attention to the case where the set of vertices of $G_n$ is always $[n]$. In this course, we also restrict attention to the two cases where either the root is chosen uniformly from the vertex set (as relevant to $G(n,p)$), or where the graph is a branching process tree with the usual (deterministic) root.
We say that $G_n$ converges in distribution locally weakly to $(G, \rho)$ if, as before,

\[ \mathbb{P}\left( B_r^{G_n}(\rho_n) \simeq (H, \rho_H) \right) \longrightarrow \mathbb{P}\left( B_r^G(\rho) \simeq (H, \rho_H) \right), \]

as $n \to \infty$. Here, we are taking the probability over both sources of randomness on the LHS. This is more transparent if we rewrite it as

\[ \mathbb{E}\left[ \frac{1}{n} \sum_{v \in [n]} \mathbf{1}\left\{ B_r^{G_n}(v) \simeq (H, \rho_H) \right\} \right] \longrightarrow \mathbb{P}\left( B_r^G(\rho) \simeq (H, \rho_H) \right). \]

In particular, if the graph is vertex-transitive then $\mathbb{P}\left( B_r^{G_n}(\rho_n) \simeq (H, \rho_H) \right) = \mathbb{P}\left( B_r^{G_n}(1) \simeq (H, \rho_H) \right)$.
We also say that $G_n$ converges in probability locally weakly to $(G, \rho)$ if

\[ \mathbb{P}\left( B_r^{G_n}(\rho_n) \simeq (H, \rho_H) \,\middle|\, G_n \right) \stackrel{\mathbb{P}}{\longrightarrow} \mathbb{P}\left( B_r^G(\rho) \simeq (H, \rho_H) \right), \]

as $n \to \infty$. Note now that the LHS is a random variable, depending on $G_n$ (but not on $\rho_n$, which has been averaged out by the conditioning). Again, the indicator function form may be more transparent:

\[ \frac{1}{n} \sum_{v \in [n]} \mathbf{1}\left\{ B_r^{G_n}(v) \simeq (H, \rho_H) \right\} \stackrel{\mathbb{P}}{\longrightarrow} \mathbb{P}\left( B_r^G(\rho) \simeq (H, \rho_H) \right). \]
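To see the indicator-average in action, here is a small simulation sketch (my own, with parameters chosen arbitrarily). For $r = 1$, edges among the neighbours of a vertex are rare in the sparse regime, so counting vertices whose $1$-ball is a star with $k$ leaves essentially reduces to counting vertices of degree $k$, and the local limit predicts Poisson($\lambda$) proportions:

```python
import math
from collections import Counter
import networkx as nx

n, lam = 100_000, 2.0
G = nx.fast_gnp_random_graph(n, lam / n, seed=42)
degree_counts = Counter(d for _, d in G.degree())

# Empirical fraction of vertices of degree k, against the Poisson(lam)
# mass function predicted by the local (Galton-Watson) limit.
for k in range(6):
    empirical = degree_counts[k] / n
    predicted = math.exp(-lam) * lam**k / math.factorial(k)
    print(f"k = {k}: empirical {empirical:.4f}  vs  Poisson {predicted:.4f}")
```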
The choice of names for these convergence settings might be counter-intuitive on a first reading. In both settings, we are studying the number of vertices whose local neighbourhood is exactly described by $(H, \rho_H)$.
- In the first setting, we require this quantity to converge in expectation after rescaling.
- In the second setting, we require this quantity to concentrate in distribution to a constant (which is the same as converging in probability to a constant).
- It’s probably not actually helpful to think of these as convergence in distribution vs convergence in probability of any particular quantity.
- This distinction is referred to in general as quenched (conditional on the graph) versus annealed (averaged over the graph).
It’s clear that this represents a division into first- and second-moment arguments, such as we’ve seen earlier in the course. Unsurprisingly, this induces a result about asymptotic independence, when we have local weak convergence in probability. In the lecture we phrased this as: for $\rho_n^1, \rho_n^2$ chosen uniformly and independently from $[n]$,

\[ \mathbb{P}\left( B_r^{G_n}(\rho_n^1) \simeq (H_1, \rho_{H_1}),\; B_r^{G_n}(\rho_n^2) \simeq (H_2, \rho_{H_2}) \right) \longrightarrow \mathbb{P}\left( B_r^G(\rho) \simeq (H_1, \rho_{H_1}) \right) \mathbb{P}\left( B_r^G(\rho) \simeq (H_2, \rho_{H_2}) \right). \]

We prove this by conditioning on $G_n$, whereupon the conditional probability of the event on the LHS becomes

\[ \mathbb{P}\left( B_r^{G_n}(\rho_n^1) \simeq (H_1, \rho_{H_1}) \,\middle|\, G_n \right) \cdot \mathbb{P}\left( B_r^{G_n}(\rho_n^2) \simeq (H_2, \rho_{H_2}) \,\middle|\, G_n \right), \]

since the two roots are conditionally independent given $G_n$. Both of these bracketed terms are random, but they converge in probability to constants by the convergence assumption. And it is true and straightforward to show that if $X_n \stackrel{\mathbb{P}}{\to} x$ and $Y_n \stackrel{\mathbb{P}}{\to} y$, then $X_n Y_n \stackrel{\mathbb{P}}{\to} xy$, which shows that the given result in fact holds in the quenched setting, so certainly holds in the annealed setting (ie by removing the conditioning on $G_n$) too. Note throughout that the fact the quantities of interest are bounded (by 1, since they are probabilities…) makes it easy to move from convergence in distribution to convergence in expectation.
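For the record, the product step is immediate here precisely because everything is bounded by 1:

\[ |X_n Y_n - xy| \le |X_n - x|\,|Y_n| + |x|\,|Y_n - y| \le |X_n - x| + |Y_n - y| \stackrel{\mathbb{P}}{\longrightarrow} 0. \]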
Notes
- In the quenched setting, if we drop the assumption that the pre-limit graphs are growing, we encounter some pathologies, such as a random graph not converging in probability locally weakly to itself. This phenomenon is a feature of the quenched setting, rather than of the space of rooted graphs.
- Proving that $G(n, \lambda/n)$ converges locally weakly in probability to the Galton-Watson tree with offspring distribution $\mathrm{Poisson}(\lambda)$ is not going to be part of the course, since the details are a more technical version of previous second-moment calculations. What is most relevant is that a lot of useful results can be read off as a consequence of this property. At a meta level, it is preferable to do one combinatorially technical second-moment argument and derive lots of results from it, than to do slightly technical arguments from scratch every time.
- In the next lecture, we’ll return to the question of component sizes in $G(n, \lambda/n)$, starting to consider the supercritical case $\lambda > 1$. We will see that the local weak limits provide upper bounds on the size of the giant component (in some generality), but we will need more involved arguments to provide matching lower bounds.