I am aiming to write a short post about each lecture in my ongoing course on Random Graphs. Details and logistics for the course can be found here.
As we edge into the second half of the course, we are now in a position to return to the question of the phase transition between the subcritical regime $\lambda<1$ and the supercritical regime $\lambda>1$, concerning the size of the largest component $L_1(G(n,\lambda/n))$.
In Lecture 3, we used the exploration process to give upper bounds on the size of this largest component in the subcritical regime. In particular, we showed that when $\lambda<1$,

$\frac{1}{n}\left|L_1(G(n,\lambda/n))\right| \;\stackrel{\mathbb{P}}{\longrightarrow}\; 0.$
If we used slightly stronger random walk concentration estimates (Chernoff bounds rather than second-moment bounds via Chebyshev’s inequality), we could in fact have shown that with high probability the size of this largest component is at most a constant multiple of $\log n$.
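For concreteness, here is a rough sketch of how that Chernoff version goes (the constants and notation here are chosen for illustration rather than taken from the lecture). For a fixed vertex $v$, if $|C(v)|\ge k$ then the first $k$ explored vertices of its component found at least $k-1$ children in total, and the total number of children found is stochastically dominated by $\mathrm{Bin}(kn,\tfrac{\lambda}{n})$. A Chernoff bound then gives

$\mathbb{P}\left(|C(v)|\ge k\right) \;\le\; \mathbb{P}\left(\mathrm{Bin}\left(kn,\tfrac{\lambda}{n}\right)\ge k-1\right) \;\le\; \tfrac{1}{\lambda}\,e^{-k I_\lambda}, \qquad I_\lambda := \lambda - 1 - \log\lambda \;>\;0,$

and a union bound over the $n$ possible starting vertices gives

$\mathbb{P}\left(|L_1|\ge a\log n\right) \;\le\; \tfrac{1}{\lambda}\, n^{1-aI_\lambda} \;\longrightarrow\; 0, \qquad \text{whenever } a > 1/I_\lambda.$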
In this lecture, we turn to the supercritical regime. In the previous lecture, we defined various forms of weak local limit, and asserted (without attempting the notationally-involved combinatorial calculation) that the random graph converges locally weakly in probability to the Galton-Watson tree with $\mathrm{Poisson}(\lambda)$ offspring distribution, as we have used informally earlier in the course.
Of course, when $\lambda>1$, this branching process has strictly positive survival probability $\zeta_\lambda>0$. At a heuristic level, we imagine that all vertices whose local neighbourhood is ‘infinite’ are in fact part of the same giant component, which should occupy $\approx \zeta_\lambda n$ vertices. In its most basic form, the result is

$\frac{1}{n}\left|L_1(G(n,\lambda/n))\right| \;\stackrel{\mathbb{P}}{\longrightarrow}\; \zeta_\lambda, \qquad \frac{1}{n}\left|L_2(G(n,\lambda/n))\right| \;\stackrel{\mathbb{P}}{\longrightarrow}\; 0, \qquad (*)$
where the second part is a uniqueness result for the giant component.
The usual heuristic for proving this result is that all ‘large’ components must in fact be joined. For example, if there are two giant components, with sizes $\approx\alpha n,\ \approx\beta n$, then each time we add a new edge (such an argument is often called ‘sprinkling‘), the probability that these two components are joined is $\approx 2\alpha\beta$, and so if we add lots of edges (which happens as we move from edge probability $\frac{\lambda-\epsilon}{n}$ up to $\frac{\lambda}{n}$) then with high probability these two components get joined.
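A back-of-envelope version of this (in notation I am choosing here for illustration): we can realise $G(n,\tfrac{\lambda}{n})$ as the union of $G(n,\tfrac{\lambda-\epsilon}{n})$ with an independent sprinkled graph $G(n,\tfrac{\epsilon'}{n})$, where $\epsilon'\approx\epsilon$. If the first graph has two components of sizes $\alpha n$ and $\beta n$, then

$\mathbb{P}\left(\text{no sprinkled edge joins the two components}\right) \;=\; \left(1-\tfrac{\epsilon'}{n}\right)^{\alpha\beta n^2} \;\le\; e^{-\alpha\beta\epsilon' n} \;\longrightarrow\; 0.$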
It is hard to make this argument rigorous, and the normal approach is to show that with high probability there are no components with sizes within a certain intermediate range (say, between logarithmic order and linear order in $n$), and then show that all larger components are the same by a joint exploration process or a technical sprinkling argument. Cf. the books of Bollobás and of Janson, Łuczak, Ruciński. See also this blog post (and the next page) for a readable online version of this argument.
I can’t find any version of the following argument, which takes the weak local convergence as an assumption, in the literature, but it seems appropriate to this course. It is worth noting that, as we shall see, the method is not hugely robust to adjustments if one is, for example, seeking stronger estimates on the giant component (e.g. a CLT).
Anyway, we proceed in three steps:
Step 1: First we show, using the local limit, that for any $\epsilon>0$, we have $\frac{1}{n}|L_1|\le\zeta_\lambda+\epsilon$ with high probability as $n\to\infty$.
Step 2: Using a lower bound on the exploration process, we show that $\frac{1}{n}|L_1|\ge\delta$, for some sufficiently small $\delta>0$, with high probability.
Step 3: Motivated by duality, we count isolated vertices to show $\frac{1}{n}|L_1|\ge\zeta_\lambda-\epsilon$ with high probability.
We will return to uniqueness at the end.
Step 1
This step is unsurprising. The local limit gives control on how many vertices are in small components of various sizes, and so gives control on how many vertices are in small components of all finite sizes (taking limits in the right order). This gives a bound on how many vertices can be in the giant component.
(Note: parts of this argument appear in the text and exercises of Section 1.4 in the draft of Volume II of van der Hofstad’s notes, which can be found here.)
We can proceed in greater generality, by considering a sequence of random graphs $(G_n)$ which converges locally weakly in probability to $T$, a random tree, with survival probability $\zeta:=\mathbb{P}(|T|=\infty)$. We will show that:

Proposition: for each $\epsilon>0$, $\;\mathbb{P}\left(\frac{1}{n}\left|L_1(G_n)\right|\ge\zeta+\epsilon\right)\longrightarrow 0$.
As a preliminary, note that for every $k\in\mathbb{N}$, there are finitely many rooted graphs (up to isomorphism) with size $k$. We can also identify whether the component of a given vertex has size $k$ by looking at a ball of radius $r>k$ around that vertex. In particular, by summing over all rooted graphs with size $k$, the weak local limit implies:

$\frac{1}{n}\#\left\{v:\,|C(v)|=k\right\} \;\stackrel{\mathbb{P}}{\longrightarrow}\; \mathbb{P}\left(|T|=k\right).$
Furthermore, we can then control the tail, as

$\frac{1}{n}\#\left\{v:\,|C(v)|\ge k\right\} \;\stackrel{\mathbb{P}}{\longrightarrow}\; \mathbb{P}\left(|T|\ge k\right).$

(Recall that the LHS of this statement is the proportion of vertices in components of size at least $k$.)
We will make the trivial but useful observation that in any graph the largest component has size at least $k$ precisely if at least $k$ vertices are in components of size at least $k$ (!). I.e.

$\left\{|L_1(G_n)|\ge k\right\} \;=\; \left\{\#\{v:\,|C(v)|\ge k\}\ge k\right\}.$
Returning now to the problem at hand, we have $\mathbb{P}(|T|\ge k)\downarrow\mathbb{P}(|T|=\infty)=\zeta$ as $k\to\infty$, so we may pick $k$ such that $\mathbb{P}(|T|\ge k)\le\zeta+\frac{\epsilon}{2}$.
But then, using our ‘trivial but useful’ observation:

$\mathbb{P}\left(\tfrac{1}{n}|L_1(G_n)|\ge\zeta+\epsilon\right) \;\le\; \mathbb{P}\left(\tfrac{1}{n}\#\{v:\,|C(v)|\ge k\}\ge\zeta+\epsilon\right). \qquad (**)$

Note that we have replaced $(\zeta+\epsilon)n$ by $k$ in this final step, which is valid for an upper bound since $(\zeta+\epsilon)n\ge k$ for large $n$. However, the random quantity inside the probability is known to converge in probability to $\mathbb{P}(|T|\ge k)\le\zeta+\frac{\epsilon}{2}$. So in fact this probability (**) vanishes as $n\to\infty$.
Step 2
Remember the exploration process, where $v_1,v_2,\ldots,v_n$ is a labelling of the vertices of $G(n,\lambda/n)$ in breadth-first order. Defining $X_i$ to be the number of children of vertex $v_i$, we set

$S_0=0, \qquad S_i = S_{i-1}+X_i-1,$

to be (a version of) the exploration process. It will be useful to study $H_k:=\min\{i:\,S_i=-k\}$ (with $H_0:=0$), the hitting times of $-k$, as then $\{v_{H_{k-1}+1},\ldots,v_{H_k}\}$ is the $k$th component to be explored.
Unlike for a tree, we have multiple components, and essentially the process decreases by one each time we start a new component, which means that the current value no longer describes the number of vertices on the stack. In general, this is given by $S_i-\min_{j\le i}S_j$ (up to the root of the current component), and so, conditional on the exploration so far,

$X_{i+1} \;\sim\; \mathrm{Bin}\left(n-i-\left(S_i-\min_{j\le i}S_j\right),\ \tfrac{\lambda}{n}\right),$

which we may stochastically bound below by

$\mathrm{Bin}\left(n-2i-\max_{j\le i}S_j,\ \tfrac{\lambda}{n}\right),$

noting that this is extremely crude.
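Before continuing, as a brief aside (not part of the lecture): the exploration process is straightforward to simulate, which gives a quick sanity check on the supercritical picture. The following sketch, in which all names are my own, tracks only the counts needed to drive $(S_i)$: the children of the $i$th explored vertex among the currently unseen vertices number $\mathrm{Bin}(\#\text{unseen},\lambda/n)$, and edges back to already-discovered vertices never change $S$.

import numpy as np

def exploration(n, lam, seed=0):
    # Simulate the exploration process S_i of G(n, lam/n) across components,
    # tracking only counts of unseen and active (stacked) vertices.
    rng = np.random.default_rng(seed)
    S = [0]
    active, unseen = 1, n - 1        # one root on the stack, the rest unseen
    comp_sizes, current = [], 0
    for _ in range(n):
        children = rng.binomial(unseen, lam / n)
        unseen -= children
        active += children - 1       # current vertex explored, children stacked
        current += 1
        S.append(S[-1] + children - 1)
        if active == 0:              # component finished (S has hit a new minimum)
            comp_sizes.append(current)
            current = 0
            if unseen > 0:           # start a new component from a fresh root
                active, unseen = 1, unseen - 1
    return S, comp_sizes

S, comps = exploration(100000, 2.0)
print(max(comps) / 100000)           # should be close to zeta_2, roughly 0.797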
We want to study whether $(S_i)$ ever exceeds $\delta n$ before time $\epsilon n$, for some small $\epsilon,\delta>0$ to be determined shortly.
For reasons that will become clear in the following deduction, it’s convenient to fix small $\epsilon>0$ such that $\lambda(1-2\epsilon)>1$, and then choose $\delta>0$ such that

$\epsilon\left(\lambda(1-2\epsilon-\delta)-1\right) \;>\; \delta$

(which is possible by continuity since the given relation holds when $\delta=0$.)

Now, when $i\le\epsilon n$ and $\max_{j\le i}S_j\le\delta n$, we have

$X_{i+1} \;\succeq\; \mathrm{Bin}\left((1-2\epsilon-\delta)n,\ \tfrac{\lambda}{n}\right).$
The following argument requires some kind of submartingale approach (involving coupling with a simpler process at the stopping time) to make rigorous, which is beyond the scope of this course’s prerequisites.
However, informally, if we assume that $\max_{i\le\epsilon n}S_i\le\delta n$, ‘then’

$S_{\epsilon n} \;\succeq\; \mathrm{Bin}\left(\epsilon n\cdot(1-2\epsilon-\delta)n,\ \tfrac{\lambda}{n}\right)-\epsilon n.$

But this distribution is concentrated on a value which is, by our obscure assumption, $\approx\epsilon n\left(\lambda(1-2\epsilon-\delta)-1\right)>\delta n$ (!) contradicting the assumption on the maximum. Thus we conclude that

$\mathbb{P}\left(\max_{i\le\epsilon n}S_i\le\delta n\right) \;\longrightarrow\; 0, \qquad \text{as } n\to\infty.$
We conclude that $\max_{i\le\epsilon n}S_i>\delta n$ holds with high probability. But remember that a component is completed precisely when $S$ hits $-k$ for a new value of $k$, and that $S$ decreases by at most one at each step. So if $S_i>\delta n$ for some $i\le\epsilon n$, then all of

$S_i,\,S_{i+1},\ldots,\,S_{i+\lfloor\delta n\rfloor}$

are non-negative, and so certainly $v_i,v_{i+1},\ldots,v_{i+\lfloor\delta n\rfloor}$ are in the same component of the graph, and thus $|L_1|\ge\delta n$ with high probability.
Step 3
The motivation for this section is duality. Recall (from Lecture 5) that if we condition a supercritical Poisson GW tree on extinction, we obtain the distribution of a dual subcritical Poisson GW tree. This relation moves across to the world of the sparse Erdős-Rényi random graph. If you exclude the giant component, you are left with a subcritical random graph (on a smaller vertex set), and this applies equally well to the local limits. Essentially, if we exclude a component, and take the local limit of what remains, we get the wrong answer unless the component we excluded was a giant component with size $\approx\zeta_\lambda n$, or was small.
As we shall see, this effect is captured sufficiently by counting isolated vertices.
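As a sanity check on this heuristic (a calculation in notation I am introducing here): if we delete a component of size $\alpha n$ from $G(n,\lambda/n)$, the expected proportion of the original $n$ vertices which are isolated in what remains is roughly

$g(\alpha) \;:=\; (1-\alpha)\,e^{-\lambda(1-\alpha)}.$

Using the defining relation $1-\zeta_\lambda=e^{-\lambda\zeta_\lambda}$ for the survival probability, one checks that $g(0)=g(\zeta_\lambda)=e^{-\lambda}$, while $g(\alpha)>e^{-\lambda}$ strictly for $\alpha\in(0,\zeta_\lambda)$. So the isolated-vertex density comes out ‘right’ only if the excluded component was negligible or of giant size.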
First, we state a Fact: when $0\le x\le\frac12$, then $1-x\ge e^{-x-x^2}$. This convexity property is easily checked by comparing derivatives, and will be useful shortly.
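For completeness, the derivative comparison can be run as follows (the function $f$ is my own notation): setting

$f(x) \;:=\; \log(1-x)+x+x^2, \qquad f(0)=0, \qquad f'(x) \;=\; \frac{x(1-2x)}{1-x} \;\ge\; 0 \ \text{ on } \left[0,\tfrac12\right],$

so $f\ge0$ on this interval, which rearranges to the stated inequality.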
Now, we study $Y_n$, the number of isolated vertices in $G(n,\lambda/n)$, under the conditioning that $\{v_1,\ldots,v_k\}$ is a component (that is, the first component observed in the exploration has size exactly $k$), for various values of $k$. Note that unless $k=1$, we have

$\mathbb{E}\left[Y_n\,\middle|\,\{v_1,\ldots,v_k\}\text{ a component}\right] \;=\; (n-k)\left(1-\tfrac{\lambda}{n}\right)^{n-k-1},$

for exactly the same reason as when we did this calculation for the original graph several lectures back. We will consider $k$ in the range $\delta n\le k\le(\zeta_\lambda-\epsilon)n$.
We can take a limit of this expectation, uniformly over the appropriate range of $k$, using the Fact above, since the function involved is suitably well-behaved, to obtain

$\mathbb{E}\left[\tfrac{Y_n}{n}\,\middle|\,\{v_1,\ldots,v_k\}\text{ a component}\right] \;\ge\; (1+o(1))\,(1-\alpha)e^{-\lambda(1-\alpha)},$

where $\alpha:=k/n$. So, since $(1-\alpha)e^{-\lambda(1-\alpha)}>e^{-\lambda}$ strictly for $\alpha\in(0,\zeta_\lambda)$, there is some $c=c(\delta,\epsilon)>0$ such that, uniformly over $k$ in our range,

$\mathbb{E}\left[\tfrac{Y_n}{n}\,\middle|\,\{v_1,\ldots,v_k\}\text{ a component}\right] \;\ge\; e^{-\lambda}+c.$
But $\frac{Y_n}{n}$ is bounded above (by 1, of course), and so lower bounds on the expectation give lower bounds on the upper tail, leading to

$\mathbb{P}\left(\tfrac{Y_n}{n}\ge e^{-\lambda}+\tfrac{c}{2}\,\middle|\,\{v_1,\ldots,v_k\}\text{ a component}\right) \;\ge\; \tfrac{c}{2}.$

However, we know $\frac{Y_n}{n}\stackrel{\mathbb{P}}{\longrightarrow}e^{-\lambda}$ (for example by local convergence…). Therefore, in order to make the unconditional probability $\mathbb{P}\left(\frac{Y_n}{n}\ge e^{-\lambda}+\frac{c}{2}\right)$ vanish, the probability of the conditioning event in question must also vanish (uniformly in $k$), ie

$\mathbb{P}\left(\delta n\le|C(v_1)|\le(\zeta_\lambda-\epsilon)n\right) \;\longrightarrow\; 0.$
Finally, since

$\mathbb{P}\left(\delta n\le|C(v_1)|\le(\zeta_\lambda-\epsilon)n\right) \;\ge\; \delta\,\mathbb{P}\left(\delta n\le|L_1|\le(\zeta_\lambda-\epsilon)n\right)$

(on the latter event, the first vertex explored lies in the largest component with probability at least $\delta$), the corresponding result holds for the largest component, not just the observed component. Combined with Step 2, this gives $\frac{1}{n}|L_1|\ge\zeta_\lambda-\epsilon$ with high probability, as required.
Uniqueness and overall comments
Uniqueness can be obtained by a slight adjustment of Step 1. Morally, Step 1 is saying that a proportion asymptotically at most $\zeta_\lambda$ of the vertices are in large components, so it is possible (and an exercise in the course) to adjust the argument to show

$\frac{1}{n}\left(|L_1|+|L_2|\right) \;\le\; \zeta_\lambda+\epsilon \qquad \text{with high probability,}$

from which the uniqueness result follows immediately.
In particular, it’s worth noting that this is an example of a bootstrapping argument, where we show a weak version of our goal result (in Step 2), but then use this to show the full result.
Note also that we can use the duality principle to show logarithmic bounds on the size of the second-largest component in exactly the same way that we showed logarithmic bounds on the size of the largest component in the subcritical regime. The whole point of duality is that these are the same problem!
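Concretely (and heuristically, in notation of my own choosing): conditional on the giant component, the remainder of the graph behaves like $G(m,\mu/m)$ with $m\approx(1-\zeta_\lambda)n$ and dual parameter

$\mu \;:=\; \lambda(1-\zeta_\lambda) \;<\; 1,$

so the Chernoff sketch from the start of this post, now run at rate $I_\mu=\mu-1-\log\mu>0$, suggests $\mathbb{P}\left(|L_2|\ge a\log n\right)\lesssim n^{1-aI_\mu}\to 0$ for $a>1/I_\mu$.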