I am aiming to write a short post about each lecture in my ongoing course on *Random Graphs*. Details and logistics for the course can be found here.

As we edge into the second half of the course, we are now in a position to return to the question of the phase transition between the *subcritical* regime $\lambda<1$ and the *supercritical* regime $\lambda>1$ concerning the size of the largest component $L_1(G(n,\lambda/n))$.

In Lecture 3, we used the exploration process to give upper bounds on the size of this largest component in the subcritical regime. In particular, we showed that when $\lambda<1$,

$\frac{1}{n}\left|L_1(G(n,\lambda/n))\right| \;\stackrel{\mathbb{P}}{\longrightarrow}\; 0.$

If we used slightly stronger random walk concentration estimates (Chernoff bounds rather than second-moment bounds from Chebyshev’s inequality), we could in fact have shown that with high probability the size of this largest component is at most logarithmic in $n$.
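As a quick numerical sanity check (my addition, not part of the lecture notes — the function name is my own), one can sample $G(n,\lambda/n)$ directly and observe that the largest subcritical component is tiny compared with $n$:

```python
import random

def largest_component(n, p, seed=0):
    """Sample G(n, p) and return the size of its largest connected
    component, using union-find over the n vertices."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Flip an independent p-coin for each of the C(n,2) potential edges.
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv

    sizes = {}
    for x in range(n):
        r = find(x)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

n, lam = 2000, 0.5
# Subcritical: largest component is O(log n), negligible against n.
print(largest_component(n, lam / n))
```

For $\lambda=0.5$ and $n=2000$ this typically returns a few dozen vertices, far below $n^{2/3}\approx 159$, in line with the logarithmic bound.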

In this lecture, we turn to the supercritical regime. In the previous lecture, we defined various forms of weak local limit, and asserted (without attempting the notationally-involved combinatorial calculation) that the random graph $G(n,\lambda/n)$ converges locally weakly in probability to the Galton-Watson tree with offspring distribution $\mathrm{Poisson}(\lambda)$, as we’ve used informally earlier in the course.

Of course, when $\lambda>1$, this branching process has strictly positive survival probability $\zeta_\lambda>0$. At a heuristic level, we imagine that all vertices whose local neighbourhood is ‘infinite’ are in fact part of the same *giant component*, which should occupy $\approx \zeta_\lambda n$ vertices. In its most basic form, the result is
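For concreteness, $\zeta_\lambda$ is the largest fixed point of $\zeta = 1 - e^{-\lambda\zeta}$ (the standard extinction calculation for $\mathrm{Poisson}(\lambda)$ offspring), which is easy to evaluate numerically; this sketch is my addition:

```python
import math

def survival_probability(lam, iters=200):
    """Largest fixed point of z = 1 - exp(-lam * z): the survival
    probability of a Galton-Watson tree with Poisson(lam) offspring.
    Iterating from z = 1 decreases to the largest fixed point, which
    is 0 exactly when lam <= 1."""
    z = 1.0
    for _ in range(iters):
        z = 1.0 - math.exp(-lam * z)
    return z

print(survival_probability(2.0))  # about 0.7968
print(survival_probability(0.5))  # about 0: subcritical, extinction a.s.
```

The iteration converges geometrically since the map $z \mapsto 1-e^{-\lambda z}$ has derivative $\lambda e^{-\lambda \zeta_\lambda} < 1$ at the relevant fixed point.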

$\frac{1}{n}\left|L_1(G(n,\lambda/n))\right| \;\stackrel{\mathbb{P}}{\longrightarrow}\; \zeta_\lambda, \qquad \frac{1}{n}\left|L_2(G(n,\lambda/n))\right| \;\stackrel{\mathbb{P}}{\longrightarrow}\; 0, \quad (*)$

where the second part is a *uniqueness* result for the giant component.

The usual heuristic for proving this result is that all ‘large’ components must in fact be joined. For example, if there are two giant components, each with size at least $\epsilon n$, then each time we add a new uniformly-chosen edge (such an argument is often called ‘*sprinkling*‘), the probability that these two components are joined is at least $\epsilon^2$, and so if we add lots of extra edges (which happens as we move from edge probability $\frac{\lambda}{n}$ to $\frac{\lambda+\epsilon}{n}$) then with high probability these two components get joined.
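In symbols, the sprinkling heuristic runs as follows (a rough sketch of my own, not the rigorous argument from the references):

```latex
% Two components C_1, C_2 with |C_1|, |C_2| \ge \epsilon n, and one
% sprinkled edge chosen uniformly among the \binom{n}{2} vertex pairs:
\begin{align*}
  \mathbb{P}\left(\text{edge joins } C_1 \text{ to } C_2\right)
    \;\ge\; \frac{(\epsilon n)^2}{\binom{n}{2}}
    \;=\; \frac{2\epsilon^2 n}{n-1}
    \;\ge\; 2\epsilon^2.
\end{align*}
% So after sprinkling m independent uniform edges,
\begin{align*}
  \mathbb{P}\left(C_1, C_2 \text{ still disjoint}\right)
    \;\le\; \left(1 - 2\epsilon^2\right)^m \;\longrightarrow\; 0
    \quad \text{as } m \to \infty.
\end{align*}
```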

It is hard to make this argument rigorous, and the normal approach is to show that with high probability there are no components with sizes in a certain intermediate range, and then to show that all larger components are in fact the same, via a joint exploration process or a technical sprinkling argument. Cf. the books of Bollobás and of Janson, Łuczak, Ruciński. See also this blog post (and the next page) for a readable online version of this argument.

I can’t find any version of the following argument, which takes the weak local convergence as an assumption, in the literature, but it seems appropriate to this course. It is worth noting that, as we shall see, the method is not hugely robust to adjustments if one is, for example, seeking stronger estimates on the giant component (e.g. a CLT).

Anyway, we proceed in three steps:

**Step 1: **First we show, using the local limit, that for any $\epsilon>0$,

$\frac{1}{n}\left|L_1(G(n,\lambda/n))\right| \;\le\; \zeta_\lambda + \epsilon$

with high probability as $n\to\infty$.

**Step 2: **Using a lower bound on the exploration process, for small enough $\delta>0$,

$\frac{1}{n}\left|L_1(G(n,\lambda/n))\right| \;\ge\; \delta$

with high probability.

**Step 3: **Motivated by duality, we count isolated vertices to show

$\frac{1}{n}\left|L_1(G(n,\lambda/n))\right| \;\ge\; \zeta_\lambda - \epsilon$

with high probability.
We will return to uniqueness at the end.
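Before diving in, here is a small simulation (my own sketch, not from the course) comparing the empirical giant-component fraction with the predicted $\zeta_\lambda$, and checking that the second-largest component is negligible:

```python
import math
import random

def component_sizes(n, p, seed=1):
    """Sample G(n, p) and return all component sizes, largest first,
    via union-find over the n vertices."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv

    counts = {}
    for x in range(n):
        r = find(x)
        counts[r] = counts.get(r, 0) + 1
    return sorted(counts.values(), reverse=True)

def survival_probability(lam, iters=200):
    """Fixed point of z = 1 - exp(-lam * z), iterated from z = 1."""
    z = 1.0
    for _ in range(iters):
        z = 1.0 - math.exp(-lam * z)
    return z

n, lam = 1500, 2.0
sizes = component_sizes(n, lam / n)
print(sizes[0] / n, survival_probability(lam))  # empirical |L_1|/n vs. zeta_2
print(sizes[1] / n)                             # near 0: uniqueness
```

For $\lambda = 2$ the empirical fraction should land close to $\zeta_2 \approx 0.797$, with the second-largest component of merely logarithmic size.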

**Step 1**

This step is unsurprising. The local limit gives control on how many vertices lie in small components of each fixed size, and hence (taking limits in the right order) on how many vertices lie in components of any finite size. This in turn bounds how many vertices can be in the giant component.
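In symbols (my paraphrase of the step, writing $C(v)$ for the component containing $v$ and $T_\lambda$ for the $\mathrm{Poisson}(\lambda)$ Galton-Watson tree):

```latex
% For each fixed k, local weak convergence in probability gives
\begin{align*}
  \frac{1}{n}\,\#\{v : |C(v)| \le k\}
    \;\stackrel{\mathbb{P}}{\longrightarrow}\;
  \mathbb{P}\left(|T_\lambda| \le k\right).
\end{align*}
% Letting k -> infinity AFTER n -> infinity, the right-hand side
% increases to P(|T_lambda| < infinity) = 1 - zeta_lambda.  So for any
% epsilon > 0, at most (zeta_lambda + epsilon) n vertices lie in
% components larger than some fixed k(epsilon), and hence
\begin{align*}
  \frac{1}{n}\left|L_1(G(n,\lambda/n))\right|
    \;\le\; \zeta_\lambda + \epsilon
  \quad \text{with high probability.}
\end{align*}
```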