I am aiming to write a short post about each lecture in my ongoing course on *Random Graphs*. Details and logistics for the course can be found here.

The goal of the second lecture was to establish the sharp phase transition for the connectivity of the random graph G(n,p(n)) around the critical regime p(n) = log n / n. In the end, we showed that when ω(n) is any diverging sequence, and p(n) = (log n - ω(n))/n, then we have that G(n,p(n)) is *with high probability* not connected.

In the next lecture, we will finish the classification by studying p(n) = (log n + ω(n))/n, and show that for this range of p, the graph G(n,p(n)) is with high probability connected.

The details of the lecture, especially the calculation, are not presented fully here. In the lecture, I followed van der Hofstad’s recent book fairly closely, sometimes taking different approximations and routes through the algebra, though all versions remain fairly close to the original enumerations by Rényi.

**Immediate remarks**

- One is allowed to be surprised that for almost all scalings of p(n), G(n,p) is either whp connected or whp not connected. The speed of the transition is definitely interesting.
- As defined in lectures, the property that a graph is connected is an *increasing* property, meaning that it is preserved when you add additional edges to the graph.
- Because of the natural coupling between G(n,p) and G(n,q), the fact that connectedness is an increasing property makes life easier. For example, we can insist temporarily that ω(n) = o(log n), or whatever scaling turns out to be convenient for the proof, but conclude the result for all diverging ω(n). This avoids the necessity for an annoying case distinction.
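For concreteness, here is a short Python sketch of this coupling (my own illustration, not part of the lecture): every potential edge carries a single uniform label, and G(n,p) is read off for all values of p simultaneously, so the sampled graphs are automatically nested.

```python
import itertools
import random

def coupled_graphs(n, ps, seed=0):
    """Sample G(n,p) for several values of p simultaneously, via the
    standard coupling: each potential edge e carries one uniform label
    U_e, and e is present in G(n,p) precisely when U_e <= p."""
    rng = random.Random(seed)
    labels = {e: rng.random() for e in itertools.combinations(range(n), 2)}
    return {p: {e for e, u in labels.items() if u <= p} for p in ps}

graphs = coupled_graphs(20, ps=(0.1, 0.3, 0.5))
# The edge sets are nested, so any increasing property (connectedness,
# say) that holds at p automatically holds at every q >= p.
assert graphs[0.1] <= graphs[0.3] <= graphs[0.5]
```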

**Heuristics – Isolated vertices**

It turns out that the ‘easiest’ way for such a graph to be disconnected is for it to have an isolated vertex. In determining that the graph has a *cut* into classes of sizes a and b with no edges between them, there is a trade-off between the number of ways to choose the partition (which increases with min(a,b) ) and the probabilistic penalty from banning the ab edges between the classes (which decreases with min(a,b) ). It turns out that the latter effect is slightly stronger, and so (1,n-1) dominates.
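This trade-off is easy to inspect numerically. The sketch below (my own illustration; n = 1000 is arbitrary) computes the expected number of cuts with sides of sizes a and n-a at the critical scaling p = log n / n, and confirms that a = 1 dominates.

```python
from math import comb, log

def expected_cuts(n, p, a):
    """Expected number of vertex subsets of size a with no edges joining
    them to the other n - a vertices: C(n,a) * (1-p)^(a(n-a))."""
    return comb(n, a) * (1 - p) ** (a * (n - a))

n = 1000
p = log(n) / n  # the critical scaling for connectivity
values = [expected_cuts(n, p, a) for a in range(1, n // 2 + 1)]
# The a = 1 term (isolated vertices) dominates every other cut size,
# and at the critical scaling it is of order 1.
assert values[0] == max(values)
assert 0.5 < values[0] < 2
```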

**Method 1: second-moment method**

In the case p(n) = (log n - ω(n))/n, we use a *second-moment method* argument to establish that G(n,p) contains an isolated vertex with high probability. Note that a given vertex v is isolated precisely if the n-1 edges incident to v are not present. Furthermore, two given vertices v,w are *both* isolated precisely if the 2n-3 edges incident to either v or w are not present. So in fact, both the first moment and the second moment of the number of isolated vertices are straightforward to evaluate.

It turns out that the number of isolated vertices, Y_n, satisfies

\mathbb{E}[Y_n] = n(1-p)^{n-1} \approx n e^{-pn} = e^{\omega(n)} \to \infty. \qquad (*)

As always, we have to eliminate the possibility that this divergent expectation is achieved by the graph typically having no isolated vertices, but occasionally having very many. So we turn to the second moment, and can show

\mathbb{E}[Y_n^2] = n(1-p)^{n-1} + n(n-1)(1-p)^{2n-3} = (1+o(1))\mathbb{E}[Y_n]^2,

and so by Chebyshev’s inequality, we have \mathbb{P}(Y_n = 0) \le \frac{\mathrm{Var}(Y_n)}{\mathbb{E}[Y_n]^2} \to 0.
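Since both moments are exact, the Chebyshev argument can be checked numerically. In this sketch (my own, with ω(n) = log log n as an arbitrary slowly diverging sequence), the expectation of the number of isolated vertices grows while the ratio Var/E² stays small, just as the calculation predicts.

```python
from math import log

def isolated_moments(n, omega):
    """First and second moments of Y = #isolated vertices in G(n,p) with
    p = (log n - omega)/n:
      E[Y]   = n (1-p)^(n-1)
      E[Y^2] = n (1-p)^(n-1) + n (n-1) (1-p)^(2n-3)."""
    p = (log(n) - omega) / n
    ey = n * (1 - p) ** (n - 1)
    ey2 = ey + n * (n - 1) * (1 - p) ** (2 * n - 3)
    return ey, ey2

for n in (10**3, 10**4, 10**5):
    ey, ey2 = isolated_moments(n, log(log(n)))  # omega(n) = log log n
    cheb = (ey2 - ey**2) / ey**2  # Chebyshev bound on P(Y = 0)
    assert ey > 1      # the expectation diverges (slowly) ...
    assert cheb < 0.2  # ... so the Chebyshev bound is already small
```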

**Method 2: first-moment method**

Counter-intuitively, although the case p(n) = (log n + ω(n))/n requires only a first-moment method, it is more technical because it involves the non-obvious direction of the informal equivalence:

\{G(n,p)\text{ is connected}\} \approx \{G(n,p)\text{ has no isolated vertices}\}.

At the time we showed (*), we also showed that for this regime of p(n), G(n,p) whp has no isolated vertices. It remains to show that it whp has no splits into (unions of) connected components of sizes k and n-k, for 2 ≤ k ≤ n/2.

It turns out that it is annoying to count components of size k because when we take an expectation, we can’t study all connected components on k vertices equally, since the probability of a given component structure depends on the exact number of edges it contains (which must be at least k-1, but will be larger if it includes lots of cycles). It is much easier to study the number of *spanning trees* of components of size k. So if the component itself is a tree, it has exactly one spanning tree, and if it has more cyclic structure then it will have several spanning trees.

The point is that we expect the expected number of such components to be very very small when p(n) = (log n + ω(n))/n, so it doesn’t really matter if we overcount by some factor. Somewhat abusing notation, we define X_k to be the number of such spanning trees of components of size k in G(n,p), so

\mathbb{E}[X_k] = \binom{n}{k} \cdot k^{k-2} \cdot p^{k-1} \cdot (1-p)^{k(n-k)}.

Here the terms determine, respectively:

- the number of ways to choose the vertex set of the component;
- the number of trees on that vertex set (Cayley’s formula);
- the probability of E(G(n,p)) including the edges of the tree in question;
- the probability that G(n,p) includes no edges between the vertices of the tree and the rest of the graph (so it is actually a component).
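One can evaluate this expectation numerically for all k at once, working in log-space to avoid overflow for large k. In this sketch (my own illustration; n = 10^4 and ω(n) = log log n are arbitrary choices, and X_k denotes the spanning-tree count described above), the full sum over k is already small and is dominated by the isolated-vertex term k = 1.

```python
from math import exp, lgamma, log, log1p

def log_expected_spanning_trees(n, p, k):
    """log of E[X_k] = C(n,k) * k^(k-2) * p^(k-1) * (1-p)^(k(n-k)),
    computed in log-space so large k causes no overflow."""
    log_binom = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return (log_binom + (k - 2) * log(k)
            + (k - 1) * log(p) + k * (n - k) * log1p(-p))

n = 10_000
omega = log(log(n))        # an arbitrary slowly diverging sequence
p = (log(n) + omega) / n   # the supercritical scaling
terms = [exp(log_expected_spanning_trees(n, p, k)) for k in range(1, n // 2 + 1)]
# The whole sum is already small, and the k = 1 term dominates it.
assert sum(terms) < 0.5
assert terms[0] > 0.9 * sum(terms)
```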

Applying Stirling’s formula, and bounding helpfully, we obtain

\mathbb{E}[X_k] \le \frac{n^k}{k!} \cdot k^{k-2} \cdot p^{k-1} \cdot (1-p)^{k(n-k)} \le \frac{1}{pk^2}\left( enp(1-p)^{n-k} \right)^k.

We have already seen (modulo the abuse of notation) that \mathbb{E}[X_1] \to 0. Now we need to show that

\sum_{k=2}^{\lfloor n/2 \rfloor} \mathbb{E}[X_k] \to 0.

At this point, we should optimise our approximations based on how easy they are to sum, rather than how strong they are. One weak option is

(1-p)^{n-k} \le e^{-p(n-k)} \le e^{-pn/2}, \qquad k \le n/2,

but it turns out that this is perfectly strong enough, and gives a geometric series with ratio enpe^{-pn/2}, which is easy to sum. The only issue is that we get

\sum_{k=2}^{\lfloor n/2 \rfloor} \mathbb{E}[X_k] \le \sum_{k\ge 2} \frac{1}{pk^2}\left( enpe^{-pn/2} \right)^k \approx \frac{e^2}{4}\, n^2 p\, e^{-pn} = \frac{e^2}{4}(\log n + \omega(n))\, e^{-\omega(n)},

so the sum is only bounded as we want when ω(n) - log log n → ∞, the k=2 term being the obstruction. Fortunately, almost any argument will work to show \mathbb{E}[X_2] \to 0 separately.

Once we know that \sum_{k=1}^{\lfloor n/2 \rfloor} \mathbb{E}[X_k] \to 0, we can use the fact that for non-negative integer-valued RVs X,

\mathbb{P}(X \ge 1) \le \mathbb{E}[X],

which you can view as an example of Markov’s inequality if you want, but really it is just one of the fundamental properties of integer-valued RVs that allows useful interchange of probabilities and expectations.

In any case, combining this shows that with high probability there is no cut into classes of size k and n-k for any k, which is the same as being connected.
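As a sanity check on the whole statement, here is a small simulation (my own, not from the lecture; n = 500 and the seed are arbitrary): comfortably above the critical scaling a sampled graph comes out connected, and comfortably below it, it does not.

```python
import random
from collections import deque
from math import log

def is_connected(n, p, seed=0):
    """Sample G(n,p) and check connectivity by breadth-first search."""
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() <= p:
                adj[i].append(j)
                adj[j].append(i)
    seen, queue = {0}, deque([0])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == n

n = 500
assert is_connected(n, 3 * log(n) / n)        # well above log n / n
assert not is_connected(n, 0.5 * log(n) / n)  # well below log n / n
```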

**Remarks**

- The similarity to the famous *coupon collector* problem is unsurprising, modulo the heuristic about isolated vertices. Under the right setup (continuous-time, for example), whether the coupons have been collected is independent between different coupons. That’s not the case here, as can be seen in the second-moment calculation. But under any version of the coupon collector problem, the second-moment calculation setup is essentially the same as here.
- It becomes clear in the proof that the threshold for the existence of cuts into k and n-k with high probability is p(n) = log n / (kn) for fixed k. In particular, for every k>1, we are working with a range of p much larger than this threshold in the calculation above, so it’s unsurprising that we could make weak assumptions and approximations, and still show that the expected number of such cuts vanishes.
- Once one has a coupling of G(n,p) for all p, one can ask whether the graph becomes connected at *exactly* the same time / value of p as the last isolated vertex receives an incident edge. I will not discuss this more here, as it’s one of the extended exercises, but this is just the start of a family of questions concerning whether closely-related properties occur simultaneously or not.

**Rényi’s study of G(n,m)**

Rényi’s original argument concerned the related model *G(n,m)*, that is, a uniform choice amongst the graphs on vertex set [n] with exactly m edges. There’s no right or wrong answer to the question of which is the *better* choice as the canonical model for a random graph. However, for this question, I guess it is natural to pose the question for G(n,m), since there’s an obvious extremal answer in terms of edges. That is, we require at least n-1 edges before there’s any chance that the graph is connected (cf the exercise about trees), so it’s natural to ask how many edges are required before there’s a strong chance that G(n,m) is connected.

Clearly, conditioning on m edges being present in G(n,p) gives G(n,m). We can view G(n,p) as a *mixture* of the distributions G(n,m) across values of m from 0 to \binom{n}{2}. Some graph properties are clearly badly focused under this mixture, for example the property A = {the graph has an even number of edges}.

However, for *increasing* events (which are preserved under the addition of edges), proving that an event holds whp for G(n,p) given it holds whp for G(n,m), and vice versa, is normally straightforward.

For example, given a sequence p(n), take m(n) to be a sequence such that G(n,p(n)) has at least m(n) edges with high probability. This is just about the lower tail of the binomial distribution Bin(\binom{n}{2}, p(n)), so for example taking m(n) = \binom{n}{2}p(n) - n suffices for the scalings of interest. Then if an increasing property A holds for G(n,m(n)) with high probability, it holds for G(n,p(n)) with high probability too.
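To see that a choice of m(n) of this flavour works, one can bound the lower tail of the binomial directly. The following sketch (my own illustration, with the connectivity scaling p = log n / n) applies Chebyshev’s inequality to Bin(C(n,2), p), showing the probability of seeing fewer than C(n,2)p - n edges vanishes.

```python
from math import log

def chebyshev_tail_bound(n, p):
    """Chebyshev bound on P(Bin(N, p) < N*p - n), where N = C(n,2) is the
    number of potential edges: P(|Bin - N*p| >= n) <= N*p*(1-p) / n^2."""
    N = n * (n - 1) // 2
    return N * p * (1 - p) / n ** 2

# With p(n) = log n / n, the bound is ~ (log n)/(2n), which vanishes,
# so G(n,p) has at least m(n) = C(n,2)*p - n edges with high probability.
bounds = [chebyshev_tail_bound(n, log(n) / n) for n in (10**3, 10**4, 10**5)]
assert all(b < 0.01 for b in bounds)
assert bounds[0] > bounds[1] > bounds[2]
```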

One can argue similarly in the other direction.

An interesting adjustment to this argument is required in the final section of my paper with James Martin in ALEA concerning a model of random *forests* which does not have the natural coupling property enjoyed by G(n,p) and G(n,q). This may be the subject of an upcoming post, time permitting.