I am aiming to write a short post about each lecture in my ongoing course on *Random Graphs*. Details and logistics for the course can be found here.

**Preliminary – positive correlation, Harris inequality**

I wrote about independence, association, and the FKG property a long time ago, while I was still an undergraduate taking a first course on Percolation in Cambridge. That post is here. In the lecture I discussed the special case of the FKG inequality applied in the product measure setting, of which the Erdos-Renyi random graph is an example, and which is sometimes referred to as the Harris inequality.

Given two increasing events A and B, say for graphs on [n], then if $\mathbb{P}_p$ is the product measure on the edge set, we have

$\mathbb{P}_p(A\cap B)\ge \mathbb{P}_p(A)\,\mathbb{P}_p(B).$

Intuitively, since both A and B are ‘positively-correlated’ with the not-rigorous notion of ‘having more edges’, they are genuinely positively-correlated with each other. We will use this later in the post, in the form $\mathbb{E}[X\mathbf{1}_A]\ge \mathbb{E}[X]\,\mathbb{P}(A)$, whenever X is an increasing RV and A is an increasing event.
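
The inequality can be checked directly by exact enumeration on a tiny graph. The sketch below (the choice of events, edge probability, and all names are my own, not from the lecture) verifies it for two increasing events under product measure on the six possible edges of a graph on four vertices:

```python
import itertools
from fractions import Fraction

# Exact check of the Harris inequality P(A ∩ B) >= P(A) P(B) for two
# increasing events under product measure on the edge set.
# Illustrative setup: n = 4 vertices, all 6 possible edges, p = 1/3.

n = 4
edges = list(itertools.combinations(range(n), 2))
p = Fraction(1, 3)

def prob(config):
    """Product-measure probability of one edge configuration (tuple of 0/1s)."""
    w = Fraction(1)
    for present in config:
        w *= p if present else 1 - p
    return w

# Two increasing events: A = "at least 3 edges present",
# B = "vertex 0 has degree at least 1".  Both are preserved by adding edges.
def A(config):
    return sum(config) >= 3

def B(config):
    return any(present for present, e in zip(config, edges) if 0 in e)

pA = pB = pAB = Fraction(0)
for config in itertools.product([0, 1], repeat=len(edges)):
    w = prob(config)
    if A(config): pA += w
    if B(config): pB += w
    if A(config) and B(config): pAB += w

print(pAB >= pA * pB)  # Harris: True
```

Using exact rational arithmetic avoids any floating-point doubt about the comparison.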

**The critical window**

During the course, we’ve discussed separately the key qualitative features of the random graph $G(n,\lambda/n)$ in the

- *subcritical regime*, when $\lambda<1$, for which we showed that all the components are small, in the sense that $\frac{1}{n}|L_1|\stackrel{\mathbb{P}}\rightarrow 0$, where $L_1$ denotes the largest component, although the same argument would also give $|L_1|=O(\log n)$ with high probability if we used stronger Chernoff bounds;
- *supercritical regime*, when $\lambda>1$, for which there is a unique *giant component*, ie $\frac{1}{n}|L_1|\stackrel{\mathbb{P}}\rightarrow \zeta_\lambda>0$, the survival probability of a Galton-Watson branching process with Poisson($\lambda$) offspring distribution. Arguing for example by a duality argument shows that with high probability all other components are small in the same sense as in the subcritical regime.
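
For concreteness, the survival probability $\zeta_\lambda$ is the largest solution of $\zeta = 1 - e^{-\lambda\zeta}$, and can be computed numerically by fixed-point iteration. A minimal sketch (function and variable names are my own):

```python
import math

# zeta_lambda is the survival probability of a Poisson(lambda) Galton-Watson
# branching process, i.e. the largest fixed point of
#     zeta = 1 - exp(-lambda * zeta).
# Iterating from zeta = 1 converges down to this largest fixed point.

def survival_probability(lam, iterations=200):
    """Iterate zeta -> 1 - exp(-lam * zeta), starting from zeta = 1."""
    zeta = 1.0
    for _ in range(iterations):
        zeta = 1.0 - math.exp(-lam * zeta)
    return zeta

print(survival_probability(0.5))  # subcritical: survival probability 0
print(survival_probability(2.0))  # supercritical: ~0.797
```

The iteration converges geometrically for $\lambda \neq 1$; at $\lambda = 1$ itself the fixed point is 0 but convergence is slow.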

In between, of course we should study $G(n,1/n)$, for which it was known that $|L_1| = \Theta_{\mathbb{P}}(n^{2/3})$. (*) That is, the largest components are on the scale $n^{2/3}$, and there are lots of such *critical components*.

In the early work on random graphs, the story ended roughly there. But in the 80s, these questions were revived, and considerable work by Bollobas and Luczak, among many others, investigated the critical setting in more detail. In particular, between the subcritical and the supercritical regimes, the ratio between the sizes of the largest and second-largest components goes from ‘concentrated on 1’ to ‘concentrated on 0’. So it is reasonable to ask what finer scaling of the edge probability around $\frac{1}{n}$ should be chosen to see this transition happen.

**Critical window**

In this lecture, we studied the *critical window*, describing sequences of probabilities of the form

$p = \frac{1}{n} + \frac{\lambda}{n^{4/3}},$

where $\lambda\in\mathbb{R}$. (Obviously, this is a different use of $\lambda$ to previous lectures.)

It turns out that as we move from $\lambda\rightarrow -\infty$ to $\lambda\rightarrow +\infty$, this window gives exactly the right scaling to see the transition of $\frac{|L_2|}{|L_1|}$ described above. Work in the 80s by Bollobas and Luczak, with many co-authors and others, establishes a large number of results in this window, but for the purposes of this course, this can be summarised as saying that the critical window has the same scaling behaviour as $G(n,1/n)$, with a large number of components on the scale $n^{2/3}$ (see (*) earlier), but different scaling limits.
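
This transition is easy to observe in simulation. The sketch below (a rough Monte Carlo illustration with ad hoc names, not code from the course) estimates the mean of $|L_2|/|L_1|$ at the two ends of the window $p = \frac{1}{n} + \frac{\lambda}{n^{4/3}}$:

```python
import random

# Monte Carlo sketch of the transition of |L_2|/|L_1| across the critical
# window: the ratio should be near 1 for very negative lambda and near 0
# for very positive lambda.  All names and parameter choices are ad hoc.

def top_two_components(n, p, rng):
    """Sizes of the two largest components of one G(n, p) sample (union-find)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:  # each edge present independently
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    top = sorted(sizes.values(), reverse=True)
    return top[0], (top[1] if len(top) > 1 else 0)

def mean_ratio(n, lam, trials, rng):
    """Average |L_2|/|L_1| over several samples at p = 1/n + lam/n^{4/3}."""
    p = 1 / n + lam / n ** (4 / 3)
    total = 0.0
    for _ in range(trials):
        l1, l2 = top_two_components(n, p, rng)
        total += l2 / l1
    return total / trials

rng = random.Random(0)
n = 400
r_minus = mean_ratio(n, -6.0, 20, rng)
r_plus = mean_ratio(n, +6.0, 20, rng)
print(r_minus, r_plus)  # typically: first closer to 1, second closer to 0
```

For fixed $n$ the chosen $|\lambda|$ must be small enough that $p$ stays in $[0,1]$; here $n=400$, $\lambda=\pm 6$ is comfortably inside.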

**Note:** Earlier in the course, we have discussed local limits, in particular for $G(n,\lambda/n)$, where the local limit is a Galton-Watson branching process tree with offspring distribution $\mathrm{Poisson}(\lambda)$. Such local properties are not sufficient to distinguish between different probabilities *within* the critical window. Although there are lots of critical components, it remains the case that asymptotically almost all vertices are in ‘small components’.

The precise form of the scaling limit for

$\frac{1}{n^{2/3}}\left(|L_1|, |L_2|, \ldots\right)$

as $n\rightarrow\infty$ was shown by Aldous in 1997, by lifting a scaling limit result for the exploration process, which was discussed in this previous lecture and this one too. Since Brownian motion lies outside the assumed background for this course, we can’t discuss that, so this lecture establishes *upper bounds* on the correct scale of $|L_1|$ in the critical window.
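
Although the Brownian scaling limit is out of scope, the exploration process itself is easy to simulate. The sketch below (my own notation, not the course's code) uses the sequential breadth-first construction: each explored vertex recruits a Binomial(number of untouched vertices, $p$) count of new neighbours, and component sizes are read off as the lengths of the excursions of the active-vertex count above zero:

```python
import random

# Sketch of the breadth-first exploration of G(n, p).  At each step one
# active vertex is explored, and each still-untouched vertex becomes its
# neighbour independently with probability p.  The number of steps taken
# before the active set empties out is exactly the size of one component.

def exploration_component_sizes(n, p, rng):
    """Component sizes of one G(n, p) sample, via the exploration process."""
    untouched = n
    sizes = []
    while untouched > 0:
        active = 1        # start a new component from one untouched vertex
        untouched -= 1
        size = 0
        while active > 0:
            size += 1
            active -= 1   # explore one active vertex...
            # ...which recruits each untouched vertex independently w.p. p
            children = sum(1 for _ in range(untouched) if rng.random() < p)
            active += children
            untouched -= children
        sizes.append(size)
    return sorted(sizes, reverse=True)

rng = random.Random(1)
n = 1000
sizes = exploration_component_sizes(n, 1 / n, rng)
# at p = 1/n, the largest component is typically on the scale n^{2/3} ≈ 100
print(sizes[0])
```

Edges between two already-active vertices are ignored here, since they do not change component sizes; this is the standard simplification behind the exploration process.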