Lecture 9 – Inhomogeneous random graphs

I am aiming to write a short post about each lecture in my ongoing course on Random Graphs. Details and logistics for the course can be found here.

As we enter the final stages of the semester, I want to discuss some extensions to the standard Erdos-Renyi random graph, which has been the focus of most of the course so far. In doing so, we can revisit material that we have already covered, and see how readily it extends to more exotic settings.

The focus of this lecture was the model of inhomogeneous random graphs (IRGs) introduced by Soderberg [Sod02] and first studied rigorously by Bollobas, Janson and Riordan [BJR07]. Soderberg and this blog post address the case where vertices have a type drawn from a finite set. BJR address the setting with more general typespaces, in particular a continuum of types. This generalisation is essential if one wants to use IRGs to model effects more sophisticated than those of the classical Erdos-Renyi model G(n,c/n), but most of the methodology is already present in the finite-type setting, which avoids the operator-theoretic language that is perhaps intimidating for a first-time reader.

Inhomogeneous random graphs

Throughout, k\ge 2 is fixed. A graph with k types is a graph G=(V,E) together with a type function V\to \{1,\ldots,k\}. We will refer to a k\times k symmetric matrix with non-negative entries as a kernel.

Given n\in\mathbb{N} and a vector p=(p_1,\ldots,p_k)\in\mathbb{N}_0^k satisfying \sum p_i=n, and \kappa a kernel, we define the inhomogeneous random graph G^n(p,\kappa) with k types as:

  • the vertex set is [n];
  • types are assigned uniformly at random to the vertices such that exactly p_i vertices have type i;
  • conditional on these types, each edge v\leftrightarrow w (for v\ne w\in [n]) is present, independently, with probability

1 - \exp\left(-\frac{\kappa_{\mathrm{type}(v),\mathrm{type}(w)} }{n} \right).

Notes on the definition:

  • Alternatively, we could assign the types so that vertices \{1,\ldots,p_1\} have type 1, \{p_1+1,\ldots,p_1+p_2\} have type 2, and so on. This makes no difference except in terms of the notation we have to use if we want to use exchangeability arguments later.
  • An alternative model considers some distribution \pi on [k], and assigns the types of the vertices of [n] in an IID fashion according to \pi. Essentially all the same results hold for these two models. (For example, this model with ‘random types’ can be studied by quenching the number of each type!) Often one works with whichever model seems easier for a given proof.
  • Note that the edge probability given is \approx \frac{\kappa_{\mathrm{type}(v),\mathrm{type}(w)}}{n}. The exponential form has a more natural interpretation if we ever need to turn the IRGs into a process. Additionally, it avoids having to treat separately small values of n for which, a priori, \kappa_{i,j}/n might be greater than 1.
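To make the definition concrete, here is a minimal sampler (a sketch, assuming numpy is available; the function name sample_irg and its interface are my own, not standard notation):

```python
import numpy as np

def sample_irg(p, kappa, rng=None):
    """Sample an inhomogeneous random graph G^n(p, kappa) with k types.

    p     : length-k vector of type counts, summing to n
    kappa : k x k symmetric non-negative kernel
    Returns (types, adj): the type of each vertex, and the boolean
    n x n adjacency matrix.
    """
    rng = np.random.default_rng() if rng is None else rng
    p = np.asarray(p)
    kappa = np.asarray(kappa, dtype=float)
    n = p.sum()
    # assign types uniformly at random, with exactly p_i vertices of type i
    types = rng.permutation(np.repeat(np.arange(len(p)), p))
    # edge v~w present with probability 1 - exp(-kappa_{type(v),type(w)} / n)
    prob = 1.0 - np.exp(-kappa[np.ix_(types, types)] / n)
    upper = np.triu(rng.random((n, n)) < prob, k=1)  # strict upper triangle
    adj = upper | upper.T                            # symmetrise
    return types, adj
```

For example, sample_irg([30, 20], [[3, 1], [1, 2]]) draws a 50-vertex graph from the two-type kernel discussed below.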

In the above example, one can see that, roughly speaking, red vertices are more likely to be connected to each other than blue vertices are to each other. Moreover, vertices of either colour are more likely to be connected to a given vertex of their own colour than to a vertex of the opposite colour. This might, for example, correspond to the kernel \begin{pmatrix}3&1\\1&2\end{pmatrix}.

The definition given above corresponds to a sparse setting, where the typical vertex degrees are \Theta(1). Obviously, one can set up an inhomogeneous random graph in a dense regime by an identical construction.

From an applications point of view, it’s not hard to imagine that an IRG of some flavour might be a good model for many phenomena observed in reality, especially when a mean-field assumption is somewhat appropriate. The friendships of boys and girls in primary school seems a particularly resonant example, though doubtless there are many others.

One particular application is to recover the types of the vertices from the topology of the graph. That is, if you see the above picture without the colours, can you work out which vertices are red, and which are blue? (Assuming you know the kernel.) This is clearly impossible to do with anything like certainty in the sparse setting – how does one decide about isolated vertices, for example? The probabilities that a red vertex is isolated and that a blue vertex is isolated differ by a constant factor in the n\rightarrow\infty limit. But in the dense setting, one can achieve this with high confidence. When studying such statistical questions, these IRGs are often referred to as stochastic block models, and the recent survey of Abbe [Abbe] gives a very rich account of the history of this type of problem.

Poisson multitype branching processes

As in the case of the classical random graph G(n,c/n), we learn a lot about the IRG by studying its local structure. Let’s assume from now on that we are given a sequence of IRGs G^n(p^n,\kappa) for which \frac{p^n}{n}\rightarrow \pi, where \pi=(\pi_1,\ldots,\pi_k)\in[0,1]^k satisfies ||\pi||_1=1.

Now, let v^n be a uniformly-chosen vertex in [n]. Clearly \mathrm{type}(v^n)\stackrel{d}\rightarrow \pi, with the immediate mild notation abuse of viewing \pi as a probability distribution on [k].

Then, conditional on \mathrm{type}(v^n)=i:

  • when j\ne i, the number of type j neighbours of v^n is distributed as \mathrm{Bin}\left(p_j,1-\exp\left(-\frac{\kappa_{i,j}}{n}\right)\right).
  • the number of type i neighbours of v^n is distributed as \mathrm{Bin}\left( p_i-1,1-\exp\left(-\frac{\kappa_{i,i}}{n}\right)\right).

Note that p_j\left[1-\exp\left(-\frac{\kappa_{i,j}}{n}\right)\right]\approx \frac{p_j\cdot \kappa_{i,j}}{n} \approx \kappa_{i,j}\pi_j, and similarly in the case j=i, so in both cases, the number of neighbours of type j is distributed approximately as \mathrm{Poisson}(\kappa_{i,j}\pi_j).
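The approximation above can be checked numerically. A small sketch (assuming numpy; the particular values of n, \pi and \kappa are chosen purely for illustration):

```python
import numpy as np

n = 10_000
pi = np.array([0.6, 0.4])
kappa = np.array([[3.0, 1.0], [1.0, 2.0]])
p = (n * pi).astype(int)   # type counts p^n with p^n / n -> pi

# entry (i, j): mean number of type-j neighbours of a type-i vertex
# (shown for j != i; the j = i case replaces p_i by p_i - 1, a
# vanishing correction as n -> infinity)
binom_mean = p * (1.0 - np.exp(-kappa / n))
poisson_mean = kappa * pi   # limiting Poisson(kappa_{ij} pi_j) means

print(np.abs(binom_mean - poisson_mean).max())  # O(1/n), tiny for this n
```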

This motivates the following definition of a branching process tree, whose vertices have k types. Given \pi as before, and \kappa a kernel, we define \Xi^{\pi,\kappa}, a random tree with k types as follows:

  • Declare the root to have type i with probability \pi_i;
  • Then, inductively, a vertex with type i has some number of children of type j, distributed as \mathrm{Poisson}(\kappa_{i,j}\pi_j), independently across types j\in[k] and across parent vertices.

(It would be safer to define this formally as a subset of the Ulam-Harris tree \mathbb{U} endowed with a type function, or even as k coupled copies of \mathbb{U}, but for the purposes of this exposition, hopefully this is clear enough.)

(It may be helpful to think of the branching mechanism as a Poisson random measure on the type space. This definition then immediately generalises to the case where type-space is continuous.)
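The generation sizes of \Xi^{\pi,\kappa} are easy to simulate, since the total number of type-j children of a generation is again Poisson with the summed mean. A sketch (assuming numpy; the truncation parameters max_gen and max_pop are artificial caps I have added so that the supercritical case terminates):

```python
import numpy as np

def sample_multitype_tree(pi, kappa, max_gen=20, max_pop=10_000, rng=None):
    """Sample generation sizes of the k-type Poisson branching process.

    Returns a list of length-k integer vectors: the count of each type in
    each generation, truncated after max_gen generations or max_pop
    individuals in total.
    """
    rng = np.random.default_rng() if rng is None else rng
    pi = np.asarray(pi, dtype=float)
    kappa = np.asarray(kappa, dtype=float)
    k = len(pi)
    root_type = rng.choice(k, p=pi)          # root has type i w.p. pi_i
    gen = np.zeros(k, dtype=int)
    gen[root_type] = 1
    generations = [gen]
    total = 1
    while gen.sum() > 0 and len(generations) <= max_gen and total < max_pop:
        # each type-i parent has Poisson(kappa_{ij} pi_j) type-j children,
        # so generation sizes are Poisson with means gen @ (kappa * pi)
        means = gen @ (kappa * pi)
        gen = rng.poisson(means)
        generations.append(gen)
        total += gen.sum()
    return generations
```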

Claim: G^n(p^n,\kappa) converges in probability in the local weak sense to \Xi^{\pi,\kappa}.

Note: in Lecture 6, we set up the notation of local weak convergence in considerable detail. It’s important to stress that this was established for graphs, not for graphs with k types. That said, the extension of the definition is immediate if one extends the definition of graph isomorphism to include a correct matching of types. However, rather more care is required if one is to extend the definition to the case of a continuum of types, for which one must handle the situation where two graphs with types are close if they are isomorphic as graphs, and all the types of pairs of vertices matched by the isomorphism are close. See [vdHRGCN2, Section 1.2] or our recent pre-print [CRY18] for considerably more detail.

Survival probabilities, Perron-Frobenius theorem

Recall that for G(n,c/n), we have a phase transition around the critical value c=1, with major qualitative changes in the properties of the graph, in particular the emergence of a unique giant component occupying a positive proportion of the vertices for c>1. The goal for the rest of the lecture is to characterise whether a sequence of sparse IRGs exhibits sub/supercritical behaviour, in terms of \kappa,\pi.

First, we recall a classical result about positive matrices, which may be unfamiliar to anyone approaching this random graphs course without a full mathematical background in linear algebra.

Theorem (Perron – 1907): Let A\in\mathbb{R}_+^{k\times k} be a strictly positive k\times k matrix. Then \exists \rho(A)\in(0,\infty) such that \rho(A) is a simple eigenvalue of A, and all other eigenvalues \rho'\in\mathbb{C} satisfy |\rho'|<\rho(A). We call \rho(A) the principal eigenvalue or Perron root of A. Furthermore, the left-eigenvector (or right-eigenvector) corresponding to \rho(A) may be normalised such that all its components are strictly positive.

Proof: One can use Brouwer’s fixed point theorem, among many other methods.

Note: This is relevant to the existence of equilibrium distributions for (finite) Markov chains. (Though there are easier arguments to demonstrate this existence.) A very old post on this blog gives a rather obscure proof using linear programming in this special case of stochastic matrices.

Frobenius (1912 etc): gives further results in the case of non-negative matrices. For the purposes of an initial study of IRGs, one can assume that all relevant results hold so long as the kernel is irreducible. This can be characterised by the existence of some m for which \kappa^m has all entries strictly positive. In the graph setting, this says that there could exist paths between all pairs of types.

The punchline is that, as in the homogeneous setting of G(n,c/n), the presence of a giant component in the random graph, and the survival of the corresponding branching process, are essentially the same problem. We won’t explore the connection between these in detail during this course (not least because this is a single lecture), but this motivates focusing on the following theorem. Firstly, since it will be relevant repeatedly, this post borrows the following notation from [Yeo18]: let \kappa\circ \pi be the matrix [\kappa_{i,j}\pi_j]_{i,j\in[k]} which gives the expected offspring counts in \Xi^{\pi,\kappa}.
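For the two-type kernel from earlier with \pi=(0.6,0.4), the matrix \kappa\circ\pi and its Perron root can be computed directly (a sketch, assuming numpy; the particular numbers are just for illustration):

```python
import numpy as np

kappa = np.array([[3.0, 1.0], [1.0, 2.0]])
pi = np.array([0.6, 0.4])

# entrywise: M[i, j] = kappa_{ij} * pi_j, the mean offspring matrix
M = kappa * pi
# Perron root = largest eigenvalue in modulus, which is real and positive
rho = max(np.linalg.eigvals(M).real)
print(rho)   # approximately 2.0 for this example
```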

Theorem: \mathbb{P}(|\Xi^{\pi,\kappa}|=\infty) \; \begin{cases}=0&\quad\text{if }\rho(\kappa\circ\pi)\le 1\\ >0&\quad \text{if }\rho(\kappa\circ\pi)>1.\end{cases}

Notes: The content of this theorem is first proved in Section 5 of [BJR07] in, as discussed earlier, the setting of more general type-spaces.

In other words, the principal eigenvalue of \kappa\circ\pi plays the same role as c in G(n,c/n) in determining criticality. I think this is a genuinely cool result, and will outline a proof now.

Proof: We start with the subcritical and critical settings \rho(\kappa\circ\pi)\le 1, for which we have to show that \Xi^{\pi,\kappa} is almost surely finite.

It’s useful to study the survival probability conditional on the type of the root, with a view to finding some sort of recursion, just as in the original monotype setting. (Throughout this argument, we suppress the notational dependence on \pi,\kappa except when necessary for clarity.) We set

\zeta_i:= \mathbb{P}\left( |\Xi| = \infty\,|\, \mathrm{type}(\mathrm{root})=i\right).

Then the vector \zeta must satisfy

\zeta_i= 1- \exp\left(-[(\kappa\circ\pi)\zeta]_i\right), (*)

since the tree survives if, and only if, the root has at least one child (of any type) for which the subtree rooted there survives.

Applying the coordinatewise inequality 1-e^{-x}\le x to the RHS, we obtain

\zeta \le (\kappa\circ\pi)\zeta,

and this inequality is strict in at least one coordinate if \zeta\ne 0. In fact, if \kappa is irreducible and \pi>0, one can show that the inequality is strict in all coordinates whenever \zeta\ne 0. Intuitively, this doesn't feel possible for a matrix whose 'largest' eigenvalue is at most 1, though it's hard to turn this intuition into an argument using, say, a basis of eigenvectors. However, such a \zeta would contradict the Collatz-Wielandt characterisation of the Perron root (see here), namely \rho(A)=\max_{x\ge 0,\,x\ne 0}\,\min_{i:\,x_i>0}\frac{[Ax]_i}{x_i}: a non-zero \zeta with strict inequality in every coordinate would force \rho(\kappa\circ\pi)>1. We conclude that there are no non-zero solutions to (*), and thus the survival probability is zero in the subcritical and critical cases.

(As demanded in the main exercise for this lecture, one can show directly that in the strictly subcritical setting \rho(\kappa\circ\pi)<1 we have \mathbb{E}\left[|\Xi|\right]<\infty, which certainly also implies the result.)

Supercritical case \rho(\kappa\circ\pi)>1.

For this, we will show that the equation (*) has a strictly positive solution. It's not a priori clear that the survival probabilities should correspond to the maximal such solution; I will omit this step here because, modulo a change of notation, it's covered very clearly in Lemma 5.6 of [BJR07], and it is intuitively motivated by the corresponding result in the monotype case.

Motivated by (*), introduce the function f:\mathbb{R}^k\to\mathbb{R}^k by

f(x) = \left( 1 - \exp\left(-[(\kappa\circ\pi)x]_i \right)\right)_{i\in[k]},

so that (*) becomes f(\zeta)=\zeta. Note that f is increasing, in the sense that

x\ge y\;\Rightarrow\; (\kappa\circ\pi)x\ge (\kappa\circ\pi)y\;\Rightarrow\; f(x)\ge f(y).

Now, let \nu be some right-eigenvector of \kappa\circ\pi corresponding to the principal eigenvalue \rho=\rho(\kappa\circ\pi), normalised to have positive components. We will study \epsilon \nu as \epsilon\rightarrow 0^+.

[f(\epsilon \nu)]_i = 1-\exp(-\rho \epsilon \nu_i) = \rho\epsilon \nu_i + O(\epsilon^2),

and so for \epsilon>0 small enough, we have

f(\epsilon \nu)\ge \epsilon \nu\;\Rightarrow\; f(f(\epsilon \nu)) \ge f(\epsilon \nu)\;\Rightarrow \; \ldots

So, writing x^0=\epsilon \nu, x^n = f^n(\epsilon \nu), we have 0< x^0\le x^1\le x^2\le\ldots \le (1,1,\ldots,1). Since this sequence is increasing and bounded above, we must have x^n\uparrow x\in[0,1]^k with x>0. Since f is continuous, we may take a limit on both sides of the equality x^{n+1}=f(x^n), to obtain x=f(x), and thus we have found a strictly positive solution to (*), as required.
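The fixed-point equation (*) can also be solved numerically. Since f is increasing and f((1,\ldots,1))\le (1,\ldots,1), iterating downwards from the all-ones vector gives a decreasing sequence whose limit is the maximal solution of (*) – the survival probability vector, by the [BJR07] lemma cited above. A sketch (assuming numpy; the function name is my own):

```python
import numpy as np

def survival_probs(kappa, pi, tol=1e-12, max_iter=10_000):
    """Approximate the maximal solution of zeta = 1 - exp(-(kappa o pi) zeta)
    by iterating f downwards from the all-ones vector."""
    M = np.asarray(kappa, dtype=float) * np.asarray(pi, dtype=float)
    x = np.ones(M.shape[0])
    for _ in range(max_iter):
        x_new = 1.0 - np.exp(-M @ x)   # apply f coordinatewise
        if np.abs(x_new - x).max() < tol:
            return x_new
        x = x_new
    return x
```

For the running two-type example this gives strictly positive survival probabilities (the Perron root exceeds 1), while for a kernel with Perron root below 1 the iteration collapses to the zero vector, matching the theorem.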


[Abbe] – Community detection and stochastic block models

[BJR07] – Bollobas, Janson, Riordan – 2007 – The phase transition in inhomogeneous random graphs

[CRY18] – Crane, Rath, Yeo – 2018+ – Age evolution in the mean field forest fire model via multitype branching process

[Sod02] – Soderberg – 2002 – General formalism for inhomogeneous random graphs

[vdHRGCN2] – van der Hofstad – Random graphs and complex networks, Volume II

[Yeo18] – Yeo – 2018+ – Frozen percolation with k types

