I’ve been working for much of the past few months on a version of the frozen percolation random graph process with types. The connectivity between types is controlled by a (finite) non-negative square matrix, and so I’ve been engaging with linear algebra theory to an extent I haven’t really experienced since the second or third year of undergraduate maths.
We are interested in whether the graphs in question are subcritical, critical or supercritical. As in the case of multitype branching processes, this is controlled by the principal eigenvalue of a related non-negative matrix. So I’ve been looking up lots of methods for controlling eigenvalues, and some have proved useful, and some have not, but I thought it would be worthwhile to present some of them here.
Bounds and characterisations of spectral radius
Throughout, I will be talking about finite, square matrices. Eigenvalues may be defined as roots of the characteristic polynomial, and so by the fundamental theorem of algebra, there is always at least one complex eigenvalue. There is always at least one eigenvector associated to any eigenvalue. However, the dimension of the eigenspace is not always the same as the multiplicity of the eigenvalue as a root of the characteristic polynomial. The latter is called the algebraic multiplicity, while the former is the geometric multiplicity.
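As a quick illustration of the distinction, here is a small numpy check on the standard $2\times 2$ Jordan block, which is just a convenient example: the eigenvalue 1 has algebraic multiplicity 2 but only a one-dimensional eigenspace.

```python
# Minimal sketch: algebraic vs geometric multiplicity for a 2x2 Jordan block.
import numpy as np

J = np.array([[1.0, 1.0],
              [0.0, 1.0]])

print(np.linalg.eigvals(J))                       # [1., 1.]: algebraic multiplicity 2

# geometric multiplicity = dim ker(J - I) = 2 - rank(J - I) = 1
print(2 - np.linalg.matrix_rank(J - np.eye(2)))
```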
For now though, this distinction will be unimportant. The spectral radius of a matrix A is defined as
$$\rho(A) := \max\{|\lambda| \,:\, \lambda \text{ an eigenvalue of } A\}.$$
We can bound the spectral radius in terms of the norm of the matrix. Remember that a matrix norm has to satisfy all the usual properties of a norm, as well as a submultiplicative property $\|AB\| \le \|A\|\,\|B\|$. This is good, as otherwise we would be free to replace any norm by an arbitrary multiple of itself, and so no useful bounds could ever emerge. Note that submultiplicativity implies that $\|A^n\| \le \|A\|^n$.
Now, let $\lambda, x$ be some eigenvalue and associated (right-)eigenvector respectively of matrix A. Let X be the square matrix given by taking all the columns to be x. Now $AX = \lambda X$ implies
$$|\lambda|\,\|X\| = \|\lambda X\| = \|AX\| \le \|A\|\,\|X\|,$$
and so $|\lambda| \le \|A\|$, and thus we conclude our most basic bound $\rho(A) \le \|A\|$.
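Here is a minimal numerical sanity check of this bound, assuming numpy is available; the matrix is an arbitrary random example, and the norms tested (the induced 1-, 2- and $\infty$-norms, and the Frobenius norm) are all submultiplicative matrix norms.

```python
# Sketch: rho(A) <= ||A|| for several standard submultiplicative matrix norms.
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(6, 6))
rho = np.max(np.abs(np.linalg.eigvals(A)))        # spectral radius

for ord_ in (1, 2, np.inf, 'fro'):
    assert rho <= np.linalg.norm(A, ord_) + 1e-12
print("rho(A) <= ||A|| for all tested norms")
```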
When A is diagonalisable, life is particularly easy, but in general we can write A as a conjugate of its Jordan normal form. Then, by looking at each diagonal block of the Jordan normal form separately, we can show that $A^n \to 0$ as $n \to \infty$ if and only if $\rho(A) < 1$. Then, applying this, with additional care, to the matrices $\frac{1}{\rho(A)+\epsilon}A$, we derive Gelfand's Formula, that
$$\rho(A) = \lim_{n\to\infty} \|A^n\|^{1/n}.$$
Again, this applies for any matrix norm.
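Just as a rough numpy sketch (the matrix is again an arbitrary random example), one can watch $\|A^n\|^{1/n}$ settle down towards the spectral radius for a couple of different norms.

```python
# Sketch: ||A^n||^(1/n) approaching the spectral radius rho(A).
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(5, 5))
rho = np.max(np.abs(np.linalg.eigvals(A)))        # spectral radius

for n in (1, 5, 20, 80):
    An = np.linalg.matrix_power(A, n)
    print(n,
          np.linalg.norm(An, 2) ** (1 / n),       # induced 2-norm
          np.linalg.norm(An, np.inf) ** (1 / n),  # max row-sum norm
          "->", rho)
```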
Real symmetric matrices
When the matrix is real and symmetric, it is not too hard to show that all the eigenvalues are real, and furthermore that all the geometric multiplicities are equal to the algebraic multiplicities. That is, the matrix is diagonalisable, and there is an (orthogonal) basis of eigenvectors. Once we assume we are working with respect to this eigenbasis, it is easy to see how the Rayleigh quotient characterisation of the largest (and smallest) eigenvalue works. Let's say the eigenvalues are $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_n$; then for any $x \in \mathbb{R}^n \setminus \{0\}$, we have
$$\lambda_n \le \frac{x^T A x}{x^T x} \le \lambda_1,$$
and equality is attained when x is the respective eigenvector, normalised appropriately.
This is an especially useful characterisation of the largest eigenvalue: for example, since $\lambda_1(A) = \max_{\|x\|=1} x^T A x$ is a maximum of linear functions of the entries of A, we can see fairly easily that $\lambda_1$ is a convex function of the (real, symmetric) matrix.
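A small numpy sketch of both points, with arbitrary random symmetric matrices standing in for anything meaningful:

```python
# Sketch: Rayleigh quotients lie between the extreme eigenvalues, and lambda_1 is convex.
import numpy as np

rng = np.random.default_rng(3)

def random_symmetric(n):
    B = rng.normal(size=(n, n))
    return (B + B.T) / 2

A = random_symmetric(6)
lam = np.linalg.eigvalsh(A)          # ascending: lam[0] = lambda_n, lam[-1] = lambda_1

for _ in range(1000):
    x = rng.normal(size=6)
    r = x @ A @ x / (x @ x)          # Rayleigh quotient
    assert lam[0] - 1e-10 <= r <= lam[-1] + 1e-10

# convexity: lambda_1((A + B)/2) <= (lambda_1(A) + lambda_1(B)) / 2
B = random_symmetric(6)
lhs = np.linalg.eigvalsh((A + B) / 2)[-1]
rhs = (lam[-1] + np.linalg.eigvalsh(B)[-1]) / 2
assert lhs <= rhs + 1e-10
print("Rayleigh bounds and convexity check pass")
```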
We can generalise this Rayleigh quotient idea if we take k orthonormal vectors in $\mathbb{R}^n$ and arrange them as the columns of an $n \times k$ matrix P, so that $P^T P = I_k$. Now we consider the matrix $P^T A P$. [Note that if k=1, we are exactly considering $\frac{x^T A x}{x^T x}$ as before.] Then Poincaré's Separation Theorem says that the eigenvalues $\mu_1 \ge \dots \ge \mu_k$ of $P^T A P$ (which is also real, symmetric) are bounded by the original eigenvalues:
$$\lambda_{n-k+i} \le \mu_i \le \lambda_i, \qquad i = 1, \dots, k.$$
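Here is a quick numerical sketch of the Separation Theorem in numpy; the orthonormal columns of P come from a QR factorisation of a random $n \times k$ matrix, which is just one convenient choice.

```python
# Sketch: eigenvalues of P^T A P are interlaced by those of A.
import numpy as np

rng = np.random.default_rng(2)
n, k = 7, 3
B = rng.normal(size=(n, n))
A = (B + B.T) / 2                                     # real symmetric example
P, _ = np.linalg.qr(rng.normal(size=(n, k)))          # n x k with orthonormal columns

lam = np.sort(np.linalg.eigvalsh(A))[::-1]            # lambda_1 >= ... >= lambda_n
mu = np.sort(np.linalg.eigvalsh(P.T @ A @ P))[::-1]   # mu_1 >= ... >= mu_k

for i in range(k):
    # lambda_{n-k+i} <= mu_i <= lambda_i  (1-indexed in the text, 0-indexed here)
    assert lam[n - k + i] - 1e-10 <= mu[i] <= lam[i] + 1e-10
print("interlacing bounds hold")
```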
Since the trace is preserved under conjugation, and the trace is the sum of the eigenvalues, we can apply this result with the columns of P taken to be any k of the canonical basis vectors of $\mathbb{R}^n$, so that $P^T A P$ is a $k \times k$ principal submatrix of A. Without loss of generality, we may assume the basis has been chosen so that the diagonal elements of A satisfy $a_{11} \ge a_{22} \ge \dots \ge a_{nn}$, and so now we have, for each k,
$$\lambda_{n-k+1} + \dots + \lambda_n \le a_{11} + \dots + a_{kk} \le \lambda_1 + \dots + \lambda_k,$$
with equality throughout when k=n. In particular, the sequence $(a_{11}, \dots, a_{nn})$ is majorised by $(\lambda_1, \dots, \lambda_n)$. This majorisation can be used via the setup of Karamata's inequality to conclude that for any convex function f, we have
$$\sum_{i=1}^n f(a_{ii}) \le \sum_{i=1}^n f(\lambda_i).$$
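As a sanity check in numpy, with $f(x) = e^x$ as an arbitrary choice of convex function:

```python
# Sketch: diagonal entries are majorised by the eigenvalues, and the Karamata consequence.
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(5, 5))
A = (B + B.T) / 2                                 # real symmetric example

lam = np.sort(np.linalg.eigvalsh(A))[::-1]        # eigenvalues, decreasing
d = np.sort(np.diag(A))[::-1]                     # diagonal entries, decreasing

# partial sums of the diagonal are dominated by those of the eigenvalues,
# with equality for the full sums (both equal the trace)
assert np.all(np.cumsum(d) <= np.cumsum(lam) + 1e-10)
assert np.isclose(d.sum(), lam.sum())

# Karamata-type consequence: sum f(a_ii) <= sum f(lambda_i)
print(np.exp(d).sum(), "<=", np.exp(lam).sum())
```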
Gershgorin Circles
In fact, we can relate the eigenvalues to the diagonal entries of the matrix in a more general setting. We are motivated by the thought that if the off-diagonal entries are all very small, then the set of eigenvalues should be approximately given by the set of diagonal entries.
For a square complex matrix $A = (a_{ij})$, let $(\lambda, x)$ be an eigenvalue, eigenvector pair. For any index i, we have
$$\lambda x_i = \sum_j a_{ij} x_j = a_{ii} x_i + \sum_{j \ne i} a_{ij} x_j.$$
Now consider the i such that $|x_i| = \max_j |x_j|$ (which is positive, since x is non-zero), and take absolute values and apply the triangle inequality:
$$|\lambda - a_{ii}|\,|x_i| \le \sum_{j \ne i} |a_{ij}|\,|x_j| \le |x_i| \sum_{j \ne i} |a_{ij}|.$$
Let's define $R_i := \sum_{j \ne i} |a_{ij}|$ to be the sum of the absolute values of the non-diagonal entries of the ith row. Then the Gershgorin circle theorem says that every eigenvalue lies within at least one of the discs
$$D(a_{ii}, R_i) = \{z \in \mathbb{C} \,:\, |z - a_{ii}| \le R_i\}$$
in the complex plane. So our motivation still makes sense. If the off-diagonal entries are small, this is a strong restriction, and if they are not typically smaller than the diagonal entries, then we perhaps do not learn very much. Obviously, we could apply the same argument to the columns too.
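A minimal numpy check of the statement, on an arbitrary random complex matrix:

```python
# Sketch: every eigenvalue lies in at least one Gershgorin disc D(a_ii, R_i).
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))

centres = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centres)   # R_i: off-diagonal row sums

for ev in np.linalg.eigvals(A):
    assert np.any(np.abs(ev - centres) <= radii + 1e-12)
print("every eigenvalue lies in some Gershgorin disc")
```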
When the diagonal entries are distinct, and the off-diagonal entries are small, the Gershgorin discs are disjoint, and we would expect each to contain exactly one eigenvalue, corresponding to the appropriate diagonal entry. In fact, we can say something stronger. In general, the union of the discs is a subset of the complex plane with some number of connected components. Then, if a component is the union of exactly r of the discs, it contains exactly r of the eigenvalues, counted with algebraic multiplicity.
To see this, consider multiplying all the off-diagonal entries by $z \in [0,1]$ and observe what happens as z varies from 0 to 1. When z=0, the matrix is diagonal, and each eigenvalue $a_{ii}$ is exactly the centre of its Gershgorin disc (which at this point is a single complex number). Note that for every z the modified matrix has discs with the same centres and radii $zR_i \le R_i$, so they are contained in the original discs. As z varies continuously, the characteristic polynomial varies continuously, and hence so do its roots, that is the set of eigenvalues. So since each of the r eigenvalues is initially within the union of the r original, large Gershgorin discs, they must remain within this union as z varies, since they cannot 'jump' to another component.
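Here is a rough numerical illustration of the counting claim, assuming numpy; the matrix and the union-find over overlapping discs are just my own construction for the example.

```python
# Sketch: count eigenvalues in each connected component of the union of Gershgorin discs.
import numpy as np

def gershgorin_components(A):
    n = A.shape[0]
    centres = np.diag(A).astype(complex)
    radii = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            # discs i and j overlap iff distance between centres <= sum of radii
            if abs(centres[i] - centres[j]) <= radii[i] + radii[j]:
                parent[find(i)] = find(j)
    comps = {}
    for i in range(n):
        comps.setdefault(find(i), []).append(i)
    return centres, radii, list(comps.values())

A = np.array([[5.0, 0.1, 0.0],
              [0.2, 5.2, 0.1],
              [0.0, 0.1, 1.0]])
centres, radii, comps = gershgorin_components(A)
eigs = np.linalg.eigvals(A)
for comp in comps:
    inside = sum(any(abs(ev - centres[i]) <= radii[i] + 1e-12 for i in comp) for ev in eigs)
    print(f"component of discs {comp} contains {inside} eigenvalue(s); expected {len(comp)}")
```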
It’s hard to know what time will allow, but provisionally, in the next post I will talk about how to control the evolution of eigenvectors as a function of the matrix, and in particular what can go wrong.
REFERENCES
For the middle section, I used the progression from Chapter 4 of Matrix Differential Calculus with Applications in Statistics and Econometrics (Magnus and Neudecker).