Random Walks and Spanning Trees

Introduction

In this post, I’m going to talk about probability distributions on graphs. In particular, consider paths on a graph: how might we assign a distribution to these in a natural way? One option is the standard random walk, treating the vertices as the states of a discrete-time Markov chain. But in many situations this isn’t really enough. We might want paths, that is, walks which are self-avoiding. Yet even on regular lattices like \mathbb{Z}^d, it is hard to say a great deal about the set of self-avoiding walks (SAWs) of length n. We would prefer a distribution which has a natural product form, or from which we can at least sample without large combinatorial calculations.

A spanning tree of a connected graph G is a subgraph which contains every vertex, is connected, and has no cycles. Its edge set is maximal, in the sense that adding any further edge of G creates a cycle, and also minimal, as removing any edge disconnects it. Counting the number of spanning trees on labelled vertices is harder than one might suspect, and is possibly worth a post by itself. However, in general the uniform distribution on spanning trees is a useful object to consider. Any spanning tree contains a unique path between any two vertices of the graph, and so a Uniform Spanning Tree (UST) induces a distribution on paths.
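As an aside on the counting problem: on a finite graph it is settled by Kirchhoff’s matrix-tree theorem, which says the number of spanning trees equals any cofactor of the graph Laplacian. A minimal sketch in Python (the function name and the adjacency-matrix representation are my own choices, not anything from a standard library):

```python
import numpy as np

def count_spanning_trees(adj):
    """Kirchhoff's matrix-tree theorem: the number of spanning trees
    equals any cofactor of the graph Laplacian L = D - A."""
    adj = np.asarray(adj, dtype=float)
    laplacian = np.diag(adj.sum(axis=1)) - adj
    # Delete the first row and column, then take the determinant.
    return round(np.linalg.det(laplacian[1:, 1:]))

# Complete graph K_4: Cayley's formula gives 4^{4-2} = 16 spanning trees.
k4 = [[0, 1, 1, 1],
      [1, 0, 1, 1],
      [1, 1, 0, 1],
      [1, 1, 1, 0]]
print(count_spanning_trees(k4))  # 16
```

On K_4 this recovers Cayley’s formula n^{n-2}, which is one way to see that the count grows too fast for naive enumeration to be practical.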

An alternative construction is to take a random walk and remove its cycles. This is not well-defined unless you specify a canonical order in which to remove them, and the obvious option is to remove the cycles in the order in which they appear. That is, every time you arrive at a vertex v which you have already visited, you remove all the edges traversed since you were last at v. This gives a Loop-Erased Random Walk (LERW), and from this another measure on paths between two vertices (subject to connectivity conditions which guarantee that the LERW almost surely hits the target vertex eventually).
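The loop-erasure procedure just described is easy to sketch in code. This is only an illustration: the name `loop_erased_walk` and the dict-of-neighbours representation are my own, and the graph is assumed connected so that the walk hits the target almost surely.

```python
import random

def loop_erased_walk(neighbours, start, target, rng=random):
    """Simple random walk from `start` until it first hits `target`,
    erasing cycles in the order in which they appear."""
    path = [start]
    position = {start: 0}          # vertex -> index of its last visit in `path`
    while path[-1] != target:
        v = rng.choice(neighbours[path[-1]])
        if v in position:
            # Back at a visited vertex: erase everything since that visit.
            for u in path[position[v] + 1:]:
                del position[u]
            del path[position[v] + 1:]
        else:
            position[v] = len(path)
            path.append(v)
    return path

# 4-cycle 0-1-2-3-0: the result is one of the two arcs from 0 to 2.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(loop_erased_walk(cycle4, 0, 2))
```

By construction the returned path is self-avoiding, whatever the underlying walk did.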

At an informal level, the difference between these measures is significant. Inducing a measure from a natural but potentially unwieldy uniform distribution is theoretically appealing, but hard to work with. On the other hand, a computer can sample from LERW just by performing a random walk and storing the history suitably, which immediately makes it useful. So the following theorem is elegant in its own right, but in particular as a bridge between these two frameworks.

Theorem: The measures on paths from x to y in a finite graph G induced by UST and generated by LERW are the same.

Proof: Some books give proofs which involve coupling a LERW construction with a sequence of spanning trees directly, driven by an underlying random walk on the graph. The difficulty in this approach is that proving that the uniform distribution on spanning trees is stationary often requires the random walk to be in equilibrium, a criterion which causes difficulties when it comes to starting at x later in the proof. Essentially, the difficulty in many LERW constructions is that the edges being removed are incident to the wrong vertex in the random walk, and so RW equilibrium is required to show that the spanning tree transitions are reversible. Instead, we proceed by an argument motivated by the fact that LERWs appear ‘backwards’ in UST constructions.

Take a random walk (v_i:i\in\mathbb{Z}) on the graph G. For each integer i, consider the random walk \{v_i,v_{i+1},\ldots\} and from this, generate an oriented spanning tree T_i in the ‘obvious’ way. Formally, take v_i to be the root, then add each edge in the order it appears in the RW, unless to do so would create a cycle. Almost surely, if G is connected, this generates a spanning tree for each i.
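The ‘obvious’ construction can be sketched as follows. Adding each traversed edge unless it closes a cycle is the same as keeping, for each vertex other than the root, the edge by which the walk first reaches it. In this sketch a finite walk, run until every vertex has been visited, stands in for the bi-infinite walk, and the names are my own:

```python
import random

def tree_from_walk(neighbours, root, rng=random):
    """Root the tree at `root`; for every other vertex keep the edge by
    which the walk first arrives there (equivalently, add each traversed
    edge unless it would create a cycle).  Returns a parent map, with
    edges oriented back towards the root."""
    parent = {root: None}
    current = root
    while len(parent) < len(neighbours):
        step = rng.choice(neighbours[current])
        if step not in parent:       # first visit: this edge joins the tree
            parent[step] = current
        current = step
    return parent

cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(tree_from_walk(cycle4, 0))
```

Representing the tree as a parent map makes the orientation towards the root explicit, which is convenient in the argument that follows.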

Now, consider the family \{T_i: i\in\mathbb{Z}\}, which forms a Markov chain on the space of rooted spanning trees. Note that the position of the root tells us where the underlying random walk is, so we can effectively forget about the RW in what follows. This chain turns out to have a nice equilibrium measure, proportional to the degree in G of the root. This is extremely easy to check using the reversibility of the RW and a picture, but would be awkward to explain in writing here.

In particular, if we condition on the root of the tree being, say, v_0=x, which is in many ways equivalent to unrooting, this reduces to the uniform measure on unrooted spanning trees.

Now consider the tree generated by this process when the root is y. The tree contains a unique path between x and y. Now reverse the underlying random walk (so it runs from x to y; by reversibility this is again a random walk with the same law), and perform loop-erasure on it. By drawing a picture, or by setting up a formal induction on the vertices in the paths, one checks that these two paths from x to y are the same.

Combining these two results gives exactly what we want. By rooting at y we get the path between x and y in a UST, and this path, traversed in the direction from x to y, is equal in law to a sample from LERW. So we are done.
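The theorem can also be checked empirically on a small example. On the 4-cycle there are only four spanning trees, so we can sample the UST path between 0 and 2 directly and compare with loop-erasure. This is a rough simulation sketch; all names and the choice of graph are mine.

```python
import itertools
import random
from collections import Counter

# 4-cycle 0-1-2-3-0: each 3-edge subset of the edges is a spanning tree.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
trees = list(itertools.combinations(edges, 3))

def path_in_tree(tree, x, y):
    """Unique x-y path in a tree, found by depth-first search."""
    adj = {v: [] for v in nbrs}
    for a, b in tree:
        adj[a].append(b)
        adj[b].append(a)
    stack = [(x, (x,))]
    while stack:
        v, path = stack.pop()
        if v == y:
            return path
        stack.extend((w, path + (w,)) for w in adj[v] if w not in path)

def lerw(x, y, rng):
    """Loop-erased random walk from x to y on the 4-cycle."""
    path, position = [x], {x: 0}
    while path[-1] != y:
        v = rng.choice(nbrs[path[-1]])
        if v in position:
            for u in path[position[v] + 1:]:
                del position[u]
            del path[position[v] + 1:]
        else:
            position[v] = len(path)
            path.append(v)
    return tuple(path)

rng = random.Random(0)
ust = Counter(path_in_tree(rng.choice(trees), 0, 2) for _ in range(20000))
via_lerw = Counter(lerw(0, 2, rng) for _ in range(20000))
print(ust, via_lerw)   # both put roughly half their mass on each arc
```

Both samplers put close to half their mass on each of the two arcs 0-1-2 and 0-3-2, as the theorem (and, here, symmetry) predicts.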

An immediate and perhaps surprising consequence, based on this last remark and the fact that spanning trees are non-oriented, is that the LERW from x to y and the LERW from y to x are equal in law (up to reversal).

Consequences

– An extension of the ideas in this result gives Wilson’s algorithm, a ‘faster’ method than more naive approaches for producing a sample from the UST.

– This gives an idea of how one might construct a uniform measure on spanning trees of an infinite graph. An LERW can be defined on the lattice \mathbb{Z}^2 in exactly the same way as on a finite graph G, and so, modulo technicalities, a similar extension should be possible for USTs.

– It’s not really a consequence, but LERW is also of interest because its scaling limit is conformally invariant: on \mathbb{Z}^2 it converges to SLE_2.
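Wilson’s algorithm, mentioned above, can be sketched concretely: fix a root, then repeatedly run a loop-erased walk from some unvisited vertex until it hits the partial tree, and attach the erased path. A hedged sketch, with names of my own choosing:

```python
import random

def wilson_ust(neighbours, rng=random):
    """Sample a uniform spanning tree by Wilson's algorithm: grow the
    tree from a root by successive loop-erased random walks."""
    vertices = list(neighbours)
    parent = {vertices[0]: None}     # arbitrary root
    for v in vertices[1:]:
        if v in parent:
            continue
        # Loop-erased walk from v until it first hits the current tree.
        path, position = [v], {v: 0}
        while path[-1] not in parent:
            w = rng.choice(neighbours[path[-1]])
            if w in position:
                for u in path[position[w] + 1:]:
                    del position[u]
                del path[position[w] + 1:]
            else:
                position[w] = len(path)
                path.append(w)
        # Attach the erased path, oriented towards the tree.
        for a, b in zip(path, path[1:]):
            parent[a] = b
    return parent

cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(wilson_ust(cycle4))
```

Each attached branch is itself a loop-erased walk, which is exactly the connection the theorem above makes precise.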

References

PEMANTLE, R. – Choosing a Spanning Tree for the Integer Lattice Uniformly

NIENHUIS, B. & KAGER, W. – Chapter 15 of Lecture Notes in Physics 775, Polygons, Polyominoes and Polycubes

WILSON, D. – Generating Random Spanning Trees More Quickly than the Cover Time

Also thanks to Lee Zhuo Zhao for an extremely helpful email regarding \mathbf{T}(\gamma).
