
The theory of simple random walks on the integer lattice is a classical topic in probability theory. Pólya proved in the 1920s that such a SRW on $\mathbb{Z}^d$ is recurrent only for d=1 or 2. The argument is essentially combinatorial: we count the number of possible paths from 0 back to itself, and show that this count grows fast enough that, even with the probabilistic penalty attached to following any particular long path, we will still see returns happening repeatedly. In higher dimensions there is essentially ‘more space’ at large distances, at least comparatively, so a typical walk is more likely to escape into this space.
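To make the dimension-dependence concrete, here is the standard estimate (a sketch on my part, via Stirling or the local CLT, rather than a step spelled out above): the return probability of SRW on $\mathbb{Z}^d$ satisfies

$$\mathbb{P}(S_{2n}=0)\asymp n^{-d/2},\qquad\text{so}\qquad \sum_{n\geq 1}\mathbb{P}(S_{2n}=0)\ \begin{cases}=\infty & d\leq 2,\\ <\infty & d\geq 3,\end{cases}$$

and the expected number of returns to the origin is infinite exactly when $d\leq 2$, which is equivalent to recurrence.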
As Kakutani (of the product martingale theorem) put it, in a remark subsequently quoted as the dedication of every undergraduate pdf about random walks: “A drunk man will find his way home, whereas a drunk bird may get lost forever.”
But transience is in some sense a long-distance property. We can fiddle with the transition rates near zero and, so long as we don’t make anything deterministic, this shouldn’t affect transience properties. Obviously if we have a (space-)homogeneous nearest-neighbour random walk on the integers with non-zero drift, the process will be transient: it drifts towards positive infinity if the drift is positive. But can we have a random walk with non-zero drift, where the drift tends to zero at large distances fast enough that the process is still recurrent? What is the correct scaling for the decay of the drift to see interesting effects?
The answers to these questions are seen in the so-called Lamperti random walks, which were a recurring theme of the meeting on Aspects of Random Walks held in Durham this week. Thanks to the organisers for putting on such an excellent meeting. I hadn’t known much about this topic before, so thought it might be worth writing a short note.
As explained above, we consider time-homogeneous random walks. It will turn out that the exact distribution of the increments is not hugely important. Most of the properties we might care about will be determined only by the first two moments of the increments, which we define as:
$$\mu_1(x)=\mathbb{E}[X_{t+1}-X_t \mid X_t=x],$$
$$\mu_2(x)=\mathbb{E}[(X_{t+1}-X_t)^2 \mid X_t=x].$$
Note that because the drift will be asymptotically zero, the second moment is asymptotically equal to the variance of the increment. It will also turn out that the correct scaling for $\mu_1(x)$ to see a phase transition is $\mu_1(x)\approx \frac{c}{x}$.
We begin by seeing how this works in the simplest possible example, from Harris (1952). Let’s restrict attention to a random walk on the non-negative integers, and impose the further condition that increments are +1 or -1. In the notation of a birth-and-death process from a first course on Markov chains, we can set (for $x\geq 1$, with some harmless convention at the origin):

$$p_x:=\mathbb{P}(X_{t+1}=x+1\mid X_t=x)=\frac{1}{2}\Big(1+\frac{c}{x}\Big),\qquad q_x:=1-p_x=\frac{1}{2}\Big(1-\frac{c}{x}\Big),$$

so that $\mu_1(x)=\frac{c}{x}$ and $\mu_2(x)=1$.
We will set $s_n:=\prod_{k=1}^{n}\frac{q_k}{p_k}$. Then a condition for transience is that

$$\sum_{n\geq 1}s_n=\sum_{n\geq 1}\prod_{k=1}^{n}\frac{q_k}{p_k}<\infty.$$
In our special case:

$$\frac{q_k}{p_k}=\frac{1-c/k}{1+c/k}=1-\frac{2c}{k}+O(k^{-2}),\qquad\text{so}\qquad s_n=\prod_{k=1}^{n}\frac{q_k}{p_k}=n^{-2c+o(1)}.$$
So we can deduce that this sum converges if c>1/2, giving transience. A similar, but slightly more complicated, calculation specifies the two regimes of recurrence. If $-1/2\leq c\leq 1/2$ then the chain is null-recurrent, meaning that the expected time to return to any given state is infinite. If $c<-1/2$, then it is positive recurrent.
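None of the following is from the discussion above, but a minimal simulation sketch of the birth-and-death chain may help build intuition. The function names and the reflecting behaviour at 0 are my own choices, and simulation is only suggestive (null-recurrence and transience are hard to tell apart at finite times); still, for $c$ well above 1/2 the walk should typically wander off to large values, while for $c$ well below -1/2 it should hug the origin.

```python
import random

def harris_step(x, c):
    """One step of the chain: from x >= 1 move to x+1 with probability
    1/2 + c/(2x), else to x-1; from 0 we (arbitrarily) always step to 1."""
    if x == 0:
        return 1
    p_up = min(max(0.5 + c / (2 * x), 0.0), 1.0)  # clamp so small x still gives a probability
    return x + 1 if random.random() < p_up else x - 1

def mean_final_position(c, steps=10_000, runs=200):
    """Crude Monte Carlo summary: average endpoint after `steps` steps."""
    total = 0
    for _ in range(runs):
        x = 0
        for _ in range(steps):
            x = harris_step(x, c)
        total += x
    return total / runs

if __name__ == "__main__":
    for c in (-1.0, 0.0, 1.0):
        print(f"c = {c:+.1f}: mean position after 10,000 steps ~ {mean_final_position(c):.1f}")
```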
In general, we assume $\mu_1(x)\sim\frac{c}{x}$ and $\mu_2(x)\to s^2$ as $x\to\infty$. In the case above, obviously $s^2=1$.
The general result is that under mild assumptions on the increment distributions, for instance a uniformly bounded $(2+\epsilon)$-moment, if we define $r:=\frac{2c}{s^2}$, then the RW is transient if r>1, positive-recurrent if r<-1, and null-recurrent otherwise. This is the main result of Lamperti.
To explain why we have parameterised exactly like this, it makes sense to talk about the more general proof methods, as obviously the direct Markov chain calculation won’t work in general. The motivating idea is that we can deal well with the situation where the drift is zero, so let’s transform the random walk so that the drift becomes zero. A function of a Markov chain that is more stable (in some sense) than the original MC, for analysis at least, is sometimes called a Lyapunov function. Here, the sensible thing is to consider $Y_t:=X_t^{\gamma}$, for some exponent $\gamma$.
So long as our distributions are fairly well-behaved (eg a finite $(2+\epsilon)$-moment), we can calculate the drift of Y as

$$\mathbb{E}[Y_{t+1}-Y_t\mid X_t=x]=\frac{\gamma}{2}x^{\gamma-2}\big(2c+(\gamma-1)s^2\big)+o(x^{\gamma-2}).$$
In particular, taking $\gamma=1-\frac{2c}{s^2}=1-r$ results in a random walk that is ‘almost’ a martingale. Note that the original RW was almost a martingale, in the sense that the drift is asymptotically zero, but now the drift of Y is zero to second order as well.
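For completeness, the calculation behind the displayed drift (my sketch of a step left implicit above) is just a second-order Taylor expansion. Writing $\Delta:=X_{t+1}-X_t$,

$$(x+\Delta)^{\gamma}=x^{\gamma}+\gamma x^{\gamma-1}\Delta+\frac{\gamma(\gamma-1)}{2}x^{\gamma-2}\Delta^2+\dots,$$

so taking conditional expectations and using $\mu_1(x)\approx c/x$ and $\mu_2(x)\approx s^2$ gives

$$\mathbb{E}[Y_{t+1}-Y_t\mid X_t=x]\approx \gamma x^{\gamma-1}\cdot\frac{c}{x}+\frac{\gamma(\gamma-1)}{2}x^{\gamma-2}s^2=\frac{\gamma}{2}x^{\gamma-2}\big(2c+(\gamma-1)s^2\big),$$

which vanishes to leading order exactly when $\gamma=1-\frac{2c}{s^2}=1-r$; the moment assumption is what controls the error term in the expansion.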
To draw any rigorous conclusions, we need to be careful about exactly how precise this approximation is, but we won’t worry about that now. In particular, we need to know whether we can take this approximation through the optional stopping theorem, as this allows us to say:

$$\mathbb{E}[Y_T\mid Y_0=y]\approx y,$$

for suitable stopping times T, such as exit times of intervals.
This is particularly useful for working out the expected excursion time away from 0, which precisely leads to the condition for null-recurrence.
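To sketch how this gets used (my gloss, pretending for simplicity that Y is an exact martingale): stopping at the first exit time of $(a,b)$ with $a<x<b$ gives

$$\mathbb{P}_x(\text{hit } b \text{ before } a)\approx\frac{x^{\gamma}-a^{\gamma}}{b^{\gamma}-a^{\gamma}},$$

and when $\gamma=1-r<0$, letting $b\to\infty$ leaves a strictly positive probability of never hitting $a$, which is the transience heuristic; similar estimates on excursion lengths give the boundary between null- and positive-recurrence.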
In his talk, Ostap Hryniv showed that this Lyapunov function analysis can be taken much further, to derive much more precise results about excursions, maxima and ergodicity. Results of Menshikov and Popov from the 90s further specify the asymptotics for the invariant distribution, if it exists, in terms of r.
One cautionary remark I should make is that earlier I implied that once we know the drift of such a random walk is zero, we have recurrence. This is true on $\mathbb{Z}$ with very mild restrictions, but is not necessarily true in higher dimensions. For example, consider the random walk on $\mathbb{R}^2$ where, conditional on $X_t=x$, the increment $X_{t+1}-X_t$ is of length 1 and perpendicular to the vector $x$, with the two possible directions equally likely. The drift is therefore 0 everywhere, and the second moment is also well-behaved, but note that $|X_{t+1}|^2=|X_t|^2+1$, just by considering Pythagoras, so $|X_t|\to\infty$ deterministically. So in higher dimensions, we have to be a bit more careful, and put restrictions on the covariance structure of the increment distributions.
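Again a tiny sketch of my own makes the Pythagoras point concrete (the names are mine, and I start the walk at (1, 0) so that a perpendicular direction is always defined):

```python
import math
import random

def perpendicular_step(x, y):
    """Take a unit step perpendicular to the current position vector (x, y),
    choosing each of the two possible directions with probability 1/2."""
    norm = math.hypot(x, y)
    ux, uy = -y / norm, x / norm  # unit vector perpendicular to (x, y)
    if random.random() < 0.5:
        ux, uy = -ux, -uy
    return x + ux, y + uy

x, y = 1.0, 0.0  # start away from the origin so the step is well-defined
for t in range(1, 6):
    x, y = perpendicular_step(x, y)
    # |X_t|^2 should equal |X_0|^2 + t = 1 + t, up to floating-point error
    print(t, x * x + y * y)
```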
As a final comment, note that from Lamperti’s result, we can re-derive Pólya’s result about SRW in higher dimensions. If we have $(X_t)$ an SRW on $\mathbb{Z}^d$, then consider $Y_t:=|X_t|$. By considering a couple of examples in two dimensions, it is clear that this is not Markov. But the methods we considered above for the Lamperti walks were really martingale methods rather than Markov chain methods. And indeed this process Y has asymptotically zero drift with the right scaling. Here,

$$\mu_1(y)=\mathbb{E}[Y_{t+1}-Y_t\mid Y_t=y]\approx\frac{d-1}{2dy},\qquad \mu_2(y)\approx\frac{1}{d},$$
and so r=d-1, leading to exactly the result we know to be true, that the SRW is transient precisely in three dimensions and higher.
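The radial moments quoted above can be checked with the same Taylor expansion as before (my sketch, not spelled out above): with $\xi:=X_{t+1}-X_t$ uniform on the $2d$ unit coordinate vectors and $y=|x|$,

$$|x+\xi|=y+\frac{x\cdot\xi}{y}+\frac{|\xi|^2 y^2-(x\cdot\xi)^2}{2y^3}+\dots,$$

and since $\mathbb{E}[x\cdot\xi]=0$, $\mathbb{E}[(x\cdot\xi)^2]=y^2/d$ and $|\xi|^2=1$, the conditional mean of $|X_{t+1}|-|X_t|$ is $\approx\frac{1}{2y}\big(1-\frac{1}{d}\big)=\frac{d-1}{2dy}$, while the conditional second moment is dominated by $(x\cdot\xi/y)^2$, with mean $1/d$. Hence $r=\lim 2y\mu_1(y)/\mu_2(y)=d-1$.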
REFERENCES
Harris – First Passage and Recurrence Distributions (1952)
The slides from Ostap Hryniv’s talk, on which this was based, can be found here.