It’s been far too long since I published a post about probability. Several are queued, so perhaps this will be the start of a deluge.
Anyway, with my advisor at Technion, I’m still working on some problems concerning a Gaussian random walk subject to conditioning which is complicated, but in practice (we hope) only mildly different to conditioning the walk to stay positive. Our conditioning at step n depends on some external randomness, but also on the future trajectory of the walk (related to the embedding of the walk in a 2D DGFF), thus ruining the possibility of applying the Markov property in any proof without significant preliminary work.
It seemed worth writing briefly about some of these results in a slightly simpler setting. The goal is to assemble many of the ingredients required to prove a local limit for Gaussian random walk conditioned to stay positive, in a sense which will be clarified towards the end. This is not the best way to go about getting scaling limits (as discussed briefly here, and for which see references [Ig74] and [Bo76]), and it’s probably not the best way to get local limits in the simplest setting, but it’s the method we are currently working to generalise, and follows the outline of [B-JD05], but in much less technical detail.
Probabilities via the reflection principle
We start with Brownian motion. The reflection principle, as described briefly in this post from the depths of history, is a classical technique for studying the maximum of Brownian motion. Roughly speaking, we exploit the fact that $(B_s)_{s\ge 0} \stackrel{d}{=} (-B_s)_{s\ge 0}$, but we then apply this at the hitting time of a particular positive value, using the Strong Markov Property.
Let $M_t := \max_{s\in[0,t]} B_s$ be the running maximum of the Brownian motion $B$, and $\tau_b := \inf\{s\ge 0 : B_s = b\}$ the hitting time of b. Then

$\mathbb{P}\left(M_t \ge b,\, B_t \le a\right) = \mathbb{P}\left(\tau_b \le t,\, B_t \le a\right),$

which, by SMP at $\tau_b$ and the reflection invariance of a standard BM, is equal to

$\mathbb{P}\left(\tau_b \le t,\, B_t \ge 2b - a\right) = \mathbb{P}\left(B_t \ge 2b - a\right).$

This obviously assumed $a \le b$, but if we set $a = b$, we find

$\mathbb{P}(M_t \ge b) = \mathbb{P}(M_t \ge b,\, B_t \le b) + \mathbb{P}(B_t \ge b) = 2\,\mathbb{P}(B_t \ge b) = \mathbb{P}(|B_t| \ge b).$

Or, in other words, $M_t \stackrel{d}{=} |B_t|$.
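As a quick numerical sanity check (my own sketch, not part of the argument), we can discretise Brownian motion and compare the empirical distribution of the running maximum with that of $|B_t|$; the grid size and sample count below are arbitrary choices.

```python
# Sanity check of M_t =(d) |B_t| by simulating discretised Brownian motion.
# Step count and sample size are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
t, steps, samples = 1.0, 500, 20_000

increments = rng.normal(0.0, np.sqrt(t / steps), size=(samples, steps))
paths = np.cumsum(increments, axis=1)

M_t = paths.max(axis=1)          # running maximum along each discretised path
abs_B_t = np.abs(paths[:, -1])   # |B_t|

# The empirical quantiles of the two samples should roughly agree.
for q in (0.25, 0.5, 0.75, 0.9):
    print(q, np.quantile(M_t, q), np.quantile(abs_B_t, q))
```

(The discretised maximum slightly underestimates the true one, so small discrepancies in the upper quantiles are expected.)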
While we can’t always derive such nice equalities in distribution, the reflection principle is robust enough to treat more complicated settings, such as Brownian bridge.
We might want to ask about the maximum of a standard Brownian bridge, but we can be more general, and ask about the maximum of a Brownian bridge with drift (let’s say general bridge here). It’s important to remember that a general Brownian bridge has the same distribution as a linear transformation of a standard Brownian bridge. Everything is Gaussian, after all. So asking whether the maximum of a general Brownian bridge is less than a particular value is equivalent to asking whether a standard Brownian bridge lies below a fixed line. Wherever possible, we make such a transformation at the start and perform the simplest version of the required calculation.
So, suppose we have a bridge B from (0,0) to (t,a), and we want to study $\max_{s\in[0,t]} B_s$. Fix some $b > \max(a,0)$, and work with a standard Brownian motion $W$. By a similar argument to before,

$\mathbb{P}\left(\max_{s\in[0,t]} W_s \ge b,\; W_t \in [a, a+\mathrm{d}a]\right) = \mathbb{P}\left(W_t \in [2b-a, 2b-a+\mathrm{d}a]\right) = \frac{\mathrm{d}a}{\sqrt{2\pi t}}\, e^{-(2b-a)^2/2t},$

and

$\mathbb{P}\left(W_t \in [a, a+\mathrm{d}a]\right) = \frac{\mathrm{d}a}{\sqrt{2\pi t}}\, e^{-a^2/2t}.$

So

$\mathbb{P}\left(\max_{s\in[0,t]} B_s \ge b\right) = \mathbb{P}\left(\max_{s\in[0,t]} W_s \ge b \,\middle|\, W_t \in [a, a+\mathrm{d}a]\right) = \exp\left(\frac{a^2 - (2b-a)^2}{2t}\right) = \exp\left(-\frac{2b(b-a)}{t}\right).$
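Again as a hedged numerical check (my own, using the standard construction of the bridge from a BM, with arbitrary parameters), we can compare the simulated probability with $\exp(-2b(b-a)/t)$:

```python
# Check P(max of bridge from (0,0) to (t,a) >= b) = exp(-2b(b-a)/t),
# using the construction B_s = W_s - (s/t)(W_t - a). Parameters arbitrary.
import numpy as np

rng = np.random.default_rng(1)
t, a, b = 1.0, 0.3, 1.0
steps, samples = 500, 20_000
grid = np.linspace(0.0, t, steps + 1)

W = np.concatenate(
    [np.zeros((samples, 1)),
     np.cumsum(rng.normal(0, np.sqrt(t / steps), (samples, steps)), axis=1)],
    axis=1)
bridge = W - (grid / t) * (W[:, -1:] - a)

print((bridge.max(axis=1) >= b).mean(), np.exp(-2 * b * (b - a) / t))
```

(The discretisation again slightly underestimates the maximum, so the empirical value should sit a touch below the exact formula.)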
Random walk conditioned to stay positive
Our main concern is conditioning to stay above zero. Let $\mathbb{P}^t_{x\to y}$ be some complete if cumbersome notation for a Brownian bridge B from (0,x) to (t,y). Then another simple transformation of the previous result gives

$\mathbb{P}^t_{x\to y}\left(\min_{s\in[0,t]} B_s \ge 0\right) = 1 - \exp\left(-\frac{2xy}{t}\right).$

Then, if $xy = o(t)$, we can approximate this by

$\frac{2xy}{t}$. (*)

Extend the notation so $\mathbb{P}^t_x$ describes Brownian motion started from (0,x). Then integrating over y gives

$\mathbb{P}^t_x\left(\min_{s\in[0,t]} B_s \ge 0\right) \approx \int_0^\infty \frac{1}{\sqrt{2\pi t}}\, e^{-(y-x)^2/2t} \cdot \frac{2xy}{t}\, \mathrm{d}y \approx x\sqrt{\frac{2}{\pi t}}, \quad \text{when } x = o(\sqrt{t}).$
(It might appear that we have integrated the approximation (*) over parts of the range where it is not a valid approximation, but the density of $B_t$ vanishes exponentially fast there, and so actually it’s not a problem.)
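For what it’s worth, the reflection principle also gives the exact value here, $\mathbb{P}^t_x(\min_{s\le t} B_s \ge 0) = \mathbb{P}(|B_t| \le x) = \mathrm{erf}(x/\sqrt{2t})$, so the quality of the linear approximation is easy to check directly (a tiny sketch of mine, with arbitrary values of x and t):

```python
# Compare the exact P_x(min_{s<=t} B_s >= 0) = erf(x / sqrt(2t)) with the
# small-x approximation x * sqrt(2 / (pi t)). Values of x, t are arbitrary.
import math

t = 100.0
for x in (0.1, 1.0, 5.0):
    exact = math.erf(x / math.sqrt(2 * t))
    approx = x * math.sqrt(2 / (math.pi * t))
    print(x, exact, approx)
```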
We now want to extend this to random walks. Some remarks:
- We used the Gaussian property of Brownian motion fairly heavily throughout this derivation. In general, random walks are not Gaussian, though we can make life easier by focusing on this case.
- We also used the continuity property of Brownian motion when we applied the reflection principle. For a general random walk, it’s hopeless to consider the hitting times of individual values. We have to consider instead the hitting times of regions such as $[b,\infty)$, and so on. One can still apply the SMP and a reflection principle, but this gives bounds rather than equalities. (The exception is simple random walk, for which other, more combinatorial, methods may be available anyway.)
- On the flip side, if we are interested in Brownian motion/bridge staying positive, we can’t start it from zero, as then the probability of this event is zero, by Blumenthal’s 0-1 law. By contrast, we can certainly ask about random walk staying positive when started from zero without taking a limit.
A useful technique will be the viewpoint of random walk as the values taken by Brownian motion at a sequence of stopping times. This Skorohod embedding is slightly less obvious when considering a general random walk bridge inside a general Brownian bridge, but is achievable. We want to study quantities like
$\mathbb{P}\left(\min_{0\le m\le n} S_m \ge 0 \,\middle|\, S_0 = x,\, S_n = y\right),$

where for simplicity let’s just take $(S_m)_{m\ge 0}$ to be a random walk with standard Gaussian increments. It’s possible we might want to take a scaling limit in x and y as functions of n. But first, if we take x, y fixed and embed the random walk bridge with these endpoints into the corresponding Brownian bridge from (0,x) to (n,y), we are then faced with the question:
What’s the probability that the Brownian bridge goes below zero, but the embedded RW with n steps does not?
If the Brownian bridge conditioned to go below zero typically spends a macroscopic amount of time below zero, then for large n it’s asymptotically very unlikely that all n of the times at which we embed the random walk avoid this set of intervals.
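For Gaussian increments, the embedding is particularly simple: sampling the Brownian bridge at integer times gives exactly the Gaussian random walk bridge. So we can estimate the probability in the question above directly (a rough sketch of mine, with arbitrary parameters and a finite grid standing in for the continuous bridge):

```python
# Estimate how often the Brownian bridge from (0,x) to (n,y) dips below zero
# while the Gaussian walk embedded at integer times stays nonnegative.
# Grid resolution and all parameters are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(2)
n, x, y = 50, 2.0, 2.0
substeps, samples = 20, 4_000        # sub-grid points per unit time
steps = n * substeps
grid = np.linspace(0.0, n, steps + 1)

W = np.concatenate(
    [np.zeros((samples, 1)),
     np.cumsum(rng.normal(0, np.sqrt(1 / substeps), (samples, steps)), axis=1)],
    axis=1)
bridge = x + W - (grid / n) * (W[:, -1:] - (y - x))

walk = bridge[:, ::substeps]         # the embedded walk, at integer times
bridge_neg = bridge.min(axis=1) < 0
walk_pos = walk.min(axis=1) >= 0
print((bridge_neg & walk_pos).mean(), walk_pos.mean())
```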
Several technical estimates are required to make this analysis rigorous. The conclusion is that there exists a function $\epsilon(n)$ for which $\epsilon(n) \to 0$ as $n \to \infty$, such that

$\mathbb{P}\left(\min_{0\le m\le n} S_m \ge 0 \,\middle|\, S_0 = x,\, S_n = y\right) \approx (1+\epsilon(n))\,\frac{2xy}{n}, \qquad \mathbb{P}\left(\min_{0\le m\le n} S_m \ge 0 \,\middle|\, S_0 = x\right) \approx (1+\epsilon(n))\, x\sqrt{\frac{2}{\pi n}}.$

As earlier, the second is obtained from the first by integrating over suitable y. This function $\epsilon$ has to account for the extra complications when either end-point is near zero, where the event that the Brownian motion goes negative without the random walk going negative requires additional care.
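These asymptotics are easy to probe by simulation (my own quick check, with arbitrary choices of x, n and the sample size):

```python
# Monte Carlo check of P(S_1, ..., S_n >= 0 | S_0 = x) ~ x * sqrt(2 / (pi n))
# for a random walk with standard Gaussian increments.
import numpy as np

rng = np.random.default_rng(3)
x, n, samples = 1.0, 200, 50_000

paths = x + np.cumsum(rng.normal(size=(samples, n)), axis=1)
print((paths.min(axis=1) >= 0).mean(), x * np.sqrt(2 / (np.pi * n)))
```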
Limits for the conditioned random walk
In the previous post on this topic, we addressed scaling limits in space and time for conditioned random walks. But we don’t have to look at the classical Donsker scaling to see the effects of conditioning to stay positive. In our setting, we are interested in studying the distribution of $S_m$ conditional on the event $\{S_1, S_2, \ldots, S_n \ge 0\}$, with limits taken in the order $n \to \infty$ and then $m \to \infty$.
(At a more general level, it is meaningful to describe the random walk conditioned on staying positive forever. Although this would a priori require conditioning on an event of probability zero, it can be handled formally as an example of an h-transform.)
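To sketch what this means (this is standard material, though the notation here is mine): if $h$ is harmonic for the walk killed on leaving $(0,\infty)$, then the walk conditioned to stay positive forever is the Markov chain with the reweighted kernel below.

```latex
% Doob h-transform sketch: h harmonic for the killed walk means
%   h(x) = \mathbb{E}\left[ h(x + X)\, \mathbf{1}_{\{x + X > 0\}} \right],
% and the walk conditioned to stay positive forever has transition kernel
\hat{p}(x, \mathrm{d}y) \;=\; \frac{h(y)}{h(x)}\, p(x, \mathrm{d}y)\, \mathbf{1}_{\{y > 0\}},
% which integrates to one precisely because h is harmonic.
```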
As explained in that previous post, the scaling invariance of the Bessel process (which it’s not unreasonable to think of as ‘Brownian motion conditioned to stay non-negative’) suggests that this limit should exist, and be given by the entrance law of the Bessel process. But this is hard to extract directly from a scaling limit.
However, we can use the previous estimate to start a direct calculation:

$\mathbb{P}\left(S_m \in \mathrm{d}y \,\middle|\, S_0 = x,\; S_1,\ldots,S_n \ge 0\right) = \frac{\mathbb{P}\left(S_m \in \mathrm{d}y,\; S_1,\ldots,S_m \ge 0 \,\middle|\, S_0 = x\right)\,\mathbb{P}\left(S_1,\ldots,S_{n-m} \ge 0 \,\middle|\, S_0 = y\right)}{\mathbb{P}\left(S_1,\ldots,S_n \ge 0 \,\middle|\, S_0 = x\right)}.$

Here, we used the Markov property at time m to split the event that $S_m \in \mathrm{d}y$ and the walk stays positive into two time-intervals. We will later take m large, so, using the asymptotics above, as $n \to \infty$ we may approximate the ratio of the last two probabilities as

$\frac{y\sqrt{2/\pi(n-m)}}{x\sqrt{2/\pi n}} \to \frac{y}{x},$

giving

$\mathbb{P}\left(S_m \in \mathrm{d}y \,\middle|\, S_0 = x,\; S_1,\ldots,S_n \ge 0\right) \approx \frac{y}{x}\,\mathbb{P}\left(S_m \in \mathrm{d}y,\; S_1,\ldots,S_m \ge 0 \,\middle|\, S_0 = x\right).$
This final probability emphasises that, as $m \to \infty$, we only really have to consider $y = \Theta(\sqrt{m})$, so set $y = z\sqrt{m}$, and, using (*) and the Gaussian density of $S_m$, we obtain

$\lim_{n\to\infty} \mathbb{P}\left(\frac{S_m}{\sqrt{m}} \in \mathrm{d}z \,\middle|\, S_0 = x,\; S_1,\ldots,S_n \ge 0\right) \approx \frac{z\sqrt{m}}{x} \cdot \frac{2xz\sqrt{m}}{m} \cdot \frac{e^{-z^2/2}}{\sqrt{2\pi m}}\,\sqrt{m}\,\mathrm{d}z = \sqrt{\frac{2}{\pi}}\, z^2 e^{-z^2/2}\, \mathrm{d}z.$
This is precisely the entrance law of the 3-dimensional Bessel process, usually denoted $R$. This process is invariant under time-rescaling in the same fashion as Brownian motion. Indeed, one representation of R is as the radial part of a three-dimensional Brownian motion, given by independent BMs in each coordinate. (See [Pi75] for an explanation of the relation to ‘BM conditioned to stay non-negative’.) We could complete the analogy by showing that the transition probabilities of the conditioned random walk converge to the transition density of R as well. (Cf. the prelude to Theorem 2.2 of [B-JD05].)
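We can at least check the entrance law numerically, by rejection sampling (a crude sketch of mine; the batching is only to bound memory, and all parameters are arbitrary). Under the limiting density $\sqrt{2/\pi}\, z^2 e^{-z^2/2}$, which is the chi distribution with three degrees of freedom, the second moment is exactly 3.

```python
# Rejection-sample Gaussian walks from small x > 0 that stay nonnegative up to
# time n, then compare S_m / sqrt(m) with the Bessel(3) entrance law via its
# second moment (equal to 3 for the chi distribution with 3 d.o.f.).
import numpy as np

rng = np.random.default_rng(4)
x, m, n = 0.5, 64, 1024
accepted = []
for _ in range(50):                       # batches, to keep memory bounded
    paths = x + np.cumsum(rng.normal(size=(10_000, n)), axis=1)
    keep = paths.min(axis=1) >= 0
    accepted.append(paths[keep, m - 1] / np.sqrt(m))
z = np.concatenate(accepted)
print(len(z), (z ** 2).mean())            # second moment should be near 3
```

(The agreement is only approximate, since n here is finite rather than taken to infinity first, and m is moderate.)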
Final remarks
The order of taking limits is truly crucial. We can also obtain a distributional scaling limit at time n under conditioning to stay non-negative up to time n. But then this is the size-biased normal distribution (the Rayleigh distribution), rather than the square-size-biased normal distribution we saw in this setting. And we can see exactly why. Relative to the normal distribution which applies in the absence of conditioning, we require one size-biasing to account for the walk staying positive up to time m, and then a second size-biasing to account for the walk staying positive for the rest of time (or up to n in the $n \to \infty$ limit if you prefer).
The asymptotics for $\mathbb{P}(S_1,\ldots,S_n \ge 0 \mid S_0 = x)$ were the crucial step, for which only heuristics are presented in this post. It remains the case that estimates of this kind form the crucial step in other more exotic conditioning scenarios. This is immediately visible (even if the random walk notation is rather exotic) in, for example, Proposition 2.2 of [CHL17], of which we currently require a further level of generalisation.
References
[Bo76] – Bolthausen – On a functional central limit theorem for random walks conditioned to stay positive
[B-JD05] – Bryn-Jones, Doney – A functional limit theorem for random walk conditioned to stay non-negative
[CHL17] – Cortines, Hartung, Louidor – The structure of extreme level sets in branching Brownian motion
[Ig74] – Iglehart – Functional central limit theorems for random walks conditioned to stay positive
[Pi75] – Pitman – One-dimensional Brownian motion and the three-dimensional Bessel process