Sticky Brownian Motion

This follows on pretty much directly from the previous post about reflected Brownian motion. Recall that this is a process defined on the non-negative reals which looks like Brownian motion away from 0. We consider whether RBM is the only such process, and how any alternative might be constructed as a limit of discrete-time Markov processes.

One of the alternatives is called Sticky Brownian motion. This process spends more time at 0 than reflected Brownian motion. In fact it spends some positive proportion of time at 0. My main aim here is to explain why some intuitive ideas I had about how this might arise are wrong.

The first thought was to ensure that each visit to 0 lasts some positive measure of time. This could be achieved by staying at 0 for an Exp(1) duration on each visit. It doesn’t seem unreasonable that this might appear as the limit of a standard SRW adjusted so that on each visit to 0 the walker waits for a time given by independent geometric distributions. These distributions are memoryless, so that much is fine, but by Blumenthal’s 0-1 Law, a Brownian motion started from 0 hits zero infinitely many times before any small time t. So in fact the process described above would be identically zero: before it could get anywhere, it would have to spend an amount of time at 0 given by an infinite sum of Exp(1) random variables.

We will return later to the question of why the proposed discrete-time model still converges to reflected BM rather than anything more exotic. First, though, we should rule out the possibility of any intermediate level of stickiness, where the set of times spent at 0 still has measure zero, but the local time at 0 grows faster than for standard reflected BM. We can define the local time at 0 through the limit

L_t=\lim_{\epsilon\downarrow 0}\frac{1}{2\epsilon}\text{Leb}(\{0\le s \le t: |B_s|<\epsilon\})

of the measure of time spent very near 0, rescaled appropriately. So if the set of times when the process is at 0 has measure zero, then the local time is determined by behaviour near zero rather than by behaviour at zero. More precisely, on [0,\epsilon], the process behaves like Brownian motion except on a set of measure zero, so the local time process should look the same as that of BM itself. Note I don’t claim this as a formal proof, but I hope it is a helpful heuristic for why you can’t alter the local time process without altering the whole process.

At this stage, it seems sensible to define Sticky Brownian motion. For motivation, note that we are looking for a process which spends a positive measure of time at 0. So let’s track this as a process, say C_t. The set of times when C is increasing coincides with the set of times when the process is at 0, and so contains no intervals, since we know we cannot wait around at 0 for an interval of time without losing the Markov property. So C shares properties with the local time of a reflected BM. The only difference is that the measure of the set of times when C is increasing is positive here, but zero for the local time.

So it makes sense to construct the extra time spent at zero from the local time of a standard reflected BM. The heuristic is that we slow down the process whenever it is at 0, so that local time becomes real time. We can also control the factor by which this slowing-down happens, so define

\sigma(s)=\rho L(s)+s,

where L is the local time process of an underlying reflected BM, and \rho>0 is a constant. So \sigma is a map giving a random time-change. Unsurprisingly, we now define Sticky BM as the reflected BM with respect to this time-change. To do this formally, it is easiest to define a family of stopping times \{\tau_t\}, such that \sigma(\tau_t)=t, \tau_{\sigma(s)}=s, then if X is the reflected BM, define Y_t=X_{\tau_t} for the sticky BM.
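To see what this time-change does in practice, here is a minimal simulation sketch (all parameter choices are mine, not from the text). It builds the reflected BM and its local time via the Skorohod construction X = B minus its running minimum, discussed in the previous post, applies \sigma, and checks that the fraction of new time spent near 0 stays positive as the window shrinks.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, T, n = 1.0, 1.0, 10**6
dt = T / n
s = np.linspace(0.0, T, n + 1)

# Reflected BM via the Skorohod construction: X = B - running min of B,
# whose increasing part A is the local time of X at 0.
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
A = np.maximum.accumulate(-B)
X = B + A

# The time change sigma(s) = rho * L(s) + s; sticky BM is Y_t = X_{tau_t},
# where tau is the inverse of sigma.
sigma = rho * A + s

# Fraction of new time Y spends within eps of 0. The sigma-increments on
# {X < eps} pick up the rho * dL term, so this does not vanish as eps -> 0.
d_sigma = np.diff(sigma)
for eps in [0.1, 0.03, 0.01]:
    frac = d_sigma[X[:-1] < eps].sum() / sigma[-1]
    print(f"eps={eps}: fraction of time near 0 = {frac:.3f}")

print("predicted limit rho*L_T/(rho*L_T + T) =", rho * A[-1] / sigma[-1])
```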

It is worth thinking about what the generator of this process should be. In particular, why should it be different from that of reflected BM? The key observation is that the drift of the underlying reflected BM is essentially infinite at 0. By slowing down the process at 0, this drift becomes finite. So the martingale associated with sticky BM is precisely a time-changed version of the martingale associated with the underlying reflected BM, and this time-change is exactly what is required to produce a finite drift term at 0 in the generator. We get:

(\mathcal{L}f)(x)=\begin{cases}\frac12f''(x)&\quad x>0\\ \rho^{-1}f'(0) &\quad x=0.\end{cases}

Now that we have the generator, it starts to become apparent how sticky BM might appear as a limit of discrete-time walks. The process must look like a mean-zero, unit-variance RW everywhere except near 0, where the limiting drift should be \rho^{-1}. Note that when considering the limiting drift near zero, we are taking a joint limit in x and h, and the order of this limit matters. As explained at the end of the previous article, we only need to worry about the limiting drift along sequences of x,h such that a_h(x)\rightarrow 0. If no such sequences exist, or the limiting drift along every such sequence is infinite, then we actually have a reflected boundary condition.

This highlights one confusing matter about convergence of reflected processes. The boundary of the discrete-time process should converge to the boundary of the reflected process, but we also have to consider where the reflective behaviour happens. Can we get sticky BM with reflection only at the boundary in the discrete-time processes? The answer turns out to be no. At the start of this article, I proposed a model of SRW with geometric waiting times whenever the origin was visited. What is the limit of this?

The trick is to consider how long the discrete process spends near 0, after rescaling. It will spend a multiple 1/p more time at 0 itself, where p is the parameter of the geometric distribution, but no more time than expected at any point x\in(0,\epsilon). Since time spent in (0,\epsilon) dominates time spent at 0 before this adjustment, it must also dominate it after the adjustment, so in the limit the proportion of time spent at 0 is unchanged, and in particular it cannot be positive.
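Here is a rough simulation sketch of both regimes (the \sqrt{h}/\rho scaling is my own illustration of a boundary drift consistent with \rho^{-1}, not taken from the text): geometric waiting at 0 with fixed parameter p, which loses out to the time spent in (0,\epsilon), against a waiting mechanism whose exit probability shrinks like \sqrt{h}, which survives in the limit.

```python
import numpy as np

rng = np.random.default_rng(0)

def frac_time_at_zero(h, leave_prob, T=1.0):
    """Reflected walk with space step sqrt(h) per time step h; while at 0 it
    leaves (to sqrt(h)) with probability leave_prob each step, else waits.
    Returns the fraction of [0, T] spent at 0."""
    steps = int(T / h)
    x = at0 = 0
    for _ in range(steps):
        if x == 0:
            at0 += 1
            if rng.random() < leave_prob:
                x = 1
        elif rng.random() < 0.5:
            x += 1
        else:
            x -= 1
    return at0 / steps

rho, p = 1.0, 0.5
for h in [1e-3, 1e-4, 1e-5]:
    fixed = np.mean([frac_time_at_zero(h, p) for _ in range(5)])
    sticky = np.mean([frac_time_at_zero(h, np.sqrt(h) / rho) for _ in range(5)])
    # fixed-p waiting vanishes like 1/(p sqrt(steps)); the sqrt(h)/rho
    # version stays macroscopic (though random across realisations)
    print(f"h={h:.0e}: geometric waiting {fixed:.4f}, sqrt(h)/rho waiting {sticky:.4f}")
```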

Because of all of this, in practice it seems that most random walks we might be interested in converge (if they converge to a process at all) to a reflected SDE/diffusion etc, rather than one with sticky boundary conditions. I feel I’ve been talking a lot about Markov processes converging, so perhaps next, or at least soon, I’ll write some more technical things about exactly what conditions and methods are required to prove this.

REFERENCES

S. R. S. Varadhan – Chapter 16 of a lecture course at NYU.


Reflected Brownian Motion

A standard Brownian motion is space-homogeneous, meaning that the behaviour of B_{T+t}-B_T does not depend on the value of B_T. By Donsker’s Theorem, Brownian motion is also the limit, in a suitable process space, of any space-homogeneous random walk with zero drift and finite variance, after suitable rescaling.

In many applications, however, we are interested in real-valued continuous-time Markov processes that are defined not on the whole of the real line, but on the half-line \mathbb{R}_{\ge 0}. So as BM is the fundamental real-valued continuous-time Markov process, we should ask how we might adjust it so that it stays non-negative. In particular, we want to clarify uniqueness, or at least be sure we have found all the sensible ways to make this adjustment, and also to consider how Donsker’s Theorem might work in this setting.

We should consider what properties we want this non-negative BM to have. Obviously, it should be non-negative, but it is also reasonable to demand that it looks exactly like BM everywhere except near 0. But since BM has a scale-invariance property, it is essentially meaningless to say ‘near 0’, so we instead demand that it looks exactly like BM everywhere except at 0. Apart from this, the only properties we want are that it is Markov and has continuous sample paths.

A starting point is so-called reflected Brownian motion, defined by X_t:=|B_t|. This is very natural and very convenient for analysis, but there are some problems. Firstly, it looks like Brownian motion everywhere except 0 only because BM is not just space-homogeneous but also symmetric, in the sense that B_t\stackrel{d}{=}-B_t. This will be untrue for essentially any other process, so as a general method for keeping stochastic processes positive, this will be useless. My second objection is a bit more subtle. If we consider this as an SDE then, by Tanaka’s formula, we get

dX_t=\text{sign}(B_t)dB_t+dL_t,

where L is the local time of B at 0.

This is a perfectly reasonable SDE but it is undesirable, because we have a function of B as coefficient on the RHS. Ideally, increments of X would be a function of X, and the increments of B, rather than the values of B. That is, we would expect X_{t+\delta t}-X_t to depend on X_t and on (B_{t+s}-B_t, 0\le s\le \delta t), but not on B_t itself, as that means we have to keep track of extra information while constructing X.

So we need an alternative method. One idea might be to add some non-negative process to the BM so that the sum stays non-negative. If this process is deterministic and finite, there is some positive probability that the sum will eventually be negative, so this won’t do. We are therefore looking for a process which depends on the BM. Obviously we could take \max(-B_t,0), but the sum B_t+\max(-B_t,0)=\max(B_t,0) would then spend macroscopic intervals of time at 0, with lengths given by the excursion lengths of the BM below 0 rather than by exponential distributions, hence the process given by the sum would not be Markov.

The natural alternative is to look for an increasing process A_t, and then it makes sense to talk about the minimal increasing process with the desired property. A moment’s thought suggests that A_t=-\min_{s\le t}B_s satisfies this. So we have the decomposition

B_t=-A_t+S_t,

where S_t is the height of B above its running minimum. So S is an ideal alternative definition of reflected BM. In particular, when B is away from its minimum, dB_t=dS_t, so S has the property that it evolves exactly as the driving Brownian motion.

What we have done is to decompose a general continuous process into the sum of a decreasing continuous process and a non-negative process. This is known as the Skorohod problem, and has been the subject of much interest, even in the deterministic case. Note that the process A is locally constant almost everywhere, and is continuous, yet non-constant. Since A only increases when the underlying BM is at its running minimum, that is, when S is at 0, A can only grow on the zero set of S. In fact, A is precisely the local time process of S at 0, by comparison with the construction by direct reflection: Lévy’s theorem states that (S,A) has the same joint law as (|B|,L), where L is the local time of B at 0.
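A quick numerical illustration of the decomposition (a sketch, with hypothetical parameter choices): A increases only at times when S is at 0, and S_T matches direct reflection in distribution.

```python
import numpy as np

rng = np.random.default_rng(2)
paths, steps, T = 2000, 1000, 1.0
dt = T / steps

B = np.cumsum(rng.normal(0.0, np.sqrt(dt), (paths, steps)), axis=1)

A = np.maximum(np.maximum.accumulate(-B, axis=1), 0.0)  # A_t = -min_{s<=t} B_s
S = B + A                                               # S_t = B_t - min_{s<=t} B_s >= 0

# A increases only when B is at a new minimum, i.e. exactly when S = 0:
grows = np.diff(A, axis=1) > 0
print("largest value of S at a time when A increases:", S[:, 1:][grows].max())

# Comparison with direct reflection: S_T should have the law of |B_T|,
# so E[S_T] should be close to E|B_T| = sqrt(2T/pi).
print("mean S_T:", S[:, -1].mean(), "  sqrt(2T/pi):", np.sqrt(2 * T / np.pi))
```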

One alternative approach is to look instead at the generator of the process. Recall that the generator of a process is an operator on some space of functions, with \mathcal{L}f giving the infinitesimal drift of f(X_t). In the case of Brownian motion, the generator is (\mathcal{L}f)(x)=\frac12 f''(x) for bounded smooth functions f. This is equivalent to saying that

f(X_t)-f(X_0)-\int_0^t \frac12 f''(X_s)ds (*)

is a martingale. This must hold also for reflected Brownian motion whenever X is away from 0. Alternatively, if the function f is zero in a small neighbourhood of 0, it should have the same generator with respect to reflected BM. Indeed, for a general smooth bounded function f, we can still consider the expression (*) with respect to reflected BM. We know this expression behaves as a martingale except when X is zero. If f'(0)>0, and T is some hitting time of 0, then f(X_{T+\delta t})-f(X_T)\ge 0 to leading order, hence the expression (*) is a submartingale. So if we restrict attention to functions with f'(0)=0, the generator remains the same. Indeed, by patching together all such intervals, it can be argued that even if f'(0) is not zero,

f(X_t)-f(X_0)-\int_0^t \frac12 f''(X_s)ds - f'(0)A_t

is a martingale, where A is the local time process at zero.
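As a hedged sanity check on this boundary term, take f(x)=e^{-x}, so that f'(0)=-1 and f''=f; in a crude Monte Carlo (all parameters mine), the expectation of the expression above should be approximately zero only once the local time term is included.

```python
import numpy as np

rng = np.random.default_rng(3)
paths, steps, T = 5000, 1000, 1.0
dt = T / steps

B = np.cumsum(rng.normal(0.0, np.sqrt(dt), (paths, steps)), axis=1)
A = np.maximum(np.maximum.accumulate(-B, axis=1), 0.0)
X = B + A                                  # reflected BM started at 0, local time A

fX = np.exp(-X)                            # f(x) = e^{-x}: f'(0) = -1, f'' = f
half_integral = 0.5 * fX.sum(axis=1) * dt  # approximates int_0^T (1/2) f''(X_s) ds

M_no_boundary = fX[:, -1] - 1.0 - half_integral   # (*) without the boundary term
M_full = M_no_boundary + A[:, -1]                 # subtracting f'(0) A_T adds A_T

print("mean without local-time term:", M_no_boundary.mean())  # about -E[A_T] ~ -0.80
print("mean with local-time term:   ", M_full.mean())         # close to 0
```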

I was aware when I started reading about this that there was another family of processes called ‘Sticky Brownian Motion’ that shared properties with Reflected BM, in that it behaves like standard BM away from zero, but is also constrained to the non-negative reals. I think this will get too long if I also talk about that here, so that can be postponed, and for now we consider reflected BM as a limit of reflected (or other) random walks, bearing in mind that there is at least one other candidate to be the limit.

Unsurprisingly, if we have a family of random walks constrained to the non-negative reals, that are zero-drift unit-variance away from 0, then if they converge as processes, the limit is Brownian away from zero, and non-negative. Note that “away from 0” means after rescaling. So the key aspect is behaviour near zero.

What is the drift of reflected BM at 0? We might suspect it is infinite because of the form of the generator, but we can calculate it directly. Given X_0=0, we have:

\frac{\mathbb{E}X_t}{t}=\frac{\mathbb{E}|B_t|}{t}=\frac{\sqrt{t}\,\mathbb{E}|B_1|}{t}=\frac{\mathbb{E}|B_1|}{\sqrt{t}},

so letting t\rightarrow 0, we see indeed that the drift is infinite at 0.
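A one-line numerical confirmation of this blow-up (a rough sketch, sampling |B_t| directly and using \mathbb{E}|B_1|=\sqrt{2/\pi}):

```python
import numpy as np

rng = np.random.default_rng(4)
for t in [1.0, 1e-2, 1e-4]:
    # sample mean of |B_t| / t against the exact value sqrt(2/pi) / sqrt(t)
    est = np.abs(rng.normal(0.0, np.sqrt(t), 10**6)).mean() / t
    print(f"t={t:g}: estimated drift {est:9.1f}, formula {np.sqrt(2/(np.pi*t)):9.1f}")
```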

For convergence of discrete processes, we really need the generators to converge. Typically we index the discrete-time processes by the time unit h, which tends to 0, and write b_h(x),a_h(x) for the rescaled drift and second moment of the increment from x. We assume that we don’t see macroscopic jumps in the limit. For the case of simple random walk reflected at 0, it doesn’t matter exactly how we construct the joint limit in h and x, as the drift is uniform in x>0, but in general this does matter. I don’t want to discuss sticky BM right now, so it’s probably easiest to be vague and say that the discrete Markov processes converge to reflected BM so long as they don’t spend more time than expected near 0 in the limit, as the title ‘sticky’ might suggest.

There are two ways in which this can happen: the volatility term a_h(x) may be too small, in which case the process looks almost deterministic near 0, or the drift may not increase fast enough. And indeed, this leads to two conditions. The first is straightforward: if a_h(x) is bounded below, in the sense that \liminf_{h,x\rightarrow 0} a_h(x)\ge C>0, then we have convergence to reflected BM. Alternatively, the only danger can arise down those subsequences where a_h(x)\rightarrow 0, so if we have b_h(x)\rightarrow +\infty whenever h,x,a_h(x)\rightarrow 0, then this convergence also holds.

Next time I’ll discuss what sticky BM means, what it doesn’t mean, why it isn’t easy to double the local time, and how to obtain sticky BM as a limit of discrete random walks in a similar way to the above.

REFERENCES

S. R. S. Varadhan – Chapter 16 of a lecture course at NYU.


Random Mappings for Cycle Deletion

In two previous posts, I’ve talked about attempts to describe a cycle deletion process. We amend the dynamics of the standard random graph process by demanding that whenever a cycle is formed in the graph, we delete all the edges that lie on the cycle. The aim of this is to prevent the system growing giant components, and perhaps to give a system that displays the characteristics of self-organised criticality. In those posts, we discussed the difficulties caused by the fact that the tree structure of components in such a process is not necessarily uniform.

Today we look in the opposite direction. It gives a perfectly reasonable model to take a multiplicative coalescent with quadratic fragmentation rate (this corresponds to cycle deletion, since there are O(n^2) edges which would create a cycle if added to a tree on n vertices), where the fragmentation kernel corresponds to adding an extra edge to a uniform spanning tree on n vertices, then deleting the edges of the unique cycle this creates. For the rest of this post, we consider this fragmentation mechanism, in particular thinking about how we would sample from it most practically. Not least, without going through Prüfer codes or some other clever machinery, it is not even trivial to sample a uniform spanning tree.

First, we count the number of unicyclic graphs on n labelled vertices. If we know that the vertices on the cycle are v_1,\ldots,v_k, then the number of cycles with an identified edge is

u_1=1,\quad u_k=\frac{k!}{2},\, k\ge 2.

If we know that the tree coming off the cycle at vertex v_i has size m, then each of the possible rooted labelled trees of size m is equally likely. So taking w_j=j^{j-1}, the number of rooted trees on j labelled vertices, the number of such unicyclic graphs on [n] is B_n(u_\bullet,w_\bullet). Recall that B_n is the nth Bell polynomial, which counts a compound combinatorial structure, where we have some structure on the set of blocks and some other structure within each block. Then the random partition of [n] given by the tree sizes has distribution \text{Gibbs}_n(u_\bullet,w_\bullet).

Consider now a related object, the so-called random mapping digraph. What follows is taken from Chapter 9 of Combinatorial Stochastic Processes. We can view any mapping M_n:[n]\rightarrow[n] as a digraph where every vertex has out-degree 1. Each such digraph contains a collection of directed cycles, supported on those elements x for which M_n^k(x)=x for some k. Such an element x is called a cyclic point. Each cyclic point can be viewed as the root of a labelled tree.

In an identical manner to the unicyclic graph, the sizes of the directed trees in the digraph decomposition of a uniform random mapping are distributed as \text{Gibbs}_n(\bullet !,w_\bullet). So this is exactly the same as the cycle deletion kernel, except in the probability that the partition has precisely one block. In practice, for large n, the probability of this event is very small in both cases. And if we wanted to sample the cycle deletion kernel exactly, we could choose the trivial partition with some probability p, and otherwise sample from the random mapping kernel, where p is chosen such that

p+(1-p)\frac{n^{n-1}}{B_n(\bullet !, w_\bullet)}=\frac{n^{n-1}}{B_n(u_\bullet,w_\bullet)},

matching the probabilities of the trivial partition under the two kernels, since the single block of size n carries weight w_n=n^{n-1} in both cases.
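Sampling from the random mapping kernel itself is straightforward. Here is a sketch (function and variable names are my own): draw a uniform map on [n], locate the cyclic points by iterating the map, and read off the tree sizes.

```python
import numpy as np

def mapping_partition(n, rng):
    """Sample a uniform mapping M: [n] -> [n] and return the sizes of the
    trees hanging off its cycles, i.e. a draw from Gibbs_n(k!, j^{j-1})."""
    M = rng.integers(0, n, size=n)       # M[x] is the image of x (0-indexed)

    # After n iterations every forward orbit has reached its cycle, so the
    # image of M^n is exactly the set of cyclic points.
    it = np.arange(n)
    for _ in range(n):
        it = M[it]
    cyclic = set(np.unique(it).tolist())

    # Each vertex belongs to the tree rooted at the first cyclic point on
    # its forward orbit; tally the tree sizes.
    sizes = {c: 0 for c in cyclic}
    for x in range(n):
        y = x
        while y not in cyclic:
            y = M[y]
        sizes[y] += 1
    return sorted(sizes.values(), reverse=True)

rng = np.random.default_rng(5)
print(mapping_partition(50, rng))        # one sampled partition of [50]
```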

At least we know, from the initial definition of a random mapping, that B_n(\bullet !,w_\bullet)=n^n. The number of unicyclic graphs with an identified edge is less immediate. It turns out that the partition induced by the random mapping has a nice limit after rescaling: the rescaled block sizes converge to the lengths of the excursions away from 0 of the standard Brownian bridge on [0,1].
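Both counts can be checked exactly for small n using the standard recurrence for partial Bell polynomials. Note also that, since u_k agrees with k!/2 for all k\ge 2 and doubles it at k=1, one expects B_n(u_\bullet,w_\bullet)=(n^n+n^{n-1})/2; the sketch below confirms this for small n (the closed form is my own observation, not from the text).

```python
from functools import lru_cache
from math import comb, factorial

N = 8                                              # small n for an exact check
w = [0] + [j ** (j - 1) for j in range(1, N + 1)]  # w_j = j^{j-1} rooted labelled trees

@lru_cache(maxsize=None)
def partial_bell(n, k):
    """B_{n,k}(w): partitions of [n] into k blocks, each block of size j
    carrying one of its w_j internal structures (here: a rooted tree)."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    # condition on the size j of the block containing the element n
    return sum(comb(n - 1, j - 1) * w[j] * partial_bell(n - j, k - 1)
               for j in range(1, n - k + 2))

# Random mappings: u_k = k! (a permutation of the k roots); total is n^n.
total = sum(factorial(k) * partial_bell(N, k) for k in range(1, N + 1))
print(total, "=?", N ** N)

# Unicyclic graphs with an identified edge: u_1 = 1, u_k = k!/2 for k >= 2.
unicyclic = sum((1 if k == 1 else factorial(k) // 2) * partial_bell(N, k)
                for k in range(1, N + 1))
print("B_n(u, w) =", unicyclic, "=?", (N ** N + N ** (N - 1)) // 2)
```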

The time for a fuller discussion of this sort of phenomenon is in the context of Poisson-Dirichlet distributions, as the above exchangeable partition turns out to be PD(1/2,1/2). However, for now we remark that the jumps of a subordinator give a partition after rescaling. The case of a stable subordinator is particularly convenient, as calculations are made easier by the Levy-Khintchine formula.

A notable example is the stable-1/2 subordinator, which can be realised as the inverse of the local time process at zero of a Brownian motion. The jumps of this process are then the excursion lengths of the original Brownian motion. A calculation involving the tail of the w_j’s indicates that 1/2 is the correct parameter for a subordinator to describe the random mappings. Note that the number of blocks in the partition corresponds to the local time at zero of the Brownian motion. (This is certainly not obvious, but it should at least be intuitively clear why a larger local time roughly indicates more excursions which indicates more blocks.)

So it turns out, after checking some of the technicalities, that it will suffice to show that the rescaled number of blocks in the random mapping partition, \frac{|\Pi_n|}{\sqrt{n}}, converges to the Rayleigh distribution, which is a size-biased normal random variable (in particular supported on the positive reals), and which is also the distribution of the local time at 0 of the standard Brownian bridge.

After that very approximate description, we conclude by showing that the distribution of the number of blocks does indeed converge as we require. Recall Cayley’s formula kn^{n-k-1} for the number of labelled forests on [n] with a specified set of k roots. We also need to know how many rooted labelled forests there are in total, over all possible root sets. Suppose we introduce an extra vertex, labelled 0, and connect it precisely to the roots of a rooted labelled forest on [n]. This gives a bijection between labelled trees on \{0,1,\ldots,n\} and rooted labelled forests on [n]. So we can use Cayley’s original formula to conclude there are (n+1)^{n-1} such forests. We can do a quick sanity check that these counts agree, which is equivalent to showing

\sum_{k=1}^n k n^{-k-1}\binom{n}{k}=\frac{1}{n}(1+\frac{1}{n})^{n-1}.

This odd way of writing it is well-motivated. The form of the LHS is reminiscent of a generating function, and the additional factor of k suggests taking a derivative. Indeed, the LHS is \frac{1}{n^2} times the derivative

\frac{d}{dx}(1+x)^n=n(1+x)^{n-1},

evaluated at x=\frac{1}{n}, giving \frac{1}{n^2}\cdot n\left(1+\frac{1}{n}\right)^{n-1}=\frac{1}{n}\left(1+\frac{1}{n}\right)^{n-1}, which is the RHS.

That said, having established that the random mapping partition is essentially the same, it is computationally more convenient to consider that instead. By the digraph analogy, we again need to count forests with k roots on n vertices, and multiply by the number of permutations of the roots. This gives:

\mathbb{P}(|\Pi_n|=k)=\frac{kn^{n-k-1}\cdot k! \binom{n}{k}}{n^n}=\frac{k}{n}\prod_{i=1}^{k-1}\left(1-\frac{i}{n}\right).
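Before passing to the limit informally, we can compare this exact law against the claimed Rayleigh limit numerically; a rough sketch:

```python
import numpy as np

n = 10**4
k = np.arange(1, int(4 * np.sqrt(n)))

# exact law P(|Pi_n| = k) = (k/n) * prod_{i=1}^{k-1} (1 - i/n), via log-sums
log_p = np.log(k / n) + np.concatenate([[0.0], np.cumsum(np.log1p(-k[:-1] / n))])
p = np.exp(log_p)

# Rayleigh density l * exp(-l^2/2) at l = k/sqrt(n), scaled to the lattice
l = k / np.sqrt(n)
rayleigh = l * np.exp(-0.5 * l ** 2) / np.sqrt(n)

print("total mass of exact law:", p.sum())        # ~ 1
print("max abs difference:", np.abs(p - rayleigh).max())
```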

Now we can consider the limit. Being a bit casual with notation, we get:

\mathbb{P}\left(\frac{|\Pi_n|}{\sqrt{n}}\in dl\right)\approx \sqrt{n}\,\mathbb{P}(|\Pi_n|=l\sqrt{n})\,dl.

Since the Rayleigh distribution has density l\exp(-\frac12 l^2)dl, it suffices for this informal verification to check that

\prod_{i=1}^{l\sqrt{n}}(1-\frac{i}{n})\approx \exp(-\frac12 l^2). (*)

We take logs, so the LHS becomes:

\log(1-\frac{1}{n})+\log(1-\frac{2}{n})+\ldots+\log(1-\frac{l\sqrt{n}}{n}).

If we view this as a function of l and differentiate, we get

d(LHS)=\sqrt{n}dl \log (1-\frac{l}{\sqrt{n}})\approx \sqrt{n}dl \left[-\frac{l}{\sqrt{n}}-\frac{l^2}{2n}\right]\approx -ldl.

When l is zero, the LHS should be zero, so we can obtain the desired result (*) by integrating then taking an exponential.

Markovian Excursions

In the previous post, I talked about the excursions of a Brownian motion. Today I’m thinking about how to extend these ideas to more general Markov chains. First we want to rule out some situations. In particular, we aren’t hugely interested in discrete-time Markov chains, for which the machinery of excursions is fairly well established, whether or not the chain is transient. Furthermore, if the state space is discrete, as for a Poisson process for example, the discussion is not hugely interesting either. Remember that the technical challenges in the construction of local time arise because of the Blumenthal 0-1 law property that Brownian motion visits 0 infinitely often in any small window after the start time. We therefore want the process to be regular at the point of the state space under discussion; this is precisely the condition just described for BM at 0.

Why is it harder in general?

The informal notion of a local time should transfer to a more general Markov chain, but there are some problems. Firstly, defining the local time through an occupation integral, as we did for BM, is not general enough. The state space E needs some topological structure, and any meaningful definition must be in terms of functions from E into the reals. There were also all sorts of special properties of Brownian motion, including the canonical time-space rescaling, which came in handy in that particular case. It turns out to be easiest to study the excursion measure of a general Markov chain through its Laplace transform.

Definition and Probabilistic interpretations

The resolvent is the Laplace transform of the transition probability P_t(x,A), viewed as an operator on functions f:E\rightarrow \mathbb{R}.

R_\lambda f(x):=\mathbb{E}_x\left[\int_0^\infty e^{-\lambda t}f(X_t)dt\right]=\int_0^\infty e^{-\lambda t}P_tf(x)dt.

We can interpret this in terms of the original process in a couple of ways which may be useful later. The motivation is that we would like to specify a Poisson process of excursions, for which we need to know the rate. We hope that the rate will in fact be constant, so it will suffice to work out the properties of the expected number of excursions (or similar quantities) up to various random times, in particular those given by independent exponential random variables.

So, we take \zeta\sim\text{Exp}(\lambda) independent of everything else, and assume that we ‘kill the chain’ at time \zeta. Then, by shuffling expectations in and out of the integral and separating independent bits, we get:

R_\lambda f(x)=\mathbb{E}_x\int_0^\zeta f(X_s)ds = \frac{1}{\lambda}\mathbb{E}_xf(X_\zeta).

As in the Brownian local time description

R_\lambda 1_A(x)=\mathbb{E}(\text{time spent in }A\text{ before death at time }\zeta_\lambda).
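As a rough check of this interpretation for BM (a Monte Carlo sketch with hypothetical parameters), we can compare the time spent in a set A before an exponential death with the explicit resolvent kernel e^{-\sqrt{2\lambda}|x-y|}/\sqrt{2\lambda} of one-dimensional BM.

```python
import numpy as np

rng = np.random.default_rng(7)
lam, dt, paths = 2.0, 1e-3, 5000
a, b = 0.5, 1.0                      # the set A = [a, b]

# R_lambda 1_A(0) = expected time spent in A before an Exp(lambda) death.
total = 0.0
for zeta in rng.exponential(1 / lam, paths):
    steps = int(zeta / dt)
    B = np.cumsum(rng.normal(0.0, np.sqrt(dt), steps))
    total += dt * np.count_nonzero((B >= a) & (B <= b))
mc = total / paths

# Exact value: integrate the kernel, R_lambda 1_A(0) = (e^{-sa} - e^{-sb}) / s^2
# with s = sqrt(2 lambda).
s = np.sqrt(2 * lam)
print("Monte Carlo:", mc, "  exact:", (np.exp(-s * a) - np.exp(-s * b)) / s ** 2)
```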

Markovian property

We want to show that excursions are Markov, once we’ve sorted out what an ‘excursion’ actually means in this context. We do know how to deal with the Markovian property once we are already on an excursion. It is relatively straightforward to define an extension of the standard transition probability operator, to include a condition that the chain should not hit point a during the transition. That is

{}_aP_t(x,A):= \mathbb{P}_x(X_t\in A,\ H_a>t).

This will suffice to describe the behaviour once an excursion has started. The more complicated part is the entrance law n_t(A), which gives the measure, under the excursion measure, of those excursions which are in A at time t after their start. To summarise, as with BM, all the technical difficulties with an excursion happen at the beginning, ie bouncing around the start point, but once it is ‘up and running’, its path is Markovian and governed by {}_aP_t.

Marking

The link between the resolvent and the excursions is provided, as in the Brownian case, by attaching a Poisson process of marks to real time at rate \lambda. This induces a mark process on excursions, where the probability that an excursion is marked is an (exponential) function of its length. We make no distinction between an excursion carrying one mark or many marks. Then the measure on marked excursions is, in a mild abuse of notation:

n_\lambda=(1-e^{-\lambda\zeta(f)})\cdot n.

We compare this with the Laplace transform of the entrance law, n_\lambda(dx)=\int_0^\infty e^{-\lambda t}n_t(dx)\,dt, using a probabilistic argument.

We can apply the measure to a function in the usual way: \lambda n_\lambda(1_A) is the measure of those excursions for which the first mark occurs in A. So by taking A=E, we get

\lambda n_\lambda(1)=\text{ Excursion measure }=\int_U n(df)(1-e^{-\lambda\zeta(f)}).

We have therefore linked the exponential mark process on excursion measure with the Laplace transform of the entrance law. So in particular:

\frac{\lambda n_\lambda(A)}{\lambda n_\lambda(1)}=\mathbb{P}(\text{first mark when in }A)=\int_0^\infty \lambda e^{-\lambda t}P_t(0,A)dt=\lambda R_\lambda 1_A(0).

The resolvent is relatively easy to calculate explicitly, and so we can find the Laplace transform n_\lambda(A). From this it is generally possible to invert the transform to find the entrance law n_t.

References

L. C. G. Rogers – A Guided Tour Through Excursions, Bull. London Math. Soc. 21 (1989).

This pair of posts is very much a paraphrase of chapters 3 and 4 of this excellent survey.

Brownian Excursions and Local Time

I’ve been spending a fair bit of time this week reading and thinking about the limits of various combinatorial objects, in particular letting the number of vertices tend to \infty in models of random graphs with various constraints. Perhaps predictably, the limiting objects turn out yet again to be closely linked to Brownian motion. As a result, I’ve ended up reading a bit about the notion of local time, and thought it was sufficiently elegant by itself to justify a quick post.

Local Time

In general, we might be interested in calculating integrals along the Brownian path, of the form

\int_0^t f(B_s)ds.

Note that, except in some highly uninteresting cases, this is a random variable. Our high-school understanding of Riemann integration encourages us to think of this as a ‘pathwise’ integral along the path as it evolves in time. But of course, that is orthogonal to the approach we meet when we are introduced to the Lebesgue integral: there we think about the potential values of the integrand, and weight their contributions by the (Lebesgue) measure of the subset of the domain on which they are attained.

Can we do the same here? That is, can we find a measure which records how long the Brownian motion spends at each point x? This measure will not be deterministic: effectively, the randomness of the BM will be encoded through the measure rather than through the argument of the function.

The answer is yes, and the measure in question is referred to as local time. More formally, we want

\int_0^t f(B_s)ds=\int_\mathbb{R}f(x)L(t,x)dx. (*)

where the local time L(t,x) is a random process, increasing for fixed x. Informally, one could take

\partial_t L(t,x) \propto 1(B_t=x)

but clearly in practice that won’t do at all for a definition, and so instead we use (*). In the usual way, if we want (*) to hold for all reasonably nice functions f, it suffices to check it for the indicator functions of Borel sets. As a function of x, L(t,\cdot) is therefore often referred to as the occupation density, while for fixed x, L(\cdot,x) is the local time at x.
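Here is a quick simulation sketch of (*) (bin width and test function are arbitrary choices of mine): estimate L(t,x) by binning the occupation measure of a discretised path, and compare the two sides for f(x)=x^2.

```python
import numpy as np

rng = np.random.default_rng(8)
t, steps = 1.0, 10**6
dt = t / steps
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), steps))

# Estimate the occupation density: time spent in each spatial bin / bin width.
width = 0.05
edges = np.arange(B.min() - width, B.max() + 2 * width, width)
counts, edges = np.histogram(B, bins=edges)
L = counts * dt / width                    # approximates x -> L(t, x)
x = 0.5 * (edges[:-1] + edges[1:])         # bin centres

# Check (*) for f(x) = x^2: time integral vs space integral against L(t, x).
print("int f(B_s) ds      :", (B ** 2).sum() * dt)
print("int f(x) L(t,x) dx :", (x ** 2 * L).sum() * width)
```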

Local Time as natural index for Excursions

An excursion, for example of Brownian Motion, is a segment of the path that has zero value only at its endpoints. Alternatively, it is a maximal open interval of time such that the path is away from 0. We want to specify the measure on these excursions. Here are some obvious difficulties.

By Blumenthal’s 0-1 law, BM started from zero hits zero infinitely often in any time interval [0,\epsilon], so in the same way that there is no first positive rational, there is no first excursion. We could pick the excursion in progress at a fixed time t, but this is little better. Firstly, the resulting measure is size-biased by the length of the excursion, and more importantly, the proximity of t to the origin may be significant unless we know of some memorylessness-type property of excursions.

Local time allows us to solve these problems. We restrict attention to L_t:=L(t,0), the occupation density of 0. Let’s think about some advantages of indexing excursions by local time rather than by the start time:

  • The key observation is that local time remains constant on excursions. That is, if we are avoiding 0, the local time at 0 cannot grow because the BM spends no time there!
  • If we use start time, then we have a countably infinite number of small excursions accumulating close to 0, ie with very small start time. However, local time increases rapidly when there are lots of small excursions. Remember, lots of small excursions means that the BM hits 0 lots of times. So local time grows quickly through the annoying bits, and effectively provides a size-biasing for excursions that allows us to ignore the effects of the ‘Blumenthal excursions’ near time 0.
  • When indexed by local time, excursions might be Markovian, in the sense that subsequent excursions (and in particular their lengths) are independent of past excursions. This is certainly not the case if you index by start time! If an excursion starts at time t and has length u, then the ‘next’ excursion, in as much as that makes sense, must surely start at time t+u.

We know there are only countably many excursions, hence only countably many local times pertain to an excursion. This motivates considering the set of excursions as a Poisson Point Process indexed by local time. Once you’ve had this idea, everything follows quite nicely. It remains to work out the intensity (which is a measure on the set of excursions), but essentially we now have a sensible framework for tracking the process of excursions, and from this we can reconstruct the original Brownian Motion.
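As a very rough illustration (a sketch with a reflected simple random walk standing in for BM, and my own thresholds), long excursions do indeed arrive at a roughly constant rate when indexed by the discrete local time, here the running number of visits to 0.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 4 * 10**6
X = np.abs(np.cumsum(rng.choice([-1, 1], size=n)))   # reflected SRW

zeros = np.flatnonzero(X == 0)        # visits to 0: the discrete local time clock
lengths = np.diff(zeros)              # gaps between zeros = excursion lengths
local_time = np.arange(len(lengths))  # local time at which each excursion begins

# Excursions longer than a threshold should arrive at a roughly constant
# rate in local time, as for a Poisson point process with fixed intensity.
long_idx = local_time[lengths > 1000]
counts, _ = np.histogram(long_idx, bins=4, range=(0, len(lengths)))
print("long excursions per quarter of local time:", counts)  # Poisson-noisy
```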