Tightness in Skorohod Space

This post continues the theme of revising topics in the analytic toolkit relevant to proving convergence of stochastic processes. Of particular interest is the question of how to prove that families of Markov chains might have a process scaling limit converging to a solution of some stochastic differential equation, in a generalisation of Donsker’s theorem for Brownian motion. In this post, however, we address more general aspects of convergence of stochastic processes, with particular reference to Skorohod space.

Topological Background

I’ve discussed Skorohod space in a previous post. For now, we focus attention on functions defined on a compact time interval, that is, the space D[0,T]. Some of what follows extends easily to the infinite-time setting, and some requires more work. Although we can define a metric on the space of cadlag functions in lots of ways, it is more useful to think topologically, or at least with a vaguer sense of metric. We say two cadlag functions are close to one another if there is a reparameterisation of the time-axis (a continuous increasing function from [0,T] to itself) that is uniformly close to the identity function, and which, when applied to one of the cadlag functions, brings it uniformly close to the other. Heuristically, two cadlag functions are close if their large jumps are close to one another and of similar size, and if they are uniformly close elsewhere. It is worth remembering that a cadlag function, even on an unbounded interval, can have only countably many jumps, and only finitely many with magnitude greater than any given threshold on any compact interval.

For much of the theory one would like to use, it is useful for the spaces under investigation to be separable. Recall a topological space is separable if there exists a countable dense subset. Note in particular that D[0,T] is not separable under the uniform metric, since we can define f_x(\cdot)=\mathbf{1}_{(\cdot \ge x)} for each x\in[0,T], and then ||f_x-f_y||_\infty=1 whenever x\ne y. In particular, we have an uncountable collection of disjoint open sets given by the balls \mathcal{B}(f_x,\frac12), and any dense set must meet each of them, so the space is not separable. Similarly, C[0,\infty) is not separable under the uniform metric. A counterexample is given by considering functions which take values in \{0,1\} at the integers, interpolated linearly in between. This gives an injection \{0,1\}^{\mathbb{N}}\rightarrow C[0,\infty) where the uniform distance between any two distinct image points is exactly one, so the open balls of radius 1/2 around each image point give the same contradiction as before. However, the Stone-Weierstrass theorem shows that C[0,T] is separable, as we can approximate any such function uniformly well by a polynomial, and thus uniformly well by a polynomial with rational coefficients.

In any case, it can be shown that D[0,T] is separable with respect to the natural choice of metric. It can also be shown that there is a metric which gives the same open sets (hence is a topologically equivalent metric) under which D[0,T] is complete, and hence a Polish space.

Compactness in C[0,T] and D[0,T]

We are interested in tightness of measures on D[0,T], so first we need to address compactness for sets of deterministic functions in D[0,T]. First, we consider C[0,T]. Here, the conditions for a set of functions to be compact are given by the celebrated Arzela-Ascoli theorem. We are really interested in compactness as a property of size, so we consider instead relative compactness. A set is relatively compact (sometimes pre-compact) if its closure is compact. For the purpose of extracting subsequential limits, this is just as good as compactness, only now we allow the possibility of the limit point lying outside the set.

We note that the map C[0,T]\rightarrow \mathbb{R} given by f\mapsto ||f||_\infty is continuous, and hence uniform boundedness is certainly a necessary condition for relative compactness in C[0,T]. Arzela-Ascoli states that uniform boundedness plus equicontinuity is sufficient for a set of such functions to be relatively compact. Equicontinuity should be thought of as uniform continuity that is uniform across all the functions in the set, rather than just over the argument of each individual function.

For identical reasons, we need uniform boundedness for relative compactness in D[0,T], but obviously uniform continuity won’t work as a criterion for discontinuous functions! We seek some analogue of the modulus of continuity that ignores jumps. We define

\omega'_\delta(f):=\inf_{\{t_i\}} \max_i \sup_{s,t\in[t_{i-1},t_i)} |f(s)-f(t)|,

where the infimum is taken over all meshes 0=t_0<t_1<\ldots<t_r=T with t_i-t_{i-1}>\delta. Note that as \delta\downarrow 0, we are free to place the t_i so that the large jumps of f occur at the boundaries between adjacent cells of the mesh. In particular, for a fixed cadlag function f, it can be shown fairly easily that \omega'_\delta(f)\downarrow 0 as \delta\rightarrow 0. Then, unsurprisingly, and in a similar fashion to the Arzela-Ascoli theorem, it turns out that a set of functions A\subset D[0,T] is relatively compact if it is uniformly bounded, and

\lim_{\delta\rightarrow 0} \sup_{f\in A}\omega'_\delta(f)=0.

Note that this ‘modulus of continuity’ needs to decay uniformly across the set of functions, but that we do not need to choose the mesh at level \delta uniformly across all the functions. Demanding a single common mesh would obviously be too strong: for a sequence x_n\rightarrow x, no fixed mesh works simultaneously for all the functions \mathbf{1}_{(\cdot\ge x_n)}, and yet they clearly converge in Skorohod space!
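Since the infimum over meshes can look opaque, here is a minimal computational sketch (all names are mine, and the routine is a naive O(n^3) dynamic programme, purely for illustration). It approximates \omega'_\delta(f) from samples of f on a fine grid, treating cells as half-open so that a jump sitting exactly at a mesh point costs nothing.

```python
import numpy as np

def omega_prime(ts, fs, delta):
    """Approximate omega'_delta(f) for samples fs of f at sorted times ts.

    Dynamic programming: best[j] is the optimal max-oscillation over meshes
    0 = t_0 < ... < ts[j] whose cells all have length > delta, with each
    cell [ts[i], ts[j]) treated as half-open.
    """
    n = len(ts)
    best = [np.inf] * n
    best[0] = 0.0
    for j in range(1, n):
        for i in range(j):
            if ts[j] - ts[i] > delta:            # cell must be longer than delta
                # half-open cell; the final cell keeps the right endpoint
                cell = fs[i:j] if j < n - 1 else fs[i:]
                osc = cell.max() - cell.min()
                best[j] = min(best[j], max(best[i], osc))
    return best[n - 1]

ts = np.linspace(0.0, 1.0, 201)
fs = (ts >= 0.5).astype(float)                   # a single unit jump at 1/2
print(omega_prime(ts, fs, 0.1))                  # 0.0: the mesh absorbs the jump
```

By contrast, the standard modulus \omega_\delta of this indicator equals 1 for every \delta>0, which is exactly why the modified modulus is the right object for D[0,T].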

Tightness in C[0,T] and D[0,T]

Naturally, we are mainly interested in (probability) measures on D[0,T], and in particular conditions for tightness on this space. Recall a family of measures \Pi is tight if for any \epsilon>0, there exists some compact set A such that

\pi(A)>1-\epsilon,\quad \forall \pi\in\Pi.

So, for measures (\mu_n) on D[0,T], the sequence is tight precisely if for any \epsilon>0, there exist M,\delta>0 and some N such that for all n>N, both

\mu_n(||f||_\infty >M)\le \epsilon,\quad \mu_n(\omega'_\delta(f)>\epsilon)\le \epsilon

hold. In fact, the second condition controls the variation sufficiently strongly that we can replace the first condition with

\mu_n(|f(0)|>M)\le \epsilon.
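To sketch why these two conditions really do give tightness (this is the standard argument, stated loosely): the finitely many n\le N can be handled separately, since any single measure on a Polish space is tight. For the remaining n, choose M and then \delta_k for each k\ge 1 such that

\mu_n(||f||_\infty>M)\le \frac{\epsilon}{2},\quad \mu_n(\omega'_{\delta_k}(f)>\tfrac{1}{k})\le \frac{\epsilon}{2^{k+1}},\quad \forall n>N.

Then the set K=\overline{\{f:\,||f||_\infty\le M,\,\omega'_{\delta_k}(f)\le \frac{1}{k}\,\forall k\}} is compact by the criterion of the previous section, and \mu_n(K^c)\le \epsilon for all n>N by a union bound.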

Often we might be taking some sort of scaling limit of processes in D[0,T], where the jumps become so small in the limit that we expect the limit process to be continuous, perhaps the solution of an SDE, or a diffusion. If we can replace \omega'_\delta by \omega_\delta, the standard modulus of continuity, then we have the additional conclusion that any weak limit lies in C[0,T].

In general, to prove convergence of some stochastic processes, we will want to show that the processes are tight, by demonstrating the properties above, or something equivalent. Then Prohorov’s theorem (which I tend to think of as a probabilistic functional version of Bolzano-Weierstrass) asserts that the family of processes has a weak subsequential limit. Typically, one then shows that any weak subsequential limit must have the law of some particular random process. Normally this is achieved by showing some martingale property (eg for an SDE) in the limit, often by using the Skorohod representation theorem to use almost sure subsequential convergence rather than merely weak convergence. Then one argues that there is a unique process with this property and a given initial distribution. So since all weak subsequential limits are this given process, in fact the whole family has a weak limit.


Non-separable Skorohod Representations

In the previous post, I discussed the statement and proof of the Skorohod representation theorem. This concerns the conditions under which it is possible to couple distributions which converge in law, to obtain a family of random variables on a possibly very large probability space, which converge almost surely. The condition for the theorem to hold is that the support of the limiting distribution should be a separable metric space. Skorohod’s original proof concerned the case where all the distributions are supported on a complete, separable metric space (a Polish space), but the extension is not particularly involved, and was proven not long after the original result.

It is natural to ask exactly what goes wrong in non-separable or non-metrizable spaces. Recall a space is separable if it contains a countable dense subset. Obviously, finite or countable sets are by definition separable with any metric. Considering the points with rational coordinates shows that \mathbb{R}^d is separable for each d, and the Stone-Weierstrass theorem shows that the continuous functions on a bounded interval are also separable in the uniform topology, as they can be approximated uniformly well by polynomials with rational coefficients. One heuristic is that a separable space does not have ‘too many’ open sets.

There are references (for example, see [2]) to examples of Skorohod non-representation in non-metrizable topological spaces, which are ‘big’ enough to allow convergence in distribution with respect to a particular class of test functions, but where the distributions are not uniformly tight, so cannot converge almost surely. However, I don’t really understand this well at all, and have struggled to chase the references, some of which are unavailable, and some in French.

Instead, I want to talk about an example given in [1] of a family of distributions on a non-separable space, which cannot be coupled so as to converge almost surely. The space is (0,1) equipped with the discrete metric, which sets d(x,y)=1 whenever x\ne y. Note that it is very hard to have even deterministic convergence in this space, since the only way to be close to an element of the space is to be equal to that element. We will construct random variables for which it is unsurprising that they cannot possibly converge almost surely in any coupling, but for which the exact nature of the construction leads to convergence in distribution.

Based on what we proved last time, the support of the limiting distribution will be non-separable. It turns out that the existence of such a distribution is equiconsistent, in the sense of formal logic, with the existence of an extension of Lebesgue measure to the whole power set of (0,1). Such an extension cannot exist under the Axiom of Choice, but is consistent under the slightly weaker Axiom of Dependent Choice (DC). This weaker condition says, translated into language more familiar to me, that every directed graph with arbitrary (and in particular, potentially uncountable) vertex set, and with all out-degrees at least 1, contains an infinite directed path. This seems obvious when viewed through the typically countable context of graph theory. The natural construction is to start somewhere and ‘just keep going’ wherever possible, which involves making a choice from the out-neighbourhood of countably many vertices in turn, which makes it clear why this is weaker than AC. Anyway, in the sequel, we assume that this extension of Lebesgue measure exists.

Example (from [1]): We take (X_n)_{n\ge 1} to be an IID sequence of non-negative RVs defined on the probability space ((0,1),\mathcal{B}(0,1),\mathrm{Leb}), with expectation under Lebesgue measure equal to 1. It is not obvious how to do this, given the restriction on the probability space. One example is to write \omega\in(0,1) in terms of its binary expansion 0.\omega_1\omega_2\ldots, and then set X_n(\omega)=2\omega_n. We will later require that X_n is not identically 1, which certainly holds in the example just given.
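A minimal sketch of this construction (names are mine), together with a Monte Carlo sanity check that the X_n have mean 1 and behave independently:

```python
import numpy as np

def X(omega, n):
    # X_n(omega) = 2 * (n-th binary digit of omega), for omega in (0,1)
    return 2 * (np.floor(omega * 2**n).astype(int) % 2)

rng = np.random.default_rng(0)
omegas = rng.uniform(size=200_000)   # sampling from Lebesgue measure on (0,1)

X1, X2 = X(omegas, 1), X(omegas, 2)
print(X1.mean(), X2.mean())          # both close to 1 = E[X_n]
print((X1 * X2).mean())              # close to 1 = E[X_1] E[X_2] (independence)
```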

Let \mu be the extension of Lebesgue measure to the power set \mathcal{P}=\mathcal{P}(0,1). Now define the measures:

\mu_n(B)=\mathbb{E}_\mu(X_n \mathbf{1}_B),\quad \forall B\in\mathcal{P}.

To clarify, we are defining a family of measures which are also defined for all elements of the power set. Since they all live on the same space, the definition itself supplies a coupling. This will make it possible to show convergence in distribution, but the associated random variables will not converge almost surely in this coupling, nor, in fact, under any other coupling. Now consider a restricted class of sets, namely B\in \sigma(X_1,\ldots,X_k), the class of sets distinguishable by the outcomes of the first k RVs.

[Caution: the interpretation of this increasing filtration is a bit different to the standard setting with for example Markov processes, as the sets under consideration are actually subsets of the probability space on which everything is defined. In particular, there is no notion that a ‘fixed deterministic set’ lies in all the layers of the filtration.]

Anyway, when n>k, by independence of X_n from \sigma(X_1,\ldots,X_k), we have

\mu_n(B)=\mathbb{E}_\mu(X_n\mathbf{1}_B)=\mathbb{E}_\mu(X_n)\mathbb{E}_\mu(\mathbf{1}_B)=\mu(B).

So whenever B\in\mathcal{F}:=\bigcup_k \sigma(X_1,\ldots,X_k), we have \lim_n \mu_n(B)=\mu(B). By MCT, we can extend this to \mathbb{E}_{\mu_n}[g]\rightarrow\mathbb{E}_\mu[g] for any bounded \mathcal F-measurable function g.

This is the clever bit. We want to show that \mu_n(B)\rightarrow\mu(B) for all B\in\mathcal P, but so far we only have it for B\in\mathcal F. But since \mathcal{F}\subset \mathcal P, and \mathcal P is the base sigma-algebra of the probability space under the (non-AC) assumption, we can take conditional expectations. In particular, for any B\in\mathcal P, \mathbb{E}_\mu[\mathbf{1}_B | \mathcal{F}] is a bounded, \mathcal F-measurable function. Hence, by the definition of \mu_n and the extended MCT result:

\mu_n(B)=\mathbb{E}_\mu[X_n\mathbb{E}_\mu[\mathbf{1}_B|\mathcal F]]=\mathbb{E}_{\mu_n}[\mathbb{E}_\mu[\mathbf{1}_B|\mathcal F]] \rightarrow \mathbb{E}_\mu [\mathbb{E}_\mu[\mathbf{1}_B |\mathcal{F}]].

Now, since by definition \mathbf{1}_B is \mathcal{P}-measurable, applying the tower law gives that this is equal to \mu(B). So we have

\mu_n(B)\rightarrow \mu(B),\quad \forall B\in\mathcal{P}. (*)

This gives weak convergence \mu_n\Rightarrow \mu. At first glance it might look like we have proved a much stronger statement than we need. But recall that in any space equipped with the discrete topology, every set is both open and closed, and so to use the portmanteau lemma, (*) really is required in full.

Now we have to check that we can’t have almost sure convergence in any coupling of these measures. Suppose that we have a probability space with random variables Y,(Y_n) satisfying \mathcal L(Y)=\mu, \mathcal L(Y_n)=\mu_n. The key point is that in the discrete metric, almost sure convergence forces Y_n=Y eventually, almost surely. But the density of \mu_n with respect to \mu is X_n, which in the example given above takes only the values 0 and 2, so that, irrespective of the coupling,

\mathbb{P}(Y_n\ne Y)\ge d_{TV}(\mu_n,\mu)=\tfrac12\mathbb{E}_\mu|X_n-1|=\tfrac12.

Then by Fatou’s lemma, \mathbb{P}(Y_n=Y\text{ eventually})\le \liminf_n \mathbb{P}(Y_n=Y)\le \tfrac12<1. So it is impossible for the Y_n to converge almost surely (to any limit, since an almost sure limit would necessarily have law \mu).

References

[1] Berti, Pratelli, Rigo – Skorohod Representation and Disintegrability (possibly not open access)

[2] Jakubowski – The almost sure Skorokhod representation for subsequences in non-metric spaces.

Skorohod Representation Theorem

Continuing the theme of revising theory in the convergence of random processes that I shouldn’t have forgotten so rapidly, today we consider the Skorohod Representation Theorem. Recall from the standard discussion of the different modes of convergence of random variables that almost sure convergence is among the strongest since it implies convergence in probability and thus convergence in distribution. (But not convergence in L_1. For example, take U uniform on [0,1], and X_n=n\mathbf{1}_{\{U<\frac{1}{n}\}}.)

Almost sure convergence is therefore in some sense the most useful form of convergence to have. However, it comes with a strong prerequisite, that the random variables be defined on the same probability space, which is not required for convergence in distribution. Indeed, one can set up weak versions of convergence in distribution which do not even require the converging objects to be random variables. The Skorohod representation theorem gives a partial converse: it states conditions under which random variables which converge in distribution can be coupled on some larger probability space to obtain almost sure convergence.

Skorohod’s original proof dealt with convergence of distributions defined on complete, separable metric spaces (Polish spaces). The version discussed here is from Chapter 5 of Billingsley [1], and assumes the limiting distribution has separable support. More recent authors have considered stronger convergence conditions (convergence in total variation or Wasserstein distance, for example) with weaker topological requirements, and convergence of random variables defined in non-metrizable spaces.

Theorem (Skorohod representation theorem): Suppose that distributions P_n\Rightarrow P, where P is a distribution with separable support. Then we can define a probability space (\Omega,\mathcal{F},\mathbb{P}) and random variables X,(X_n)_{n\ge 1} on this space such that the laws of X,X_n are P,P_n respectively and X_n(\omega)\rightarrow X(\omega) for all \omega\in\Omega.

NB. We are proving ‘sure convergence’ rather than merely almost sure convergence! It is not surprising that this is possible, since changing the value of all the X_ns on a set with measure zero doesn’t affect the conditions for convergence in distribution.

Applications: Before going through the Billingsley proof, we consider one simple application of this result. Let S be a separable metric space containing the support of X, and g a continuous function S\rightarrow S'. Then

X_n\stackrel{a.s.}{\rightarrow}X\quad\Rightarrow\quad g(X_n)\stackrel{a.s.}{\rightarrow}g(X).

So, by applying the Skorohod representation theorem once, and the result that almost sure convergence implies convergence in distribution, we have shown that

X_n\stackrel{d}{\rightarrow}X\quad\Rightarrow\quad g(X_n)\stackrel{d}{\rightarrow}g(X),

subject to these conditions on the space supporting X. And we have avoided the need to be careful about exactly which class of functions determine convergence in distribution, as would be required for a direct argument.
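Before the proof proper, it is worth noting that on the real line the representation is completely explicit: with F_n,F the distribution functions, take a single uniform \xi and set X_n=F_n^{-1}(\xi), X=F^{-1}(\xi) (the quantile coupling). Here is a minimal numerical sketch; the exponential laws and all parameter values are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
xi = rng.uniform(size=100_000)              # ONE uniform sample drives every X_n

def quantile_exp(rate, u):
    # inverse CDF of Exp(rate): F^{-1}(u) = -log(1 - u) / rate
    return -np.log1p(-u) / rate

rates = 1.0 + 1.0 / np.arange(1, 101)       # rates -> 1, so Exp(rate_n) => Exp(1)
X_lim = quantile_exp(1.0, xi)

# pathwise (samplewise) error, not merely a distributional distance
errs = [np.abs(quantile_exp(r, xi) - X_lim).max() for r in rates]
print(errs[0], errs[-1])                    # decreasing towards 0 as rate_n -> 1
```

Each X_n is a deterministic function of the same \xi, and X_n(\omega)\rightarrow X(\omega) pointwise, which is exactly the conclusion of the theorem; the proof below achieves the same thing without the convenience of a linear order on the underlying space.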

Proof (from [1]): Unsurprisingly, the idea is to construct realisations of the (X_n) from a realisation of X. We take X, and a partition of the support of X into small measurable sets, chosen so that the probability of lying in a particular set is almost the same for X_n as for X, for large n. Then, the X_n are constructed so that for large n, with limitingly high probability X_n lies in the same small set as X.

Constructing the partition is the first step. For each x\in S:=\mathrm{supp}(X), there must be some radius \frac{\epsilon}{4}<r_x<\frac{\epsilon}{2} such that P(\partial B(x,r_x))=0, since the boundaries for different radii are disjoint, and so at most countably many of them can carry positive mass. This is where we use separability. Since every point in the space is within \frac{\epsilon}{4} of some element of a countable dense sequence, we can take a countable subcollection of the open balls B(x,r_x) which covers the space. Furthermore, we can take a finite subcollection of the balls which covers all of the space apart from a set of measure at most \epsilon. We want the sets to be disjoint, and we can achieve this by removing the intersections inductively in the obvious way. We end up with a collection B_0,B_1,\ldots,B_k, where B_0 is the leftover space, such that

  • P(B_0)<\epsilon
  • P(\partial B_i)=0,\quad i=0,1,\ldots,k
  • \mathrm{diam}(B_i)<\epsilon,\quad i=1,\ldots,k.

Now suppose for each m, we take such a partition B^m_0,B^m_1,\ldots,B^m_{k_m}, for which \epsilon_m=\frac{1}{2^m}. Unsurprisingly, this scaling of \epsilon is chosen so as to use Borel-Cantelli at the end. Then, from convergence in distribution, there exists an integer N_m such that for n\ge N_m, we have

P_n(B^m_i)\ge (1-\epsilon_m)P(B^m_i),\quad i=0,1,\ldots,k_m. (*)

Now, for N_m\le n <N_{m+1}, for each B^m_i with non-zero probability under P, take Y_{n,i} to be independent random variables with law P_n(\cdot | B^m_i), the normalised restriction of P_n to the set. Take \xi\sim U[0,1] independent of everything so far. We can now make concrete the heuristic for constructing X_n from X. We define:

X_n=\sum_{i=0}^{k_m}\mathbf{1}_{\{\xi\le 1-\epsilon_m, X\in B^m_i\}} Y_{n,i} + \mathbf{1}_{\{\xi>1-\epsilon_m\}}Z_n.

We haven’t defined Z_n yet. But, from (*), there is a unique distribution such that taking Z_n to be independent of everything so far, with this distribution, we have \mathcal{L}(X_n)=P_n. Note that by iteratively defining random variables which are independent of everything previously defined, our resulting probability space \Omega will be a large product space.

Note that \xi controls whether the X_n follow the law we have good control over, and we also want to avoid the set B^m_0. So define E_m:=\{X\not \in B^m_0, \xi\le 1-\epsilon_m\}. Then P(E_m^c)<2\epsilon_m=2^{-(m-1)}, and so by Borel-Cantelli, with probability 1 the events E_m hold for all m larger than some threshold. Call this event E:=\liminf_m E_m. On E, for all large enough m and for N_m\le n<N_{m+1}, the variables X_n and X lie in the same cell B^m_i, which has diameter less than \epsilon_m, and so by definition X_n \rightarrow X. So we have almost sure convergence. We can easily convert this to sure convergence by setting X_n\equiv X on E^c (and removing all \omega\in\Omega for which \xi(\omega)=1), as this does not affect the distributions.

Omissions: 

  • Obviously, I have omitted the exact construction of the distribution of Z_n. This can be reconstructed very easily from (*), but requires more notation than is ideal for this medium.
  • It is necessary to remove any sets B^m_i with zero measure under P for the conditioning to make sense. These can be added to B^m_0 without changing any of the required conditions.
  • We haven’t dealt with the X_n for n<N_1; any random variables with the right laws will do, since finitely many terms do not affect convergence.

The natural question to ask is what happens if we remove the restriction that the space be separable. There are indeed counterexamples to the existence of a Skorohod representation. The clearest example I’ve found so far is supported on (0,1) with a metric inducing the discrete topology. If time allows, I will explain this construction in a post shortly.

References

[1] Billingsley – Convergence of Probability Measures, 2nd edition (1999)

Weak Convergence and the Portmanteau Lemma

Much of the theory of Large Deviations splits into separate treatment of open and closed sets in the rescaled domains. Typically we seek upper bounds for the rate function on closed sets, and lower bounds for the rate function on open sets. When things are going well, these turn out to be the same, and so we can get on with some applications and pretty much forget about the topology underlying the construction. Many sources make a comment along the lines of “this is natural, by analogy with weak convergence”.

Weak convergence is a topic I learned about in Part III Advanced Probability. I fear it may have been one of those things that leaked out of my brain shortly after the end of the exam season… Anyway, this feels like a good time to write down what it is all about a bit more clearly. (I’ve slightly cheated, and chosen definitions and bits of the portmanteau lemma which look maximally similar to the Large Deviation material, which I’m planning on writing a few posts about over the next week.)

The motivation is that we want to extend the notion of convergence in distribution of random variables to general measures. There are several ways to define convergence in distribution, so accordingly there are several ways to generalise it. Much of what follows will be showing that these are equivalent.

We work in a metric space (X,d) and have a sequence (\mu_n) and \mu of (Borel) probability measures. We say that (\mu_n) converges weakly to \mu, or \mu_n\Rightarrow\mu if:

\mu_n(f)\rightarrow\mu(f), \quad\forall f\in\mathcal{C}_b(X).

So the test functions required for this result are the class of bounded, continuous functions on X. We shall see presently that it suffices to check a smaller class, eg bounded Lipschitz functions. Indeed the key result, which is often called the portmanteau lemma, gives a set of alternative conditions for weak convergence. We will prove the equivalence (roughly) cyclically.

Portmanteau Lemma

The following are equivalent.

a) \mu_n\Rightarrow \mu.

b) \mu_n(f)\rightarrow\mu(f) for all bounded Lipschitz functions f.

c) \limsup_n \mu_n(F)\leq \mu(F) for all closed sets F. Note that we demanded that all the measures be Borel, so there is no danger of \mu(F) not being defined.

d) \liminf_n \mu_n(G)\geq \mu(G) for all open sets G.

e) \lim_n \mu_n(A)=\mu(A) whenever \mu(\partial A)=0. Such an A is called a continuity set.

Remarks

a) All of these statements are well-defined if X is a general topological space. I can’t think of any particular examples where we want to use measures on a non-metrizable space (eg C[0,1] with the topology of pointwise convergence), but there seem to be a few references implying that the results continue to hold in this case provided X is locally compact Hausdorff. This seems like an interesting thing to think about, but perhaps not right now.

b1) This doesn’t strike me as hugely surprising. I want to say that any bounded continuous function can be uniformly approximated almost everywhere by bounded Lipschitz functions. Even if that isn’t true, I am still not surprised.

b2) In fact this condition could be replaced by several alternatives. In the proof that follows, we only use one type of function, so any subset of \mathcal{C}_b(X) that contains the ones we use will be sufficient to determine weak convergence.

c) and d) Why should the sign be this way round? The canonical example to have in mind is a sequence of point masses \delta_{x_n} where x_n\rightarrow x with x_n\ne x. Then for the open set G=X\backslash\{x\} we have \mu_n(G)=1 for all n, but \mu(G)=0. Informally, we might say that in the limit, some positive mass can ‘leak into’ the boundary of an open set.

e) is then not surprising, as the condition of being a continuity set precisely prohibits the above situation from happening.

Proof

a) to b) is genuinely trivial. For b) to c), fix a closed set F and \epsilon>0, and let F'=\{x:d(x,F)<\delta\}, where \delta is chosen small enough that \mu(F')-\mu(F)\le\epsilon; this is possible since the sets F' decrease to F as \delta\downarrow 0, by closedness. Then f(x):=\max(0,1-d(x,F)/\delta) is a bounded Lipschitz function which is 1 on F and 0 outside F'. We obtain

\limsup_n \mu_n(F)\leq \limsup_n \mu_n(f)=\mu(f)\leq \mu(F').

But \epsilon was arbitrary, so the result follows on letting it tend to zero. c) and d) are equivalent after taking G=F^c. If we assume c) and d) and apply them to \bar{A} and A^\circ respectively, then e) follows.

e) to a) is a little trickier. Given a bounded continuous function f, assume WLOG that it takes values in [0,1]. At most countably many level sets \{f=a\} have positive mass under any of \mu, (\mu_n). So given M>0, we can choose a sequence

-1=a_0<a_1<\ldots<a_r=2, such that a_{k+1}-a_k<\frac{1}{M},

and \mu(f=a_k)=\mu_n(f=a_k)=0 for all k,n. Now it is clear what to do. Each \{f\in[a_k,a_{k+1})\} is a continuity set, since its boundary is contained in \{f=a_k\}\cup\{f=a_{k+1}\} by continuity of f, so we can apply e), and then patch everything together. There are slightly too many Ms and \epsilons to do this sensibly in WordPress, so I will leave it at that.
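For completeness, here is a sketch of the patching step. Set g=\sum_k a_k\mathbf{1}_{\{f\in[a_k,a_{k+1})\}}, so that g\le f\le g+\frac{1}{M} pointwise. Each cell is a continuity set, so by e),

\mu_n(f)\le \mu_n(g)+\frac{1}{M}=\sum_k a_k\,\mu_n(f\in[a_k,a_{k+1}))+\frac{1}{M}\ \rightarrow\ \mu(g)+\frac{1}{M}\le \mu(f)+\frac{1}{M},

and the matching lower bound holds similarly. Since M was arbitrary, \mu_n(f)\rightarrow\mu(f).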

I will conclude by writing down a combination of c) and d) that will look very familiar soon.

\mu(A^\circ)\leq \liminf_n \mu_n(A)\leq \limsup_n\mu_n(A)\leq \mu(\bar{A}).

References

Apart from the Part III Advanced Probability course, this article was prompted by various books on Large Deviations, including those by Frank den Hollander and Ellis / Dupuis. I’ve developed the proof above from the hints given in the appendix of these very comprehensible notes by Rassoul-Agha and Seppalainen.

Skorohod Space

The following is a summary of a chapter from Billingsley’s Convergence of Probability Measures. The ideas are easy to explain heuristically, but this was the first text I could find which explained how to construct Skorohod space for functions on the whole of the non-negative reals in enough stages that it was easily digestible.

It is relatively straightforward to define a topology on C[0,1], as it is induced by the most natural choice of metric. In this topology, functions f and g are close together if

\sup_{t\in[0,1]} |f(t)-g(t)| is small.

For cadlag functions, things are a bit more complicated. Two functions might be very similar, but have a discontinuity of similar magnitude at slightly different places. The sup norm of the difference is therefore macroscopically large. So we want a metric that also allows uniformly small deformations of the time scale.

We define the Skorohod (or Skorokhod depending on your transliteration preferences) metric d on D[0,1] as follows. Let \Lambda be the family of continuous, strictly increasing functions from [0,1] to [0,1] which map 0 to 0 and 1 to 1. This will be our family of suitable reparameterisations of the time scale (or abscissa – a new word I learned today. The other axis in a co-ordinate pair is called the ordinate). Anyway, we now say that

d(f,g)<\epsilon\quad\text{if }\exists \lambda\in\Lambda\text{ s.t. }

||\lambda - \text{id}||_\infty<\epsilon\quad\text{and}\quad ||f-g\circ\lambda||_\infty<\epsilon.

In other words, after reparameterising the time scale for g, without moving any time by more than epsilon, the functions are within epsilon in the sup metric.
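A minimal numerical sketch of this definition (the jump locations x,y and the grid are arbitrary choices): for the indicators f=\mathbf{1}_{[x,1]} and g=\mathbf{1}_{[y,1]}, whose uniform distance is 1, a single piecewise-linear reparameterisation \lambda with \lambda(x)=y certifies that d(f,g)\le |x-y|.

```python
import numpy as np

x, y = 0.50, 0.55
t = np.linspace(0.0, 1.0, 10_001)

def lam(s):
    # continuous, strictly increasing, lam(0) = 0, lam(x) = y, lam(1) = 1
    return np.where(s <= x, s * y / x, y + (s - x) * (1 - y) / (1 - x))

f        = (t      >= x).astype(float)          # jump at x
g_of_lam = (lam(t) >= y).astype(float)          # (g o lambda) also jumps at x

print(np.abs(lam(t) - t).max())                 # = |x - y|, attained at t = x
print(np.abs(f - g_of_lam).max())               # = 0: the jumps now coincide
```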

Weak Convergence

We have the following condition for weak convergence: if \{P_n\} is a tight sequence of probability measures and

P_n\pi_{t_1,\ldots,t_k}^{-1}\Rightarrow P\pi_{t_1,\ldots,t_k}^{-1}\quad\forall t_1,\ldots,t_k\in[0,1],\quad\text{then }P_n\Rightarrow P,

where \pi_{t_1,\ldots,t_k} is the projection onto the corresponding finite-dimensional distributions. This is a suitable condition for C[0,1]. For D[0,1], we have the additional complication that these projections might not be continuous everywhere.

We can get over this problem. For a measure P, set T_P to be the set of t\in[0,1] such that \pi_t is continuous P-almost everywhere (ie for all f\in D apart from a collection with P-measure zero). Then, for all P, it is not hard to check that 0,1\in T_P and that [0,1]\backslash T_P is countable.

The tightness condition requires two properties:

1) \lim_{K\rightarrow\infty} \limsup_{n}P_n[f:||f||\geq K]=0.

2) \forall \epsilon>0:\,\lim_\delta\limsup_n P_n[f:w_f'(\delta)\geq\epsilon]=0.

These say, respectively, that no mass escapes to infinity in the uniform norm, and that no mass is given in the limit to functions which ‘wiggle with infinite frequency on an epsilon scale of amplitude’.

D_\infty=D[0,\infty)

Our earlier definition of the Skorohod metric could have been written:

d(f,g)=\inf_{\lambda\in\Lambda}\{||\lambda-\text{id}||\vee||f-g\circ\lambda||\}.

From a topological convergence point of view, there’s no need to use the sup norm on \lambda - \text{id}. We want to regulate smoothness of the reparameterisation, so we could use the norm:

||\lambda||^\circ=\sup_{s<t}|\log\frac{\lambda(t)-\lambda(s)}{t-s}|,

that is, the slope of \lambda is uniformly close to 1 if ||\lambda||^\circ is small. The advantage of this choice of norm is that an extension to D[0,\infty) is immediate. Also, the induced metric

d^\circ(f,g)=\inf_{\lambda\in\Lambda} \{||\lambda - \text{id}||^\circ \vee||f-g\circ\lambda||\}

is complete. However, this causes problems when restricting between time horizons, as for example

d^\circ(\mathbf{1}_{[0,1)},\mathbf{1}_{[0,1-\frac{1}{n})})=1\text{ in }D[0,1],

as you can’t reparameterise across a jump at the right endpoint in a way that keeps the log of the gradient small. (In particular, to make the uniform distance less than 1, we would need \lambda to send 1-\frac{1}{n}\mapsto 1, which forces \lambda to be constant on [1-\frac{1}{n},1], and so ||\lambda||^\circ=\infty by definition.) Note, though, that viewed as elements of D[0,\infty) these functions do converge, using reparameterisations with \lambda_n(1-\frac{1}{n})=1 whose slopes tend uniformly to 1.

So we can’t immediately define Skorohod convergence on D_\infty by demanding convergence on any restriction to [0,t]. We overcome this in a similar way to convergence of distribution functions.

Lemma: If d_t^\circ (f_n,f)\rightarrow_n 0, then for any s<t at which f is continuous, d_s^\circ(f_n,f)\rightarrow_n 0.

So this says that the functions converge in Skorohod space if for arbitrarily large times T where the limit function is continuous, the restrictions to [0,T] converge. (Note that cadlag functions have at most countably many discontinuities, so this is fine.)

A metric for D_\infty

If we want to specify an actual metric d_\infty^\circ, the usual tools for specifying a countable product metric will do here:

d_\infty^\circ(f,g)=\sum_{m\geq 1}2^{-m}[1\wedge d_m^\circ(f^m,g^m)],

where f^m is the restriction of f to [0,m], with the value tapered to zero near m to smooth out the potential discontinuity there:

f^m(t)=\begin{cases}f(t)&t\leq m-1\\ (m-t)f(t)&t\in[m-1,m]\\ 0&t\geq m.\end{cases}

In particular, d_\infty^\circ(f,g)=0\Rightarrow f^m=g^m\,\forall m, and hence f=g.
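A sketch of how the pieces combine, treating a routine d_m_circ(fm, gm) for the finite-horizon metric d_m^\circ as a hypothetical black box (computing the infimum over reparameterisations is itself nontrivial):

```python
def d_inf_circ(f, g, d_m_circ, M=50):
    # Truncation of d_inf^circ(f,g) = sum_{m>=1} 2^{-m} [1 ^ d_m^circ(f^m, g^m)],
    # where d_m_circ is an assumed routine for the finite-horizon metric.
    def restrict(h, m):
        # h^m: agrees with h on [0, m-1], tapered linearly to 0 over [m-1, m]
        return lambda t: h(t) if t <= m - 1 else ((m - t) * h(t) if t <= m else 0.0)
    return sum(2.0 ** -m * min(1.0, d_m_circ(restrict(f, m), restrict(g, m)))
               for m in range(1, M + 1))
```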

It can be checked that:

Theorem: d_\infty^\circ(f_n,f)\rightarrow 0 in D_\infty if and only if

\exists \lambda_n\in\Lambda_\infty\text{ s.t. }||\lambda_n-\text{id}||\rightarrow 0

\text{and }\sup_{t\leq m}|f_n(\lambda_n(t))-f(t)|\rightarrow_n 0\quad\forall m,

and that d_\infty^\circ (f_n,f)\rightarrow 0 \Rightarrow d_t^\circ(f_n,f)\rightarrow 0 for every point of continuity t of f.

Similarly weak convergence and tightness properties are available, roughly as you might expect. It is probably better to reference Billingsley’s book or similar sources rather than further attempting to summarise them here.

Levy’s Convergence Theorem

We consider some of the theory of weak convergence from the Part III course Advanced Probability. It has previously been seen, or at least discussed, that characteristic functions uniquely determine the laws of random variables. We will show Levy’s theorem, which equates weak convergence of random variables with pointwise convergence of their characteristic functions.

We have to start with the most important theorem about weak convergence, which is essentially a version of Bolzano-Weierstrass for measures on a metric space M. We say that a sequence of measures (\mu_n) is tight if for any \epsilon>0, there exists a compact K_\epsilon such that \sup_n\mu_n(M\backslash K_\epsilon)\leq \epsilon. Informally, each measure is concentrated compactly, and this property is uniform across all the measures. We can now state and prove a result of Prohorov:

Theorem (Prohorov): Let (\mu_n) be a tight sequence of probability measures. Then there exists a subsequence (n_k) and a probability measure \mu such that \mu_{n_k}\Rightarrow \mu.

Summary of proof in the case M=\mathbb{R}: By countability of \mathbb{Q}, we can use Bolzano-Weierstrass and a standard diagonal argument to find a subsequence along which the distribution functions satisfy

F_{n_k}(x)\rightarrow F(x)\quad\forall x\in\mathbb{Q}

Then extend F to the whole real line by taking a downward rational limit, which ensures that F is cadlag. Convergence of the distribution functions then holds at all points of continuity of F, by monotonicity and approximating by rationals from above. It only remains to check that F(-\infty)=0,F(\infty)=1, which follows from tightness. Specifically, monotonicity guarantees that F has countably many points of discontinuity, so we can choose some large N such that both N and -N are points of continuity, and exploit that

\inf_n \mu_n([-N,N])>1-\epsilon.

We can define the limit (Borel) measure from the distribution function by taking the obvious definition F(b)-F(a) on intervals, then lifting to the Borel sigma-algebra by Caratheodory’s extension theorem.

Theorem (Levy): Let X_n,X be random variables in \mathbb{R}^d. Then:

\mathcal{L}(X_n)\Rightarrow \mathcal{L}(X)\quad\iff\quad \phi_{X_n}(z)\rightarrow \phi_X(z)\quad \forall z\in\mathbb{R}^d.

The direction \Rightarrow is easy: x\mapsto e^{i\langle z,x\rangle} is continuous and bounded.

In the other direction, we can in fact show a stronger, constructive result. Precisely, if there exists \psi:\mathbb{R}^d\rightarrow \mathbb{C}, continuous at 0 with \psi(0)=1 (*), such that \phi_{X_n}(z)\rightarrow \psi(z) for all z\in\mathbb{R}^d, then \psi=\phi_X, the characteristic function of some random variable X, and \mathcal{L}(X_n)\Rightarrow \mathcal{L}(X). Note that the conditions (*) are the minimal ones under which \psi could be a characteristic function.
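As a quick numerical illustration of the theorem in the setting of the CLT (the uniform summands and all sample sizes are arbitrary choices): the empirical characteristic function of normalised sums converges pointwise to e^{-z^2/2}, matching weak convergence to a standard Gaussian.

```python
import numpy as np

rng = np.random.default_rng(1)
z = np.linspace(-5, 5, 11)

def phi_hat(samples, z):
    # Monte Carlo estimate of the characteristic function E[exp(i z S)]
    return np.exp(1j * np.outer(z, samples)).mean(axis=1)

for n in (1, 4, 64):
    X = rng.uniform(size=(100_000, n))
    S = (X.sum(axis=1) - n / 2) / np.sqrt(n / 12)   # standardised sum
    err = np.abs(phi_hat(S, z) - np.exp(-z**2 / 2)).max()
    print(n, err)   # decreases towards Monte Carlo noise as n grows
```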

We now proceed with the proof. We apply a lemma that is basically a calculation that we don’t repeat here.

\mathbb{P}(||X_n||_\infty>K)\stackrel{\text{Lemma}}{<}C_dK^d\int_{[-\frac{1}{K},\frac{1}{K}]^d}(1-\Re \phi_{X_n}(u))du\stackrel{\text{DOM}}{\rightarrow}C_dK^d\int_{[-\frac{1}{K},\frac{1}{K}]^d} (1-\Re \psi(u))du

where we apply that the integrand is dominated by 2. From the conditions on \psi, this is <\epsilon for large enough K. This bound is of course also uniform in n, and so the random variables are tight. Prohorov then gives a convergent subsequence, and so a limit random variable exists.
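(For reference, in one dimension the omitted lemma is the standard truncation inequality

\mathbb{P}(|X|>\tfrac{2}{u})\le \frac{1}{u}\int_{-u}^u (1-\Re\phi_X(t))\,dt,

which follows from a short Fubini calculation together with the bound 1-\frac{\sin v}{v}\ge \frac12 for |v|\ge 2; the d-dimensional version used above is similar.)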

Suppose the whole sequence does not converge weakly to \mathcal{L}(X). Then by Prohorov, there is a further subsequence which converges to some Y, say, and by the direction of Levy already proved, the characteristic functions converge along this subsequence, so \phi_Y=\psi=\phi_X. But characteristic functions determine laws, so \mathcal{L}(X)=\mathcal{L}(Y), which is a contradiction.