Continuing the theme of revising theory on the convergence of random processes that I shouldn’t have forgotten so rapidly, today we consider the Skorohod Representation Theorem. Recall from the standard discussion of the different modes of convergence of random variables that almost sure convergence is among the strongest, since it implies convergence in probability and thus convergence in distribution. (But not convergence in $L^1$. For example, take $U$ uniform on $[0,1]$, and $X_n = n\mathbf{1}\{U \le 1/n\}$: then $X_n \to 0$ almost surely, but $\mathbb{E}[X_n] = 1$ for every $n$.)
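As a quick sanity check on this gap between modes of convergence, here is a simulation of the standard counterexample $X_n = n\,\mathbf{1}\{U \le 1/n\}$ with $U$ uniform on $[0,1]$: each sample path is eventually zero, yet the mean stays pinned at $1$.

```python
import numpy as np

rng = np.random.default_rng(0)

# 100,000 independent copies of U ~ Uniform[0,1]; for each n below,
# X_n = n * 1{U <= 1/n} is evaluated on the same underlying draws of U.
U = rng.uniform(size=100_000)

for n in [10, 100, 1000]:
    X_n = n * (U <= 1 / n)
    # X_n -> 0 a.s. (any fixed U > 0 gives X_n = 0 once n > 1/U),
    # yet E[X_n] = n * P(U <= 1/n) = 1 for every n:
    print(n, round(X_n.mean(), 3), round((X_n == 0).mean(), 3))
```

The middle column (the empirical mean) hovers near $1$ for every $n$, while the right column (the fraction of paths already at zero) climbs towards $1$.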

Almost sure convergence is therefore in some sense the most useful form of convergence to have. However, it comes with a strong prerequisite: the random variables must be defined on a common probability space, which is not required for convergence in distribution. Indeed, one can set up weak versions of convergence in distribution which do not even require the converging objects to be random variables. The Skorohod representation theorem gives a partial converse to this implication. It states conditions under which random variables which converge in distribution can be coupled on some larger probability space to obtain almost sure convergence.

Skorohod’s original proof dealt with convergence of distributions defined on complete, separable metric spaces (Polish spaces). The version discussed here is from Chapter 5 of Billingsley [1], and assumes the limiting distribution has separable support. More recent authors have considered stronger convergence conditions (convergence in total variation or Wasserstein distance, for example) with weaker topological requirements, and convergence of random variables defined in non-metrizable spaces.

**Theorem (Skorohod representation theorem): **Suppose that distributions $P_n \Rightarrow P$, where $P$ is a distribution with separable support. Then we can define a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and random variables $X, X_1, X_2, \ldots$ on this space such that the laws of $X$ and $X_n$ are respectively $P$ and $P_n$ for all $n$, and $X_n(\omega) \to X(\omega)$ for every $\omega \in \Omega$.

*NB. We are proving ‘sure convergence’ rather than merely almost sure convergence! It is not surprising that this is possible, since changing the values of all the $X_n$ on a set of measure zero doesn’t affect the conditions for convergence in distribution.*

**Applications: **Before going through the Billingsley proof, we consider one simple application of this result. Let $S$ be a separable metric space containing the support of $X$, and $g$ a continuous function from $S$ into some metric space. Then, whenever $x_n \to x$ in $S$, continuity gives $g(x_n) \to g(x)$.

So, by applying the Skorohod representation theorem once, together with the result that almost sure convergence implies convergence in distribution, we have shown that

$$X_n \xrightarrow{d} X \quad \Longrightarrow \quad g(X_n) \xrightarrow{d} g(X),$$

subject to these conditions on the space supporting X. And we have avoided the need to be careful about exactly which class of functions determine convergence in distribution, as would be required for a direct argument.
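On the real line there is in fact a fully explicit Skorohod coupling worth keeping in mind (the classical quantile construction, not Billingsley’s general one): set $X_n = F_n^{-1}(U)$ for a single uniform $U$, where $F_n$ is the distribution function of $P_n$. A sketch, using the hypothetical family $P_n = \mathrm{Exp}(1 + 1/n) \Rightarrow P = \mathrm{Exp}(1)$ as a stand-in:

```python
import numpy as np

rng = np.random.default_rng(1)
U = rng.uniform(size=10_000)   # ONE uniform sample drives every X_n

def exp_quantile(u, rate):
    """Inverse CDF of Exponential(rate): F^{-1}(u) = -log(1 - u) / rate."""
    return -np.log1p(-u) / rate

# P_n = Exp(1 + 1/n) converges weakly to P = Exp(1); the quantile coupling
# X_n = F_n^{-1}(U), X = F^{-1}(U) upgrades this to pointwise convergence.
X = exp_quantile(U, 1.0)
errs = {}
for n in [1, 10, 100]:
    X_n = exp_quantile(U, 1 + 1 / n)
    errs[n] = np.abs(X_n - X).max()
print(errs)  # the sup-distance over the whole sample shrinks as n grows
```

Each $X_n$ here has exactly the law $P_n$, yet all of them converge on the single probability space carrying $U$, which is the content of the theorem in this one-dimensional special case.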

**Proof (from [1]):** Unsurprisingly, the idea is to construct realisations of the $X_n$ from a realisation of $X$. We take $X$, and a partition of the support of $X$ into small measurable sets, chosen so that the probability of lying in any particular set is almost the same for $X_n$ as for $X$, once $n$ is large. Then, the $X_n$ are constructed so that, with limitingly high probability, $X_n$ lies in the same small set as $X$.

Constructing the partition is the first step. Fix $\epsilon > 0$. For each point $x$, there must be some radius $r_x \in (\epsilon/2, \epsilon)$ such that $P(\partial B(x, r_x)) = 0$, since at most countably many radii can give a boundary sphere of positive mass. This is where we use separability. Since every point in the space is within $\epsilon/2$ of some element of a countable dense sequence, the corresponding open balls $B(x, r_x)$ around these elements cover the space, and we can take a countable subcollection. Furthermore, we can take a finite subset $B_1, \ldots, B_k$ of the balls which covers all of the space apart from a set of measure at most $\epsilon$. We want the sets to be disjoint, and we can achieve this by removing the intersections inductively in the obvious way: $E_i := B_i \setminus (B_1 \cup \cdots \cup B_{i-1})$. We end up with a collection $E_0, E_1, \ldots, E_k$, where $E_0$ is the leftover set, such that

- the $E_i$ are disjoint, and $E_0 \cup E_1 \cup \cdots \cup E_k$ covers the support of $P$;
- $\mathrm{diam}(E_i) < 2\epsilon$ and $P(\partial E_i) = 0$ for each $1 \le i \le k$;
- $P(E_0) \le \epsilon$.
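The inductive disjointification step can be sketched concretely. The following toy version (my own illustration, not from [1]) takes $P$ uniform on $[0,1]$, discretised to a fine grid, covers it by balls (intervals) of radius $\epsilon$ around finitely many centres, and peels each ball of everything already covered:

```python
import numpy as np

eps = 0.1
grid = np.linspace(0, 1, 10_001)        # fine grid standing in for supp(P)

centres = np.linspace(0.0, 1.0, 11)     # finitely many ball centres suffice
parts, taken = [], np.zeros(grid.size, dtype=bool)
for c in centres:
    ball = np.abs(grid - c) < eps       # open ball (interval) of radius eps
    E_i = ball & ~taken                 # E_i = B_i \ (B_1 u ... u B_{i-1})
    taken |= E_i
    if E_i.any():
        parts.append(E_i)

# Disjoint by construction; each piece has diameter < 2*eps; here nothing
# is left over, so the exceptional set E_0 has mass 0 <= eps.
diams = [grid[e].max() - grid[e].min() for e in parts]
print(len(parts), max(diams), taken.mean())
```

The peeling step is exactly the “removing the intersections inductively” of the proof; the only extra content of the real construction is choosing the radii so that the pieces have $P$-null boundaries.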

Now suppose for each $m$ we take such a partition $E_0^m, E_1^m, \ldots, E_{k_m}^m$, for which $\epsilon_m = 2^{-m}$. Unsurprisingly, this scaling of $\epsilon_m$ is chosen so as to use Borel-Cantelli at the end. Then, from convergence in distribution (applied to the sets $E_i^m$, whose boundaries are $P$-null), there exists an integer $N_m$ such that for $n \ge N_m$, we have

$$P_n(E_i^m) > (1 - \epsilon_m) P(E_i^m), \qquad 0 \le i \le k_m. \qquad (*)$$
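The zero-boundary condition on the $E_i^m$ is what makes this step available: by the portmanteau lemma, weak convergence guarantees $P_n(E) \to P(E)$ only for sets $E$ with $P(\partial E) = 0$, and the conclusion can fail otherwise. A standard one-line illustration of the failure:

```latex
P_n = \delta_{1/n} \Rightarrow P = \delta_0, \qquad E = (0, \infty):
\qquad P_n(E) = 1 \text{ for all } n, \quad P(E) = 0,
\qquad \text{since } P(\partial E) = P(\{0\}) = 1.
```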

Now, for $N_m \le n < N_{m+1}$, for each $E_i^m$ with non-zero probability under $P$, take $Y_{n,i}$ to be independent random variables with law equal to $P_n$ restricted (and renormalised) onto the set $E_i^m$. Now take $\xi \sim \mathrm{Uniform}[0,1]$, independent of everything so far. Now we make concrete the heuristic for constructing $X_n$ from $X$. We define:

$$X_n := \begin{cases} Y_{n,i} & \text{if } \xi \le 1 - \epsilon_m \text{ and } X \in E_i^m, \\ Z_n & \text{otherwise.} \end{cases}$$

We haven’t defined $Z_n$ yet. But, from (*), there is a unique distribution such that taking $Z_n$ to be independent of everything so far, with this distribution, we have $X_n \sim P_n$. (The point is that $(1-\epsilon_m)P(E_i^m) \le P_n(E_i^m)$, so no set has been assigned too much mass already.) Note that by iteratively defining random variables which are independent of everything previously defined, our resulting probability space will be a large product space.

Note that $\xi$ controls whether the $X_n$ follow the law we have good control over, and we also want to avoid the set $E_0^m$. So define $A_m := \{\xi > 1 - \epsilon_m\} \cup \{X \in E_0^m\}$. Then $\mathbb{P}(A_m) \le 2\epsilon_m$, and so by Borel-Cantelli, with probability 1, $A_m^c$ holds for all $m$ larger than some threshold. Let us call this event $E$; on $E$, eventually $X_n$ and $X$ lie by definition in the same set $E_i^m$, so $d(X_n, X) < 2\epsilon_m \to 0$. So we have almost sure convergence. But we can easily convert this to sure convergence by removing all $\omega$ for which $X_n(\omega) \not\to X(\omega)$ and setting $X_n := X$ on this null set, as this does not affect the distributions.
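For completeness, here is the summability behind the Borel-Cantelli step, writing $A_m$ for the ‘bad’ event at level $m$ (that $\xi > 1 - \epsilon_m$ or $X \in E_0^m$), with $\epsilon_m = 2^{-m}$:

```latex
\mathbb{P}(A_m) \le \mathbb{P}(\xi > 1 - \epsilon_m) + \mathbb{P}(X \in E_0^m)
\le \epsilon_m + \epsilon_m = 2^{-(m-1)},
\qquad
\sum_{m \ge 1} \mathbb{P}(A_m) \le \sum_{m \ge 1} 2^{-(m-1)} = 2 < \infty.
```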

**Omissions: **

- Obviously, I have omitted the exact construction of the distribution of $Z_n$. This can be reverse-engineered very easily, but requires more notation than is ideal for this medium.
- It is necessary to remove any sets $E_i^m$ with zero measure under $P$ for the conditioning to make sense. These can be added to $E_0^m$ without changing any of the required conditions.
- We haven’t dealt with any $X_n$ for $n < N_1$; but since convergence is unaffected by finitely many terms, these can simply be taken independent with laws $P_n$.

The natural question to ask is what happens if we remove the restriction that the space be separable. There are indeed counterexamples to the existence of a Skorohod representation. The clearest example I’ve found so far is supported on (0,1) with a metric inducing the discrete topology. If time allows, I will explain this construction in a post shortly.

**References**

[1] P. Billingsley, *Convergence of Probability Measures*, 2nd edition, Wiley (1999)
