Non-separable Skorohod Representations

In the previous post, I discussed the statement and proof of the Skorohod representation theorem. This concerns the conditions under which it is possible to couple distributions which converge in law, to obtain a family of random variables on a possibly very large probability space, which converge almost surely. The condition for the theorem to hold is that the base space, or at least the support of the limiting distribution, should be a separable metric space. Skorohod’s original proof concerned the case where all the distributions were supported on a complete, separable metric space (Polish space), but this extension is not particularly involved, and was proven not long after the original result.

It is natural to ask exactly what goes wrong in non-separable or non-metrizable spaces. Recall a space is separable if it contains a countable dense subset. Obviously, finite or countable sets are by definition separable with any metric. Considering the points with rational coordinates shows that \mathbb{R}^d is separable for each d, and the Stone-Weierstrass theorem shows that the continuous functions on a bounded interval are also separable in the uniform topology, as they can be approximated uniformly well by polynomials with rational coefficients. One heuristic is that a separable space does not have ‘too many’ open sets.

There are references (for example, see [2]) to examples of Skorohod non-representation in non-metrizable topological spaces, which are ‘big’ enough to allow convergence in distribution with respect to a particular class of test functions, but where the distributions are not uniformly tight, so cannot converge almost surely. However, I don’t really understand this well at all, and have struggled to chase the references, some of which are unavailable, and some in French.

Instead, I want to talk about an example given in [1] of a family of distributions on a non-separable space, which cannot be coupled to converge almost surely. The space is (0,1) equipped with the discrete metric, which says that d(x,y)=1 whenever x\ne y. Note that it is very hard to have even deterministic convergence in this space, since the only way to be close to an element of the space is to be equal to that element. We will construct random variables, and it will be unsurprising that they cannot possibly converge almost surely in any coupling, but the exact nature of the construction will lead to convergence in distribution.

Based on what we proved last time, the support of the limiting distribution will be non-separable. It turns out that the existence of such a distribution is equiconsistent in the sense of formal logic with the existence of an extension of Lebesgue measure to the whole power set of (0,1). This is not allowed under the Axiom of Choice, but is consistent under the slightly weaker Axiom of Dependent Choice (DC). This weaker condition says, translated into language more familiar to me, that every directed graph with arbitrary (and in particular, potentially uncountable) vertex set, and with all out-degrees at least 1, contains an infinite directed path. This seems obvious when viewed through the typically countable context of graph theory. But the natural construction is to start somewhere and ‘just keep going’ wherever possible, which involves making a choice from the out-neighbourhood at lots of vertices, albeit only countably many of them in sequence. Thus it is clear why this is weaker than AC. Anyway, in the sequel, we assume that this extension of Lebesgue measure exists.

Example (from [1]): We take (X_n)_{n\ge 1} to be an IID sequence of non-negative RVs defined on the probability space ((0,1),\mathcal{B}(0,1),\mathrm{Leb}), with expectation under Lebesgue measure equal to 1. It is not obvious how to do this, with the restriction on the probability space. One example might be to write \omega\in(0,1) in its binary expansion \omega=0.\omega_1\omega_2\ldots, and then set X_n=2\omega_n. We will later require that X_n is not identically 1, which certainly holds in this example just given.
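
To make this concrete, here is a minimal numerical sketch of the construction (the code and names are mine, not from [1]): the n-th binary digit of a uniform \omega is a fair coin flip, digits with distinct indices are independent, and so X_n=2\omega_n has mean 1 under Lebesgue measure.

```python
import numpy as np

rng = np.random.default_rng(0)
omega = rng.random(10**6)          # samples from Leb on (0,1)

def X(n, w):
    # X_n(omega) = 2 * (n-th binary digit of omega)
    return 2 * (np.floor(w * 2**n) % 2)

print([X(n, omega).mean() for n in (1, 2, 3)])        # each approx 1
print(np.corrcoef(X(1, omega), X(2, omega))[0, 1])    # approx 0: digits uncorrelated
```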

Let \mu be the extension of Lebesgue measure to the power set \mathcal{P}=\mathcal{P}(0,1). Now define the measures:

\mu_n(B)=\mathbb{E}_\mu(X_n \mathbf{1}_B),\quad \forall B\in\mathcal{P}.

To clarify, we are defining a family of measures which are also defined for all elements of the power set. Since they all live on the same underlying space, defined through explicit densities, they come naturally coupled. This will make it possible to show convergence in distribution, but they will not converge almost surely in this coupling, or, in fact, under any coupling. Now consider a restricted class of sets, namely B\in \sigma(X_1,\ldots,X_k), the class of sets distinguishable by the outcomes of the first k RVs.

[Caution: the interpretation of this increasing filtration is a bit different to the standard setting with for example Markov processes, as the sets under consideration are actually subsets of the probability space on which everything is defined. In particular, there is no notion that a ‘fixed deterministic set’ lies in all the layers of the filtration.]

Anyway, when n>k, by independence we have

\mu_n(B)=\mathbb{E}_\mu(X_n\mathbf{1}_B)=\mathbb{E}_\mu(X_n)\mathbb{E}_\mu(\mathbf{1}_B)=\mu(B).

So whenever B\in\bigcup_k \sigma(X_1,\ldots,X_k), we have \lim_n \mu_n(B)=\mu(B). Now set \mathcal{F}:=\sigma(X_1,X_2,\ldots). By approximating a bounded \mathcal F-measurable function f with the martingale \mathbb{E}_\mu[f\,|\,\sigma(X_1,\ldots,X_k)] and using martingale convergence (together with the boundedness of the X_n in our example), we can extend this to \mathbb{E}_{\mu_n}[f]\rightarrow\mathbb{E}_\mu[f] for any bounded \mathcal F-measurable function f.
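
For the concrete X_n above, the identity \mu_n(B)=\mu(B) for n>k can be checked by Monte Carlo on Borel sets, where no exotic extension of Lebesgue measure is needed. A sketch, with the set-up of the previous snippet:

```python
import numpy as np

rng = np.random.default_rng(0)
omega = rng.random(10**6)
X = lambda n, w: 2 * (np.floor(w * 2**n) % 2)   # X_n = 2 * (n-th binary digit)

B = X(1, omega) == 2      # B = {X_1 = 2} lies in sigma(X_1), so k = 1 here
for n in [1, 2, 5, 10]:
    print(n, (X(n, omega) * B).mean())          # estimates mu_n(B)
# mu(B) = 1/2: the estimate is approx 1 for n = 1, but approx 1/2 for all n > 1
```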

This is the clever bit. We want to show that \mu_n(B)\rightarrow\mu(B) for all B\in\mathcal P, but we only have it so far for B\in\mathcal F. But since \mathcal{F}\subset \mathcal P, and \mathcal P is the \sigma-field on which the extended measure \mu is defined (under the non-AC assumption), we can take conditional expectations with respect to \mathcal F. In particular, for any B\in\mathcal P, \mathbb{E}_\mu[\mathbf{1}_B | \mathcal{F}] is a bounded, \mathcal F-measurable function. Hence, by definition of \mu_n and the extended convergence result:

\mu_n(B)=\mathbb{E}_\mu[X_n\mathbb{E}_\mu[\mathbf{1}_B|\mathcal F]]=\mathbb{E}_{\mu_n}[\mathbb{E}_\mu[\mathbf{1}_B|\mathcal F]] \rightarrow \mathbb{E}_\mu [\mathbb{E}_\mu[\mathbf{1}_B |\mathcal{F}]].

Now, since by definition \mathbf{1}_B is \mathcal{P}-measurable, applying the tower law gives that this is equal to \mu(B). So we have

\mu_n(B)\rightarrow \mu(B),\quad \forall B\in\mathcal{P}. (*)

This gives weak convergence \mu_n\Rightarrow \mu. At first glance it might look like we have proved a much stronger condition than we need. But recall that in a space equipped with the discrete topology, every subset is both open and closed, and so to use the portmanteau lemma, (*) really is required.

Now we have to check that we can’t have almost sure convergence in any coupling of these measures. Suppose that we have a probability space with random variables Y,(Y_n) satisfying \mathcal L(Y)=\mu, \mathcal L(Y_n)=\mu_n. In the discrete metric, almost sure convergence means precisely that, almost surely, Y_n=Y for all n beyond some (random) threshold. Citing the example I gave of X_n satisfying the conditions: the only values taken by X_n are 0 and 2, and since \mu_n(\{X_n=0\})=\mathbb{E}_\mu[X_n\mathbf{1}_{\{X_n=0\}}]=0, we have X_n(Y_n)=2 almost surely, irrespective of the coupling. On the other hand, the events \{X_n(Y)=0\} are independent, each with probability \frac12, so by Borel-Cantelli

\mathbb{P}(X_n(Y)=0\text{ infinitely often})=1.

If Y_n=Y eventually, then X_n(Y_n)=X_n(Y) eventually, and these two statements are incompatible. So it is impossible that Y_n converges almost surely to Y.

References

[1] Berti, Pratelli, Rigo – Skorohod Representation and Disintegrability (possibly not open access)

[2] Jakubowski – The almost sure Skorokhod representation for subsequences in non-metric spaces.

Skorohod Representation Theorem

Continuing the theme of revising theory in the convergence of random processes that I shouldn’t have forgotten so rapidly, today we consider the Skorohod Representation Theorem. Recall from the standard discussion of the different modes of convergence of random variables that almost sure convergence is among the strongest since it implies convergence in probability and thus convergence in distribution. (But not convergence in L_1. For example, take U uniform on [0,1], and X_n=n\mathbf{1}_{\{U<\frac{1}{n}\}}.)
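
That example is quick to check numerically (a sketch): for each fixed U>0 we have X_n=0 once n>1/U, so X_n\rightarrow 0 almost surely, yet \mathbb{E}X_n=1 for every n.

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.random(10**6)
for n in [10, 100, 1000]:
    X_n = n * (U < 1 / n)
    # fraction of non-zero values shrinks like 1/n, but the mean stays at 1
    print(n, (X_n != 0).mean(), X_n.mean())
```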

Almost sure convergence is therefore in some sense the most useful form of convergence to have. However, it comes with a strong prerequisite, that the random variables be defined on the same probability space, which is not required for convergence in distribution. Indeed, one can set up weak versions of convergence in distribution which do not even require the convergents to be random variables. The Skorohod representation theorem gives a partial converse to this result. It states some conditions under which random variables which converge in distribution can be coupled on some larger probability space to obtain almost sure convergence.

Skorohod’s original proof dealt with convergence of distributions defined on complete, separable metric spaces (Polish spaces). The version discussed here is from Chapter 5 of Billingsley [1], and assumes the limiting distribution has separable support. More recent authors have considered stronger convergence conditions (convergence in total variation or Wasserstein distance, for example) with weaker topological requirements, and convergence of random variables defined in non-metrizable spaces.

Theorem (Skorohod representation theorem): Suppose that distributions P_n\Rightarrow P, where P is a distribution with separable support. Then we can define a probability space (\Omega,\mathcal{F},\mathbb{P}) and random variables X,(X_n)_{n\ge 1} on this space such that the laws of X,X_n are P,P_n respectively and X_n(\omega)\rightarrow X(\omega) for all \omega\in\Omega.

NB. We are proving ‘sure convergence’ rather than merely almost sure convergence! It is not surprising that this is possible, since changing the value of all the X_ns on a set with measure zero doesn’t affect the conditions for convergence in distribution.

Applications: Before going through the Billingsley proof, we consider one simple application of this result. Let S be a separable metric space containing the support of X, and g a continuous function S\rightarrow S'. Then

X_n\stackrel{a.s.}{\rightarrow}X\quad\Rightarrow\quad g(X_n)\stackrel{a.s.}{\rightarrow}g(X).

So, by applying the Skorohod representation theorem once, and the result that almost sure convergence implies convergence in distribution, we have shown that

X_n\stackrel{d}{\rightarrow}X\quad\Rightarrow \quad g(X_n)\stackrel{d}{\rightarrow}g(X),

subject to these conditions on the space supporting X. This is exactly the statement of the continuous mapping theorem.
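
On \mathbb{R}, there is a completely explicit version of the coupling: take a single \xi\sim U[0,1] and set X_n=F_n^{-1}(\xi), X=F^{-1}(\xi), where F_n^{-1},F^{-1} are the quantile functions. Weak convergence implies F_n^{-1}(\xi)\rightarrow F^{-1}(\xi) for all but countably many \xi. A sketch of this special case, with \text{Bin}(n,\lambda/n)\Rightarrow\text{Po}(\lambda) as my choice of example:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
xi = rng.random(10**5)                     # one uniform drives every X_n

lam = 2.0
X = stats.poisson(lam).ppf(xi)             # X = F^{-1}(xi)
for n in [10, 100, 10_000]:
    X_n = stats.binom(n, lam / n).ppf(xi)  # X_n = F_n^{-1}(xi)
    print(n, np.mean(X_n != X))            # fraction of xi with X_n != X -> 0
```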

Proof (from [1]): Unsurprisingly, the idea is to construct realisations of the (X_n) from a realisation of X. We take X, and a partition of the support of X into small measurable sets, chosen so that the probability of lying in a particular set is almost the same for X_n as for X, for large n. Then, the X_n are constructed so that for large n, with limitingly high probability X_n lies in the same small set as X.

Constructing the partition is the first step. For each x\in S:=\mathrm{supp}(X), there must be some radius \frac{\epsilon}{4}<r_x<\frac{\epsilon}{2} such that P(\partial B(x,r_x))=0: the boundaries \partial B(x,r) are disjoint as r varies, so at most countably many of them can carry positive mass. This is where we use separability. Since every point in the space is within \frac{\epsilon}{4} of some element of a countable dense sequence, we can take a countable subcollection of these open balls B(x,r_x) which covers the space. Furthermore, we can take a finite subset of the balls which cover all of the space apart from a set of measure at most \epsilon. We want the sets to be disjoint, and we can achieve this by removing the intersections inductively in the obvious way. We end up with a collection B_0,B_1,\ldots,B_k, where B_0 is the leftover space, such that

  • P(B_0)<\epsilon
  • P(\partial B_i)=0,\quad i=0,1,\ldots,k
  • \mathrm{diam}(B_i)<\epsilon,\quad i=1,\ldots,k.

Now suppose for each m, we take such a partition B^m_0,B^m_1,\ldots,B^m_{k_m}, for which \epsilon_m=\frac{1}{2^m}. Unsurprisingly, this scaling of \epsilon is chosen so as to use Borel-Cantelli at the end. Then, from convergence in distribution, there exists an integer N_m such that for n\ge N_m, we have

P_n(B^m_i)\ge (1-\epsilon_m)P(B^m_i),\quad i=0,1,\ldots,k_m. (*)

Now, for N_m\le n <N_{m+1}, for each B^m_i with non-zero probability under P, take Y_{n,i} to be independent random variables with law P_n(\cdot | B^m_i), the conditional law on the set. Take \xi\sim U[0,1] independent of everything so far. Now we make concrete the heuristic for constructing X_n from X. We define:

X_n=\sum_{i=0}^{k_m}\mathbf{1}_{\{\xi\le 1-\epsilon_m, X\in B^m_i\}} Y_{n,i} + \mathbf{1}_{\{\xi>1-\epsilon_m\}}Z_n.

We haven’t defined Z_n yet. But, from (*), there is a unique distribution such that taking Z_n to be independent of everything so far, with this distribution, we have \mathcal{L}(X_n)=P_n. Note that by iteratively defining random variables which are independent of everything previously defined, our resulting probability space \Omega will be a large product space.

Note that \xi controls whether the X_n follow the law we have good control over, and we also want to avoid the set B^m_0. So define E_m:=\{X\not \in B^m_0, \xi\le 1-\epsilon_m\}. Then P(E_m^c)\leq 2\epsilon_m=2^{-(m-1)}, and so by Borel-Cantelli, with probability 1, E_m holds for all m larger than some threshold. Let us call this event E:=\liminf_m E_m, and on this event E, we have by construction X_n \rightarrow X. So we have almost sure convergence. But we can easily convert this to sure convergence by removing all \omega\in\Omega for which \xi(\omega)=1 and setting X_n\equiv X on E^c, as this does not affect the distributions.

Omissions: 

  • Obviously, I have omitted the exact construction of the distribution of Z_n. This can be reverse-engineered very easily, but requires more notation than is ideal for this medium.
  • It is necessary to remove any sets B^m_i with zero measure under P for the conditioning to make sense. These can be added to B^m_0 without changing any of the required conditions.
  • We haven’t dealt with any X_n for n<N_1.

The natural question to ask is what happens if we remove the restriction that the space be separable. There are indeed counterexamples to the existence of a Skorohod representation. The clearest example I’ve found so far is supported on (0,1) with a metric inducing the discrete topology. If time allows, I will explain this construction in a post shortly.

References

[1] – Billingsley – Convergence of Probability Measures, 2nd edition (1999)

Recent Research Activity

I’ve spent this week in Luminy, near Marseille, attending a summer school run by ALEA, the organisation of French probabilists. We’ve been staying in CIRM, a dedicated maths research conference centre at the edges of the calanques, the area of mountains and jagged coastal inlets between Marseille and Cassis. The walking possibilities have been excellent, as have the courses and lectures, on a range of topics in probability theory.

Anyway, the time here has been an excellent moment to reflect on my research progress, and try to come up with the sort of fresh ideas that are perhaps slightly inhibited by sitting at a desk with an endless supply of paper on which to try calculations. When I get back, I have to submit a first-year report, so at least for a little while I will have to suppress the desire to make further progress and instead diligently assemble the progress I have made.

The Model

I’ve defined some of these processes in past posts, but I see no harm in doing so again. We take the standard Erdos-Renyi random graph process, where edges are added one-at-a-time uniformly at random between n vertices, and amend it by adding a deletion mechanism. The aim is to arrive at a process which looks in equilibrium more like the critical random graph than either the subcritical or supercritical regimes, where the components are very small, and dominated by one giant component respectively. Rath, Toth and others have studied the process where each vertex is hit by lightning at uniform rate. When this happens, we delete all the edges in the component containing that vertex. Naturally, big components will be hit by lightning more often than small components, and so this acts as a mechanism to prevent the formation of giant components, if scaled correctly.

We take a different approach. We observe that criticality in the original random graph process is marked by the first appearance of a giant component, but also by the first appearance of a) lots of cycles, and b) large cycles. In particular, it is very unlikely that a giant component could form without containing any cycles. We will therefore use the appearance of a cycle to trigger some form of deletion mechanism.

Our final goal is to treat the so-called ‘Cycle Deletion’ model. Here, whenever a cycle appears, we delete all the edges in that cycle immediately. There are several challenges in treating this model, because the rate at which cycles emerge in a tree is a function of the tree structure. The trees in this model will not be Uniform Spanning Trees (though it is very possible that they will be ‘almost USTs’ in some sense – we need to investigate this further) so it will be hard to make nice statements about the rates. For the standard random graph process, if we are only interested in the sizes of the components, we are actually allowed to ignore the graph structure entirely. The component sizes evolve as a discrete, stochastic version of the multiplicative coalescent (sometimes called a Marcus-Lushnikov process). We would like a deletion mechanism that has a nice interpretation as a fragmentation operation in the same sense. The rate at which a component fragments will be quadratic in the size of the component, since there are O(k^2) possible edges between k vertices forming a component, and adding any of precisely these will create a cycle.

I’ve talked previously about how to overcome the problems with the tree structure in Cycle Deletion with the so-called Uniform Cycle Deleting model. In any case, as a starting point we might consider the Cycle-Induced Forest Fire model. Here, whenever a cycle appears, we delete all the edges, including the new one, in the whole component which contains the cycle.

We suspect this model may resemble the critical random graph at all times. The main characteristic of G(n,1/n) is that the largest component is of size O(n^{2/3}), and indeed there are arbitrarily many components of this size, with high probability in the limit. Since CIFF is recurrent for any fixed n, meaning that it will visit any state infinitely often (rather than tending to infinity or similar), we should ask what the largest component is typically in the equilibrium distribution. Our aim is to prove that it is O(n^{2/3}). We might suspect that the typical size of the largest component will be greater in the Cycle Deletion model, since each fragmentation event is less severe there, removing fewer edges.
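
A quick simulation sketch of CIFF (my own implementation, with the endpoints of each new edge chosen uniformly and independently, so loops are allowed): an edge joining two distinct components merges them, while an edge within a component completes a cycle and burns that component down to isolated vertices.

```python
import random

def ciff(n, steps, seed=0):
    rng = random.Random(seed)
    comp = list(range(n))                  # comp[v] = id of v's component
    members = {v: [v] for v in range(n)}   # id -> list of its vertices
    fresh = n                              # source of unused component ids
    for _ in range(steps):
        u, v = rng.randrange(n), rng.randrange(n)
        cu, cv = comp[u], comp[v]
        if cu == cv:
            # new edge closes a cycle: delete every edge of the component
            for w in members.pop(cu):
                comp[w] = fresh
                members[fresh] = [w]
                fresh += 1
        else:
            # joining two trees by an edge keeps the graph a forest: merge
            if len(members[cu]) < len(members[cv]):
                cu, cv = cv, cu
            small = members.pop(cv)
            for w in small:
                comp[w] = cu
            members[cu].extend(small)
    return sorted((len(m) for m in members.values()), reverse=True)[:5]

n = 5000
# after a long run, compare the largest components with n^{2/3}
print(ciff(n, steps=20 * n), round(n ** (2 / 3)))
```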

An Upper Bound

The nice thing about Markov chains is that they have an ergodic property, which means that if you run them for long enough, the proportion of time spent in any state is given by the stationary probability of being in that state. It doesn’t matter whether or not you start in equilibrium, since it will converge anyway. Thus it is meaningful to talk about properties like the average number of isolated vertices as a time-average as well as an average with respect to some distribution.

This quantity is the key to an upper bound. We can equally talk about the average change in the number of isolated vertices in a time-step. This will increase when a component fragments, and will decrease when an isolated vertex coalesces with another component. In particular, the largest possible decrease in the number of isolated vertices in a single time-step is 2, corresponding to an edge appearing between two isolated vertices.

Suppose that with probability \Theta(1) there is a component of size n^\alpha for some \alpha>2/3. Then such a component makes a contribution to the expected change in the number of isolated vertices of

\Theta(1) n^\alpha \left(\frac{n^\alpha}{n}\right)^2. (*)

Where does this come from? Well, we are tracking the contributions from the event that the largest component is of this size and that it fragments, giving n^\alpha new isolated vertices. So the \Theta(1) accounts for the probability that there is such a component to begin with. Then, conditional on that, the probability that it gets fragmented in the next time-step is the probability that both ends of the next edge added lie in that component. Since the edge is chosen uniformly at random, each end lies in the component with probability n^\alpha/n, giving the factor \left(\frac{n^\alpha}{n}\right)^2. Note that this is under a slightly odd definition of an edge, that allows loops. Basically, I don’t want to have lots of correction terms involving \binom{n}{2} floating around. However, it would make no difference to the orders of magnitude if we were to do it with these.

So, this is only one contribution to the typical rate of gain of isolated vertices. Now note that if \alpha>2/3, then this expression is >> 1. This is bad, since the negative contributions to this expected flux in the number of isolated vertices are O(1). So this suggests that over time, the number of isolated vertices will keep growing. This is obviously ridiculous, since a) we are in equilibrium, so the expected flux should be 0, and b) the number of isolated vertices cannot exceed n, for clear reasons.

This gives us an upper bound of n^{2/3} as the typical scale of the largest component. We can come up with a similar argument for the cycle deleting model. The most helpful thing to track there is the number of edges in the graph. Note that since the graph is at all times a forest on n vertices, the number of edges is equal to n minus the number of (tree) components. We use the fact that the typical fragmentation of a component of size k creates O(\sqrt{k}) new components. It is possible to argue via isolated vertices here too, but the estimates are harder, or at least less present in the literature.

Lower Bounds?

The problem with lower bounds is that it is entirely possible that the flux in the number of isolated vertices is not driven by typical behaviour. Suppose for example we had a different rule. We begin a random graph process, and the first time we see a cycle in a component with size larger than n^2/3, we delete all the edges in the whole graph. Then we will see a sequence of random graph processes starting with the empty graph and stopped at some point close to criticality (in fact, with high probability in the *critical window*), and these will all be glued together. So then, most of the time the process will look subcritical, but the gains in isolated vertices will occur only during the critical periods, which are only an asymptotically small proportion of the time.

At the moment, my approach to the lower bound is instead to prove that the upper bound is tight. I mean this in the following sense. Suppose we wanted to be sure that (*) was in fact equal to the average rate of gain of isolated vertices. We would have to check the following:

  • That the total contributions from all other components were similar or smaller than from the component(s) of size roughly n^{\alpha}.
  • That there were only a few components of size n^{\alpha}. In particular, the estimate would be wrong if there were n^\epsilon such components for any \epsilon>0.
  • That it cannot be the case that for example, some small proportion of the time there is a component of size roughly n^{\alpha+\epsilon}, and over a large enough time these make a greater contribution to the average gain in isolated vertices.

A nice way to re-interpret this is to consider some special vertex and track the size of its component in time. It will be involved in repeated fragmentations over the course of time, so it is meaningful to talk about the distribution of the size of the component containing the vertex when it is fragmented. Our aim is to show that this distribution is concentrated on the scaling O(n^\alpha).

So this has turned out to be fairly hard. Rather than try to explain some of the ideas I’ve employed in attempting to overcome this, I will finish by giving one reason why it is hard.

We have seen that the component sizes in random graphs evolve as the multiplicative coalescent, but at a fixed moment in time, we can derive good estimates from an analogy with branching processes. We might like to do that here. If we know what the system looks like most of the time, we might try to ‘grow’ a multiplicative coalescent, viewing it like a branching process, with distribution given by the typical distribution. The problem is that when I do this, I find that the expectation of the offspring distribution is \Theta(1). This looks fine, since 1 is the threshold for extinction with probability 1. However, throughout the analysis, I have only been paying attention to the exponent of n in all the time and size estimates. For example, I view n^\alpha and n^\alpha \log n as the same. This is a problem, as when I say the expectation is \Theta(1), I am really saying it is \sim n^0. This means it could be \frac{1}{\log n} or \log n. Of course, there is a massive difference between these, since the growth or extinction of a branching process is governed by precisely this expectation!

So, this approach appears doomed in its current form. I have some other ideas, but a bit more background may be required before going into those. I’m going to be rather busy with teaching on my return to the office, so unfortunately it is possible that there may be many posts about second year probability and third year applied probability before anything more about CIFF.

From G(n,p) to G(n,m)

In the previous posts about random graphs, I was focusing on the model G(n,p). Here, we have n vertices, and we insert an edge between any pair of vertices independently with probability p. In particular, the number of edges which appear in a realisation of G(n,p) is a random variable, distributed as \text{Bin}(\frac{n(n-1)}{2},p).

The original model examined by Erdos and Renyi, after whom the random graph described above was named, was slightly different. Still with n vertices, they specified how many edges m they wanted in the graph, and chose uniformly at random from the set of graphs with this number of edges. This model is usually denoted G(n,m). Normally we can tell them apart by context. Obviously, p is a probability so lies in [0,1], whereas m is a positive integer, so there isn’t much room for ambiguity.

The key observation is that, if H is some graph with n vertices and m edges, then the probability that H appears as the realisation of G(n,p) is

p^{E(H)}(1-p)^{\binom{n}{2}-E(H)}.

This is constant if we vary H while fixing m. In other words, G(n,p) conditioned to have m edges is G(n,m). So, via some sort of law of total probability, we can construct G(n,p) by taking m to be distributed as \text{Bin}(\frac{n(n-1)}{2},p), and conditional on that, sampling from G(n,m). (*)

We can couple G(n,p) for all p, by assigning iid uniform [0,1] random variables to each pair of vertices, then including the edge in G(n,p) only if the value of the RV is greater than 1-p. Similarly, it is often helpful to think of G(n,m) as m varies as a random process, where edges are added one at a time, and at each stage the next edge is chosen uniformly at random from those not currently present. Perhaps because of this, it is sometimes easier to prove results for G(n,p) than for G(n,m) so we want to develop a framework to move between the two.
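
Both constructions can be realised from a single array of i.i.d. uniform edge labels, which is a convenient way to see the couplings just described (a sketch):

```python
import numpy as np
from itertools import combinations

n = 100
rng = np.random.default_rng(0)
pairs = list(combinations(range(n), 2))   # the n(n-1)/2 potential edges
labels = rng.random(len(pairs))           # one iid U[0,1] label per pair

def gnp(p):
    # coupled G(n,p) for every p at once: keep an edge iff its label is < p
    return [e for e, u in zip(pairs, labels) if u < p]

# the G(n,m) process: edges arrive in increasing order of label, so the
# first m entries of this list form a uniform sample of G(n,m)
gnm_order = [pairs[i] for i in np.argsort(labels)]

print(len(gnp(0.05)))    # approx p * n(n-1)/2 = 247.5
print(gnm_order[:3])     # the first three edges added
```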

The decomposition (*) gives a relatively straightforward way to move from a result in G(n,m) to a result in G(n,p). By the Central Limit Theorem, the number of edges in G(n,p) is \binom{n}{2}p+O(n) with high probability, and so if a result holds with high probability in G(n,m) for all m in some interval, say [\binom{n}{2}p-Kn,\binom{n}{2}p+Kn] for some large K, then the law of total probability shows that the property holds with high probability in G(n,p).

In general, we get more interesting properties when p is a function of n. As discussed in previous posts, the scaling p=\frac{\lambda}{n} is particularly worth studying. CLT now shows that G(n,\frac{\lambda}{n}) has \frac{\lambda n}{2}+O(\sqrt{n}) edges in the limit. If you are confused why you can’t just substitute this value for p into the previous expression, note that p(1-p) does appear in the general asymptotic variance, but this gets absorbed into the “big O” notation when p is constant.

More importantly, many properties that we might want to consider are not in general affected in the limit by adding or removing O(\sqrt{n}) edges. For example, with high probability, G(n,m) has largest component of size (\zeta_\lambda+o(1)) n whenever \lambda>1 and m\in [\frac{\lambda n}{2}-O(\sqrt{n}),\frac{\lambda n}{2}+O(\sqrt{n})]. Some of this notation would need to be made a bit more precise in a formal argument, but for now, let’s take that as given. This then implies that with high probability, G(n,\frac{\lambda}{n}) has largest component of size (\zeta_\lambda+o(1))n also.

Of course, from the logical structure of this blog, this deduction is a bit bogus, because we have only just introduced G(n,m), and have no idea about the properties of its giant components yet. We seek instead an argument to deduce facts about G(n,m) from facts about G(n,p). Because G(n,m) cannot obviously be written as some conditioned combination of G(n,p)s, this instinctively seems harder. Bollobas gives various general conditions to carry results over between the two regimes in his Part III course notes, but I feel that an example would be the easiest way to explain the ideas.

The size of the largest component is such an important quantity, we might as well consider that, in the subcritical case. We work with G(n,\frac{\lambda}{n}), for which we have the result:

\mathbb{P}_{\lambda+o(1)}(C_1\geq a\log n)=O(n^{-\delta}),

for some \delta>0, whenever a>I_\lambda^{-1}, the rate function at 1 of the total population size of a Poisson \lambda branching process. For now, that doesn’t matter too much, except that it is continuous as a function of \lambda. We want to show that G(n,\frac{\lambda n}{2}) has the same property.

The trick is to consider G(n,\frac{\lambda+\epsilon}{n}) instead. Let E_a be the event described above. By the law of total probability and the decomposition mentioned above, we have:

\mathbb{P}_{\lambda+\epsilon}(E_a)=\sum_{m=0}^{\binom{n}{2}}\mathbb{P}(\text{Bin}(\frac{n(n-1)}{2},\frac{\lambda+\epsilon}{n})=m)\mathbb{P}_{n,m}(E_a).

We are going to split this sum into

\sum_{m=0}^{\frac{(\lambda+\epsilon )n}{2}-n^{3/4}}+\sum_{m=\frac{(\lambda+\epsilon)n}{2}-n^{3/4}}^{\binom{n}{2}}.

The first of these sums is non-negative, so for a lower bound we may simply discard it; on the second, we use that \mathbb{P}_{n,m}(E_a) is an increasing function of m. This property is special to the event we are considering – in general one might have to be a bit more clever, perhaps using continuity of \mathbb{P}_{n,m}(E), interpreting continuity in the limit with n. Anyway, this enables us to bound:

\mathbb{P}_{\lambda+\epsilon}(E_a)\geq \mathbb{P}_{\frac{(\lambda+\epsilon)n}{2}-n^{3/4}}(E_a)\,\mathbb{P}\left(\text{Bin}(\tfrac{n(n-1)}{2},\tfrac{\lambda+\epsilon}{n})\geq\tfrac{(\lambda+\epsilon)n}{2}-n^{3/4}\right).

By the Central Limit Theorem, this final probability tends to 1, and in particular is at least \frac12 for large n. We therefore have, for n large enough:

\mathbb{P}_{\lambda+\epsilon}(E_a)\geq \tfrac12\,\mathbb{P}_{\frac{(\lambda+\epsilon)n}{2}-n^{3/4}}(E_a).

We demanded that a>I_\lambda^{-1}, and mentioned that this function was continuous, so since we have total freedom over \epsilon, in particular, we can choose \epsilon>0 such that a>I_{\lambda+\epsilon}^{-1}. By the work on G(n,p), we have \mathbb{P}_{\lambda+\epsilon}(E_a)=O(n^{-\delta}), and for large enough n, we have \frac{(\lambda+\epsilon)n}{2}-n^{3/4}>\frac{\lambda n}{2}, and so, using monotonicity in m once more, the result \mathbb{P}_{\frac{\lambda n}{2}}(E_a)=O(n^{-\delta}) follows.

Analytic vs Probabilistic Arguments for a Supercritical BP

This follows on directly from the previous post. I was originally going to talk only about what follows, but I got rather carried away with the branching process account. I was stuck on a particular exercise, and we ended up coming up with two arguments: one analytic and one probabilistic. Since the typical flavour of this blog is to present problems which show the advantage of the probabilistic approach, it seems only fair to remark on this case, where the analytic method was less interesting, but much simpler.

Recall that we have a supercritical random graph G(n,\frac{\lambda}{n}), \lambda>1, and we are considering the rescaled exploration process \frac{1}{n}S_{nt}, which has asymptotic mean \mu_t=1-t-e^{-\lambda t}. We can calculate similarly an expression for the asymptotic variance

\frac{\text{Var}(S_{nt})}{n}\rightarrow v_t=e^{-\lambda t}(1-e^{-\lambda t}).

To use this to verify the result about the size of the giant component, we verify that \mu_t is negative at times t=\zeta_\lambda+x/\sqrt{n} just after the zero, and has small variance, which would confirm that the giant component has size bounded above by (\zeta_\lambda+o(1))n with high probability. A similar argument is required for the lower bound. The variance is a separate matter, but it is therefore necessary that \mu_t should be decreasing at t=\zeta_\lambda, that is \mu'_{\zeta_\lambda}=\lambda e^{-\lambda \zeta_\lambda}-1<0. This is what we try to prove in the remainder of this post. Recall that in the previous post we checked that \mu_t itself is equal to zero here.

Heuristic Explanation

\mu_t has been rescaled from the original definition of the exploration process in both size and time-scale so some care is needed to see why this should hold in the limit. Remember that all components apart from the giant component are of size O(log n). So immediately after exhausting the giant component, you are likely to be visiting components of size roughly log n. A time interval of dt for \mu corresponds to ndt for S, during which S will visit some components of size log n and some of O(1) and some in between. In particular, some fixed proportion of vertices are isolated, that is, in a component of size 1.

There is then a complicated size-biasing train of thought. A component of size log n is more likely to come up than an isolated vertex, but there are not as many of them. The log n components push the derivative \mu_t' towards zero, because S_t decreases by 1 over a time-interval of length log n, which gives a gradient of zero in the limit. However, the isolated vertices give a gradient of -1, because S_t decreases by 1 over a time interval of 1. Despite the fact that log n components are more likely to appear first, it still remains the case that after exhausting a component (in particular, at time t=\zeta_\lambda, after exhausting the giant component), with probability bounded away from zero you will choose an isolated vertex next. The component size only affects the time-scale if it is O(n), which none of the remaining components are, so the derivative \mu'_{\zeta_\lambda} consists of some complicated weighted mean of 0 and -1. In particular, it is negative.

Analytic solution

Obviously, that won’t do in practice. Suppressing lambdas for ease of notation, the key fact is: e^{-\lambda \zeta}=1-\zeta. We want to show that \lambda e^{-\lambda \zeta}<1. Substituting

\lambda=-\frac{\log(1-\zeta)}{\zeta},

means that it is required to show:

-\frac{1-\zeta}{\zeta}\log(1-\zeta)<1.

Differentiating the left hand side gives:

\frac{\log(1-\zeta)+\zeta}{\zeta^2}<0,

since of course \log(1-\zeta)=-\left(\zeta+\frac{\zeta^2}{2}+\frac{\zeta^3}{3}+\dots\right). So it suffices to check the result for small \zeta. But, again using a Taylor series:

-\frac{1-\zeta}{\zeta}\log(1-\zeta)=1-\frac12\zeta+O(\zeta^2)<1,

for small \zeta. This gives the required result.
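
One can also sanity-check the conclusion numerically (a sketch): for each \lambda>1, find \zeta_\lambda as the root in (0,1) of 1-\zeta-e^{-\lambda\zeta}=0 and evaluate \lambda e^{-\lambda\zeta_\lambda}.

```python
import numpy as np
from scipy.optimize import brentq

for lam in [1.1, 1.5, 2.0, 5.0]:
    # zeta solves 1 - z - exp(-lam * z) = 0 on (0,1); z = 0 is excluded
    zeta = brentq(lambda z: 1 - z - np.exp(-lam * z), 1e-9, 1 - 1e-12)
    print(lam, round(zeta, 4), round(lam * np.exp(-lam * zeta), 4))  # last value < 1
```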

Probabilistic Interpretation and Solution

First, we observe that \lambda e^{-\lambda\zeta}=\lambda(1-\zeta) is the expected number of vertices in the first generation of a \text{Po}(\lambda) branching process whose progeny become extinct. This motivates considering the canonical decomposition of a supercritical branching process Z into the skeleton process and the dual process. The skeleton Z^+ consists of all vertices which have infinitely many successors. It is relatively easy to show that this is a branching process with offspring distribution \text{Po}(\lambda\zeta) conditioned on being positive. The dual process Z^* is a G-W branching process with offspring distribution \text{Po}(\lambda) conditioned on dying out. This is the same as a branching process with offspring distribution \text{Po}(\lambda(1-\zeta)), by a thinning argument, which says that if we begin with a Poisson number of things, then remove each one independently with some fixed probability, the remaining number of things is Poisson also.

We can construct the original branching process by

  • With probability \zeta, take the skeleton, and affix independent copies of Z^* at every vertex in the skeleton.
  • With probability 1-\zeta, just take a copy of Z^*.

It is immediately clear that \lambda(1-\zeta)\leq 1. After all, the dual process is almost surely finite, so the offspring distribution cannot have expectation greater than 1. Checking that the inequality is strict is more fiddly. The best way I have come up with is to examine the tail of the distribution of the total population size of the original branching process.

The total population size T of a branching process has an exponential tail if the offspring distribution is subcritical. It isn’t hugely surprising that this behaves like a large deviation for iid RVs, since in the limit such an event requires a lot of the offspring counts to deviate substantially from the mean. The same holds in the supercritical case, with the additional complication that though the finite tail decays exponentially, there is positive probability that the total size will be infinite. In the critical case, however, there is a power-law decay. This is not hugely surprising, as it marks the threshold for the appearance of the infinite population, just as in a multiplicative coalescent at time 1, we have a load of very large components just about to form a giant component. The tool for all of these results is Dwass’s Theorem, which says:

\mathbb{P}(T=n)=\frac{1}{n}\mathbb{P}(X_1+\ldots+X_n=n-1),

where X_1 are iid with the offspring distribution. When \mathbb{E}X_1\neq 1, this is a large deviation event, for which Cramer’s theorem applies (assuming, as is the case for the Poisson distribution, that the offspring distribution has finite variance). When \mathbb{E}X_1=1, the Central Limit Theorem says that with high probability,

X_1+\ldots+X_n\in [n-n^{3/4},n+n^{3/4}],

so, skating over the details of whether everything is exactly uniform within this CLT scaling window,

\mathbb{P}(T=n)\geq \frac{1}{n}\cdot\frac{1}{2n^{3/4}}.

The true power-law decay, with exponent 3/2 when the offspring distribution has finite variance, is slower than this bound suggests, but the above argument works as a back-of-the-envelope bound.

In particular, if the dual process had mean exactly 1, then the total population size of the original branching process would be a mixture of an infinite part (the skeleton case) and, with probability 1-\zeta, the total size of a critical branching process, which has a power-law tail. This power-law would dominate the finite part of the tail, contradicting the fact that the original branching process is supercritical, and so has an exponential (finite) tail. Hence \lambda(1-\zeta)<1 strictly.

CLT and Stable Distributions

One of the questions I posed at the end of the previous post about the Central Limit Theorem was this: what is special about the normal distribution?

More precisely, for a large class of variables (those with finite variance) the limit in distribution of S_n after a natural rescaling is distributed as N(0,1). As a starting point for investigating similar results for a more general class of underlying distributions, it is worth considering what properties we might require of a distribution if it is to appear as a limit in distribution of sums of IID RVs, rescaled if necessary.

The property required is that the distribution is stable. In the rest of the post I am going to give an informal precis of the content of the relevant chapter of Feller.

Throughout, we assume a collection of IID RVs, X,X_1,X_2,\ldots, with partial sums S_n:=X_1+\ldots+X_n. Then we say X is stable in the broad sense if

S_n\stackrel{d}{=}c_nX+\gamma_n,

for some deterministic parameters c_n,\gamma_n for every n. If in fact \gamma_n=0 then we say X is stable in the strict sense. I’m not sure if this division into strict and broad is still widely drawn, but anyway. One interpretation might be that a collection of distributions is stable if they form a non-trivial subspace of the vector space of random variables and also form a subgroup under the operation of adding independent RVs. I’m not sure that this is hugely useful either though. One observation is that if \mathbb{E}X exists and is 0, then so are all the \gamma_ns.

The key result to be shown is that

c_n=n^{1/\alpha} for some 0<\alpha\leq 2.
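
The Cauchy distribution is the classical example with \alpha=1: a sum of n independent Cauchys, divided by n rather than \sqrt{n}, is again Cauchy. A quick numerical illustration (a sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 256, 10_000
S_over_n = rng.standard_cauchy((reps, n)).sum(axis=1) / n  # c_n = n^{1/alpha}, alpha = 1

qs = [0.1, 0.25, 0.5, 0.75, 0.9]
print(np.quantile(S_over_n, qs))                   # quantiles of S_n / n ...
print(np.quantile(rng.standard_cauchy(reps), qs))  # ... match those of a single X
```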

Relevant though the observation about means is, a more useful one is this. The stability property is retained if we replace the distribution of X with the distribution of X_1-X_2 (independent copies naturally!). The behaviour of c_n is also preserved. Now we can work with an underlying distribution that is symmetric about 0, rather than merely centred. The deduction that \gamma_n=0 still holds now, whether or not X has a mean.

Now we proceed with the proof. All equalities are taken to be in distribution unless otherwise specified. By splitting into two smaller sums, we deduce that

c_{m+n}X=S_{m+n}=c_mX_1+c_nX_2.

Extending this idea, we have

c_{kr}X=S_{kr}=S_k^{(1)}+\ldots+S_k^{(r)}=c_kX_1+\ldots+c_kX_r=c_kS_r=c_kc_rX.

Note that it is not even obvious yet that the c_ns are increasing. To get a bit more control, we proceed as follows. Set v=m+n, and express

X=\frac{c_m}{c_v}X_1+\frac{c_n}{c_v}X_2,

from which we can make the deduction

\mathbb{P}(X>t)\geq \mathbb{P}(X_1>0,X_2>t\frac{c_v}{c_n})=\frac12\mathbb{P}(X_2>t\frac{c_v}{c_n}).

So most importantly, by taking t>>0 in the above, and using that X is symmetric, we can obtain an upper bound

\mathbb{P}(X_2>t\frac{c_v}{c_n})\leq \delta<\frac12,

in fact for any \delta<\frac12 if we take t large enough. But since

\mathbb{P}(X_2>0)=\frac12(1-\mathbb{P}(X_2=0)),

(which should in most cases be \frac12), this implies that \frac{c_v}{c_n} cannot be very close to 0. In other words, \frac{c_n}{c_v} is bounded above. This is in fact enough regularity to deduce that c_n=n^{1/\alpha} from the multiplicative Cauchy-type functional equation c_{kr}=c_kc_r obtained above.

It remains to check that \alpha\leq 2. Note that this equality case \alpha=2 corresponds exactly to the \frac{1}{\sqrt{n}} scaling we saw for the normal distribution, in the context of the CLT. This motivates the proof. If \alpha>2, we will show that the variance of X is finite, so CLT applies. This gives some control over c_n in an n\rightarrow\infty limit, which is plenty to ensure a contradiction.

To show the variance is finite, we use the definition of stable to check that there is a value of t such that

\mathbb{P}(S_n>tc_n)<\frac14\,\forall n.

Now consider the event that the maximum of the X_is is >tc_n and that the sum of the rest is non-negative. This has, by independence, exactly half the probability of the event demanding just that the maximum be bounded below, and furthermore is contained within the event with probability <\frac14 shown above. So if we set

z(n)=n\mathbb{P}(X>tc_n)

we then have

\frac14>\mathbb{P}(S_n>tc_n)\geq\frac12\mathbb{P}(\max X_i>tc_n)=\frac12[1-(1-\frac{z}{n})^n]

and since (1-\frac{z(n)}{n})^n\approx e^{-z(n)}, this forces 1-e^{-z(n)}\leq \frac12+o(1)\text{ for large }n.

So, z(n)=n(1-F(tc_n)) is bounded as n varies. Rescaling suitably, this gives that

x^\alpha(1-F(x))<M\,\forall x,\,\text{for some }M<\infty.

This is exactly what we need to control the variance, as:

\mathbb{E}X^2=\int_0^\infty \mathbb{P}(X^2>t)dt=\int_0^\infty \mathbb{P}(X^2>u^2)2udu

=\int_0^\infty 4u\mathbb{P}(X>u)du\leq \int_0^\infty \left(2u\wedge\frac{4M}{u^{\alpha-1}}\right)du<\infty,

using that X is symmetric and that \alpha>2 for the final inequalities. But we know from CLT that if the variance is finite, we must have \alpha=2.

All that remains is to mention how stable distributions fit into the context of limits in distribution of RVs. This is little more than a definition.

We say F is in the domain of attraction of a broadly stable distribution R if

\exists a_n>0,b_n,\quad\text{s.t.}\quad \frac{S_n-b_n}{a_n}\stackrel{d}{\rightarrow}R.

The role of b_n is not hugely important, as a broadly stable distribution is in the domain of attraction of the corresponding strictly stable distribution.

The natural question to ask is: do the domains of attraction of stable distributions (for 0<\alpha\leq 2) partition the space of probability distributions, or is some extra condition required?

Next time I will talk about stable distributions in a more analytic context, and in particular how a discussion of their properties is motivated by the construction of Levy processes.

Branching Processes and Dwass’s Theorem

This is something I had to think about when writing my Part III essay, and it turns out to be relevant to some of the literature I’ve been reading this week. The main result is hugely helpful for reducing a potentially complicated combinatorial object to a finite sum of i.i.d. random variables, which in general we do know quite a lot about. I was very pleased with the proof I came up with while writing the essay, even if in the end it turned out to have appeared elsewhere before. (Citation at end)

Galton-Watson processes

A Galton-Watson process is a stochastic process describing a simple model for evolution of a population. At each stage of the evolution, a new generation is created as every member of the current generation produces some number of `offspring’ with identical and independent (both across all generations and within generations) distributions. Such processes were introduced by Galton and Watson to examine the evolution of surnames through history.

More precisely, we specify an offspring distribution, a probability distribution supported on \mathbb{N}_0. Then define a sequence of random variables (Z_n,n\in\mathbb{N}) by:

Z_{n+1}=Y_1^n+\ldots+Y_{Z_n}^n,

where (Y_k^n,k\geq 1,n\geq 0) is a family of i.i.d. random variables with the offspring distribution Y. We say Z_n is the size of the nth generation. From now on, assume Z_0=1 and then we call (Z_n,n\geq 0) a Galton-Watson process. We also define the total population size to be

X:=Z_0+Z_1+Z_2+\ldots,

noting that this might be infinite. We refer to the situation where X<\infty as extinction, and can show that extinction occurs almost surely when \mathbb{E}Y\leq 1, excepting the trivial case Y=\delta_1. When \mathbb{E}Y>1, extinction instead occurs with probability strictly less than 1, as you would expect. We say the process is critical if \mathbb{E}Y=1, and this is less obvious to visualise, but works equally well in the proof, which is usually driven using generating functions.
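
For example, with \text{Po}(\lambda) offspring, the extinction probability q is the smallest fixed point of the generating function, q=e^{\lambda(q-1)}, and this matches simulation. A sketch (the Poisson choice and all parameters are mine):

```python
import numpy as np

def extinction_prob(lam, iters=200):
    q = 0.0
    for _ in range(iters):          # iterate q <- G(q) = exp(lam * (q - 1))
        q = np.exp(lam * (q - 1))
    return q

def goes_extinct(lam, rng, max_gens=200, cap=10**6):
    z = 1
    for _ in range(max_gens):
        if z == 0:
            return True
        if z > cap:                 # population huge: treat as survival
            return False
        z = rng.poisson(lam, z).sum()
    return z == 0

rng = np.random.default_rng(0)
for lam in [0.8, 1.5, 2.0]:
    emp = np.mean([goes_extinct(lam, rng) for _ in range(2000)])
    print(lam, round(extinction_prob(lam), 3), round(emp, 3))
```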

Total Population Size and Dwass’s Theorem

Of particular interest is X, the total population size, and its distribution. The following result gives us a precise and useful result linking the probability of the population having size n and the distribution of the sum of n RVs with the relevant offspring distribution. Among the consequences are that we can conclude immediately, by CLT and Cramer’s Large Deviations Theorem, that the total population size distribution has power-law decay in the critical case, and exponential decay otherwise.

Theorem (Dwass (1)): For a general branching process with a single time-0 ancestor, offspring distribution Y, and total population size X:

\mathbb{P}(X=k)=\frac{1}{k}\mathbb{P}(Y^1+\ldots+ Y^k=k-1),\quad k\geq 1

where Y^1,\ldots,Y^k are independent copies of Y.

We now give a proof via a combinatorial argument. The approach is similar to that given in (2). Much of the literature gives a proof using generating functions.

Proof: For motivation, consider the following. It is natural to consider a branching process as a tree, with the time-0 ancestor as the root. Suppose the event \{X=k\} holds, which means that the tree has k vertices. Now consider the numbers of offspring of each vertex in the tree. Since every vertex except the root has exactly one parent, and there are no vertices outside the tree, we must have Y^1+\ldots+Y^k=k-1 where Y^1,\ldots,Y^k are the offspring numbers in some order. However, observe that this is not sufficient. For example, if Y^1 is the number of offspring of the root, and k\geq 2, then we must have Y^1\geq 1.
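
The theorem is also easy to test numerically: for \text{Po}(\lambda) offspring, Y^1+\ldots+Y^k\sim \text{Po}(k\lambda), so the right-hand side is exact (this is the Borel distribution) and can be compared with simulated total population sizes. A sketch, with parameters my own:

```python
import numpy as np
from scipy import stats

def total_population(lam, rng, cap=10**5):
    x = z = 1
    while z > 0 and x < cap:
        z = rng.poisson(lam, z).sum()   # next generation size
        x += z
    return x

lam, reps = 0.7, 10**5
rng = np.random.default_rng(0)
sizes = np.array([total_population(lam, rng) for _ in range(reps)])

for k in [1, 2, 5, 10]:
    dwass = stats.poisson(k * lam).pmf(k - 1) / k  # (1/k) P(Po(k*lam) = k-1)
    print(k, round(float((sizes == k).mean()), 5), round(float(dwass), 5))
```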

How to take a Passport Photograph – Part 2

Recall that by the end of the last post, we had given a complete strategy for the (slightly degenerate) stochastic control problem that modelled how to select a photo from a photobooth. The strategy was determined by an infinite sequence, defined inductively by:

a(1)=1,\quad a(n+1)=a(n)\left[1-\frac{a(n)}{2}\right].

At the very end of the post, I gave a selection of highly non-rigorous justifications for why

a(n)\approx \frac{2}{n} for large n.

After trying a bit more carefully, I think I can show the following:

a(n)=\Theta(\frac{2}{n}).

More precisely, eventually

a(n)\in [\frac{2}{n}-g(n),\frac{2}{n}+h(n)],

where g(n)=h(n)=2n^{-3/2}, so in particular g(n),h(n)=o(\frac{1}{n}).

To set this up, just to avoid having ‘2’s everywhere, we rescale the a(n)s by 1/2, so that the recursive definition is now:

a(n+1)=f(a(n)):=a(n)\left[1-a(n)\right].

I will show that for suitably large n, (though in practice this won’t be very large at all)

a(n)\in[\frac{1}{n}-n^{-3/2},\frac{1}{n}+n^{-3/2}]

\Rightarrow\quad a(n+1)\in[\frac{1}{n+1}-(n+1)^{-3/2},\frac{1}{n+1}+(n+1)^{-3/2}].

It will then suffice to show that the left hand side of this deduction holds for some n.

To show that

f(\frac{1}{n}-g(n))\geq \frac{1}{n+1}-g(n+1),

it suffices to verify that:

\frac{1}{n^2(n+1)}\leq g(n+1)-(1-\frac{2}{n})g(n)-g(n)^2.

When g(n)=n^{-3/2}, we have g(n+1)-(1-\frac{2}{n})g(n)=\frac{1}{(n+1)^{3/2}}-\frac{n-2}{n^{5/2}}, which we can bound below using the difference of two squares:

\frac{1}{(n+1)^{3/2}}-\frac{n-2}{n^{5/2}}=\left(\frac{1}{(n+1)^3}-\frac{(n-2)^2}{n^5}\right)\Big/\left(\frac{1}{(n+1)^{3/2}}+\frac{n-2}{n^{5/2}}\right)

\geq \frac{n^{3/2}}{2}\cdot\frac{n^5-(n-2)^2(n+1)^3}{(n+1)^3n^5}=\frac{n^{3/2}}{2}\cdot\frac{n^4+5n^3-n^2-8n-4}{(n+1)^3n^5}\stackrel{n\geq 4}{\geq}\frac{n^{3/2}}{2}\cdot\frac{n^4+4n^3}{(n+1)^3n^5}\geq \frac{1}{2n^{5/2}},

using in the first inequality that both terms in the denominator are at most n^{-3/2}, and in the last that (n+1)^3\leq n^3(1+\frac{4}{n}) for n\geq 4. So for n\geq 16,

g(n+1)-(1-\tfrac{2}{n})g(n)-g(n)^2\geq \frac{1}{2n^{5/2}}-\frac{1}{n^3}\geq \frac{1}{n^3}\geq \frac{1}{n^2(n+1)},

as required.

A similar calculation, which is relatively straightforward to perform and a bit of a hassle to write up into WordPress, gives the corresponding upper bound. It remains to check that

a(16)\in [\frac{1}{16}-\frac{1}{64},\frac{1}{16}+\frac{1}{64}],

and a direct computation (I used MATLAB) confirms this.
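
The computation is immediate to reproduce (a sketch in Python rather than MATLAB):

```python
# rescaled recursion: a(1) = 1/2, a(n+1) = a(n) * (1 - a(n))
a = 0.5
for n in range(1, 16):
    a = a * (1 - a)                               # after the loop, a = a(16)
lo, hi = 1 / 16 - 16**-1.5, 1 / 16 + 16**-1.5     # 16^{-3/2} = 1/64
print(a, lo <= a <= hi)                           # a(16) approx 0.0507: inside
```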