Taking a course on Large Deviations has forced me to think a bit more carefully about what happens when you have large collections of IID random variables. I guess the first thing to think about is ‘What is a *Large Deviation*’? In particular, how large or deviant does it have to be?

Of primary interest is the tail of the distribution function of $S_n := X_1 + \cdots + X_n$, where the $X_i$ are independent and identically distributed as $X$. As we can always negate everything later if necessary, we typically consider the probability of events of the type:

$$\{S_n \geq f(n)\},$$

where $f(n)$ is some function which almost certainly increases fairly fast with $n$. More pertinently, if we are looking for some limit which corresponds to an actual random variable, we perhaps want to look at lots of related probabilities simultaneously. More concretely, we should fix $f$ and consider the probabilities

$$\mathbb{P}\left(S_n \geq t\, f(n)\right), \quad t \in \mathbb{R}. \qquad (*)$$

Throughout, we lose no generality by assuming that $\mathbb{E}X = 0$. Of course, it is possible that this expectation does not exist, but that is certainly a question for another post!

Now let’s consider the implications of our choice of $f(n)$. If this increases with $n$ too slowly, and the likely deviation of $S_n$ is greater than $f(n)$, then the event might not be a large deviation at all. In fact, the difference between this event and the event $\{S_n \geq 0\}$ ($S_n$ is above 0, that is, its mean) becomes negligible, and so the probability at (*) might be 1/2 or whatever, regardless of the value of $t$. So the object $\lim_n \frac{S_n}{f(n)}$, whatever that means, certainly cannot be a proper random variable, as if we were to have convergence in distribution, this would imply that the limit RV consisted of point mass at each of $+\infty$ and $-\infty$.
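For a concrete sanity check (a quick sketch of my own, not something from the course): take fair $\pm 1$ steps, whose natural fluctuation scale is $\sqrt{n}$, and the deliberately-too-slow choice $f(n) = n^{1/4}$ with, say, $t = 2$. The tail probability can be computed exactly from the binomial distribution, and indeed creeps up towards 1/2:

```python
import math

def tail(n, x):
    """Exact P(S_n >= x) for S_n a sum of n fair +/-1 steps.
    Writing S_n = 2B - n with B ~ Binomial(n, 1/2): S_n >= x iff B >= (n + x)/2."""
    k0 = math.ceil((n + x) / 2)
    return sum(math.comb(n, k) for k in range(k0, n + 1)) / 2**n

# f(n) = n**0.25 is dwarfed by the sqrt(n) fluctuations of S_n,
# so P(S_n >= t*f(n)) drifts towards 1/2 even with t = 2 fixed
p1 = tail(100, 2 * 100**0.25)
p2 = tail(400, 2 * 400**0.25)
p3 = tail(1600, 2 * 1600**0.25)
print(p1, p2, p3)  # increasing towards 1/2
```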

On the other hand, if $f(n)$ increases rapidly with $n$, then the probabilities at (*) might become very small indeed when $t > 0$. For example, we might expect:

$$\lim_{n \to \infty} \mathbb{P}\left(S_n \geq t\, f(n)\right) = 0 \quad \text{for all } t > 0,$$

and more information to be required when $t = 0$. This is what we mean by a *large deviation* event. Although we always have to define everything concretely in terms of some finite sum $S_n$, we are always thinking about the behaviour in the limit. A *large deviation principle* exists in an enormous range of cases to show that these probabilities in fact decay exponentially. Again, that is the subject for another post, or indeed the lecture course I’m attending.
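In fact, for fair $\pm 1$ steps everything is explicit enough to watch the exponential decay happen. Cramér's theorem (which I'm invoking here only as a worked example of my own, ahead of any proper treatment) gives $\mathbb{P}(S_n \geq an) \approx e^{-nI(a)}$ with rate function $I(a) = \frac{1+a}{2}\log(1+a) + \frac{1-a}{2}\log(1-a)$, and exact binomial computations agree:

```python
import math

def tail_prob(n, a):
    """Exact P(S_n >= a*n) for S_n a sum of n fair +/-1 steps.
    S_n = 2B - n with B ~ Binomial(n, 1/2), so S_n >= a*n iff B >= n*(1+a)/2."""
    k0 = math.ceil(n * (1 + a) / 2)
    return sum(math.comb(n, k) for k in range(k0, n + 1)) / 2**n

def rate(a):
    """Cramer rate function for fair +/-1 steps."""
    return (1 + a) / 2 * math.log(1 + a) + (1 - a) / 2 * math.log(1 - a)

# -log P(S_n >= a*n) / n should settle down to I(a); here a = 0.5, I(0.5) ~ 0.1308
a = 0.5
r50 = -math.log(tail_prob(50, a)) / 50
r200 = -math.log(tail_prob(200, a)) / 200
r800 = -math.log(tail_prob(800, a)) / 800
print(r50, r200, r800, rate(a))
```

The convergence is slow, at speed roughly $\log n / n$, because of the polynomial prefactor sitting in front of the exponential.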

Instead, I want to return to the Central Limit Theorem. I first encountered this result in popular science books in a vague “the histogram of boys’ heights looks like a bell” kind of way, then, once a normal random variable had been to some extent defined, it returned in A-level statistics courses in a slightly more fleshed-out form. As an undergraduate, you see it in several forms, including as a corollary following from Lévy’s continuity theorem.

In all applications though, it is generally used as a method of calculating good approximations. It is not uncommon to see it presented as:

$$\mathbb{P}\left(\frac{S_n}{\sigma\sqrt{n}} \leq t\right) \approx \Phi(t) \quad \text{for large } n, \text{ where } \sigma^2 = \mathrm{Var}(X).$$
Although in many cases that is the right way to think about and *use* it, it isn’t the most interesting aspect of the theorem itself. CLT says that the correct scaling of $f(n)$, so that the deviation probabilities lie between the two cases outlined above, is the same (that is, $\sqrt{n}$ in some sense) for an enormous class of distributions, and in particular, most distributions that one might encounter in practice (ie finite mean, finite variance). There is even greater universality, as furthermore the limit distribution at this interface has the same form (some appropriate normal distribution) whenever $X$ is in this class of distributions. I think that goes a long way to explaining why we should care about the theorem. It also immediately prompts several questions:
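This universality is easy to see empirically. Below is a quick simulation of my own (the two step distributions and the parameters are arbitrary choices): for quite different centred distributions, the proportion of rescaled sums $S_n/(\sigma\sqrt{n})$ falling below 1 is close to $\Phi(1) \approx 0.8413$ in both cases:

```python
import math
import random

random.seed(0)

def clt_fraction(sampler, sigma, n=100, reps=5000):
    """Fraction of rescaled sums S_n / (sigma * sqrt(n)) falling below 1,
    over `reps` independent sums of n IID centred steps drawn from `sampler`."""
    hits = 0
    for _ in range(reps):
        s = sum(sampler() for _ in range(n))
        if s / (sigma * math.sqrt(n)) <= 1:
            hits += 1
    return hits / reps

uniform = lambda: random.uniform(-1, 1)          # mean 0, variance 1/3
exp_centred = lambda: random.expovariate(1) - 1  # mean 0, variance 1

frac_u = clt_fraction(uniform, math.sqrt(1 / 3))
frac_e = clt_fraction(exp_centred, 1.0)
print(frac_u, frac_e)  # both close to Phi(1) ~ 0.8413
```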

- What happens for less regular distributions? It is now clearer what the right question to ask in this setting might be. What is the appropriate scaling for $f(n)$ in this case, if such a scaling exists? Is there a similar universality property for suitable classes of distributions?
- What is special about the normal distribution? The theorem itself shows us that it appears as a universal scaling limit in distribution, but we might reasonably ask what properties such a distribution should have, as perhaps this will offer a clue to a version of CLT or LLNs for less regular distributions.
- We can see that the Weak Law of Large Numbers follows immediately from CLT. In fact we can say more, perhaps a Slightly Less Weak LLN, that $\frac{S_n}{f(n)} \stackrel{\mathbb{P}}{\to} 0$ whenever $\frac{f(n)}{\sqrt{n}} \to \infty$. But of course, we also have a Strong Law of Large Numbers, which asserts that the empirical mean $\frac{S_n}{n}$ converges almost surely. What is the threshold for almost sure convergence, because there is no a priori reason why it should be $f(n) = n$?
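To see one of these intermediate scalings in action (again a sketch of my own, with fair $\pm 1$ steps and the arbitrary choice $f(n) = n^{3/4}$, which sits strictly between $\sqrt{n}$ and $n$): the deviation probabilities already vanish, consistent with the Slightly Less Weak LLN above:

```python
import math

def two_sided_tail(n, x):
    """Exact P(|S_n| >= x) for S_n a sum of n fair +/-1 steps and x > 0.
    By symmetry this is 2 * P(S_n >= x), with S_n = 2B - n, B ~ Binomial(n, 1/2)."""
    k0 = math.ceil((n + x) / 2)
    return 2 * sum(math.comb(n, k) for k in range(k0, n + 1)) / 2**n

# f(n) = n**0.75 grows faster than sqrt(n), so P(|S_n| >= f(n)) -> 0
t100 = two_sided_tail(100, 100**0.75)
t400 = two_sided_tail(400, 400**0.75)
t1600 = two_sided_tail(1600, 1600**0.75)
print(t100, t400, t1600)  # rapidly decreasing
```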

To be continued next time.

