It’s (probably) my last Michaelmas term in Oxford, at least for the time being, and so also the last time giving tutorials on either of the probability courses that students take in their first two years. This time, I’m teaching the second years, and as usual the aim of much of the first half of the course is to acquire as sophisticated an understanding as possible of the Central Limit Theorem. I feel a key step is appreciating that the CLT tells you the correct scaling for the deviations from the mean of these partial sums of IID random variables. The fact that these deviations, on this correct scaling, converge in law to a normal distribution, irrespective (apart from mild conditions) of the underlying distribution, is interesting, but should be viewed as a secondary, bonus, property.
Emphasising the scaling of deviations in the CLT motivates the next sections of this (or any) course. We develop tools like Markov’s inequality to control the probability that a random variable is much larger than its expectation, and experiment with applying this to various functions of the random variable to get stronger bounds. When the moment generating function exists, it is an excellent choice for this analysis, and we end up with a so-called Chernoff bound. For example, we might consider the probability that when we toss N fair coins, at least a proportion ¾ land Heads. A Chernoff bound says that this probability decays exponentially in N.
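As a purely illustrative numerical sketch of this coin example (the function names are my own): the optimised Chernoff bound for fair coins is $\exp(-N\, D(3/4 \,\|\, 1/2))$, where $D$ is the Kullback–Leibler divergence between the two Bernoulli distributions, and we can compare it against the exact binomial tail.

```python
from math import ceil, comb, exp, log

def exact_tail(N):
    """Exact P(at least 3N/4 of N fair coin tosses are Heads)."""
    k0 = ceil(3 * N / 4)
    return sum(comb(N, k) for k in range(k0, N + 1)) / 2**N

def chernoff_bound(N):
    """Optimised Chernoff bound exp(-N * D(3/4 || 1/2)), where D is the
    Kullback-Leibler divergence between Bernoulli(3/4) and Bernoulli(1/2)."""
    D = 0.75 * log(0.75 / 0.5) + 0.25 * log(0.25 / 0.5)
    return exp(-N * D)

for N in (20, 40, 80, 160):
    print(N, exact_tail(N), chernoff_bound(N))
```

Both quantities decay exponentially in N, and the bound captures the correct exponential rate (this rate-optimality is the content of Cramér’s theorem mentioned below), though not the polynomial prefactor.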
One direction to take is to ask how to control precisely the parameter of this exponential decay, which leads to Cramér’s theorem and the basis of the theory of Large Deviations. An alternative direction is to observe that the signed difference between the partial sums of independent random variables and their means is an example of a martingale, albeit not a very interesting one, since in general the increments of a martingale are not independent. So we might ask: under what circumstances can we show exponential tail bounds on the deviation of a martingale from its mean (that is, its initial value) at a fixed (perhaps large) time?
The following result was derived and used by various authors in the 1960s, including Azuma and Hoeffding (separately), among others.
Let $(X_n)_{n\ge 0}$ be a martingale with respect to some filtration $(\mathcal{F}_n)$, and we assume that the absolute value of each increment is bounded almost surely, say $|X_i - X_{i-1}| \le c_i$. Then, recalling that $\mathbb{E}[X_N] = X_0$, we have

\[ \mathbb{P}\left( |X_N - X_0| \ge t \right) \le 2\exp\left( \frac{-t^2}{2\sum_{i=1}^N c_i^2} \right). \]
We apply a Chernoff argument to each increment. First, observe that for $Y$ a random variable supported on $[-1,1]$ with mean zero, by convexity $\mathbb{E}\left[e^{\theta Y}\right]$ is maximised by taking $Y$ equal to $+1$ and $-1$ each with probability ½. Thus

\[ \mathbb{E}\left[e^{\theta Y}\right] \le \frac{e^{\theta} + e^{-\theta}}{2} = \cosh(\theta) \le e^{\theta^2/2}, \]

where the final inequality follows by directly comparing the Taylor series: $\cosh(\theta) = \sum_{k\ge 0} \frac{\theta^{2k}}{(2k)!}$ while $e^{\theta^2/2} = \sum_{k\ge 0} \frac{\theta^{2k}}{2^k k!}$, and $(2k)! \ge 2^k k!$ for every $k$.
We’ll use this shortly. Before that, we start the usual argument for a Chernoff bound on $\mathbb{P}(X_N - X_0 \ge t)$. For any $\theta > 0$,

\[ \mathbb{P}(X_N - X_0 \ge t) \le e^{-\theta t}\, \mathbb{E}\left[ e^{\theta(X_N - X_0)} \right] = e^{-\theta t}\, \mathbb{E}\left[ e^{\theta(X_{N-1} - X_0)}\, \mathbb{E}\left[ e^{\theta(X_N - X_{N-1})} \,\middle|\, \mathcal{F}_{N-1} \right] \right], \]
and our preliminary result, applied conditionally to $Y = (X_N - X_{N-1})/c_N$ with parameter $\theta c_N$, allows us to control this inner expectation:

\[ \mathbb{E}\left[ e^{\theta(X_N - X_{N-1})} \,\middle|\, \mathcal{F}_{N-1} \right] \le e^{\theta^2 c_N^2 / 2}. \]
So now we can apply this inductively to obtain

\[ \mathbb{P}(X_N - X_0 \ge t) \le e^{-\theta t} \exp\left( \frac{\theta^2}{2} \sum_{i=1}^N c_i^2 \right). \]
Finally, as usual in such an argument, we need to choose a sensible value of the free parameter $\theta$, and naturally we want to choose it to make this RHS as small as possible. This is achieved at $\theta = t / \sum_{i=1}^N c_i^2$, which leads exactly to the one-sided statement of the inequality; applying the same argument to the martingale $(-X_n)$ and taking a union bound gives the two-sided version with the factor of 2.
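As a sanity check of the final bound (an illustration of my own, not part of the original argument), take the simplest case: a simple random walk, i.e. partial sums of independent ±1 steps, so every $c_i = 1$. The exact tail can be computed from the binomial distribution and compared against $2\exp(-t^2/2N)$.

```python
from math import comb, exp

def srw_tail(N, t):
    """Exact P(|S_N| >= t) for S_N a sum of N independent fair +/-1 steps,
    using the representation S_N = 2*H - N with H ~ Binomial(N, 1/2)."""
    hits = sum(comb(N, h) for h in range(N + 1) if abs(2 * h - N) >= t)
    return hits / 2**N

def azuma_bound(N, t):
    """Two-sided Azuma-Hoeffding bound with every c_i = 1."""
    return 2 * exp(-t**2 / (2 * N))

N = 100
for t in (10, 20, 30, 40):
    print(t, srw_tail(N, t), azuma_bound(N, t))
```

The bound is loose by a polynomial factor (as Chernoff-type bounds generally are), but the Gaussian scale $t \sim \sqrt{N}$ of the deviations is captured correctly.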
Unsurprisingly, we can easily apply this to the process of partial sums of IID random variables with mean zero and bounded support, to recover a Chernoff bound.
A more interesting example involves revealing the state (ie open or closed) of the edges of an Erdős–Rényi graph one at a time. We need to examine some quantitative property of the graph which can’t ever be heavily influenced by the presence or absence of a single given edge. The size of the largest clique, or of the largest cut, are good examples: adding or removing an edge can change these quantities by at most one.
So if we order the edges, and let the filtration $(\mathcal{F}_k)$ be generated by the states of the first k edges in this ordering, then $M_k := \mathbb{E}\left[ F \,\middle|\, \mathcal{F}_k \right]$ is a martingale, where F is the graph quantity in question, say the size of the maximum cut. (A martingale constructed backwards in this fashion, by conditioning a final state on a filtration, is sometimes called a Doob martingale.) Using A-H on this shows that the deviations from the mean are of order $\sqrt{N}$, where N is the size of the graph. In the sparse case, it can be justified fairly easily that the maximum cut has size $\Theta(N)$, since for example there will always be some positive proportion of isolated vertices. However, accurate asymptotics for the mean of this quantity seem (at least after a brief search of the literature – please do correct me if this is wrong!) to be unknown. So this might be an example of the curious situation where we can control the deviations around the mean better than the mean itself!
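The bounded-differences condition for the maximum cut can be checked by brute force on a small graph (the particular graph and the function names below are my own illustrative choices): toggling any single potential edge changes the maximum cut by at most 1.

```python
from itertools import combinations

def max_cut(n, edges):
    """Brute-force maximum cut of a graph on vertices 0, ..., n-1."""
    best = 0
    for mask in range(2 ** (n - 1)):  # fix vertex n-1's side to halve the search
        side = [(mask >> v) & 1 for v in range(n - 1)] + [0]
        best = max(best, sum(1 for (u, v) in edges if side[u] != side[v]))
    return best

n = 6
# an arbitrary small graph: a 6-cycle plus one chord
edges = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5), (1, 4)}

# toggling any single potential edge changes the maximum cut by at most 1
for e in combinations(range(n), 2):
    assert abs(max_cut(n, edges) - max_cut(n, edges ^ {e})) <= 1
print("maximum cut is 1-Lipschitz in each edge")
```

This Lipschitz property is exactly what bounds the increments of the Doob martingale, since conditioning on one extra edge state can shift the conditional expectation by at most the worst-case effect of that edge.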
Beyond bounded increments
One observation we might make about the proof is that it is tight only if all the increments are supported on $\{-c_i, +c_i\}$, which is stronger than demanding that the absolute value is bounded by $c_i$. If in fact we have $a_i \le X_i - X_{i-1} \le b_i$ almost surely, then, with a more detailed preliminary lemma (Hoeffding’s lemma), we can instead obtain a bound of

\[ \mathbb{P}(X_N - X_0 \ge t) \le \exp\left( \frac{-2t^2}{\sum_{i=1}^N (b_i - a_i)^2} \right). \]
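As a consistency check, specialising to the symmetric case $a_i = -c_i$, $b_i = c_i$ recovers exactly the constant from the earlier statement:

\[ \exp\left( \frac{-2t^2}{\sum_{i=1}^N (2c_i)^2} \right) = \exp\left( \frac{-t^2}{2\sum_{i=1}^N c_i^2} \right). \]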
While it isn’t a problem in these examples, in many settings the restriction to bounded increments is likely to be the obstacle to applying A-H. Indeed, in the technical corner of my current research problem, this is exactly the challenge I faced. Fortunately, at least in principle, all is not necessarily lost. We might, for example, be able to establish bounds $a_i \le X_i - X_{i-1} \le b_i$ as described, such that the probability that any increment falls outside its interval $[a_i, b_i]$ is very small. You could then construct a coupled process $(Y_n)$ that is equal to $(X_n)$ whenever the increments are within the given range, and something else otherwise. For Y to fit the conditions of A-H, the challenge is to ensure we can do this such that the increments remain bounded (ie the ‘something else’ also has to lie within $[a_i, b_i]$) and also that Y remains a martingale. The total probability of a deviation of X is then bounded above by the probability of Y experiencing that deviation, plus the probability of Y and X decoupling. To comment on the latter probability is hard in general without saying a bit more about the dependence structure of X itself.
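In symbols, assuming the coupling can be constructed with $Y_0 = X_0$, this decomposition reads

\[ \mathbb{P}\left( |X_N - X_0| \ge t \right) \le \mathbb{P}\left( |Y_N - Y_0| \ge t \right) + \mathbb{P}\left( \exists\, i \le N : Y_i \ne X_i \right), \]

where the first term on the right is controlled by A-H applied to Y, and the second is the decoupling probability just discussed.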
Thanks for this post. I think in the proof of Azuma-Hoeffding you should add a factor of 1/2 after the induction step. (Probably just a typo.)