Doob inequalities and Doob-Meyer decomposition

The first post I wrote on this blog was about martingales, way back in 2012 at a time when I had known what a martingale was for about a month. I now don’t have this excuse. So I’m going to write about a couple of properties of (discrete-time) martingales that came up while adjusting a proof which my thesis examiners suggested could be made much shorter as part of their corrections.

Doob’s submartingale inequality

When we prove that some sequence of processes converges to some other process, we typically want to show that this holds in some sense uniformly over a time-interval, rather than just at some fixed time. We don’t lose much at this level of vagueness by taking the limit process to be identically zero. Then, if the convergent processes are martingales or closely similar, we want to be able to bound $\sup_{k\le n} |Z_k|$ in some sense.

Doob’s submartingale inequality allows us to do this. Recall that a submartingale has almost-surely non-negative conditional expected increments, that is $\mathbb{E}[Z_{n+1}-Z_n\,|\,\mathcal{F}_n]\ge 0$. You might think of it heuristically as ‘more increasing than a martingale’. If $Z_n$ is a martingale, then $|Z_n|$ is a submartingale, by conditional Jensen. This will be useful almost immediately.

The statement is that for $(Z_n)$ a non-negative submartingale,

$\mathbb{P}\left( \sup_{k\le n} Z_k \ge \lambda\right) \le \frac{\mathbb{E}\left[Z_n\right]}{\lambda}.$

The similarity of the statement to the statement of Markov’s inequality is no accident. Indeed the proof is very similar. We consider whether the event in question happens, and find lower bounds on the expectation of $Z_n$ under both possibilities.

Formally, for ease of notation, let $Z_n^*$ be the running maximum $\sup_{k\le n}Z_k$. Then, we let $T:= n\wedge \inf\{k\le n: Z_k\ge \lambda\}$ and apply the optional stopping theorem for submartingales at T, which is by construction at most n. That is

$\mathbb{E}[Z_n]\ge \mathbb{E}[Z_T]=\mathbb{E}\left[Z_T\mathbf{1}_{Z_n^*<\lambda}\right] + \mathbb{E}\left[Z_T \mathbf{1}_{Z_n^*\ge \lambda}\right].$

The first of these summands is non-negative, and the second is at least $\lambda \mathbb{P}\left( Z_n^* \ge \lambda \right)$, from which the result follows.
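As a sanity check (my addition, not part of the argument), here is a quick Monte Carlo illustration, using the submartingale $Z_k=|S_k|$ for a simple symmetric random walk $S_k$; the choices of n, $\lambda$ and the trial count are arbitrary.

```python
import random

# Monte Carlo check of Doob's submartingale inequality for Z_k = |S_k|,
# where S_k is a simple symmetric random walk (a sketch; parameters arbitrary).
random.seed(0)
n, lam, trials = 100, 15, 20000

exceed = 0        # count of paths with sup_{k<=n} |S_k| >= lam
total_Zn = 0.0    # running sum of |S_n|, to estimate E[Z_n]
for _ in range(trials):
    s, running_max = 0, 0
    for _ in range(n):
        s += random.choice((-1, 1))
        running_max = max(running_max, abs(s))
    exceed += (running_max >= lam)
    total_Zn += abs(s)

lhs = exceed / trials            # estimate of P(sup Z_k >= lam)
rhs = (total_Zn / trials) / lam  # estimate of E[Z_n] / lam
print(lhs, "<=", rhs)
```

The estimated probability on the left should sit comfortably below the Markov-style bound on the right for these parameters.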

We’ve already said that for any martingale $Z_n$, $|Z_n|$ is a submartingale, but in fact $f(Z_n)$ is a submartingale whenever f is convex, and $\mathbb{E}|f(Z_n)|<\infty$ for each n. Naturally, this continues to hold when $Z_n$ is itself a submartingale.

[Note that $Z_n^*$ is also a submartingale, but this probably isn’t as interesting.]

A particularly relevant such function f is $f(x)=x^p$, for p>1. If we take $Z_n$ a non-negative submartingale which is uniformly bounded in $L^p$, then by applying Hölder’s inequality and this submartingale inequality, we obtain

$\mathbb{E}\left( \sup_{k\le n}Z_k^p \right) \le \left(\frac{p}{p-1}\right)^p \mathbb{E}\left[ Z_n^p \right].$

Since $Z_n^p$ is a submartingale, the RHS is monotone in n, and certainly the LHS is monotone in n, so we can take limits to extend to

$\mathbb{E}\left( \sup_{k}Z_k^p \right) \le \left(\frac{p}{p-1}\right)^p \mathbb{E}\left[ Z_\infty^p \right].$

Initially, we have to define $\mathbb{E}\left[ Z_\infty^p \right]$ through this limit, but in fact, since the submartingale is bounded in $L^p$ and hence in $L^1$, the martingale convergence theorem guarantees that $Z_\infty:= \lim Z_n$ exists almost surely, and this result, Doob’s $L^p$ inequality, then upgrades this to convergence in $L^p$.

Naturally, we will often apply this in the case p=2, and in the third of these three sections, we will see why it might be particularly straightforward to calculate $\mathbb{E}\left[Z_\infty^2\right].$
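Continuing the same toy example (mine, not from the post), we can check Doob’s $L^2$ inequality numerically with $Z_k=|S_k|$ for a simple random walk; the constant is $(p/(p-1))^p = 4$ when p=2.

```python
import random

# Empirical check of Doob's L^2 inequality: E[(sup_k Z_k)^2] <= 4 E[Z_n^2]
# for the submartingale Z_k = |S_k|, S a simple random walk (illustrative only).
random.seed(1)
n, trials = 100, 20000
sup_sq, final_sq = 0.0, 0.0
for _ in range(trials):
    s, m = 0, 0
    for _ in range(n):
        s += random.choice((-1, 1))
        m = max(m, abs(s))
    sup_sq += m * m
    final_sq += s * s

lhs = sup_sq / trials        # estimate of E[(sup_k |S_k|)^2]
rhs = 4 * final_sq / trials  # (p/(p-1))^p = 4 when p = 2
print(lhs, "<=", rhs)
```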

Remark: as in the case of Markov’s inequality, it’s hard to say much if the submartingale is not taken to be non-negative. Indeed, this effect can be seen even if the process is only defined for a single time step, for which the statement really is then Markov’s inequality.

Doob-Meyer decomposition

Unfortunately, most processes are not martingales. Given a discrete-time process $X_n$ adapted to $\mathcal{F}=(\mathcal{F}_n)$, it is a martingale if the conditional expectations of the increments are all almost surely zero. But given a general adapted process $X_n$ which is integrable (so the increments have well-defined finite expectation), we can iteratively construct a new process $M_n$, where the increments are centred versions of $X_n$‘s increments. That is,

$M_{n+1}-M_n:= X_{n+1}-X_n - \mathbb{E}\left[ X_{n+1}-X_n \,\big|\, \mathcal{F}_n\right] = X_{n+1}-\mathbb{E}\left[X_{n+1} \,\big|\, \mathcal{F}_n\right].$ (*)

Then it’s immediately clear from the definition that $M_n$ is a martingale.

There’s a temptation to tie oneself up in knots over the dependence structure. The increments of the original process $X_n$ might well depend on the current value of the process. And is it necessarily clear that we can recover the current value of the original process from the current value of $M_n$? This is why we demand that everything be adapted, rather than merely Markov. It’s not the case that $M_n$ need be Markov, but it clearly is adapted.

Now we look at the middle expression in (*), and in particular the term we are subtracting, namely the conditional expectation. If we define, in the standard terminology, $A_0=0$ and

$A_{n+1}-A_n:= \mathbb{E}\left[ X_{n+1}-X_n \,\big|\, \mathcal{F}_n\right],$

then we have decomposed the original process $X_n$ as the sum of a martingale $M_n$, and this new process $A_n$. In particular, note that the increment $A_{n+1}-A_n$ given above is measurable with respect to $\mathcal{F}_n$, which is a stronger condition than the $\mathcal{F}_{n+1}$-measurability we would expect a priori. This property of the process $(A_n)$ is called predictability (or possibly previsibility).

This decomposition $X_n=X_0+M_n+A_n$ as just defined is called the Doob-Meyer decomposition, and there is a unique such decomposition where $M_n$ is a martingale, and $A_n$ is predictable. The proof of uniqueness is very straightforward. We look at the equalities given above as definitions of $M_n,A_n$, but then work in the opposite direction to show that they must hold if the decomposition holds.
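To make the construction concrete, here is a minimal sketch (my toy example) for a biased simple random walk, where the conditional drift is the constant $2p-1$, so the predictable part $A_n=n(2p-1)$ happens to be deterministic; the parameters are arbitrary.

```python
import random

# Sketch of the Doob-Meyer decomposition for a biased random walk:
# X_{n+1} - X_n = +1 w.p. p, -1 w.p. 1-p, so the conditional drift is
# E[X_{n+1} - X_n | F_n] = 2p - 1, and A_n = n(2p-1) is predictable
# (here even deterministic), with M = X - A a martingale.
random.seed(2)
p, n = 0.7, 50
drift = 2 * p - 1

X = [0]
for _ in range(n):
    X.append(X[-1] + (1 if random.random() < p else -1))

A = [k * drift for k in range(n + 1)]     # predictable part
M = [x - a for x, a in zip(X, A)]          # martingale part: M = X - A

# Sanity check over many paths: E[M_n] should stay near M_0 = 0.
trials, total = 5000, 0.0
for _ in range(trials):
    x = 0
    for _ in range(n):
        x += 1 if random.random() < p else -1
    total += x - n * drift
print("mean of M_n over paths:", total / trials)  # should be close to 0
```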

I feel a final heuristic is worthwhile, using the term drift, more normally encountered in the continuous-time setting to describe infinitesimal expected increments. The increments of $A_n$ represent the drift of $X_n$, and the increments of $M_n$ are what remains of $X_n$ after subtracting the drift. In general, the process to be subtracted to turn a non-martingale into a martingale is called a compensator, and the existence or otherwise of such processes is important but challenging for some classes of continuous-time processes.

In particular, note that when $X_n$ is itself a martingale, then $A_n\equiv 0$. However, probably the most useful case is when $X_n$ is a submartingale, as then the drift is always non-negative, and so $A_n$ is almost surely increasing. The converse holds too.

This is relevant because this Doob-Meyer decomposition is obviously only a useful tool for treating $X_n$ if we can handle the two processes $M_n,A_n$ easily. We have tools to bound the martingale term, but this previsible term might in general be tricky, and so the case where $X_n$ is a submartingale is good, as increasing processes are much easier than general processes, since bounding the whole process might involve only bounding the final term in many contexts.

A particularly relevant example is the square of a martingale, that is $X_n=M_n^2$, where $M_n$ is a martingale. By the convexity condition discussed earlier, $X_n$ is a submartingale (provided it is integrable, ie $M_n$ is square-integrable), and so the process $A_n$ in its Doob-Meyer decomposition is increasing. This is often called the (predictable) quadratic variation of the martingale $(M_n)$.

This predictable quadratic variation is sometimes denoted $\langle M\rangle_n$. This differs from the (regular) quadratic variation, which is defined as the sum of the squares of the increments, that is $[M]_n:= \sum_{k=0}^{n-1} (M_{k+1}-M_k)^2$. Note that this is adapted, but obviously not previsible. The distinction between these two processes is more important in continuous time. There, they are almost surely equal for a continuous local martingale, but not for eg a Poisson process. (For a Poisson process, the PQV is deterministic, indeed linear, while the (R)QV is almost surely equal to the Poisson process itself.) In the discrete-time setting, the regular quadratic variation is not relevant very often, while the predictable quadratic variation is useful, precisely because of this decomposition.

Whenever we have random variables which we then centre, there is a standard trick to apply when treating their variance. That is

$A_{n+1}-A_n= \mathbb{E}\left[ M^2_{n+1}-M^2_n \,\big|\, \mathcal{F}_n\right]$
$= \mathbb{E}\left[ M^2_{n+1}\,\big|\, \mathcal{F}_n\right] - 2M_n^2 +M_n^2$
$= \mathbb{E}\left[ M^2_{n+1}\,\big|\, \mathcal{F}_n\right] - 2M_n \mathbb{E}\left[ M_{n+1}\,\big|\, \mathcal{F}_n\right] + M_n^2$
$= \mathbb{E}\left[ \left(M_{n+1}-M_n\right)^2\,\big|\, \mathcal{F}_n\right].$

One consequence is seen by taking an ‘overall’ expectation. Because $M_n^2-A_n$ is a martingale,

$\mathbb{E}\left[M_n^2\right] = \mathbb{E}\left[M_0^2\right] + \mathbb{E}\left[A_n\right] = \mathbb{E}\left[M_0^2\right] + \sum_{k=0}^{n-1} \mathbb{E}\left[A_{k+1}-A_k\right]$
$= \mathbb{E}\left[ M_0^2\right] + \sum_{k=0}^{n-1}\mathbb{E}\left[ \left(M_{k+1}-M_k\right)^2 \right].$ (**)

This additive (Pythagorean) property of the square of a martingale is useful in applications where there is reasonably good control on each increment separately.
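Here is a small numerical check of (**) for a toy martingale of my own: $M_n=\sum_{k\le n}\xi_k$ with independent centred increments $\xi_k\sim U[-k,k]$, so that $\mathbb{E}[M_n^2]=\sum_{k=1}^n k^2/3$.

```python
import random

# Checking the Pythagorean identity (**): for a martingale built from
# independent centred increments xi_k ~ U[-k, k], we expect
# E[M_n^2] = sum_k Var(xi_k) = sum_k k^2 / 3  (toy example, parameters arbitrary).
random.seed(3)
n, trials = 10, 100000
total_sq = 0.0
for _ in range(trials):
    m = sum(random.uniform(-k, k) for k in range(1, n + 1))
    total_sq += m * m

empirical = total_sq / trials
exact = sum(k * k / 3 for k in range(1, n + 1))
print(empirical, "vs", exact)
```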

We can also see this final property without the Doob-Meyer decomposition. For a martingale it is not the case that the increments on disjoint intervals are independent. However, following Williams 12.1 [1], disjoint intervals are orthogonal, in the sense that

$\mathbb{E}\left[(M_t-M_s)(M_v-M_u)\right]=0,$

whenever $s\le t\le u\le v$. Then, when we square the expression $M_n=M_0+\sum_{k=0}^{n-1}(M_{k+1}-M_k)$, and take expectations, all the cross terms vanish, leaving precisely (**).

References

[1] Williams – Probability with Martingales

I also followed the notes I made in 2011/12 while attending Perla Sousi’s course on Advanced Probability, and Arnab Sen’s subsequent course on Stochastic Calculus, though I can’t find any evidence online for the latter now.

Azuma-Hoeffding Inequality

It’s (probably) my last Michaelmas term in Oxford, at least for the time being, and so also the last time giving tutorials on either of the probability courses that students take in their first two years. This time, I’m teaching the second years, and as usual the aim of the majority of the first half of the course is to acquire as sophisticated an understanding as possible of the Central Limit Theorem. I feel a key step is appreciating that CLT tells you about the correct scaling for the deviations from the mean of these partial sums of IID random variables. The fact that these deviations on this correct scaling converge in law to a normal distribution, irrespective (apart from mild conditions) of the underlying distribution, is interesting, but should be viewed as a secondary, bonus, property.

Emphasising the scaling of deviations in CLT motivates the next sections of this (or any) course. We develop tools like Markov’s inequality to control the probability that a random variable is much larger than its expectation, and experiment with applying this to various functions of the random variable to get stronger bounds. When the moment generating function exists, this is an excellent choice for this analysis. We end up with a so-called Chernoff bound. For example, we might consider the probability that when we toss N coins, at least a proportion ¾ are Heads. A Chernoff bound says that this probability decays exponentially in N.

One direction to take is to ask how to control precisely the parameter of this exponential decay, which leads to Cramér’s theorem and the basis of the theory of Large Deviations. An alternative direction is to observe that the centred partial sums of independent random variables form an example of a martingale, albeit a rather special one, since in general the increments of a martingale need not be independent. So we might ask: under what circumstances can we show exponential tail bounds on the deviation of a martingale from its mean (that is, its initial value) at a fixed (perhaps large) time?

Azuma-Hoeffding inequality

The following result was derived and used by various authors in the 60s, including Azuma and Hoeffding (separately), but also others.

Let $X_0,X_1,X_2,\ldots$ be a martingale with respect to some filtration $(\mathcal{F}_n)$, and assume that the absolute value of each increment $|X_i-X_{i-1}|$ is bounded almost surely by some $c_i<\infty$. Then, recalling that $\mathbb{E}[X_n|\mathcal{F}_0]=X_0$, we have

$\mathbb{P}(X_n \ge X_0+t) \le \exp\left( -\frac{t^2}{2\sum_{i=1}^n c_i^2}\right).$

Proof

We apply a Chernoff argument to each increment. First, observe that for Y a random variable supported on $[-1,1]$ with mean zero, by convexity $\mathbb{E}[e^{tY}]$ is maximised by taking Y equal to +1 and -1 each with probability ½. Thus

$\mathbb{E}[e^{tY}]\le \frac12 e^t + \frac 12 e^{-t}=\cosh(t) \le e^{t^2/2},$

where the final inequality follows by directly comparing the Taylor series.
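To spell out the Taylor comparison behind that final inequality:

$\cosh(t)=\sum_{k\ge 0}\frac{t^{2k}}{(2k)!},\qquad e^{t^2/2}=\sum_{k\ge 0}\frac{t^{2k}}{2^k k!},$

and the first series is dominated term by term by the second, since $(2k)!\ge 2^k k!$, because $\frac{(2k)!}{k!}=(k+1)(k+2)\cdots(2k)\ge 2^k$.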

We’ll use this shortly. Before that, we start the usual argument for a Chernoff bound on $X_n-X_0$.

$\mathbb{P}(X_n-X_0\ge t) = \mathbb{P}(e^{\theta(X_n-X_0)}\ge e^{\theta t})\le e^{-\theta t} \mathbb{E}[e^{\theta(X_n-X_0)}]$

$= e^{-\theta t} \mathbb{E}[\mathbb{E}[e^{\theta((X_n-X_{n-1}) +X_{n-1}-X_0)} | \mathcal{F}_{n-1}]]$

$= e^{-\theta t} \mathbb{E}[e^{\theta(X_{n-1}-X_0)} \mathbb{E}[e^{\theta(X_n-X_{n-1})}|\mathcal{F}_{n-1}] ],$

and our preliminary result allows us to control this inner expectation

$\le e^{-\theta t} e^{\theta^2c_n^2/2} \mathbb{E}[e^{\theta(X_{n-1}-X_0)}].$

So now we can apply this inductively to obtain

$\mathbb{P}(X_n-X_0\ge t) \le e^{-\theta t+ \frac{\theta^2}{2} \sum_{i=1}^n c_i^2}.$

Finally, as usual in such an argument, we need to choose a sensible value of the free parameter $\theta$, and naturally we want to choose it to make this RHS as small as possible, which is achieved when $\theta = \frac{t}{\sum_{i=1}^n c_i^2}$, and leads exactly to the statement of the inequality.
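A quick empirical check (my own toy example): take the increments iid $U[-1,1]$, so each $c_i=1$, and compare the observed tail with $\exp(-t^2/2n)$.

```python
import math
import random

# Empirical check of Azuma-Hoeffding for X_n = sum of iid U[-1,1] increments
# (so c_i = 1): P(X_n >= t) <= exp(-t^2 / (2n)). Parameters are illustrative.
random.seed(4)
n, t, trials = 100, 25, 20000
hits = sum(sum(random.uniform(-1, 1) for _ in range(n)) >= t
           for _ in range(trials))
empirical = hits / trials
bound = math.exp(-t * t / (2 * n))
print(empirical, "<=", bound)
```

Here t is around four standard deviations of $X_n$, so the empirical tail is essentially zero, well inside the bound.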

Applications

Unsurprisingly, we can easily apply this to the process of partial sums of IID random variables with mean zero and bounded support, to recover a Chernoff bound.

A more interesting example involves revealing the state (ie open or closed) of the edges of an Erdos-Renyi graph one at a time. We need to examine some quantitative property of the graph which can’t ever be heavily influenced by the presence or non-presence of a single given edge. The size of the largest clique, or the largest cut, are good examples. Adding or removing an edge can change these quantities by at most one.

So if we order the edges, and let the filtration $\mathcal{F}_k$ be generated by the state of the first k edges in this ordering, then $X_k=\mathbb{E}[\text{max cut}| \mathcal{F}_k]$ is a martingale. (A martingale constructed backwards in this fashion by conditioning a final state on a filtration is sometimes called a Doob martingale.) Using A-H on this shows that the deviations from the mean are of order $\sqrt{N}$, where N is the size of the graph. In the sparse case, it can be justified fairly easily that the maximum cut has size $\Theta(N)$, since for example there will always be some positive proportion of isolated vertices. However, accurate asymptotics for the mean of this quantity seem (at least after a brief search of the literature – please do correct me if this is wrong!) to be unknown. So this might be an example of the curious situation where we can control the deviations around the mean better than the mean itself!
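Here is a deliberately tiny version of the edge-exposure construction (my illustration: a 5-vertex graph, so that the conditional expectations can be brute-forced), tracking $X_k=\mathbb{E}[\text{max cut}|\mathcal{F}_k]$ along one sample outcome. Since adding or removing an edge changes the max cut by at most one, the increments satisfy $|X_k-X_{k-1}|\le 1$.

```python
import itertools

# Edge-exposure Doob martingale for the max cut of a G(5, 1/2) random graph.
# Tiny on purpose: conditional expectations are computed by enumerating all
# completions, and max cut by enumerating all vertex bipartitions.
V = 5
EDGES = [(i, j) for i in range(V) for j in range(i + 1, V)]  # 10 possible edges

def max_cut(present):
    """Max cut of the graph whose edge EDGES[e] is present iff present[e] == 1."""
    best = 0
    for side in itertools.product((0, 1), repeat=V):
        cut = sum(b for (u, v), b in zip(EDGES, present) if side[u] != side[v])
        best = max(best, cut)
    return best

def doob(revealed):
    """X_k = E[max cut | states of first k edges], averaging over completions."""
    m = len(EDGES) - len(revealed)
    rest = list(itertools.product((0, 1), repeat=m))
    return sum(max_cut(revealed + r) for r in rest) / len(rest)

outcome = (1, 0, 1, 1, 0, 1, 0, 0, 1, 1)  # one sample state of the 10 edges
values = [doob(outcome[:k]) for k in range(len(EDGES) + 1)]
print(values[0], "->", values[-1])  # E[max cut] -> max cut of this outcome
```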

Beyond bounded increments

One observation we might make about the proof is that it is tight only if all the increments $X_i-X_{i-1}$ are supported on $\{-c_i,+c_i\}$, which is stronger than demanding that the absolute value is bounded. If in fact we have $X_i-X_{i-1}\in[-d_i,c_i]$ almost surely, then, with a more detailed preliminary lemma, we can have instead a bound of $\exp\left( -\frac{2t^2}{\sum_{i=1}^n (c_i+d_i)^2} \right)$.

While it isn’t a problem in these examples, in many settings the restriction to bounded increments is likely to be the obstacle to applying A-H. Indeed, in the technical corner of my current research problem, this is exactly the challenge I faced. Fortunately, at least in principle, all is not necessarily lost. We might, for example, be able to establish bounds $(c_i)$ as described, such that the probability that any $|X_i-X_{i-1}|$ exceeds its $c_i$ is very small. You could then construct a coupled process $(Y_i)$ that is equal to $X_i$ whenever the increments are within the given range, and something else otherwise. For Y to fit the conditions of A-H, the challenge is to ensure we can do this such that the increments remain bounded (ie the ‘something else’ also has to be within $[-c_i,c_i]$) and also that Y remains a martingale. The total probability that X experiences a given deviation is then bounded above by the probability of Y experiencing that deviation, plus the probability of X and Y decoupling. It is hard to say much about the latter probability in general without saying a bit more about the dependence structure in X itself.

Grade Reparameterisation – A Free Lunch?

The debate has been hotting up this week in the discussion of the Scottish independence referendum, and with it has come the inevitable barrage of questionable arguments, questionable statistics and questionable logic. Nonetheless, at least lots of people seem to be questioning all of the above fairly carefully. Instead I want to question some of the logic and philosophy of a BBC report, which discusses a change to the grade divisions for GCSEs in some subjects, to be introduced from 2017. The article can be found here.

First a caveat. Picking logical holes in arguments for educational reform is like taking candy from a baby in the sense that it’s normally quite easy, and not usually entertaining or instructive for either party. This article is at least a long way short of the infamous comment from someone at the Institute of Physics that “nearly half of [schools] are actually doing worse than average” (source). In that case, a discussion of gender inequality ends up more like a discussion of Markov’s inequality.

The exact nature of the grade reparameterisation is not hugely important. Instead of indexing grade classes by the set $\{A*,A,\ldots,G,U\}$, the set is to be $\{1,2,\ldots,9\}$, with 9 the top grade. In the sequel, I’m going to talk about A-levels, because I know very slightly more about that, but everything relevant applies equally well to GCSEs. I want to discuss the request by the general secretary of the Association of School and College Leaders that “students must not be disadvantaged by the change in grading.” My claim is that this is not possible under any change in grading.

We need to consider what the value of a grade is. If different grades make no difference to a student, then by definition that student can’t be disadvantaged by a change in grading procedure. The grade gives an approximate measure of the student’s ability in the subject in question. There are several reasons why it is approximate.

Firstly, marks in exams are not a perfect measure of ability, if there even is an absolute underlying idea of ‘ability’. Some people will under-perform in a given exam, and some people will over-perform. Note that if someone feels they always under-perform, a law of large numbers argument suggests that in fact it is their prediction skills that are below average. This is a fundamentally different issue from that of students who don’t prosper in exam conditions (or prosper excessively!). This is a problem, but not a problem that anyone expects to be solvable by grade reparameterisation.

Secondly, a grade is an approximate measure because it represents a range of possible marks. If you believe you are the best student at a particular subject in the country, you are probably misguided, but in any case, you cannot demonstrate this quality in your GCSE grade. The best you can hope for is to get the top grade, and thus be considered first equal with tens of thousands of others. In conclusion the importance of your grade is entirely a function of how it is interpreted.

Anyway, suppose we are organising a shift in a grade boundary. To make life easy, we only adjust the boundary between the top two grades, which are called Red and Blue in accordance with available board markers. We are describing exam score as a number between 0 and 1, rather than an integer between 0 and 72 or whatever. We focus on students achieving the top Red grade. Who has been disadvantaged in the change of grading? Well naturally the portion of students who are now scoring Blue, whereas previously they were Red. Admittedly they were on the border before, and they are still on the border, just on the side with everyone who scored lower this time. So it looks like no-one has gained from this.

But this is patently false. We’ve said that value of the Red grade is a function of how it is interpreted. To put this into a more realistic context, imagine an employer or, whisper it softly, a university admissions officer, looking at a CV announcing a Red grade in maths. The employer has to make a judgement on the student’s true maths ability (whatever that means) based on this Red grade. What factors are relevant?

• How hard is the exam? Scoring top marks on GCSE maths and scoring top marks on the International Maths Olympiad are both excellent achievements, but in some contexts, the latter would stand out more. The value of the grade is an increasing function of the difficulty of the exam. (I’m assuming there is no rescaling in this toy model, but the logic is preserved under rescaling.)
• How wide is the grade interval? If any mark gave you a Red grade, it wouldn’t signify anything. If Red indicates 70%-100%, you’d have to assume that the candidate will on average be somewhere in the middle. This is not uncharitable, merely realistic given an absence of more precise information. The result of this is that the value of a Red grade is a decreasing function of the width of the grade interval.

Thus, so long as the difficulty of the exam remains constant (which is up for discussion, but not here), all the students who get a Red grade under the new regime have gained an advantage from the change. In conclusion, this is a zero-sum game. Some students will benefit, others will be at a disadvantage, and so my original claim holds, at least within the framework of this reasonable model.

I have interpreted literally a statement that was probably not intended to be interpreted literally. But this is not about point-scoring pedantry. It’s more that this sort of vagueness on matters that could be precise distracts from more useful statements. The person I quoted finishes by saying:

“What is important is that Ofqual sets out very clearly to teachers and students what is needed to achieve a specific grade. This is not the same as simply describing what statistical proportion of pupils will achieve a grade. Employers need a clear message that if a student has achieved a particular grade, it means that they have a certain skill or knowledge level.”

Let’s see how this works in practice at the moment by looking up the OCR Mathematics A-level specification. Examine pages 31 and 32. This describes verbally the standard expected of students to achieve grades A, C and E. I feel I am not being uncharitable if I say that the algorithm to get from grade A to the others is to replace the words “almost all” and “high” with “most” and “reasonable” and drop in the adjectives “usually” or “sometimes” fairly freely.

This illustrates the underlying problem. This phenomenon is probably more apparent in maths than in subjects where quality of writing is a factor. Ultimately the difference between an A and an E is that the students were taught the same material, but one found at least some parts of it more challenging, or at least was slower and less accurate at answering questions about it in the exams. The notion that a grade might give a clear qualitative indication of ability at a more sophisticated level than the above is both very challenging, and almost completely independent of grade boundaries. If this is genuinely what is desired, it makes sense to focus on this rather than fiddling with boundaries, or complaining about other people fiddling with boundaries.

Means and Markov’s Inequality

The first time we learn what a mean is, it is probably called an average. The first time we meet it in a maths lesson, it is probably defined as follows: given a list of values, or possibilities, the mean is the sum of all the values divided by the number of such values.

This can be seen as both a probabilistic and a statistical statement. Ideally, these things should not be different, but at a primary school level (and some way beyond), there is a distinction to be drawn between the mean of a set of data values, say the heights of children in the class, and the mean outcome of rolling a dice. The latter is the mean of something random, while the former is the mean of something fixed and determined.

The reason that the same method works for both of these situations is that the distribution for the outcome of rolling a dice is uniform on the set of possible values. Though this is unlikely to be helpful to many, you could think of this as a consequence of the law of large numbers. Applied jointly to all possible values, the LLN says that you expect roughly equal numbers of each value when you take a large number of samples. If we invoke the strong law, this says that in fact we see this effect with probability one in the limit as we take increasingly large samples. Note that it is not trivial to apply the LLN jointly to all values for a general continuous random variable. The convergence of sample distribution functions to the cdf of the underlying distribution is the content of the Glivenko-Cantelli Theorem.
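A small simulation (my addition) illustrates the Glivenko-Cantelli effect for U[0,1] samples: the sup-distance between the empirical distribution function and the true cdf $F(x)=x$ shrinks as the sample grows.

```python
import random

# Sketch of Glivenko-Cantelli: the sup-distance between the empirical CDF of
# U[0,1] samples and the true CDF F(x) = x shrinks as the sample size n grows.
random.seed(5)

def ks_distance(n):
    xs = sorted(random.random() for _ in range(n))
    # sup |F_n - F| for the uniform CDF, attained at the jump points
    return max(max(abs((i + 1) / n - x), abs(i / n - x))
               for i, x in enumerate(xs))

for n in (10, 100, 10000):
    print(n, ks_distance(n))
```

The distance typically decays on the scale $1/\sqrt{n}$, in keeping with the CLT-style scaling discussed earlier in these posts.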

In any case, this won’t work when there isn’t this symmetry where all values are equally likely. So in general, we have to define the mean of a discrete random variable as

$\mu=\sum k\mathbb{P}(X=k).$

In other words, we are taking a sum of values multiplied by probabilities. By taking a suitable limit, a sum weighted by discrete probabilities converges to an integral weighted by a pdf. So this is a definition that will easily generalise.

Anyway, typically the next stage is to discuss the median. In the setting where we can define the mean directly as a sum of values, we must be given some list of values, which we can therefore write in ascending order. It’s then easy to define the median as the middle value in this ordered list. If the number of elements is odd, this is certainly well-defined. If the number is even, it is less clear. A lot of time at school was spent addressing this question, and the generally-agreed answer seemed to be that the mean of the middle two elements would do nicely. We shouldn’t waste any further time addressing this, as we are aiming for the continuous setting, where in general there won’t be discrete gaps between values in the support.

This led onwards to the dreaded box-and-whisker diagrams, which represent the min, lower quartile, median, upper quartile, and max in order. The diagram is structured to draw attention to the central points in the distribution, as these are in many applications of greater interest. The question of how to define the quartiles if the number of data points is not 3 modulo 4 is of exponentially less interest than the question of how to define the median for an even number of values, in my opinion. What is much more interesting is to note that the middle box of such a diagram would be finite for many continuous distributions with infinite support, such as the exponential distribution and the normal distribution.

Note that it is possible to construct any distribution as a function of a U[0,1] distribution by inverting the cdf. The box-and-whisker diagram essentially gives five points in this identification scheme.

Obviously, the ordered list definition fails to work for such distributions. So we need a better definition of median, which generalises. We observe that half the values are greater than the median, and so in a probabilistic setting, we say that the probability of being less than the median is equal to the probability of being greater. So we want to define it implicitly as:

$\mathbb{P}(X>M)=\mathbb{P}(X<M).$

So for a continuous distribution without atoms,

$\mathbb{P}(X>M)=\frac12,$

and this uniquely defines M.

The natural question to start asking is how this compares to the mean. In particular, we want to discuss the relative sizes. Any result about the possible relative values of the mean and median can be reversed by considering the negation of the random variable, so we focus on continuous random variables with non-negative support. If nothing else, these are the conditions for lots of data we might be interested in sampling in the ‘real world’.

It’s worth having a couple of questions to clarify what we are interested in. How about: is it possible for the mean to be 1000 times larger than the median; and is it possible for the median to be 1000 times larger than the mean?

The latter is easier to address. If the median is 1000 and the mean is 1, then with probability ½ the random variable X is at least 1000. So these values make a contribution to the mean of at least 500, while the other values make a contribution of at least zero (since we’ve demanded the RV be positive). This is a contradiction.

The former question turns out to be possible. The motivation should come from our box-and-whisker diagram! Once we have fixed the middle box, the median and quartiles are fixed, but we are free to fiddle with the outer regions as much as we like, so by making the max larger and larger, we can increase the mean freely without affecting the median. Perhaps it is clearest to view a discrete example: 1, 2, N. The median will always be 2, so we can increase N as much as desired to get a large mean.
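The discrete example is easy to check directly:

```python
import statistics

# The example from the text: the list [1, 2, N] has median 2 whatever N is,
# while the mean can be made as large as we like by increasing N.
for N in (3, 3000, 3000000):
    data = [1, 2, N]
    print(N, statistics.median(data), statistics.mean(data))
```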

The first answer is in a way more interesting, because it generalises to give a result about the tail of distributions. Viewing the median as the ½-quantile, we are saying that it cannot be too large relative to the mean. Markov’s inequality provides an identical statement about the general quantile. Instead of thinking about the constant a in an a-quantile, we look at values in the support.

Suppose we want a bound on $\mathbb{P}(X\ge a)$ for some positive a. We define the function f by

$f(x)=a \textbf{1}_{\{x\ge a\}},$

so that $f(x)\le x$ for all $x\ge 0$. Hence the mean of f(X) is at most the mean of X. But the mean of f(X) can be calculated as

$a\mathbb{P}(X\ge a),$

and so we conclude that

$\mathbb{P}(X\ge a)\leq \frac{\mu}{a},$

which is Markov’s Inequality.

It is worth remarking that this is trivially true when $a\le \mu$, since probabilities are always at most 1 anyway. Even beyond this region, it is generally quite weak. Note that it becomes progressively stronger if the contribution to the mean from terms greater than a is mainly driven by the contribution from terms close to a. So the statement is strong if the random variable has a light tail.
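For a concrete comparison (my example), take $X\sim\mathrm{Exp}(1)$, so $\mu=1$ and the true tail $e^{-a}$ can be set against the Markov bound $\mu/a$ exactly:

```python
import math

# X ~ Exp(1) has mu = 1 and exact tail P(X > a) = e^{-a}; compare with the
# Markov bound mu / a. The bound is trivial (above 1) when a <= mu, and is
# quite loose even beyond that, reflecting the light tail of the exponential.
mu = 1.0
for a in (0.5, 1, 2, 5, 10):
    true_tail = math.exp(-a)
    bound = mu / a
    print(a, true_tail, bound)
```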

This motivates considering deviations from the mean, rather than the random variable itself. And to lighten the tail, we can square, for example, to consider the square distance from the mean. This version is Chebyshev’s Inequality:

$\mathbb{P}(|X-\mu|^2>a\sigma^2)\le \frac{1}{a}.$

Applying Markov’s inequality to an exponential function of a random variable gives a so-called Chernoff Bound, which is in some sense the strongest bound on the tails of a distribution obtainable in this way.
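As a sketch of how this plays out for the coin example mentioned earlier: optimising the Chernoff bound for the probability that at least ¾ of N fair coins are Heads gives exponential decay with rate $\mathrm{KL}(3/4\,\|\,1/2)$ per coin (a standard fact; the code below is my illustration).

```python
import math

# Chernoff bound sketch for the coin example: the probability that at least
# 3/4 of N fair coins are Heads decays like exp(-N * KL(3/4 || 1/2)), where
# KL is the relative entropy between Bernoulli parameters (the optimal rate).
def chernoff_exponent(p_target, p=0.5):
    q = 1 - p_target
    # relative entropy KL(p_target || p)
    return p_target * math.log(p_target / p) + q * math.log(q / (1 - p))

rate = chernoff_exponent(0.75)
for N in (10, 100, 1000):
    print(N, math.exp(-N * rate))
```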

Effective Bandwidth

Here, devices have fixed capacity, but packet sizes are random. So, we still have a capacity constraint for the links, but we accept that it won’t be possible to ensure that we stay within those limits all the time, and seek instead to minimise the probability that the limits are exceeded, while keeping throughput as high as possible.

An important result is Chernoff’s Bound: $\mathbb{P}(Y\geq 0)\leq \inf_{s\geq 0}\mathbb{E}e^{sY}$. The proof is very straightforward: apply Markov’s inequality to the non-negative random variable $e^{sY}$. So in particular $\frac{1}{n}\log\mathbb{P}(X_1+\ldots+X_n\geq 0)\leq \inf_{s\geq 0} M(s)$, where $M(s)=\log\mathbb{E}e^{sX}$, and Cramér’s Theorem asserts that after taking a limit in n on the LHS, equality holds, provided $\mathbb{E}X<0,\ \mathbb{P}(X>0)>0$.

We assume that the traffic has the form $S=\sum_{j=1}^J\sum_{i=1}^{n_j}X_{ji}$, where for each j the summands $X_{ji}$ are iid, interpreted as the loads of the $n_j$ calls using source j. We have

$\log\mathbb{P}(S\geq C)\leq\log \mathbb{E}[e^{s(S-C)}]=\sum_{j=1}^Jn_jM_j(s)-sC,$

so $\inf_{s\geq 0}\left(\sum_j n_jM_j(s)-sC\right)\leq -\gamma\quad\Rightarrow\quad \mathbb{P}(S\geq C)\leq e^{-\gamma},$

so we want this to hold for large $\gamma$.

We might then choose to restrict attention to

$A=\{n:\sum_j n_jM_j(s)-sC\leq-\gamma,\text{ for some }s\geq 0\}.$

So suppose we are operating near capacity, say with call profile n* on (ie near) the boundary of A, and let s* be the argmin of the above. Then the tangent plane at n* is $\sum_j n_jM_j(s^*)-s^*C=-\gamma$, and since A’s complement is convex, it suffices to stay on the ‘correct’ side (ie halfspace) of this tangent plane.

We can rewrite this tangent-plane condition as $\sum_j n_j\frac{M_j(s^*)}{s^*}\leq C-\frac{\gamma}{s^*}$. Note that this is reasonable since s* is fixed, and we call $\alpha_j(s):=\frac{M_j(s)}{s}$ the effective bandwidth of source j. It is with respect to this average that we are bounding probabilities, hence ‘effective’.

Observe that $\alpha_j(s)$ is increasing in s, by Jensen’s inequality: $(\mathbb{E}e^{X})^t\leq \mathbb{E}e^{tX}$ for t>1 implies that for $t>s>0$, $(\mathbb{E}e^{sX})^t\leq(\mathbb{E}e^{tX})^s$.

In particular,

$\mathbb{E}X\leq \alpha_j(s)\leq \text{ess sup}\,X.$
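As an illustration (my addition, with a Bernoulli load as the source model, which is not from the notes), we can compute $\alpha(s)$ numerically and check both the monotonicity in s and the sandwich between the mean and the essential supremum.

```python
import math

# Effective bandwidth alpha(s) = M(s)/s = log E[e^{sX}] / s for a Bernoulli(p)
# load X (an illustrative source model): it should be increasing in s and
# sandwiched between E[X] = p and ess sup X = 1.
def alpha(s, p=0.3):
    return math.log((1 - p) + p * math.exp(s)) / s

vals = [alpha(s) for s in (0.01, 0.1, 1.0, 5.0, 20.0)]
print([round(v, 4) for v in vals])
```

As $s\downarrow 0$ the effective bandwidth approaches the mean, and as $s\to\infty$ it approaches the worst-case (peak) load, matching the two extremes of the displayed inequality.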