The first post I wrote on this blog was about martingales, way back in 2012, at a time when I had known what a martingale was for about a month. I no longer have this excuse. So I’m going to write about a couple of properties of (discrete-time) martingales that came up while adjusting a proof which, as part of their corrections, my thesis examiners suggested could be made much shorter.
Doob’s submartingale inequality
When we prove that some sequence of processes converges to some other process, we typically want to show that this holds in some sense uniformly over a time-interval, rather than just at some fixed time. We don’t lose much at this level of vagueness by taking the limit process to be identically zero. Then, if the convergent processes are martingales or closely similar, we want to be able to bound $\max_{k\le n}|Z_k|$ in some sense.

Doob’s submartingale inequality allows us to do this. Recall that a submartingale has almost-surely non-negative conditional increments, that is $\mathbb{E}[X_{n+1}-X_n\,|\,\mathcal{F}_n]\ge 0$. You might think of it heuristically as ‘more increasing than a martingale’. If $(Z_n)$ is a martingale, then $(|Z_n|)$ is a submartingale, by conditional Jensen. This will be useful almost immediately.
The statement is that for a non-negative submartingale $(X_n)$,

$\lambda\,\mathbb{P}\left(\max_{k\le n}X_k\ge\lambda\right)\le\mathbb{E}\left[X_n\right],\qquad \lambda>0.$
The similarity of the statement to the statement of Markov’s inequality is no accident. Indeed the proof is very similar. We consider whether the event in question happens, and find lower bounds on the expectation of $X_n$ under both possibilities.
Formally, for ease of notation, let $X_n^*$ be the running maximum $X_n^*:=\max_{k\le n}X_k$. Then, we let

$T:=n\wedge\min\{k\,:\,X_k\ge\lambda\},$

and apply the optional stopping theorem for submartingales at $T$, which is by construction at most $n$. That is,

$\mathbb{E}[X_n]\ge\mathbb{E}[X_T]=\mathbb{E}\left[X_T\mathbf{1}_{\{X_n^*<\lambda\}}\right]+\mathbb{E}\left[X_T\mathbf{1}_{\{X_n^*\ge\lambda\}}\right].$

The first of these summands is non-negative, since on the event $\{X_n^*<\lambda\}$ we have $T=n$ and $X_n\ge 0$; the second is at least $\lambda\,\mathbb{P}\left(X_n^*\ge\lambda\right)$, from which the result follows.
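To see the inequality in action, here is a minimal Monte Carlo sketch. Python and numpy are my choices here, not anything from the argument above, and the process tested is the non-negative submartingale $|Z_n|$ for a simple random walk $(Z_n)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_paths, lam = 100, 50_000, 15.0

# Z is a simple random walk started from 0, so X = |Z| is a
# non-negative submartingale.
steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
X = np.abs(np.cumsum(steps, axis=1))

# Doob: lambda * P(max_{k<=n} X_k >= lambda) <= E[X_n].
lhs = lam * np.mean(X.max(axis=1) >= lam)
rhs = np.mean(X[:, -1])

print(f"lambda * P(max X >= lambda) = {lhs:.3f}")
print(f"E[X_n]                      = {rhs:.3f}")
```

For the simple random walk, $\mathbb{E}|Z_n|\approx\sqrt{2n/\pi}$, so with $n=100$ the right-hand side should come out close to $8$, comfortably above the left-hand side.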
We’ve already said that for any martingale $(Z_n)$, $(|Z_n|)$ is a submartingale, but in fact $(f(Z_n))$ is a submartingale whenever $f$ is convex, and $\mathbb{E}\left|f(Z_n)\right|<\infty$ for each $n$. Naturally, this continues to hold when $(Z_n)$ is itself a submartingale, provided $f$ is also non-decreasing.
[Note that the running maximum $(Z_n^*)$ is also a submartingale, being adapted, integrable and non-decreasing, but this probably isn’t as interesting.]
A particularly relevant such function $f$ is $f(x)=x^p$, for $p>1$. If we take $(Z_n)$ a non-negative submartingale which is uniformly bounded in $L^p$, then by applying Hölder’s inequality and this submartingale inequality, we obtain

$\mathbb{E}\left[\left(\max_{k\le n}Z_k\right)^p\right]\le\left(\frac{p}{p-1}\right)^p\mathbb{E}\left[Z_n^p\right].$

Since $(Z_n^p)$ is a submartingale, the limit in $n$ on the RHS is monotone, and certainly the limit in $n$ on the LHS is monotone, so we can extend to

$\mathbb{E}\left[\left(\sup_{k\ge0}Z_k\right)^p\right]\le\left(\frac{p}{p-1}\right)^p\lim_{n\to\infty}\mathbb{E}\left[Z_n^p\right].$
Initially, we have to define $\sup_{k\ge0}Z_k$ through this limit, but in fact this result, Doob’s $L^p$ inequality, shows that $\lim_{n\to\infty}Z_n$ exists almost surely as well.
Naturally, we will often apply this in the case $p=2$, and in the third of these three sections, we will see why it might be particularly straightforward to calculate $\mathbb{E}\left[Z_n^2\right]$.
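Continuing the sketch above (again my own Python illustration, with $p=2$ and the same non-negative submartingale $|Z_n|$), we can watch Doob’s $L^2$ inequality hold with room to spare:

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, n_paths = 100, 50_000

steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
X = np.abs(np.cumsum(steps, axis=1))            # non-negative submartingale

running_max = np.maximum.accumulate(X, axis=1)  # max_{k<=n} X_k along each path

# Doob's L^p inequality with p = 2: E[(max X)^2] <= 4 E[X_n^2].
lhs = np.mean(running_max[:, -1] ** 2)
rhs = 4 * np.mean(X[:, -1] ** 2)

print(f"E[(max X)^2] = {lhs:.1f}")
print(f"4 * E[X_n^2] = {rhs:.1f}")
```

Here $\mathbb{E}[X_n^2]=\mathbb{E}[Z_n^2]=n$ exactly, so the right-hand side should land near $400$.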
Remark: as in the case of Markov’s inequality, it’s hard to say much if the submartingale is not taken to be non-negative. Indeed, this effect can be seen even if the process is only defined for a single time step, in which case the statement really is just Markov’s inequality.
Doob-Meyer decomposition
Unfortunately, most processes are not martingales. Given a discrete-time process $(X_n)$ adapted to a filtration $(\mathcal{F}_n)$, it is a martingale if the conditional expectations of the increments are all almost surely zero. But given a general adapted process $(X_n)$ which is integrable (so the increments have well-defined finite expectation), we can iteratively construct a new process $(M_n)$, whose increments are centred versions of $(X_n)$’s increments. That is,

$M_0:=X_0,\qquad M_{n+1}-M_n:=X_{n+1}-X_n-\mathbb{E}\left[X_{n+1}-X_n\,\big|\,\mathcal{F}_n\right].\qquad(*)$
Then it’s immediately clear from the definition that $(M_n)$ is a martingale.
There’s a temptation to tie oneself up in knots with the dependence. We might have that increments of the original process depend on the current value of the process. And is it necessarily clear that we can recover the current value of the original process from the current value of $(M_n)$? Well, this is why we demand that everything be adapted, rather than just Markov. It’s not the case that $(M_n)$ should be Markov, but it clearly is adapted.
Now we look at the middle expression in (*), and in particular the term we are subtracting, namely the conditional expectation. If we define, in the standard terminology,

$A_0:=0,\qquad A_{n+1}-A_n:=\mathbb{E}\left[X_{n+1}-X_n\,\big|\,\mathcal{F}_n\right],$

then we have decomposed the original process as the sum of a martingale $(M_n)$, and this new process $(A_n)$, that is $X_n=M_n+A_n$. In particular, note that the increment $A_{n+1}-A_n$ given above is measurable with respect to $\mathcal{F}_n$, which is a stronger condition than measurability with respect to $\mathcal{F}_{n+1}$, as we would expect a priori. This property of the process $(A_n)$ is called predictability (or possibly previsibility).
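As a sanity check on the construction, here is a sketch (my own illustration; the notation mirrors (*) above) computing the decomposition for a random walk with i.i.d. steps of known mean $\mu$, where the conditional expected increment $\mathbb{E}[X_{n+1}-X_n\,|\,\mathcal{F}_n]$ is simply $\mu$:

```python
import numpy as np

rng = np.random.default_rng(2)
n_steps, mu = 200, 0.3

# X is a random walk with i.i.d. Normal(mu, 1) steps started from 0,
# so E[X_{n+1} - X_n | F_n] = mu for every n.
steps = rng.normal(loc=mu, scale=1.0, size=n_steps)
X = np.concatenate([[0.0], np.cumsum(steps)])

# Predictable part: A_0 = 0 and A_{n+1} - A_n = mu, i.e. A_n = n * mu.
A = mu * np.arange(n_steps + 1)

# Martingale part via (*): its increments are the centred steps.
M = X - A

assert np.allclose(X, M + A)   # the decomposition X = M + A
print(f"average increment of M: {np.diff(M).mean():+.4f}  (should be near 0)")
```

Of course, in this example the drift is deterministic; in general the increments of $(A_n)$ are random, but known one step in advance.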
This decomposition $X_n=M_n+A_n$ as just defined is called the Doob-Meyer decomposition, and there is a unique such decomposition where $(M_n)$ is a martingale, and $(A_n)$ is predictable with $A_0=0$. The proof of uniqueness is very straightforward. We look at the equalities given above as definitions of $(M_n)$ and $(A_n)$, but then work in the opposite direction to show that they must hold if the decomposition holds.
I feel a final heuristic is worthwhile, using the term drift, more normally encountered in the continuous-time setting to describe infinitesimal expected increments. The increments of $(A_n)$ represent the drift of $(X_n)$, and the increments of $(M_n)$ are what remains from $(X_n)$ after subtracting the drift. In general, the process to be subtracted to turn a non-martingale into a martingale is called a compensator, and the existence or otherwise of such processes is important but challenging for some classes of continuous-time processes.
In particular, note that when $(X_n)$ is itself a martingale, then $A_n\equiv 0$. However, probably the most useful case is when $(X_n)$ is a submartingale, as then the drift is always non-negative, and so $(A_n)$ is almost surely non-decreasing. The converse holds too.
This is relevant because this Doob-Meyer decomposition is obviously only a useful tool for treating $(X_n)$ if we can handle the two processes $(M_n)$ and $(A_n)$ easily. We have tools to bound the martingale term, but this previsible term might in general be tricky, and so the case where $(X_n)$ is a submartingale is good, as increasing processes are much easier to handle than general processes: in many contexts, bounding the whole process involves only bounding the final term.
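For a concrete example of the submartingale case (my own choice, not from the references): for a simple random walk $(Z_n)$, the submartingale $|Z_n|$ has conditional increments $\mathbb{E}\left[|Z_{n+1}|-|Z_n|\,\big|\,\mathcal{F}_n\right]=\mathbf{1}_{\{Z_n=0\}}$, so its compensator $A_n$ counts the visits to zero before time $n$, and is certainly non-decreasing. A quick numerical check that $|Z_n|-A_n$ is then a mean-zero martingale:

```python
import numpy as np

rng = np.random.default_rng(3)
n_steps, n_paths = 100, 50_000

steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
Z = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(steps, axis=1)], axis=1)

# Compensator of |Z|: A_n = number of visits to 0 among Z_0, ..., Z_{n-1},
# which is predictable and non-decreasing.
A_n = np.sum(Z[:, :-1] == 0, axis=1)

# M_n = |Z_n| - A_n should be a mean-zero martingale, so E|Z_n| = E[A_n].
print(f"E[|Z_n|] = {np.mean(np.abs(Z[:, -1])):.3f}")
print(f"E[A_n]   = {np.mean(A_n):.3f}")
```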
Predictable quadratic variation
A particularly relevant example is the square of a martingale, that is $X_n:=Z_n^2$, where $(Z_n)$ is a martingale. By the convexity condition discussed earlier, $(X_n)$ is a submartingale (provided it is integrable, ie $(Z_n)$ is square-integrable), and so the process $(A_n)$ in its Doob-Meyer decomposition is increasing. This is often called the (predictable) quadratic variation of $(Z_n)$.
This predictable quadratic variation is sometimes denoted $\langle Z\rangle_n$. This differs from the (regular) quadratic variation, which is defined as the sum of the squares of the increments, that is $[Z]_n:=\sum_{k=0}^{n-1}\left(Z_{k+1}-Z_k\right)^2$. Note that this is adapted, but obviously not previsible. The distinction between these two processes is more important in continuous time. There, they are almost surely equal for a continuous local martingale, but not for eg a Poisson process. (For a Poisson process, the PQV is deterministic, indeed linear, while the (R)QV is almost surely equal to the Poisson process itself.) In the discrete-time setting, the regular quadratic variation is not relevant very often, while the predictable quadratic variation is useful, precisely because of this decomposition.
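To see the two notions diverge pathwise even in discrete time, take a martingale with i.i.d. centred Gaussian steps (my own example; for the $\pm1$ random walk the two would coincide, both being equal to $n$). The conditional variance of each increment is the constant $\sigma^2$, so $\langle Z\rangle_n=n\sigma^2$ is deterministic, while $[Z]_n$ fluctuates around it:

```python
import numpy as np

rng = np.random.default_rng(4)
n_steps, sigma = 1000, 1.0

# Martingale with i.i.d. centred Normal(0, sigma^2) steps.
xi = rng.normal(loc=0.0, scale=sigma, size=n_steps)
Z = np.concatenate([[0.0], np.cumsum(xi)])

# Predictable quadratic variation: <Z>_n = n * sigma^2, deterministic.
pqv = sigma**2 * np.arange(n_steps + 1)

# Regular quadratic variation: [Z]_n = sum of squared increments, random.
qv = np.concatenate([[0.0], np.cumsum(xi**2)])

print(f"<Z>_n = {pqv[-1]:.1f},   [Z]_n = {qv[-1]:.1f}")
```

Indeed $[Z]_n-\langle Z\rangle_n$ is itself a mean-zero martingale, so the two agree on average but not pathwise.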
Whenever we have random variables which we then centre, there is a standard trick to apply when treating their variance. That is,

$A_{n+1}-A_n=\mathbb{E}\left[Z_{n+1}^2-Z_n^2\,\big|\,\mathcal{F}_n\right]=\mathbb{E}\left[\left(Z_{n+1}-Z_n\right)^2\,\big|\,\mathcal{F}_n\right],$

since the cross term $2Z_n\,\mathbb{E}\left[Z_{n+1}-Z_n\,|\,\mathcal{F}_n\right]$ vanishes for a martingale. One consequence is seen by taking an ‘overall’ expectation. Because $(Z_n^2-A_n)$ is a martingale,

$\mathbb{E}\left[Z_n^2\right]=\mathbb{E}\left[Z_0^2\right]+\mathbb{E}\left[A_n\right]=\mathbb{E}\left[Z_0^2\right]+\sum_{k=0}^{n-1}\mathbb{E}\left[\left(Z_{k+1}-Z_k\right)^2\right].\qquad(**)$

This additive (Pythagorean) property of the square of a martingale is useful in applications where there is reasonably good control on each increment separately.
We can also see this final property without the Doob-Meyer decomposition. For a martingale it is not the case that the increments on disjoint intervals are independent. However, following Williams 12.1 [1], increments on disjoint intervals are orthogonal, in the sense that

$\mathbb{E}\left[\left(Z_t-Z_s\right)\left(Z_v-Z_u\right)\right]=0,$

whenever $s\le t\le u\le v$. Then, when we square the expression $Z_n=Z_0+\sum_{k=0}^{n-1}\left(Z_{k+1}-Z_k\right)$ and take expectations, all the cross terms vanish, leaving precisely (**).
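Both the orthogonality of increments and the identity (**) are easy to test numerically; here is a final sketch in the same vein (again my own illustration, with a simple random walk):

```python
import numpy as np

rng = np.random.default_rng(5)
n_steps, n_paths = 100, 100_000

steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
Z = np.cumsum(steps, axis=1)   # Z[:, k] holds Z_{k+1}; Z_0 = 0

# Orthogonality over the disjoint intervals [s, t] and [u, v], s<=t<=u<=v.
s, t, u, v = 10, 30, 50, 90
cross = np.mean((Z[:, t] - Z[:, s]) * (Z[:, v] - Z[:, u]))
print(f"E[(Z_t - Z_s)(Z_v - Z_u)] = {cross:+.4f}  (should be near 0)")

# Pythagorean identity (**): with Z_0 = 0 and each squared step equal to 1,
# E[Z_n^2] should equal n.
print(f"E[Z_n^2] = {np.mean(Z[:, -1] ** 2):.2f}   vs   n = {n_steps}")
```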
References
[1] D. Williams, Probability with Martingales, Cambridge University Press, 1991.
I also followed the notes I made in 2011/12 while attending Perla Sousi’s course on Advanced Probability, and Arnab Sen’s subsequent course on Stochastic Calculus, though I can’t find any evidence online for the latter now.