I’ve been taking tutorials on the third quarter of the second-year probability course, in which the students have met discrete-time Markov chains for the first time. The hardest aspect of this introduction (apart from the rapid pace – they cover only slightly less material than I did in Cambridge, but in half the time) is, in my opinion, choosing which definition of the Markov property is most appropriate to use in a given setting.
We have the wordy “conditional on the present, the future is independent of the past”, which is probably too vague for any precise application. Then you can ask more formally that the transition probabilities are the same under two types of conditioning, that is conditioning on the whole history, and conditioning on just the current value:

$$\mathbb{P}(X_{n+1}=x_{n+1}\,|\,X_n=x_n,X_{n-1}=x_{n-1},\ldots,X_0=x_0)=\mathbb{P}(X_{n+1}=x_{n+1}\,|\,X_n=x_n),\qquad(*)$$

and furthermore this must hold for all sets of values $x_0,\ldots,x_{n+1}$ and all times $n$. If we want time-homogeneity (as is usually assumed at least implicitly when we use the word ‘chain’), then these expressions should be functions of $x_n$ and $x_{n+1}$, but not $n$.
Alternatively, one can define everything in terms of the probability of seeing a given path:

$$\mathbb{P}(X_0=x_0,X_1=x_1,\ldots,X_n=x_n)=\lambda_{x_0}\,p_{x_0,x_1}p_{x_1,x_2}\cdots p_{x_{n-1},x_n},$$

where $\lambda$ is the initial distribution, and the $p_{x,y}$s are the entries of the transition matrix $P$.
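To see the path-probability definition in action, here is a minimal simulation sketch (my own illustration, using a hypothetical two-state transition matrix and initial distribution) which samples a path and evaluates the product formula above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state chain on states {0, 1}.
lam = np.array([0.5, 0.5])           # initial distribution lambda
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])           # transition matrix, rows sum to 1

def sample_path(n):
    """Sample (X_0, ..., X_n) from the chain."""
    path = [rng.choice(2, p=lam)]
    for _ in range(n):
        path.append(rng.choice(2, p=P[path[-1]]))
    return path

def path_probability(path):
    """lambda_{x_0} * p_{x_0,x_1} * ... * p_{x_{n-1},x_n}."""
    prob = lam[path[0]]
    for a, b in zip(path, path[1:]):
        prob *= P[a, b]
    return prob

path = sample_path(5)
print(path, path_probability(path))
```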
Fortunately, these latter two definitions are equivalent, but it can be hard to know how to proceed when you’re asked to show that a given process is a Markov chain. I think this is partly because this is one of the rare examples of a concept that students meet and then immediately find it hard to think of examples of similar processes which are not Markov chains. The only comparable concept I can think of is vector spaces, which share this property mainly because almost everything in first-year mathematics is linear in some regard.
Non-examples of Markov chains
Anyway, during the tutorials I was asking for some suggestions of discrete-time processes on a countable or finite state space which are not Markov chains. Here are some things we came up with:
- Consider a bag with a finite collection of marbles of various colours. Record the colours of marbles sampled repeatedly without replacement. Then the colour of the next marble depends on the multiset of colours you’ve already seen, not just on the current colour. And of course, the process terminates.
- Non-backtracking random walk. Suppose you are on a graph where every vertex has degree at least 2, and in a step you move to an adjacent vertex, chosen uniformly among the neighbours apart from the one from which you arrived. (A short simulation sketch follows this list.)
- In a more applied setting, it’s reasonable to assume that if we wanted to know the chance it will rain tomorrow, this will be informed by the weather over the past week (say) rather than just today.
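To illustrate the non-backtracking walk, here is a minimal sketch (my own, on a small hypothetical graph given as adjacency lists): the distribution of the next vertex depends on the previous vertex as well as the current one, which is exactly what breaks (*) for the sequence of positions.

```python
import random

rng = random.Random(0)

# Hypothetical graph as adjacency lists; every vertex has degree >= 2.
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}

def non_backtracking_walk(start, steps):
    """Walk for `steps` steps, never immediately returning to the previous vertex."""
    prev, current = None, start
    path = [current]
    for _ in range(steps):
        # The allowed moves depend on prev as well as current, so the
        # positions alone do not form a Markov chain.
        options = [v for v in graph[current] if v != prev]
        prev, current = current, rng.choice(options)
        path.append(current)
    return path

print(non_backtracking_walk(0, 10))
```

(Of course, the pair (previous vertex, current vertex) is a Markov chain on a larger state space, which is the standard fix.)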
Showing a process is a Markov chain
We often find Markov chains embedded in other processes, for example a sequence of IID random variables $X_1,X_2,\ldots$. Let’s consider the random walk $S_n=X_1+\cdots+X_n$ (with $S_0=0$), where each $X_i=+1$ with probability $p$ and $-1$ with probability $(1-p)$. Define the running maximum $M_n=\max_{0\le m\le n}S_m$, and then we are interested in $Y_n:=M_n-S_n$, which we claim is a Markov chain, and we will use this as an example for our recipe to show this in general.
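As a concrete illustration (a quick simulation sketch of my own, under the setup above), the following generates the walk, its running maximum, and the process $Y_n=M_n-S_n$:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 0.5, 20

X = rng.choice([1, -1], size=n, p=[p, 1 - p])   # IID steps X_1, ..., X_n
S = np.concatenate([[0], np.cumsum(X)])          # S_0 = 0, S_k = X_1 + ... + X_k
M = np.maximum.accumulate(S)                     # running maximum M_k
Y = M - S                                        # the process we claim is Markov

print(S)
print(M)
print(Y)
```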
We want to show (*) for the process $(Y_n)$. We start with the LHS of (*),

$$\mathbb{P}(Y_{n+1}=y_{n+1}\,|\,Y_n=y_n,Y_{n-1}=y_{n-1},\ldots,Y_0=y_0),$$

and then we rewrite as much as possible in terms of previous and current values of Y, and quantities which might be independent of previous values of Y. At this point it’s helpful to split into the cases $y_n=0$ and $y_n\ge 1$. We’ll treat the latter for now. Then $M_{n+1}=M_n$, so $Y_{n+1}=M_{n+1}-S_{n+1}=M_n-S_n-X_{n+1}=Y_n-X_{n+1}$, and we rewrite the LHS as

$$\mathbb{P}(X_{n+1}=y_n-y_{n+1}\,|\,Y_n=y_n,Y_{n-1}=y_{n-1},\ldots,Y_0=y_0),$$

noting that we substitute $y_n$ for $Y_n$ since that’s in the conditioning. But this is now ideal, since $X_{n+1}$ is actually independent of everything in the conditioning ($Y_0,\ldots,Y_n$ are functions of $X_1,\ldots,X_n$ only). So we could get rid of all the conditioning. But we don’t really want to do that, because we want to have conditioning on $Y_n=y_n$ left. So let’s get rid of everything except that:

$$\mathbb{P}(X_{n+1}=y_n-y_{n+1}\,|\,Y_n=y_n).$$

Now we can exactly reverse all of the other steps to get back to

$$\mathbb{P}(Y_{n+1}=y_{n+1}\,|\,Y_n=y_n),$$

which is exactly what we required.
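For completeness (the argument above treats only the case $y_n\ge 1$), the case $y_n=0$ works in just the same way. If $Y_n=0$ then $S_n=M_n$, so if $X_{n+1}=+1$ then $M_{n+1}=S_{n+1}$ and $Y_{n+1}=0$, while if $X_{n+1}=-1$ then $M_{n+1}=M_n$ and $Y_{n+1}=1$. Either way the event $\{Y_{n+1}=y_{n+1}\}$ is again determined by $X_{n+1}$ alone on the conditioning event, and the same independence argument applies: from 0 the process stays at 0 with probability $p$ and jumps to 1 with probability $1-p$.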
The key idea is that we stuck to the definition in terms of Y, and held all the conditioning in terms of Y, since that is what actually determines the Markov property for Y, rearranging the event until it was in terms of one of the underlying Xs, at which point it’s easy to use independence.
Showing a process is not a Markov chain
Let’s show that $(M_n)$ is not a Markov chain. The classic mistake to make here is to talk about possible paths the random walk S could take, which is obviously relevant, but won’t give us a clear reason why M is not Markov. What we should instead do is suggest two paths taken by M which have the same ‘current’ value, but which induce different transition probabilities, because they place different restrictions on the possible paths taken by S.
Consider two possible paths taken by $(M_n)$ up to time $n$, both ending at the same current value $M_n=m$: in the first, M has increased at every step; in the second, M reached $m$ earlier and has been constant over its last couple of steps. In each case, think about which paths of S could induce the given path of M.

In the first case, clearly there’s only one such path that S could take (S must sit at its maximum the whole time, so $S_n=m$), and so we know immediately what happens next. Either $X_{n+1}=+1$ (with probability p), in which case $M_{n+1}=m+1$, otherwise it’s $-1$, in which case $M_{n+1}=m$.

In the second case, there are two possibilities for the current position of S: at the maximum, $S_n=m$, or strictly below it. In the case that $S_n<M_n$, clearly there’s no chance of the maximum increasing at the next step. So in the absence of other information, for $M_{n+1}=m+1$ we need both $S_n=m$ and $X_{n+1}=+1$, and so the chance of this is $p\cdot\mathbb{P}(S_n=m\,|\,\text{this path of }M)$, which is strictly less than $p$.
So although the same transitions are possible, they have different probabilities with different information about the history, and so the Markov property does not hold here.
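If you want to see this numerically, here is a rough Monte Carlo sketch (my own illustration, with arbitrary choices of p, the time n and the conditioning value m): it estimates the chance that M increases at the next step given two histories ending at the same value, one where M has just risen and one where M was already at that value a step earlier.

```python
import numpy as np

rng = np.random.default_rng(2)
p, n, trials = 0.5, 4, 200_000

# Simulate many walks S_0, ..., S_{n+1} with S_0 = 0.
X = rng.choice([1, -1], size=(trials, n + 1), p=[p, 1 - p])
S = np.concatenate([np.zeros((trials, 1), dtype=int), np.cumsum(X, axis=1)], axis=1)
M = np.maximum.accumulate(S, axis=1)

m = 2                                             # common current value M_n = m
rose = (M[:, n] == m) & (M[:, n - 1] == m - 1)    # histories where M has just increased
flat = (M[:, n] == m) & (M[:, n - 1] == m)        # histories where M was already at m
up_next = M[:, n + 1] == m + 1

print("P(M increases | just rose):", up_next[rose].mean())
print("P(M increases | was flat) :", up_next[flat].mean())
```

If the Markov property held for M, these two conditional probabilities would agree; instead the first estimate should be close to p and the second noticeably smaller.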