This is the first in a series of posts describing my work on a Markov chain tool to analyze football. The code is available on GitHub.
Before talking about football, I want to talk about a less complicated problem. It is probably worth saying at this point that I am not a mathematician, nor a computer scientist. What I am is an astronomer and astrophysicist, so obviously I know some math and some programming, but at the same time I'm not necessarily an expert on Markov chains. As with most things, what I know is what I've picked up in order to work on research problems, so what I have to say may be obvious, or naive. In any case, writing this out is in large part for my own sake, to help myself clarify my thoughts on the problem, as well as to hopefully be useful to people who come to this without much prior knowledge of Markov chains.
The first simple problem I want to talk about, which is pretty much the simplest non-trivial Markov chain problem I can think of, is this: imagine a random walk with 5 possible states, which I will call $-2, -1, 0, +1, +2$. In each step of the process, the walker can move one step to the right, which happens with probability $p$, or one step to the left, which happens with probability $q = 1 - p$. If the walker lands at position $+2$ or $-2$ they stay there. Here is what the transition matrix looks like,

$$T = \begin{pmatrix} 1 & q & 0 & 0 & 0 \\ 0 & 0 & q & 0 & 0 \\ 0 & p & 0 & q & 0 \\ 0 & 0 & p & 0 & 0 \\ 0 & 0 & 0 & p & 1 \end{pmatrix}$$
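To make this concrete, here is a minimal sketch in Python with numpy (a hypothetical helper for illustration, not the actual GitHub code) that builds this transition matrix for any $p$:

```python
import numpy as np

def transition_matrix(p):
    """Column-stochastic transition matrix T[i, j] = P(to state i | from state j)
    for the 5-state walk with absorbing ends. States ordered (-2, -1, 0, +1, +2)."""
    q = 1.0 - p
    return np.array([
        [1, q, 0, 0, 0],   # to -2: stay if already there, or step left from -1
        [0, 0, q, 0, 0],   # to -1: step left from 0
        [0, p, 0, q, 0],   # to  0: step right from -1, or step left from +1
        [0, 0, p, 0, 0],   # to +1: step right from 0
        [0, 0, 0, p, 1],   # to +2: step right from +1, or stay if already there
    ], dtype=float)

T = transition_matrix(0.5)
# Sanity check: each column is a probability distribution, so columns sum to 1.
assert np.allclose(T.sum(axis=0), 1.0)
```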
To be clear, this is the probability to transition TO the state $i$, FROM the state $j$. If I iterate once, i.e., compute $(T^2)_{ij} = T_{ik} T_{kj}$ (this is the Einstein notation for matrix multiplication), I get,

$$T^2 = \begin{pmatrix} 1 & q & q^2 & 0 & 0 \\ 0 & pq & 0 & q^2 & 0 \\ 0 & 0 & 2pq & 0 & 0 \\ 0 & p^2 & 0 & pq & 0 \\ 0 & 0 & p^2 & p & 1 \end{pmatrix}$$
You can stare at these and start to see the structure of how you move from state to state as you iterate the Markov chain, but I think the main usefulness is just to illustrate that repeated applications of the transition matrix mix things up in a deterministic and straightforward (if tedious to enumerate) way. For a simple problem like this you could probably even come up with a closed form solution.
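In fact the closed form is the classic gambler's-ruin result: starting $k$ steps above the left sink out of $N = 4$ total steps, the probability of ending in the right sink is $(1 - (q/p)^k)/(1 - (q/p)^N)$ for $p \neq q$, and $k/N$ for $p = q$. A quick numerical check of that formula against brute-force iteration (hypothetical sketch; `absorb_right` is my own helper name, not from the repository):

```python
import numpy as np

def absorb_right(p, k, N=4):
    """Gambler's-ruin closed form: probability of hitting the right sink
    first, starting k steps above the left sink out of N total steps."""
    if p == 0.5:
        return k / N
    r = (1.0 - p) / p
    return (1.0 - r**k) / (1.0 - r**N)

def transition_matrix(p):
    q = 1.0 - p
    return np.array([[1, q, 0, 0, 0],
                     [0, 0, q, 0, 0],
                     [0, p, 0, q, 0],
                     [0, 0, p, 0, 0],
                     [0, 0, 0, p, 1]], dtype=float)

# Iterating the chain many times should reproduce the closed form.
p = 0.7
T = np.linalg.matrix_power(transition_matrix(p), 200)
for k, j in [(1, 1), (2, 2), (3, 3)]:   # start states -1, 0, +1
    assert abs(T[4, j] - absorb_right(p, k)) < 1e-9
```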
Now let me put in some numbers and keep iterating the transition matrix until it converges. Let's try $p = q = 0.5$. If I iterate 100 times, and round to zero any values smaller than some small tolerance, I get,

$$T^{100} \approx \begin{pmatrix} 1 & 0.75 & 0.5 & 0.25 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0.25 & 0.5 & 0.75 & 1 \end{pmatrix}$$
So what exactly does this mean? It means if I start in the state $-2$ (all the way to the left), I end in the state $-2$ with probability 1. If I start in the state $-1$, I end in the state $-2$ with probability 0.75 and $+2$ with probability 0.25. If I start in the state $0$, I end in the state $-2$ with probability 0.5 and $+2$ with probability 0.5. In other words, reading down each column ($j$) tells me the probability to end in the state ($i$) after 100 (which may as well be infinity) transitions.
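These numbers can be reproduced in a few lines (again a hypothetical numpy sketch, not the GitHub code):

```python
import numpy as np

# Fair-coin walk: p = q = 0.5, sinks at -2 and +2.
p, q = 0.5, 0.5
T = np.array([[1, q, 0, 0, 0],
              [0, 0, q, 0, 0],
              [0, p, 0, q, 0],
              [0, 0, p, 0, 0],
              [0, 0, 0, p, 1]])
L = np.linalg.matrix_power(T, 100)
L[np.abs(L) < 1e-10] = 0.0            # round tiny leftover values to zero

# Only the two sink rows survive; column j gives the absorption
# probabilities for each starting state.
assert np.allclose(L[0], [1, 0.75, 0.5, 0.25, 0])   # end at -2
assert np.allclose(L[4], [0, 0.25, 0.5, 0.75, 1])   # end at +2
```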
If I use $p = 0.501$ instead, this is what I get,

$$T^{100} \approx \begin{pmatrix} 1 & 0.7485 & 0.4980 & 0.2485 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0.2515 & 0.5020 & 0.7515 & 1 \end{pmatrix}$$
If I use $p = 0.99$, I get,

$$T^{100} \approx \begin{pmatrix} 1 & 0.0101 & 0.0001 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0.9899 & 0.9999 & 1 & 1 \end{pmatrix}$$
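The sensitivity to $p$ is easy to explore numerically (one more hypothetical sketch; `limit_matrix` is my own helper name):

```python
import numpy as np

def limit_matrix(p, n=100):
    """Iterate the 5-state transition matrix n times (plenty to converge here)
    and round tiny leftover values to zero."""
    q = 1.0 - p
    T = np.array([[1, q, 0, 0, 0],
                  [0, 0, q, 0, 0],
                  [0, p, 0, q, 0],
                  [0, 0, p, 0, 0],
                  [0, 0, 0, p, 1]])
    L = np.linalg.matrix_power(T, n)
    L[np.abs(L) < 1e-10] = 0.0
    return L

# A tiny right bias barely shifts the absorption probabilities away from
# the fair-coin values (0.25, 0.5, 0.75)...
print(limit_matrix(0.501)[4])
# ...while a strong bias makes the right sink nearly certain from any
# interior starting state.
print(limit_matrix(0.99)[4])
```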
This first post just shows that you can set up a Markov chain that models a random walk and make the states at the end “sinks” (or roach motels; once you enter, you never leave), and after iterating enough times, you will always end up in one of the sinks. In the football application, the sinks are going to be scoring events. So we can imagine that making it all the way to the right is like scoring a touchdown, all the way to the left, like giving up a safety. In the next part I’ll look at the expectation values associated with such a Markov chain.