The following problem was doing the rounds on some email lists in Oxford this week:
“You and a group of colleagues are all invited to pick an integer between 0 and 100. The average x is calculated, and the person whose guess is unique and comes closest to 2x/3 (among the unique guesses) is declared the winner.”
Although it wasn’t specified in the problem statement, one assumes that if two unique guesses were equally close to 2/3 of the average, then those two participants shared the prize.
In any case, it is interesting to ask the question, what is the optimal strategy?
I don’t know very much about game theory, so I’m going to say some things that I hope are true. I don’t claim that this is the best or even a good way of thinking about the situation. If we break the problem down into two parts, we have two of the more famous problems in game theory. To consider the first part, we remove the requirement that the winner’s guess be unique, and allow the participants to pick any real number between 0 and 100.
It seems to me there are three ways to do this. Let’s say we have K possible winners whose guesses came closest to 2x/3. We could give each of them the fixed prize, say £1. Or we could give each of them £1/K, or we could choose one of them at random and give them £1. I claim that the latter two methods are essentially the same, and so will ignore the third possibility.
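As a quick aside (my own remark, assuming risk-neutral players, which the post doesn’t state explicitly): with K potential winners, splitting the prize and choosing a winner at random give the same expected winnings,

\[ \frac{1}{K}\times £1 \;=\; \frac{1}{K}\times £1 + \frac{K-1}{K}\times £0, \]

where the left side is each winner’s share of a split prize and the right side is the expected payout when one of the K is chosen uniformly at random.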
Note that 2x/3 will definitely be at most 200/3. Suppose I pick a real value y that is greater than 200/3. What would happen if instead I picked 200/3? Well, the value of x would decrease slightly. How much it decreases would depend on the number of people playing the game. It is not hard to check that we end up closer to 2x/3 if we pick 200/3 instead of y. Let’s call this value y_1 = 200/3.
We might say that the strategy “pick y_1” dominates the strategy “pick y”, because we always do at least as well with the first strategy as with the second, regardless of what the other players do, and indeed in some cases do strictly better.

Now comes the potentially philosophical bit. We have concluded that it would be irrational for us to pick any number greater than y_1. We also know that the email was sent to mathematicians in Oxford, whom we might hope we could safely assume were equally rational. So now we are playing the same game as before, only with the bound that we should pick a number between 0 and y_1. We can then apply exactly the same argument to show that it would be irrational to pick any number greater than y_2 = (2/3)y_1 = (2/3)^2 · 100. This applies equally to all the other agents, and so we continue, showing that for any k it is irrational to pick any number greater than y_k = (2/3)^k · 100. Since y_k tends to 0, the only rational strategy is to pick 0, and hence everyone will do this.
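For concreteness, here is a rough numerical sketch (mine, not from the post) of how quickly these upper bounds shrink:

```python
# After k rounds of eliminating dominated strategies, no rational
# player picks anything above (2/3)^k * 100.
bound = 100.0
for k in range(1, 31):
    bound *= 2 / 3
    if k in (1, 2, 3, 30):
        print(k, round(bound, 4))
# 1 66.6667, 2 44.4444, 3 29.6296, ..., 30 0.0005 -- the bounds shrink to 0
```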
For the prize structure suggested, this is a Nash equilibrium, since no-one can improve their winnings by changing strategy (with everyone else’s kept the same). This principle is in general referred to as iterated elimination of dominated strategies. The assumption that not only are players rational, but all players know that all other players are rational, and all players know that all players know that the others are rational, and so on, is called Common Knowledge of Rationality. This doesn’t seem hugely controversial in this artificial setting, but it is not hard to think of examples where it might not hold. For example, some players might be facing the same situation but have different utility functions, so, especially if there is some randomness in the dynamics, people will have different ‘rational’ opinions concerning strategies, and unless you know their utility functions, it might seem like the other players are not acting rationally.
This argument works equally well with integers. We first eliminate all possibilities greater than 67, and then keep multiplying the upper bound by 2/3, taking the ceiling to get the new integer upper bound. The chain of bounds 100, 67, 45, 30, … stalls at 2; but once everyone is restricted to {0, 1, 2}, the target 2x/3 is at most 4/3, so picking 1 always does at least as well as picking 2. As a result, after repeated application of this idea, we conclude that the only rational strategies are to pick 0 or 1.
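Here is a similar sketch (again my own) of the chain of integer upper bounds:

```python
import math

# Integer version: if everyone picks at most M, then 2x/3 <= 2M/3,
# so nothing above ceil(2M/3) can be a rational pick.
bound, chain = 100, [100]
while math.ceil(2 * bound / 3) < bound:
    bound = math.ceil(2 * bound / 3)
    chain.append(bound)

print(chain)  # [100, 67, 45, 30, 20, 14, 10, 7, 5, 4, 3, 2]
```

The iteration stalls at 2, so the last step down to {0, 1} needs the separate dominance observation above.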
It can be seen that neither strategy dominates the other. If more than ¾ of the agents choose 1, then choosing 1 is the optimal strategy; if fewer than ¾ choose 1, then choosing 0 is better. If exactly ¾ choose 1 then everyone wins. There are, however, only two Nash equilibria, given by everyone choosing 1 and everyone choosing 0. It is tempting to argue that we might expect roughly ½ the agents to choose 1, and so our better option is to choose 0. But we are assuming the other agents are rational, and there is no a priori reason why picking 0 or 1 with probability ½ each should be the rational thing to do.
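Where does the ¾ threshold come from? A one-line check (my own working): if a proportion q of the agents choose 1 and the rest choose 0, the average is x = q, so

\[ \left|1 - \tfrac{2q}{3}\right| < \left|0 - \tfrac{2q}{3}\right| \iff \tfrac{2q}{3} > \tfrac{1}{2} \iff q > \tfrac{3}{4}, \]

and at q = ¾ the two distances are equal, which is why everyone wins in that case.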
However, choosing randomly is a possible strategy. In this setting, it becomes harder to talk about Nash equilibria. When there are just two agents, even the set of randomised strategies for each agent is relatively manageable, but when there is some arbitrary number of players, the set of possible strategies might be very complicated indeed.
The final stage of this integer problem is morally equivalent to playing a game where everyone has to pick 0 or 1, and you win if you are in the majority. For this problem, it is clear that the only Nash equilibria are for everyone to pick 0 or for everyone to pick 1. If the winners share the prize-money equally, then this game has the interesting property that every configuration is Pareto optimal. This means that no-one can improve their winnings without causing someone else’s winnings to decrease. When it comes to deciding what to do, with no knowledge of other people’s strategies, we have made little progress. At least here the symmetry between 0 and 1 makes it clear how little progress we can make. The only thing we can say is that if the utility is concave (a generally reasonable assumption), then, conditioning on what everyone else does, an optimal strategy is to pick 0 with probability 1/2.
The difficulty in saying anything sensible in this sort of problem with a zero-sum condition (as we effectively have if the winnings are shared) is the symmetry between all the players. Given any strategy, if everyone used that strategy, then everyone would win 1/N in expectation. This applies equally well if we add the condition that the winner has to pick something unique.
The one question we can perhaps say something about arises if we declare that no-one wins the original game when there are no unique integer answers. Then it is worth considering which strategy everyone could adopt so as to minimise the probability of this happening. For the reasons given above, we should indeed have everyone take the same strategy, which we can interpret as a distribution on the integers. So which distribution on the integers between 1 and k maximises the probability of having at least one unique value?
We also need to know how many people are playing, so let’s say this is N. I don’t know how to solve the general problem, but I can say something if we fix either k or N and then let the other one go to infinity.
If we fix the number of people and let k go to infinity, then we can get arbitrarily close to probability one by using the uniform distribution. Indeed, any sequence of distributions where the probabilities converge uniformly to zero (ie max_i p_i → 0 as k → ∞) has this property.
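Here is a small simulation sketch (my own, with arbitrary parameter choices) of the fixed-N case under the uniform distribution on {1, …, k}:

```python
import random
from collections import Counter

def prob_unique_value(N, k, trials=20000):
    """Estimate P(at least one value is chosen by exactly one agent)
    when N agents each pick uniformly from {1, ..., k}."""
    hits = 0
    for _ in range(trials):
        counts = Counter(random.randint(1, k) for _ in range(N))
        if any(c == 1 for c in counts.values()):
            hits += 1
    return hits / trials

# With N fixed, the estimated probability climbs towards 1 as k grows.
for k in (5, 20, 100, 1000):
    print(k, prob_unique_value(N=10, k=k))
```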
The case where k is fixed and the number of players goes to infinity is a bit trickier. Essentially, if we choose any fixed distribution, the event we seek becomes less and less likely as the number of agents grows. It is akin to demanding that if we roll a die a million times, we see precisely one six. If we replace a million by larger numbers, this probability decays exponentially.
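To make the die example concrete (my own calculation), the probability of precisely one six in n rolls is

\[ n \cdot \tfrac{1}{6} \cdot \left(\tfrac{5}{6}\right)^{n-1}, \]

which indeed decays geometrically as n grows.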
If we want to maximise the probability of precisely one agent choosing value 1, we observe that the number of agents choosing that value is Binomial(N, p), where p is the probability of that value. If p = o(1/N), then asymptotically, with high probability the number of 1s is 0. If p = ω(1/N), then asymptotically, with high probability, the number of 1s is larger than 1. Taking p = λ/N, the asymptotic distribution of the number of 1s is Poisson with parameter λ. We can solve to see which value of λ maximises the probability λe^{-λ} of getting a single 1. It turns out, unsurprisingly after considering the expectation of the corresponding pre-limit binomial distribution, that this maximum is achieved at λ = 1.
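The optimisation is a one-line calculus check (my own working):

\[ \frac{\mathrm{d}}{\mathrm{d}\lambda}\,\lambda e^{-\lambda} = (1-\lambda)e^{-\lambda} = 0 \iff \lambda = 1, \]

giving a limiting probability of e^{-1} ≈ 0.37 that exactly one agent picks the value.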
So note now that if we take the probabilities of picking 1 and 2 both to be 1/N, we get two asymptotically Poisson random variables. By an argument similar to the construction of the Poisson process from independent infinitesimal increments, the covariance of these tends to 0. So I conjecture, and I reckon it is probably not too hard to come up with a formal proof, that for large N the optimal distribution looks like

(1/N, 1/N, …, 1/N, 1 − (k−1)/N),

or some permutation of that.
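A quick Monte Carlo sketch (mine, with arbitrary choices of k and N) comparing this conjectured distribution against the uniform one:

```python
import random
from collections import Counter

def prob_unique(probs, N, trials=20000):
    """Estimate P(some value is chosen by exactly one of N agents)
    when each agent picks value i with probability probs[i]."""
    values = list(range(len(probs)))
    hits = 0
    for _ in range(trials):
        counts = Counter(random.choices(values, weights=probs, k=N))
        if any(c == 1 for c in counts.values()):
            hits += 1
    return hits / trials

k, N = 5, 1000
conjectured = [1 / N] * (k - 1) + [1 - (k - 1) / N]
uniform = [1 / k] * k
print("conjectured:", prob_unique(conjectured, N))  # roughly 1 - (1 - 1/e)^(k-1)
print("uniform:    ", prob_unique(uniform, N))      # close to 0 for large N
```

The first estimate settles near 1 − (1 − e^{-1})^{k−1}, while the uniform one collapses towards 0 as N grows, consistent with the discussion above.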
Interesting article. Another thing that this problem (at least the real number version of it) can illustrate quite well is the power of coalitions in games. The Nash Equilibrium solution concept requires that no player can *unilaterally* improve their outcome by changing strategy. But suppose you are playing this game with a group of people where sequential rationality holds, and they are all working solely to increase their own individual payout. Then as above they should all choose 0.
Now you recruit a co-conspirator. You agree that one of you will choose 100, and the other will choose 1. As long as there are 134 or fewer competitors in total, the conspirator who chose 1 will win. The two conspirators then share the prize.
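To check the arithmetic (my own working, assuming the N − 2 non-conspirators all choose 0): the average is x = 101/N, so the target is 2x/3 = 202/(3N), and the guess 1 beats the guess 0 exactly when

\[ \frac{202}{3N} > \frac{1}{2} \iff N < \frac{404}{3} \approx 134.7, \]

i.e. when there are at most 134 players in total.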
So in situations where coalitions are possible, it is no longer rational to choose 0. This is where we see the split between non-cooperative and cooperative game theory.
There are several other solution concepts in cooperative game theory that allow us to think about this sort of game. This may be an interesting topic for another post!