The Secretary Problem

This post heads in a rather different direction from my research and from the problems I’ve been discussing recently. We discuss a problem from optimal stopping theory. In less politically correct times, this went by various other names, including (according to Wikipedia) the marriage problem, the Sultan’s dowry problem and the fussy suitor problem. A more appealing framing might be someone running an auction or selling a house, who wants the maximum bid but has to accept or reject each bid immediately. In any case, we shall quickly reduce it to a context-free setting.

The problem is as follows. A manager needs to hire a secretary and has n applicants, who are interviewed one at a time. I can’t think of a good reason why this should be the case in this context, but the manager’s only concern is to maximise the probability of picking the best of the n candidates. This would be easy if the choice were made at the end, but in fact the decision whether to accept or reject an applicant has to be made straight after the interview. What is the optimal strategy?

We should clarify what is going on. In a practical setting, there would be some underlying numerical attribute which could be compared, and so the problem would reduce to what I’ve previously called the Passport Photograph problem. In this setting, however, after the kth interview we know only the relative ranks of the first k candidates. This massively reduces the possibilities for a strategy. The strategy must be of the form:

Accept the kth candidate if they are the best candidate yet seen, with some probability that is a function of k and n; otherwise reject. In particular, this probability cannot depend on the history of the process, which is of essentially no relevance, since at any given time we only know relative ranks.

There are two ideas that I want to justify informally for now. It seems reasonable to assume that this probability will either be 0 or 1 for each k and n. For example, one could work backwards in k, from n back to 1 working out the probabilities, and proving at each stage inductively that there is no advantage to a mixed strategy.

Secondly, suppose we decide we will accept the kth candidate if they are the best so far. Well if they are not the best, we pass to the next candidate. I claim that if we are using an optimal strategy, we should also accept the (k+1)st candidate if they are the best so far. This property is called monotonicity for the algorithm. In many senses this is the meat of the problem. If we can prove this, then everything else will be relatively straightforward. For now though, we assume it is true. This means that our optimal strategy for n is precisely defined by a threshold value of k. So we rewrite the strategy as follows:

We observe the first (k-1) candidates. After that, as soon as a candidate is the best so far, we accept them.

Note that this allows for the possibility that we do not choose anyone, so if this feels like a problem, we could always demand we choose the last candidate, even if we know they are not optimal. The question that remains is: what value should k take? Let’s make life easy and take n to be large, so we can assume k=cn for some constant c.
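The threshold strategy is straightforward to simulate. Below is a minimal Monte Carlo sketch in Python; the function name and parameters are my own illustration, not from the post:

```python
import random

def secretary_success_rate(n, k, trials=100_000, seed=0):
    """Estimate the probability that the threshold strategy picks the
    overall best candidate: observe the first k-1 candidates, then
    accept the first one who is the best seen so far."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))  # 0 denotes the best candidate
        rng.shuffle(ranks)
        best_seen = min(ranks[:k - 1], default=n)
        for r in ranks[k - 1:]:
            if r < best_seen:       # best so far: accept and stop
                wins += (r == 0)    # did we get the overall best?
                break
    return wins / trials

# With n = 100 and k roughly n/e, the success probability is close to
# 1/e ≈ 0.368, in line with the analysis below.
print(secretary_success_rate(100, 38))
```

Note that runs where no candidate after position k-1 beats the early maximum count as losses, matching the observation that the strategy may choose no one.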

The probability that the second best candidate is in the first cn, but the best is not, is c(1-c). On that event, this strategy will definitely give us the best candidate. The probability that the third best is in the first cn, but the best and second best are not, is c(1-c)^2. On that event, our strategy gives us the best candidate overall if and only if the best appears before the second best in what remains of the process. Since we are sampling from a uniformly random permutation, these two orders are equally likely, so the probability that we get the best candidate via this event is

\frac{1}{2} c(1-c)^2.

Continuing, the probability that the best candidate in the first cn is the fourth best overall, and the strategy gives us the best candidate is:

\frac{1}{3}c(1-c)^3.

So to find the optimal c, we have to maximise the sum

\sum_{k\geq 1}\frac{1}{k}c(1-c)^k

=-c\log c.

To find the maximum, we set the derivative equal to zero: \frac{d}{dc}(-c\log c)=-\log c - 1=0. Solving, we get c=1/e, and the optimal success probability is then -c\log c = 1/e as well.
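Both the series identity and the location of the maximum can be checked numerically; this is an illustrative sketch:

```python
import math

# Check the series identity: sum_{k>=1} c(1-c)^k / k = -c log c,
# at an arbitrary test value of c.
c = 0.3
series = sum(c * (1 - c) ** k / k for k in range(1, 200))
print(series, -c * math.log(c))  # the two values agree

# Locate the maximiser of f(c) = -c log c on a fine grid;
# it should sit at c = 1/e, with maximum value 1/e.
cs = [i / 100_000 for i in range(1, 100_000)]
c_star = max(cs, key=lambda c: -c * math.log(c))
print(c_star, 1 / math.e)  # both approximately 0.3679
```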

It turns out the most natural way to generalise this problem is to consider n independent random variables taking the values 0 and 1. The aim is to find a strategy that maximises the probability of picking the final 1. An informal analysis proceeds much as in the secretary problem. In this setting, we work with fixed finite n rather than in some limit. Let us call the {0,1}-valued RVs I_1,I_2,\ldots,I_n, and set

p_j=\mathbb{P}(I_j=1),\quad q_j=1-p_j,\quad r_j=\frac{p_j}{q_j},

in keeping with Bruss’s original paper dealing with this so-called ‘Odds problem’. Then, as before, we want to find the threshold for k. We should remark that the argument given below is shorter than Bruss’s proof, which uses generating functions, but again we will not address the matter of proving that the optimal strategy should have this threshold form.

First we explain why the secretary problem is a special case of this. Observe that, given no information about the first (k-1) candidates apart from their relative ranks, the probability that the kth candidate is the best so far is independent of the history of the process, and is equal to 1/k. So taking p_k=1/k in the odds problem precisely replicates the secretary problem, since the best candidate is, by construction, the final candidate who had the property of being the best so far.
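The 1/k claim is easy to check empirically: in a uniformly random ordering, the kth candidate is the best of the first k with frequency close to 1/k, for each k. A quick sketch (my own illustration):

```python
import random

rng = random.Random(1)
n, trials = 6, 200_000
counts = [0] * (n + 1)
for _ in range(trials):
    scores = [rng.random() for _ in range(n)]  # a uniformly random ordering
    for k in range(1, n + 1):
        if scores[k - 1] == max(scores[:k]):   # kth candidate is best so far
            counts[k] += 1

for k in range(1, n + 1):
    print(k, round(counts[k] / trials, 3))  # empirical frequency, roughly 1/k
```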

Now we work out how to calculate the threshold k. We define k by:

k=\arg\max_{1\leq j \leq n}\left\{\mathbb{P}(I_j+\ldots+I_n=1)\right\}.

By summing over the possibilities for which of the I_j’s is 1, we get

\mathbb{P}(I_k+\ldots+I_n=1)=q_kq_{k+1}\ldots q_n\big[\frac{p_k}{q_k}+\ldots+\frac{p_n}{q_n}\big]=q_k\ldots q_n[r_k+\ldots+r_n].

So we ask, for which values of k is:

q_k\ldots q_n[r_k+\ldots+r_n]\geq q_{k+1}\ldots q_n[r_{k+1}+\ldots+r_n] ?

This is equivalent to

q_k\ldots q_n \cdot r_k \geq (1-q_k)q_{k+1}\ldots q_n[r_{k+1}+\ldots+r_n],

and we recall that (1-q_k)=p_k=q_kr_k, hence

\iff q_{k+1}\ldots q_n\ge q_{k+1}\ldots q_n[r_{k+1}+\ldots+r_n],

\iff 1\geq r_{k+1}+\ldots+r_n.

So lowering the threshold from k+1 to k helps precisely while the odds beyond k sum to at most 1, and we conclude that the optimal value for the threshold k is the largest value of j such that

r_j+\ldots+r_n \geq 1,

taking k=1 if the odds never sum to 1.
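In code, this is a backward summation of the odds: starting from r_n, keep adding odds until the running total first reaches 1; the index where that happens is the threshold. A minimal sketch (the function name is mine, not from the literature):

```python
import math

def odds_threshold(p):
    """Bruss's odds rule: return the 1-indexed threshold k, namely the
    largest j with r_j + ... + r_n >= 1 (or 1 if the odds never sum to 1)."""
    r = [math.inf if pj == 1 else pj / (1 - pj) for pj in p]
    total = 0.0
    for j in range(len(p), 0, -1):  # sum the odds from the back
        total += r[j - 1]
        if total >= 1:
            return j
    return 1

# Secretary problem: p_k = 1/k. For n = 100 the threshold is 38,
# i.e. observe the first 37 candidates, then accept the first "best so far".
print(odds_threshold([1 / k for k in range(1, 101)]))
```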

We can check that this agrees with our answer for the secretary problem since

\frac{1}{cn}+\frac{1}{cn+1}+\ldots+\frac{1}{n}\approx \int_{cn}^n \frac{dx}{x}=-\log c

which is approximately 1 precisely when c is roughly 1/e.
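As a sanity check on the formula q_k\ldots q_n[r_k+\ldots+r_n], we can also simulate the odds strategy directly: it wins exactly when it stops on the final 1. The probabilities below are an arbitrary illustrative choice, not from the post:

```python
import random

p = [0.5, 0.4, 0.3, 0.3, 0.2, 0.2]
k = 3  # r_3+...+r_6 = 3/7 + 3/7 + 1/4 + 1/4 ≈ 1.357 >= 1, but r_4+... ≈ 0.929 < 1

# Formula value: the probability of exactly one success among positions k..n.
q = [1 - pj for pj in p]
r = [pj / qj for pj, qj in zip(p, q)]
formula = 1.0
for qj in q[k - 1:]:
    formula *= qj
formula *= sum(r[k - 1:])

rng = random.Random(2)
trials = 200_000
wins = 0
for _ in range(trials):
    seq = [rng.random() < pj for pj in p]
    # Stop at the first success with (1-indexed) position >= k.
    picked = next((j for j in range(k - 1, len(p)) if seq[j]), None)
    # Win if the stop is the final success of the whole sequence.
    if picked is not None and not any(seq[picked + 1:]):
        wins += 1

print(round(formula, 4), round(wins / trials, 4))  # both approximately 0.4256
```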

Finally a word on generalisations. Dendievel’s paper, which appeared on arXiv yesterday, considers a problem of Richard Weber’s where the RVs instead take values in {-1,0,+1}, and you want a strategy that successfully selects either the last time -1 appears or the last time +1 appears. The case where the probabilities of +1 and -1 are 1) equal to each other and 2) constant in time is relatively straightforward by backwards induction, as here, and the author deals with the two natural generalisations by removing restrictions 1) and 2) in turn. It seems natural that such a method should adapt to RVs taking any finite number of values.

REFERENCES

Bruss – Sum the odds to one and stop (2000)

Bruss – A note on bounds for the odds theorem of optimal stopping (2003)

Dendievel – Weber’s optimal stopping problem and generalisations (2013) http://arxiv.org/abs/1309.2860
