This post is about a problem which I’ve mentioned in passing a few times previously. A research problem I’ve been thinking about in the past week involves some careful calculations for a related setup, so I thought it would be worth writing some short notes about the classical problem. As any child who has tried to collect football cards, or the sets of toys you find inside cereal packets, will know, it always takes ages to pick up the last few cards, since you will mainly be picking up types that you already have.
The classical coupon collector problem is exactly a mathematical statement of this situation. We have a finite set, which we might as well take to be $[n]$. Then we have a collection of IID random variables, uniformly distributed on $[n]$; let’s call these $X_1, X_2, \ldots$. We are interested in how many samples we need before we have seen each element of $[n]$. We could define this finishing time as

$$T := \min\{t \geq 1 : \{X_1, \ldots, X_t\} = [n]\}.$$
We are interested in the distribution of this random variable $T$. The key initial observation is that we can specify the distribution of the number of samples needed between seeing new objects as geometric. For example, given that we have seen $k$ distinct values, the number of further $X$s until we see a new value is distributed as $\mathrm{Geom}\left(\frac{n-k}{n}\right)$. This of course has expectation $\frac{n}{n-k}$, and so the expectation of $T$ is

$$\mathbb{E}[T] = \sum_{k=0}^{n-1} \frac{n}{n-k} = n\left(1 + \frac{1}{2} + \cdots + \frac{1}{n}\right) = nH_n.$$
We can immediately do a crude approximation of this as

$$\mathbb{E}[T] = nH_n \approx n\log n.$$
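As a quick numerical sanity check, not part of the argument itself, here is a minimal simulation sketch in Python; the setup is exactly the uniform sampling described above, and the function names are my own.

```python
import math
import random

def coupon_collector_time(n):
    """Draw uniform samples from {1, ..., n} until every value has appeared; return the number of draws."""
    seen = set()
    draws = 0
    while len(seen) < n:
        seen.add(random.randint(1, n))
        draws += 1
    return draws

n, trials = 50, 20000
empirical = sum(coupon_collector_time(n) for _ in range(trials)) / trials
exact = n * sum(1 / k for k in range(1, n + 1))   # n * H_n
print(f"empirical mean ~ {empirical:.1f}, nH_n = {exact:.1f}, n log n = {n * math.log(n):.1f}")
```

For $n=50$ the exact mean $nH_n$ is about 225, noticeably larger than $n\log n \approx 196$; this gap is exactly what the harmonic number discussion below quantifies.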
We should ask just how good this approximation is, and so shortly we will move to a discussion of the harmonic numbers

$$H_n := 1 + \frac{1}{2} + \cdots + \frac{1}{n} = \sum_{k=1}^{n} \frac{1}{k}.$$
First though, we should settle the concentration question. Note that this is a genuinely different question from that of approximating the harmonic numbers: concentration asks how tightly the distribution is packed around its mean, whereas later we will ask exactly how close the mean is to $n\log n$.
The most crude measure is the variance. Note that in the above calculation we used linearity of expectation, which holds generally. We also have a result concerning linearity of variance, which Wikipedia asserts is commonly called Bienaymé’s formula, provided the summand random variables are independent, and that is the case here. Recall that the variance of a $\mathrm{Geom}(p)$ random variable is $\frac{q}{p^2}$, where $p+q=1$. So we obtain

$$\mathrm{Var}(T) = \sum_{k=0}^{n-1} \frac{k/n}{\left(\frac{n-k}{n}\right)^2} = \sum_{k=0}^{n-1} \frac{nk}{(n-k)^2} \leq n^2 \sum_{j=1}^{n} \frac{1}{j^2} \leq \frac{\pi^2}{6}\, n^2.$$
So the standard deviation is $O(n)$, an order of magnitude smaller than the expectation $n\log n$, hence by Chebyshev or similar, we can get some concentration bounds.
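To spell out the sort of bound this gives (my phrasing of the standard Chebyshev step, using the variance bound above): for any fixed $\varepsilon > 0$,

$$\mathbb{P}\left(|T - nH_n| \geq \varepsilon\, n\log n\right) \leq \frac{\mathrm{Var}(T)}{(\varepsilon\, n\log n)^2} \leq \frac{\pi^2}{6\varepsilon^2 (\log n)^2} \to 0,$$

so in particular $T / (n\log n) \to 1$ in probability.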
Harmonic Numbers
We return now to the question of how good an estimate $\log n$ is for $H_n$.
First, we note that $\log n$ must be an underestimate. In the following picture, the black curve is $y = 1/x$, and above each interval $[k, k+1]$ sits a block of width 1 and height $\frac{1}{k}$, so the blocks have total area $H_n$.

$\log n$ is the area under the black curve between 1 and $n$, whereas $H_n$ also includes the block between $n$ and $n+1$, and the red area above the curve. So, in the limit, the estimate is off by exactly the total red area (the block between $n$ and $n+1$ has area $\frac{1}{n}$, which decays to zero). The value of this area is defined to be the Euler-Mascheroni constant $\gamma$, a name attributed to the two men who first made a serious attempt to approximate it.
Why should this be finite? Well, note that if we translated each red piece horizontally so they lay one on top of the other, they would not overlap (apart from boundaries), and would be contained within the block with base $[1,2]$ and height 1. So the constant is bounded above by 1. Since the function $1/x$ is strictly convex, approximating each red piece by the triangle with the same corner points gives an underestimate; these triangles have total area $\frac{1}{2}\sum_{k\geq 1}\left(\frac{1}{k} - \frac{1}{k+1}\right) = \frac{1}{2}$, so the constant satisfies $\gamma > \frac{1}{2}$. The value turns out to be about 0.577.
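A quick numerical check of these bounds (not part of the argument; plain Python, no special libraries):

```python
import math

def harmonic(n):
    """H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1 / k for k in range(1, n + 1))

# H_n - log n decreases towards gamma, and stays strictly between 1/2 and 1
for n in (10, 1000, 100_000):
    print(n, harmonic(n) - math.log(n))
```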
So thus far we have the approximation

$$H_n \approx \log n + \gamma.$$
In fact a strong asymptotic statement is

$$H_n = \log n + \gamma + \frac{1}{2n} - \frac{1}{12n^2} + O\left(\frac{1}{n^4}\right).$$
Rather than address that, I will show the more modest bounds

$$\log n + \gamma \leq H_n \leq \log(n+1) + \gamma,$$

which in fact hold for every $n$, not just asymptotically.
We treat the upper bound first. Note that $H_n$ is the total area of the blocks, namely the area under the curve between 1 and $n+1$ plus the red area to the left of $x = n+1$, and this red area is at most $\gamma$. So, just by bounding with areas, noting that we need to take the upper limit of the integral to be $n+1$, we get:

$$H_n \leq \int_1^{n+1} \frac{dx}{x} + \gamma = \log(n+1) + \gamma.$$
For the lower bound, note that $H_n = \log(n+1) + \gamma - R$, where $R$ is the red area lying to the right of $x = n+1$; so it is required to show that

$$\int_n^{n+1} \frac{dx}{x} = \log(n+1) - \log n$$

is strictly greater than $R$, the red area contributing to $\gamma$ that lies to the right of $x = n+1$. But for this we can use the same argument as before: translating all of these red pieces as far to the left as possible, they lie on top of each other without overlapping, and in particular within a box of dimensions $1 \times \frac{1}{n+1}$. This box has area $\frac{1}{n+1}$, which is less than the integral, since the integrand $\frac{1}{x}$ exceeds $\frac{1}{n+1}$ throughout $[n, n+1)$.
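Again as a sanity check rather than a proof, the following verifies these bounds numerically (the value of $\gamma$ is hard-coded):

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def harmonic(n):
    return sum(1 / k for k in range(1, n + 1))

for n in (1, 5, 100, 10_000):
    h = harmonic(n)
    # log n + gamma <= H_n <= log(n+1) + gamma
    assert math.log(n) + GAMMA <= h <= math.log(n + 1) + GAMMA
    print(n, round(math.log(n) + GAMMA, 6), round(h, 6), round(math.log(n + 1) + GAMMA, 6))
```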
We conclude this article by relating these harmonic numbers to the Stirling numbers of the first kind, which are a particular example of Bell polynomials. The easiest way to state the result is

$$\left[{n+1 \atop 2}\right] = n!\, H_n,$$

where $\left[{n+1 \atop 2}\right]$, the unsigned Stirling number of the first kind, counts the permutations of $[n+1]$ with exactly two cycles.
To see why this might be, consider the number of permutations of $[n+1]$ with two cycles of lengths $k$ and $n+1-k$, where for now we assume these are not equal. Then the number of such permutations is

$$\binom{n+1}{k}\, (k-1)!\, (n-k)! = \frac{(n+1)!}{k(n+1-k)},$$

since we choose the $k$ elements of the first cycle and then arrange each set into a cycle.
After checking the case $k = \frac{n+1}{2}$ if applicable, and making sure all the bounds match up correctly, the result follows.
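For completeness, here is my sketch of how the sum works out; the post above leaves this to the reader. Summing the count over $k$ counts each permutation twice (once for each cycle), so

$$\left[{n+1 \atop 2}\right] = \frac{1}{2}\sum_{k=1}^{n} \frac{(n+1)!}{k(n+1-k)} = \frac{(n+1)!}{2(n+1)} \sum_{k=1}^{n}\left(\frac{1}{k} + \frac{1}{n+1-k}\right) = n!\, H_n,$$

using the partial fractions $\frac{1}{k(n+1-k)} = \frac{1}{n+1}\left(\frac{1}{k} + \frac{1}{n+1-k}\right)$. When $n$ is odd, the middle term $k = \frac{n+1}{2}$ appears only once in the sum, but the count $\frac{(n+1)!}{k^2}$ there is itself double the number of permutations with two equal cycle lengths, so the factor $\frac{1}{2}$ still gives the right answer.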