RMM 2017 – Problems 2, 3 and 6

In the previous post, I discussed Problems 1, 4 and 5 from this year’s Romanian Master of Mathematics competition. In this post, I discuss the harder problems (modulo my subjective appreciation of difficulty).

Problem 2

Determine all positive integers n satisfying the following condition: for every monic polynomial P of degree at most n with integer coefficients, there exists a positive integer k \leq n, and (k+1) distinct integers x_1,\ldots,x_{k+1} such that

P(x_1) + P(x_2) + \cdots + P(x_k) = P(x_{k+1}).

Parsing this question deserves at least a moment. Straight after a first reading, I find it worth writing down any key quantifiers which I might forget later. Here, it’s the words at most. If you want to show the statement holds for n=2, you need to investigate monic polynomials with degree zero, one and two. You should also make sure that any instances of x_i really are always distinct.

This matters in competitions! Two of our contestants failed to get the mark for showing n=2 works, precisely because of not checking the linear case, and a third could have lost it for using examples which are sometimes not distinct. On hard papers, one mark actually is the difference between triumph and frustration. And of course it matters outside competitions too, since small cases are exactly what your reader might examine first, to check they understand the problem posed, so it’s not a good place for awkward errors.

I started by trying to show that it couldn’t possibly happen that every polynomial with degree at most n had this property, for some combinatorial reason. For example, that if every set of distinct integers could only be a solution set for a small number of polynomials, then we would end up with not enough polynomials. But I couldn’t make this work at all; every bound ended up heavily in the wrong direction.

The next natural question is, does a typical polynomial of degree at most n have this property? But choosing a typical polynomial is hard, so in fact I asked, do the simplest polynomials of degree at most n have this property? I think the simplest polynomials of degree at most n are \{1,x,x^2,\ldots,x^n\}. Under what circumstances does

x_1^m + \ldots + x_k^m = x_{k+1}^m, (1)

have solutions in distinct integers? Famously, when k=2 and m\ge 3 this is a very very hard problem indeed. So the first point is that though it might be useful to use Fermat’s Last Theorem, it would be foolish to pursue a strategy which, if successful, would have a proof of FLT as a sub-problem. At least, it would be foolish if the aim was to finish this strategy within a few hours.

So my main comment on this question is meta-mathematical. If lots of attempts at general arguments don’t work, there must be some special example that does it. And what properties do I want this special example to have? Maybe one might have thought of this from scratch, but my motivation came from (1) in the case m=p-1. Then, by Fermat’s Little Theorem, all the summands are equal to 1 or 0 modulo p. If k>p, then after discounting any uniform factors of p, we obtain a congruence equation which is, in informal terms,

\left(0\text{ or }1\right)+\ldots+\left(0\text{ or }1\right) \equiv \left(0\text{ or }1\right).

This looks really promising because it’s quite restrictive, but it’s still just a bit annoying: there are quite a few solutions. But it does give us the right idea, which is to find a polynomial P for which P(x)\equiv 1 modulo n. The equation 1+\ldots+1\equiv 1 modulo n has solutions only if the number of summands on the LHS is 1 modulo n. So in this context, this reduces to showing that P is, additionally, injective on the integers, ie that P(x)=P(y) only when x=y.

It’s a nice exercise to show the existence of polynomials which are constant modulo n, and a good problem to work out how to force injectivity. If a polynomial is increasing everywhere, then it is certainly injective, and so the problem ends up being slightly easier in the case where the degree is odd than when the degree is even, but this is a nice conclusion to a nice problem, so I’ll save it for any interested readers to finish themselves.
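To make the shape of the construction concrete, here is a quick computational sanity check for the case n=3. The particular polynomial is my own choice for illustration, not necessarily the intended one; the sketch just confirms it has the two properties the argument above asks for.

```python
# Sketch check (my own example, for n = 3): P(x) = x^3 + 2x + 1 is monic of
# degree 3, satisfies P(x) == 1 mod 3 for every integer x (since x^3 == x mod 3
# by Fermat's Little Theorem), and is strictly increasing, hence injective.
# Any sum P(x_1) + ... + P(x_k) with k <= 3 is congruent to k mod 3, so it can
# only equal P(x_{k+1}) == 1 mod 3 when k = 1, which injectivity rules out.

P = lambda x: x**3 + 2*x + 1

assert all(P(x) % 3 == 1 for x in range(-1000, 1001))
assert all(P(x) < P(x + 1) for x in range(-1000, 1001))
print("P is constant 1 modulo 3 and increasing on the sampled range")
```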

Problem 3

Let n be an integer greater than 1 and let X be an n-element set. A non-empty collection of subsets A_1,\ldots, A_k of X is tight if the union A_1 \cup \dots \cup A_k is a proper subset of X and no element of X lies in exactly one of the A_i's. Find the largest cardinality of a collection of proper non-empty subsets of X, no non-empty subcollection of which is tight.

Note. A subset A of X is proper if A\neq X. The sets in a collection are assumed to be distinct. The whole collection is assumed to be a subcollection.

By Neel Nanda:

If |X|=n, there are 2^n possible subsets, so at first glance the answer could be a variety of things, from a linear to an exponential function of n, each of which would suggest a different approach. So the first step is to conjecture an answer, and by examining small cases it seems impossible to do better than 2n-2. There are several natural constructions for this bound, such as n subsets of size (n-1) and (n-2) subsets of size 1, so we guess this to be our answer (which later turns out to be right!).
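For what it’s worth, the construction is easy to machine-check for small n. The sketch below is my own code, using the complements-of-singletons version of the construction just described, and verifies that no non-empty subcollection is tight:

```python
from itertools import combinations

# Verify the 2n-2 construction for small n: all n subsets of size n-1
# (complements of singletons), together with n-2 of the singletons.

def is_tight(sub, X):
    union = set().union(*sub)
    if union == X:
        return False                       # the union must be a proper subset
    # elements outside the union lie in zero of the sets, so checking the
    # union is enough: tight means no element lies in exactly one set
    return all(sum(x in A for A in sub) != 1 for x in union)

for n in [3, 4, 5]:
    X = set(range(n))
    coll = [X - {i} for i in range(n)] + [{i} for i in range(n - 2)]
    assert len(coll) == 2 * n - 2
    for k in range(1, len(coll) + 1):
        for sub in combinations(coll, k):
            assert not is_tight(sub, X)
print("no tight subcollection for n = 3, 4, 5")
```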

From here, a solution is deceptively simple, though empirically the five full solutions in the contest show that it was by no means easy to find. We proceed by induction on the size of X, and want to show that any collection of subsets S has size at most (2n-2). By assumption, no subcollection is tight, so if the union of a subcollection is not the whole set X, then there is an element which appears in exactly one subset. This is a useful result, so we’d like to force a subcollection whose union is not the whole set X.

One way to guarantee that the union of a subcollection is not X is by taking the subcollection of all subsets not containing some element b. So there is some element c which appears in only one subset not containing b. If we choose b so that it’s the element contained in the fewest subsets of S, c is in at least as many subsets of S, but in only one subset not containing b. This means that at most one subset containing b doesn’t contain c. This is useful, because after removing at most 2 subsets (the coefficient of n in 2n-2, importantly!), we now have that every subset in S either contains both b and c or neither. This means that we can replace the pair (b,c) with a new element d, to get a new collection of subsets S’ of a set X’, of size n-1, so by induction |S| \le |S'|+2\le 2n-2.

There is also the case where all subsets contain b, but we can create an equivalent collection of subsets of X \ {b} by removing b from all subsets. So again by induction we are done.

Problem 6

Let ABCD be any convex quadrilateral and let P, Q, R, S be points on the segments AB, BC, CD, and DA, respectively. It is given that the segments PR and QS dissect ABCD into four quadrilaterals, each of which has perpendicular diagonals. Show that the points P, Q, R, S are concyclic.

I thought this problem was extremely hard. The official solution starts with a ‘magic lemma’, that isn’t quite so magic if you then read how it’s used. The overall claim is that PQ, RS and AC are concurrent (or parallel), and this is proved using the fact that the radical axis of the two circles with diameters PQ and RS also passes through this point of concurrency. Hunting for key properties of subsets of points in the diagram is an important skill in hard olympiad geometry, since it exactly reflects how problem-setters produce the problems. All the more so when there is lots of symmetry in the construction. But this is a hard example – there are a lot of potentially relevant subsets of the configuration.

When you’re really stuck with how to get involved in a synthetic configuration, you might consider using coordinates. Some of the UK students have been reading some chapters of a book (Euclidean Geometry in Mathematical Olympiads by Evan Chen. I’ve only had my own copy for a couple of days, but my initial impression is very positive – it fills a gap in the literature in a style that’s both comprehensive and readable.) focusing on various analytic approaches, so James and I felt it was safer to make sure we knew what the best settings were, and how far we could take them.

You almost certainly want the intersection of PR and QS to be your origin. I wanted to set up the configuration using the language of vectors, referenced by (P,Q,R,S). This was because PQ\perp BO and so on, hence \mathbf{b}\cdot (\mathbf{q}-\mathbf{p})=0 and so on. An alternative is to use complex numbers, which makes this condition a bit more awkward, but is more promising for the conclusion. Concyclity is not a natural property in vectors unless you can characterise the centre of the circle, but can be treated via cross-ratios in \mathbb{C}. You also have to decide whether to describe the collinearity of A, B and P by expressing \mathbf{p}=\lambda_{\mathbf{p}} \mathbf{a}+(1-\lambda_{\mathbf{p}})\mathbf{b}, or via something more implicit. There definitely are not four degrees of freedom here, since specifying A certainly defines at most one valid set of (B,C,D), so one is mindful we’ll have to eliminate many variables later. We also have to account for the fact that \mathbf{r} is a negative scalar multiple of \mathbf{p}, and it’s not clear whether it’s better to break symmetry immediately, or use this towards the end of a calculation.

The point of writing this was that if your initial thought was ‘this looks promising via coordinate methods’, then I guess I agree. But there’s a difference between looking promising, and actually working, and there are lots of parameterisation options. It’s certainly worth thinking very carefully about which to choose, and in this case, challenging though they were, the synthetic or synthetic-trigonometric methods probably were better.

Antichains in the grid

In the previous post on this topic, we discussed Dilworth’s theorem on chains and antichains in a general partially ordered set. In particular, whatever the size of the largest antichain in a poset, it is possible to partition the poset into exactly that many chains. So for various specific posets, or the directed acyclic graphs associated to them, we are interested in the size of this largest antichain.

The following example turned out to be more interesting than I’d expected. At a conventional modern maths olympiad, there are typically three questions on each paper, and for reasons lost in the mists of time, each student receives an integer score between 0 and 7 per question. A natural question to ask is “how many students need to sit a paper before it’s guaranteed that one will score at least as highly as another on every question?” (I’m posing this as a straight combinatorial problem – the correlation between scores on different questions will be non-zero and presumably positive, but that is not relevant here.)

The set of outcomes is clearly \{0,1,\ldots,7\}^3, with the usual weak domination partial order inherited from \mathbb{R}^3. Then an antichain corresponds to a set of triples of scores such that no triple dominates another triple. So the answer to the question posed is: “the size of the largest antichain in this poset, plus one.”
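Since the n=8, d=3 case is tiny, a direct count is easy. Here is a quick script of mine; that the biggest layer really is the biggest antichain overall is the content of the result discussed in the rest of this post.

```python
from itertools import product
from collections import Counter

# Group the triples in {0,...,7}^3 by total score; each layer is an antichain.
layers = Counter(sum(t) for t in product(range(8), repeat=3))
biggest = max(layers.values())
print(biggest, sorted(s for s, c in layers.items() if c == biggest))
# -> 48 [10, 11], so 49 students guarantee a dominating pair of score triples
```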

In general, we might ask about \{1,2,\ldots,n\}^d, again with the weak domination ordering. This directed graph, which generalises the hypercube as well as our example, is called the grid.

Heuristics for the largest antichain

Retaining the language of test scores on multiple questions is helpful. In the previous post, we constructed a partition of the poset into antichains, indexed by the elements of some maximal chain, by starting with the sources, then looking at everything descended only from sources, and so on. (Recall that the statement that this is possible was referred to as the dual of Dilworth’s theorem.) In the grid, there’s a lot of symmetry (in particular under the mapping x\mapsto n+1-x in every coordinate), and so you end up with the same family of antichains whether you work upwards from the sources or downwards from the sinks. (Or vice versa depending on how you’ve oriented your diagram…) The layers of antichains also have a natural interpretation – each layer corresponds to a given total score. It’s clear a priori why each of these is an antichain. If A scores the same as B overall, but strictly more on the first question, this must be counterbalanced by a strictly lower score on another question.

So a natural guess for the largest antichain is the largest antichain corresponding to some fixed total score. Which total score should this be? It ought to be the middle layer, that is total score \frac{(n+1)d}{2}, or the two values directly on either side if this isn’t an integer. My intuition was probabilistic. The uniform distribution on the grid is achieved by IID uniform distributions in each coordinate, which you can think of as a random walk, especially if you subtract off the mean first. It feels that any symmetric random walk should have mode zero or next-to-zero. Certainly this works asymptotically in a rescaled sense by CLT, and in a slightly stronger sense by local CLT, but we don’t really want asymptotics here.

When I started writing the previous paragraph, I assumed there would be a simple justification for the claim that the middle layer(s) was largest, whether by straight enumeration, or some combinatorial argument, or even generating functions. Perhaps there is, and I didn’t spot it. Induction on d definitely works though, with a slightly stronger hypothesis that the layer sizes are symmetric around the median, and monotone on either side of the median. The details are simple and not especially interesting, so I won’t go into them.
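For anyone who wants numerical reassurance without the induction: the layer sizes are the coefficients of (1+x+\ldots+x^{n-1})^d, and a short check of symmetry and unimodality (my own sketch) runs as follows.

```python
# Layer sizes of {0,...,n-1}^d are the coefficients of (1 + x + ... + x^(n-1))^d.

def layer_sizes(n, d):
    coeffs = [1]
    for _ in range(d):
        new = [0] * (len(coeffs) + n - 1)
        for i, c in enumerate(coeffs):
            for j in range(n):
                new[i + j] += c            # multiply by 1 + x + ... + x^(n-1)
        coeffs = new
    return coeffs

for n in range(2, 8):
    for d in range(1, 6):
        sizes = layer_sizes(n, d)
        assert sizes == sizes[::-1]                               # symmetric
        mid = len(sizes) // 2
        assert all(sizes[i] <= sizes[i + 1] for i in range(mid))  # unimodal
print("layer sizes are symmetric and unimodal for all tested (n, d)")
```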

From now on, the hypothesis is that this middle layer of the grid is the largest antichain. Why shouldn’t it, for example, be some mixture of middle-ish layers? (*) Well, heuristically, any score sequence in one layer removes several possibilities from a directly adjacent layer, and it seems unlikely that this effect is going to cancel out if you take some intermediate number of score sequences in the first layer. Also, the layers get smaller as you go away from the middle, so because of the large amount of symmetry (coordinates are exchangeable etc), it feels reasonable that there should be surjections between layers in the outward direction from the middle. The union of all these surjections gives a decomposition into chains.

This result is in fact true, and its proof by Bollobas and Leader, using shadows and compression can be found in the very readable Sections 0 and 1 of [1].

Most of the key ideas to a compression argument are present in the case n=2, for which some notes by Leader can be found here, starting with Proof 1 of Theorem 3, the approach of which is developed over subsequent sections. We treat the case n=2, but focusing on a particularly slick approach that does not generalise as successfully. We also return to the original case d=3 without using anything especially exotic.

Largest antichain in the hypercube – Sperner’s Theorem

The hypercube \{0,1\}^d is the classical example. There is a natural correspondence between the vertices of the hypercube, and subsets of [d]. The ordering on the hypercube corresponds to the ordering given by containment on \mathcal{P}([d]). Almost by definition, the k-th layer corresponds to subsets of size k, and thus includes \binom{d}{k} subsets. The claim is that the size of the largest antichain is \binom{d}{\lfloor d/2 \rfloor}, corresponding to the middle layer if d is even, and one of the two middle layers if d is odd. This result is true, and is called Sperner’s theorem.

I know a few proofs of this from the Combinatorics course I attended in my final year at Cambridge. As explained, I’m mostly going to ignore the arguments using compression and shadows, even though these generalise better.

As in the previous post, one approach is to exhibit a covering family of exactly this number of disjoint chains. Indeed, this can be done layer by layer, working outwards from the middle layer(s). The tool here is Hall’s Marriage Theorem, and we verify the relevant condition by double-counting. Probably the hardest case is demonstrating the existence of a matching between the middle pair of layers when d is odd.

Take d odd, and let d':= \lfloor d/2\rfloor. Now consider any subset S of the d’-th layer \binom{[d]}{d'}. We now let the upper shadow of S be

\partial^+(S):= \{A\in \binom{[d]}{d'+1}\,:\, \exists B\in S, B\subset A\},

the sets in the (d’+1)-th layer which lie above some set in S. To apply Hall’s Marriage theorem, we have to show that |\partial^+(S)|\ge |S| for every choice of S.

We double-count the number of edges in the hypercube from S to \partial^+(S). Firstly, for every element B\in S, there are exactly d’+1 relevant edges, since B has d-d'=d'+1 supersets in the layer above. Secondly, for every element A\in\partial^+(S), there are exactly d’+1 edges down to elements of \binom{[d]}{d'}, and so in particular there are at most d’+1 edges to elements of S. Thus

(d'+1) |S|=|\text{edges }S\leftrightarrow\partial^+(S)| \le (d'+1) |\partial^+(S)|,

which is exactly what we require for Hall’s MT. The argument for the matching between other layers is the same, with a bit more notation, but also more flexibility, since it isn’t a perfect matching.
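The Hall condition can also be checked exhaustively for a small odd case, say d=5, where the middle pair of layers are the 2-subsets and 3-subsets of [5]. A brute-force sketch of mine:

```python
from itertools import combinations

d, dd = 5, 2                                 # d odd, dd = floor(d/2)
lower = list(combinations(range(d), dd))
upper = list(combinations(range(d), dd + 1))

def upper_shadow(S):
    return {A for A in upper if any(set(B) <= set(A) for B in S)}

# Hall's condition: every non-empty S in the lower layer has |shadow| >= |S|.
for mask in range(1, 1 << len(lower)):
    S = [B for i, B in enumerate(lower) if mask >> i & 1]
    assert len(upper_shadow(S)) >= len(S)
print("Hall's condition holds, so a perfect matching between the layers exists")
```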

The second proof looks at maximal chains. Recall, in this context, a maximal chain is a sequence \mathcal{C}=B_0\subset B_1\subset\ldots\subset B_d where each B_k\in \binom{[d]}{k}. We now consider some largest-possible antichain \mathcal{A}, and count how many maximal chains include an element A\in\mathcal{A}. If |A|=k, it’s easy to convince yourself that there are k!\,(d-k)! such maximal chains. However, given A\ne A'\in\mathcal{A}, the set of maximal chains containing A and the set of maximal chains containing A’ are disjoint, since \mathcal{A} is an antichain. From this, we obtain

\sum_{A\in\mathcal{A}} |A|!\,(d-|A|)! \le d!. (**)

Normally after a change of notation, so that we are counting the size of the intersection of the antichain with each layer, this is called the LYM inequality after Lubell, Yamamoto and Meshalkin. The heuristic is that the sum of the proportions of layers taken up by the antichain is at most one. This is essentially the same as earlier at (*). This argument can also be phrased probabilistically, by choosing a *random* maximal chain, and considering the probability that it intersects the proposed largest antichain, which is, naturally, at most one. Of course, the content is the same as this deterministic combinatorial argument.

Either way, from (**), the statement of Sperner’s theorem follows rapidly: dividing through by d! gives \sum_{A\in\mathcal{A}} 1/\binom{d}{|A|}\le 1, and we know that \binom{d}{|A|}\le \binom{d}{\lfloor d/2\rfloor} for all A.
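Sperner’s theorem itself is small enough to brute-force for d=4, which I find reassuring (a quick script of mine):

```python
from itertools import combinations

# Brute force over all 2^16 families of subsets of [4]: the largest antichain
# should have size binomial(4, 2) = 6, as Sperner's theorem predicts.
d = 4
subsets = list(range(1 << d))                # subsets of [4] as bitmasks
comparable = [[(a | b) == b or (a | b) == a for b in subsets] for a in subsets]

best = 0
for fam_mask in range(1 << len(subsets)):
    fam = [s for s in subsets if fam_mask >> s & 1]
    if all(not comparable[a][b] for a, b in combinations(fam, 2)):
        best = max(best, len(fam))
print(best)                                  # -> 6
```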

Largest antichain in the general grid

Instead of attempting a proof or even a digest of the argument in the general case, I’ll give a brief outline of why the previous arguments don’t transfer immediately. It’s pretty much the same reason for both approaches. In the hypercube, there is a lot of symmetry within each layer. Indeed, almost by definition, any vertex in the k-th layer can be obtained from any other vertex in the k-th layer just by permuting the labels (or permuting the coordinates if thinking as a vector).

The hypercube ‘looks the same’ from every vertex, but that is not true of the grid. Consider for clarity the n=8, d=3 case we discussed right at the beginning, and compare the scores (7,0,0) and (2,2,3). The number of maximal chains through (7,0,0) is \binom{14}{7}, while the number of maximal chains through (2,2,3) is \binom{7}{2, 2,3}\binom{14}{4,5,5}, and the latter is a lot larger, which means any attempt to use the second argument is going to be tricky, or at least require an extra layer of detail. Indeed, exactly the same problem arises when we try and use Hall’s condition to construct the optimal chain covering directly. In the double-counting section, it’s a lot more complicated than just multiplying by d’, as was the case in the middle of the hypercube.

Largest antichain in the d=3 grid

We can, however, do the d=3 case. As we will see, the main reason we can do the d=3 case is that the d=2 case is very tractable, and we have lots of choices for the chain coverings, and can choose one which is well-suited to the move to d=3. Indeed, when I set this problem to some students, an explicit listing of a maximal chain covering was the approach some of them went for, and the construction wasn’t too horrible to state.

[Another factor is that it is computationally feasible to calculate the size of the middle layer, which is much more annoying when d>3.]

[I’m redefining the grid here as \{0,1,\ldots,n-1\}^d rather than \{1,2,\ldots,n\}^d.]

The case distinction between n even and n odd is going to make both the calculation and the argument annoying, so I’m only going to treat the even case, since n=8 was the original problem posed. I should be honest and confess that I haven’t checked the n odd case, but I assume it’s similar.

So when n is even, there are two middle layers, namely \frac{3n}{2}-2 and \frac{3n}{2}-1 (corresponding to total scores 10 and 11 in the original problem). I calculated the number of elements in the \frac{3n}{2}-1 layer by splitting based on the value of the first coordinate. I found it helpful to decompose the resulting sum as

\sum_{k=0}^{n-1} = \sum_{k=0}^{\frac{n}{2}-1} + \sum_{k=\frac{n}{2}}^{n-1},

based on whether there is an upper bound, or a lower bound on the value taken by the second coordinate. This is not very interesting, and I obtained the answer \frac{3n^2}{4}, and of course this is an integer, since n is even.

Now to show that any antichain has size at most \frac{3n^2}{4}. Here we use our good control on the chain coverings in the case d=2. We note that there is a chain covering of the (n,d=2) grid where the chains have 2n-1, 2n-3,…, 3, 1 elements (%). We get this by starting with a maximal chain, then taking a maximal chain on what remains etc. It’s pretty much the first thing you’re likely to try.
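Concretely, here is one explicit realisation of (%) in code (my choice of staircase chains; the k-th chain runs along a row and then down the last column not yet used):

```python
from itertools import product

# One explicit version of (%): chain k covers row k up to column n-1-k, then
# goes down column n-1-k, giving sizes 2n-1, 2n-3, ..., 3, 1.

def hook_chains(n):
    chains = []
    for k in range(n):
        along = [(k, j) for j in range(n - k)]
        down = [(i, n - 1 - k) for i in range(k + 1, n)]
        chains.append(along + down)
    return chains

for n in [3, 5, 8]:
    chains = hook_chains(n)
    assert sorted(len(c) for c in chains) == list(range(1, 2 * n, 2))
    assert all(a[0] <= b[0] and a[1] <= b[1]                  # genuinely chains
               for c in chains for a, b in zip(c, c[1:]))
    assert sorted(p for c in chains for p in c) == sorted(product(range(n), repeat=2))
print("chain covering (%) verified for n = 3, 5, 8")
```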

Consider an antichain with size A in the (n,d=3) grid, and project into the second and third coordinates. The image sets are distinct, because otherwise a non-trivial pre-image would be a chain. So we have A sets in the (n,d=2) grid. How many can be in each chain in the decomposition (%)? Well, if there are more than n in any chain in (%), then two must have been mapped from elements of the (n,d=3) grid with the same first coordinate, and so satisfy a containment relation. So in fact there are at most n image points in any of the chains of (%). So we now have a bound of n^2. But of course, some of the chains in (%) have length less than n, so we are throwing away information. Indeed, the number of images points in a given chain is at most

\min(n,\text{length of chain}),

and so the number of image points in total is bounded by

n+\ldots+n+ (n-1)+(n-3)+\ldots+1,

where there are n/2 copies of n in the first half of the sum. Evaluating this sum gives \frac{3n^2}{4}, exactly as we wanted.
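As a final check, Dilworth’s theorem (discussed in the previous post, and below) reduces the maximum antichain to a bipartite matching computation, which is feasible for small n. The sketch below is mine, using the standard matching reduction, and confirms 3n^2/4 for n=2 and n=4:

```python
from itertools import product

# Max antichain = min chain cover = |P| - (max matching on strict comparabilities).

def max_antichain(n, d=3):
    P = list(product(range(n), repeat=d))
    idx = {p: i for i, p in enumerate(P)}
    adj = [[idx[q] for q in P if p != q and all(a <= b for a, b in zip(p, q))]
           for p in P]
    match = [-1] * len(P)                    # match[right] = matched left vertex

    def augment(u, seen):                    # Kuhn's augmenting-path step
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match[v] == -1 or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    matching = sum(augment(u, set()) for u in range(len(P)))
    return len(P) - matching

for n in [2, 4]:
    assert max_antichain(n) == 3 * n * n // 4
print("maximum antichain is 3n^2/4 for n = 2, 4")
```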

References

[1] – Bollobas, Leader (1991) – Compressions and Isoperimetric Inequalities. Available open-access here.

Chains and antichains

I’ve recently been at the UK-Hungary winter olympiad camp in Tata, for what is now my sixth time. As well as doing some of my own work, I have enjoyed the rare diversion of some deterministic combinatorics. It seems to be a local variant of the pigeonhole principle that given six days at a mathematical event in Hungary, at least one element from {Ramsey theory, Erdos-Szekeres, antichains in the hypercube} will be discussed, with probability one. On this occasion, all were discussed, so I thought I’d write something about at least one of them.

Posets and directed acyclic graphs

This came up on the problem set constructed by the Hungarian leaders. The original formulation asked students to show that among any 17 positive integers, there are either five such that no one divides any other, or five such that among any pair, one divides the other.

It is fairly clear why number theory plays little role. We assign the given integers to the vertices of a graph, and whenever a divides b, we add a directed edge from the vertex corresponding to a to the vertex corresponding to b. Having translated the given situation into a purely combinatorial statement, fortunately we can translate the goal into the same language. If we can find a chain of four directed edges (hence five vertices – beware confusing use of the word ‘length’ here) then we have found the second possible option. Similarly, if we can find an antichain, a set of five vertices with no directed edges between them, then we have found the first possible option.

It’s worth noting that the directed graph we are working with is transitive. That is, whenever there is an edge a\rightarrow b and an edge b\rightarrow c, there will also be an edge a\rightarrow c. This follows immediately from the divisibility condition. There are also no directed cycles in the graph, since otherwise there would be a cycle of integers where each divided its successor. But of course, when a divides b and these are distinct positive integers, this means that b is strictly larger than a, and so this relation cannot cycle.

In fact, among a set of positive integers, divisibility defines a partial order, which we might choose to define as any ordering where the associated directed graph is transitive and acyclic, although obviously we could use language more naturally associated with orderings. Either way, from now on we consider posets and the associated DAGs (directed acyclic graphs) interchangeably.

Dilworth’s theorem

In the original problem, we are looking for either a large chain, or a large antichain. We are trying to prove that it’s not possible to have largest chain size at most four, and largest antichain size at most four when there are 17 vertices, so we suspect there may be some underlying structure: in some sense perhaps the vertex set is the ‘product’ of a chain and an antichain, or at least there is a method of producing an antichain from each element of some chain.

Anyway, one statement of Dilworth’s theorem is as follows:

Statement 1: in a poset with nm+1 elements, there is either a chain of size n+1, or an antichain of size m+1.

Taking n=m=4 immediately finishes the original problem about families of divisors. While this is the most useful statement here, it’s probably not the original, which says the following:

Statement 2: in a poset, there exists \mathcal{C} a decomposition into chains, and an antichain A such that |\mathcal{C}|=|A|.

Remark 1: Note that for any decomposition into chains and any antichain, we have |\mathcal{C}|\ge |A|, since you can’t have more than one representative from any chain in the antichain. So Statement 2 is saying that equality does actually hold.

Remark 2: Statement 1 follows immediately from Statement 2. If all antichains had size at most m, then by Statement 2 there’s a decomposition into at most m chains. But each chain has size at most n, so the total size of the graph is at most mn. Contradiction.

Unsuccessful proof strategies for Dilworth

Since various smart young people who didn’t know the statement or proof of Dilworth’s theorem attempted to find it (in the form of Statement 1, and in a special case) in finite time conditions, it’s easy to talk about what doesn’t work, and try to gain intellectual value by qualifying why.

  • Forgetting directions: in general one might well attack a problem by asking whether we have more information than we need. But ignoring the directions of the edges is throwing away too much information. After doing this, antichains are fine, but maybe you need to exhibit some undirected ‘chains’. Unless these undirected chains are much longer than you are aiming for, you will struggle to reconstruct directed chains out of them.
  • Where can the final vertex go?: in a classic trope, one might exhibit a directed graph on nm vertices with neither a chain of size n+1 nor an antichain of size m+1. We attempt to argue that this construction is essentially unique, and that it goes wrong when we add an extra vertex. As a general point, it seems unlikely to be easier to prove that exactly one class of configurations has a given property in the nm case, than to prove no configurations has the same property in the nm+1 case. A standalone proof of uniqueness is likely to be hard, or a disguised rehash of an actual proof of the original statement.
  • Removing a chain: If you remove a chain of maximal length (which, for contradiction, is n), what you have left is n(m-1)+1 vertices. If you have a chain of size n+1 left, then you’re done, although maximality has gone wrong somewhere. So by induction you have an antichain of size m in what remains. But it’s totally unclear why it should be possible to extend the antichain with one of the vertices you’ve just removed.

An actual proof of Dilworth (Statement 1), and two consequences

This isn’t really a proof, instead a way of classifying the vertices in the directed graph so that this version of Dilworth’s theorem follows immediately. As we said earlier, we imagine there may be some product structure. In particular, we expect to be able to find a maximal chain, and a nice antichain associated to each element of the maximal chain.

We start by letting V_0 consist of all the vertices which are sources, that is, have zero indegree. These are minima in the partial ordering setting. Now let V_1 consist of all vertices whose in-neighbourhood is entirely contained in V_0, that is, they are descendants only of V_0. Then let V_2 consist of all remaining vertices whose in-neighbourhood is entirely contained in V_0\cup V_1 (but not entirely in V_0, otherwise it would have already been treated), and so on. We end up with what one might call an onion decomposition of the vertices, V_0,V_1,\ldots,V_k, based on how far they are from the sources, and then we can find a chain of size k+1 by starting with any vertex in V_k and constructing backwards towards the source. However, this is also the largest possible size of a chain, because every time we move up a level in the chain, we must move from V_i to V_j where j>i.

It’s easy to check that each V_i is an antichain, and thus we can read off Statement 1. A little more care, and probably an inductive argument is required to settle Statement 2.
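Here is the onion decomposition in code, for the divisibility poset that started all this (my own sketch; in the transitive DAG, the in-neighbourhood of b is just the set of proper divisors of b within our set):

```python
# V_0 = sources; thereafter, a vertex joins the first level by which all of
# its in-neighbours (here: proper divisors in the set) have been placed.

def onion_levels(elements):
    below = {b: {a for a in elements if a != b and b % a == 0} for b in elements}
    levels, placed = [], set()
    while len(placed) < len(elements):
        level = {b for b in elements - placed if below[b] <= placed}
        levels.append(level)
        placed |= level
    return levels

levels = onion_levels(set(range(2, 31)))
for level in levels:                         # each level is an antichain
    assert not any(a != b and b % a == 0 for a in level for b in level)
print(len(levels), [sorted(l) for l in levels])
# number of levels = length of the longest chain (here 4, e.g. 2 | 4 | 8 | 16)
```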

We have however proved what is often called the dual of Dilworth’s theorem, namely that in a poset there exists a chain C, and a decomposition into a collection \mathcal{A} of antichains, for which |C|=|\mathcal{A}|.

Finally, as promised, returning to Erdos-Szekeres, if not to positive integers. We apply Dilworth Statement 1 to a sequence of m^2+1 real numbers a_0,a_1,\ldots,a_{m^2}, with the ordering a_i\rightarrow a_j if i\le j and a_i\le a_j. Chains correspond to increasing subsequences, and antichains to decreasing subsequences, so we have shown that there is a monotone subsequence of length m+1.
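This consequence is easy to sanity-check exhaustively for m=2 (a throwaway script of mine):

```python
from itertools import permutations

# Erdos-Szekeres checked for m = 2: every sequence of m^2 + 1 = 5 distinct
# values has a monotone subsequence of length 3, while m^2 values can fail.

def longest_monotone(seq):
    best = 1
    for sign in (1, -1):
        dp = [1] * len(seq)            # dp[j] = longest monotone run ending at j
        for j in range(len(seq)):
            for i in range(j):
                if sign * seq[i] <= sign * seq[j]:
                    dp[j] = max(dp[j], dp[i] + 1)
        best = max(best, max(dp))
    return best

assert all(longest_monotone(p) >= 3 for p in permutations(range(5)))
assert longest_monotone((2, 1, 0, 5, 4, 3, 8, 7, 6)) == 3   # 9 = m^2 with m = 3
```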

 

Turan’s Theorem

Turan’s theorem gives bounds on the number of edges required in a graph on a fixed number of vertices n to guarantee it contains a complete graph of size r+1. Equivalently, an upper bound on the number of edges in a K_{r+1}-free graph. For some of the applications and proofs, it may be more natural to look instead at the complement graph, for which the theorem becomes a statement about the existence or otherwise of an independent set of size r+1.

Rather than give an expression for the bound immediately, it is more natural to consider the Turan graph T(n,r), the maximal graph on n vertices without a copy of K_{r+1}. This is constructed by dividing the vertices into r classes with ‘as equal size as possible’. That is, some classes have size \lfloor \frac{n}{r}\rfloor and others have size \lfloor \frac{n}{r}\rfloor +1. Then connect any pair of vertices which are not in the same class by an edge. This gives a complete r-partite graph on these classes. Since any collection of r+1 vertices contains at least two in the same class, it can’t contain a K_{r+1}. Note that the complement of the complete r-partite graph is the union of r disjoint complete graphs on the classes.

There are a number of ways to enumerate the edges in T(n,r), and some can get quite complicated quite quickly. After a moderate amount of thought, this is my favourite. Let n=\ell r+k, so T(n,r) has k classes of size (\ell+1) and (r-k) classes of size \ell. Pick an ordered pair of vertices uniformly at random. (So picking the same vertex twice is indeed an option, and each unordered pair gets counted twice.) The first vertex lies in a class of size (\ell+1) with probability \frac{k(\ell+1)}{n}, in which case the second vertex falls in the same class with probability \frac{\ell+1}{n}, and similarly for the classes of size \ell. So the probability they are in the same class is

\frac{k(\ell+1)}{n}\cdot\frac{\ell+1}{n}+\frac{(r-k)\ell}{n}\cdot \frac{\ell}{n} \ge \frac{1}{r},

with equality precisely when k=0, by Cauchy-Schwarz (or convexity). So the probability they are in different classes is at most \frac{r-1}{r}, and we can treat all of the n^2 ordered pairs in this way, noting a) that we count every edge twice; and b) we know a priori that we don’t have loops, so the fact that we’ve included these in the count doesn’t matter. We end up with the bound (1-\frac{1}{r})\frac{n^2}{2} for the number of edges in T(n,r), which is an exact enumeration precisely when r divides n.
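In general the exact count turns out to be the floor of this bound, an identity I believe is standard, and which the quick check below (my own) confirms over a decent range:

```python
from math import comb

# Exact edge count of T(n, r): all pairs, minus the pairs inside each class.

def turan_edges(n, r):
    sizes = [n // r + (1 if i < n % r else 0) for i in range(r)]
    return comb(n, 2) - sum(comb(s, 2) for s in sizes)

for n in range(1, 200):
    for r in range(1, n + 1):
        # floor of (1 - 1/r) n^2 / 2, in exact integer arithmetic
        assert turan_edges(n, r) == (r - 1) * n * n // (2 * r)
print("edge counts match the floored bound for all tested (n, r)")
```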

A standard proof

For both proofs, I find it slightly easier to work in the complement graph, where we are aiming for the smallest number of edges in a graph with no independent set of size (r+1). Suppose we have such a graph with the minimal number of edges. Suppose also that there is an edge joining vertices v and w, such that d(v)> d(w). Then if we change v’s neighbourhood \Gamma(v) so that it becomes the same as \Gamma(w), (that is, we replace v with a copy of w, and maintain the original edge vw), then it is easily checked that we still do not have an independent set of that size, but we have fewer edges, contradicting minimality.

Note that by attempting to make the neighbourhoods of connected vertices equal, we are making the graph look more like a union of complete components. We can do a similar trick if we have three vertices u,v,w such that there are edges between u and v and v and w, but not u and w. Then we know the degrees of u,v,w are the same by the previous argument, and so it can again be checked that making \Gamma(u),\Gamma(w) the same as \Gamma(v), and adding the edge uw reduces the number of edges, and maintains the non-existence of the independent set.

The consequence of this is that we’ve shown that the minimum can only be attained when presence of edges is an equivalence relation (ignoring reflexivity). Thus the minimum is only attained for a union of at most r complete graphs. Jensen (or any root-mean-square type inequality) will then confirm that the true minimum is attained when the sizes of the r components are as equal as possible.

A probabilistic proof

The following probabilistic proof is courtesy of Alon and Spencer. The motivation is that in the (equality) case of a union of complete graphs, however we try to build up a maximal independent set, we always succeed. That is, it doesn’t matter how we choose which vertex (unconnected to those we already have) to add next – we will always get a set of size r. This motivates a probabilistic proof, as an argument in expectation will have equality in the equality case, which is always good.

Anyway, we build up an independent set in a graph by choosing uniformly at random a vertex which is not connected to any we have so far, until this set of vertices is empty. It makes sense to settle the randomness at the start, so give the vertices a uniform random labelling on [n], and at each stage, choose the independent vertex with minimal label.

Thus, a vertex v will be chosen for the independent set if, and only if, it has a smaller label than all of its neighbours, that is, with probability \frac{1}{1+d(v)}. So the expected size of the independent set constructed in this fashion is

\sum_{v\in V(G)} \frac{1}{1+d(v)}\ge \frac{V}{1+\bar d} = \frac{V}{1+\frac{2E}{V}}.

One can chase through the expressions to get the bound we want back.
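A small simulation (my own, on an arbitrary random graph) shows the chain of inequalities in action:

```python
import random

# Greedy construction: process vertices in a uniformly random order, taking
# each vertex none of whose neighbours has been taken already.  Its expected
# size is at least sum over v of 1/(1+d(v)), which is at least n/(1 + 2E/n).

def greedy_size(adj, n):
    order = random.sample(range(n), n)
    taken = set()
    for v in order:
        if adj[v].isdisjoint(taken):
            taken.add(v)
    return len(taken)

random.seed(0)
n, p = 60, 0.1
adj = [set() for _ in range(n)]
for u in range(n):
    for w in range(u + 1, n):
        if random.random() < p:
            adj[u].add(w)
            adj[w].add(u)

E = sum(len(a) for a in adj) / 2
mean = sum(greedy_size(adj, n) for _ in range(2000)) / 2000
print(mean, sum(1 / (1 + len(a)) for a in adj), n / (1 + 2 * E / n))
# empirical mean >= sum of 1/(1+d(v)) >= n/(1 + average degree)
```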

Olympiad example

The reason I was thinking about Turan’s theorem was a problem which the UK IMO squad was discussing. It comes from an American selection test (slightly rephrased): given 100 points in the plane, what is the largest number of pairs of points with \ell_1 distance in (1,2]?

The key step is to think about how large a collection of points can have this property pairwise. It is easy to come up with an example of four points which work, and seemingly impossible to come up with an example with five points. To prove this, I found it easiest to place a point at the origin, then explicitly work with coordinates relative to the basis (1,1),(1,-1) for fairly obvious reasons in this metric.

Anyway, once you are convinced that you can’t have five points with this property pairwise, you are ready to convert into a graph-theoretic statement. Vertices correspond to points, and edges link pairs of points whose distance is in (1,2] as required. We know from the previous paragraph that there is no copy of K_5 here, so Turan’s theorem bounds the number of edges, ie the number of suitable pairs.

It also tells us under what sort of circumstances the bound is attained, and from this, it’s natural to split the 100 points into four groups of 25, for example by taking four points which satisfy the condition pairwise (eg a diamond around the origin), and placing each group very near one of the points.

Extensions and other directions

The existence of a complete subgraph is reminiscent of Ramsey theory, which in one case is a symmetric version of Turan’s theorem. In Turan, we are adding enough edges to force a complete subgraph, while in the original form of Ramsey theory, we are asking how large the graph needs to be to ensure that for any edge configuration, either the original graph or the complement graph includes a complete subgraph. It makes a lot more sense to phrase this in terms of colours for the purpose of generalisation.

A natural extension is to ask about finding copies of fixed graphs H other than the complete graph. This is the content of the Erdos-Stone theorem. I’d prefer to say almost nothing rather than be vague, but the key difference is that the bound is asymptotic in the number of vertices rather than exact. Furthermore, the asymptotic proportion of vertices depends on the chromatic number of H, which tells you how many classes r are required to embed H in a (large if necessary) r-partite graph. So it is perhaps unsurprising that the limiting proportions end up matching the proportion of edges in the Turan graphs, namely \frac{r-1}{r} as r varies, which leaves the exact scaling open to further investigation in the case where H is bipartite (hence has chromatic number 2).

EGMO 2016 Paper II

Continuing from yesterday’s account of Paper I, this is a discussion of my thoughts about Paper II of EGMO 2016, happening at the moment in Busteni, Romania. This is not an attempt to describe official solutions, but rather to describe the thought process (well, a thought process) of someone tackling each question. I hope it might be interesting or useful, but for students, it will probably be more useful after at least some engagement with the problems. These are excellent problems, and reading any summary of solutions means you miss the chance to hunt for them yourself.

In actual news, you can follow the scoreboard as it is updated from Romania here. Well done to the UK team on an excellent performance, and hope everyone has enjoyed all aspects of the competition!

Question 4

Circles \omega_1,\omega_2 with the same radius meet at two points X_1,X_2. Circle \omega is externally tangent to \omega_1 at T_1, and internally tangent to \omega_2 at T_2. Prove that lines X_1T_1,X_2T_2 meet on \omega.

Thought 1: I’m not the biggest fan of geometry ever, but I thought this looked like a nice problem, because it’s only really about circles, so I figured it probably wouldn’t require anything too exotic.

Thought 2: I bet lots of people try inversion. But the equal radius condition means I’m probably happy with the circles as they are. I hope lots of people don’t try to place the diagram in some co-ordinate system, even if it is possible to do it sensibly (eg by making \omega the reference circle).

Thought 3: The labelling of X_1,X_2 is unrelated to the rest of the indexing. So the intersection of X_1T_2,X_2T_1 should also lie on \omega, and possibly has some relationship (antipodal?) to the point I actually need to find. But I can’t think of any reason why it’s easier to prove two points lie on a circle than just one, so let’s leave this as a thought rather than an idea.

Idea 1: I drew a terrible diagram on the back of a draft of my abstract, and for once, this was actually kind of helpful. Forget about radii being equal, one of them wasn’t even a circle. Anyway, while drawing in the later points, I was struggling to make it look convincingly like all the lengths which were supposed to be equal were in fact equal. So the idea was: almost all the segments in the diagram (once I’ve defined the circle centres O_{\omega_1} etc) have one of two lengths (the radii of \omega_1,\omega – red and green-ish in the diagram below), and with this in mind I can forget about the circles. We’ve got a rhombus, which is even better than a parallelogram, which is itself a really useful thing to have in a configuration. Another consequence of the proliferation of equal lengths is that almost all triangles are isosceles, and we know that similarity of isosceles triangles is particularly easy, because you only have to match up one angle.

Idea 2: How to prove it? We have to prove that two lines and a circle concur. This is where I actually need to stop and think for a moment: I could define the point where the lines meet and try to show it’s on the circle, or intersect one line with the circle, and show it’s on the other line. Idea 1 basically says I’m doing the problem using lengths, so I should choose the way that fits best with lengths.

[diagram for Question 4]

If I define the point P where X_2T_2 meets the circle (this was easier to draw in my diagram), then I know PO_\omega=T_2 O_\omega and so on. Then there were loads of isosceles triangles, and some of them were similar, which led to more parallel lines, and from this you could reverse the construction in the other direction to show that P also lay on the other line.

Question 5

Let k, n be integers such that k\ge 2,\, k\le n\le 2k-1. Place rectangular k x 1 or 1 x k tiles on an n x n chessboard in the natural way with no overlap until no further tile can be placed. Determine the minimum number of tiles that such an arrangement may contain.

Idea 1: It took me a while to parse the question. Minimum over what? I rephrased it in my head as: “to show the answer is eg n+5, I need to show that whenever you place n+4 tiles legally, you can’t add another. I also need to show that you can place n+5 such that you can’t add another.” This made life a lot easier.

Thought 1: What goes wrong if you take n=2k and beyond? Well, you can have two horizontal tiles on a given row. I’m not really sure how this affects the answer, but the fact that there is still a space constraint for n<2k is something I should definitely use.

Diversion: I spent a while thinking that the answer was 4 and it was a very easy question. I spent a bit more time thinking that the answer was n, and it was a quite easy question, then realised that neither my construction nor my argument worked.

Thought 2: can I do the cases n=k, 2k-1, or k+1? The answers were yes, unsure, and yes. The answer to k+1, which I now felt confident was actually four, was helpful, as it gave me a construction for k+2, …, 2k-1 that seemed good, even though it was clearly not optimal for 2k-1. Therefore, currently my potential answer has three regimes, which seemed unlikely, but this seemed a good moment to start trying to prove it was optimal. From now on, I’m assuming I have a configuration from which you can’t add another block.

[diagram for Question 5]

Idea 2: About this diagram, note that once I’ve filled out the top-left (k+1)x(k+1) sub-board in this way, there are still lots of ways to complete it, but I do have to have (n-k-1) horizontal and (n-k-1) vertical tiles roughly where I’ve put them. Why? Because I can’t ‘squeeze in’ a vertical tile underneath the blue bit, and I can’t squeeze in a horizontal tile to the right of the blue bit. Indeed, whenever I have a vertical block, there must be vertical blocks either to the left or to the right (*) (or possibly both if we’re near the middle). We need to make this precise, but before doing that, I looked back at where the vertical blocks were in the proposed optimum, and it turns out that all but (k-1) columns include a vertical block, and these (k-1) columns are next to each other.

This feels like a great idea for two reasons: 1) we’ve used the fact that n<2k at (*). 2) this feels very pigeonhole principle-ish. If we had fewer tiles, we’d probably have either at least k columns or at least k rows without a vertical (or, respectively, horizontal) tile. Say k columns don’t include a vertical tile – so long as they are next to each other (which I think I know) we can probably include a horizontal tile somewhere in there.

So what’s left to do? Check the previous sentence actually works (maybe it’s full of horizontal tiles already?), and check the numerics of the pigeonhole bound. Also work out how the case n=2k-1 fits, but it seems like I’ve had some (/most) of the good ideas, so I stopped here.

Question 6

[Statement, paraphrased] Let S be the set of all positive integers n such that n^4 has a divisor in the range n^2+1, n^2+2,\ldots,n^2+2n. Prove that there are infinitely many elements of S of each of the forms 7m, 7m+1, 7m+2, 7m+5, 7m+6, and no elements of S of the form 7m+3 or 7m+4, where m is an integer.

I don’t actually want to say very much about this, because I didn’t finish off all the details. I want to talk briefly in quite vague terms about what to do if you think this problem looks scary. I thought it looked a bit scary because it looked similar to two number theoretic things I remember: 1) primes in arithmetic progressions. This is very technical in general, but I can remember how to do 3 mod 4 fairly easily, and 1 mod 4 with one extra idea; 2) if a square-free number can be written as a sum of two squares, this controls its factors modulo 4.

Vague Ideas: It seemed unlikely that this would involve copying a technical argument, so I thought about why I shouldn’t be scared. I think I shouldn’t be scared of the non-existence part. Often when I want to show there are no integer solutions to an equation, I consider showing there are no solutions modulo some base, and maybe this will be exactly what I do here. I’ll need to convert this statement about divisibility into an equation (hopefully) and check that n\equiv 3,4 modulo 7 doesn’t work.

For the existence of infinitely many solutions, maybe I’d use Chinese Remainder Theorem [1], or I’ll reduce it to something that I know has lots of solutions (eg Pythagoras), or maybe I can describe some explicit solutions?

Actual Idea 1: We’ve got n^2+m | n^4, but this is a very inefficient statement, since the RHS is a lot larger than the LHS, so to be useful I should subtract off a large multiple of the LHS. Difference of two squares is a good thing to try always, or I could do it manually. Either way, I get n^2+m | m^2 which is genuinely useful, given I know m=1,2, …, 2n, because the RHS is now comparable in size to the LHS, so I’ve narrowed it down from roughly n^2 possibilities to just three:

n^2+m=m^2,\quad 2(n^2+m)=m^2,\quad 3(n^2+m)=m^2. (*)

I’m going to stop now, because we’ve turned it into a completely different problem, which might be hard, but at least in principle this is solvable. I hope we aren’t actually scared of (*), since it looks like some problems we have solved before. I could handle one of these in a couple of lines, then struggled a bit more with the other pair. I dealt with one by recourse to some theory, and the final one by recourse to some theory after a lot of rearranging which I almost certainly got wrong, but I think I made an even number of mistakes rather than an odd number because I got the correct solution set modulo 7. Anyway, getting to (*) felt like the majority of the ideas, and certainly removed the fear factor of the Q6 label, so to fit the purpose of this discussion I’ll stop here.
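For the curious, a brute-force scan over small n (my own, and it takes a few seconds) is consistent with all of this; the n it finds are exactly the Pell-type families coming out of the second and third equations in (*).

```python
# Scan for n such that some m in 1..2n has (n^2 + m) | n^4, and record n mod 7.
solutions = []
for n in range(1, 2001):
    n4 = n ** 4
    if any(n4 % (n * n + m) == 0 for m in range(1, 2 * n + 1)):
        solutions.append(n)
print(solutions)                             # -> [2, 6, 12, 70, 84, 408, 1170]
print(sorted({n % 7 for n in solutions}))    # -> [0, 1, 2, 5, 6]: never 3 or 4
```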

[1] During one lunch in Lancaster, we were discussing why Chinese Remainder Theorem is called this. The claim was that an ancient Chinese general wanted to know the size of their army but it was too big to count, so had them arrange themselves into columns of various sizes, and counted the remainders. The general’s views on the efficiency of this algorithm remain lost in the mists of time.

EGMO 2016 Paper I

We’ve just had our annual selection and training camp for the UK IMO team in Cambridge, and I hope it was enjoyed by all. I allotted myself the ‘graveyard slot’ at 5pm on the final afternoon (incidentally, right in the middle of this, but what England fan could have seen that coming in advance?) and talked about random walks on graphs and the (discrete) heat equation. More on that soon perhaps.

The UK has a team competing in the 5th European Girls Mathematical Olympiad (hereafter EGMO 2016) right now in Busteni, Romania. The first paper was sat yesterday, and the second paper is being sat as I write this. Although we’ve already sent a team to Romania this year (where they did rather well indeed! I blame the fact that I wasn’t there.), this feels like the start of the olympiad ‘season’. It also coincides well with Oxford holidays, when, though thesis deadlines loom, I have a bit more free time for thinking about these problems. Anyway, last year I wrote a summary of my thoughts and motivations when trying the EGMO problems, and this seemed to go down well, so I’m doing the same this year. My aim is not to offer official solutions, or even outlines of good solutions, but rather to talk about ideas, and how and why I decided whether they did or didn’t work. I hope some of it is interesting.

You can find the paper in many languages on the EGMO 2016 website. I have several things to say about the geometry Q2, but I have neither enough time nor geometric diagram software this morning, so will only talk about questions 1 and 3. If you are reading this with the intention of trying the problems yourself at some point, you probably shouldn’t keep reading, in the nicest possible way.

Question 1

[Slightly paraphrased] Let n be an odd positive integer and x_1,\ldots,x_n\ge 0. Show that

\min_{i\in[n]} \left( x_i^2+x_{i+1}^2\right) \le \max_{j\in[n]} 2x_jx_{j+1},

where we define x_{n+1}=x_1 cyclically in the natural way.
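(A quick numerical sanity check of the statement, not part of any solution, just my own reassurance before starting:)

```python
import random

# Random check: for odd n and non-negative reals, the min of x_i^2 + x_{i+1}^2
# (taken cyclically) never exceeds the max of 2 x_j x_{j+1}.
random.seed(1)
for _ in range(10000):
    n = random.choice([3, 5, 7, 9])
    x = [random.uniform(0, 10) for _ in range(n)]
    pairs = [(x[i], x[(i + 1) % n]) for i in range(n)]
    assert min(a*a + b*b for a, b in pairs) <= max(2*a*b for a, b in pairs) + 1e-12
```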

Thought 1: this is a very nice statement. Obviously when i and j are equal, the inequality holds the other way round, and so it’s interesting and surprising that constructing a set of pairs of inequalities in the way suggested gives a situation where the ‘maximum minimum’ is at least the ‘minimum maximum’.

Thought 2: what happens if n is actually even? Well, you can kill the right-hand-side by taking at least every other term to be zero. And if n is even, you really can take every other term to be zero, while leaving the remaining terms positive. So then the RHS is zero and the LHS is positive.

The extension to this thought is that the statement is in danger of not holding if there’s a lot of alternating behaviour. Maybe we’ll use that later.

Idea 1: We can write

2(x_i^2+x_{i+1}^2)=(x_i+x_{i+1})^2 + |x_i-x_{i+1}|^2, \quad 4x_ix_{i+1}=(x_i+x_{i+1})^2 - |x_i-x_{i+1}|^2,

which gives insight into ‘the problem multiplied by 2’. This was an ‘olympiad experience’ idea. These transformations between various expressions involving sums of squares turn out to be useful all the time. Cf BMO2 2016 question 4, and probably about a million other examples. As soon as you see these expressions, your antennae start twitching. Like when you notice a non-trivial parallelogram in a geometry problem, but I digress. I’m not sure why I stuck in the absolute value signs.

This was definitely a good idea, but I couldn’t find a way to make useful deductions from it especially easily. I tried converting the RHS expression for i (where LHS attains minimum) into the RHS expression for any j by adding on terms, but I couldn’t think of a good way to get any control over these terms, so I moved on.

Idea 2: An equality case is when they are all equal. I didn’t investigate very carefully at this point whether this might be the only equality case. I started thinking about what happens if you start with an ‘equal-ish’ sequence where the inequality holds, then fiddle with one of the values. If you adjust exactly one value, then both sides might stay constant. It seemed quite unlikely that both would vary, but I didn’t really follow this up. In any case, I didn’t feel like I had very good control over the behaviour of the two sides if I started from equality and built up to the general case by adjusting individual values. Or at least, I didn’t have a good idea for a natural ordering to do this adjustment so that I would have good control.

Idea 3: Now I thought about focusing on where the LHS attains this minimum. Somewhere, there are values (x,y) next to each other such that x^2+y^2 is minimal. Let’s say x\le y. Therefore we know that the element before x is at least y, and vice versa, ie we have

\ldots, \ge y, x, y, \ge x,\ldots.

and this wasn’t helpful, because I couldn’t take this deduction one step further on the right. However, once you have declared the minimum of the LHS, you are free to make all the other values of x_i smaller, so long as they don’t break this minimum. Why? Because the LHS stays the same, and the RHS gets smaller. So if you can prove the statement after doing this, then the statement was also true before doing this. So after thinking briefly, this means that you can say that for every i, either x_{i-1}^2+x_i^2 or x_i^2+x_{i+1}^2 attains this minimum.

Suddenly this feels great, because once we know at least one of the pairs corresponding to i attains the minimum, this is related to parity of n, which is in the statement. At this point, I was pretty confident I was done. Because you can’t partition odd [n] into pairs, there must be some i which achieves a minimum on both sides. So focus on that.

Let’s say the values are (x,y,x) with x\le y. Now when we try to extend in both directions, we actually can do this, because the values alternate with bounds in the right way. The key is to use the fact that the minimum x^2+y^2 must be attained at least every other pair. (*) So we get

\ldots, \le x,\ge y,x,y,x,\ge y,\le x,\ldots.

But it’s cyclic, so the ‘ends’ of this sequence join up. If n\equiv 1 modulo 4, we get \ge y,\ge y next to each other, which means the RHS of the statement is indeed at least the LHS. If n\equiv 3 modulo 4, then we get \le x,\le x next to each other, which contradicts minimality of x^2+y^2 unless x=y. Then we chase equality cases through the argument (*) and find that they must all be equal. So (after checking that the case x\ge y really is the same), we are done.

Thought 3: This really is the alternating behaviour of Thought 2 in action. I should probably have stayed with that idea a bit longer, but this plan of reducing values so that equality was achieved came naturally out of the other ideas.

Thought 4: If I had to do this as an official solution, I imagine one can convert this into a proof by contradiction and it might be slightly easier, or at least easier to follow. If you go for contradiction, you are forcing local alternating behaviour, and should be able to derive a contradiction when your terms match up without having to start by adjusting them to achieve equality everywhere.

Question 3

Let m be a positive integer. Consider a 4m x 4m grid, where two cells are related to each other if they are different but share a row or a column. Some cells are coloured blue, such that every cell is related to at least two blue cells. Determine the minimum number of blue cells.

Thought 1: I spent the majority of my time on this problem working with the idea that the answer was 8m. Achieved by taking two in each row or column in pretty much any fashion, eg both diagonals. This made me uneasy because the construction didn’t take advantage of the fact that the grid size was divisible by 4. I also couldn’t prove it.

Thought 2: bipartite graphs are sometimes useful to describe grid problems. Edges correspond to cells and each vertex set to row labels or column labels.

Idea 1: As part of an attempt to find a proof, I was thinking about convexity, and why having exactly two in every row was best, so I wrote down the following:

Claim A: No point having three in a row.

Claim B: Suppose a row has only one in it + previous claim => contradiction.

In Cambridge, as usual I organised a fairly comprehensive discussion of how to write up solutions to olympiad problems. The leading-order piece of advice is to separate your argument into small pieces, which you might choose to describe as lemmas or claims, or just separate implicitly by spacing. This is useful if you have to do an uninteresting calculation in the middle of a proof and don’t want anyone to get distracted, but mostly it’s useful for the reader because it gives an outline of your argument.

My attempt at this problem illustrates an example of the benefit of doing this even in rough. If your claim is a precise statement, then that’s a prompt to go back and separately decide whether it is actually true or not. I couldn’t prove it, so started thinking about whether it was true.

Idea 2: Claim A is probably false. This was based on my previous intuition, and the fact that I couldn’t prove it or get any handle on why it might be true. I’d already tried the case m=1, but I decided I must have done it wrong so tried it again. I had got it wrong, because 6 is possible, and it wasn’t hard from here (now being quite familiar with the problem) to turn this into a construction for 6m in the general case.

Idea 3: This will be proved by some sort of double-counting argument. Sometimes these arguments turn on a convexity approach, but when the idea is that a few rows have three blue cells, and the rest have one, this now seemed unlikely.

Subthought: Does it make sense for a row to have more than three blue cells? No. Why not? Note that as soon as we have three in a row, all the cells in that row are fine, irrespective of the rest of the grid. If we do the problem the other way round, and have some blues, and want to fill out legally the largest possible board, why would we put six in one row, when we could add an extra row, have three in each (maintaining column structure), and be better off than we were before? A meta-subthought is that this will be impossible to turn into an argument, but we should try to use it to inform our setup.

Ages and ages ago, I’d noticed that you could permute the rows and columns without really affecting anything, so now seemed a good time to put all the rows with exactly one blue cell at the top (having previously established that rows with no blue cell were a disaster for achieving 6m), and all the columns with one blue cell at the left. I said there were r_1,c_1 such rows and columns. Then, I put all the columns which had a blue cell in common with the r_1 rows next to the c_1 columns I already had. Any such column has at least three blues in it, so I said there were c_3 of these, and similarly r_3 rows. The remaining columns and rows might as well be r_0,c_0 and hopefully won’t matter too much.

From here, I felt I had all the ingredients, and in fact I did, though some of the book-keeping got a bit fiddly. Knowing what you are aiming for and what you have means there’s only one way to proceed: first find expressions in terms of these quantities which are upper bounds for the number of columns (or for twice the number of columns = rows + columns, if you want to keep symmetry), and then lower bounds in terms of these for the number of blue cells. I found a noticeable case-distinction depending on whether r_1\le 3c_3 and c_1\le 3r_3. If both held or neither held, it was quite straightforward, and if exactly one held, it got messy, probably because I hadn’t set things up optimally. Overall, fiddling about with these expressions occupied slightly more time than actually working out that the answer was 6m, so I don’t necessarily have a huge number of lessons to learn, except to be more organised.

Afterthought 2: Thought 2 said to consider bipartite graphs. I thought about this later while cycling home, because one can’t (or at least, I can’t) manipulate linear inequalities in one’s head while negotiating Oxford traffic and potholes. I should have thought about it earlier. The equality case is key. If you add in the edges corresponding to blue cells, you get a series of copies of K_{1,3}, that is, one vertex with three neighbours. Thus you have three edges for every four vertices, and everything’s a tree. This is a massively useful observation for coming up with a very short proof. You just need to show that there can’t be components of size smaller than 4. Also, I bet this is how the problem-setter came up with it…
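
To make the K_{1,3} picture concrete, here is a small computational check (my sketch in Python, not part of the original argument) of one construction you can read off from it: m ‘row hubs’ each blue in three columns of their own, and m ‘column hubs’ each blue in three rows of their own. The particular grouping of rows and columns is my own choice for illustration.

    def blue_cells(m):
        # m 'row hubs' (rows 0..m-1), each blue in three private columns,
        # and m 'column hubs' (columns 0..m-1), each blue in three private rows
        blues = set()
        for i in range(m):
            for t in range(3):
                blues.add((i, m + 3 * i + t))
        for j in range(m):
            for t in range(3):
                blues.add((m + 3 * j + t, j))
        return blues

    def valid(m, blues):
        # check: every cell is related to at least two blue cells
        n = 4 * m
        return all(sum(1 for (br, bc) in blues
                       if (br, bc) != (r, c) and (br == r or bc == c)) >= 2
                   for r in range(n) for c in range(n))

    for m in (1, 2, 3):
        b = blue_cells(m)
        print(m, len(b), valid(m, b))  # prints: m, 6m, True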

Birthday Coincidences and Poisson Approximations

This morning, Facebook was extremely keen to remind me via every available medium that four of my friends celebrate their birthday today. My first thought was that I hope they all enjoy their day, and my second thought was to ask what the chance of this was. I have about 200 Facebook friends, and so this struck me as an unlikely occurrence. But this problem has form, and it felt worthwhile to try some calculations to see if my intuition was well-founded.


Siméon Denis Poisson celebrated his 234th birthday on 21st June this year.

The classical birthday problem

The starting point is the question: how many friends do you have to have before you expect to start seeing anyone sharing a birthday? There are a ridiculous number of articles about this on the web already, so I will say little, except that I don’t want to call this the ‘birthday paradox’, because it’s not a paradox at all. At best it might be counter-intuitive, but then the moral should be to change our intuition for this type of problem.

Throughout, let’s discount February 29th, as this doesn’t add much. So then, to guarantee having a shared pair of birthdays, you need to have 366 friends. But if you have a mere 23 friends, then the probability of having some pair that share a birthday is slightly greater than a half. The disparity between these two numbers leads to the counter-intuition. Some people might find it helpful to think that instead of counting friends, we should instead be counting pairs of friends, but I don’t personally find this especially helpful.

For me, thinking about the calculation in very slightly more generality is helpful. Here, and throughout, let’s instead take N to be the number of days in a year, and K the number of friends, or kids in the class if you prefer. Then, as usual, it is easier to calculate the probability that no two share a birthday (that is, that all the birthdays are distinct) than the probability that some two share a birthday. We could think of the number of ways to pick the set of birthdays, or we could look at the kids one-at-a-time, and demand that their birthday is not one of those we’ve already seen. Naturally, we get the same answer, that is

\frac{{}^N P_K}{N^K} = 1\cdot \frac{N-1}{N}\cdot \ldots\cdot \frac{N-K+1}{N}.

We’ve assumed here that all birthdates are equally likely. We’ll come back to this assumption right at the end. For now, let’s assume that both N and K are large, and we’ll try to decide roughly how large K has to be in relation to N for this answer to be away from 0 and 1. If we pair opposite terms up, we might approximate this by

\left(\frac{N-\frac{K}{2}}{N}\right)^K = \left(1-\frac{K}{2N}\right)^K\approx e^{-K^2/2N}.

In fact, AM-GM says that this is an overestimate, and a bit more care can be used to show that this is a good approximation to first order. So we see that if K=\Theta(\sqrt{N}) for large N, we get a non-trivial limit.
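
As a sanity check on all of this, the exact product and the approximation are easy to compare numerically; here is a quick sketch in Python (mine, not part of the original discussion):

    import math

    def p_all_distinct(N, K):
        # exact probability that K birthdays over N equally likely days are all distinct
        p = 1.0
        for i in range(K):
            p *= (N - i) / N
        return p

    N, K = 365, 23
    print(1 - p_all_distinct(N, K))        # ~0.507, slightly greater than a half
    print(1 - math.exp(-K * K / (2 * N)))  # ~0.516, from the e^{-K^2/2N} estimate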

Challenges for four-way shared birthdays

So the original problem I posed is harder, because there isn’t (unless I’m missing something) a natural way to choose birthdays one-at-a-time, or to describe the set of suitable birthday sets. There are two major obstacles in a calculation such as this. Firstly, the overlap of people: we might have five or more birthdays coinciding. Secondly, the overlap of days: we might have several days with four (or more) birthdays. We’ll end up worrying more about the second situation.

We start by eliminating both problems, by asking for the probability that exactly four friends are born on January 1st. The general form of this probability is \frac{\binom{K}{4} }{N^4} \cdot \left(\frac{N-1}{N}\right)^{K-4}. Now, if K\ll N, this final term should not be significant. Removing it is not exactly the same as specifying the probability that at least four birthdays are on January 1st. But in fact this removal turns a lower bound (because {exactly four} is contained in {at least four}) into an upper bound (in fact a union bound). So if the factor being removed is very close to one, we can use whichever expression is more convenient.

In the real-life case of N=365 and K=200, this term is not negligible. But accounting for it, we get that the probability of exactly four birthdays on 1st January is ~0.0021. Our upper bound on the probability of at least four is ~0.0036.
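
Both numbers are quick to reproduce (again a sketch, using the expressions above):

    from math import comb

    N, K = 365, 200
    p_exact = comb(K, 4) / N**4 * ((N - 1) / N) ** (K - 4)  # exactly four on 1st January
    p_union = comb(K, 4) / N**4                             # union bound: at least four
    print(p_exact, p_union)                                 # ~0.0021 and ~0.0036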

But now that we know the probability for a given day, can we calculate (1-0.0021)^{365} to estimate the probability that we never have four-overlap? When we did our previous iterative calculation, we were using independence of the different kids’ birthdays. But the event that we have four-overlap on January 1st is not quite independent of the event that we have four-overlap on January 2nd. Why? Well if we know at least four people were born on January 1st, there are fewer people left (potentially) to be born on January 2nd. But maybe this dependence is mild enough that we can ignore it?

We can, however, use some moment estimates. The expected number of days with four-overlap is 365\cdot 0.0021 \approx 0.77. So, by Markov’s inequality, the probability that there is at least one day with four-overlap is at most ~0.77.

But we really want a lower bound. So maybe we can borrow the second-moment argument we tried in the previous post (there for isolated vertices in the random graph)? Here, the probability that both January 1st and January 2nd are four-overlapping is

\frac{\binom{K}{4}\binom{K-4}{4}}{N^8}\cdot \left(\frac{N-2}{N}\right)^{K-8}\approx 4.3\times 10^{-6}.

From this, we can evaluate the expectation of the square of the number of days with four-overlap, and thus find that the variance is ~0.74. So we use Chebyshev, calling this number of days #D for now:

\mathbb{P}(\# D=0)\le \mathbb{P}(|\#D - \mathbb{E}\# D|^2 \ge (\mathbb{E}\# D)^2 ) \le \frac{\mathrm{Var} \# D}{(\mathbb{E} \#D)^2}.

In our case, this unfortunately gives us an upper bound greater than 1 on this probability, and thus a lower bound of zero on the probability that there is at least one day with four-overlap. Which isn’t especially interesting…

Fairly recently, I spoke about the Lovasz Local Lemma, which can be used to find lower bounds on the probabilities of intersections of events, many of which are independent (in a particular precise sense). Perhaps this might be useful here? The natural choice of ‘bad event’ is that a particular 4-set of people shares a birthday. There are \binom{K}{4} such events, and each is independent of the collection of \binom{K-4}{4} events indexed by disjoint 4-sets. Thus we can consider using LLL if e\cdot (\binom{K}{4}-\binom{K-4}{4})\cdot 0.0021 \le 1. Unfortunately, this difference of binomial coefficients is large in our example, and so in fact the LHS has order 10^4.

Random number of friends – coupling to a Poisson Process

All of these methods failed because without independence we had to use estimates which were really not tight at all. But we can re-introduce independence if we relax the constraints on the model. Suppose instead of demanding I have K friends, I instead demand that I have a random number of friends, with distribution Poisson(K). Then, for each day, the number of friends with that birthday is Poisson(K/365), independently over days.

If we end up having exactly K friends with this random choice, then the distribution of the number of 4-overlap days is exactly the same as in the original setup. However, crucially, if we end up having at most K friends with this random choice, the distribution of the number of 4-overlap days is stochastically dominated by the original distribution. So instead let’s assume we have Poisson(L) friends, where L<K, and see how well we can do. For definiteness, we’ll go back to N=365, K=200 now. Let’s write X for the distribution of birthdays in the original model, and \Xi for the distribution of birthdays in the model with a random number of friends.

Then

\mathbb{P}(\exists \ge 4\text{-overlap in }\Xi) = 1- \mathbb{P}(\mathrm{Po}(L/365)\le 3)^{365}. (*)

Now we can write the crucial domination relation as

\mathbb{P}(\exists \ge 4\text{-overlap in }X)\ge \mathbb{P}( \exists \ge 4\text{-overlap in }\Xi \,|\, |\Xi|\le 200),

and then use an inequality version of the law of total probability to bound further as

\ge \frac{ \mathbb{P}(\exists \ge 4\text{-overlap in }\Xi) - \mathbb{P}(|\Xi|>200)}{\mathbb{P}(|\Xi|\le 200)}.

This is a function of L, and in principle we could find its maximum, perhaps as N\rightarrow\infty. Here, though, let’s just take L=365/2 and see what happens. For (*) we get ~0.472.

To estimate \mathbb{P}(\mathrm{Po}(365/2)>200), observe that this event corresponds to about 1.3 standard deviations above the mean, so we can approximate using quantiles of the normal distribution, via the CLT. (Obviously this isn’t completely precise, but it could be made precise if we really wanted.) I looked up a table, and this probability is, conveniently for calculations, roughly 0.1. Thus we obtain a lower bound of \frac{0.472-0.1}{0.9}. Allowing for the fairly weak estimates at various points, we still get a lower bound of around 0.4. Which is good, because it shows that my intuition wasn’t right, but that I was in the right ball-park for it being a ‘middle-probability event’.
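
For completeness, here is the whole coupling bound assembled numerically (a sketch; I use the erfc form of the normal tail rather than a table):

    import math

    def po_cdf(k, lam):
        # P(Po(lam) <= k)
        return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

    N, K, L = 365, 200, 365 / 2
    p_star = 1 - po_cdf(3, L / N) ** N          # (*): ~0.472
    z = (K - L) / math.sqrt(L)                  # ~1.3 standard deviations above the mean
    p_tail = 0.5 * math.erfc(z / math.sqrt(2))  # P(Po(L) > K) ~ P(Z > z) ~ 0.1
    print((p_star - p_tail) / (1 - p_tail))     # lower bound, ~0.41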

Remarks and References

– The reason for doing the upper bound for the probability of exact 4-overlap is that the same argument for at-least-4-overlap would have given an upper bound of 1. However, this Poisson Process coupling is also a much better method for obtaining an upper bound on either event.

– Birthdays are not uniformly distributed through the year. The deviation is strong enough that even from the set of birth frequencies (rather than the sequence of birth frequencies), we can reject a null hypothesis of uniformity. Early September is pretty close to the maximum. Two comments: 1) this is the time of year where small variations in birth date have a big effect on education, especially in primary school; 2) we are 37 weeks into the year…

– It is known that 187 friends is the first time the probability of having at-least-4-overlap is greater than ½. You can find the full sequence on OEIS as A014088. I used to have about 650 Facebook friends, before I decided that I’d prefer instead the pleasant surprise of finding out what old acquaintances were up to when I next spoke to them. In this case, the median of the distribution of the largest number sharing a birthday would be seven.

– Eric Weisstein’s article on Mathworld is, in my opinion, the best resource for a mathematician on the first two pages of Google hits by at least two orders of magnitude. In the notation of this article, we were calculating P_4(n=200,d=365). There are also some good general asymptotics, or at least recipes for asymptotics, in equations (17) and (18).

– The paper Methods for Studying Coincidences by Diaconis and Mosteller is, as one might expect, extremely readable, and summarises many results and applications, including several generalisations.

Lovasz Local Lemma

At our training and selection camp in Tonbridge in May, I gave a session about the use of probabilistic methods to solve olympiad combinatorics questions. Such an approach will normally be relevant to a problem where it is required to show the existence of a structure satisfying a particular property. We might consider constructing a structure randomly, and then try to show that the probability that our construction has the required property is non-zero.

There are several possible approaches to this, but often for the property we seek, we can describe a family of ‘bad events’ A_1,\ldots,A_n, and then we have the property if none of the bad events hold. That is, we are looking for general conditions under which we can show that \mathbb{P}(\bigcap_{i=1}^n A_i^c)>0.

We have two cases where this is easy to do.

1) If all the A_i are independent, then so long as all \mathbb{P}(A_i)<1, we have our result.

2) If the probabilities of the bad events sum to less than 1, then we can use the union bound

\mathbb{P}(\bigcup A_i)\le \sum_{i=1}^n \mathbb{P}(A_i) <1,

to conclude what we want.

In Tonbridge we also discussed first-moment methods, where we show that the expected number of bad events is less than 1, meaning there must be some elements of the probability space where the number of bad events is zero. In this article, we’re looking in a different direction. We’ll try to interpolate between situation 1) and 2), to make some concrete comment on the situation where the probabilities of the bad events are small, but not small enough to use the union bound, and where the events are not quite independent, but are in some sense almost independent.

The first thing we need to do is think about what it means for a family of events to be independent. Suppose we toss a fair coin twice, and consider the events:

A:= {first coin is H}, B:={second coin is T}, C:={exactly one H}.

So, it is easy to see that each pair of events is independent, but if we know that A and B hold, then also C holds (and indeed this stays true under cyclic re-ordering). So C is not independent of the family of events {A,B}. Rather than give a formal definition, I’ll say instead that an event B is said to be independent of the family of events \{A_1,\ldots,A_5\} if it is independent of

A_3

A_1\cap A_2

A_3\cap A_4^c\cap A_5^c,

and so on. I hope it’s clear from this what I mean. Slogan: no knowledge about which of the A_i do or don’t hold gives information about B.

Now we return to our original setup. We want that each A_i is independent of lots of the rest, and so we choose for each i\in[n] a dependency set D_i\subset [n] of indices, such that A_i is independent of the family of events \{A_j: j\in [n]\backslash D_i\}. It is tempting to interpret this immediately as saying that A_i depends on each event with index in D_i. This will normally be fine in practice, but we should emphasise that there is a lot of freedom to choose D_i, and the genuinely important condition is that A_i is independent of the family given by the rest of the indices.

[*Health warning*: The language I am using seems sensible for anyone learning about this for the first time, but is very slightly non-standard. Instead of dependency sets, the classical approach is to consider a dependency digraph on [n], with an edge i\to j whenever j\in D_i.]

The symmetric version of the Lovasz Local Lemma says: suppose \mathbb{P}(A_i)\le p and we can choose the D_i as described so that |D_i|\le d for each i. Then, if epd\le 1, we may conclude \mathbb{P}(\bigcap_{i=1}^n A_i^c)>0.

We’ll come back to the proof, which is best seen in a slightly more general formulation. First, let’s get to grips with the notation, by reassuring ourselves that this really does lie between the union bound and the independence case.

If the events are independent, then clearly we may take D_i=\{i\} for each i, that is d=1, so we may draw the desired conclusion so long as p\le 1/e, which is fine, but a lot less convincing than p<1. Similarly, for the union bound, we have said nothing about the dependency relationships, and so we have to take D_i=[n] for each i. So we may draw the desired conclusion provided p\le 1/ne, which is obviously again a factor of e less than what we would have had with a union bound itself.

Now we’ll see how this might be useful when applied, for example, to a probabilistic construction for the lower bound on the Ramsey number R(k). From Erdos’s argument, we know R(k)\ge (1+o(1)) 2^{k/2}, and we will earn an extra factor of k\sqrt{2}/e. An extra factor of k/e can also be obtained by an alteration argument, but this extra factor of \sqrt{2} makes this LLL method one of the strongest available for this problem.

Recall that for a lower bound, we are trying to find examples of 2-edge-colourings of a large complete graph K_n, such that there is no monochromatic copy of a K_k. We consider the uniform independent random edge-colouring. It makes sense that a bad event A_S should be the presence of a monochromatic complete graph induced on a set S\subset [n], of size k. Since there are two potential colours for the monochromatic induced K_k, we have \mathbb{P}(A_S)=2^{1-\binom{k}{2}}. Then we take the dependency set D_S of A_S to include all those k-sets which share an edge with S, that is |S\cap T|\ge 2. We think about which vertices might contribute to the shared edge, and which make up the remainder to conclude |D_S|\le \binom{k}{2}\binom{n-2}{k-2}.

So now, whenever e\cdot 2^{1-\binom{k}{2}}\binom{k}{2}\binom{n-2}{k-2}\le 1, as a consequence of LLL we can conclude that with positive probability the random colouring gives a suitable example, that is R(k)>n. After fiddling around with Stirling’s formula in a fashion that isn’t hugely interesting, we can conclude R(k)\ge (1+o(1)) \frac{k\sqrt{2}}{e} 2^{k/2}.
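
If you want concrete values rather than asymptotics, you can simply search for the largest n satisfying the LLL condition (a sketch; the logarithms avoid underflow for larger k):

    from math import comb, log

    def lll_ramsey_lower(k):
        # largest n with e * 2^(1-C(k,2)) * C(k,2) * C(n-2, k-2) <= 1,
        # for which the symmetric LLL gives R(k) > n
        def log_lhs(n):
            return (1 + (1 - comb(k, 2)) * log(2)
                    + log(comb(k, 2)) + log(comb(n - 2, k - 2)))
        n = k
        while log_lhs(n + 1) <= 0:
            n += 1
        return n

    for k in (10, 15, 20):
        print(k, lll_ramsey_lower(k))  # grows like (sqrt(2)/e) * k * 2^(k/2)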

The prompt for this article was a discussion during our Malaysian training camp of methods applicable to IMO 2014 Q6. If you want to know just how applicable LLL is, I suggest you have a think yourself. It’s not a direct application – so some thought is involved. Maybe as an appetiser, here are some more elementary exercises, which I’ve borrowed from examples on Po-Shen Loh’s olympiad handouts, and Wikipedia, though I doubt the latter is the original source:

1) 11n points on a circle are coloured with n colours, such that each colour is used exactly 11 times. Show that it’s possible to choose one point of each colour such that no pair are adjacent.

2) A finite graph is given, and all degrees are between 50 and 100. Find some finite C such that you can always colour the vertices of such a graph with C colours so that the neighbourhood of any vertex includes at least 20 colours.

Finally, we discuss the more general statement of LLL, and explain how the proof works.

General Lovasz Local Lemma: Suppose there exist x_i\in [0,1) such that \mathbb{P}(A_i)\le x_i \prod_{j\in D_i\backslash\{i\}} (1-x_j) (*). Then \mathbb{P}(\bigcap A_i^c)\ge \prod (1-x_i)>0.

Deducing the symmetric form from the general form is not hard. Condition (*) is motivated by the proof. We want to be able to say that no matter which other events and their unions, complements etc we condition on, we still have an upper bound for the probability of A_i. This bound will be x_i. In particular, we want to show that the probability of bad event A_i does not get too high, even if we know that lots of other bad events have not occurred.

The proof proceeds by showing \mathbb{P}(A_i \,|\, \bigcap_{j\in S}A_j^c)\le x_i for all i, by induction on |S|. For the inductive step, you split S=S_1\cup S_2, where S_1=S\cap D_i and S_2=S\cap D_i^c. If S_1=\varnothing, you are done straight away, by the assumption (*) and independence of A_i from the events not indexed by D_i. Otherwise, you can use the inductive hypothesis on S_2, and repeated applications of Bayes’ theorem, to show what you want in a series of steps that involve a lot of notation, but aren’t hugely difficult.

ISL14 N6 – Sums of Squares of Intervals

I wasn’t able to post this at the time because of the embargo on discussing this particular question until the end of IMO 2015.

I’m currently enjoying the slow descent into summer in the delightful surroundings of Tonbridge School where we are holding our final camp to select the UK team for this year’s IMO. The mechanism for this selection is a series of four exams following the format of the IMO itself. We take many of the questions for these tests from the shortlist of questions proposed but not used at the previous year’s competition. In this post, I’m going to talk about one of these questions. Obviously, since lots of other countries do the same thing, there is an embargo on publication or public-facing discussion of these problems until after the next competition, so this won’t appear for a couple of months.

The statement of the problem is the following:

IMO Shortlist 2014, N6: Let a_1<a_2<\ldots<a_n be pairwise coprime positive integers with a_1 a prime at least n+2. On the segment I=[0,a_1a_2\ldots a_n] of the real line, mark all integers that are divisible by at least one of the numbers a_1,\ldots,a_n. These points split I into a number of smaller segments. Prove that the sum of the squares of the lengths of these segments is divisible by a_1.

I marked our students’ attempts at this problem and spoke to them about it afterwards. In particular, the official solution we were provided with, and which we photocopied and gave to our students, contains some good theory and some magic. On closer inspection, the magic turns out to be unnecessary, and actually distracts a little from the theory. Anyway, this is my guided tour of the problem.
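
Before the tour, the statement itself is easy to check numerically, which I’d recommend as a way of getting familiar with the setup. A sketch (the example tuples are my own, chosen to satisfy the hypotheses):

    def interval_square_sum(a):
        # mark all multiples of any a_i in [0, a_1 * ... * a_n];
        # return the sum of the squares of the gap lengths
        prod = 1
        for x in a:
            prod *= x
        marks = sorted({m for x in a for m in range(0, prod + 1, x)})
        return sum((v - u) ** 2 for u, v in zip(marks, marks[1:]))

    # each tuple: pairwise coprime, increasing, a_1 a prime at least n + 2
    for a in ([5, 6], [5, 7], [5, 6, 7], [7, 8, 9]):
        s = interval_square_sum(a)
        print(a, s, s % a[0])  # the final entry should always be 0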

We begin with two items from the rough work that some students submitted. Someone treated the case n=2, which is not quite trivial, but not hard at all, and remarked, sensibly, that n=3 seemed to have become hard. Packaged with this was the implicit observation that, for the purpose of the question posed, a_1 plays a completely different role to the other quantities. While this seems obvious when presented in isolation, it’s a good thing to say explicitly so that we avoid wasting time later imagining symmetries which are definitely not present. We’ll sometimes refer to a_1 as p whenever we are using its primality rather than its other roles.

The second idea coming out of rough work was to consider what happens when an interval gets broken into two by a ‘new point’, whatever that means. In particular, what happens to the sum of the squares of the interval lengths modulo p? The way the student had set up this procedure suggested that the calculations would be easy in the case where the point being added was a multiple of p=a_1. This ended up being a red herring, because there’s no real reason why we would want to add the multiples of a_1 at the end.

It does, however, suggest an inductive argument, if instead you add the points which are multiples of a_n, but not of any of \{a_1,\ldots,a_{n-1}\}, because the latter would already be present. It might seem like induction is illegal, since there is the condition a_1\ge n+2 already present in the statement. However, once a_1 is fixed, if the condition holds for a particular n, then clearly it holds for all smaller n (!), and so we can use induction to build up to the value of n required in the statement.

Now we are happy this might work in principle, let’s do it in practice. We are adding all these multiples of a_n, so we want some way to index them. One choice is to use their actual values, or the quotients by a_n, ie all the integers up to \prod_{i=1}^{n-1}a_i which are not divisible by any of a_1,\ldots,a_{n-1}. But maybe after thinking about it, there will be a more helpful way to index them.

We are only interested in non-trivial splittings of intervals. We might initially be concerned that an existing interval might be split into more than two sub-intervals by the addition of the multiples of a_n, but the ordering suggested in the statement means this can’t happen. In general, let the initial size of the interval be K, and the sizes of the two sub-intervals be x and y for reasons which will become clear. Then the change in contribution to the total sum of interval lengths squared is given by

K^2 \quad \mapsto \quad x^2+y^2= K^2 - 2xy,

using that K=x+y. So the total change when we add all the multiples of a_{n} will be the sum of 2xy over all the intervals which get divided into two.

Now we have the challenge of evaluating this sum, or at least showing that it is divisible by p. Given a new multiple a of a_n, what will x be?

Well, on the left of a, the distance to the nearest multiple of a_k is the remainder r_k of a on division by a_k, essentially by definition. So x will be the minimal such remainder as k varies. Similarly, y will be \min_k (a_k-r_k). We forget about the minus sign and the factor of 2 forever, since it doesn’t make any difference unless p=2, in which case we are already done, and so we now have our sum:

\sum_{a\text{ a new multiple of }a_n} \left( \min_i r_i \right) \left( \min_j(a_j-r_j) \right).

Now we return to the question of how to index the new multiples of a_n. But the question’s easier now as we have some clues as to what might be useful. Each of the summands involves the remainders modulo the smaller a_is, so can we use these as part of the indexing?

Yes, of course we can. There’s a bijection between the possible sequences of remainders, and the new multiples of a_n. This is basically the statement of the Chinese Remainder Theorem. Our index set should then be

(r_1,\ldots,r_{n-1}) \in [1,a_1-1]\times\cdots\times [1,a_{n-1}-1].

Remember, the remainders are not allowed to be zero because we only care now about new points being added.

Even now though, it’s not obvious how to compute this sum, so the natural thing to try is to invert our view of what we are summing over. That is, to consider how many of the index sequences result in particular values of x and y. Given x and y at least one, each remainder r_k must lie in [x,a_k-y], ie there are a_k-(x+y)+1 values it is allowed to take. So our guess for the number of such indices might be \prod_{i=1}^{n-1} (a_i-(x+y)+1), which is indeed what I wrote up on the board before Neel pointed out that some of the sequences I’ve allowed do not actually attain this minimum.

Fortunately this is easily fixed. If the minimum for x is not attained, we know that r_k\in[x+1,a_k-y], and similarly for y. So we can apply the inclusion-exclusion principle to depth two to say that the number we actually want is

\prod_{i=1}^{n-1} (a_i-(x+y)+1) - 2\prod_{i=1}^{n-1} (a_i-(x+y)) + \prod_{i=1}^{n-1} (a_i-(x+y)-1).

This is a polynomial in (x+y), so we’ll call it P(x+y). We’ll come back to this expression. We still need to tidy up our overall sum, in particular by deciding what bounds we want on x and y. Thinking about the intervals of width p, it is clear that the bounds should be 1\le x,y and x+y\le p. So we currently have our sum as

\sum_{1\le x,y,\; x+y\le p} xy\, P(x+y).

If you’ve got to this stage, I hope you would feel that this seems really promising. You’ve got to show that some sum of polynomial-like things over a reasonably well-behaved index set is divisible by p. If you don’t know general theorems that might confirm this, then once you’ve got statements like this, you might well guess that such theorems exist, and then verify them.

First though, we need to split up this xy term, since our sum is really now all about x+y, which we’ll call z.

\sum_{2\le z\le p} P(z) \sum_{x+y=z,x,y\ge 1} xy.

Once we’ve had our thought that we mainly care about polynomials mod p at this stage, we might even save ourselves some work and just observe that \sum_{x+y=z}xy is a cubic in z. This is definitely not hard to check, using the formulae for triangle numbers and sums of squares. It’s also reasonable to conclude that the value of this is 0 for z=1, so we can change the indices on the sum so it doesn’t look like we’re missing a term. So we have

\sum_{z=1}^p P(z) Q(z),

where Q is some cubic, which we could work out if we really wanted to. But what’s the degree of P? P is made of a sum of 3 polynomials whose degree is clear because they are a product of linear factors. So each of these has degree (n-1), but in fact we can clearly see that the leading term cancels, and also with a tiny bit more care that the second order terms cancel too, so P has degree at most n-3.

[Tangent: students might well ask why it is convention to take the degree of the identically zero polynomial to be -\infty rather than zero. The fact that the above statement is true even when n=2 is a good justification for this.]

[Tangent 2: In my initially erroneous evaluation of the number of remainder sequences, without the inclusion-exclusion argument, I indeed ended up with P having degree n-1. I also worked out Q as being quadratic though, so my overall necessary condition was only one off from what was required, showing that two wrongs are not as good as two rights, but marginally better than only one wrong.]

Anyway, we are taking a sum of polynomials of degree at most n \le p-2. This is the point where you might get confused if you have been specifying everything in too much detail. The fact that I’ve said nothing about the polynomials apart from their degrees is predicated on the fact that I now know their degrees are all I need for what I want; but even if you had tracked everything very explicitly, you might still guess what to do at this stage.

It’s presented as a lemma in the official solution that if you have a polynomial of degree at most p-2, then \sum_{z=1}^p P(z)\equiv 0 modulo p. There are several ways to see this, of which a proof by induction is easiest to believe, but probably least intuitive. It’s clear what goes wrong for degree p-1 by FLT, so you probably want to look at the set \{z^k: z\in[1,p-1]\} modulo p. It won’t necessarily be all the residues modulo p, but thinking about primitive roots, or the roots of unity makes it fairly clear why this would be true. [Essentially any argument that turns multiplication into addition in a well-behaved way clears this up.]
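
The monomial version of the lemma is also easy to check numerically, including the failure at degree p-1 (a sketch):

    def power_sum_mod(p, k):
        # sum of z^k over z = 1, ..., p, reduced mod p (the z = p term vanishes for k >= 1)
        return sum(pow(z, k, p) for z in range(1, p + 1)) % p

    for p in (5, 7, 11, 13):
        print(p, [power_sum_mod(p, k) for k in range(1, p)])
        # all entries are 0 for k <= p-2; for k = p-1 each summand is 1 by
        # Fermat's Little Theorem, so the sum is p-1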

The most important point is that this lemma is a) easy-ish and b) a natural thing to try given where you might have got in the question at this stage. From here you’re now basically done.

I don’t think this is an easy question at all, since even with lots of guiding through the steps there are a lot of tangents one might get drawn down, especially investigating exact forms of polynomials which it turns out we only need vague information about. Nonetheless, there are lots of lessons to be learned from the argument, which definitely makes it a good question at this level in my opinion.

EGMO 2015

It’s been a while since I last wrote anything substantial. There have been some posts in the pipeline, but mainly I’ve been working hard on technical things that don’t translate very well into blog posts, and when I haven’t been working on those, have felt like doing non-maths.

Anyway, among other things which happened recently were the UK’s IMO selection camp in Cambridge during the last week of March, and the fourth European Girls’ Mathematical Olympiad in Belarus this past weekend. At the former, I was quite busy organising various things, and so gave a session based on some of my favourite problems about points and lines that I’ve used in the past. My hope was that, with each problem in turn, the students would actively invest time in thinking about whether each other’s ideas seemed likely to be profitable. The point is that being critical about your approach is a necessary skill once you start having lots of ideas for approaches.

This is hard to practise at school just by reading around, whether competition material or otherwise. In the competition world, official solutions often contain unmotivated magic. This is reasonable, since they are supposed to be a minimal elementary demonstration of the problem’s validity. Motivation is a luxury which space and time sometimes don’t permit. And the solutions you find on, for example, Mathlinks often give the impression that the solvers know how to do 25,000 specific types of problem, and the sole task is to identify which type the latest one belongs to. Relating problems to ones you’ve seen before is important, but can hide, or at least dilute, the actual ideas in some cases. Knowing that a specific inequality is a special case of a big machine allows you to claim a ‘solution’ but doesn’t help you identify the relevant ideas.

Later in the camp, a conversation arose concerning to what extent the younger staff, having now experienced various levels of higher education in mathematics, found these elementary methods and problems easier than when they were at school. It’s a complicated question, and I don’t think there’s a simple answer. I think the students might suspect that doing a university degree teaches you ‘advanced techniques’ which immediately simplify some of these problems. In rare examples this can be the case, but the majority of the time, I personally feel the only advantage I have is perhaps better instincts for whether something is or isn’t going to work.

Probably a good test would be Euclidean geometry. Adult olympiad staff typically come in two flavours: those who used to be good at geometry and feel out of practice; and those who weren’t good at geometry and certainly had no inclination to remedy this after they left school. I’m probably closer to the first category and I definitely feel out of practice, but also have minimal inclination to remedy this. Nonetheless, on the rare occasions I try these questions (and it’s not going to be for 4.5 hours at this stage…) my progress rate is probably comparable to when I was in sixth form. I’ve no idea how to turn this into a more concrete assessment, but there must be something about doing abstract maths all the time that prevents you forgetting how to do this, so presumably it should be at least as helpful in the types of problem with non-zero overlap with things I actually do. I won’t discuss geometry in the rest of this post, but I did also enjoy the geometry questions – it’s simply that I feel anything I have to say about them would be less useful than saying nothing.

In any case, I enjoyed trying the problems from Belarus in between bouts of typing, and thought I would write some brief comments about how I decided whether my ideas were good or not. To achieve this, I’ve looked at my rough, and will tell you the ideas which didn’t work, as well as some of the ones which did. I’ve paraphrased the statements slightly to avoid having too many LaTeX tags.

WARNING: what follows will spoil questions {2,4,5} if you haven’t already looked at them, but would like to.

Question 2 – A domino is a 2 × 1 or 1 × 2 tile. Determine in how many ways exactly n^2 dominoes can be placed without overlapping on a 2n × 2n chessboard so that every 2 × 2 square contains at least two uncovered unit squares which lie in the same row or column.

The wording of the question prompted me to consider splitting the board naturally into n^2 2 × 2 squares. I then thought about this ‘at least’ in the question, and concluded that for these 2 × 2 squares, it should be ‘exactly’.

I tried doing an unusual colouring, in which I coloured half the black squares green and half blue, and tried to show that either only greens or only blues were covered, but this was clearly not true, or fixable. I then tried to do something similar for the other set of 2 × 2 squares (those whose vertices have (odd, odd) coordinates). Roughly speaking, if too few of the outer cells on the original board are covered, you can’t achieve the bounds on these odd inner squares. But this didn’t really give any useful information.

However, it did get me thinking about how what happens in the far top-left affects the rest of the board, and from there most of the ideas came quickly. I described a 2 × 2 square as N, E, S, W depending on which ‘half’ of the square was covered. Then if a square is N, all the squares directly above it must also be N (*).

I then made two mistakes, and if we’re looking for reasons where experience is helpful, it was probably here, as I spotted them fairly quickly, rather than wasting ages and ages.

First, I decided that either all squares were {N,E} or all were {S,W}, which seemed plausible when I had been focusing on the top-left. This gave an answer of 2 \binom{2n}{n}, but after a bit more thought this is clearly not symmetric enough.

Second, I thought condition (*) might be the only constraint, along with similar ones for E, S, W naturally too. I tried to count these using a similar method of enumerating lines between these regions, and I realised I didn’t know how to do these sensibly, for example if it looked like this:

EGMO15 Q2 diagram

This led to another bit of meta-maths that I probably wouldn’t have considered if I was 16. Namely, that the idea of counting these monotone lines across the 2n x 2n grid was too nice not to be useful. Also, if I couldn’t see a way to adapt it for this more complicated setup, my setup was probably wrong. This thought was useful, and then by thinking about the interface between {N,E} and {S,W}, then the other way round, it made sense that the configuration could be parameterised by two monotone lines between different pairs of corners, giving an answer of \binom{2n}{n}^2.
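
The answer \binom{2n}{n}^2 can be verified by brute force for tiny boards; here is a sketch which is feasible for n=1 and n=2:

    from itertools import combinations

    def count(n):
        size = 2 * n
        dominoes = ([((r, c), (r, c + 1)) for r in range(size) for c in range(size - 1)]
                    + [((r, c), (r + 1, c)) for r in range(size - 1) for c in range(size)])

        def ok(placement):
            cells = [cell for dom in placement for cell in dom]
            if len(set(cells)) < 2 * n * n:  # the dominoes must not overlap
                return False
            covered = set(cells)
            for r in range(size - 1):        # every 2x2 square needs two uncovered
                for c in range(size - 1):    # cells sharing a row or column
                    sq = [(r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1)]
                    unc = [x for x in sq if x not in covered]
                    if not any(a[0] == b[0] or a[1] == b[1]
                               for a, b in combinations(unc, 2)):
                        return False
            return True

        return sum(ok(p) for p in combinations(dominoes, n * n))

    print(count(1), count(2))  # 4 and 36, matching C(2n, n)^2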

So, if it’s possible to give a reason why I could do this, it’s probably because I had an arsenal of ways to check an answer for plausibility, which saved wasting time on something wrong, and also because I trusted that the answer would be nice, which saved wasting time on something which was also wrong, and would probably have been very complicated to resolve.

Question 4 – Determine whether there exists an infinite sequence a_1,a_2,\ldots of positive integers satisfying a_{n+2}=a_{n+1}+\sqrt{a_{n+1}+a_n}.

So, my first reaction was ‘no way’. Why? Because everything’s determined by the first two terms, and I couldn’t think of any reason why a cool choice of the first two terms would force all of the sums a_{n+1}+a_n to be perfect squares. It seemed just about possible that we could have arbitrarily large finite sequences with this property. (Though this also turns out to be impossible.)

I imagine many contestants might have felt similarly and spent a while playing round with quadratic residues to get a sense for exactly how hard it is to make this work for the initial terms. But I was suspicious of the form of the recurrence. We know that if it had been defined linearly, then the terms would grow exponentially, but it’s natural to ask roughly how fast they grow in this example, even relaxing the restriction that the terms be integers.

The first observation was that they are indeed (strictly) growing! But how fast? Are there actually enough squares that every a_{n+1}+a_n can be a different one? Note that the squares themselves satisfy a similar recurrence relation (N+1)^2 = N^2 + 2\sqrt{N^2} + 1 > N^2 + \sqrt{ N^2 + N^2}. So this seemed like a very good idea, and my instinct was that this should work, and I felt glad that I hadn’t pursued the quadratic residues approach.

From here we were basically home. I asked whether the sequence grows faster than the sequence (\frac{n^2}{2}), and the answer was no as

a_{n+2}\le a_{n+1}+ \sqrt{2 a_{n+1}} \le \left(\sqrt{a_{n+1}} + \tfrac{1}{\sqrt{2}}\right)^2,

so if (after translating indices) a_n = \frac{x^2}{2}, we have a_{n+1}\le \frac{(x+1)^2}{2}. This is clearly a problem, or at best a very tight constraint, if all the \{a_n+a_{n+1}\} have to be perfect squares, so even though we aren’t completely finished, I was confident I could be in one or two lines, with a bit of care. Fiddling with small values of n looked like it would work; alternatively, looking far enough along the sequence that the effect of the first two terms has dissipated, we must by the pigeonhole principle have a_{n+2}+a_{n+1}=(k+1)^2,\, a_{n+1}+a_n=k^2 for some n and k, which is enough by a parity argument, using the original statement.

This final bit looks and feels a bit messy, but by this stage it really is just finding any argument to justify why a sequence which grows at most as fast as n^2 can’t actually be n^2 eventually.
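
The growth rate is also easy to watch numerically (a sketch; the starting values are arbitrary):

    import math

    a = [1.0, 3.0]  # arbitrary positive starting values
    for _ in range(10000):
        a.append(a[-1] + math.sqrt(a[-1] + a[-2]))
    n = len(a)
    print(a[-1] / (n * n / 2))  # close to 1: the sequence grows like n^2/2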

Probably the reason I did this quickly was because I trusted my instinct for ‘no’, and also because there seemed a quick way to work out *roughly* how this sequence grew. Sensibly approximating things, and deciding whether it’s appropriate to approximate them, is definitely something I feel I’ve got better at during my PhD, so I guess this was helpful: approximate freely, then throw the important condition that the elements are integers back in at the very end.

Question 5 – Anastasia partitions the integers [1,2m] into pairs. Boris tries to choose one integer from each pair so that the sum is n. Given n and m, prove Anastasia can select pairs so that Boris can’t succeed.

So I wasted some thought time by imagining that n was fixed and trying to come up with ideas for the choice of pairs which might avoid n. It struck me that there was no reason why a typical (uniformly chosen) pairing should avoid any particular n unless this value was particularly big or small.

How big or how small? Well, Boris can always choose the bigger element of a pair, so the way to make the maximum sum as small as possible is to pair as (1,2), (3,4), … , (2m-1,2m). Conveniently, this also makes the minimum sum as large as possible. These extremes can be calculated as m(m+1) and m^2 respectively. Suddenly this seems great, because we’ve actually solved the problem for a huge range of n, ie everything not between these extrema.

The key step here was to start considering special pairings, chosen to give a reduced set of possible sums. Once we’ve had this idea, it makes sense to consider other different special pairings. The reason we got a small set of possible sums is that there’s lots of overlap. We can achieve lots of overlap by looking at the difference between elements in a pair, and making as many of these the same as possible. For, consider pairs (a,a+d), (b,b+d). Then it’s the same to take a and b+d as to take a+d and b, so we get the same sums in lots of different ways.

The other way to have as many differences the same as possible is to go for (1,m+1), (2,m+2), … , (m,2m). Conveniently, we can parameterise the sums now, because at each choice we decide whether to add an extra m or not, so the sum must be 1+2+\ldots+m plus some multiple of m. So we can defeat Boris, except when n is of the form \binom{m+1}{2}+km.
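
Both special pairings are easy to inspect computationally; a sketch for m=4:

    from itertools import product

    def achievable_sums(pairs):
        # all sums Boris can make by picking one element from each pair
        return sorted({sum(choice) for choice in product(*pairs)})

    m = 4
    adjacent = [(2 * i - 1, 2 * i) for i in range(1, m + 1)]  # (1,2), (3,4), ...
    shifted = [(i, i + m) for i in range(1, m + 1)]           # (1,m+1), (2,m+2), ...
    print(achievable_sums(adjacent))  # everything in [m^2, m(m+1)]: [16, ..., 20]
    print(achievable_sums(shifted))   # m(m+1)/2 + km: [10, 14, 18, 22, 26]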

This is a good point to stop because what followed was basically book-keeping. We only have to consider a couple of specific cases when m is odd, and one when m is even, that happen not to fall into either of the categories we can now deal with. It wasn’t too hard to corrupt the examples we already have to deal with these.

The key here was responding to initial failure by trying to find any control at all over n. Perhaps given enough time I would have started trying special pairings? Equally, I might have tried small examples, which would not really have been hugely helpful for this question. In any case, trying to eliminate very small or very large n luckily worked well, as a) I didn’t need to use the word ‘very’ twice in the end; and b) the idea of looking at choices of pairings to minimise the number of sums quickly gave other useful deductions.

I also really enjoyed Question 3, though was suspicious that I’d used a bound a great deal weaker than one in the question. Along the way, I was trying something confusing and not especially useful that led me into some interesting theory about Latin squares, which I might write something about later in the week.