# EGMO 2019

Last week, we held our annual IMO training and selection camp in the lovely surroundings of Trinity College, Cambridge. Four of our students have subsequently spent this week in Kiev, for the eighth edition of the prestigious European Girls’ Mathematical Olympiad.

The UK team, none of whom had attended the competition before, and all of whom remain eligible to return at least once, did extremely well, placing fourth among the official European countries, and earning three silver medals and one gold medal, with complete or almost-complete solutions to all six problems between them. More details are available on the competition website.

In this post, I’m going to discuss some of the non-geometry problems. As a special treat, the discussion of Question 6 is led by Joe Benton, which is only fitting, since he wrote it. You can find the first day’s three problems here, and the second day’s problems here. The geometry problems are treated in a separate post.

Problem One

Triple equalities are hard to handle directly, so generally one starts with a single equality, for example the left hand one $a^2b+c = b^2c+a$, after noting that the setup is cyclic (but not symmetric) in the variables, under cycling $a\to b \to c\to a$.

Some quick notes:

• The given equations are inhomogeneous, meaning that not every term has the same degree in terms of {a,b,c}. In complicated equations, normally we want to cancel terms, and we certainly can’t cancel terms with different degrees! So a good first idea is to find a way to make the equations homogeneous, for example using the condition, which is itself inhomogeneous. For example, one could replace $c$ with $c(ab+bc+ca)$, and it’s clear this might lead to some cancellation. In this case, the algebra is more straightforward if one rearranges and factorises first to obtain $a(1-ab)=c(1-bc)$, from which the condition gives $a(bc+ca)=c(ab+ac)$.
• Shortly after this, we find $a^2c=ac^2$, which means $a=c$ unless at least one of these terms is equal to zero.
• This case distinction should not be brushed over lightly as a tiny detail. This is the sort of thing that can lose you significant marks, especially if the omitted case turns out to be more involved than the standard case.
• This is a good example of planning your final write-up. The case distinction $\{a,c\ne 0\}, \{a\text{ or }c=0\}$ is really clumsy. What about b? We have already said that the setup is cyclic in (a,b,c), and it’s annoying to throw this aspect away. It really would be much better to have the overall case distinction at the start of your argument, for example into $\{a,b,c \ne 0\}$ and $\{a\text{ or }b\text{ or }c=0\}$, because then in the latter case you could assume that $a=0$ without loss of generality, then see what that implied for the other variables.
• In this setting, one really should check that the claimed solutions satisfy both the condition and the triple-equality. This is a complicated family of simultaneous equations, and even if you’re lucky enough to have settled on an argument that is reversible, it’s much easier to demonstrate that everything is satisfied as desired than actually to argue that the argument is reversible.
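Since the rearrangement chain here is exactly the kind of thing that's easy to get wrong under time pressure, here is a quick numeric spot-check (a sketch in pure Python, sampling random triples satisfying the condition) that substituting the condition into $a(1-ab)=c(1-bc)$ really does reduce it to $a^2c=ac^2$, ie $ac(a-c)=0$:

```python
import random

random.seed(2019)

def check_chain(trials=1000, tol=1e-9):
    """Spot-check that whenever ab + bc + ca = 1, the factorised form
    a(1-ab) - c(1-bc) is identically a^2*c - a*c^2, so the equality
    a(1-ab) = c(1-bc) reduces to a^2*c = a*c^2, ie ac(a-c) = 0."""
    for _ in range(trials):
        a = random.uniform(-3, 3)
        b = random.uniform(-3, 3)
        if abs(a + b) < 0.1:
            continue                       # avoid division blow-up
        c = (1 - a * b) / (a + b)          # forces ab + bc + ca = 1
        lhs = a * (1 - a * b) - c * (1 - b * c)
        rhs = a * a * c - a * c * c
        assert abs(lhs - rhs) <= tol * (1 + abs(lhs) + abs(rhs))
    return True

assert check_chain()
```

This only checks the substitution step, of course; the case analysis for $ac(a-c)=0$ still has to be done by hand.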

Problem Two

Maybe I speak for a number of people when I say I’m a little tired of domino tiling problems, so will leave this one out.

Problem Five

I liked this problem. A lot. It did feel like there were potential time-wasting bear traps one might slip into, and perhaps these comments are relevant:

• This feels like quite a natural problem to ask. In particular, the task specified by (A)+(B) is much more elegant than the RHS of (C). So unless you have immediate insight for the RHS of (C), it makes sense to start with the task, and aim for any bound similar to (C).
• There are a lot of transformations one could make, for example, repeatedly removing n from one of the $a_n$s, or re-ordering them conveniently somehow, which shouldn’t affect any solution. You can also attack the RHS of (C) in a number of ways (perhaps with a case split for n odd or even?). As with involved but easy angle chases discussed in the companion post, it’s probably better to hold this in mind than instantly to do all of these, since it will just obscure the argument if you end up not using the result of the simplification!
• One should always try small examples. Here, it’s worth trying examples large enough to get a sense of what’s happening, because I think the crucial observation is that (unless you were very very unlucky) there are lots of suitable sequences $B=(b_i)$. (*)
• Certainly the case where the $(a_i)$s themselves form a complete n-residue system is ‘easiest’ and certainly straightforward. One might say that the case where the $(a_i)$ are equal (or at least congruent) is ‘hardest’ (in that $(b_i-a_i)$ might have to take the largest values in some sense), but is also straightforward. There’s certainly the option of trying to rigorise this, but I don’t know whether this can be made to work.

# MOG 2018

I’m teaching a lecture course at Technion this coming semester, which is going to limit opportunities for travel somewhat for the next three months or so. It was therefore nice to take advantage of the new ultra-cheap Luton-Tel Aviv connection on Wizzair to make a whistlestop weekend trip to Cambridge. It was lovely to catch up with friends, colleagues and former students, including some people who fit into all those categories.

Among my various mathematical diversions over the weekend, about thirty of us spent most of Sunday marking the 2018 edition of the UK’s Mathematical Olympiad for Girls (MOG). You can find the paper without solutions here, and the paper with official solutions and commentary here.

I have had several blog posts on probabilistic, competitive and educational topics queued up over this summer which has alternated between extreme busyness and lethargy, but this felt like a good opportunity to break the drought and both start and finish a post on the same day!

I’m going to write some thoughts about Question 4, mostly about part a), which I hope might be useful to future students who perhaps find this post while preparing for next year’s edition of the competition. Here’s the statement:

Each of 100 houses in a row is to be painted white or yellow. The residents are quite particular and request that no three neighbouring houses are all the same colour.

a) Explain why no more than 67 houses can be painted yellow.

b) In how many ways may the houses be painted if exactly 67 are painted yellow?

Of all the questions on the paper, based on the fraction of the scripts which I saw, I suspect Q4a) is the one for which the most candidates will have scored significantly fewer points than they’d been hoping for.

I think this illustrates some useful guidance for maths competitions, and also illuminates some differences between maths and other subjects which candidates might not have thought about.

Many students wrote that because three houses in a row cannot be yellow, the way to get as many yellow houses as possible is to use a YYWYYW… pattern. Since 100 is not a multiple of three, you have to work out what to do with the final house, after you’ve coloured 99 using the pattern, and sometimes you can colour it yellow, and sometimes you can’t.

Finding and proving results about maxima

Before focusing on the qualities of this argument, it’s worth returning to the question statement. It’s in two parts, and (assuming the answer to the second part isn’t “none”) the overall goal is to find the maximum number of yellow houses, and to count the number of ways to achieve this maximum number, presumably by characterising them somehow. If the question had just asked us to find the maximum, and had not told us what it is, we would have to:

• Decide on a value M, and show that there is a configuration with exactly M yellows.
• Show that no value >M of yellows is possible.

Note that, apart from deciding on the value of M, which really has to come at the start of the argument, you can do these steps in either order. And if you want to find the number of configurations achieving the maximum, you have to:

• Decide on a value M, and count the number of configurations with exactly M yellows, and check that this is positive!
• Show that no value >M of yellows is possible.

Again, you can perform these steps in either order. If the question is posed like this (with M unknown), you have no idea as you are attempting the paper whether finding the value of M is supposed to be easy or supposed to be a really significant part of the attack on the problem. So it makes sense to start your solution with a sentence like “I’m going to show that M is the maximum” and to continue by giving your proof an exoskeleton like “First, I’ll show that M+1 (or >M) is impossible. [Do this. Then leave a gap.] Now, I’ll count the number of configurations with M yellows.” This leaves no ambiguity that you are doing the problem in the right way, and for someone reading it, the validity of your argument might be almost instantly clear once the rough outline is established.

In any case, on MOG Q4, the question has been split into two parts deliberately, and you have been told what the value of the maximum is. In fact, the order is the opposite to what I’ve proposed in the bullet points above. But this just serves to emphasise that, even (or especially) when the maximum is already given, the order of attack is not important. The two bullet points may or may not be related as mathematical arguments, but they are certainly structurally independent.

Structuring your approach to MOG Q4

But, to return to MOG Q4, the first part is asking you to show that 68 or more yellow houses is not possible under the rules. The second part is where you are supposed to establish what all the 67 yellow configurations are, and on this occasion it may turn out that you re-use ideas established in part a) during the enumeration of part b).

Anyway, I’m going to repeat an earlier paragraph. Many students wrote that because three houses in a row cannot be yellow, the way to get as many yellow houses as possible is to use a YYWYYW… pattern. Since 100 is not a multiple of three, you have to work out what to do with the final house, after you’ve coloured 99 using the pattern, and sometimes you can colour it yellow, and sometimes you can’t.

Now that we’ve established some general principles, we can see why some students may have got themselves into knots that weren’t directly relevant to the question.

• The blue comment “sometimes you can colour it yellow, and sometimes you can’t” is just not relevant because at this stage we are trying to show that 68 (or more) is not possible. If at any point it becomes a discussion of whether one class of configurations leads to 66 or 67, this can’t be relevant to showing that 68 is not possible.
• The green sentence “because three houses in a row cannot be yellow, the way to get as many yellow houses as possible is to use a YYWYYW… pattern” is, as written, true, but dangerous. Because it’s not precise. As becomes clear in part b), there are actually lots of ways to get as many yellow houses as possible, and while all of them have this pattern most of the time, it is certainly not the case that you use this pattern for the first 99, and then make a decision about the final house. (Since this only gives one configuration.)
• The fact that 100 is not a multiple of 3 is crucial to the problem. If it had asked about 99 houses, you would have had more room just to work with a breakdown into 33 groups of three houses, and some of the argument would have been simpler. So an argument saying that 100/3 = 66.66…, which we can round up to 67 is not really explaining what’s going on. Of course we have to have a whole number of houses, but why you round up rather than down is all about the rules of colouring.
• Finally, the notion of an ‘iterated maximum’ is a generally erroneous argument. In such an argument, you argue that the maximum way to attack a subproblem is X, and once you have X, clearly the maximal way to finish is Y, so the maximum total is X+Y. This is wrong, because of course it’s possible there might be a non-maximum way X*<X to attack the subproblem, from which you can finish in Y*>>Y, so the total X*+Y* ends up being higher overall.
• In this example, the problem is not that the green statement is an erroneous iterated maximum. It just sounds like one. It can easily be made into a non-erroneous argument. But as soon as you start with a pattern, but then consider how to add something onto the end of it, you have essentially fallen into this trap unless you very carefully explain why taking the maximum for the first 99 houses plus the maximum for the final house must give a maximum overall.

Because the condition is a local one (meaning it tells you about rules governing two or three adjacent houses, not a global one affecting all houses in the street), it’s good to use a local argument. In the example:

(YYW)(YYW)…(YY*W)(YW*Y)…(YWY)(Y),

notice that it is not true that in every group of three consecutive houses, two are Yellow. (Look between the *asterisks*.) However, it is true that in every group of three consecutive houses, at most two are Yellow, and so in particular if we split as shown above, there are at most two yellows in all the three-brackets, and at most one at the end, giving at most 67 in total. The fact that the pattern changes from YYW to YWY does not affect this argument.

If you set it up this way, then this gives you a good headstart on part b). You might notice that the pattern can change from YYW to YWY (as in the previous example), but not the other way round. And what about WYY? Can this ever come up? You need to justify both aspects of this, and then clarify how to count the number of places where the pattern could change, having now established that all configurations have this form.
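The bracketing bound is also easy to check exhaustively for small streets. A minimal brute force (for illustration only; n=100 is of course out of reach of enumeration, but the pattern $n - \lfloor n/3\rfloor$, which gives 67 at n=100, is already visible, as is the count of maximal colourings for streets of length 3m+1):

```python
from itertools import product

def max_yellow(n):
    """Brute force over all 2-colourings of n houses with no three adjacent
    houses the same colour; return (max #yellows, #colourings achieving it)."""
    best, count = -1, 0
    for cfg in product("YW", repeat=n):
        if any(cfg[i] == cfg[i + 1] == cfg[i + 2] for i in range(n - 2)):
            continue
        y = cfg.count("Y")
        if y > best:
            best, count = y, 1
        elif y == best:
            count += 1
    return best, count

for n in range(3, 17):
    best, count = max_yellow(n)
    # the 'at most two yellows per bracket of three' bound is tight
    assert best == n - n // 3
    if n % 3 == 1:
        # for n = 3m+1, there are m+1 maximal colourings: one for each
        # point at which the pattern can switch from YYW to YWY
        assert count == n // 3 + 1
```

The count m+1 in the final assertion matches the switch-point characterisation of part b); I haven't asserted anything for other residues of n, where the structure of maximal colourings is different.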

The question setters produced a beautiful paper, and it was clear that one of their goals was to use multiple parts sensibly so as to suggest entrance routes into the problem. Many many students successfully used the factorisations discussed in the first parts of Q1, and even those who didn’t succeed seemed mostly to be trying to find differences of squares, rather than prime factors out of nowhere.

This principle applied elsewhere though, and in particular on Q3. I think some students thought Q3a) was supposed to involve three calculations, but in fact it required just giving a combinatorial justification for why the set of objects counted by b can be split into two, where one subset is counted by c and one by d. This argument, which might have been familiar to some through knowledge of Pascal’s triangle, applies equally well at other squares in the box, and also in three dimensions. If you don’t have the gap in the middle of the house, you can immediately apply this argument to Ghastly in part b). But in fact, even with the gap, you can make this work, because you are only really counting across *places you could have been just before in your path*, and the gap means that sometimes this is smaller than it would be without the gap.

There were other ways to do the question, such as counting all paths, and all paths through the gap, but all attempts using a b=c+d+e argument seemed quick and successful.
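The b=c+d+e idea is just a dynamic programme: each cell’s path count is the sum over the cells you could have occupied just before. Here is a sketch with an illustrative box size and blocked cell; the actual dimensions and gap position from Q3 are not reproduced in this post, so treat those values as placeholders:

```python
def paths(dims, blocked=frozenset()):
    """Count monotone lattice paths from one corner of a 3D box to the
    opposite corner, avoiding a set of blocked cells, by summing over the
    (up to three) cells a path could have occupied just before."""
    nx, ny, nz = dims
    count = {}
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                if (x, y, z) in blocked:
                    count[(x, y, z)] = 0
                elif (x, y, z) == (0, 0, 0):
                    count[(x, y, z)] = 1
                else:
                    count[(x, y, z)] = (count.get((x - 1, y, z), 0)
                                        + count.get((x, y - 1, z), 0)
                                        + count.get((x, y, z - 1), 0))
    return count[(nx - 1, ny - 1, nz - 1)]

# Without obstacles this reproduces multinomial coefficients ...
assert paths((3, 3, 3)) == 90      # 6!/(2!2!2!)
# ... and a blocked cell only ever reduces the count.
assert paths((3, 3, 3), frozenset({(1, 1, 1)})) < 90
```

The iteration order (x, y, z all ascending) guarantees every predecessor count is already computed, which is the same observation as filling in Pascal’s triangle row by row.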

A couple of other tips to bear in mind on these sorts of questions.

• When you’re deep in a calculation, it’s easy to forget that you started with a problem of some kind, and you should always go back to check that your answer makes sense in the context of the problem! Even if it fits the algebra, you should not have any of the following (among many other examples that can emerge after a small conceptual or calculation error):
• A negative radius for a circle
• A non-integer number of configurations
• A probability >1
• You should also check for potential symmetries. While it’s dangerous to use symmetry excessively until you’re sure the situation genuinely is symmetric, it can be your friend in reducing errors. For example, maybe you found the final count in the far corner for Ghastly the ghost to be eg 18+16+12? But the three adjacent-to-far-corner cells are clearly equivalent from the starting point. Any difference between them can only be a feature of the argument you are trying. So they should have the same path count, and probably you made a small error in one of them.
• It’s also worth checking your answers feel roughly right for magnitude. In particular, again on Q3, if you get a very large answer for part b), you should try to check against other options. How many total paths are there? (Ie, ignoring the gap.) If your answer is much bigger than this, you must have made a small mistake, probably multiplying where you should have been adding. It might well, for example, not feel right if you end up showing that the number of paths is greater than the number of atoms in the universe. This applies equally well for Q4. Note that 67! is an extremely large number, and is larger than 2^100, which is the total number of colourings without observing any of the consecutive rules, let alone the maximum condition.
• As a final point, it was good to see that many candidates were making a bona fide effort to get as many marks as possible. I would say, however, that just writing down a speculative answer is often less helpful than a single sentence explaining what you are trying to do. Pretty much any comment on patterns and a division into cases for Q4 would be useful, even if it wasn’t completely correct, whereas a guess at eg $\binom{67}{33}$ or similar for the final answer is probably less valuable, and draws attention to what you haven’t done, rather than any progress that you really have made.
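For the record, the factorial comparison in the magnitude bullet above is trivial to verify directly:

```python
import math

# 67! really does dwarf 2^100, the number of unconstrained
# two-colourings of 100 houses.
assert math.factorial(67) > 2 ** 100
```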

Anyway, I hope that was interesting at some level to some girls potentially interested in taking MOG or any other challenging mathematics in the future.

# BMO1 2017 – Questions 5 and 6

The first round of the British Mathematical Olympiad was sat yesterday. The questions can be found here and video solutions here. My comments on the first four questions are in the previous post.

Overall, I didn’t think any of the questions on this paper were unusually difficult by the standard of BMO1, but I found everything slightly more time-consuming than typical. I thought Question 5 was a great problem; I tried lots of things unsuccessfully first, and so wanted to discuss it in slightly more technical language. For Question 6 I made a decisive mistake, which I’ll explain, and which cost a lot of time. But in general, my point is that the back end of the paper was a little fiddlier than normal, required longer written solutions, and perhaps many students will have had less time than expected to attack it after working through the earlier questions.

Question Five

As I said before, I thought this question was quite challenging. Not because the solution is particularly exotic or complicated, but because there were so many possible things that might have worked. In my opinion it would not have been out of place at the start of an IMO paper, because it’s perfectly possible to have enough good ideas that eliminating the ones that don’t work takes an hour, or hours. Even though it slightly spoils the flow of the solution, I’m particularly trying to emphasise the tangents that didn’t work, mostly for reassurance to anyone who spent a long time struggling.

I was thinking about this question in terms of a 2Nx2N board, where N is even, and for the given question equal to 100. I spent a while thinking that the bound was 8N-4, corresponding to taking the middle two rows and the middle two columns, but not the 2×2 square which is their intersection. If you think of a comb as a ‘handle’ of 1xN cells, with an extra N/2 alternating cells (say, ‘teeth’) bolted on, then it’s clear this construction works because there’s never space to fit in a handle, let alone the teeth.

I couldn’t prove that this was optimal though. A standard way to prove a given bound K was optimal would be to produce a tiling on the board with K combs, where every cell is included in exactly one comb. But this is clearly not possible in this situation, since the number of cells in a comb (which is 150) does not divide the total number of cells on the board.

Indeed, there is a general observation that if you take a comb and a copy of the comb rotated by 180, the teeth of the second comb can mesh perfectly with the teeth of the first comb to generate a 3xN unit. I wasted a moderate amount of time pursuing this route.

[Note, it will be obvious in a minute why I’m writing ‘shaded’ instead of ‘coloured’.]

But in motivating the construction, I was merely trying to shade cells so that they intersected every possible 1xN handle, and maybe I could prove that it was optimal for this. In fact, I can’t prove it’s optimal because it isn’t optimal – indeed it’s clear that a handle through one of the middle rows intersects plenty of shaded cells, not just one. However, with this smaller problem in mind, it didn’t take long to come up with an alternative proposal, namely splitting the board into equal quarters, and shading the diagonals of each quarter, as shown.

It seems clear that you can’t fit in an unshaded 1xN handle, and in any sensible tiling with 1xN handles, each handle contains exactly one shaded cell, so this shading (with 4N shaded cells) is optimal for handles. But is it optimal for a comb itself?

Consider a shading which works, so that all combs include a shaded cell. It’s clear that a comb is contained within a 2xN block, and in such a 2xN block, there are four possible combs, as shown.

You need to cover all these combs with some shading somewhere. But if you put the shaded cell on a tooth of comb A, then you haven’t covered comb B. And if you put the shaded cell on the handle of comb A, then you haven’t covered one of comb C and comb D. You can phrase this via a colouring argument too. If you use four colours with period 2×2, as shown

then any comb involves exactly three colours, and so one of them misses out the colour of the shaded cell. (I hope it’s clear what I mean, even with the confusing distinction between ‘shaded’ and ‘coloured’ cells.)

Certainly we have shown that any 2xN block must include at least two shaded cells. And that’s pretty much it. We have a tiling with 2N copies of a 2xN block, and with at least two shaded cells in each, that adds to at least 4N shaded cells overall.

Looking back on the method, we can identify another way to waste time. Tiling a board, eg a chessboard with dominoes, is a classic motif, which often relies on clever colouring. So it’s perhaps lucky that I didn’t spot this colouring observation earlier. Because the argument described really does use the local properties of how the combs denoted A-D overlap. An attempt at a global argument might start as follows: we can identify 2N combs which don’t use colour 1, and tile this subset of the grid with them, so we need to shade at least 2N cells from colours {2,3,4}. Similarly for sets of colours {1,3,4}, {1,2,4}, and {1,2,3}. But if we reduce the problem to this, then using roughly 2N/3 of each colour fits this global requirement, leading to a bound of 8N/3, which isn’t strong enough. [1]
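For what it’s worth, the whole argument can be checked by brute force in the smallest case N=2 (a 4x4 board), where, on my reading of the comb shape, a comb degenerates to an L-tromino: a 2x2 block with one cell omitted. That reading is an assumption on my part, but granting it, exhaustive search confirms the minimum shading is 4N = 8:

```python
from itertools import combinations

side = 4  # the 2N x 2N board with N = 2
cells = [(r, c) for r in range(side) for c in range(side)]

# Every comb placement: a 2x2 block with one of its four cells omitted
# (9 block positions x 4 omissions = 36 placements).
combs = []
for r in range(side - 1):
    for c in range(side - 1):
        block = [(r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1)]
        for omit in range(4):
            combs.append([x for i, x in enumerate(block) if i != omit])

def hits_all(shading):
    """Does every comb placement contain at least one shaded cell?"""
    s = set(shading)
    return all(any(cell in s for cell in comb) for comb in combs)

# No 7-cell shading works (hitting is monotone, so neither does anything
# smaller), hence the minimum is at least 8 = 4N ...
assert not any(hits_all(s) for s in combinations(cells, 7))

# ... and the 'diagonals of each quarter' construction achieves it.
diag8 = [(r + d, c + d) for (r, c) in [(0, 0), (0, 2), (2, 0), (2, 2)]
         for d in (0, 1)]
assert hits_all(diag8)
```

This is of course only the toy case; the 2xN-block argument in the post is what handles general N.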

Question Six

A word of warning. Sometimes it’s useful to generalise in problems. In Q5, I was thinking in terms of N, and the only property of N I used was that it’s even. In Q4, we ignored 2017 and came back to it at the end, using only the fact that it’s odd. By contrast, in Q2, the values did turn out to be important for matching the proof bounds with a construction.

You have to guess whether 300 is important or not here. Let’s see.

I have a natural first question to ask myself about the setup, but some notation is useful. Let $a_1,a_2,\ldots,a_{300}$ be the ordering of the cards. We require that $\frac{a_1+\ldots+a_n}{n}$ is an integer for every $1\le n\le 300$. Maybe the values of these integers will be important, so hold that thought, but for now, replace it with the divisibility statement that $n | a_1+\ldots+a_n$.

I don’t think it’s worth playing with small examples until I have a better idea whether the answer is 5 or 295. So the natural first question is: “what does it mean to have $(a_1,\ldots,a_{n-1})$ such that you can’t pick a suitable $a_n$?”

It means that there is no integer k in $\{1,\ldots,300\}\backslash\{a_1,\ldots,a_{n-1}\}$ such that $n\,\big|\,(a_1+\ldots+a_{n-1})+k$, which for now we write as

$k\equiv -(a_1+\ldots+a_{n-1})\,\mod n.$

Consider the congruence class of $-(a_1+\ldots+a_{n-1})$ modulo n. There are either $\lfloor \frac{300}{n}\rfloor$ or $\lceil \frac{300}{n}\rceil$ integers under consideration in this congruence class. If no such k exists, then all of the relevant integers in this congruence class must appear amongst $\{a_1,\ldots,a_{n-1}\}$. At this stage, we’re trying to get a feel for when this could happen, so lower bounds on n are most relevant. Therefore, if we get stuck when trying to find $a_n$, we have

$\lfloor \frac{300}{n} \rfloor\text{ or }\lceil \frac{300}{n}\rceil \le n-1,$ (*)

which is summarised more succinctly as

$\lfloor \frac{300}{n} \rfloor \le n-1.$ (**)

[Note, with this sort of bounding argument, I find it helpful to add intermediate steps like (*) in rough. The chance of getting the wrong direction, or the wrong choice of $\pm 1$ is quite high here. Of course, you don’t need to include the middle step in a final write-up.]

We can check that (**) is false when $n\le 17$ and true when $n\ge 18$. Indeed, both versions of (*) are true when $n\ge 18$.
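The threshold is quick to confirm by direct computation, which also guards against exactly the off-by-one risks mentioned in the bracketed note:

```python
import math

# Find where (**) floor(300/n) <= n-1 starts holding, and confirm that
# both the floor and ceiling versions of (*) hold from that point on.
holds = [n for n in range(1, 301) if 300 // n <= n - 1]
assert min(holds) == 18
assert all(math.ceil(300 / n) <= n - 1 for n in range(18, 301))
```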

So we know the minimum failure length is at least 17. But is there a failing sequence of length 17? At a meta-level, it feels like there should be. That was a very natural bounding argument for 17 (which recall corresponds to $n=18$), and it’s easy to believe that might be part of an official solution. If we achieve equality throughout the argument, that’s most of the way to a construction as well. It won’t be so easy to turn this argument into a construction for $n\ge 19$ because there won’t be equality anywhere.

We have to hope there is a construction for $n=18$. What follows is a description of a process to derive (or fail to derive) such a construction. In a solution, one would not need to give this backstory.

Anyway, in such a construction, let $\alpha\in\{1,2,\ldots,18\}$ describe the congruence class modulo 18 which is exhausted by $\{a_1,\ldots,a_{17}\}$. I’m going to hope that $\alpha=18$ because then the calculations will be easier since everything’s a multiple of 18. We haven’t yet used the fact that, for an obstruction, we need $\alpha\equiv-(a_1+\ldots+a_{n-1})$ modulo n. We definitely have to use that. There are 16 multiples of 18 (ie relevant integers in the congruence class), so exactly one of the terms so far, say $a_j$, is not a multiple of 18. But then

$0 \equiv 0+\ldots+0+a_j+0+\ldots+0,$

which can’t happen. With a bit of experimentation, we find a similar problem making a construction using the other congruence classes with 16 elements, namely $\alpha\in \{13,14,\ldots,18\}$.

So we have to tackle a different class. If $\alpha\le 12$ then our sequence must be

$\alpha,18+\alpha,2\times 18 +\alpha, \ldots, 16\times 18 + \alpha,$

in some order. In fact, let’s add extra notation, so our sequence is

$(a_1,\ldots,a_{17}) = (18\lambda_1+ \alpha,\ldots,18\lambda_{17}+\alpha),$

where $(\lambda_1,\ldots,\lambda_{17})$ is a permutation of {0,…,16}. And so we require

$k \,\big|\, 18(\lambda_1+\ldots+\lambda_k) + k\alpha,$ (%)

for $1\le k\le 17$. But clearly we can lop off that $k\alpha$, and could ignore the 18. Can we find a permutation $\lambda$ such that

$k \,\big|\, \lambda_1+\ldots+\lambda_k.$

This was where I wasted a long time. I played around with lots of examples and kept getting stuck. Building it up one term at a time, I would typically get stuck around k=9,10. And I had some observations that in all the attempted constructions, the values of $\frac{\lambda_1+\ldots+\lambda_k}{k}$ were around 8 and 9 too when I got stuck.

I became convinced this subproblem wasn’t possible, and decided that would be enough to show that n=18 wasn’t a possible failure length. I was trying to rule out the subproblem via a parity argument (how must the $\lambda_i$s alternate odd/even to ensure all the even-indexed partial sums are even?), but parity turned out not to be an obstruction. Then I came up with a valid argument. We must have

$\lambda_1+\ldots+\lambda_{17}=136= 16\times 8 + 8\quad\text{and}\quad 16\,\big|\,\lambda_1+\ldots+\lambda_{16},$

which means $\lambda_1+\ldots+\lambda_{16}$ must be 128 = 15×8 + 8, ie $\lambda_{17}=8$. But then we also have $15\,\big|\, \lambda_1+\ldots+\lambda_{15}$, which forces $\lambda_{16}=8$ also. Which isn’t possible.

As if this hadn’t wasted enough time, I then tried to come up with a construction for n=19, which has lots more variables, took a lot more time, and seemed to suffer from similar problems, just in a more complicated way. So I became convinced I must have made a mistake, because I was forced down routes that were way too complicated for a 3.5 hour exam. Then I found it…

What did I do wrong? I’ll just say directly. I threw away the 18 after (%). This made the statement stronger. (And in fact false.) Suppose instead I’d thrown away a factor of 9 (or no factors at all, but it’s the residual 2 that’s important). Then I would be trying to solve

$k\,\big|\,2(\lambda_1+\ldots+\lambda_k).$

And now if you experiment, you will notice that taking $\lambda_1=0,\lambda_2=1,\lambda_3=2,\ldots$ seems to work fine. And of course, we can confirm this, using the triangle number formula for the second time in the paper!

This had wasted a lot of time, but once that thought is present, we’re done, because we can go straight back and exhibit the sequence

$(a_1,\ldots,a_{17}) = (1, 18+1,2\times 18 +1,\ldots, 16\times 18 +1).$

Then the sum so far is congruent to -1 modulo 18, but we have exhausted all the available integers which would allow the sum of the first 18 terms to be a multiple of 18. This confirms that the answer to the question as stated is 17.
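Both halves of the claim, that the sequence satisfies every partial-sum condition, and that no remaining card extends it, are easy to verify directly:

```python
# The length-17 obstruction: a_i = 18(i-1) + 1, ie 1, 19, 37, ..., 289.
seq = [18 * i + 1 for i in range(17)]

# Every initial average is an integer: k | a_1 + ... + a_k.
for k in range(1, 18):
    assert sum(seq[:k]) % k == 0

# The sum so far is -1 mod 18, and every card congruent to 1 mod 18 in
# {1, ..., 300} is already used, so no 18th card can work.
assert sum(seq) % 18 == 17
remaining = set(range(1, 301)) - set(seq)
assert all((sum(seq) + x) % 18 != 0 for x in remaining)
```

(The partial-sum check is, of course, just the triangle number identity $k \mid k(k-1)$ in disguise.)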

At the start, I said that we should be cautious about generalising. In the end, this was wise advice. We definitely used the fact that 18 was even in the stage I over-reduced the first time. We also used the fact that there was at least one value of $\alpha$ with an ‘extra’ member of the congruence class. So I’m pretty sure this proof wouldn’t have worked with 288 = 16×18 cards.

Footnotes

[1] – If shading were a weighted (or continuous, or whatever you prefer) property, ie each cell has a quantity of shading given by a non-negative real number, and we merely demand that the total shading per comb is at least one, then the bound 8N/3 is in fact correct for the total shading. We could look at a 2xN block, and give 1/3 shading to one cell of each colour in the block. Alternatively, we could be very straightforward and apply shading 2/(3N) to every cell in the grid. The fact that shading has to be (in this language) zero or one imposes meaningful extra constraints which involve the shape of the comb.

# EGMO 2017 – Paper One – Geometric subconfigurations

I’ve recently been in Cambridge, running the UK’s annual training and selection camp for the International Mathematical Olympiad. My memories of living and studying in Cambridge are very pleasant, and it’s always nice to be back.

Within olympiad mathematics, the UK has traditionally experienced a weakness at geometry. By contrast to comparable nations, for example those from Eastern Europe, our high school curriculum does not feature much Euclidean geometry, except for the most basic of circle theorems and angle equalities, which normally end up as calculation exercises, rather than anything more substantial. So to arrive at the level required to be in with a chance of solving even the easier such questions at international competitions, our students have to do quite a lot of work for themselves.

I’ve spent a bit of time in the past couple of years thinking about this, and how best to help our students achieve this. The advice “go away and do as many problems as you can, building up to IMO G1, then a bit further” is probably good advice, but we have lots of camps and correspondence training, and I want to offer a bit more.

At a personal level, I’m coming from a pragmatic point of view. I don’t think Euclidean geometry is particularly interesting, even though it occasionally has elegant arguments. My main concern is taming it, and finding strategies for British students (or anyone else) to tame it too [1].

Anyway, I’m going to explain my strategy and thesis as outlined at the camp, then talk about Question 1 from EGMO 2017, a competition held in Zurich this year, the first paper of which was sat earlier today (at time of writing). The UK sent a strong team of four girls, and I’m looking forward to hearing all about their solutions and their adventures, but later. I had intended to talk about the other two questions too, but I can’t think of that much to say, so have put this at the end.

My proposed strategy

Before explaining my proposed strategy, let me discuss a couple of standard approaches that sometimes, but rarely, work at this level:

• Angle chase (or length chase) forwards directly from the configuration. Consider lots of intersection points of lines. Consider angles and lengths as variables, and try to find relations.
• Exactly as above, but working back from the conclusion.
• Doing both, and attempting to meet in the middle.

The reason why this doesn’t work is that by definition competitions are competitive, and all participants could probably do this. For similar reasons competition combinatorics problems tend not to reduce instantly to an exhaustive search.

It’s also not very interesting. I’m certainly unlikely to set a problem if it’s known to yield to such an approach. When students do try this approach, common symptoms and side-effects involve a lot of chasing round conditions that are trivially equivalent to conditions given in the statement. For example, if you’re given a cyclic quadrilateral, and you mark on the opposite, supplementary angles, then chase heavily, you’ll probably waste a lot of time deducing other circle theorems which you already knew.

So actually less is more. You should trust that if you end up proving something equivalent to the required conclusion, you’ll notice. And if you are given a cyclic quadrilateral, you should think about what’s the best way to use it, rather than what are all the ways to use it.

On our selection test, we used a problem which essentially had two stages. In the first stage, you proved that a particular quadrilateral within the configuration was cyclic; and in the second stage, you used this to show the result. Each of these stages by themselves would have been an easy problem, suitable for a junior competition. What made this an international-level problem was that you weren’t told that these were the two stages. This is where a good diagram is useful. You might well guess from an accurate figure that TKAD was cyclic, even if you hadn’t constructed it super-accurately with ruler and compasses.

So my actual strategy is to think about the configuration and the conclusion separately, and try and conjecture intermediate results which might be true. Possibly such an intermediate result might involve an extra point or line. This is a standard way to compose problems. Take a detailed configuration, with some interesting properties within it, then delete as much as possible while keeping the properties. Knowing some standard configurations will be useful for this. Indeed, recognising parts of the original diagram which resemble known configurations (possibly plus or minus a point or line) is a very important first step in many settings.

Cyclic quadrilaterals and isosceles triangles are probably the simplest examples of such configurations. Think about how you often use properties of cyclic quadrilaterals without drawing in either the circle or its centre. The moral is that you don’t need every single thing that’s true about the configuration to be present on the diagram to use it usefully. If you know lots of configurations, you can do this sort of thing in other settings too. Some configurations I can think up off the top of my head include: [2]

• Parallelograms. Can be defined by corresponding angles, or by equal opposite lengths, or by midpoint properties of the centre. Generally if you have one of these definitions, you should strongly consider applying one of the other definitions!
• The angle bisector meets the opposite perpendicular bisector on the circumcircle.
• The Simson line: the feet of the three perpendiculars from a point to the sides (extended if necessary) of a triangle are collinear precisely when the point is on the circumcircle.
• The incircle touch point and the excircle touch point are reflections of each other in the corresponding midpoint. Indeed, all the lengths in this diagram can be described easily.
• The spiral similarity diagram.
• Pairs of isogonal conjugates, especially altitudes and radii; and medians and symmedians.

Note, all of these can be investigated by straightforward angle/length-chasing. We will see how one configuration turned out to be very useful in EGMO. In particular, the configuration is simple, and its use in the problem is simple, but it’s the idea to focus on the configuration as often as possible that is key. It’s possible but unlikely you’d go for the right approach just by angle-analysis alone.

EGMO 2017 Question 1

Let ABCD be a convex quadrilateral with $\angle DAB=\angle BCD=90^\circ$ and $\angle ABC > \angle CDA$. Let Q and R be points on segments BC and CD, respectively, such that line QR intersects lines AB and AD at points P and S, respectively. It is given that PQ=RS. Let the midpoint of BD be M, and the midpoint of QR be N. Prove that the points M, N, A and C lie on a circle.

First point: as discussed earlier, we understand cyclic quadrilaterals well, so hopefully it will be obvious once we know enough to show these four points are concyclic. There’s no point guessing at this stage whether we’ll do it by eg opposite angles, or by power of a point, or by explicitly finding the centre.

But let’s engage with the configuration. Here are some straightforward deductions.

• ABCD is cyclic.
• M is the centre.

We could at this stage draw in dozens of equal lengths and matching angles, but let’s not do that. We don’t know yet which ones we’ll need, so we again have to trust that we’ll use the right ones when the time comes.

What about N? If we were aiming to prove $\angle AMC = \angle ANC$, this might seem tricky, because we don’t know very much about this second angle. Since R and Q are defined (with one degree of freedom) by the equal length condition, it’s hard to pin down N in terms of C. However, we do know that N is the midpoint opposite C in triangle QCR, which has a right angle at C. Is this useful? Well, maybe it is, but certainly it’s reminiscent of the other side of the diagram. We have four points making up a right-angled triangle, and the midpoint of the hypotenuse here, but also at (A,B,D,M). Indeed, also at (C,B,D,M). And now also at (C,Q,R,N). This must be a useful subconfiguration, right?

If you draw this subdiagram separately, you have three equal lengths (from the midpoint to every other point), and thus two pairs of equal angles. This is therefore a very rich subconfiguration. Again, let’s not mark on everything just yet – we trust we’ll work out how best to use it later.

Should we start angle-chasing? I think we shouldn’t. Even though we have now identified lots of potential extra pairs of equal angles, we haven’t yet dealt with the condition PQ=RS at all.

Hopefully as part of our trivial equivalences phase, we said that PQ=RS is trivially equivalent to PR=QS. Perhaps we also wrote down RN=NQ, and so it’s also trivially equivalent to PN=NS. Let’s spell this out: N is the midpoint of PS. Note that this isn’t how N was defined. Maybe this is more useful than the actual definition? (Or maybe it isn’t. This is the whole point of doing the trivial equivalences early.)

Well, we’ve already used the original definition of N in the subconfiguration (C,Q,R,N), but we can actually use the subconfiguration (A,P,S,N) too. This is very wordy and makes it sound complicated. I’ve coloured my diagram to try and make this less scary. In summary, the hypotenuse midpoint configuration appears four times, and this one is the least obvious. If you found it, great; if not, I hope this gave quite a lot of motivation. Ultimately, even with all the motivation, you still had to spot it.

Why is this useful? Because a few paragraphs earlier, I said “we don’t know very much about this second angle $\angle ANC$”. But actually, thanks to this observation about the subconfiguration, we can decompose $\angle ANC$ into two angles, namely $\angle ANP+\angle QNC$, which are the apex angles of two isosceles triangles. Now we can truly abandon ourselves to angle-chasing, and the conclusion follows after a bit of work.

I’m aware I’ve said it twice in the prelude, and once in this solution, but why not labour my point? The key here was that spotting that a subconfiguration appeared twice led you to spot that it appeared a further two times, one of which wasn’t useful, and one of which was very useful. The subconfiguration itself was not complicated. To emphasise its simplicity, I can even draw it in the snow:

Angle-chasing within the configuration is easy, even with hiking poles instead of a pen, but noticing it could be applied to point N was invaluable.

Other questions

Question 2 – My instinct suggested the answer was three. I find it hard to explain why. I was fairly sure they wouldn’t have asked if it was two, and I couldn’t see any reason why k would be greater than 3, but still finite. I mean, is it likely that $k=14$ is possible, but $k=13$ is not?

In any case, coming up with a construction for $k=3$ is a nice exercise, and presumably carried a couple of marks in the competition. My argument showing that $k=2$ is not possible, like most arguments I discussed with others, was not overwhelmingly difficult, but didn’t really have any key steps or insight, so isn’t very friendly in a blog context, and I’ll probably say nothing more.

Question 3 – Again, I find it hard to say anything very useful, because the first real thing I tried worked, and it’s hard to motivate why. I was confused how the alternating turn-left / turn-right condition might play a role, so I ignored it initially. I was also initially unconvinced that it was possible to return to any edge in any direction (i.e. that the path must escape off to infinity down some ray), but I was aware that both of these were too strong a loosening of the problem to be useful, in all likelihood.

Showing that you can go down an edge in one direction but not another feels like you’re looking for some binary invariant, or perhaps a two-colouring of the directed edges. I couldn’t see any way to colour the directed edges, so I tried two-colouring the faces, and there’s only one way to do this. Indeed, on the rare occasions (ahem) I procrastinate, drawing some lines then filling in the regions they form in this fashion is my preferred doodle. Here’s what it looks like:

and it’s clear that if the path starts with a shaded region on its right, it must always have a shaded region on its right. As I say, this just works, and I find it hard to motivate further.

A side remark is that it turns out that my first loosening is actually valid. The statement remains true with arbitrary changes of direction, rather than alternating changes. The second loosening is not true. There are examples where the trajectory is periodic. I don’t think they’re hugely interesting though, so won’t digress.

Footnotes

[1] – “To you, I am nothing more than a fox like a hundred thousand other foxes. But if you tame me, then we shall need each other. To me, you will be unique in all the world. To you, I shall be unique in all the world,” said the Fox to the Little Prince. My feelings on taming Euclidean geometry are not this strong yet.

[2] – Caveat. I’m not proposing learning a big list of standard configurations. If you do a handful of questions, you’ll meet all the things mentioned in this list several times, and a few other things too. At this point, your geometric intuition for what resembles what is much more useful than exhaustive lists. And if you’re anxious about this from a pedagogical point of view, it doesn’t seem to me to be a terribly different heuristic from lots of non-geometry problems, including in my own research. “What does this new problem remind me of?” is not unique to this area at all!

# Antichains in the grid

In the previous post on this topic, we discussed Dilworth’s theorem on chains and antichains in a general partially ordered set. In particular, whatever the size of the largest antichain in a poset, it is possible to partition the poset into exactly that many chains. So for various specific posets, or the directed acyclic graphs associated to them, we are interested in the size of this largest antichain.

The following example turned out to be more interesting than I’d expected. At a conventional modern maths olympiad, there are typically three questions on each paper, and for reasons lost in the mists of time, each student receives an integer score between 0 and 7 per question. A natural question to ask is “how many students need to sit a paper before it’s guaranteed that one will score at least as highly as another on every question?” (I’m posing this as a straight combinatorial problem – the correlation between scores on different questions will be non-zero and presumably positive, but that is not relevant here.)

The set of outcomes is clearly $\{0,1,\ldots,7\}^3$, with the usual weak domination partial order inherited from $\mathbb{R}^3$. Then an antichain corresponds to a set of triples of scores such that no triple dominates another triple. So the answer to the question posed is: “the size of the largest antichain in this poset, plus one.”

In general, we might ask about $\{1,2,\ldots,n\}^d$, again with the weak domination ordering. This directed graph, which generalises the hypercube as well as our example, is called the grid.
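Since the original grid is small, we can sanity-check the n=8, d=3 example by brute force. This is a quick Python sketch (the function name `layer_sizes` is mine), taking on trust for now the claim, discussed in the rest of this post, that a largest antichain is a largest layer:

```python
from collections import Counter
from itertools import product

def layer_sizes(n, d):
    """Number of score vectors in {0,...,n-1}^d with each total score."""
    return Counter(sum(v) for v in product(range(n), repeat=d))

# Original problem: scores in {0,...,7} on d=3 questions.
sizes = layer_sizes(8, 3)
largest_layer = max(sizes.values())
print(largest_layer)      # 48: the two middle layers (totals 10 and 11) tie
print(largest_layer + 1)  # 49 students guarantee a dominating pair
```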

Heuristics for the largest antichain

Retaining the language of test scores on multiple questions is helpful. In the previous post, we constructed a partition of the poset into antichains, indexed by the elements of some maximal chain, by starting with the sources, then looking at everything descended only from sources, and so on. (Recall that the statement that this is possible was referred to as the dual of Dilworth’s theorem.) In the grid, there’s a lot of symmetry (in particular under the mapping $x\mapsto n+1-x$ in every coordinate), and so you end up with the same family of antichains whether you work upwards from the sources or downwards from the sinks. (Or vice versa depending on how you’ve oriented your diagram…) The layers of antichains also have a natural interpretation – each layer corresponds to a given total score. It’s clear a priori why each of these is an antichain. If A scores the same as B overall, but strictly more on the first question, this must be counterbalanced by a strictly lower score on another question.

So a natural guess for the largest antichain is the largest antichain corresponding to some fixed total score. Which total score should this be? It ought to be the middle layer, that is total score $\frac{(n+1)d}{2}$, or the two values directly on either side if this isn’t an integer. My intuition was probabilistic. The uniform distribution on the grid is achieved by IID uniform distributions in each coordinate, which you can think of as a random walk, especially if you subtract off the mean first. It feels that any symmetric random walk should have mode zero or next-to-zero. Certainly this works asymptotically in a rescaled sense by CLT, and in a slightly stronger sense by local CLT, but we don’t really want asymptotics here.

When I started writing the previous paragraph, I assumed there would be a simple justification for the claim that the middle layer(s) was largest, whether by straight enumeration, or some combinatorial argument, or even generating functions. Perhaps there is, and I didn’t spot it. Induction on d definitely works though, with a slightly stronger hypothesis that the layer sizes are symmetric around the median, and monotone on either side of the median. The details are simple and not especially interesting, so I won’t go into them.
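For what it’s worth, the symmetry and unimodality of the layer sizes are easy to check computationally for small n and d, using the generating-function view: the layer sizes are the coefficients of $(1+x+\ldots+x^{n-1})^d$. A sketch (names my own):

```python
def grid_layer_sizes(n, d):
    """Layer sizes of the grid by total score: the coefficients of
    (1 + x + ... + x^(n-1))^d, computed by repeated convolution."""
    layers = [1]
    for _ in range(d):
        new = [0] * (len(layers) + n - 1)
        for i, c in enumerate(layers):
            for j in range(n):
                new[i + j] += c
        layers = new
    return layers

# Check symmetry about the median, and monotonicity either side of it.
for n in range(2, 7):
    for d in range(1, 5):
        L = grid_layer_sizes(n, d)
        assert L == L[::-1]
        mid = len(L) // 2
        assert all(L[i] <= L[i + 1] for i in range(mid))
```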

From now on, the hypothesis is that this middle layer of the grid is the largest antichain. Why shouldn’t it, for example, be some mixture of middle-ish layers? (*) Well, heuristically, any score sequence in one layer removes several possibilities from a directly adjacent layer, and it seems unlikely that this effect is going to cancel out if you take some intermediate number of score sequences in the first layer. Also, the layers get smaller as you go away from the middle, so because of the large amount of symmetry (coordinates are exchangeable etc), it feels reasonable that there should be surjections between layers in the outward direction from the middle. The union of all these surjections gives a decomposition into chains.

This result is in fact true, and its proof by Bollobas and Leader, using shadows and compressions, can be found in the very readable Sections 0 and 1 of [1].

Most of the key ideas to a compression argument are present in the case n=2, for which some notes by Leader can be found here, starting with Proof 1 of Theorem 3, the approach of which is developed over subsequent sections. We treat the case n=2, but focusing on a particularly slick approach that does not generalise as successfully. We also return to the original case d=3 without using anything especially exotic.

Largest antichain in the hypercube – Sperner’s Theorem

The hypercube $\{0,1\}^d$ is the classical example. There is a natural correspondence between the vertices of the hypercube, and subsets of $[d]$. The ordering on the hypercube corresponds to the ordering given by containment on $\mathcal{P}([d])$. Almost by definition, the k-th layer corresponds to subsets of size k, and thus includes $\binom{d}{k}$ subsets. The claim is that the size of the largest antichain is $\binom{d}{\lfloor d/2 \rfloor}$, corresponding to the middle layer if d is even, and one of the two middle layers if d is odd. This result is true, and is called Sperner’s theorem.

I know a few proofs of this from the Combinatorics course I attended in my final year at Cambridge. As explained, I’m mostly going to ignore the arguments using compression and shadows, even though these generalise better.

As in the previous post, one approach is to exhibit a covering family of exactly this number of disjoint chains. Indeed, this can be done layer by layer, working outwards from the middle layer(s). The tool here is Hall’s Marriage Theorem, and we verify the relevant condition by double-counting. Probably the hardest case is demonstrating the existence of a matching between the middle pair of layers when d is odd.

Take d odd, and let $d':= \lfloor d/2\rfloor$. Now consider any subset S of the d’-th layer $\binom{[d]}{d'}$. We now let the upper shadow of S be

$\partial^+(S):= \{A\in \binom{[d]}{d'+1}\,:\, \exists B\in S, B\subset A\},$

the sets in the (d’+1)-th layer which lie above some set in S. To apply Hall’s Marriage Theorem, we have to show that $|\partial^+(S)|\ge |S|$ for every choice of S.

We double-count the number of edges in the hypercube from $S$ to $\partial^+(S)$. Firstly, for every element $B\in S$, there are exactly $d'+1$ edges upwards, one for each of the $d-d'=d'+1$ elements that can be added to $B$. Secondly, every element $A\in\partial^+(S)$ has exactly $d'+1$ edges down to $\binom{[d]}{d'}$, and so in particular at most $d'+1$ edges to elements of S. Thus

$(d'+1) |S|=|\text{edges }S\leftrightarrow\partial^+(S)| \le (d'+1) |\partial^+(S)|,$

which is exactly what we require for Hall’s MT. The argument for the matching between other layers is the same, with a bit more notation, but also more flexibility, since it isn’t a perfect matching.
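If you want to convince yourself of Hall’s condition empirically before proving it, it is easy to test on random subsets of the middle layer for a small odd d. A sketch (names here are my own):

```python
import random
from itertools import combinations

def upper_shadow(S, d):
    """All sets in the next layer up containing some member of S."""
    return {A | {x} for A in S for x in range(d) if x not in A}

random.seed(0)
d = 7                 # d odd
dp = d // 2           # d' in the notation above
layer = [frozenset(c) for c in combinations(range(d), dp)]

# Hall's condition |shadow(S)| >= |S| on random subsets S of the layer.
for _ in range(100):
    S = random.sample(layer, random.randint(1, len(layer)))
    assert len(upper_shadow(S, d)) >= len(S)
```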

The second proof looks at maximal chains. Recall, in this context, a maximal chain is a sequence $\mathcal{C}=B_0\subset B_1\subset\ldots\subset B_d$ where each $B_k\in \binom{[d]}{k}$. We now consider some largest-possible antichain $\mathcal{A}$, and count how many maximal chains include an element $A\in\mathcal{A}$. If $|A|=k$, it’s easy to convince yourself that there are $k!\,(d-k)!$ such maximal chains. However, given $A\ne A'\in\mathcal{A}$, the set of maximal chains containing A and the set of maximal chains containing A’ are disjoint, since $\mathcal{A}$ is an antichain. Since there are $d!$ maximal chains in total, we obtain

$\sum_{A\in\mathcal{A}} |A|!\,(d-|A|)! \le d!.$ (**)

Normally after a change of notation, so that we are counting the size of the intersection of the antichain with each layer, this is called the LYM inequality after Lubell, Yamamoto and Meshalkin. The heuristic is that the sum of the proportions of layers taken up by the antichain is at most one. This is essentially the same as earlier at (*). This argument can also be phrased probabilistically, by choosing a *random* maximal chain, and considering the probability that it intersects the proposed largest antichain, which is, naturally, at most one. Of course, the content is the same as this deterministic combinatorial argument.

Either way, from (**), the statement of Sperner’s theorem follows rapidly, since we know that $\binom{d}{|A|}\le \binom{d}{\lfloor d/2\rfloor}$ for all A.
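The LYM inequality is also easy to spot-check numerically; exact arithmetic with `Fraction` avoids floating-point noise. A sketch (the helper `is_antichain` is mine):

```python
from fractions import Fraction
from itertools import combinations
from math import comb

def is_antichain(family):
    """No member is a proper subset of another."""
    return not any(A < B or B < A for A in family for B in family)

d = 6
# A full middle layer achieves equality in LYM.
middle = [frozenset(c) for c in combinations(range(d), d // 2)]
assert is_antichain(middle)
lym = sum(Fraction(1, comb(d, len(A))) for A in middle)
assert lym == 1

# A mixed-layer antichain: the layer proportions still sum to at most one.
mixed = [frozenset({0}), frozenset({1, 2}), frozenset({1, 3, 4})]
assert is_antichain(mixed)
assert sum(Fraction(1, comb(d, len(A))) for A in mixed) <= 1
```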

Largest antichain in the general grid

Instead of attempting a proof or even a digest of the argument in the general case, I’ll give a brief outline of why the previous arguments don’t transfer immediately. It’s pretty much the same reason for both approaches. In the hypercube, there is a lot of symmetry within each layer. Indeed, almost by definition, any vertex in the k-th layer can be obtained from any other vertex in the k-th layer just by permuting the labels (or permuting the coordinates if thinking as a vector).

The hypercube ‘looks the same’ from every vertex, but that is not true of the grid. Consider for clarity the n=8, d=3 case we discussed right at the beginning, and compare the scores (7,0,0) and (2,2,3). The number of maximal chains through (7,0,0) is $\binom{14}{7}$, while the number of maximal chains through (2,2,3) is $\binom{7}{2, 2,3}\binom{14}{4,5,5}$, and the latter is a lot larger, which means any attempt to use the second argument is going to be tricky, or at least require an extra layer of detail. Indeed, exactly the same problem arises when we try and use Hall’s condition to construct the optimal chain covering directly. In the double-counting section, it’s a lot more complicated than just multiplying by d’, as was the case in the middle of the hypercube.
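The chain counts quoted above are products of multinomial coefficients, which we can verify directly. A sketch (`chains_through` is a name I’ve made up):

```python
from math import factorial

def multinomial(parts):
    """Multinomial coefficient (sum parts)! / (p_1! p_2! ...)."""
    out = factorial(sum(parts))
    for p in parts:
        out //= factorial(p)
    return out

def chains_through(point, n):
    """Maximal chains of {0,...,n-1}^d through a given point:
    monotone paths up from the bottom times monotone paths on to the top."""
    down = multinomial(point)
    up = multinomial([n - 1 - x for x in point])
    return down * up

print(chains_through((7, 0, 0), 8))  # 3432, i.e. C(14,7)
print(chains_through((2, 2, 3), 8))  # 52972920, vastly more
```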

Largest antichain in the d=3 grid

We can, however, do the d=3 case. As we will see, the main reason we can do the d=3 case is that the d=2 case is very tractable, and we have lots of choices for the chain coverings, and can choose one which is well-suited to the move to d=3. Indeed, when I set this problem to some students, an explicit listing of a maximal chain covering was the approach some of them went for, and the construction wasn’t too horrible to state.

[Another factor is that it is computationally feasible to calculate the size of the middle layer, which is much more annoying when d>3.]

[I’m redefining the grid here as $\{0,1,\ldots,n-1\}^d$ rather than $\{1,2,\ldots,n\}^d$.]

The case distinction between n even and n odd is going to make both the calculation and the argument annoying, so I’m only going to treat the even case, since n=8 was the original problem posed. I should be honest and confess that I haven’t checked the n odd case, but I assume it’s similar.

So when n is even, there are two middle layers, namely $\frac{3n}{2}-2$ and $\frac{3n}{2}-1$ (corresponding to total scores 10 and 11 in the original problem). I calculated the number of elements in the $\frac{3n}{2}-1$ layer by splitting based on the value of the first coordinate. I found it helpful to decompose the resulting sum as

$\sum_{k=0}^{n-1} = \sum_{k=0}^{\frac{n}{2}-1} + \sum_{k=\frac{n}{2}}^{n-1},$

based on whether there is an upper bound, or a lower bound on the value taken by the second coordinate. This is not very interesting, and I obtained the answer $\frac{3n^2}{4}$, and of course this is an integer, since n is even.

Now to show that any antichain has size at most $\frac{3n^2}{4}$. Here we use our good control on the chain coverings in the case d=2. We note that there is a chain covering of the (n,d=2) grid where the chains have 2n-1, 2n-3,…, 3, 1 elements (%). We get this by starting with a maximal chain, then taking a maximal chain on what remains etc. It’s pretty much the first thing you’re likely to try.

Consider an antichain with size A in the (n,d=3) grid, and project onto the second and third coordinates. The image points are distinct, because otherwise a non-trivial pre-image of some point would be a chain. So we have A points in the (n,d=2) grid. How many can lie in each chain of the decomposition (%)? Well, if more than n lie in some chain of (%), then two must have been mapped from elements of the (n,d=3) grid with the same first coordinate, and so satisfy a containment relation. So in fact there are at most n image points in any of the chains of (%). This gives a bound of $n^2$. But of course, some of the chains in (%) have length less than n, so we are throwing away information. Indeed, the number of image points in a given chain is at most

$\min(n,\text{length of chain}),$

and so the number of image points in total is bounded by

$n+\ldots+n+ (n-1)+(n-3)+\ldots+1,$

where there are n/2 copies of n in the first half of the sum. Evaluating this sum gives $\frac{3n^2}{4}$, exactly as we wanted.
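Both the middle-layer count and the chain-covering bound are easy to verify by computer for small even n. A quick sketch (function names mine):

```python
from itertools import product

def middle_layer_size(n):
    """Elements of {0,...,n-1}^3 with total score 3n/2 - 1 (n even)."""
    target = 3 * n // 2 - 1
    return sum(1 for v in product(range(n), repeat=3) if sum(v) == target)

def chain_cover_bound(n):
    """n/2 chains capped at n, plus the shorter chains of lengths n-1, n-3, ..., 1."""
    return (n // 2) * n + sum(range(1, n, 2))

# Both quantities agree with 3n^2/4 for small even n.
for n in range(2, 11, 2):
    assert middle_layer_size(n) == chain_cover_bound(n) == 3 * n * n // 4
```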

References

[1] – Bollobas, Leader (1991) – Compressions and Isoperimetric Inequalities. Available open-access here.

# Chains and antichains

I’ve recently been at the UK-Hungary winter olympiad camp in Tata, for what is now my sixth time. As well as doing some of my own work, I have enjoyed the rare diversion of some deterministic combinatorics. It seems to be a local variant of the pigeonhole principle that, given six days at a mathematical event in Hungary, at least one element of {Ramsey theory, Erdos-Szekeres, antichains in the hypercube} will be discussed, with probability one. On this occasion, all were discussed, so I thought I’d write something about at least one of them.

Posets and directed acyclic graphs

This came up on the problem set constructed by the Hungarian leaders. The original formulation asked students to show that among any 17 positive integers, there are either five such that no one divides any other, or five such that among any pair, one divides the other.

It is fairly clear why number theory plays little role. We assign the given integers to the vertices of a graph, and whenever a divides b, we add a directed edge from the vertex corresponding to a to the vertex corresponding to b. Having translated the given situation into a purely combinatorial statement, fortunately we can translate the goal into the same language. If we can find a chain of four directed edges (hence five vertices – beware confusing use of the word ‘length’ here) then we have found the second possible option. Similarly, if we can find an antichain, a set of five vertices with no directed edges between them, then we have found the first possible option.

It’s worth noting that the directed graph we are working with is transitive. That is, whenever there is an edge a->b and b->c, there will also be an edge a->c. This follows immediately from the divisibility condition. There are also no directed cycles in the graph, since otherwise there would be a cycle of integers where each divided its successor. But of course, when a divides b and these are distinct positive integers, this means that b is strictly larger than a, and so this relation cannot cycle.

In fact, among a set of positive integers, divisibility defines a partial order, which we might choose to define as any ordering for which the associated directed graph is transitive and acyclic, although obviously we could use language more naturally associated with orderings. Either way, from now on we consider posets and the associated DAGs (directed acyclic graphs) interchangeably.

Dilworth’s theorem

In the original problem, we are looking for either a large chain, or a large antichain. We are trying to prove that it’s not possible to have largest chain size at most four, and largest antichain size at most four, when there are 17 vertices, so we suspect there may be some underlying structure: in some sense perhaps the vertex set is the ‘product’ of a chain and an antichain, or at least admits a method of producing an antichain from each element of some chain.

Anyway, one statement of Dilworth’s theorem is as follows:

Statement 1: in a poset with nm+1 elements, there is either a chain of size n+1, or an antichain of size m+1.

Taking n=m=4 immediately finishes the original problem about families of divisors. While this is the most useful statement here, it’s probably not the original, which says the following:

Statement 2: in a poset, there exists $\mathcal{C}$ a decomposition into chains, and an antichain $A$ such that $|\mathcal{C}|=|A|$.

Remark 1: Note that for any decomposition into chains and any antichain, we have $|\mathcal{C}|\ge |A|$, since you can’t have more than one representative from any chain in the antichain. So Statement 2 is saying that equality does actually hold.

Remark 2: Statement 1 follows immediately from Statement 2. If all antichains had size at most m, then there’s a decomposition into at most m chains. But each chain has size at most n, so the total size of the graph is at most mn. Contradiction.

Unsuccessful proof strategies for Dilworth

Since various smart young people who didn’t know the statement or proof of Dilworth’s theorem attempted to find it (in the form of Statement 1, and in a special case) under timed competition conditions, it’s easy to talk about what doesn’t work, and to try to gain intellectual value by explaining why.

• Forgetting directions: in general one might well attack a problem by asking whether we have more information than we need. But ignoring the directions of the edges is throwing away too much information. After doing this, antichains are fine, but maybe you need to exhibit some undirected ‘chains’. Unless these undirected chains are much longer than you are aiming for, you will struggle to reconstruct directed chains out of them.
• Where can the final vertex go?: in a classic trope, one might exhibit a directed graph on nm vertices with neither a chain of size n+1 nor an antichain of size m+1. We attempt to argue that this construction is essentially unique, and that it goes wrong when we add an extra vertex. As a general point, it seems unlikely to be easier to prove that exactly one class of configurations has a given property in the nm case, than to prove no configurations has the same property in the nm+1 case. A standalone proof of uniqueness is likely to be hard, or a disguised rehash of an actual proof of the original statement.
• Removing a chain: if, for contradiction, there is no chain of size n+1, then a chain of maximal length has at most n elements, and removing it leaves at least $n(m-1)+1$ vertices. If you have a long chain left, then you’re done, although maximality has gone wrong somewhere. So, inductively, you have an antichain of size m in what remains. But it’s totally unclear why it should be possible to extend this antichain with one of the vertices you’ve just removed.

An actual proof of Dilworth (Statement 1), and two consequences

This isn’t really a proof, but rather a way of classifying the vertices in the directed graph from which this version of Dilworth can be read off. As we said earlier, we imagine there may be some product structure. In particular, we expect to be able to find a maximal chain, and a nice antichain associated to each element of the maximal chain.

We start by letting $V_0$ consist of all the vertices which are sources, that is, have zero indegree. These are minima in the partial ordering setting. Now let $V_1$ consist of all vertices whose in-neighbourhood is entirely contained in $V_0$, that is, they are descendants only of $V_0$. Then let $V_2$ consist of all remaining vertices whose in-neighbourhood is entirely contained in $V_0\cup V_1$ (but not entirely in $V_0$, otherwise they would have already been treated), and so on. We end up with what one might call an onion decomposition of the vertices, $V_0,V_1,\ldots,V_k$, based on how far they are from the sources. We can then find a chain of size k+1 by starting with any vertex in $V_k$ and constructing backwards towards a source. Moreover, this is the largest possible size of a chain, because the level index strictly increases along every directed edge: each step of a chain moves from some $V_i$ to some $V_j$ with j>i.
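As a concrete sketch of this onion decomposition (the edge-list representation and the worked divisibility example are mine, not from the post):

```python
from collections import defaultdict

def onion_decomposition(vertices, edges):
    """Layer a DAG as V_0, V_1, ...: V_0 is the sources, and V_i collects
    the remaining vertices whose in-neighbours all lie in earlier layers."""
    preds = defaultdict(set)
    for u, v in edges:
        preds[v].add(u)
    layers, placed, remaining = [], set(), set(vertices)
    while remaining:
        layer = {v for v in remaining if preds[v] <= placed}
        layers.append(sorted(layer))
        placed |= layer
        remaining -= layer
    return layers

def longest_chain(layers, edges):
    """Walk back from the top layer: each vertex in V_i has an
    in-neighbour in V_{i-1}, giving a chain of size k+1."""
    preds = defaultdict(set)
    for u, v in edges:
        preds[v].add(u)
    chain = [layers[-1][0]]
    for layer in reversed(layers[:-1]):
        chain.append(next(u for u in preds[chain[-1]] if u in set(layer)))
    return chain[::-1]

# divisibility order on {1, 2, 3, 5, 6}: edge u -> v whenever u < v and u | v
edges = [(1, 2), (1, 3), (1, 5), (1, 6), (2, 6), (3, 6)]
layers = onion_decomposition([1, 2, 3, 5, 6], edges)
assert layers == [[1], [2, 3, 5], [6]]
assert len(longest_chain(layers, edges)) == len(layers)
```

The middle layer [2, 3, 5] is exactly one of the antichains discussed next.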

It’s easy to check that each $V_i$ is an antichain, and thus we can read off Statement 1. A little more care, and probably an inductive argument is required to settle Statement 2.

We have however proved what is often called the dual of Dilworth’s theorem, namely that in a poset there exists a chain C, and a decomposition into a collection $\mathcal{A}$ of antichains, for which $|C|=|\mathcal{A}|$.

Finally, as promised, we return to Erdos-Szekeres, if not to positive integers. We apply Dilworth Statement 1 to a sequence of $m^2+1$ real numbers $a_0,a_1,\ldots,a_{m^2}$, with the ordering $a_i\rightarrow a_j$ if $i\le j$ and $a_i\le a_j$. Chains correspond to increasing subsequences, and antichains to decreasing subsequences, so we have shown that there is a monotone subsequence of length m+1.
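This conclusion is easy to stress-test numerically. A minimal sketch of my own, using a quadratic DP rather than the poset machinery:

```python
import random

def longest_monotone(seq):
    """Lengths of the longest non-decreasing and non-increasing subsequences."""
    n = len(seq)
    inc, dec = [1] * n, [1] * n
    for j in range(n):
        for i in range(j):
            if seq[i] <= seq[j]:
                inc[j] = max(inc[j], inc[i] + 1)
            if seq[i] >= seq[j]:
                dec[j] = max(dec[j], dec[i] + 1)
    return max(inc), max(dec)

# any m^2+1 reals contain a monotone subsequence of length m+1
random.seed(1)
m = 4
for _ in range(200):
    seq = [random.random() for _ in range(m * m + 1)]
    assert max(longest_monotone(seq)) >= m + 1
```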

# EGMO 2016 Paper I

We’ve just held our annual selection and training camp for the UK IMO team in Cambridge, and I hope it was enjoyed by all. I allotted myself the ‘graveyard slot’ at 5pm on the final afternoon (incidentally, right in the middle of this, but what England fan could have seen that coming in advance?) and talked about random walks on graphs and the (discrete) heat equation. More on that soon perhaps.

The UK has a team competing in the 5th European Girls’ Mathematical Olympiad (hereafter EGMO 2016) right now in Busteni, Romania. The first paper was sat yesterday, and the second paper is being sat as I write this. Although we’ve already sent a team to Romania this year (where they did rather well indeed! I blame the fact that I wasn’t there.), this feels like the start of the olympiad ‘season’. It also coincides well with Oxford holidays, when, though thesis deadlines loom, I have a bit more free time for thinking about these problems. Anyway, last year I wrote a summary of my thoughts and motivations when trying the EGMO problems, and this seemed to go down well, so I’m doing the same this year. My aim is not to offer official solutions, or even outlines of good solutions, but rather to talk about ideas, and how and why I decided whether they did or didn’t work. I hope some of it is interesting.

You can find the paper in many languages on the EGMO 2016 website. I have several things to say about the geometry Q2, but I have neither enough time nor geometric diagram software this morning, so will only talk about questions 1 and 3. If you are reading this with the intention of trying the problems yourself at some point, you probably shouldn’t keep reading, in the nicest possible way.

Question 1

[Slightly paraphrased] Let n be an odd positive integer and $x_1,\ldots,x_n\ge 0$. Show that

$\min_{i\in[n]} \left( x_i^2+x_{i+1}^2\right) \le \max_{j\in[n]} 2x_jx_{j+1},$

where we define $x_{n+1}=x_1$ cyclically in the natural way.

Thought 1: this is a very nice statement. Obviously when i and j are equal, the inequality holds the other way round, and so it’s interesting and surprising that constructing a set of pairs of inequalities in the way suggested gives a situation where the ‘maximum minimum’ is at least the ‘minimum maximum’.

Thought 2: what happens if n is actually even? Well, the right-hand side vanishes as soon as at least every other term is zero. And if n is even, you really can take every other term to be zero, while leaving the remaining terms positive. So then the RHS is zero and the LHS is positive, and the statement fails.

The extension to this thought is that the statement is in danger of not holding if there’s a lot of alternating behaviour. Maybe we’ll use that later.
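The even-n failure can be made concrete with a quick check (code my own):

```python
def lhs(x):
    """min over cyclic pairs of x_i^2 + x_{i+1}^2."""
    n = len(x)
    return min(x[i] ** 2 + x[(i + 1) % n] ** 2 for i in range(n))

def rhs(x):
    """max over cyclic pairs of 2 x_j x_{j+1}."""
    n = len(x)
    return max(2 * x[i] * x[(i + 1) % n] for i in range(n))

# even n: alternate 0 and 1, so every product vanishes,
# while every pair of squares contains a 1
x = [0, 1] * 3   # n = 6
assert rhs(x) == 0 and lhs(x) == 1
```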

Idea 1: We can write

$2(x_i^2+x_{i+1}^2)=(x_i+x_{i+1})^2 + |x_i-x_{i+1}|^2, \quad 4x_ix_{i+1}=(x_i+x_{i+1})^2 - |x_i-x_{i+1}|^2,$

which gives insight into ‘the problem multiplied by 2’. This was an ‘olympiad experience’ idea. These transformations between various expressions involving sums of squares turn out to be useful all the time. Cf BMO2 2016 question 4, and probably about a million other examples. As soon as you see these expressions, your antennae start twitching. Like when you notice a non-trivial parallelogram in a geometry problem, but I digress. I’m not sure why I stuck in the absolute value signs.

This was definitely a good idea, but I couldn’t find a way to make useful deductions from it especially easily. I tried converting the RHS expression for i (where LHS attains minimum) into the RHS expression for any j by adding on terms, but I couldn’t think of a good way to get any control over these terms, so I moved on.

Idea 2: An equality case is when they are all equal. I didn’t investigate very carefully at this point whether this might be the only equality case. I started thinking about what happens if you start with an ‘equal-ish’ sequence where the inequality holds, then fiddle with one of the values. If you adjust exactly one value, then both sides might stay constant. It seemed quite unlikely that both would vary, but I didn’t really follow this up. In any case, I didn’t feel like I had very good control over the behaviour of the two sides if I started from equality and built up to the general case by adjusting individual values. Or at least, I didn’t have a good idea for a natural ordering to do this adjustment so that I would have good control.

Idea 3: Now I thought about focusing on where the LHS attains this minimum. Somewhere, there are values (x,y) next to each other such that $x^2+y^2$ is minimal. Let’s say $x\le y$. Therefore we know that the element before x is at least y, and vice versa, ie we have

$\ldots, \ge y, x, y, \ge x,\ldots.$

and this wasn’t helpful, because I couldn’t take this deduction one step further on the right. However, once you have declared the minimum of the LHS, you are free to make all the other values of $x_i$ smaller, so long as they don’t break this minimum. Why? Because the LHS stays the same, and the RHS gets smaller. So if you can prove the statement after doing this, then the statement was also true before doing this. So after thinking briefly, this means that you can say that for every i, either $x_{i-1}^2+x_i^2$ or $x_i^2+x_{i+1}^2$ attains this minimum.

Suddenly this feels great, because once we know at least one of the pairs corresponding to i attains the minimum, this is related to parity of n, which is in the statement. At this point, I was pretty confident I was done. Because you can’t partition odd [n] into pairs, there must be some i which achieves a minimum on both sides. So focus on that.

Let’s say the values are (x,y,x) with $x\le y$. Now when we try to extend in both directions, we actually can do this, because the values alternate with bounds in the right way. The key is to use the fact that the minimum $x^2+y^2$ must be attained at least every other pair. (*) So we get

$\ldots, \le x,\ge y,x,y,x,\ge y,\le x,\ldots.$

But it’s cyclic, so the ‘ends’ of this sequence join up. If $n\equiv 1$ modulo 4, we get $\ge y,\ge y$ next to each other, which means the RHS of the statement is indeed at least the LHS. If $n\equiv 3$ modulo 4, then we get $\le x,\le x$ next to each other, which contradicts minimality of $x^2+y^2$ unless x=y. Then we chase equality cases through the argument (*) and find that they must all be equal. So (after checking that the case $x\ge y$ really is the same), we are done.
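Having finished the argument, the statement itself is easy to stress-test by brute force for small odd n (a sketch of my own):

```python
import random

def check(x):
    """The competition inequality for the cyclic sequence x."""
    n = len(x)
    lhs = min(x[i] ** 2 + x[(i + 1) % n] ** 2 for i in range(n))
    rhs = max(2 * x[i] * x[(i + 1) % n] for i in range(n))
    return lhs <= rhs + 1e-12

random.seed(0)
for _ in range(2000):
    n = random.choice([3, 5, 7, 9])   # odd n: the statement should hold
    assert check([random.random() for _ in range(n)])
```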

Thought 3: This really is the alternating thought 2 in action. I should have probably stayed with the idea a bit longer, but this plan of reducing values so that equality was achieved often came naturally out of the other ideas.

Thought 4: If I had to do this as an official solution, I imagine one can convert this into a proof by contradiction and it might be slightly easier, or at least easier to follow. If you go for contradiction, you are forcing local alternating behaviour, and should be able to derive a contradiction when your terms match up without having to start by adjusting them to achieve equality everywhere.

Question 3

Let m be a positive integer. Consider a 4m x 4m grid, where two cells are related to each other if they are different but share a row or a column. Some cells are coloured blue, such that every cell is related to at least two blue cells. Determine the minimum number of blue cells.

Thought 1: I spent the majority of my time on this problem working with the idea that the answer was 8m. Achieved by taking two in each row or column in pretty much any fashion, eg both diagonals. This made me uneasy because the construction didn’t take advantage of the fact that the grid size was divisible by 4. I also couldn’t prove it.

Thought 2: bipartite graphs are sometimes useful to describe grid problems. Edges correspond to cells and each vertex set to row labels or column labels.

Idea 1: As part of an attempt to find a proof, I was thinking about convexity, and why having exactly two in every row was best, so I wrote down the following:

Claim A: No point having three in a row.

Claim B: Suppose a row has only one in it + previous claim => contradiction.

In Cambridge, as usual I organised a fairly comprehensive discussion of how to write up solutions to olympiad problems. The leading-order piece of advice is to separate your argument into small pieces, which you might choose to describe as lemmas or claims, or just separate implicitly by spacing. This is useful if you have to do an uninteresting calculation in the middle of a proof and don’t want anyone to get distracted, but mostly it’s useful for the reader because it gives an outline of your argument.

My attempt at this problem illustrates an example of the benefit of doing this even in rough. If your claim is a precise statement, then that’s a prompt to go back and separately decide whether it is actually true or not. I couldn’t prove it, so started thinking about whether it was true.

Idea 2: Claim A is probably false. This was based on my previous intuition, and the fact that I couldn’t prove it or get any handle on why it might be true. I’d already tried the case m=1, but I decided I must have done it wrong so tried it again. I had got it wrong, because 6 is possible, and it wasn’t hard from here (now being quite familiar with the problem) to turn this into a construction for 6m in the general case.
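One candidate 6m construction can be machine-checked. The specific placement below is my guess at such a construction (three blue cells across the top row and three down the right-hand column of each 4×4 diagonal block), not necessarily the one found during the competition:

```python
def valid(blues, size):
    """Check every cell shares a row or column with at least two blue cells."""
    blues = set(blues)
    return all(
        sum(1 for (br, bc) in blues
            if (br, bc) != (r, c) and (br == r or bc == c)) >= 2
        for r in range(size) for c in range(size)
    )

def construction(m):
    """6m blue cells in a 4m x 4m grid, one K_{1,3}-like cluster per block."""
    blues = []
    for k in range(m):
        o = 4 * k
        blues += [(o, o), (o, o + 1), (o, o + 2),
                  (o + 1, o + 3), (o + 2, o + 3), (o + 3, o + 3)]
    return blues

assert all(valid(construction(m), 4 * m) for m in (1, 2, 3))
```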

Idea 3: This will be proved by some sort of double-counting argument. Sometimes these arguments turn on a convexity approach, but when the idea is that a few rows have three blue cells, and the rest have one, this now seemed unlikely.

Subthought: Does it make sense for a row to have more than three blue cells? No. Why not? Note that as soon as we have three in a row, all the cells in that row are fine, irrespective of the rest of the grid. If we do the problem the other way round, and have some blues, and want to fill out legally the largest possible board, why would we put six in one row, when we could add an extra row, have three in each (maintaining column structure) and be better off than we were before. A meta-subthought is that this will be impossible to turn into an argument, but we should try to use it to inform our setup.

Ages and ages ago, I’d noticed that you could permute the rows and columns without really affecting anything, so now seemed a good time to put all the rows with exactly one blue cell at the top (having previously established that rows with no blue cell were a disaster for achieving 6m), and all the columns with one blue cell at the left. I said there were $r_1,c_1$ such rows and columns. Then, I put all the columns which had a blue cell in common with the $r_1$ rows next to the $c_1$ columns I already had. Any such column has at least three blues in it, so I said there were $c_3$ of these, and similarly $r_3$ rows. The remaining columns and rows might as well be $r_0,c_0$ and hopefully won’t matter too much.

From here, I felt I had all the ingredients, and in fact I did, though some of the book-keeping got a bit fiddly. Knowing what you are aiming for and what you have means there’s only one way to proceed: find expressions in terms of these quantities which are upper bounds for the number of columns (or, if you want to keep symmetry, for the total of rows and columns, which is twice the number of columns), and lower bounds in terms of them for the number of blue cells. I found a noticeable case-distinction depending on whether $r_1\le 3c_3$ and $c_1\le 3r_3$. If both held or neither held, it was quite straightforward, and if exactly one held, it got messy, probably because I hadn’t set things up optimally. Overall, fiddling about with these expressions occupied slightly more time than actually working out the answer was 6m, so I don’t necessarily have a huge number of lessons to learn, except to be more organised.

Afterthought 2: Thought 2 said to consider bipartite graphs. I thought about this later while cycling home, because one can’t (or at least, I can’t) manipulate linear inequalities in my head while negotiating Oxford traffic and potholes. I should have thought about it earlier. The equality case is key. If you add in the edges corresponding to blue cells, you get a series of copies of $K_{1,3}$, that is, one vertex with three neighbours. Thus you have three edges for every four vertices, and everything’s a tree. This is a massively useful observation for coming up with a very short proof. You just need to show that there can’t be components of size smaller than 4. Also, I bet this is how the problem-setter came up with it…

# Lovasz Local Lemma

At our training and selection camp in Tonbridge in May, I gave a session about the use of probabilistic methods to solve olympiad combinatorics questions. Such an approach will normally be relevant to a problem where it is required to show the existence of a structure satisfying a particular property. We might consider constructing a structure randomly, and then try to show that the probability that our construction has the required property is non-zero.

There are several possible approaches to this, but often for the property we seek, we can describe a family of ‘bad events’ $A_1,\ldots,A_n$, and then we have the property if none of the bad events hold. That is, we are looking for general conditions under which we can show that $\mathbb{P}(\bigcap_{i=1}^n A_i^c)>0$.

We have two cases where this is easy to do.

1) If all the $A_i$s are independent, then so long as all $\mathbb{P}(A_i)<1$, we have our result.

2) If the probabilities of the bad events sum to less than 1, then we can use a union bound

$\mathbb{P}(\cup A_i)\le \sum_{i=1}^n \mathbb{P}(A_i) <1,$

to conclude what we want.

In Tonbridge we also discussed first-moment methods, where we show that the expected number of bad events is less than 1, meaning there must be some elements of the probability space where the number of bad events is zero. In this article, we’re looking in a different direction. We’ll try to interpolate between situation 1) and 2), to make some concrete comment on the situation where the probabilities of the bad events are small, but not small enough to use the union bound, and where the events are not quite independent, but are in some sense almost independent.

The first thing we need to do is think about what it means for a family of events to be independent. Suppose we toss a fair coin twice, and consider the events:

A:= {first coin is H}, B:={second coin is T}, C:={exactly one H}.

So, it is easy to see that each pair of events is independent, but if we know that A and B hold, then also C holds (and indeed this stays true under cyclic re-ordering). So C is not independent of the family of events {A,B}. Rather than give a formal definition, I’ll say instead that an event B is said to be independent of the family of events $\{A_1,\ldots,A_5\}$ if it is independent of

$A_3$

$A_1\cap A_2$

$A_3\cap A_4^c\cap A_5^c$,

and so on. I hope it’s clear from this what I mean. Slogan: no knowledge about which of the $A_i$ do or don’t hold gives information about B.
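Returning to the coin example, the failure of joint independence can be checked exhaustively (a quick sketch of my own):

```python
from itertools import product

omega = list(product("HT", repeat=2))        # four equally likely outcomes
A = {w for w in omega if w[0] == "H"}        # first coin is H
B = {w for w in omega if w[1] == "T"}        # second coin is T
C = {w for w in omega if w.count("H") == 1}  # exactly one H

def p(event):
    return len(event) / len(omega)

# each pair of events is independent...
assert p(A & B) == p(A) * p(B)
assert p(A & C) == p(A) * p(C)
assert p(B & C) == p(B) * p(C)
# ...but C is not independent of the family {A, B}:
assert p(A & B & C) != p(A) * p(B) * p(C)
```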

Now we return to our original setup. We want that each $A_i$ is independent of lots of the rest, and so we choose for each $i\in[n]$ a dependency set $D_i\subset [n]$ of indices, such that $A_i$ is independent of the family of events $\{A_j: j\in [n]\backslash D_i\}$. It is tempting to interpret this immediately as saying that $A_i$ depends on each event with index in $D_i$. This will normally be fine in practice, but we should emphasise that there is a lot of freedom to choose $D_i$, and the genuinely important condition is that $A_i$ is independent of the family given by the rest of the indices.

[*Health warning*: The language I am using seems sensible for anyone learning about this for the first time, but is very slightly non-standard. Instead of dependency sets, the classical notation is to consider a dependency digraph on [n], with an edge i->j whenever $j\in D_i$.]

The symmetric version of the Lovasz Local Lemma says: suppose $\mathbb{P}(A_i)\le p$ and we can choose $D_i$ as described so that $|D_i|\le d$ for each i. Then, if $epd\le 1$, we may conclude $\mathbb{P}(\bigcap_{i=1}^n A_i^c)>0$.

We’ll come back to the proof, which is best seen in a slightly more general formulation. First, let’s get to grips with the notation, by reassuring ourselves that this really does lie between the union bound and the independence case.

If the events are independent, then clearly we may take $D_i=\{i\}$ for each i, that is d=1, so we may draw the desired conclusion so long as $p\le 1/e$, which is fine, but a little less convincing than p<1. Similarly, for the union bound, we have said nothing about the dependency relationships, and so we have to take $D_i=[n]$ for each i. So we may draw the desired conclusion provided $p\le 1/ne$, which is again a factor of e worse than what the union bound itself would give.

Now we’ll see how this might be useful when applied, for example, to a probabilistic construction for the lower bound on the Ramsey number R(k). From Erdos’s argument, we know $R(k)\ge (1+o(1)) 2^{k/2}$, and we will earn an extra factor of $k\sqrt{2}/e$. An extra factor of k/e can also be obtained by an alteration argument, but this extra factor of $\sqrt{2}$ makes this LLL method one of the strongest available for this problem.

Recall that for a lower bound, we are trying to find examples of 2-edge-colourings of a large complete graph $K_n$, such that there is no monochromatic copy of a $K_k$. We consider the uniform independent random edge-colouring. It makes sense that a bad event $A_S$ should be the presence of a monochromatic complete graph induced on a set $S\subset [n]$, of size k. Since there are two potential colours for the monochromatic induced $K_k$, we have $\mathbb{P}(A_S)=2^{1-\binom{k}{2}}$. Then we take the dependency set $D_S$ of $A_S$ to include all those k-sets which share an edge with S, that is $|S\cap T|\ge 2$. We think about which vertices might contribute to the shared edge, and which make up the remainder to conclude $|D_S|\le \binom{k}{2}\binom{n-2}{k-2}$.

So now, whenever $e\cdot 2^{1-\binom{k}{2}}\binom{k}{2}\binom{n-2}{k-2}\le 1$, as a consequence of LLL we can conclude that with positive probability the random colouring gives a suitable example, that is $R(k)>n$. After fiddling around with Stirling’s formula in a fashion that isn’t hugely interesting, we can conclude $R(k)\ge (1+o(1)) \frac{k\sqrt{2}}{e} 2^{k/2}$.
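To get a feel for the numbers before the Stirling asymptotics, one can simply compute the best n the symmetric condition allows for small k (the function name and this computational framing are my own):

```python
from math import comb, e

def lll_ramsey_lower(k):
    """Largest n with e * 2^(1 - C(k,2)) * C(k,2) * C(n-2, k-2) <= 1;
    the symmetric LLL then gives R(k) > n."""
    def cond(n):
        return e * 2 ** (1 - comb(k, 2)) * comb(k, 2) * comb(n - 2, k - 2) <= 1
    n = k                                  # smallest sensible grid
    assert cond(n), "condition fails even at n = k"
    while cond(n + 1):
        n += 1
    return n

# e.g. k = 10 already beats the plain 2^{k/2} = 32 threshold
assert lll_ramsey_lower(10) > 32
```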

The prompt for this article was a discussion during our Malaysian training camp of methods applicable to IMO 2014 Q6. If you want to know just how applicable LLL is, I suggest you have a think yourself. It’s not a direct application – so some thought is involved. Maybe as an appetiser, here are some more elementary exercises, which I’ve borrowed from examples on Po-Shen Loh’s olympiad handouts, and Wikipedia, though I doubt the latter is the original source:

1) 11n points on a circle are coloured with n colours, such that each colour is used exactly 11 times. Show that it’s possible to choose one point of each colour such that no pair are adjacent.

2) A finite graph is given, and all degrees are between 50 and 100. Find some finite C such that you can always colour the vertices of such a graph so that the neighbourhood of any vertex includes at least 20 colours.

Finally, we discuss the more general statement of LLL, and explain how the proof works.

General Lovasz Local Lemma: Suppose there exist $x_i\in [0,1)$ such that $\mathbb{P}(A_i)\le x_i \prod_{j\in D_i\backslash\{i\}} (1-x_j)$ (*). Then $\mathbb{P}(\bigcap_i A_i^c)\ge \prod_i (1-x_i)>0$.

Deducing the symmetric form from the general form is not hard. Condition (*) is motivated by the proof. We want to be able to say that no matter which other events and their unions, complements etc we condition on, we still have an upper bound for the probability of $A_i$. This bound will be $x_i$. In particular, we want to show that the probability of bad event $A_i$ does not get too high, even if we know that lots of other bad events have not occurred.

The proof proceeds by showing $\mathbb{P}(A_i \mid \bigcap_{j\in S}A_j^c)\le x_i$ for all i, by induction on $|S|$. For the inductive step, you split $S=S_1\cup S_2$ where $S_1=S\cap D_i$, $S_2=S\cap D_i^c$. If $S_1=\varnothing$, you are done straight away, by the assumption (*) and independence of $A_i$ and the events not indexed by $D_i$. Otherwise, you can use the inductive hypothesis on $S_2$, and repeated Bayes’ theorem to show what you want in a series of steps that have a lot of notation, but aren’t hugely difficult.

# Local to Global in Point Set Combinatorics

I’ve spent the last few days in Cambridge, helping to run the first camp to select the UK team for the International Mathematical Olympiad. We’ve had two 4.5 hour exams, replicating what six of the students will face in the final competition, which this year is held in Cape Town, and also lots of problem-solving sessions, introducing the students to different areas of mathematics relevant to the olympiad. For various reasons, I ended up choosing to discuss combinatorial problems involving points and lines with both sets of students. This is an area which is very popular with problem setters (witness for example Question Two at last year’s IMO in Colombia) but which, because of a lack of any real ‘theory’, is often not treated at these camps.

Anyway, 90 minutes is not a lot of time to solve a large number of problems, so I was interested in the notion of using the time to try to teach the students how to have the right sort of ideas to deal with these. Obviously if there were an easy way to teach people to have good ideas, mathematically or otherwise, life would be a whole lot more straightforward. So, given we can’t do that, the next best thing is to have lots of ideas of any kind, and to get practice at choosing which ones seem best reliably and quickly.

We have two problems to consider that are very similar. I’ll state the first one:

Q1: a strip of width w is the set of all points which lie on or between two parallel lines that are a distance w apart. Let S be a set of n>2 points such that any three can be covered by a strip of width 1. Prove that S can be covered by a strip of width 2.

This sort of thing comes up all the time in olympiad combinatorics and beyond. We have some condition that is local, in the sense that it only affects triples of points, but we want to show that the same condition works for all the points together.

The first step is to be clear about what we are looking for. We are looking for a strip, but we have also been given lots of strips, namely the strips covering each triple of points. So the natural thing to do is to construct the strip we are looking for from the strips we have.

The next observation offered was that we might want to combine two strips of width 1 to get a strip of width 2. This is certainly an idea, but a quick check makes you realise it isn’t a very promising idea. Why? Well unless two strips are parallel, it takes an infinite-width strip to cover their union. Ok, so although that wasn’t useful in itself, it now encourages us to focus on constructing the strip we are looking for from a single one of the strips we are given.

It’s not yet clear exactly how we are going to construct the desired strip from the strip we are given. We should be alive to the possibility that 2 might be a weak bound, and actually $1+\epsilon$ would suffice. Or we might have tried some examples. One thing we might have is a strong example of where a strip of width 2 is necessary, and where we can’t add any points which don’t fit in this strip without breaking the first condition.

Maybe this doesn’t help so much. So perhaps we should think not about how to construct the big strip from the little strip, but instead about how to choose the little strip from the large set we have at our disposal. It is at this stage that I feel it is really important to use all the power at our disposal. In many combinatorics problems we are dealing with a discrete set like [n], where the roles of the labels are interchangeable, and so it can sometimes be difficult to pick out a particular element. For example, it doesn’t matter for induction whether we take [n-1] then add {n}, or start with [2,n] and add {1}. When the construction is geometric, we should use the geometry to make life easier in the construction. For example, if we want to induct, we could add the left-most point in some co-ordinate axes, or some point on the convex hull.

In our particular example, we are still trying to decide which strip to work with. If we consider the triple of points which is in some sense hardest to cover with a strip, this might be a sensible thing to work with. Now we need to decide what it means to be hard to cover a triangle with a strip. We might observe that if the triangle has an altitude of height h, we can certainly cover the triangle with a strip of width h. Further playing around convinces us that the minimum width of a strip required to cover a triangle is given by the shortest altitude. So our intuition that it’s hard to cover a triangle if it is large has been made a bit more concrete. We might also observe that we don’t have much flexibility for a strip if one of the lengths of the triangle is very long. For fixed strip width, as the length of the triangle gets large, the range of angles the strip could sit at (if it is possible at all) gets small.
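The claim that the minimum strip width for a triangle equals its shortest altitude can be checked numerically (my own sketch; the brute force samples strip directions, so the agreement is only up to the sampling resolution):

```python
import math

def strip_width(points, theta):
    """Width of the narrowest strip with unit normal (cos t, sin t)
    covering the points: the extent of projections onto that normal."""
    proj = [x * math.cos(theta) + y * math.sin(theta) for x, y in points]
    return max(proj) - min(proj)

def min_strip_width(points, steps=50000):
    """Brute-force minimum over sampled strip directions in [0, pi)."""
    return min(strip_width(points, math.pi * k / steps) for k in range(steps))

def shortest_altitude(a, b, c):
    """Twice the triangle's area divided by its longest side."""
    area2 = abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    longest = max(math.dist(a, b), math.dist(b, c), math.dist(a, c))
    return area2 / longest

tri = [(0, 0), (4, 0), (1, 2)]
assert abs(min_strip_width(tri) - shortest_altitude(*tri)) < 1e-3
```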

This motivates considering the least convenient triangle as the one with maximal width and maximal height of altitude. We might try both orders, but certainly considering the triangle ABC such that d(A,B) is maximal among the set of points, and d(AB, C) is maximal once A,B are determined, is useful. The fact that we can cover ABC with one of the strips tells us from the above that $d(AB,C)\le 1$, and suddenly we are almost done, as we now have a strong clue for what the strip we are looking for should be. If we take all the points x such that $d(AB,x)\le 1$, we have a strip of width 2, and by construction of C we must have included all points of the set.

The key ingredient here was using what we already had, realising it probably wasn’t going to be helpful to attempt to use *all* of what we already had, then coming up with a sensible way to choose which pre-existing strip to develop.

With this in mind, the next question seems a lot easier:

Q2: Given n points in the plane such that any three lie within some circle of radius 1, prove that *all* the points lie within some circle of radius 1.

Circulating round the room, there were lots of pairs of compasses out and various diagrams of interlocking circles snaking around pieces of A4. But after discussing Q1, suddenly all seems a lot more clear. Similarly to the previous case, we have a lot of circles, and we are looking for a circle. Conveniently, the circle we are looking for is even the same size as the circles we have.

So we need to choose which circle we already have is likely to be most useful. Suppose we have three points very close together. Then we know very little about the circle containing those points – it could be almost anything. By contrast, if we have three points which form a triangle with circumradius equal to 1, then there is in fact no choice about what the covering circle could be. So this is most likely to be useful. So we conjecture that we should consider the triangle with largest circumradius. In fact, this is a problem: if the triangle is obtuse, then the circumradius might actually be greater than 1, but the points still satisfy the condition. In that case, it makes sense to consider the circle with diameter given by the longest side of the triangle.
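This case split — circumcircle for acute triangles, longest-side-diameter circle otherwise — is straightforward to put into code (a sketch of my own):

```python
import math

def circumcircle(a, b, c):
    """Centre and radius of the circle through three non-collinear points."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), math.dist((ux, uy), a)

def covering_circle(a, b, c):
    """Smallest circle covering the triangle: the circumcircle if the
    triangle is acute, else the circle on the longest side as diameter."""
    sides = sorted([(math.dist(b, c), b, c),
                    (math.dist(a, c), a, c),
                    (math.dist(a, b), a, b)])
    L, p, q = sides[-1]
    if L**2 >= sides[0][0]**2 + sides[1][0]**2:   # right or obtuse
        return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2), L / 2
    return circumcircle(a, b, c)

# equilateral triangle: acute, so the circumradius 1/sqrt(3) is returned
_, r = covering_circle((0, 0), (1, 0), (0.5, math.sqrt(3) / 2))
assert abs(r - 1 / math.sqrt(3)) < 1e-9
```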

With that clarification, we now have a collection of circles with radii all $\le 1$, which cover the triangles. We conjecture that the circle with largest radius might cover all the points. This turns out to be correct, and not too difficult to prove. Both these problems are in some sense extremal; here, the tricky part was deciding what to extremise. But all were agreed that with some meta-thinking, we were able to decide fairly accurately which ideas were leading towards progress, and following them led fairly rapidly to this extremal property.

# The Combinatorial Nullstellensatz

I’ve been taking a TCC course this term on Additive Combinatorics, delivered via video link from Bristol by Julia Wolf. At some point once the dust of this term has settled, I might write some things about the ideas of the course I’ve found most interesting, in particular the tools of discrete Fourier analysis to get a hold on some useful combinatorial properties of subsets of $\mathbb{Z}/n\mathbb{Z}$ for example.

For this post, I want to talk instead about a topic that was merely mentioned in passing, the Combinatorial Nullstellensatz. The majority of this post is based on Alon’s original paper, which can be found here, and Chapter 9 of Tao and Vu’s book Additive Combinatorics. My aim is to motivate the theorem, give a proof, introduce one useful application from additive combinatorics, and solve Q6 from IMO 2007 as a direct corollary.

What does Nullstellensatz mean? Roughly speaking, it seems to mean ‘a theorem specifying the zeros’. We will be specifying the zeros of a polynomial. We are comfortable with how the zeros of a complex-valued polynomial of one variable behave. The number of zeros is given precisely by the degree of the polynomial (allowing appropriately for multiplicity). It is generally less clear how we might treat the zeros of a polynomial of many variables. The zero set is likely to be some surface, perhaps of dimension one less than the number of variables. In particular, it no longer really makes sense to talk about whether this set is finite or not. The Combinatorial Nullstellensatz gives us some control over the structure of this set of zeros.

The idea behind the generalisation is to view the Fundamental Theorem of Algebra as a statement not about existence of roots, but rather about (combinatorial) existence of non-roots. That is, given a polynomial P(x) of degree n, for any choice of (n+1) complex numbers, at least one of them is not a root of P. This may look like a very weak statement in this context, where we only expect finitely many roots anyway, but in a multivariate setting it is much more intuitively powerful.

Recall that the degree of a monomial is given by the sum of the exponents of the variables present. So the degree of $4x^2 y^3 z$ is 6. The degree of a polynomial is then given by the largest degree of a monomial in that polynomial. A polynomial $P(x_1,\ldots,x_n)$ over a field F with degree d might have lots of monomial terms of degree d. Suppose one of these monomials is $x_1^{d_1}\ldots x_n^{d_n}$, where $\sum d_i=d$. Then one version of the Combinatorial Nullstellensatz asserts that whenever you take subsets of the base field $S_i\subset F$ with $|S_i|\ge d_i+1$, then there is a point $(x_1,\ldots,x_n)$ with each $x_i\in S_i$ such that $P(x_1,\ldots,x_n)\neq 0$.

In other words, you can’t have a box (ie product of sets) of dimensions $(d_1+1)\times (d_2+1)\times\ldots\times(d_n+1)$ on which the polynomial is zero.
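The finite statement lends itself to a brute-force sanity check. Here is a short Python sketch (the polynomial and the field are my own toy choices, not from the post): $P(x,y)=x^2y+3$ over $\mathbb{F}_7$ has degree 3 with top monomial $x^2y$, so $d_1=2$, $d_2=1$, and the theorem promises that every box $S_1\times S_2$ with $|S_1|\ge 3$, $|S_2|\ge 2$ contains a non-root.

```python
# Brute-force check of the Nullstellensatz on a toy example (illustrative
# choices, not from the post): P(x, y) = x^2*y + 3 over F_7 has top
# monomial x^2 y, so d1 = 2, d2 = 1, and every box S1 x S2 with
# |S1| >= 3, |S2| >= 2 must contain a point where P is non-zero.
from itertools import combinations, product

p = 7

def P(x, y):
    return (x * x * y + 3) % p

field = range(p)
for S1 in combinations(field, 3):
    for S2 in combinations(field, 2):
        assert any(P(x, y) != 0 for x, y in product(S1, S2))
print("every 3 x 2 box contains a non-root of P")
```

Of course this checks only one polynomial; the theorem says the same for any polynomial whose top monomial fits inside the box.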

Unsurprisingly, the proof proceeds by induction on the number of variables. Alon’s result proceeds via a more general theorem giving information about the possibility of writing multinomial polynomials as linear combinations of polynomials in one variable.

We would like to start this induction by fixing the $x_n$ co-ordinate, then viewing P as a polynomial in $x_1,\ldots,x_{n-1}$ only. One problem with this approach is that the largest degree monomials in P are not necessarily still the largest degree monomials once $x_n$ is fixed. So we need to apply a division algorithm argument.

I’m going to skip some steps so as to keep this at a suitable blog post length. The key idea is to apply the division algorithm to P with respect to the simplest polynomial that is zero on all of $S_n$, which we define as:

$g(x_n)=\prod_{s_n\in S_n}(x_n-s_n).$

We can decompose as

$P(x_1,\ldots,x_n)=q_n(x_1,\ldots,x_n)g(x_n)+\sum_{j=0}^{|S_n|-1}r_{n,j}(x_1,\ldots,x_{n-1})x_n^j.$

So now we ask where the term $x_1^{d_1}\ldots x_n^{d_n}$ is coming from, bearing in mind that $d_n<|S_n|$. The lower order terms in g cannot contribute to this, as they cannot be of maximal degree. Also, the term $q_n(\mathbf{x})g(x_n)$ cannot contribute, as any monomial involving the leading term of g has exponent of $x_n$ at least $|S_n|$, which is too large. So the term in question must be coming from $r_{n,d_n}(x_1,\ldots,x_{n-1})x_n^{d_n}$. So now we can apply the induction hypothesis to the polynomial $r_{n,d_n}$ to find $x_1\in S_1,\ldots, x_{n-1}\in S_{n-1}$ such that $r_{n,d_n}(x_1,\ldots,x_{n-1})$ is non-zero. With these values fixed, the remainder is a polynomial in $x_n$ of degree at most $|S_n|-1$ with a non-zero coefficient of $x_n^{d_n}$, so it cannot vanish at all $|S_n|$ points of $S_n$. Hence there is an $x_n\in S_n$ such that

$\sum_{j=0}^{|S_n|-1}r_{n,j}(x_1,\ldots,x_{n-1})x_n^j\neq 0.$

This concludes the proof by induction.
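The one-variable shadow of the division step can be made concrete. The sketch below (my own illustrative toy, doing the bookkeeping in a single variable rather than the full multivariate induction) divides a polynomial P by $g(x)=\prod_{s\in S}(x-s)$ and checks that the remainder has degree less than $|S|$ yet agrees with P everywhere on $S$, which is exactly why we may pass from P to the remainder when hunting for non-roots in $S$.

```python
# One-variable illustration of the division step (a toy example, not the
# multivariate induction): divide P by g(x) = prod_{s in S} (x - s), and
# check the remainder has degree < |S| but agrees with P on S.
# Polynomials are coefficient lists, lowest degree first.

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_divmod(num, den):
    # long division by a monic divisor; returns (quotient, remainder)
    num = num[:]
    q = [0] * (len(num) - len(den) + 1)
    for k in range(len(num) - len(den), -1, -1):
        q[k] = num[k + len(den) - 1]
        for j, dj in enumerate(den):
            num[k + j] -= q[k] * dj
    return q, num[: len(den) - 1]

def poly_eval(c, x):
    return sum(ci * x ** i for i, ci in enumerate(c))

S = [0, 1, 2]
g = [1]
for s in S:                        # g(x) = x(x-1)(x-2)
    g = poly_mul(g, [-s, 1])

P = [5, 0, -3, 2, 1]               # P(x) = x^4 + 2x^3 - 3x^2 + 5
q, r = poly_divmod(P, g)
assert len(r) <= len(S)            # deg r <= |S| - 1
assert all(poly_eval(P, s) == poly_eval(r, s) for s in S)
```

In the actual proof the coefficients of the remainder are themselves polynomials in $x_1,\ldots,x_{n-1}$, but the degree bookkeeping is the same.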

I want to discuss two relatively simple applications. The first is the Cauchy-Davenport Theorem, which one might view as the first non-trivial theorem in additive combinatorics, giving a bound on the size of a sumset.

Theorem (Cauchy-Davenport): Given A, B non-empty subsets of $\mathbb{Z}_p$ for p a prime, then

$|A+B|\geq \min\{p,|A|+|B|-1\}.$

( $A+B:=\{a+b: a\in A,b\in B\}$ )

Note that the result isn’t especially surprising. Providing some sort of ordering to the elements of A and B might be a sensible way to proceed. Certainly if they were sets in $\mathbb{Z}$, this would give a proof immediately.

Proof: Only the case $|A|+|B|\le p$ is interesting: if $|A|+|B|>p$, then for any $c\in\mathbb{Z}_p$ the sets A and $c-B$ are too large not to intersect, so $A+B=\mathbb{Z}_p$. Following Alon’s argument, suppose for contradiction that $|A+B|\le |A|+|B|-2$, and choose a set C with $A+B\subseteq C$ and $|C|=|A|+|B|-2$. Set $f(x,y)=\prod_{c\in C}(x+y-c)$, so $f(a,b)=0$ for all $a\in A,b\in B$.

Then the coefficient of $x^{|A|-1}y^{|B|-1}$ in f is $\binom{|A|+|B|-2}{|A|-1}$, as we have to choose which of the $|A|+|B|-2$ factors in the product supply an x and which supply a y. This is non-zero (in $\mathbb{Z}_p$ recall) since the upper entry $|A|+|B|-2$ is less than p. Applying the Combinatorial Nullstellensatz with $S_1=A$ and $S_2=B$ then produces a point $(a,b)\in A\times B$ with $f(a,b)\neq 0$, a contradiction.
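For a fixed small prime the theorem can be verified exhaustively, which makes a reassuring sanity check on the statement. A quick Python sketch (mine, purely illustrative) over all pairs of non-empty subsets of $\mathbb{Z}_7$:

```python
# Exhaustive check of Cauchy-Davenport for p = 7 (a toy verification,
# not part of the proof): |A+B| >= min(p, |A|+|B|-1) for all non-empty A, B.
from itertools import combinations

p = 7
subsets = [set(c) for r in range(1, p + 1)
           for c in combinations(range(p), r)]
for A in subsets:
    for B in subsets:
        sumset = {(a + b) % p for a in A for b in B}
        assert len(sumset) >= min(p, len(A) + len(B) - 1)
print("Cauchy-Davenport verified for p =", p)
```

Note that primality matters: in $\mathbb{Z}_6$, taking A = B = {0, 3} gives a sumset of size 2, not 3.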

My second example is from the IMO in Vietnam which I attended. I spent a lot of time thinking about this problem, but made no progress.

IMO 2007 Question 6: Let n be a positive integer. Consider

$S=\{(x,y,z) | x,y,z\in \{0,1,\ldots,n\}, x+y+z>0\}$

as a set of $(n+1)^3-1$ points in 3D space. Determine the smallest number of planes, the union of which contains S but does not include (0,0,0).

Answer: 3n. Consider the planes $x+y+z=k$ for k varying between 1 and 3n: together they cover S, and none of them passes through the origin. The aim is to prove that you cannot do it with fewer.

To prove this, suppose for contradiction that we could make do with fewer planes, say $k<3n$ of them. We write the equation of the i-th plane as

$a_ix+b_iy+c_iz-d_i=0.$

Note that each $d_i$ is non-zero, as (0,0,0) must not lie on any of the planes. Then take the product of all these degree one polynomials together and subtract a multiple of

$\prod_{i=1}^n (x-i)(y-i)(z-i),$

with the multiple chosen so the resulting polynomial vanishes at (0,0,0). (This constant must be non-zero, as it has to cancel the non-zero product of the $d_i$, and the subtracted product is itself non-zero at the origin.) The resulting polynomial has degree 3n by construction (the product of planes has degree $k<3n$), and $x^ny^nz^n$ has a non-zero coefficient, but the polynomial is zero on the whole grid $\{0,1,\ldots,n\}^3$: at the origin by choice of the constant, and elsewhere because both terms vanish. Taking $S_1=S_2=S_3=\{0,1,\ldots,n\}$, each of size $n+1$, this contradicts the Combinatorial Nullstellensatz.
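The construction half of the answer is easy to check mechanically for small n. A short Python sketch (mine, for illustration only) confirms that S has $(n+1)^3-1$ points and that the 3n planes $x+y+z=k$, $1\le k\le 3n$, cover S while avoiding the origin:

```python
# Sanity check of the construction: the planes x+y+z = k, k = 1..3n,
# cover S = {0,...,n}^3 \ {(0,0,0)} and none passes through the origin.
from itertools import product

for n in range(1, 5):
    S = [pt for pt in product(range(n + 1), repeat=3) if sum(pt) > 0]
    assert len(S) == (n + 1) ** 3 - 1
    # every point of S lies on some plane x+y+z=k with 1 <= k <= 3n
    assert all(1 <= sum(pt) <= 3 * n for pt in S)
    # none of these planes contains (0,0,0), since its coordinate sum is 0
    assert 0 not in range(1, 3 * n + 1)
print("construction verified for n = 1, ..., 4")
```

The hard direction, that fewer than 3n planes never suffice, is of course exactly what the Nullstellensatz argument above supplies.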