BMO1 2017 – Questions 5 and 6

The first round of the British Mathematical Olympiad was sat yesterday. The questions can be found here and video solutions here. My comments on the first four questions are in the previous post.

Overall, I didn’t think any of the questions on this paper were unusually difficult by the standard of BMO1, but I found everything slightly more time-consuming than typical. I thought Question 5 was a great problem, though I tried lots of things unsuccessfully first, and so I want to discuss it in slightly more technical language. For Question 6 I made a decisive mistake, which I’ll explain, and which cost a lot of time. But in general, my point is that the back end of the paper was a little fiddlier than normal and required longer written solutions, and perhaps many students had less time than expected to attack these questions anyway, having been slowed down earlier in the paper.

Question Five

As I said before, I thought this question was quite challenging. Not because the solution is particularly exotic or complicated, but because there were so many possible things that might have worked. In my opinion it would not have been out of place at the start of an IMO paper, because it’s perfectly possible to have enough good ideas that eliminating the ones that don’t work takes an hour, or hours. Even though it slightly spoils the flow of the solution, I’m particularly trying to emphasise the tangents that didn’t work, mostly for reassurance to anyone who spent a long time struggling.

I was thinking about this question in terms of a 2Nx2N board, where N is even, and for the given question equal to 100. I spent a while thinking that the bound was 8N-4, corresponding to taking the middle two rows and the middle two columns, counting the 2×2 square which is their intersection only once. If you think of a comb as a ‘handle’ of 1xN cells, with an extra N/2 alternating cells (say, ‘teeth’) bolted on, then it’s clear this construction works because there’s never space to fit in a handle, let alone the teeth.

I couldn’t prove that this was optimal though. A standard way to prove that a given bound K is optimal would be to produce a tiling of the board with K combs, where every cell is included in exactly one comb. But this is clearly not possible in this situation, since the number of cells in a comb (which is 150) does not divide the total number of cells on the board.

Indeed, one can make the general observation that if you take a comb and a copy of the comb rotated by 180 degrees, the teeth of the second comb can mesh perfectly with the teeth of the first comb to generate a 3xN unit. I wasted a moderate amount of time pursuing this route.

[Note, it will be obvious in a minute why I’m writing ‘shaded’ instead of ‘coloured’.]

But in motivating the construction, I was merely trying to shade cells so that they intersected every possible 1xN handle, and maybe I could prove that it was optimal for this. In fact, I can’t prove it’s optimal because it isn’t optimal – indeed it’s clear that a handle through one of the middle rows intersects plenty of shaded cells, not just one. However, with this smaller problem in mind, it didn’t take long to come up with an alternative proposal, namely splitting the board into equal quarters, and shading the diagonals of each quarter, as shown.

It seems clear that you can’t fit in an unshaded 1xN handle, and in any sensible tiling of the board by 1xN handles each handle contains exactly one shaded cell, so this shading (with 4N shaded cells) is optimal for handles. But is it optimal for a comb itself?

Consider a shading which works, so that all combs include a shaded cell. It’s clear that a comb is contained within a 2xN block, and in such a 2xN block, there are four possible combs, as shown.

You need to cover all these combs with some shading somewhere. But if you put the shaded cell on a tooth of comb A, then you haven’t covered comb B. And if you put the shaded cell on the handle of comb A, then you haven’t covered one of comb C and comb D. You can phrase this via a colouring argument too. If you use four colours with period 2×2, as shown

then any comb involves exactly three colours, and so one of them misses out the colour of the shaded cell. (I hope it’s clear what I mean, even with the confusing distinction between ‘shaded’ and ‘coloured’ cells.)

Certainly we have shown that any 2xN block must include at least two shaded cells. And that’s pretty much it. We have a tiling with 2N copies of a 2xN block, each with at least two shaded cells, which gives at least 4N shaded cells overall.
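
As a sanity check on this argument (certainly not needed in a solution), the N=2 case is small enough to brute-force, assuming the N=2 comb is a 1x2 handle with a single tooth, ie an L-tromino, in any orientation. A Python sketch:

from itertools import combinations

def orientations(shape):
    # all rotations and reflections of a polyomino, normalised to the origin
    shapes, cur = set(), list(shape)
    for _ in range(2):
        for _ in range(4):
            cur = [(c, -r) for r, c in cur]
            mr, mc = min(r for r, _ in cur), min(c for _, c in cur)
            shapes.add(frozenset((r - mr, c - mc) for r, c in cur))
        cur = [(r, -c) for r, c in cur]
    return shapes

comb = [(0, 0), (0, 1), (1, 1)]    # assumed N=2 comb: 1x2 handle plus one tooth
placements = [{(r + dr, c + dc) for dr, dc in s}
              for s in orientations(comb)
              for r in range(4) for c in range(4)
              if all(0 <= r + dr < 4 and 0 <= c + dc < 4 for dr, dc in s)]
cells = [(r, c) for r in range(4) for c in range(4)]

def min_shading():
    for size in range(17):
        for shaded in map(set, combinations(cells, size)):
            if all(p & shaded for p in placements):
                return size

print(min_shading())   # 8, ie 4N for N=2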

Looking back on the method, we can identify another way to waste time. Tiling a board (eg a chessboard with dominoes) is a classic motif, which often relies on clever colouring. So it’s perhaps lucky that I didn’t spot this colouring observation earlier, because the argument described really does use the local properties of how the combs denoted A-D overlap. An attempt at a global argument might start as follows: we can identify 2N combs which don’t use colour 1, and tile this subset of the grid with them, so we need to shade at least 2N cells from colours {2,3,4}. Similarly for sets of colours {1,3,4}, {1,2,4}, and {1,2,3}. But if we reduce the problem to this, then using roughly 2N/3 of each colour fits this global requirement, leading to a bound of 8N/3, which isn’t strong enough. [1]

Question Six

A word of warning. Sometimes it’s useful to generalise in problems. In Q5, I was thinking in terms of N, and the only property of N I used was that it’s even. In Q4, we ignored 2017 and came back to it at the end, using only the fact that it’s odd. By contrast, in Q2, the values did turn out to be important for matching the proof bounds with a construction.

You have to guess whether 300 is important or not here. Let’s see.

I have a natural first question to ask myself about the setup, but some notation is useful. Let a_1,a_2,\ldots,a_{300} be the ordering of the cards. We require that \frac{a_1+\ldots+a_n}{n} is an integer for every 1\le n\le 300. Maybe the values of these integers will be important, so hold that thought, but for now, replace with the divisibility statement that n | a_1+\ldots+a_n.

I don’t think it’s worth playing with small examples until I have a better idea whether the answer is 5 or 295. So the natural first question is: “what does it mean to have (a_1,\ldots,a_{n-1}) such that you can’t pick a suitable a_n?”

It means that there is no integer k in \{1,\ldots,300\}\backslash\{a_1,\ldots,a_{n-1}\} such that n\,\big|\,(a_1+\ldots+a_{n-1})+k, which for now we write as

k\equiv -(a_1+\ldots+a_{n-1})\,\mod n.

Consider the congruence class of -(a_1+\ldots+a_{n-1}) modulo n. There are either \lfloor \frac{300}{n}\rfloor or \lceil \frac{300}{n}\rceil integers under consideration in this congruence class. If no such k exists, then all of the relevant integers in this congruence class must appear amongst \{a_1,\ldots,a_{n-1}\}. At this stage, we’re trying to get a feel for when this could happen, so lower bounds on n are most relevant. Therefore, if we get stuck when trying to find a_n, we have

\lfloor \frac{300}{n} \rfloor\text{ or }\lceil \frac{300}{n}\rceil \le n-1, (*)

which is summarised more succinctly as

\lfloor \frac{300}{n} \rfloor \le n-1. (**)

[Note, with this sort of bounding argument, I find it helpful to add intermediate steps like (*) in rough. The chance of getting the wrong direction, or the wrong choice of \pm 1 is quite high here. Of course, you don’t need to include the middle step in a final write-up.]

We can check that (**) is false when n\le 17 and true when n\ge 18. Indeed, both versions of (*) are true when n\ge 18.
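
If you don’t trust the arithmetic, this is a one-line machine check:

assert all((300 // n <= n - 1) == (n >= 18) for n in range(2, 301))   # (**) holds precisely for n >= 18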

So we know the minimum failure length is at least 17. But is there a failing sequence of length 17? At a meta-level, it feels like there should be. That was a very natural bounding argument for 17 (which recall corresponds to n=18), and it’s easy to believe that might be part of an official solution. If we achieve equality throughout the argument, that’s most of the way to a construction as well. It won’t be so easy to turn this argument into a construction for n\ge 19 because there won’t be equality anywhere.

We have to hope there is a construction for n=18. What follows is a description of a process to derive (or fail to derive) such a construction. In a solution, one would not need to give this backstory.

Anyway, in such a construction, let \alpha\in\{1,2,\ldots,18\} describe the congruence class modulo 18 which is exhausted by \{a_1,\ldots,a_{17}\}. I’m going to hope that \alpha=18, because then the calculations will be easier since everything’s a multiple of 18. We haven’t yet used the fact that, to get stuck, we need \alpha\equiv-(a_1+\ldots+a_{17}) modulo 18. We definitely have to use that. There are 16 multiples of 18 (ie relevant integers in the congruence class), so exactly one of the terms so far, say a_j, is not a multiple of 18. But then

0 \equiv 0+\ldots+0+a_j+0+\ldots+0 \mod 18,

which can’t happen. With a bit of experimentation, we find a similar problem when trying to make a construction using any of the congruence classes with 16 elements, namely \alpha\in \{13,14,\ldots,18\}.

So we have to tackle a different class. If \alpha\le 12 then our sequence must be

\alpha,18+\alpha,2\times 18 +\alpha, \ldots, 16\times 18 + \alpha,

in some order. In fact, let’s add extra notation, so our sequence is

(a_1,\ldots,a_{17}) = (18\lambda_1+ \alpha,\ldots,18\lambda_{17}+\alpha),

where (\lambda_1,\ldots,\lambda_{17}) is a permutation of \{0,\ldots,16\}. And so we require

k \,\big|\, 18(\lambda_1+\ldots+\lambda_k) + k\alpha, (%)

for 1\le k\le 17. But clearly we can lop off that k\alpha, and could ignore the 18. Can we find a permutation \lambda such that

k \,\big|\, \lambda_1+\ldots+\lambda_k\,?

This was where I wasted a long time. I played around with lots of examples and kept getting stuck. Building it up one term at a time, I would typically get stuck around k=9,10. I also observed that in all the attempted constructions, the values of \frac{\lambda_1+\ldots+\lambda_k}{k} were around 8 and 9 when I got stuck.

I became convinced this subproblem wasn’t possible, and decided that would be enough to show that n=18 wasn’t a possible failure length. I first tried to attack the subproblem via a parity argument (how must the \lambda_is alternate odd/even to ensure all the even-indexed partial sums are even?), but parity turned out not to be the obstruction. Then I came up with a valid argument. We must have

\lambda_1+\ldots+\lambda_{17}=136= 16\times 8 + 8\quad\text{and}\quad 16\,\big|\,\lambda_1+\ldots+\lambda_{16},

which means \lambda_1+\ldots+\lambda_{16} must be 128 = 15×8 + 8, ie \lambda_{17}=8. But then we also have 15\,\big|\, \lambda_1+\ldots+\lambda_{15}, which forces \lambda_{16}=8 also. Which isn’t possible.
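
Since the search space is tiny, the impossibility of this subproblem can also be confirmed by a short backtracking search (a sketch; certainly not needed given the argument above):

def extendable(prefix, used):
    k = len(prefix) + 1
    if k > 17:
        return True                      # found a full valid permutation
    s = sum(prefix)
    return any(extendable(prefix + [lam], used | {lam})
               for lam in range(17)
               if lam not in used and (s + lam) % k == 0)

print(extendable([], set()))   # False: no such permutation of {0,...,16}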

As if this hadn’t wasted enough time, I then tried to come up with a construction for n=19, which has lots more variables; this took a lot more time, and seemed to suffer from similar problems, just in a more complicated way. So I became convinced I must have made a mistake, because I was being forced down routes that were way too complicated for a 3.5 hour exam. Then I found it…

What did I do wrong? I’ll just say directly. I threw away the 18 after (%). This made the statement stronger. (And in fact false.) Suppose instead I’d thrown away a factor of 9 (or no factors at all, but it’s the residual 2 that’s important). Then I would be trying to solve

k\,\big|\,2(\lambda_1+\ldots+\lambda_k).

And now if you experiment, you will notice that taking \lambda_1=0,\lambda_2=1,\lambda_3=2,\ldots seems to work fine. And of course, we can confirm this, using the triangle number formula for the second time in the paper!
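
Explicitly, taking \lambda_k=k-1 for each k gives

2(\lambda_1+\ldots+\lambda_k) = 2\left(0+1+\ldots+(k-1)\right) = k(k-1),

which is divisible by k for every k, as required.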

This had wasted a lot of time, but once that thought is present, we’re done, because we can go straight back and exhibit the sequence

(a_1,\ldots,a_{17}) = (1, 18+1,2\times 18 +1,\ldots, 16\times 18 +1).

Then the sum so far is congruent to -1 modulo 18, but we have exhausted all the available integers which would allow the sum of the first 18 terms to be a multiple of 18. This confirms that the answer to the question as stated is 17.
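
This is also quick to sanity-check by machine:

seq = [18 * j + 1 for j in range(17)]                     # 1, 19, ..., 289
assert all(sum(seq[:n]) % n == 0 for n in range(1, 18))   # all partial averages integral
rest = set(range(1, 301)) - set(seq)
assert all((sum(seq) + k) % 18 != 0 for k in rest)        # no 18th card can work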

At the start, I said that we should be cautious about generalising. In the end, this was wise advice. We definitely used the fact that 18 was even in the stage I over-reduced the first time. We also used the fact that there was at least one value of \alpha with an ‘extra’ member of the congruence class. So I’m pretty sure this proof wouldn’t have worked with 288 = 16×18 cards.

Footnotes

[1] – If shading were a weighted (or continuous, or whatever you prefer) property, ie each cell has a quantity of shading given by a non-negative real number, and we merely demand that the total shading per comb is at least one, then the bound 8N/3 is in fact correct for the total shading. We could look at a 2xN block, and give 1/3 shading to one cell of each colour in the block. Alternatively, we could be very straightforward and apply \frac{2}{3N} shading to every cell in the grid. The fact that shading has to be (in this language) zero or one imposes meaningful extra constraints, which involve the shape of the comb.


BMO1 2017 – Questions 1-4

The first round of the British Mathematical Olympiad was sat yesterday. The questions can be found here. I recorded some thoughts on the questions while I was in Cyprus, hence the nice Mediterranean sunset above. I hope this might be useful to current or future contestants, as a supplement to the concise official solutions available. It goes without saying that while these commentaries may be interesting at a general level, they will be much more educational to students who have at least digested and played around with the questions, so consider trying the paper first. Video solutions are available here. These have more in common with this blog post than the official solutions, though inevitably some of the methods are slightly different, and the written word has some merits and demerits over the spoken word for clarity and brevity.

The copyright for these questions lies with BMOS, and they are reproduced here with permission. Any errors or omissions are obviously my own.

I found the paper overall quite a bit harder than in recent years, or at least harder to finish quickly. I’ve therefore postponed discussion of the final two problems to a second post, to follow shortly.

Question One

A recurring theme of Q1 from BMO1 in recent years has been: “it’s possible to do this problem by a long, and extremely careful direct calculation, but additional insight into the setup makes life substantially easier.”

This is the best example yet. It really is possible to evaluate Helen’s sum and Phil’s sum, and compare them directly. But it’s easy to make a mistake in recording all the remainders when the divisor is small, and it’s easy to make a mistake in summation when the divisor is large, and so it really is better to have a think for alternative approaches. Making a mistake in a very calculation-heavy approach is generally penalised heavily. And this makes sense intellectually, since the only way for someone to fix an erroneous calculation is to repeat it themselves, whereas small conceptual or calculation errors in a less onerous solution are more easily isolated and fixed by a reader. Of course, it also makes sense to discourage such attempts, which aren’t really related to enriching mathematics, which is the whole point of the exercise!

Considering small divisors (or even smaller versions of 365 and 366) is sometimes helpful, but here I think a ‘typical’ divisor is more useful. But first, some notation will make any informal observation much easier to turn into a formal statement. Corresponding to Helen and Phil, let h(n) be the remainder when 365 is divided by n, and p(n) the remainder when 366 is divided by n. I would urge students to avoid the use of ‘mod’ in this question, partly because working modulo many different bases is annoying notationally, partly because the sum is not taken modulo anything, and partly because the temptation to use mod incorrectly as an operator is huge here [1].

Anyway, a typical value might be n=68, and we observe that 68 x 5 + 25 = 365, and so h(68)=25 and p(68)=26. Indeed, for most values of n, we will have p(n)=h(n)+1. This is useful because

p(1)+p(2)+\ldots+p(366) - \left(h(1)+h(2)+\ldots+h(365)\right)

= \left(p(1)-h(1)\right) + \ldots+\left(p(365)-h(365)\right) + p(366),

and now we know that most of the bracketed terms are equal to one. We just need to handle the rest. The only time it doesn’t hold that p(n)=h(n)+1 is when 366 is actually a multiple of n. In this case, p(n)=0 and h(n)=n-1. We know that 366 = 2 x 3 x 61, and so its divisors are 1, 2, 3, 6, 61, 122, 183.

Then, in the big expression above, seven of the 365 bracketed terms are not equal to 1. So 358 of them are equal to one. The remaining ones are equal to 0, -1, -2, -5, -60, -121, -182 respectively. There are shortcuts to calculate the sum of these, but it’s probably safer to do it by hand, obtaining -371. Overall, since p(366)=0, we have

p(1)+p(2)+\ldots+p(366) - \left(h(1)+h(2)+\ldots+h(365)\right)

= -371 + 358 + 0 = -13.

So, possibly counter-intuitively, Helen has the larger sum, with difference 13, and we didn’t have to do a giant calculation…
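
Of course, the giant calculation takes a computer no time at all, which makes for a reassuring check:

helen = sum(365 % n for n in range(1, 366))
phil = sum(366 % n for n in range(1, 367))
print(phil - helen)   # -13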

Question Two

Suppose each person chooses which days to go swimming ‘at random’, without worrying about how to define this. Is this likely to generate a maximum or minimum value of n? I hope it’s intuitively clear that this probably won’t generate an extreme value. By picking at random we are throwing away lots of opportunity to force valuable overlaps or non-overlaps. In other words, we should start thinking about ways to set up the swimming itinerary with lots of symmetry and structure, and probably we’ll eventually get a maximum or a minimum. At a more general level, with a problem like this, one can start playing around with proof methods immediately, or one can start by constructing lots of symmetric and extreme-looking examples, and see what happens. I favour the latter approach, at least initially. You have to trust that at least one of the extreme examples will be guess-able.

The most obvious extreme example is that everyone swims on the first 75 days, and no-one swims on the final 25 days. This leads to n=75. But we’re clearly ‘wasting’ opportunities in both directions, because there are never exactly five people swimming. I tried a few more things, and found myself simultaneously attacking maximum and minimum, which is clearly bad, so focused on minimum. Just as a starting point, let’s aim for something small, say n=4. The obstacle is that if you demand at most four swimmers on 96 days, then even with six swimmers on the remaining four days, you don’t end up with enough swimming having taken place!

Maybe you move straight from this observation to a proof, or maybe you move straight to a construction. Either way, I think it’s worth saying that the proof and the construction come together. My construction is that everyone swims on the first 25 days, then on days 26-50 everyone except A and B swims, on days 51-75 everyone except C and D swims, and on days 76-100 everyone except E and F swims. This exactly adds up. And if you went for the proof first, you might have argued that the total number of swim days is 6×75 = 450, but is at most 6n + 4(100-n). This leads immediately to n\ge 25, and I just gave the construction. Note that if you came from this proof first, you can find the construction because your proof shows that to be exact you need 25 days with six swimmers, and 75 days with four swimmers, and it’s natural to try to make this split evenly. Anyway, this clears up the minimum.

[Less experienced contestants might wonder why I was worried about generating a construction despite having a proof. Remember we are trying to find the minimum. I could equally have a proof for n\ge 10, which would be totally valid. But this wouldn’t show that the minimum was n=10, because that isn’t in fact possible (as we’ve seen); hence it’s the construction that confirms that n=25 is the true minimum.]

It’s tempting to go back to the drawing board for the maximum, but it’s always worth checking whether you can directly adjust the proof you’ve already given. And here you can! We argued that

450\le 6n + 4(100-n)

to prove the minimum. But equally, we know that on the n days we have at least five swimmers, and on the remaining days, we have between zero and four swimmers, so

450 \ge 5n + 0\times (100-n), (*)

which gives n\le 90. If we have a construction that attains this bound then we are done. Why have I phrased (*) with the slightly childish multiple of zero? Because it’s a reminder that for a construction to attain this bound, we really do need the 90 days to have exactly five swimmers, and the remaining ten days to have no swimmers. So it’s clear what to do. Split the first 90 days into six groups of 15 days. One swimmer skips each group. No-one swims in the final ten days, perhaps because of a jellyfish infestation. So we’re done, and 25\le n\le 90.
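
Both constructions are easy to verify mechanically. A sketch, with swimmers 0-5, days 0-99, and n counting the days on which at least five people swim:

def n_value(skips):
    # skips[i] is the set of days swimmer i misses
    assert all(len(s) == 25 for s in skips)    # so everyone swims 75 days
    return sum(1 for d in range(100)
               if sum(d not in s for s in skips) >= 5)

min_skips = ([set(range(25, 50))] * 2 + [set(range(50, 75))] * 2
             + [set(range(75, 100))] * 2)
max_skips = [set(range(15 * i, 15 * i + 15)) | set(range(90, 100))
             for i in range(6)]
print(n_value(min_skips), n_value(max_skips))   # 25 90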

At a general level, it’s worth noting that in the story presented, we found an example for the minimum which we turned into a proof, and then a proof for the maximum, which we then analysed to produce a construction.

Note that similar bounding arguments would apply if we fiddled with the numbers 5, 75 and 100. But constructions matching the bounds might not then be possible because the splits wouldn’t work so nicely. This would make everything more complicated, but probably not more interesting.

Question Three

It’s understandable that lots of students attempting this paper might feel ill-at-ease with conventional Euclidean geometry problems. A good first rule of thumb here, as in many settings, is “don’t panic!”, and a more specific second rule of thumb is “even if you think you can calculate, try to find geometric insight first.”

Here, it really does look like you can calculate. A configuration based on a given isosceles triangle and a length condition and a perpendicular line is open to several coordinate approaches, and certainly some sensible trigonometry. It’s also very open to organised labelling of the diagram. You have three equal lengths, and a right-angle, as shown.

The key step is this. Drop the perpendicular from A to BC, and call its foot D. That alone really is the key step, as it reduces both parts of the question to an easy comparison. It’s clear that the line AD splits the triangle into two congruent parts, and thus into equal areas and perimeters. So it is enough to show that triangle BMN has the same area as triangle ABD, and that their outer-perimeters (ie the parts of their perimeters which are also part of the perimeter of ABC) are the same.

But they’re congruent, so both of these statements are true, and the problem is solved.

My solution could be as short as two or three lines, so for the purposes of this post all that remains is to justify why you might think of the key step. Here are a few possible entry routes:

  • You might notice that line AD induces the required property for triangle ABD.
  • You might try to find a triangle congruent to BMN, and come up with D that way.
  • There’s already a perpendicular in the question so experimenting with another one is natural, especially since the perpendicular from A has straightforward properties.
  • Angle AMN is a right angle, and so constructing D gives a cyclic quadrilateral. We didn’t use that directly in the proof above, but constructing cyclic quadrilaterals is usually a good idea.
  • If you were trying a calculation approach, you probably introduced the length AD, or at least the midpoint D as an intermediate step.

On the video, Mary Teresa proposes a number of elegant synthetic solutions with a few more steps. You might find it a useful exercise to try to come up with some motivating reasons like the bullet points above to justify her suggestion to reflect A in M as a first step.

Question Four


I wasn’t paying enough attention initially, and I calculated a_2=0\text{ or }2. This made life much much more complicated. As with IMO 2017 Q1, if trying to deduce general behaviour from small examples, it’s essential to calculate the small examples correctly!

Once you engage your brain properly, you find that a_2=0 \text{ or }3, and of course a_2=0 is not allowed, since it must be positive. So a_2=3, and a similar calculation suggests a_3=1\text{ or }6. It’s clear that the set of values for a_{k+1} depends only on a_k, so if you take a_3=1, then you’re back to the situation you started with at the beginning. If you choose to continue the exploration with a_3=6, you will find a_4=2\text{ or }10, at which point you must be triggered by the possibility that triangle numbers play a role here.

As so often with a play-around with small values, you need to turn a useful observation into a concrete statement, which could then be applied to the problem statement. It looks like in any legal sequence, every term will be a triangle number, so we only need to clarify which triangle number. An example of a suitable statement might be:

Claim: If a_n=T_k=\frac{k(k+1)}{2}, the k-th triangle number, then a_{n+1}=T_{k-1}\text{ or }T_{k+1}.

There are three stages. 1) Checking the claim is consistent with the examples; 2) checking the claim is maximally relevant; 3) proving it. In this case, proving it is the easiest bit. It’s a quick exercise, and I’m omitting it. Of course, we can’t prove any statement which isn’t true, and here we need to make a quick adjustment to account for the case k=1, for which we are forced to take a_{n+1}=T_{k+1}.

The second stage really concerns the question “but what if a_n\ne T_k?” While there are deductions one could make, the key is that if a_1 is a triangle number, the claim we’ve just made shows that a_n is always a triangle number, so this question is irrelevant. Indeed the claim further shows that a_{2017}\le T_{2017}, and also that a_{2017}=T_k for some odd value of k. To be fully rigorous you should probably describe a sequence which attains each odd value of k, but this is really an exercise in notation [2], and it’s very obvious they are all attainable.
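
Taking the claim as given, counting the attainable indices k (which move by ±1 each step, with k=1 forced upwards) is a tiny computation:

reach = {1}
for _ in range(2016):
    reach = {k + 1 for k in reach} | {k - 1 for k in reach if k > 1}
print(len(reach), max(reach))   # 1009 2017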

In any case, the set of possible values is \{T_1,T_3,\ldots,T_{2017}\}, which has size 1009.

Final two questions

These are discussed in a subsequent post.

Footnotes

[1] – mod n is not an operator, meaning you shouldn’t think of it as ‘sending integers to other integers’, or ‘taking any integer, to an integer in {0,1,…,n-1}’. Statements like 19 mod 5 = 4 are useful at the very start of an introduction to modular arithmetic, but why choose 4? Sometimes it’s more useful to consider -1 instead, and we want statements like a^p\equiv a modulo p to make sense even when a\ge p. 19 = 4 modulo 5 doesn’t place any greater emphasis on the 4 than the 19. This makes it more like a conventional equals sign, which is of course appropriate.

[2] – Taking a_n=T_n for 1\le n\le k, and thereafter a_n=T_k if n is odd and a_n=T_{k+1} if n is even, will certainly work for each odd k, as will many other examples, some perhaps easier to describe than this one, though make sure you don’t accidentally try to use T_0!

Characterising fixed points in geometry problems

There’s a risk that this blog is going to become entirely devoted to Euclidean geometry, but for now I’ll take that risk. I saw the following question on a recent olympiad in Germany, and I enjoyed it as a problem, and set it on a training sheet for discussion with the ten British students currently in contention for our 2017 IMO team.

Given a triangle ABC for which AB\ne AC. Prove there exists a point D\ne A on the circumcircle satisfying the following property: for any points M,N outside the circumcircle on rays AB, AC respectively, satisfying BM=CN, the circumcircle of AMN passes through D.

Proving the existence of a fixed point/line/circle which has a common property with respect to some other variable points/lines/circles is a common style of problem. There are a couple of alternative approaches, but mostly what makes this style of problem enjoyable is the challenge of characterising what the fixed point should be. Sometimes an accurate diagram will give us everything we need, but sometimes we need to be clever, and I want to discuss a few general techniques through the context of this particular question. I don’t want to make another apologia for geometry as in the previous post, but if you’re looking for the ‘aha moment’, it’ll probably come from settling on the right characterisation.

At this point, if you want to enjoy the challenge of the question yourself, don’t read on!

Reverse reconstruction via likely proof method

At some point, once we’ve characterised D in terms of ABC, we’ll have to prove it lies on the circumcircle of any AMN. What properties do we need it to have? Well certainly we need the angle relation BDC = A, but because MDAN will be cyclic too, we also need the angle relation MDN = A. After subtracting, we require angles MDB = NDC.

Depending on your configuration knowledge, this is all quite suggestive. At the very least, when you have equal angles and equal lengths, you might speculate that the corresponding triangles are congruent. Here that would imply BD=CD, which characterises D as lying on the perpendicular bisector of BC. D is also on the circumcircle, so in fact it’s also on the angle bisector of BAC, here the external angle bisector. This is a very common configuration (normally using the internal bisector) in this level of problem, and if you see this coming up without prompting, it suggests you’re doing something right.

So that’s the conjecture for D. And we came up with the conjecture based on a likely proof strategy, so to prove it, we really just need to reverse the steps of the previous two paragraphs. We now know BD=CD. We also know angles ABD = ACD, so taking the complementary angles (ie the obtuse bit in the diagram) we have angles DBM = DCN, so we indeed have congruent triangles. So we can read off angles MDB = NDC just as in our motivation, and recover that MDAN is cyclic.

Whatever other methods there are to characterise point D (to follow), all methods will probably conclude with an argument like the one in this previous paragraph, to demonstrate that D does have the required property.
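
Before writing anything up, a characterisation like this is also easy to test numerically, which is good for morale. A sketch; the coordinates are an arbitrary example, and D is taken to be the midpoint of arc BC on the same side as A, ie on the external bisector:

def circumcentre(a, b, c):
    # standard perpendicular-bisector formula, with points as complex numbers
    (ax, ay), (bx, by), (cx, cy) = (a.real, a.imag), (b.real, b.imag), (c.real, c.imag)
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay) + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx) + (cx*cx + cy*cy) * (bx - ax)) / d
    return complex(ux, uy)

A, B, C = 0 + 3j, -1 + 0j, 2 + 0j                  # any triangle with AB != AC
O = circumcentre(A, B, C)
R = abs(A - O)
x = (B.real + C.real) / 2                          # perpendicular bisector of BC
D = complex(x, O.imag + (R * R - (x - O.real) ** 2) ** 0.5)   # arc midpoint nearer A

for s in [1.5, 2.0, 3.7]:
    M = A + s * (B - A)                            # on ray AB, beyond B
    u = 1 + (s - 1) * abs(B - A) / abs(C - A)      # chosen so that BM = CN
    N = A + u * (C - A)
    O2 = circumcentre(A, M, N)
    print(abs(abs(D - O2) - abs(A - O2)))          # ~0 up to float error: D on circle AMN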

Limits

We have one degree of freedom in choosing M and N. Remember that initially we don’t know what the target point D is. If we can’t see it immediately from drawing a diagram corresponding to general M and N, it’s worth checking some special cases. What special cases might be most relevant depends entirely on the given problem. The two I’m going to mention here both correspond to some limiting configuration. The second of these is probably more straightforward, and was my route to determining D. The first was proposed by one of my students.

First, we conjecture that maybe the condition that M and N lie outside the circumcircle isn’t especially important, but has been added to prevent candidates worrying about diagram dependency. The conclusion might well hold without this extra stipulation. Remember at this stage we’re still just trying to characterise D, so even if we have to break the rules to find it, this won’t damage the solution, since we won’t be including our method for finding D in our written-up solution!

Anyway, WLOG AC < AB. If we take N very close to A, then the distances BM and MA are b and c-b respectively. The circumcircle of AMN is almost tangent to line AC. At this point we stop talking about ‘very close’ and ‘almost tangent’ and just assume that N=A, and so the circle AMN really is the circle through M, tangent to AC at A. We need to establish where this intersects the circumcircle for a second time.

To be clear, I found what follows moderately tricky, and this argument took a while to find and was not my first attempt at all. First we do some straightforward angle-chasing, writing A,B,C for the measures of the angles in triangle ABC. Then the angle BDC is also A and angle BDA is 180-C. We also have the tangency relation from which the alternate segment theorem gives angle MDA = A. Then BDM = BDA – MDA = 180 – C – A = B. So we know the lengths and angles in the configuration BDAM.

At this point, I had to use trigonometry. There were a couple of more complicated options, but the following works. In triangle BDM, a length b is subtended by angle B, as is the case for the original triangle ABC. By the extended sine rule, BDM then has the same circumradius as ABC. But now the length BD is subtended by angle DMB in one of these circumcircles, and by DAB in the other. Therefore these angles are either equal or supplementary (in the sense that they sum to 180). Clearly it must be the latter, from which we obtain that angles DMA = MAD = 90 – A/2. In other words, D lies on the external angle bisector of A, which is the characterisation we want.

Again to clarify, I don’t think this was a particularly easy or particularly natural argument for this exact problem, but it definitely works, and the idea of getting a circle tangent to a line as a limit when the points of intersection converge is a useful one. As ever, when an argument uses the sine rule, you can turn it into a synthetic argument with enough extra points, but of the options I can currently think of, I think this trig is the cleanest.

My original construction was this. Let M and N be very, very far down the rays. This means triangle AMN is large and approximately isosceles. This means that the line joining A to the circumcentre of AMN is almost the internal angle bisector of MAN, which is, of course, also the angle bisector of BAC. Also, because triangle AMN is very large, its circumcircle looks, locally, like a line, and has to be perpendicular to the circumradius at A. In other words, the circumcircle of AMN is, near A, approximately the line perpendicular to the internal angle bisector of BAC, ie the external angle bisector of BAC. My ‘aha moment’ factor on this problem was therefore quite high.

Direct arguments

A direct argument for this problem might consider two pairs of points (M,N) and (M’,N’), and show directly that the circumcircles of ABC, AMN and AM’N’ concur at a second point, ie are coaxal. It seems unlikely to me that an argument along these lines wouldn’t involve some characterisation of the point of concurrency along the way.

Do bear in mind, however, that such an approach runs the risk of cluttering the diagram. Points M and N really weren’t very important in anything that’s happened so far, so having two pairs doesn’t add extra insight in any of the previous methods. If this had been your first reaction, ask yourself whether it would have been as straightforward or natural to find a description of D which led to a clean argument.

Another direct argument

Finally, a really neat observation, that enables you to solve the problem without characterising D. We saw that triangles DBM and DCN were congruent, and so we can obtain one from the other by rotating around D. We say D is the centre of the spiral similarity (here in fact with homothety factor 1 ie a spiral congruence) sending BM to CN. Note that in this sort of transformation, the direction of these segments matters. A different spiral similarity sends BM to NC.

But let’s take any M,N and view D as this spiral centre. The transformation therefore maps line AB to AC and preserves lengths. So in fact we’ve characterised D without reference to M and N! Since everything we’ve said is reversible, this means as M and N vary, the point we seek, namely D, is constant.

This is only interesting as a proof variation if we can prove that D is the spiral centre without reference to one of the earlier arguments. But we can! In general a point D is the centre of spiral similarity mapping BM to CN iff it is also the centre of spiral similarity mapping BC to MN. And we can find the latter centre of spiral similarity using properties of the configuration. A is the intersection of MB and CN, so we know precisely that the spiral centre is the second intersection point of the two circumcircles, exactly as D is defined in the question.

(However, while this is cute, it’s somehow a shame not to characterise D as part of a solution…)

BMO2 2017

The second round of the British Mathematical Olympiad was taken yesterday by about 100 invited participants, and about the same number of open entries. To qualify at all for this stage is worth celebrating. For the majority of the contestants, this might be the hardest exam they have ever sat, indeed relative to current age and experience it might well be the hardest exam they ever sit. And so I thought it was particularly worth writing about this year’s set of questions. Because at least in my opinion, the gap between finding every question very intimidating, and solving two or three is smaller, and more down to mindset, than one might suspect.

A key over-arching point at this kind of competition is the following: the questions have been carefully chosen, and carefully checked, to make sure they can be solved, checked and written up by school students in an hour. That’s not to say that many, or indeed any, will take that little time, but in principle it’s possible. That’s also not to say that there aren’t valid but more complicated routes to solutions, but in general people often spend a lot more time writing than they should, and a bit less time thinking. Small insights along the lines of “what’s really going on here?” often get you a lot further into the problem than complicated substitutions or lengthy calculations at this level.

So if some of the arguments below feel slick, then I guess that’s intentional. When I received the paper and had a glance in my office, I was only looking for slick observations, partly because I didn’t have time for detailed analysis, but also because I was confident that there were slick observations to be made, and I felt it was just my task to find them.

Anyway, these are the questions: (note that the copyright to these is held by BMOS – reproduced here with permission.)

Question One

I immediately tried the example where the perpendicular sides are parallel to the coordinate axes, and found that I could generate all multiples of 3 in this way. This seemed a plausible candidate for an answer, so I started trying to find a proof. I observed that if you have lots of integer points on one of the equal sides, you have lots of integer points on the corresponding side, and these exactly match up, and then you also have lots of integer points on the hypotenuse too. In my first example, these exactly matched up too, so I became confident I was right.

Then I tried another example ( (0,0), (1,1), (-1,1) ) which has four integer points, and could easily be generalised to give any multiple of four as the number of integer points. But I was convinced that this matching up approach had to be the right thing, and so I continued, trusting that I’d see where this alternate option came in during the proof.

Good setup makes life easy. The apex of the isosceles triangle might as well be at the origin, and then your other vertices can be (m,n), (n,-m) or similar. Since integral points are preserved under the rotation which takes one equal side to the other, the example I had does generalise, but we really need to start enumerating. The number of integer points on the side from (0,0) to (m,n) is G+1, where G is the greatest common divisor of m and n. But thinking about the hypotenuse as a vector (if you prefer, translate it so one vertex is at the origin), the number of integral points on this line segment must be \mathrm{gcd}(m+n,m-n) +1.

To me, this felt highly promising, because this is a classic trope in olympiad problem-setting. Even without this experience, we know that this gcd is equal to G if m and n have different parities (ie one odd, one even) and equal to 2G if m and n have the same parity.

So we’re done. Being careful not to double-count the vertices, we have 3G integral points if m and n have opposite parities, and 4G integral points if m and n have the same parity, which exactly fits the pair of examples I had. But remember that we already had a pair of constructions, so (after adjusting the hypothesis to allow the second example!) all we had to prove was that the number of integral points is divisible by at least one of 3 and 4. And we’ve just done that. Counting how many integers less than 2017 have this property can be done easily, checking that we don’t double-count multiples of 12, and that we don’t accidentally include or fail to include zero as appropriate, which would be an annoying way to perhaps lose a mark after totally finishing the real content of the problem.
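
Both the parity claim about the gcd and the final count are easy to spot-check by machine (the count assumes, as above, that we want integers below 2017):

from math import gcd

for m in range(2, 60):
    for n in range(1, m):
        expected = gcd(m, n) if (m - n) % 2 else 2 * gcd(m, n)
        assert gcd(m + n, m - n) == expected

print(sum(1 for k in range(1, 2017) if k % 3 == 0 or k % 4 == 0))   # 1008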

Question Two

(Keen observers will note that this problem first appeared on the shortlist for IMO 2006 in Slovenia.)

As n increases, obviously \frac{1}{n} decreases, but the bracketed expression increases. Which of these effects is more substantial? Well \lfloor \frac{n}{k}\rfloor is the number of multiples of k which are at most n, and so as a function of n, this increases precisely when n is a multiple of k. So, we expect the bracketed expression to increase substantially when n has lots of factors, and to increase less substantially when n has few factors. An extreme case of the former might be when n is a large factorial, and certainly the extreme case of the latter is n a prime.

It felt easier to test a calculation on the prime case first, even though this was more likely to lead to an answer for b). When n moves from p-1 to p, the bracketed expression goes up by exactly two, as the first floor increases, and there is a new final term. So, we start with a fraction, and then increase the numerator by two and the denominator by one. Provided the fraction was initially greater than two, it stays greater than two, but decreases. This is the case here (for reasons we’ll come back to shortly), and so we’ve done part b). The answer is yes.

Then I tried to do the calculation when n was a large factorial, and I found I really needed to know the approximate value of the bracketed expression, at least for this value of n. And I do know that when n is large, the bracketed expression should be approximately n\log n, with a further correction of size at most n to account for the floor functions, but I wasn’t sure whether I was allowed to know that.

But surely you don’t need to engage with exactly how large the correction due to the floors is in various cases? This seemed potentially interesting (we are after all just counting factors), but also way too complicated. An even softer version of what I’ve just said is that the harmonic series (the sum of the first n reciprocals) diverges, so the bracketed expression grows faster than any fixed multiple of n. So in fact we have all the ingredients we need. The bracketed expression grows faster than n (you might want to formalise this by dividing by n before analysing the floors), and so the a_ns get arbitrarily large. Therefore, there must certainly be an infinite number of points of increase.
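
Both phenomena are visible immediately in a numerical sketch:

def a(n):
    return sum(n // k for k in range(1, n + 1)) / n

print(a(100) > a(101))                               # True: decrease at the prime 101
print([round(a(10 ** e), 2) for e in range(1, 5)])   # ...but unbounded growth, like H_n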

Remark: a few people have commented to me that part a) can be done easily by treating the case n=2^k-1, possibly after some combinatorial rewriting of the bracketed expression. I agree that this works fine. Possibly this is one of the best examples of the difference between doing a problem leisurely as a postgraduate, and actually under exam pressure as a teenager. Thinking about the softest possible properties of a sequence (roughly how quickly does it grow, in this case) is a natural first thing to do in all circumstances, especially if you are both lazy and used to talking about asymptotics, and certainly if you don’t have paper.

Question Three

I only drew a very rough diagram for this question, and it caused no problems whatsoever, because there aren’t really that many points, and it’s fairly easy to remember what their properties are. Even in the most crude diagram, we see R and S lie on AC and AD respectively, and so the conclusion about parallel lines is really about similarity of triangles ARS and ACD. This will follow either from some equal angles, or by comparing ratios of lengths.

Since angle bisectors by definition involve equal angles, the first attack point seems promising. But actually the ratios of lengths are better, provided we know the angle bisector theorem, which is literally about ratios of lengths in the angle bisector diagram. Indeed

\frac{AR}{RC}=\frac{AQ}{CQ},\quad \frac{AS}{SD}=\frac{AP}{PD},     (1)

and so it only remains to show that these quantities are in fact all equal. Note that there’s some anti-symmetry here – none of these expressions use B at all! We could for example note that AP/PD = BP/PC, from which

\left(\frac{AS}{SD}\right)^2 = \frac{AP.BP}{PC.PD},     (2)

and correspondingly for R and Q, and work with symmetric expressions. I was pretty sure that there was a fairly well-known result that in a cyclic quadrilateral, where P is the intersection of the diagonals

\frac{AP}{PC} = \frac{AD.AB}{DC.BC},     (3)

(I was initially wondering whether there was a square on the LHS, but an example diagram makes the given expression look correct.)

There will be a corresponding result for Q, and then we would be almost done by decomposing (2) slightly differently, once we’d proved (3) of course. But doing this will turn out to be much longer than necessary. The important message from (3) is that in a very simple diagram (only five points), we have a result which is true, but which is not just similar triangles. There are two pairs of similar triangles in the diagram, but they aren’t in the right places to get this result. What you do have is some pairs of triangles with one pair of equal angles, and one pair of supplementary angles (that is, \theta in one, and 180-\theta in the other). This is a glaring invitation to use the sine rule, since the sines of supplementary angles are equal.

But, this is also the easiest way to prove the angle bisector theorem. So maybe we should just try this approach directly on the original ratio-of-lengths statement that we decided at (1) was enough, namely \frac{AQ}{CQ}=\frac{AP}{PD}. And actually it drops out rapidly. Using natural but informal language referencing my diagram

\frac{AP}{PD} = \frac{\sin(\mathrm{Green})}{\sin(\mathrm{Pink})},\quad\text{and}\quad \frac{AQ}{CQ}= \frac{\sin(\mathrm{Green})}{\sin(180-\mathrm{Pink})}

and we are done. But whatever your motivation for moving to the sine rule, this is crucial. Unless you construct quite a few extra cyclic quadrilaterals, doing this with similar triangles and circle theorems alone is going to be challenging.

Remark: If you haven’t seen the angle bisector theorem before, that’s fine. Both equalities in (1) are a direct statement of the theorem. It’s not an intimidating statement, and it would be a good exercise to prove either of these statements in (1). Some of the methods just described will be useful here too!

Question Four

You might as well start by playing around with methodical strategies. My first try involved testing 000, 111, … , 999. After this, you know which integers appear as digits. Note that at this stage, it’s not the same as the original game with only three digits, because we can test using digits which we know are wrong, so that answers are less ambiguous. If the three digits are different, we can identify the first digit in two tests, and then the second in a further test, and so identify the third by elimination. If only two integers appear as digits, we identify each digit separately, again in three tests overall. If only one integer appears, then we are already done. So this is thirteen tests, and I was fairly convinced that this wasn’t optimal, partly because it felt like testing 999 was a waste. But even after lots of attempts at cases I couldn’t do better. So I figured I’d try to prove some bound, and see where I got.

A crucial observation is the following: when you run a test, the outcome eliminates some possibilities. One of the outcomes eliminates at least half the codes, and the other outcome eliminates at most half the codes. So, imagining you get unlucky every time, after k tests, you might have at least 1000\times 2^{-k} possible codes remaining. Since 2^9 = 512 < 1000, it follows that we need at least 10 tests.

For this bound to be tight, each test really does need to split the options roughly in two. But this certainly isn’t the case for the first test, which splits the options into 729 (no digit agreements) and 271 (at least one agreement). Suppose the first test reduces it to 729 options; then since 2^9 = 512 < 729, by the same argument as above we still need 10 further tests. We now know we need at least 11 tests, and so the original guess of 13 is starting to come back into play.

We now have to make a meta-mathematical decision about what to do next. We could look at how many options might be left after the second test, which has quite a large number of cases (depending on how much overlap there is between the first test number and the second test number). It’s probably going to be at most 512 in at least one of the cases, so this won’t get us to a bound of 12 unless we then consider the third test too. This feels like a poor route to take for now, as the tree of options has branching at rate 3 (or 4 if you count obviously silly things) per turn, so gets unwieldy quickly. Another thought is that this power of two argument is strong when the set of remaining options is small, so it’s easier for a test to split the field roughly in two.

Now go back to our proposed original strategy. When does the strategy work faster than planned? It works faster than planned if we find all the digits early (eg if they are all 6 or less). So the worst case scenario is if we find the correct set of digits fairly late. But the fact that we were choosing numbers of the form aaa is irrelevant, as the digits are independent (consider adding 3 to the middle digit modulo 10 at all times in any strategy – it still works!).

This is key. For k\le 9, after k tests, it is possible that we fail every test, which means that at least (10-k) options remain for each digit, and so at least (10-k)^3 options in total. [(*) Note that it might actually be even worse if eg we get a ‘close’ on exactly one test, but we are aiming for a lower bound, so at this stage considering an outcome sequence which is tractable is more important than getting the absolute worst case outcome sequence if it’s more complicated.] Bearing in mind that I’d already tried finishing from the case of reduction to three possibilities, and I’d tried hard to sneak through in one fewer test, and failed, it seemed sensible to try k=7.

After 7 tests, we have at least 27 options remaining, which by the powers-of-two argument requires at least 5 further tests to separate. So 12 in total, which is annoying, because now I need to decide whether this is really the answer and come up with a better construction, or enhance the proof.

Clearly though, before aiming for either of these things, I should actually try some other values of k, since this takes basically no time at all. And k=6 leaves 64 options, from which the power of two argument is tight; and k=5 leaves 125, which is less tight. So attacking k=6 is clearly best. We just need to check that the 7th move can’t split the options exactly into 32 + 32. Note that in the example, where we try previously unseen digits in every position, we split into 27 + 37 [think about (*) again now!]. Obviously, if we have more than four options left for any digit, we are done as then we have strictly more than 4x4x4=64 options. So it remains to check the counts if we try previously unseen digits in zero, one or two positions. Zero is silly (gives no information), and one and two can be calculated, and don’t give 32 + 32.
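
This final enumeration can be laid out explicitly. A sketch, taking the four remaining candidate digits in each position to be {0,1,2,3} (which we may, WLOG), with the seventh test using a live candidate digit in j positions and an already-eliminated digit in the rest:

from itertools import product

codes = list(product(range(4), repeat=3))       # the 64 remaining options
for j in range(4):
    # eliminated digits can never match, so 'Fail' just means avoiding
    # the candidate digit (say 0) in each of the j probed positions
    fail = sum(1 for c in codes if all(c[i] != 0 for i in range(j)))
    print(j, fail, 64 - fail)
# j=0: 64+0 (no information), j=1: 48+16, j=2: 36+28, j=3: 27+37 -- never 32+32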

So this is a slightly fiddly end to the solution, and relies upon having good control over what you’re trying to do, and what tools you currently have. The trick to solving this is resisting calculations and case divisions that are very complicated. In the argument I’ve proposed, the only real case division is right at the end, by which point we are just doing an enumeration in a handful of cases, which is not really that bad.

BMO1 2016 – the non-geometry

Here’s a link to yesterday’s BMO1 paper, and the video solutions for all the problems. I gave the video solution to the geometric Q5, and discuss aspects of this at some length in the previous post.

In these videos, for obvious educational reasons, there’s a requirement to avoid referencing theory and ideas that aren’t standard on the school curriculum or relatively obvious directly from first principles. Here, I’ve written down some of my own thoughts on the other problems in a way that might add further value for those students who already have some experience at olympiads and these types of problems. In particular, on problems you can do, it’s worth asking what you can learn from how you did them that might be applicable generally, and obviously for some of the harder problems, it’s worth knowing about solutions that do use a little bit of theory. Anyway, I hope it’s of interest to someone.

Question One

Obviously we aren’t going to write out the whole list, but there’s a trade-off in time between coming up with neat ideas involving symmetry, and just listing and counting things. Any idea is going to formalise somehow the intuitive statement ‘roughly half the digits are odd’. The neat ideas involve formalising the statement ‘if we add leading zeros, then roughly half the digits are odd’. The level of roughness required is less in the first statement than the second statement.

Then there’s the trade-off. Trying to come up with the perfect general statement that is useful and true might lead to something like the following:

‘If we write the numbers from 0000 to N, with leading zeros, and all digits of N+1 are even, then half the total digits, ie 2N of them, are odd.’

This is false, and maybe the first three such things you try along these lines are also false. What you really want to do is control the numbers from 0000 to 1999, for which an argument by matching is clear, and gives you 2000 x 4 / 2 = 4000 odd digits. You can exploit the symmetry by matching k with 1999-k, or do it directly first with the units, then with the tens and so on.

The rest (that is, 2000 to 2016) can be treated by listing and counting. Of course, the question wants an actual answer, so we should be wary of getting it wrong by plus or minus one in some step. A classic error of this kind is that the number of integers between 2000 and 2016 inclusive is 17, not 16. I don’t know why the memory is so vivid, but I recall being upset in Year 2 about erring on a problem of this kind involving fences and fenceposts.
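
Both halves of this are machine-checkable, which guards against exactly the fencepost errors just described:

odd_digits = lambda s: sum(int(ch) % 2 for ch in s)

# with leading zeros, exactly half of the 8000 digits of 0000-1999 are odd
assert sum(odd_digits(f"{k:04d}") for k in range(2000)) == 4000

# the remaining stretch, 2000-2016, by direct count
print(sum(odd_digits(str(k)) for k in range(2000, 2017)))   # 15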

Question Two

As with so many new types of equation, the recipe is to reduce to a type of equation you already know how to solve. Here, because \{y\} has a different form on different ranges of y, it makes sense to consider the three ranges

y\in[0,1/25],\, y\in[1/25,1/8],\, y\in [1/8,\infty),

as for each of these ranges, we can rewrite 5y\{8y\}\{25y\} in terms of standard functions without this bracket notation. On each range we can solve the corresponding equation. We then have to check that each solution does actually lie in the appropriate range, and in two cases it does, and in one case it doesn’t.

Question Three

Adding an appropriately-chosen value to each side allows you to factorise the quadratics. This might be very useful. But is it an invitation to do number theory and look at coprime factors and so on, or is a softer approach more helpful?

The general idea is that the set of values taken by any quadratic sequence with integer coefficients and leading coefficient one looks from a distance like the set of squares, or the set \{m(m+1), \,m\in\mathbb{N}\}, which you might think of as ‘half-squares’ or ‘double triangle numbers’ as you wish. And by ‘from a distance’, I mean ‘up to an additive constant’. If you care about limiting behaviour, then of course this additive constant might well not matter, but if you care about all solutions, you probably do care. To see why this holds, note that

n^2+2n = (n+1)^2 - 1,

so indeed up to an additive constant, the quadratic on the LHS gives the squares, and similarly

n^2 - 7n = (n-4)(n-3)-12,

and so on. To solve the equation n^2=m^2+6 over the integers, one can factorise, but another approach is to argue that the distance between adjacent squares is much more than 6 in the majority of cases, which leaves only a handful of candidates for n and m to check.

The same applies to this question. Adding on 9 gives

n^2-6n+9 = m^2 + m -1,

which is of course the same as

(n-3)^2 = m(m+1)-1.

Now, since we know that adjacent squares and ‘half-squares’ are more than one apart in all but a couple of cases, we know why there should only be a small number of solutions. I would call a method of this kind square-sandwiching, but I don’t see much evidence from Google that this term is generally used, except on this blog.

Of course, we have to be formal in an actual solution, and the easiest way to achieve this is to sandwich m(m+1)-1 between the adjacent squares m^2 and (m+1)^2, since it is clear-cut that the only squares which differ by one are zero and one itself.
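Since the rearranged equation is completely explicit, the sandwiching claim is also easy to sanity-check by brute force over a window of integers. A sketch:

```python
# All pairs in a window satisfying (n-3)^2 = m(m+1) - 1; square-sandwiching
# says this list stays short no matter how wide the window gets.
sols = [(n, m) for n in range(-100, 101) for m in range(-100, 101)
        if (n - 3) ** 2 == m * (m + 1) - 1]
print(sols)  # only m = 1 (and its mirror image m = -2), with n = 2 or n = 4
```

Naturally this proves nothing by itself, but it’s reassuring to see exactly the handful of candidates that the formal sandwich then confirms.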

bmo1-2016-q4

I really don’t have much to say about this. It’s not on the school curriculum so the official solutions are not allowed to say this, but you have to use that all integers except those which are 2 modulo 4 can be written as a difference of two squares. The easiest way to show this is by explicitly writing down the appropriate squares, treating the cases of odd numbers and multiples of four separately.
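The explicit representations in question are the standard ones: for odd k, take \left(\frac{k+1}{2}\right)^2-\left(\frac{k-1}{2}\right)^2=k, and for multiples of four, (m+1)^2-(m-1)^2=4m. Here’s a short sketch that checks both constructions, and confirms by exhaustive search that nothing which is 2 modulo 4 works:

```python
def diff_two_squares(k):
    # odd k: k = ((k+1)/2)^2 - ((k-1)/2)^2;  4m: 4m = (m+1)^2 - (m-1)^2
    if k % 2 == 1:
        return (k + 1) // 2, (k - 1) // 2
    if k % 4 == 0:
        return k // 4 + 1, k // 4 - 1
    return None  # k = 2 mod 4: no representation exists

for k in range(1, 200):
    rep = diff_two_squares(k)
    if rep is not None:
        a, b = rep
        assert a * a - b * b == k
    else:
        # if a^2 - b^2 = k with a > b >= 0 then a <= k, so this search is exhaustive
        assert all(a * a - b * b != k for a in range(k + 1) for b in range(a))
```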

So you lose if after your turn the running total is 2 modulo 4. At this point, the combinatorics isn’t too hard, though as in Q1 one has to be mindful that making an odd number of small mistakes will lead to the wrong answer! As in all such problems, it’s best to try and give a concrete strategy for Naomi. And it’s best if there’s something inherent in the strategy which makes it clear that it’s actually possible to implement. (Eg, if you claim she should choose a particular number, ideally it’s obvious that number is available to choose.)

One strategy might be: Naomi starts by choosing a multiple of four. Then an even number of multiples of four remain, so Naomi can proceed as follows:

  • whenever Tom chooses a multiple of four, Naomi may choose another multiple of four;
  • whenever Tom chooses a number which is one (respectively three) modulo 4, Naomi may choose another which is three (respectively one) modulo 4.

Note that Naomi may always choose another multiple of four precisely because we’ve also specified the second condition. If Tom sometimes chooses an odd number and Naomi responds with a multiple of four out of an idle and illogical sense of caprice, then the first bullet point would not remain true. One can avoid this problem by being more specific about exactly what the algorithm is, though there’s a danger that statements like ‘whenever Tom chooses k, Naomi should choose 100-k’ can introduce problems about avoiding the case k=50.

bmo1-2016-q6

I started this at the train station in Balatonfured with no paper, and so I decided to focus on the case of just m, m+1 and n, n+2. This wasn’t a good idea in my opinion, because it was awkward but guessable, and so didn’t give much insight into actual methods. Also, it didn’t feel like inducting on the size of the sequences in question was likely to be successful.

If we know about the Chinese Remainder Theorem, we should know that we definitely want to use it here in some form. Here are some clearly-written notes about CRT with exercises and hard problems which a) I think are good; b) cite this blog in the abstract. (I make no comment on correlation or causality between a) and b)…)

CRT is about solutions to sets of congruence equations modulo various bases. There are two aspects to this, and it feels to me like a theorem where students often remember one aspect and forget the other, in some order. Firstly, the theorem says that, subject to conditions on the values modulo any non-coprime bases, there exist solutions. In many constructive problems, especially when the congruences are not explicit, this is useful enough by itself.

But secondly, the theorem tells us what all the solutions are. There are two stages to this: finding the smallest solution, then finding all the solutions. Three comments: 1) the second of these is easy – we just add on all multiples of the LCM of the bases; 2) we don’t need to find the smallest solution – any solution will do; 3) if you understand CRT, you might well comment that the previous two comments are essentially the same. Anyway, finding the smallest solution, or any solution is often hard. When you give students an exercise sheet on CRT, finding an integer which is 3 mod 5, 1 mod 7 and 12 mod 13 is the hard part. Even if you’re given the recipe for the algorithm, it’s the kind of computation that’s more appealing if you are an actual computer.
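For the record, here’s a minimal sketch of that algorithm, combining the congruences pairwise with the extended Euclidean algorithm (the function names are mine, not any standard library’s):

```python
def ext_gcd(a, b):
    # returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def crt(congruences):
    # congruences: list of (residue, modulus) pairs, moduli pairwise coprime
    r, m = 0, 1
    for r2, m2 in congruences:
        g, p, q = ext_gcd(m, m2)   # p*m + q*m2 = 1
        assert g == 1, "this sketch assumes coprime moduli"
        r = (r * q * m2 + r2 * p * m) % (m * m2)
        m *= m2
    return r, m  # smallest non-negative solution, and the combined modulus

print(crt([(3, 5), (1, 7), (12, 13)]))  # (428, 455): the exercise from above
```

The point of the combination step is that if p\cdot m+q\cdot m_2=1, then r\cdot q\cdot m_2+r_2\cdot p\cdot m is r modulo m and r_2 modulo m_2, which is exactly the two-congruence case of the theorem.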

Ok, so returning to this problem, the key step is to phrase everything in a way which makes the application of CRT easy. We observe that taking n=2m satisfies the statement – the only problem of course is that 2m is not odd. But CRT then tells us what all solutions for n are, and it’s clear that 2m is the smallest, so we only need to add on the LCM (which is odd) to obtain the smallest odd solution.

BMO1 2016 Q5 – from areas to angles

For the second year in a row Question 5 has been a geometry problem; for the second year in a row I presented the video solution; and for the second year in a row I received the question(s) while I was abroad. You can see the video solutions for all the questions here (for now). I had a think about Q5 and Q6 on the train back from a day out at Lake Balaton in Western Hungary, so in keeping with last year’s corresponding post, here are some photos from those sunnier days.

bmo1-2016-q5a

I didn’t enjoy this year’s geometry quite as much as last year’s, but I still want to say some things about it. At the time of writing, I don’t know who proposed Q5, but in contrast to most geometry problems, where you can see how the question might have emerged by tweaking a standard configuration, I don’t have a good intuition for what’s really going on here. I can, however, at least offer some insight into why the ‘official’ solution I give on the video has the form that it does.

bmo1-2016-q5b

The configuration given is very classical, with only five points, and lots of equal angles. The target statement is also about angles; indeed we have to show that a particular angle is a right-angle. So we might suspect that the model approach might well involve showing some other tangency relation, where one of the lines AC and BC is a radius and the other a tangent to a relevant circle. I think it’s worth emphasising that throughout mathematics, the method of solving a problem is likely to involve similar objects to the statement of the problem itself. And especially so in competition problems – it seemed entirely reasonable that the setter might have found a configuration with two corresponding tangency relations, and constructed a problem by essentially only telling us the details of one of the relations.

There’s the temptation to draw lots of extra points or lots of extra lines to try and fit the given configuration into a larger configuration with more symmetry, or more suggestive similarity [1]. But, at least for my taste, you can often make a lot of progress just by thinking about what properties you want the extra lines and points to have, rather than actually drawing them. Be that as it may, for this question, I couldn’t initially find anything suitable along these lines [2]. So we have to think about the condition.

But then the condition we’ve been given involves areas, which feels at least two steps away from giving us lots of information about angles. It doesn’t feel likely that we are going to be able to read off some tangency conditions immediately from the area equality we’ve been given. So before thinking about the condition too carefully, it makes sense to return to the configuration and think in very loose terms about how we might prove the result.

How do we actually prove that an angle is a right-angle? (*) I was trying to find some tangency condition, but a right-angle is also obviously the angle subtended by a diameter of a circle. You could aim for the Pythagoras relation in a triangle which includes the proposed right-angle, or possibly it might be easier to know one angle and two side-lengths in such a triangle, and conclude with some light trigonometry? We’ve been given a condition in terms of areas, so perhaps we can use the fact that the area of a right-angled triangle is half the product of the shorter side-lengths? Getting more exotic, if the configuration is suited to description via vectors, then a dot product might be useful, but probably this configuration isn’t.

The conclusion should be that it’s not obvious what sort of geometry we’re going to need to do to solve the problem. Maybe everything will come out from similar triangles with enough imagination, but maybe it won’t. So that’s why in the video, I split the analysis into an analysis of the configuration itself, and then an analysis of the area condition. What really happens is that we play with the area condition until we get literally anything that looks at all like one of the approaches discussed in paragraph (*). To increase our chances, we need to know as much about the configuration as possible, so any deductions from the areas are strong.

The configuration doesn’t have many points, so there’s not much ambiguity about what we could do. There are two tangents to the circle. We treat APC with equal tangents and the alternate segment theorem to show the triangle is isosceles and that the base angles are equal to the angle at B in ABC. Then point Q is ideally defined in terms of ABC to use power of a point, and add some further equal angles into the diagram. (Though it turns out we don’t need the extra equal angle except through power of a point.)

So we have some equal angles, and also some length relations. One of the length relations is straightforward (AP=CP) and the other less so (power of a point CQ^2 = AQ\cdot BQ). The really key observation is that the angle-chasing has identified

\angle PAQ = 180^\circ - \angle ACB,

which gives us an alternative goal: maybe it will be easier to show that PAQ is a right-angle.

Anyway, that pretty much drinks the configuration dry, and we have to use the area condition. I want to emphasise how crucial this phase is for this type of geometry problem. Thinking about how to prove the goal, and getting a flavour for the type of relation that comes out of the configuration, is great, but now we need to watch like a hawk when we play with the area condition for relations which look similar to what we have, and to where we might be going, as that’s very likely to be the key to the problem.

We remarked earlier that we’re aiming for angles, and are given areas. A natural middle ground is lengths. All the more so since the configuration doesn’t have many points, and so several of the triangles listed as having the same area also have the same or similar bases. You might have noticed that ABC and BCQ share height above line AQ, from which we deduce AB=BQ. It’s crucial then to identify that this is useful because it supports the power of a point result from the configuration itself. It’s also crucial to identify that we are doing a good job of relating lots of lengths in the diagram. We have two pairs of equal lengths, and (through Power of a Point) a third length which differs from one of them by a factor of \sqrt{2}.

If we make that meta-mathematical step, we are almost home. We have a relation between a triple of lengths, and between a pair of lengths. These segments make up the perimeter of triangle APQ. So if we can relate one set of lengths to the other set of lengths, then we’ll know the ratios of the side lengths of APQ. And this is excellent, since much earlier we proposed Pythagoras as a possible method for establishing that an angle is a right-angle, and this is exactly the information we’d need for that approach.

Can we relate the two sets of lengths? We might guess yes, that with a different comparison of triangles areas (since we haven’t yet used the area of APC) we can find a further relation. Indeed, comparing APC and APQ gives CQ = 2PC by an identical argument about heights above lines.

bmo1-2016-q5c

Now we know all the ratios, it really is just a quick calculation…
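For the record, here is a sketch of that calculation, writing PC=t, and using the facts (visible in the configuration) that C lies on segment PQ and B on segment AQ. We have AP=PC=t and CQ=2PC=2t, so PQ=3t. Meanwhile AB=BQ gives AQ=2BQ, and so power of a point reads

CQ^2=AQ\cdot BQ=2BQ^2\quad\Rightarrow\quad BQ=\sqrt{2}t,\quad AQ=2\sqrt{2}t.

But then

AP^2+AQ^2=t^2+8t^2=9t^2=PQ^2,

and the converse of Pythagoras gives the right-angle at A, which is exactly the alternative goal \angle PAQ=90^\circ identified above.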

[1] – I discussed the notion of adding extra points when the scripts for the recording were being shared around. It was mentioned that for some people, the requirement to add extra points (or whatever) marks a hard division between ‘problems they can do’ and ‘problems they can’t do’. While I didn’t necessarily follow this practice while I was a contestant myself, these days the first thing I do when I see any angles or an angle condition in a problem is to think about whether there’s a simple way to alter the configuration so the condition is more natural. Obviously this doesn’t always work (see [2]), but it’s on my list of ‘things to try during initial thinking’, and certainly comes a long way before approaches like ‘place in a Cartesian coordinate system’.

[2] – Well, I could actually find something suitable, but I couldn’t initially turn it into a solution. The most natural thing is to reflect P in AC to get P’, and Q in BC to get Q’. The area conditions [AP’C]=[ABC]=[BCQ’] continue to hold, but now P’ and B are on the same side of AC, hence P’B || AC. Similarly AQ’ || BC. I see no reason not to carry across the equal length deductions from the original diagram, and we need to note that angles P’AC, ACP’, CBA are equal and angles Q’AB and BAC are equal. In the new diagram, there are many things it would suffice to prove, including that CP’Q’ are collinear. Note that unless you draw the diagram deliberately badly, it’s especially easy accidentally to assume that CP’Q’ are collinear while playing around, so I wasted quite a bit of time. Later, while writing up this post, I could finish it [3].

[3] – In the double-reflected diagram, BCQ’ is similar to P’BA, and since Q’C=2P’C=2P’A, and Q’B=AB, you can even deduce that the scale factor is \sqrt{2}. There now seemed to be a few options:

  • focus on AP’BC, where we now know three of the lengths, and three of the angles are equal, so we can solve for the measure of this angle. I had to use a level of trigonometry rather more exotic than the Pythagoras of the original solution, so this doesn’t really serve the purpose.
  • Since BCQ’ is similar to P’BA and ABQ’ similar to CP’A, we actually have Q’BCA similar to AP’BC. In particular, \angle CBP' = \angle ACB, and thus both are 90. Note that for this, we only needed the angle deductions in the original configuration, and the pair of equal lengths.
  • There are other ways to hack this final stage, including showing that BP’ meets AQ’ at the latter’s midpoint, to give CP’Q’ collinear.

Lagrange multipliers Part One: A much simpler setting

I am currently in northern Hungary for our annual winter school for some of the strongest young school-aged mathematicians in the UK and Hungary. We’ve had a mixture of lectures, problem-solving sessions and the chance to enjoy a more authentic version of winter than is currently on offer in balmy Oxford.

One of my favourite aspects of this event is the chance it affords for the students and the staff to see a slightly different mathematical culture. It goes without saying that Hungary has a deep tradition in mathematics, and the roots start at school. The British students observe fairly rapidly that their counterparts have a much richer diet of geometry and combinatorial methods at school, which is certainly an excellent grounding for maths competitions. By contrast, our familiarity with calculus is substantially more developed – by the time students who study further maths leave school, they can differentiate almost anything.

But the prevailing attitude in olympiad circles is that calculus is unrigorous, and hence an illegal method. The more developed summary is that calculus methods are hard, or at least technical. This is true, and no-one wants to spoil a measured development of analysis from first principles, but since some of the British students asked, it seemed worth giving a short exposition of why calculus can be made rigorous. They are mainly interested in the multivariate case, and the underlying problem is that the approach suggested by the curriculum doesn’t generalise well at all to the multivariate setting. Because it’s much easier to imagine functions of one variable, we’ll develop the machinery of the ideas in this single-variable setting in this post.

Finding minima – the A-level approach

Whether in an applied or an abstract setting, the main use of calculus at school is to find where functions attain their maximum or minimum. The method can be summarised quickly: differentiate, find where the derivative is zero, and check the second-derivative at that value to determine that the stationary point has the form we want.

Finding maxima and finding minima are a symmetric problem, so throughout, we talk about finding minima. It’s instructive to think of some functions where the approach outlined above fails.

20160131_124059

In the top left, there clearly is a minimum, but the function is not differentiable at the relevant point. We can probably assert this without defining differentiability formally: there isn’t a well-defined local tangent at the minimum, so we can’t specify the gradient of the tangent. In the top right, there’s a jump, so depending on the value the function takes at the jump point, maybe there is a minimum. But in either case, the derivative doesn’t exist at the jump point, so our calculus approach will fail.

In the middle left, calculus will tell us that the stationary point in the middle is a ‘minimum’, but it isn’t the minimal value taken by the function. Indeed the function doesn’t have a minimum, because it seems to go off to -\infty in both directions. In the middle right, the asymptote provides a lower bound on the values taken by the function, but this bound is never actually achieved. Indeed, we wouldn’t make any progress by calculus, since there are no stationary points.

At the bottom, the functions are only defined on some interval. In both cases, the minimal value is attained at one of the endpoints of the interval, even though the second function has a point which calculus would identify as a minimum.

The underlying problem in any calculus argument is that the derivative, if it exists, only tells us about the local behaviour of the function. At best, it tells us that a point is a local minimum. This is at least a necessary condition to be a global minimum, which is what we actually care about. But this is a change of emphasis from the A-level approach, for which having zero derivative and appropriately-signed second-derivative is treated as a sufficient condition to be a global minimum.

Fortunately, the A-level approach is actually valid. It can be shown that if a function is differentiable everywhere, and it only has one stationary point, where the second-derivative exists and is positive, then this is in fact the global minimum. The first problem is that this is really quite challenging to show – since in general the derivative might not be continuous, although it might have many of the useful properties of a continuous function. Showing all of this really does require setting everything up carefully with proper definitions. The second problem is that this approach does not generalise well to multivariate settings.

Finding minima – an alternative recipe

What we do is narrow down the properties which the global minimum must satisfy. Here are some options:

0) There is no global minimum. For example, the functions pictured in the middle row satisfy this.

Otherwise, say the global minimum is attained at x. It doesn’t matter if it is attained at several points. At least one of the following options must apply to each such x.

1) f'(x)=0,

2) f'(x) is not defined,

3) x lies on the boundary of the domain where f is defined.

We’ll come back to why this is true. But with this decomposition, the key to identifying a global minimum via calculus is to eliminate options 0), 2) and 3). Hopefully we can eliminate 2) immediately. If we know we can differentiate our function everywhere, then 2) couldn’t possibly hold for any value of x. Sometimes we will be thinking about functions defined everywhere, in which case 3) won’t matter. Even if our function is defined on some interval, this only means we have to check two extra values, and this isn’t such hard work.

20160131_124716

It’s worth emphasising why, if x is a local minimum not on the boundary and f'(x) exists, then f'(x)=0. We show that if f'(x)\ne 0, then x can’t be a local minimum. Suppose f'(x)>0. Then both the formal definition of the derivative, and the geometric interpretation in terms of the gradient of a tangent which locally approximates the function, give that, when h is small,

f(x-h) = f(x)-h f'(x) +o(h),

where this ‘little o’ notation indicates that for small enough h, the final term is much smaller than the second term. So for small enough h, f(x-h)<f(x), and so we don’t have a local minimum.

The key is eliminating option 0). Once we know that there definitely is a global minimum, we are in a good position to identify it using calculus and a bit of quick checking. But how would we eliminate option 0)?

Existence of global minima

This is the point where I’m in greatest danger of spoiling first-year undergraduate course content, so I’ll be careful.

As we saw in the middle row, when functions are defined on the whole real line, there’s the danger that they can diverge to \pm \infty, or approach some bounding value while never actually attaining it. So life gets much easier if you work with functions defined on a closed interval. We also saw what can go wrong if there are jumps, so we will assume the function is continuous, meaning that it has no jumps, or that as y gets close to x, f(y) gets close to f(x). If a function can be differentiated everywhere, then it is continuous, because we’ve seen that once a function has a jump (see caveat 2) then it certainly isn’t possible to define the derivative at the jump point.

It’s a true result that a continuous function defined on a closed interval is bounded and attains its bounds. Suppose, for contradiction, that such a function takes arbitrarily large values. The main idea is that if the function takes arbitrarily large values throughout the interval, then because the interval is finite, it also takes arbitrarily large values near some point, which will make it hard for the function to be continuous at that point. You can apply a similar argument to show that the function can’t approach a threshold without attaining it somewhere. So how do you prove that such a point exists? Well, you probably need to set up some formal definitions of all the properties under discussion, and manipulate them carefully. Which is fine. If you’re still at school, then you can either enjoy thinking about this yourself, or wait until analysis courses at university.

My personal opinion is that this is almost as intuitive as the assertion that if a continuous function takes both positive and negative values, then it has a zero somewhere in between. I feel if you’re happy citing the latter, then you can also cite the behaviour of continuous functions on closed intervals.

Caveat 2) It’s not true to say that if a function doesn’t have jumps then it is continuous. There are other kinds of discontinuity, but in most contexts these are worse than having a jump, so it’s not disastrous in most circumstances to have this as your prime model of non-continuity.

Worked example

Question 1 of this year’s BMO2 was a geometric inequality. I’ve chosen to look at this partly because it’s the first question I’ve set to make it onto BMO, but mainly because it’s quite hard to find olympiad problems which come down to inequalities in a single variable.

Anyway, there are many ways to parameterise and reparameterise the problem, but one method reduces, after some sensible application of Pythagoras, to showing

f(x)=x+ \frac{1}{4x} + \frac{1}{4x+\frac{1}{x}+4}\ge \frac{9}{8}, (*)

for all positive x.

There are simpler ways to address this than calculus, especially if you establish or guess that the equality case is x=1/2. Adding one to both sides is probably a useful start.

But if you did want to use calculus, you should argue as follows. (*) is certainly true when x\ge \frac{9}{8} (the first term alone is big enough) and also when x\le \frac{2}{9} (the second term alone is big enough). The function f(x) is continuous, and so on the interval [\frac{2}{9},\frac{9}{8}] it attains a minimum somewhere. We can differentiate, and fortunately the derivative factorises (this might be a clue that there’s probably a better method…) as

(1-\frac{1}{4x^2}) \left[ 1 - \frac{4}{(4x+\frac{1}{x}+4)^2} \right].

If x is positive, the second bracket can’t be zero, so the only stationary point is found at x=1/2. We can easily check that f(\frac12)=\frac98, and we have already seen that f(\frac29),f(\frac98)>\frac98. We know f attains its minimum on [\frac29,\frac98], and so this minimal value must be \frac98, as we want.
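If you have a computer algebra system to hand, the factorisation and the endgame take seconds to sanity-check. A sympy sketch:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = x + 1 / (4 * x) + 1 / (4 * x + 1 / x + 4)

# the claimed factorisation of the derivative
claimed = (1 - 1 / (4 * x ** 2)) * (1 - 4 / (4 * x + 1 / x + 4) ** 2)
assert sp.simplify(sp.diff(f, x) - claimed) == 0

# value at the unique positive stationary point
assert f.subs(x, sp.Rational(1, 2)) == sp.Rational(9, 8)
```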

Overall, the moral of this approach is that even if we know how to turn the handle both for the calculation, and for the justification, it probably would be easier to use a softer approach if possible.

Next stage

For the next stage, we assess how much of this carries across to the multivariate setting, including Lagrange multipliers to find minima of a function subject to a constraint.

Pencils, Simson’s Line and BMO1 2015 Q5

When on olympiad duty, I normally allow myself to be drawn away from Euclidean geometry in favour of the other areas, which I feel are closer to home in terms of the type of structures and arguments I am required to deal with in research. For various reasons, I nonetheless ended up choosing to present the solution to the harder geometry on the first round of this year’s British Mathematical Olympiad a couple of weeks ago. The paper was taken a week ago, so I’m now allowed to write about it, and Oxford term finished yesterday, so I now have time to write up the notes I made about it during a quick trip to Spain. Here are three gratuitous photos to remind us all what a blue sky looks like:

And here’s the statement of the problem:

BMO1 Q5 a

and you can find the video of the solution I presented here (at least for now). Thanks to the AV unit at the University of Bath, not just as a formality, but because they are excellent – I had no right to end up looking even remotely polished.

As so often with geometry problems, the key here is to find an entry point into the problem. There are a lot of points and a lot of information (and we could add extra points if we wanted to), but we don’t expect that we’ll need to use absolutely all the information simultaneously. The main reason I’m going to the trouble to write this blog post is that I found an unusually large number of such entry points for this problem. I think finding the entry points is what students usually find hardest, and while I don’t have a definitive way to teach people how to find these, perhaps seeing a few, with a bit of reverse reconstruction of my thought process, might be helpful or interesting?

If you haven’t looked at the problem before, you will lose this chance if you read what follows. Nonetheless, some of you might want to anyway, and some of you might have looked at the problem but forgotten it, or not have a diagram to hand, so here’s my whiteboard diagram:

BMO1 Q5 b

Splitting into stages

A natural first question is: “how am I supposed to show that four points are collinear?” Typically it’s interesting enough to show that three points are collinear. So maybe our strategy will be to pick three of the points, show they are collinear, then show some other three points are collinear, and then patch everything together. In my ‘official solution’ I made the visual observation that it looks like the four points P,Q,R,S are not just collinear, but lie on a line parallel to FE. This is good, because it suggests an alternative, namely to split the points P,Q,R,S into three segments, and show each of them is parallel to FE. We can reduce our argument by 1/3, since PQ and RS are symmetric in terms of the statement.

BMO1 Q5 c

So in our reduced diagram for RS, we need an entry point. It doesn’t look like A is important at all. What can we say about the remaining seven points? Well, it looks like we’ve got a pencil of three lines through C, and two triangles, each constructed by taking one point on each of these lines. Furthermore, two pairs of sides of the triangles are parallel. Is this enough to prove that the third pair of sides is parallel?

Well, yes it is. I claim that this is the natural way to think about this section of the diagram. The reason I avoided it in the solution is that it requires a few more lines of written deduction than we might have expected. The key point is that saying BF parallel to DR is the same as saying BFC and DRC are similar. And the same applies to BE parallel to DS being the same as saying BEC similar to DSC.

We now have control of a lot of angles in the diagram, and by being careful we could do an angle chase to show that \angle FEB = \angle RSD or similar, but this is annoying to write down on a whiteboard. We also know that similarity gives rise to constant ratios of lengths, and this is (at least in terms of total equation length) probably the easiest way to proceed. FC/RC = BC/DC by the first similarity relation, and EC/SC = BC/DC by the second similarity relation, so FC/RC = EC/SC, and we can reverse the argument to conclude FE || RS.

So, while I’m happy with the cyclic quadrilaterals argument in the video (and it works in an almost identical fashion for the middle section QR too), spotting this pencil of lines configuration was key. Why did I spot it? I mean, once A is eliminated, there were only the seven points in the pencil left, but we had to (actively) make the observation that it was a pencil. Well, this is where it becomes hard to say. Perhaps it was the fact that I was working out of a tiny notebook so felt inclined to think about it abstractly before writing down any angle relations (obviously there are lots)? Perhaps it was because I just knew that pencils of lines and sets of parallel lines go together nicely?

While I have said I am not a geometry expert, I am aware of Desargues’ Theorem, of which this analysis is a special case, or at least of the ingredients. This is not an exercise in showing off that I know heavy projective machinery to throw at non-technical problems, but rather that knowing the ingredients of a theorem is enough to remind you that there are relations to be found, which is certainly a meta-analytic property that exists much more widely in mathematics and beyond.

Direct enlargement

If I’d drawn my board diagram even more carefully, it might have looked like FE was in fact the enlargement of the line P,Q,R,S from D by a factor of 2. This is the sort of thing that might have been just an accidental consequence of the diagram, but it’s still worth a try. In particular, we only really need four points in our reduced diagram here, eg D,E,F,R, though we keep in mind that we may need to recall some property of the line FR, which is really the line FC.

Let’s define R’ to be the enlargement of R from D by a factor of 2. That is, we look along the ray DR, and place the point R’ twice as far from D as R. We want to show that R’ lies on FE. This would mean that FR is the perpendicular bisector of DR’ in the triangle FDR’, and would further require that FR is the angle bisector of \angle DFR', which we note is \angle DFE. At this stage our diagram is small enough that I can literally draw it convincingly on a post-it note, even including P and P’ for good measure:

BMO1 Q5 d

So all we have to do is check that FC (which is the same as FR) is actually the angle bisector of DFE, and for this we should go back to a more classical diagram (maybe without P,Q,R,S) and argue by angle-chasing. Then, we can reverse the argument described in the previous paragraph. Q also fits this analysis, but P and S are a little different, since these lie on the external angle bisectors. This isn’t qualitatively harder to deal with, but it’s worth emphasising that this might be harder to see!

I’ve described coming at this approach from the observation of the enlargement with a factor of 2. But it’s plausible that one might have seen the original diagram and said “R is the foot of the perpendicular from D onto the angle bisector of DFE”, and then come up with everything useful from there. I’m not claiming that this observation is either especially natural or especially difficult, but it’s the right way to think about point R for this argument.

Simson Lines

The result about the Simson Line says that whenever P is a point on the circumcircle of a triangle ABC, the feet of the perpendiculars from P to the sides of the triangle (some of which will need to be extended) are collinear. This line is called the Simson line. The converse is also true, and it is little extra effort to show that the reflections of P in the sides are collinear (ie the Simson line enlarged from P by factor 2) and pass through the orthocentre H of ABC.

It turns out that this can be used to solve the problem quite easily. I don’t want to emphasise how to do this. I want to emphasise again that the similarity of the statement of the theorem to the statement of this particular problem is the important bit. Both involve dropping perpendiculars from a single point onto other lines. So even if it hadn’t worked easily in this case, it would still have been a sensible thing to try if one knew (and, crucially, remembered) the Simson line result.

I was working on this script during an evening in Barcelona, and tapas culture lends itself very well to brief solutions. Whether it was exactly between the arrival of cerveza and the arrival of morcilla or otherwise, this was the extent of my notes on this approach to the problem:

BMO1 Q5 e

And this makes sense. No computation or technical wizardry is required. Once you’ve identified the relevant reference triangle (here HEC), and have an argument to check that the point playing the role of P (here D) is indeed on the circumcircle (it’s very clear here), you are done. But it’s worth ending by reinforcing the point I was trying to make, that considering the Simson line is an excellent entry point to this problem because of the qualitative similarities in the statements. Dealing with the details is sometimes hard and sometimes not, and in this case it wasn’t, but that isn’t normally the main challenge.

Generating Functions for the IMO

The background to this post is that these days I find myself using generating functions all the time, especially for describing the stationary states of various coalescence-like processes. I remember meeting them vaguely while preparing for the IMO as a student. However, a full working understanding must have eluded me at the time, as for Q5 on IMO 2008 in Madrid I had written down in big boxes the two statements involving generating functions that immediately implied the answer, but failed to finish it off. The aim of this post is to help this year’s team avoid that particular pitfall.

What are they?

I’m going to define some things in a way which will be most relevant to the type of problems you are meeting now. Start with a sequence (a_0,a_1,a_2,\ldots). Typically these will be the sizes of various combinatorial sets. Eg a_n = number of partitions of [n] with some property. Define the generating function of the sequence to be:

f(x)=\sum_{k\geq 0}a_k x^k=a_0+a_1x+a_2x^2+\ldots.

If the sequence is finite, then this generating function is a polynomial. In general it is a power series. As you may know, some power series can be rather complicated, in terms of where they are defined. Eg

1+x+x^2+x^3+\ldots=\frac{1}{1-x},

only when |x|<1. For other values of x, the LHS diverges. Defining f over \mathbb{C} is fine too. This sort of thing is generally NOT important for applications of generating functions to combinatorics. To borrow a phrase from Wilf, a generating function is a convenient ‘clothesline’ on which to hang a sequence of numbers.

We need a notation to get back from the generating function to the coefficients. Write [x^k]g(x) to denote the coefficient of x^k in the power series g(x). So, if g(x)=3x^3-5x^2+7, then [x^2]g(x)=-5. It hopefully should never be relevant unless you read some other notes on the topic, but there is also the notation [\alpha x^2]g(x):=\frac{[x^2]g(x)}{\alpha}, which does make sense after a while.

How might they be useful?

Example: binomial coefficients a_k=\binom{n}{k} appear, as the name suggests, as coefficients of

f_n(x)=(1+x)^n=\sum_{k=0}^n \binom{n}{k}x^k.

Immediate consequence: it’s trivial to work out \sum_{k=0}^n \binom{n}{k} and \sum_{k=0}^n(-1)^k \binom{n}{k} by substituting x=\pm 1 into f_n.

Less obvious consequence. By considering the ways of choosing n balls from a collection of a red balls and b blue balls, one can verify

\binom{a+b}{n}=\sum_{k=0}^n \binom{a}{k}\binom{b}{n-k}.

We can rewrite the RHS as

\sum_{k+l=n}\binom{a}{k}\binom{b}{l}.

Think how we calculate the coefficient of x^n in the product f(x)g(x), and it is now clear that \binom{a+b}{n}=[x^n](1+x)^{a+b}, while

\sum_{k+l=n}\binom{a}{k}\binom{b}{l}=[x^n](1+x)^a(1+x)^b,

so the result again follows. This provides a good slogan for generating functions: they often replicate arguments via bijections, even if you can’t find the bijection.

Useful for? – Multinomial sums

The reason why the previous argument for binomial coefficients worked nicely is because we were interested in the coefficients, but had a neat expression for the generating function as a polynomial. In particular, we had an expression

\sum_{k+l=n}a_k b_l.

A sum of this kind is sometimes called a convolution, and its appearance is always a clue that generating functions might be useful.

Exercise: prove that in general, if f(x) is the generating function of (a_k) and g(x) the generating function of (b_l), then f(x)g(x) is the generating function of \sum_{k+l=n}a_kb_l.
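In code, this exercise is literally the definition of polynomial multiplication, which makes it easy to experiment with. A sketch:

```python
def convolve(a, b):
    # coefficient list of f(x)g(x), given coefficient lists a of f and b of g
    c = [0] * (len(a) + len(b) - 1)
    for k, ak in enumerate(a):
        for l, bl in enumerate(b):
            c[k + l] += ak * bl
    return c

# e.g. (1+x)^2 (1+x)^3 = (1+x)^5, recovering the binomial identity from earlier
assert convolve([1, 2, 1], [1, 3, 3, 1]) == [1, 5, 10, 10, 5, 1]
```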

Even more usefully, this works in the multinomial case:

\sum_{k_1+\ldots+k_m=n}a^{(1)}_{k_1}\ldots a^{(m)}_{k_m}.

In many applications, these a^{(i)}s will all be the same. We don’t even have to specify how many k_i’s there are to be considered. After all, if we want the sum to be n, then only finitely many can be non-zero. So:

\sum_{m}\sum_{k_1+\ldots+k_m=n}a_{k_1}\ldots a_{k_m}=[x^n]\left(f(x)+f(x)^2+f(x)^3+\ldots\right)=[x^n]\frac{1}{1-f(x)},

provided f(0)=0, so that only finitely many values of m contribute to the coefficient of x^n.

Useful when? – You recognise the generating function!

In some cases, you can identify the generating function as a ‘standard’ function, eg the geometric series. In that case, manipulating the generating functions is likely to be promising. Here is a list of some useful power series you might spot.

1+x+x^2+\ldots=\frac{1}{1-x},\quad |x|<1

1+2x+3x^2+\ldots=\frac{1}{(1-x)^2},\quad |x|<1

e^x=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\ldots

\cos x=1-\frac{x^2}{2!}+\frac{x^4}{4!}\pm\ldots

Exercise: if you know what differentiation means, show that if f(x) is the gen fn of (a_k), then xf'(x) is the gen fn of ka_k.

Technicalities: some of these identities are valid only for certain values of x. This may be a problem if they are defined at, say, only a single point, but in general this shouldn’t be the case. In addition, you don’t need to worry about differentiability. You can define differentiation of power series by x^n\mapsto nx^{n-1}, and sort out convergence later if necessary.

Useful for? – Recurrent definitions

The Fibonacci numbers are defined by:

F_0=F_1=1,\quad F_{n+1}=F_n+F_{n-1},\quad n\geq 1.

Let F(x) be the generating function of the sequence F_n. So, for n\geq 1,

[x^n]F(x)=[x^{n-1}]F(x)+[x^{n-2}]F(x)=[x^n](xF(x)+x^2F(x)),

and F(0)=1, so we can conclude that:

F(x)=1+(x+x^2)F(x)\quad\Rightarrow\quad F(x)=\frac{1}{1-x-x^2}.
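A quick check that the closed form really does carry the Fibonacci numbers, comparing the series expansion with the recurrence (a sympy sketch):

```python
import sympy as sp

x = sp.symbols('x')
F = sp.series(1 / (1 - x - x ** 2), x, 0, 10).removeO()
coeffs = [F.coeff(x, n) for n in range(10)]

fib = [1, 1]
while len(fib) < 10:
    fib.append(fib[-1] + fib[-2])
assert coeffs == fib  # 1, 1, 2, 3, 5, 8, ...
```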

Exercise: Find a closed form for the generating function of the Catalan numbers, defined recursively (with C_0=1) by:

C_n=C_0C_{n-1}+C_1C_{n-2}+\ldots+C_{n-1}C_0.

Can you now find the coefficients explicitly for this generating function?
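If you want to check your eventual answer numerically, the recurrence is easy to iterate; be warned that the comment in this sketch contains a spoiler for the second part of the exercise.

```python
from math import comb

C = [1]  # convention: C_0 = 1
for n in range(1, 12):
    C.append(sum(C[k] * C[n - 1 - k] for k in range(n)))

# spoiler: the explicit coefficients are binom(2n, n)/(n+1)
assert C == [comb(2 * n, n) // (n + 1) for n in range(12)]
```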

Useful for? – Partitions

Partitions can be an absolute nightmare to work with because of the lack of explicit formulae. Often any attempt at a calculation turns into a massive inclusion-exclusion (IEP) bash. This prompts a search for bijective or bare-hands arguments, but generating functions can be useful too.

For now (*), let’s assume a partition of [n] means a sequence of positive integers a_1\geq a_2\geq\ldots\geq a_k such that a_1+\ldots+a_k=n. Let p(n) be the number of partitions of [n].

(* there are other definitions, in terms of a partition of the set [n] into k disjoint but unlabelled sets. Be careful about definitions, but the methods often extend to whatever framework is required. *)

Exercise: Show that the generating function of p(n) is:

\left(\frac{1}{1-x}\right)\left(\frac{1}{1-x^2}\right)\left(\frac{1}{1-x^3}\right)\ldots

Note that if we are interested only in partitions of [n], then we don’t need to consider any terms with exponent greater than n, so if we wanted we could take a finite product instead.

Example: the mint group will remember this problem from the first session in Cambridge:

Show that the number of partitions of [n] with distinct parts is equal to the number of partitions of [n] with odd parts.

Rather than the fiddly bijection argument found in the session, we can now treat this as a simple calculation. The generating function for distinct parts is given by:

(1+x)(1+x^2)(1+x^3)\ldots,

while the generating function for odd parts is given by:

\left(\frac{1}{1-x}\right)\left(\frac{1}{1-x^3}\right)\left(\frac{1}{1-x^5}\right)\ldots.

Writing the former as

\left(\frac{1-x^2}{1-x}\right)\left(\frac{1-x^4}{1-x^2}\right)\left(\frac{1-x^6}{1-x^3}\right)\ldots

shows that these are equal and the result follows.
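Multiplying out truncations of these products is exactly what the following dynamic-programming sketch does, so the identity can also be checked numerically for small n:

```python
def count_partitions(n, parts, distinct=False):
    # ways[m] holds the coefficient of x^m as we multiply in one factor per part p:
    # (1 + x^p + x^{2p} + ...) for unrestricted use, or (1 + x^p) for distinct parts
    ways = [1] + [0] * n
    for p in parts:
        rng = range(n, p - 1, -1) if distinct else range(p, n + 1)
        for m in rng:
            ways[m] += ways[m - p]
    return ways[n]

for n in range(1, 40):
    assert count_partitions(n, range(1, n + 1, 2)) == \
           count_partitions(n, range(1, n + 1), distinct=True)
```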

Other things – Multivariate Generating Functions

If you want to track a sequence in two variables, say a_{m,n}, then you can encode this with the bivariate generating function

f(x,y):=\sum_{m,n\geq 0}a_{m,n}x^my^n.

The coefficients are then extracted by [x^ay^b] and so on. There’s some interesting stuff on counting lattice paths with this method.

Sums over arithmetic progressions via roots of unity

Note that we can extract both \sum a_n and \sum (-1)^na_n by judicious choice of x in f(x). By taking half the sum or half the difference, we can obtain

a_0+a_2+a_4+\ldots=\frac12(f(1)+f(-1)),\quad a_1+a_3+a_5+\ldots=\frac12(f(1)-f(-1)).

Can we do this in general? Yes actually. If you want a_0+a_k+a_{2k}+\ldots, this is given by:

a_0+a_k+a_{2k}+\ldots=\frac{1}{k}\left(f(1)+f(w)+\ldots+f(w^{k-1})\right),

where w=e^{2\pi i/k} is a k-th root of unity. Exercise: Prove this.

For greater clarity, first try the case k=4, and consider the real and imaginary parts of the power series evaluated at +i and -i.
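Here’s a numerical sketch of the filter in action on f(x)=(1+x)^{10}, picking out every third binomial coefficient:

```python
import cmath
from math import comb

k, n = 3, 10
w = cmath.exp(2j * cmath.pi / k)   # primitive k-th root of unity

# averaging f over the k-th roots of unity kills every a_j with j not divisible by k
filtered = sum((1 + w ** j) ** n for j in range(k)) / k
assert round(filtered.real) == sum(comb(n, j) for j in range(0, n + 1, k))  # both 341
```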

Bijections, Prufer Codes and Cayley’s Formula

I’m currently at the training camp in Cambridge for this year’s UK IMO squad. This afternoon I gave a talk to some of the less experienced students about combinatorics. My aim was to cover as many useful tricks for calculating the sizes of combinatorial sets as I could in an hour and a half. We started by discussing binomial coefficients, which pleasingly turned out to be revision for the majority. But my next goal was to demonstrate that we are much more interested in the fact that we can calculate these if we want than in the actual expression for their values.

Put another way, my argument was that the interpretation of \binom{n}{m} as the number of ways to choose m objects from a collection of n, or as the number of up-and-right paths from (0,0) to (m,n-m), is more useful than the fact that \binom{n}{m}=\frac{n!}{m!(n-m)!}. The opening gambit was to prove the fundamental result underlying the famous construction of Pascal’s triangle, that

\binom{n+1}{m+1}=\binom{n}{m}+\binom{n}{m+1}.

This is not a hard result to prove by manipulating factorials, but it is a very easy result to prove in the path-counting setting, for example.

So it turned out that the goal of my session, as further supported by some unsubtly motivated problems from the collection, was to convince the students to use bijections as much as possible. That is, if you have to count something awkward, show that counting the awkward thing is equivalent to counting something more manageable, then count that instead. For many simpler questions, this equivalence is often drawn implicitly using words (“each of the n objects can be in any subset of the collection of bags so we multiply…” etc), but it is always worth having in mind the formal bijective approach. Apart from anything else, asking the question “is this bijection so obvious I don’t need to prove it” is often a good starting-point for assessing whether the argument is in fact correct!

Anyway, I really wanted to show my favourite bijection argument, but there wasn’t time, and I didn’t want to spoil other lecturers’ thunder by defining a graph and a tree and so forth. The exploration process encoding of trees is a strong contender, but today I want to define quickly the Prufer coding for trees, and use it to prove a famous result I’ve been using a lot recently, Cayley’s formula for the number of spanning trees on the complete graph with n vertices, n^{n-2}.

We are going to count rooted trees instead. Since we can choose any vertex to be the root, it is equivalent to show that there are n^{n-1} rooted trees on n vertices. The description of the Prufer code is relatively simple. Take a rooted tree with vertices labelled by [n]. A leaf is a vertex with degree 1, other than the root. Find the leaf with the largest label. Write down the label of the single vertex to which this leaf is connected, then delete the leaf. Now repeat the procedure, writing down the label of the vertex connected to the leaf which now has the largest label, until there are only two vertices remaining, when you delete the non-root vertex, and write down the label of the root. We get a string of (n-1) labels. We want to show that this mapping is a bijection from the set of rooted trees with vertices labelled by [n] to [n]^{n-1}.
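Since the claim is a bijection with [n]^{n-1}, it’s easy to test exhaustively for small n. Here’s a brute-force sketch of the encoding as described above (my own transcription of the deletion rule, so treat it accordingly): for n=4 it finds the 4^2=16 labelled trees directly, and checks that the 4\times 16=64 rooted trees map onto all 4^3=64 codes.

```python
from itertools import combinations, product

def is_tree(n, edges):
    # n-1 edges with no cycle (checked by union-find) means connected, i.e. a tree
    parent = list(range(n + 1))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

def prufer_encode(n, root, edges):
    adj = {v: set() for v in range(1, n + 1)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    code = []
    for _ in range(n - 1):
        # delete the non-root leaf with the largest label, recording its neighbour
        leaf = max(v for v in adj if v != root and len(adj[v]) == 1)
        nbr = next(iter(adj[leaf]))
        code.append(nbr)
        adj[nbr].discard(leaf)
        del adj[leaf]
    return tuple(code)

n = 4
all_edges = list(combinations(range(1, n + 1), 2))
trees = [e for e in combinations(all_edges, n - 1) if is_tree(n, e)]
codes = {prufer_encode(n, r, e) for e in trees for r in range(1, n + 1)}
assert len(trees) == n ** (n - 2)                            # Cayley: 16 trees
assert codes == set(product(range(1, n + 1), repeat=n - 1))  # all 64 codes, once each
```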

Let’s record informally how we would recover a tree from the Prufer code. First, observe that the label of any vertex which is not a leaf must appear in the code. Why? Well, every non-root vertex must be deleted, and the root label appears right at the end, if not earlier. But a vertex cannot be deleted until it has degree one, so its neighbours further from the root (that is, its children) must be removed first, and deleting a child writes down the label of the vertex itself. So we know what the root is, and what the leaves are, straight away.

In fact we can say slightly more than this. The number of times the root label appears is the degree of the root, while the number of times any other label appears is the degree of the corresponding vertex minus one. Call the degree sequence recovered in this way the Prufer degrees.

So we construct the tree backwards, from the leaves towards the root. We add edges one at a time, with the k-th edge joining the vertex with the k-th label to some other vertex. For k=1, this other vertex is the leaf with maximum label. In general, let G_k be the graph formed after the addition of k-1 edges, so G_1 is empty, and G_n is the full tree. Define T_k to be the set of vertices whose degree in G_k is exactly one less than their Prufer degree. Note that T_1 is therefore the set of leaves suggested by the Prufer code. So we form G_{k+1} by adding an edge between the vertex with the label appearing at position k in the Prufer sequence and the vertex of T_k with maximum label.

Proving that this is indeed the inverse is a bit fiddly, more because of notation than any actual mathematics. You probably want to show injectivity by an extremal argument, taking the closest vertex to the root that is different in two trees with the same Prufer code. I hope it isn’t a complete cop-out to swerve around presenting this in full technical detail, as I feel I’ve achieved my main goal of explaining why bijection arguments can reduce a counting problem that was genuinely challenging to an exercise in choosing sensible notation for proving a fairly natural bijection.