BMO2 2017

The second round of the British Mathematical Olympiad was taken yesterday by about 100 invited participants, and about the same number of open entries. To qualify at all for this stage is worth celebrating. For the majority of the contestants, this might be the hardest exam they have ever sat, and indeed, relative to current age and experience, it might well be the hardest exam they ever sit. And so I thought it was particularly worth writing about this year’s set of questions. Because, at least in my opinion, the gap between finding every question very intimidating and solving two or three of them is smaller, and more down to mindset, than one might suspect.

A key over-arching point at this kind of competition is the following: the questions have been carefully chosen, and carefully checked, to make sure they can be solved, checked and written up by school students in an hour. That’s not to say that many, or indeed any, will take that little time, but in principle it’s possible. That’s also not to say that there aren’t valid but more complicated routes to solutions, but in general people often spend a lot more time writing than they should, and a bit less time thinking. Small insights along the lines of “what’s really going on here?” often get you a lot further into the problem than complicated substitutions or lengthy calculations at this level.

So if some of the arguments below feel slick, then I guess that’s intentional. When I received the paper and had a glance in my office, I was only looking for slick observations, partly because I didn’t have time for detailed analysis, but also because I was confident that there were slick observations to be made, and I felt it was just my task to find them.

Anyway, these are the questions. (Note that the copyright to these is held by BMOS; they are reproduced here with permission.)

Question One

[Question statement: 2017-bmo2-q1]

I immediately tried the example where the perpendicular sides are parallel to the coordinate axes, and found that I could generate all multiples of 3 in this way. This seemed a plausible candidate for an answer, so I started trying to find a proof. I observed that if you have lots of integer points on one of the equal sides, you have lots of integer points on the corresponding side, and these exactly match up, and then you also have lots of integer points on the hypotenuse too. In my first example, these exactly matched up too, so I became confident I was right.

Then I tried another example ( (0,0), (1,1), (-1,1) ) which has four integer points, and could easily be generalised to give any multiple of four as the number of integer points. But I was convinced that this matching up approach had to be the right thing, and so I continued, trusting that I’d see where this alternate option came in during the proof.

Good setup makes life easy. The apex of the isosceles triangle might as well be at the origin, and then your other vertices can be (m,n), (n,-m) or similar. Since integral points are preserved under the rotation which takes one equal side to the other, the example I had does generalise, but we really need to start enumerating. The number of integer points on the side from (0,0) to (m,n) is G+1, where G is the greatest common divisor of m and n. But thinking about the hypotenuse as a vector (if you prefer, translate it so one vertex is at the origin), the number of integral points on this line segment must be \mathrm{gcd}(m+n,m-n) +1.

To me, this felt highly promising, because this is a classic trope in olympiad problem-setting. Even without this experience, one can check (writing m=Ga and n=Gb with a,b coprime) that this gcd is equal to G if a and b have different parities (i.e. one odd, one even), and equal to 2G if a and b are both odd.
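For completeness, a short derivation of this parity fact, in the notation just introduced:

\mathrm{gcd}(m+n,m-n)=G\cdot \mathrm{gcd}(a+b,a-b),\qquad \mathrm{gcd}(a+b,a-b)\,\big|\,\mathrm{gcd}(2a,2b)=2,

since \mathrm{gcd}(a+b,a-b) divides both (a+b)+(a-b)=2a and (a+b)-(a-b)=2b. And it equals 2 precisely when a+b is even, that is, when a and b are both odd.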

So we’re done. Being careful not to double-count the vertices, we have 3G integral points in the first case, and 4G integral points in the second, which exactly fits the pair of examples I had. But remember that we already had a pair of constructions, so (after adjusting the hypothesis to allow the second example!) all we had to prove was that the number of integral points is divisible by at least one of 3 and 4. And we’ve just done that. Counting how many integers less than 2017 have this property can be done easily, checking that we don’t double-count multiples of 12, and that we don’t accidentally include or fail to include zero as appropriate, which would be an annoying way to perhaps lose a mark after totally finishing the real content of the problem.
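If you would like a sanity check of this 3G / 4G dichotomy without redoing the algebra, here is a minimal brute-force sketch (the framing and names are mine): with the apex at the origin and the other vertices at (m,n) and (n,-m), the boundary carries 2G+\mathrm{gcd}(m+n,m-n) lattice points.

```python
from math import gcd

def boundary_points(m, n):
    # Each side of a lattice polygon carries gcd-of-its-vector lattice
    # steps: the two legs contribute G each, the hypotenuse gcd(m+n, m-n).
    G = gcd(m, n)
    return 2 * G + gcd(m + n, m - n)

# Not a proof, just a check over a box of examples.
for m in range(-30, 31):
    for n in range(1, 31):
        pts = boundary_points(m, n)
        assert pts % 3 == 0 or pts % 4 == 0
```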

Question Two

[Question statement: 2017-bmo2-q2]

(Keen observers will note that this problem first appeared on the shortlist for IMO 2006 in Slovenia.)

As n increases, obviously \frac{1}{n} decreases, but the bracketed expression increases. Which of these effects is more substantial? Well \lfloor \frac{n}{k}\rfloor is the number of multiples of k which are at most n, and so as a function of n, this increases precisely when n is a multiple of k. So, we expect the bracketed expression to increase substantially when n has lots of factors, and to increase less substantially when n has few factors. An extreme case of the former might be when n is a large factorial, and certainly the extreme case of the latter is n a prime.

It felt easier to test a calculation on the prime case first, even though this was more likely to lead to an answer for b). When n moves from p-1 to p, the bracketed expression goes up by exactly two, as the first floor increases, and there is a new final term. So, we start with a fraction, and then increase the numerator by two and the denominator by one. Provided the fraction was initially greater than two, it stays greater than two, but decreases. This is the case here (for reasons we’ll come back to shortly), and so we’ve done part b). The answer is yes.
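Concretely, writing S_n for the bracketed expression (so a_n=S_n/n and S_p=S_{p-1}+2, on the reading that the bracketed expression is \lfloor n/1\rfloor+\lfloor n/2\rfloor+\dots+\lfloor n/n\rfloor), cross-multiplying gives

a_p<a_{p-1}\iff \frac{S_{p-1}+2}{p}<\frac{S_{p-1}}{p-1}\iff S_{p-1}>2(p-1)\iff a_{p-1}>2,

and a_n>2 holds for all n\ge 6: we have S_6=6+3+2+1+1+1=14>12, while each increment S_n-S_{n-1} is the number of divisors of n, which is at least 2.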

Then I tried to do the calculation when n was a large factorial, and I found I really needed to know the approximate value of the bracketed expression, at least for this value of n. And I do know that when n is large, the bracketed expression should be approximately n\log n, with a further correction of size at most n to account for the floor functions, but I wasn’t sure whether I was allowed to know that.

But surely you don’t need to engage with exactly how large the correction due to the floors is in various cases? This seemed potentially interesting (we are after all just counting factors), but also way too complicated. An even softer version of what I’ve just said is that the harmonic sum (the sum of the first n reciprocals) diverges, so n times the harmonic sum grows faster than any constant multiple of n. So in fact we have all the ingredients we need. The bracketed expression grows super-linearly in n (you might want to formalise this by dividing by n before analysing the floors), and so the a_n get arbitrarily large. Therefore, there must certainly be an infinite number of points of increase.
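If you want to see this numerically, here is a minimal sketch (my code; it assumes the bracketed expression is \sum_{k=1}^n \lfloor n/k\rfloor, consistent with the discussion above):

```python
def a(n):
    # a_n = (1/n) * (floor(n/1) + floor(n/2) + ... + floor(n/n))
    return sum(n // k for k in range(1, n + 1)) / n

increases = sum(a(n + 1) > a(n) for n in range(1, 2000))
decreases = sum(a(n + 1) < a(n) for n in range(1, 2000))
print(increases, decreases)      # plenty of both
print(a(10), a(100), a(1000))    # slow, roughly logarithmic, growth
```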

Remark: a few people have commented to me that part a) can be done easily by treating the case n=2^k-1, possibly after some combinatorial rewriting of the bracketed expression. I agree that this works fine. Possibly this is one of the best examples of the difference between doing a problem leisurely as a postgraduate, and actually under exam pressure as a teenager. Thinking about the softest possible properties of a sequence (roughly how quickly does it grow, in this case) is a natural first thing to do in all circumstances, especially if you are both lazy and used to talking about asymptotics, and certainly if you don’t have paper.

Question Three

[Question statement: 2017-bmo2-q3]

I only drew a very rough diagram for this question, and it caused no problems whatsoever, because there aren’t really that many points, and it’s fairly easy to remember what their properties are. Even in the most crude diagram, we see R and S lie on AC and AD respectively, and so the conclusion about parallel lines is really about similarity of triangles ARS and ACD. This will follow either from some equal angles, or by comparing ratios of lengths.

[Diagram: bmo2-2017-q3]

Since angle bisectors by definition involve equal angles, the first attack point seems promising. But actually the ratios-of-lengths approach is better, provided we know the angle bisector theorem, which is literally about ratios of lengths in the angle bisector diagram. Indeed

\frac{AR}{RC}=\frac{AQ}{CQ},\quad \frac{AS}{SD}=\frac{AP}{PD},     (1)

and so it only remains to show that these quantities are in fact all equal. Note that there’s some asymmetry here – none of these expressions use B at all! We could for example note that AP/PD = BP/PC, from which

\left(\frac{AS}{SD}\right)^2 = \frac{AP.BP}{PC.PD},     (2)

and correspondingly for R and Q, and work with symmetric expressions. I was pretty sure that there was a fairly well-known result that in a cyclic quadrilateral, where P is the intersection of the diagonals

\frac{AP}{PC} = \frac{AD.AB}{DC.BC},     (3)

(I was initially wondering whether there was a square on the LHS, but an example diagram makes the given expression look correct.)

There will be a corresponding result for Q, and then we would be almost done by decomposing (2) slightly differently, and once we’d proved (3) of course. But doing this will turn out to be much longer than necessary. The important message from (3) is that in a very simple diagram (only five points), we have a result which is true, but which doesn’t leap out of a single pair of similar triangles. There are two pairs of similar triangles in the diagram, and chaining them together will in fact deliver (3), but this is easy to miss. What you certainly do have is some pairs of triangles with one pair of equal angles, and one pair of supplementary angles (that is, \theta in one, and 180-\theta in the other). This is a glaring invitation to use the sine rule, since the sines of supplementary angles are equal.

But, this is also the easiest way to prove the angle bisector theorem. So maybe we should just try this approach directly on the original ratio-of-lengths statement that we decided at (1) was enough, namely \frac{AQ}{CQ}=\frac{AP}{PD}. And actually it drops out rapidly. Using natural but informal language referencing my diagram

\frac{AP}{PD} = \frac{\sin(\mathrm{Green})}{\sin(\mathrm{Pink})},\quad\text{and}\quad \frac{AQ}{CQ}= \frac{\sin(\mathrm{Green})}{\sin(180-\mathrm{Pink})}

and we are done. But whatever your motivation for moving to the sine rule, this is crucial. Unless you construct quite a few extra cyclic quadrilaterals, doing this with similar triangles and circle theorems alone is going to be challenging.

Remark: If you haven’t seen the angle bisector theorem before, that’s fine. Both equalities in (1) are a direct statement of the theorem. It’s not an intimidating statement, and it would be a good exercise to prove either of these statements in (1). Some of the methods just described will be useful here too!
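For example, here is the sine rule sketch in a generic triangle (my labels, not the contest diagram): if the bisector from A in triangle ABC meets BC at X, then the sine rule in triangles ABX and ACX gives

\frac{BX}{\sin\frac{A}{2}}=\frac{AB}{\sin\angle AXB},\qquad \frac{CX}{\sin\frac{A}{2}}=\frac{AC}{\sin\angle AXC},

and since \angle AXB+\angle AXC=180^\circ their sines are equal, whence \frac{BX}{CX}=\frac{AB}{AC}. This is exactly the supplementary-angle trick from above in action.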

Question Four

[Question statement: 2017-bmo2-q4]

You might as well start by playing around with methodical strategies. My first try involved testing 000, 111, … , 999. After this, you know which integers appear as digits. Note that at this stage, it’s not the same as the original game with only three digits, because we can test using digits which we know are wrong, so that answers are less ambiguous. If the three digits are different, we can identify the first digit in two tests, and then the second in a further test, and so identify the third by elimination. If only two integers appear as digits, we identify each digit separately, again in three tests overall. If only one integer appears, then we are already done. So this is thirteen tests, and I was fairly convinced that this wasn’t optimal, partly because it felt like testing 999 was a waste. But even after lots of attempted tweaks I couldn’t do better. So I figured I’d try to prove some bound, and see where I got.
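If you want to see the thirteen-test strategy in action, here is a minimal Python simulation. The modelling is my paraphrase of the setup (a test answers ‘close’ if at least one position matches the code, ‘fail’ otherwise), and all the names are mine:

```python
import itertools

DIGITS = '0123456789'

def probe(guess, code):
    # 'close' (True) iff the guess matches the code in at least one position
    return any(g == c for g, c in zip(guess, code))

def tests_used(code):
    """Run the strategy described above; return how many tests it needed."""
    tests = 0
    present = []                      # phase 1: test 000, 111, ..., 999
    for a in DIGITS:
        tests += 1
        if probe(a * 3, code):
            present.append(a)
    pad = next(d for d in DIGITS if d not in present)  # a known-wrong digit

    if len(present) == 1:             # code is aaa: already determined
        determined = present[0] * 3
    elif len(present) == 2:           # each position is one of two digits
        a, b = present
        digits = []
        for pos in range(3):
            if pos == 2 and digits[0] == digits[1]:
                # both digits found so far agree, so the last one is forced
                digits.append(b if digits[0] == a else a)
                break
            guess = [pad] * 3
            guess[pos] = a
            tests += 1
            digits.append(a if probe(''.join(guess), code) else b)
        determined = ''.join(digits)
    else:                             # three distinct digits, one of each
        a, b, c = present
        tests += 1
        if probe(a + pad + pad, code):
            first = a
        else:
            tests += 1
            first = b if probe(b + pad + pad, code) else c
        rest = [d for d in present if d != first]
        tests += 1
        second = rest[0] if probe(pad + rest[0] + pad, code) else rest[1]
        third = next(d for d in rest if d != second)
        determined = first + second + third

    assert determined == code
    return tests

worst = max(tests_used(''.join(t))
            for t in itertools.product(DIGITS, repeat=3))
print(worst)   # 13
```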

A crucial observation is the following: when you run a test, the outcome eliminates some possibilities. One of the outcomes eliminates at least half the codes, and the other outcome eliminates at most half the codes. So, imagining you get unlucky every time, after k tests, you might have at least 1000\times 2^{-k} possible codes remaining. Since 1000\times 2^{-9}>1, after nine tests more than one code might remain consistent, so we need at least 10 tests.

For this bound to be tight, each test really does need to split the options roughly in two. But this certainly isn’t the case for the first test, which splits the options into 729 (no digit agreements) and 271 (at least one agreement). Suppose the first test reduces it to 729 options; then since 729\times 2^{-9}>1, by the same argument as above we still need 10 further tests. We now know we need at least 11 tests, and so the original guess of 13 is starting to come back into play.

We now have to make a meta-mathematical decision about what to do next. We could look at how many options might be left after the second test, which has quite a large number of cases (depending on how much overlap there is between the first test number and the second test number). It’s probably going to be at most 512 in at least one of the cases, so this won’t get us to a bound of 12 unless we then consider the third test too. This feels like a poor route to take for now, as the tree of options has branching at rate 3 (or 4 if you count obviously silly things) per turn, so gets unwieldy quickly. Another thought is that this power of two argument is strong when the set of remaining options is small, so it’s easier for a test to split the field roughly in two.

Now go back to our proposed original strategy. When does the strategy work faster than planned? It works faster than planned if we find all the digits early (eg if they are all 6 or less). So the worst case scenario is if we find the correct set of digits fairly late. But the fact that we were choosing numbers of the form aaa is irrelevant, as the digits are independent (consider adding 3 to the middle digit modulo 10 at all times in any strategy – it still works!).

This is key. For k\le 9, after k tests, it is possible that we fail every test, which means that at least (10-k) options remain for each digit, and so at least (10-k)^3 options in total. [(*) Note that it might actually be even worse if eg we get a ‘close’ on exactly one test, but we are aiming for a lower bound, so at this stage considering an outcome sequence which is tractable is more important than getting the absolute worst case outcome sequence if it’s more complicated.] Bearing in mind that I’d already tried finishing from the case of reduction to three possibilities, and I’d tried hard to sneak through in one fewer test, and failed, it seemed sensible to try k=7.

After 7 tests, we have at least 27 options remaining, which by the powers-of-two argument requires at least 5 further tests to separate. So 12 in total, which is annoying, because now I need to decide whether this is really the answer and come up with a better construction, or enhance the proof.

Clearly though, before aiming for either of these things, I should actually try some other values of k, since this takes basically no time at all. And k=6 leaves 64 options, from which the power of two argument is tight; and k=5 leaves 125, which is less tight. So attacking k=6 is clearly best. We just need to check that the 7th move can’t split the options exactly into 32 + 32. Note that in the example, where we try previously unseen digits in every position, we split into 27 + 37 [think about (*) again now!]. Obviously, if we have more than four options left for any digit, we are done as then we have strictly more than 4\times 4\times 4=64 options. So it remains to check the counts if we try previously unseen digits in zero, one or two positions. Zero is silly (gives no information), and one and two can be calculated, and don’t give 32 + 32.
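The enumeration is tiny, but here is a sketch if you’d rather let a machine check it (my modelling: after six failed tests, take {0,1,2,3} as the live digits in every position, and let 9 stand for an already-eliminated test digit):

```python
from itertools import product

codes = list(product(range(4), repeat=3))    # the 64 remaining options

def split(test):
    # sizes of the 'fail' and 'close' branches for a candidate 7th test
    fail = sum(all(t != c for t, c in zip(test, code)) for code in codes)
    return fail, len(codes) - fail

for live in range(4):    # positions of the test using a still-live digit
    print(live, split((0,) * live + (9,) * (3 - live)))
# prints (64, 0), (48, 16), (36, 28), (27, 37): never 32 + 32, so the
# worse branch after the seventh test always keeps at least 33 options.
```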

So this is a slightly fiddly end to the solution, and relies upon having good control over what you’re trying to do, and what tools you currently have. The trick to solving this is resisting calculations and case divisions that are very complicated. In the argument I’ve proposed, the only real case division is right at the end, by which point we are just doing an enumeration in a handful of cases, which is not really that bad.


Lagrange multipliers Part One: A much simpler setting

I am currently in northern Hungary for our annual winter school for some of the strongest young school-aged mathematicians in the UK and Hungary. We’ve had a mixture of lectures, problem-solving sessions and the chance to enjoy a more authentic version of winter than is currently on offer in balmy Oxford.

One of my favourite aspects of this event is the chance it affords for the students and the staff to see a slightly different mathematical culture. It goes without saying that Hungary has a deep tradition in mathematics, and the roots start at school. The British students observe fairly rapidly that their counterparts have a much richer diet of geometry, and methods in combinatorics at school, which is certainly an excellent grounding for use in maths competitions. By contrast, our familiarity with calculus is substantially more developed – by the time students who study further maths leave school, they can differentiate almost anything.

But the prevailing attitude in olympiad circles is that calculus is unrigorous, and hence an illegal method. The more developed summary is that calculus methods are hard, or at least technical. This is true, and no-one wants to spoil a measured development of analysis from first principles, but since some of the British students asked, it seemed worth giving a short exposition of why calculus can be made rigorous. They are mainly interested in the multivariate case, and the underlying problem is that the approach suggested by the curriculum doesn’t generalise well at all to the multivariate setting. Because it’s much easier to imagine functions of one variable, we’ll develop the machinery of the ideas in this setting in this post first.

Finding minima – the A-level approach

Whether in an applied or an abstract setting, the main use of calculus at school is to find where functions attain their maximum or minimum. The method can be summarised quickly: differentiate, find where the derivative is zero, and check the second-derivative at that value to determine that the stationary point has the form we want.
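As a concrete illustration of this recipe (a sketch only: the example function and the use of sympy are my choices):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = x**3 - 3*x                    # a toy function with one max and one min

stationary = sp.solve(sp.Eq(sp.diff(f, x), 0), x)    # [-1, 1]
for pt in stationary:
    second = sp.diff(f, x, 2).subs(x, pt)            # f''(pt)
    print(pt, 'local min' if second > 0 else 'local max', f.subs(x, pt))

# x = 1 passes the second-derivative test, but f is unbounded below, so
# it is certainly not a global minimum -- the pitfall discussed next.
```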

Finding maxima and finding minima are really the same problem (replace f by -f), so throughout, we talk about finding minima. It’s instructive to think of some functions where the approach outlined above fails.

[Sketches: 20160131_124059]

In the top left, there clearly is a minimum, but the function is not differentiable at the relevant point. We can probably assert this without defining differentiability formally: there isn’t a well-defined local tangent at the minimum, so we can’t specify the gradient of the tangent. In the top right, there’s a jump, so depending on the value the function takes at the jump point, maybe there is a minimum. But in either case, the derivative doesn’t exist at the jump point, so our calculus approach will fail.

In the middle left, calculus will tell us that the stationary point in the middle is a ‘minimum’, but it isn’t the minimal value taken by the function. Indeed the function doesn’t have a minimum, because it seems to go off to -\infty in both directions. In the middle right, the asymptote provides a lower bound on the values taken by the function, but this bound is never actually achieved. Indeed, we wouldn’t make any progress by calculus, since there are no stationary points.

At the bottom, the functions are only defined on some interval. In both cases, the minimal value is attained at one of the endpoints of the interval, even though the second function has a point which calculus would identify as a minimum.

The underlying problem in any calculus argument is that the derivative, if it exists, only tells us about the local behaviour of the function. At best, it tells us that a point is a local minimum. This is at least a necessary condition to be a global minimum, which is what we actually care about. But this is a change of emphasis from the A-level approach, for which having zero derivative and appropriately-signed second-derivative is treated as a sufficient condition to be a global minimum.

Fortunately, the A-level approach is actually valid. It can be shown that if a function is differentiable everywhere, and it only has one stationary point, where the second-derivative exists and is positive, then this is in fact the global minimum. The first problem is that this is really quite challenging to show, since in general the derivative might not be continuous (although it does retain many of the useful properties of a continuous function). Showing all of this really does require setting everything up carefully with proper definitions. The second problem is that this approach does not generalise well to multivariate settings.

Finding minima – an alternative recipe

What we do is narrow down the properties which the global minimum must satisfy. Here are some options:

0) There is no global minimum. For example, the functions pictured in the middle row satisfy this.

Otherwise, say the global minimum is attained at x. It doesn’t matter if it is attained at several points. At least one of the following options must apply to each such x.

1) f'(x)=0,

2) f'(x) is not defined,

3) x lies on the boundary of the domain where f is defined.

We’ll come back to why this is true. But with this decomposition, the key to identifying a global minimum via calculus is to eliminate options 0), 2) and 3). Hopefully we can eliminate 2) immediately. If we know we can differentiate our function everywhere, then 2) couldn’t possibly hold for any value of x. Sometimes we will be thinking about functions defined everywhere, in which case 3) won’t matter. Even if our function is defined on some interval, this only means we have to check two extra values, and this isn’t such hard work.

[Diagram: 20160131_124716]

It’s worth emphasising why if x is a local minimum not on the boundary and f'(x) exists, then f'(x)=0. We show that if f'(x)\ne 0, then x can’t be a local minimum. Suppose f'(x)>0. Then both the formal definition of derivative, and the geometric interpretation in terms of the gradient of a tangent which locally approximates the function, give that, when h is small,

f(x-h) = f(x)-h f'(x) +o(h),

where this ‘little o’ notation indicates that for small enough h, the final term is much smaller than the second term. So for small enough h, f(x-h)<f(x), and so we don’t have a local minimum.

The key is eliminating option 0). Once we know that there definitely is a global minimum, we are in a good position to identify it using calculus and a bit of quick checking. But how would we eliminate option 0)?

Existence of global minima

This is the point where I’m in greatest danger of spoiling first-year undergraduate course content, so I’ll be careful.

As we saw in the middle row, when functions are defined on the whole real line, there’s the danger that they can diverge to \pm \infty, or approach some bounding value while never actually attaining it. So life gets much easier if you work with functions defined on a closed interval. We also saw what can go wrong if there are jumps, so we will assume the function is continuous, meaning that it has no jumps, or more precisely that as y gets close to x, f(y) gets close to f(x). If a function can be differentiated everywhere, then it is continuous, because we’ve seen that once a function has a jump (see caveat 2) then it certainly isn’t possible to define the derivative at the jump point.

It’s a true result that a continuous function defined on a closed interval is bounded and attains its bounds. Suppose, for contradiction, that such a function took arbitrarily large values. The main idea is that because the interval is closed and bounded, the function must then take arbitrarily large values near some single point, which will make it hard to be continuous at that point. You can apply a similar argument to show that the function can’t approach a threshold without attaining it somewhere. So how do you prove that such a point exists? Well, you probably need to set up some formal definitions of all the properties under discussion, and manipulate them carefully. Which is fine. If you’re still at school, then you can either enjoy thinking about this yourself, or wait until analysis courses at university.

My personal opinion is that this is almost as intuitive as the assertion that if a continuous function takes both positive and negative values, then it has a zero somewhere in between. I feel if you’re happy citing the latter, then you can also cite the behaviour of continuous functions on closed intervals.

Caveat 2) It’s not true to say that if a function doesn’t have jumps then it is continuous. There are other kinds of discontinuity, but in most contexts these are worse than having a jump, so it’s not disastrous in most circumstances to have this as your prime model of non-continuity.

Worked example

Question 1 of this year’s BMO2 was a geometric inequality. I’ve chosen to look at this partly because it’s the first question I’ve set to make it onto BMO, but mainly because it’s quite hard to find olympiad problems which come down to inequalities in a single variable.

Anyway, there are many ways to parameterise and reparameterise the problem, but one method reduces, after some sensible application of Pythagoras, to showing

f(x)=x+ \frac{1}{4x} + \frac{1}{4x+\frac{1}{x}+4}\ge \frac{9}{8}, (*)

for all positive x.

There are simpler ways to address this than calculus, especially if you establish or guess that the equality case is x=1/2. Adding one to both sides is probably a useful start.
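Here is one way that might pan out (a sketch, and not necessarily the intended route): setting t=4x+\frac{1}{x}+4, we have

f(x)+1 = \frac{4x^2+4x+1}{4x}+\frac{1}{4x+\frac{1}{x}+4} = \frac{t}{4}+\frac{1}{t},

and by AM-GM, t=4x+\frac{1}{x}+4\ge 8, with equality at x=\frac12. Since t\mapsto \frac{t}{4}+\frac{1}{t} is increasing for t\ge 2, we get f(x)+1\ge \frac{8}{4}+\frac18=\frac{17}{8}, which is exactly (*).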

But if you did want to use calculus, you should argue as follows. (*) is certainly true when x\ge \frac{9}{8} (since f(x)>x) and also when x\le \frac{2}{9} (since then f(x)>\frac{1}{4x}\ge \frac98). The function f(x) is continuous, and so on the interval [\frac{2}{9},\frac{9}{8}] it has a minimum somewhere. We can differentiate, and fortunately the derivative factorises (this might be a clue that there’s probably a better method…) as

f'(x)=\left(1-\frac{1}{4x^2}\right) \left[ 1 - \frac{4}{(4x+\frac{1}{x}+4)^2} \right].

If x is positive, the second bracket can’t be zero, so the only stationary point is found at x=1/2. We can easily check that f(\frac12)=\frac98, and we have already seen that f(\frac29),f(\frac98)>\frac98. We know f attains its minimum on [\frac29,\frac98], and so this minimal value must be \frac98, as we want.
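And a quick numerical sanity check of this endgame (my sketch, using numpy for convenience):

```python
import numpy as np

def f(x):
    return x + 1/(4*x) + 1/(4*x + 1/x + 4)

xs = np.linspace(2/9, 9/8, 100001)
vals = f(xs)
print(vals.min(), xs[vals.argmin()])   # ~1.125, attained near x = 0.5
```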

Overall, the moral of this approach is that even if we know how to turn the handle both for the calculation, and for the justification, it probably would be easier to use a softer approach if possible.

Next stage

For the next stage, we assess how much of this carries across to the multivariate setting, including Lagrange multipliers to find minima of a function subject to a constraint.