Lecture 10 – the configuration model

I am aiming to write a short post about each lecture in my ongoing course on Random Graphs. Details and logistics for the course can be found here.

As we enter the final stages of the semester, I want to discuss some extensions to the standard Erdos-Renyi random graph which has been the focus of most of the course so far. Although we will not get far into the details during this course, the overall goal is to develop models which are close to Erdos-Renyi in terms of ease of analysis, while also allowing more of the features characteristic of networks observed in the real world.

One of the more obvious deficiencies of the sparse regime of Erdos-Renyi random graphs for modelling ‘real-world phenomena’ concerns the degree sequence. Indeed, the empirical degree distribution of G(n,c/n) converges to Poisson(c). By contrast, in real-world networks, a much wider range of degrees is typically observed, and in many cases it is felt that these should follow a power law, with a small number of very highly connected agents.

One way around this problem is to construct random graphs where we insist that the graph has a given sequence of degrees. The configuration model, which is the subject of this lecture and this post (and about which I’ve written before), offers one way to achieve this.

Definition and notes

Let n\ge 1 and let d=(d_1,d_2,\ldots,d_n) be a sequence of non-negative integers such that \sum_{i=1}^n d_i is even. Then the configuration model with degree sequence d is a random multigraph with vertex set [n], constructed as follows:

  • To each vertex i\in[n], assign d_i half-edges;
  • Then, take a uniform matching of these half-edges;
  • Finally, for each pair of half-edges in the matching, replace the two half-edges with a genuine edge, to obtain the multigraph CM_n(d), in which, by construction, vertex i has degree d_i.
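This construction translates directly into a few lines of code. Below is a minimal simulation sketch in Python; the function name and the edge representation are my own choices, not part of the lecture.

```python
import random
from collections import Counter

def configuration_model(degrees):
    """Sample CM_n(d) as a list of edges (i, j) on vertices 0, ..., n-1.
    Self-loops appear as pairs (i, i), and multi-edges as repeated pairs."""
    if sum(degrees) % 2:
        raise ValueError("the sum of the degrees must be even")
    # one entry per half-edge, labelled by the vertex it is attached to
    half_edges = [v for v, d in enumerate(degrees) for _ in range(d)]
    random.shuffle(half_edges)  # a uniform matching: shuffle, then...
    it = iter(half_edges)
    return [tuple(sorted(pair)) for pair in zip(it, it)]  # ...pair consecutive entries

print(Counter(configuration_model([3, 3])))
# e.g. Counter({(0, 1): 3}) is the 'theta' multigraph discussed below
```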

One should note immediately that although the matching is uniform, the multigraph is not uniform amongst multigraphs with that degree sequence. Note also that the condition on the sums of the degrees is necessary for any graph, and in this context means that the number of half-edges is even, without which it would not be possible to construct a matching.

This effect is manifest in the simplest possible example, when n=2 and d=(3,3). There are two possible graphs, up to isomorphism, which are shown below:

For obvious reasons, we might refer to these as the handcuffs and the theta, respectively. It’s helpful if we temporarily assume the half-edges are distinguishable at the moment we join them up in the configuration model construction, because then there are 3×3=9 ways to join them up to form the handcuffs (think of which half-edge ends up forming the edge between the two vertices), while there are 3!=6 ways to pair up the half-edges in the theta.

In general, for multigraphs H with the correct degree sequence, we have

\mathbb{P}( CM_n(d)\simeq H) \propto \left( 2^{\# \text{loops}(H)} \prod_{e\in E(H)} \text{mult}(e)! \right)^{-1},

where \text{mult}(e) is the multiplicity with which a given edge e appears in H.

Note: it might seem counterintuitive that this procedure is biased against multiple edges and self-loops, but it is really just saying that there are more ways to form two distinct edges than to form two equal edges (ie a multiedge pair) when we view the half-edges as distinguishable. (See this post for further discussion of this aspect in the 3-regular setting.)
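To make this concrete, the 9 : 6 count for the handcuffs and the theta is easily confirmed by exhaustive enumeration. The sketch below (my own check, not from the lecture) lists all fifteen perfect matchings of the six labelled half-edges, and classifies each by the number of cross edges between the two vertices.

```python
def matchings(items):
    """Yield every perfect matching of a list with an even number of items."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i, partner in enumerate(rest):
        for m in matchings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + m

# half-edges 0, 1, 2 belong to one vertex; 3, 4, 5 to the other
counts = {"handcuffs": 0, "theta": 0}
for m in matchings([0, 1, 2, 3, 4, 5]):
    cross = sum(1 for a, b in m if (a < 3) != (b < 3))  # edges between the two vertices
    counts["theta" if cross == 3 else "handcuffs"] += 1

print(counts)  # {'handcuffs': 9, 'theta': 6}, out of 15 matchings in total
```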

However, a consequence of this result is that if we condition on the event that CM_n(d) is simple, then the resulting random graph is uniform on the set of simple graphs with degree sequence d. Note that the same example as above shows that there’s no guarantee that there exists a simple graph whose degrees are some given sequence.

d-regular configuration model

In general, from a modelling point of view, we are particularly interested in simple, connected graphs, and so it is valuable to study whether large examples of the configuration model are likely to have these properties. In this lecture, I will mainly focus on the case where the multigraphs are d-regular, meaning that all the vertices have degree equal to d. For the purposes of this lecture, we denote by G^d(n) the d-regular configuration model CM_n(d,\ldots,d).

  • d=1: to satisfy the parity condition on the sums of degrees, we must have n even. But then G^1(n) will consist of n/2 disjoint edges.
  • d=2: G^2(n) will consist of some number of disjoint cycles, and it is a straightforward calculation to check that when n is large, with high probability the graph will be disconnected.

In particular, I will focus on the case when d=3, which is the first interesting case. Most of the results we prove here can be generalised (under various conditions) to more general examples of the configuration model. The main goal of the lecture is revision of some techniques of the course, plus one new one, in a fresh setting, and the strongest possible versions of many of these results can be found amongst the references listed at the end.

Connectedness

In the lecture, we showed that G^3(2n) is connected with high probability. This is in fact a very weak result, since G^d(n) is d-connected with high probability for d\ge 3 [Bol81, Wor81]. Here, d-connected means that one must remove at least d vertices in order to disconnect the graph, or, equivalently, that there are d disjoint paths between any pair of vertices. Furthermore, Bollobas shows that for d\ge 3, G^d(n) is a (random) expander family [Bol88].

Anyway, for the purposes of this course, the main tool is direct enumeration. The number M_{2k} of matchings of 2k labelled objects satisfies

M_{2k}=(2k-1)\times (2k-3)\times\ldots\times 3\times 1 = \frac{(2k)!}{2^k \cdot k!},

and so Stirling’s approximation gives the asymptotics

M_{2k} = (\sqrt{2}+o(1)) \left(\frac{2}{e}\right)^k k^k,

although it will be useful to use the true bounds

c \left(\frac{2}{e}\right)^k k^k \le M_{2k}\le C\left(\frac{2}{e}\right)^k k^k,\quad \forall k,

instead in some places. Anyway, in G^3(2n), there are 6n half-edges in total, and so the probability that the graph may be split into two parts consisting of 2\ell,2m vertices, with 2\ell+2m=2n, and with no edges between the classes is \frac{\binom{2n}{2\ell} M_{6\ell}M_{6m}}{M_{6n}}.
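Since everything here is exact, this is easy to evaluate numerically for moderate n. The following sketch (my own verification, using exact integer arithmetic) sums the expression above over the non-trivial splits; the resulting union bound on the probability that G^3(2n) is disconnected by such a split visibly decays as n grows.

```python
from math import comb, prod

def M(two_k):
    """Number of matchings of two_k labelled objects: (two_k - 1)!!"""
    return prod(range(1, two_k, 2))  # 1 x 3 x ... x (two_k - 1)

def disconnection_bound(n):
    """Union bound on P(G^3(2n) admits a split with no cross edges)."""
    total = M(6 * n)
    bound = sum(comb(2 * n, 2 * l) * M(6 * l) * M(6 * (n - l)) / total
                for l in range(1, n))
    return bound / 2  # each unordered split was counted twice

for n in [5, 10, 20, 40]:
    print(n, disconnection_bound(n))  # decays roughly like a constant over n
```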

Lecture 4 – Hitting time theorem

I am aiming to write a short post about each lecture in my ongoing course on Random Graphs. Details and logistics for the course can be found here.

This lecture consisted of revision of the most relevant theory of Galton-Watson trees, with a focus on the case where the offspring distribution is Poisson, since, as we have seen in previous lectures, this is a strong candidate to approximate the structure of G(n,\lambda/n). It makes sense to cover the theory of the trees before attempting to make rigorous the sense of approximation.

Given a Galton-Watson tree T, it is natural to label the vertices in a breadth-first order as \varnothing=v_1,v_2,\ldots,v_{|T|}. This is easiest if we have constructed the Galton-Watson tree as a subset of the infinite Ulam-Harris tree, where vertices have labels like (3,5,17,4), whose parent is (3,5,17). If this child vertex is part of the tree, then so are (3,5,17,1), (3,5,17,2), and (3,5,17,3). This means our breadth-first order is canonically well-defined, as we have a natural ordering of the children of each parent vertex.

Note: one advantage of using breadth-first order rather than depth-first order (which corresponds to the usual dictionary, or lexicographic ordering of the labels) is that if the tree is infinite, we don’t explore all of it during a depth-first search. (In the sense that there exist vertices which are never given a finite label.) For breadth-first search, a similar problem arises precisely when some vertex has infinitely many children. For a conventional Galton-Watson tree, the latter situation is much less of a problem than the infinite total population problem, which happens with positive probability whenever \mu=\mathbb{E}[X]>1.

Anyway, given the breadth-first order, one can consider an exploration process S_0,S_1,S_2,\ldots,S_{|T|} given by

S_0=1,\quad S_i=S_{i-1}+(X_i-1),

where X_i is the number of children of v_i. In this way, we see that

S_i=\big| \left(\Gamma(v_1)\cup\ldots\cup\Gamma(v_i)\right)\backslash \{v_1,\ldots,v_i\}\big|,\quad i\ge 1,

records the number of vertices in some stack containing those which we have ‘seen but not explored’. Some authors prefer to start from 0, in which case one ends up with a similar but slightly different interpretation of the ‘stack’, but that’s fine since we aren’t going to define formally what ‘seen’ and ‘explored’ means in this post.

Essentially, we exhaust the vertices of the tree the first time S_t=0, and so the condition that |T|=n requires

S_n=0,\quad S_m\ge 1,\; m=0,1,\ldots,n-1.

Conveniently, so long as we have avoided ordering ambiguity, for example by insisting that trees live within the Ulam-Harris tree, we can reconstruct T uniquely from (S_0,S_1,\ldots,S_{|T|}).

Furthermore, if T is a Galton-Watson tree, then the numbers of children X_i are IID, and so in fact this exploration process is a random walk, and the size of the tree can be recovered as the hitting time of zero.

Note: making fully rigorous the argument that the children counts in the GW tree, read in breadth-first order, are IID (so that the exploration process really is a random walk) is somewhat technical, and not to be dismissed lightly, though not of principal interest at the level of this topics course. See Proposition 1.5 in Section 1.2 of Le Gall’s notes or Section 1.2.2 of my doctoral thesis for further discussion and argument.

The hitting time theorem allows us to study the distribution of the hitting time of a random walk whose increments are bounded below by -1, in terms of the distribution of the value of the random walk.

Theorem: Let (S_n,\, n\ge 0) be a random walk with S_0=0 and IID increments (X_n,n\ge 1) satisfying \mathbb{P}(X_n\ge -1)=1. Let H_{-k}=\inf \left\{n\,:\, S_n=-k\right\} be the hitting time of -k.

Then \mathbb{P}\big( H_{-k}=n\big) = \frac{k}{n}\mathbb{P}\big(S_n=-k\big).

Commentary: there are local central limit theorem estimates and large deviation estimates that allow good control of the probability on the RHS in a rich class of contexts. So at a meta-level, the hitting time theorem allows us to reduce a complicated (though still classical) problem to a genuinely classical one, which is particularly helpful when the LHS is a device for capturing relevant information about our random tree model.
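As an illustration of both the exploration process and the theorem, consider the Poisson(c) Galton-Watson tree with c<1. The walk has S_0=1 and IID increments X_i-1, so |T| is the hitting time of 0, and the theorem (applied to the centred walk, with k=1) gives \mathbb{P}(|T|=n) = \frac{1}{n}\mathbb{P}(\mathrm{Poisson}(nc)=n-1), the Borel distribution. Here is a quick Monte Carlo sanity check in Python; the code and parameter values are mine, purely for illustration.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
c, trials = 0.8, 200_000  # subcritical, so the trees are finite a.s.

def tree_size(cap=10_000):
    """|T| for a Poisson(c) Galton-Watson tree, via the exploration walk."""
    stack, size = 1, 0  # S_0 = 1: the root has been seen but not explored
    while stack > 0 and size < cap:
        stack += rng.poisson(c) - 1  # S_i = S_{i-1} + (X_i - 1)
        size += 1
    return size

sizes = [tree_size() for _ in range(trials)]

def borel(n):
    """P(|T| = n) = (1/n) P(Poisson(nc) = n - 1), via the hitting time theorem."""
    return math.exp(-n * c) * (n * c) ** (n - 1) / (math.factorial(n - 1) * n)

for n in range(1, 6):
    print(n, round(sizes.count(n) / trials, 4), round(borel(n), 4))
```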

BMO1 2017 – Questions 5 and 6

The first round of the British Mathematical Olympiad was sat yesterday. The questions can be found here and video solutions here. My comments on the first four questions are in the previous post.

Overall, I didn’t think any of the questions on this paper were unusually difficult by the standard of BMO1, but I found everything slightly more time-consuming than typical. I thought Question 5 was a great problem, and I tried lots of things unsuccessfully first, and so wanted to discuss it in slightly more technical language. For Question 6 I made a decisive mistake, which I’ll explain, and which cost a lot of time. But in general, my point is that the back end of the paper was a little fiddlier than normal, and required longer written solutions, and perhaps many students might have had less time than expected to attack them anyway, after dealing with the details earlier in the paper.

Question Five

As I said before, I thought this question was quite challenging. Not because the solution is particularly exotic or complicated, but because there were so many possible things that might have worked. In my opinion it would not have been out of place at the start of an IMO paper, because it’s perfectly possible to have enough good ideas that eliminating the ones that don’t work takes an hour, or hours. Even though it slightly spoils the flow of the solution, I’m particularly trying to emphasise the tangents that didn’t work, mostly for reassurance to anyone who spent a long time struggling.

I was thinking about this question in terms of a 2Nx2N board, where N is even, and for the given question equal to 100. I spent a while thinking that the bound was 8N-4, corresponding to taking the middle two rows and the middle two columns, but not the 2×2 square which is their intersection. If you think of a comb as a ‘handle’ of 1xN cells, with an extra N/2 alternating cells (say, ‘teeth’) bolted on, then it’s clear this construction works because there’s never space to fit in a handle, let alone the teeth.

I couldn’t prove that this was optimal though. A standard way to prove a given bound K was optimal would be to produce a tiling on the board with K combs, where every cell is included in exactly one comb. But this is clearly not possible in this situation, since the number of cells in a comb (which is 150) does not divide the total number of cells on the board.

Indeed, there is the general observation that if you take a comb and a copy of the comb rotated by 180 degrees, the teeth of the second comb can mesh perfectly with the teeth of the first comb to generate a 3xN unit. I wasted a moderate amount of time pursuing this route.

[Note, it will be obvious in a minute why I’m writing ‘shaded’ instead of ‘coloured’.]

But in motivating the construction, I was merely trying to shade cells so that they intersected every possible 1xN handle, and maybe I could prove that it was optimal for this. In fact, I can’t prove it’s optimal because it isn’t optimal – indeed it’s clear that a handle through one of the middle rows intersects plenty of shaded cells, not just one. However, with this smaller problem in mind, it didn’t take long to come up with an alternative proposal, namely splitting the board into equal quarters, and shading the diagonals of each quarter, as shown.

It seems clear that you can’t fit in a 1xN handle, and any sensible tiling with 1xN handles contains exactly one shaded cell, so this shading (with 4N shaded cells) is optimal. But is it optimal for a comb itself?

Consider a shading which works, so that all combs include a shaded cell. It’s clear that a comb is contained within a 2xN block, and in such a 2xN block, there are four possible combs, as shown.

You need to cover all these combs with some shading somewhere. But if you put the shaded cell on a tooth of comb A, then you haven’t covered comb B. And if you put the shaded cell on the handle of comb A, then you haven’t covered one of comb C and comb D. You can phrase this via a colouring argument too. If you use four colours with period 2×2, as shown

then any comb involves exactly three colours, and so one of them misses out the colour of the shaded cell. (I hope it’s clear what I mean, even with the confusing distinction between ‘shaded’ and ‘coloured’ cells.)

Certainly we have shown that any 2xN block must include at least two shaded cells. And that’s pretty much it. We have a tiling with 2N copies of a 2xN block, and with at least two shaded cells in each, that adds to at least 4N shaded cells overall.

Looking back on the method, we can identify another way to waste time. Tiling a board, eg a chessboard with dominos is a classic motif, which often relies on clever colouring. So it’s perhaps lucky that I didn’t spot this colouring observation earlier. Because the argument described really does use the local properties of how the combs denoted A-D overlap. An attempt at a global argument might start as follows: we can identify 2N combs which don’t use colour 1, and tile this subset of the grid with them, so we need to shade at least 2N cells from colours {2,3,4}. Similarly for sets of colours {1,3,4}, {1,2,4}, and {1,2,3}. But if we reduce the problem to this, then using roughly 2N/3 of each colour fits this global requirement, leading to a bound of 8N/3, which isn’t strong enough. [1]

Question Six

A word of warning. Sometimes it’s useful to generalise in problems. In Q5, I was thinking in terms of N, and the only property of N I used was that it’s even. In Q4, we ignored 2017 and came back to it at the end, using only the fact that it’s odd. By contrast, in Q2, the values did turn out to be important for matching the proof bounds with a construction.

You have to guess whether 300 is important or not here. Let’s see.

I have a natural first question to ask myself about the setup, but some notation is useful. Let a_1,a_2,\ldots,a_{300} be the ordering of the cards. We require that \frac{a_1+\ldots+a_n}{n} is an integer for every 1\le n\le 300. Maybe the values of these integers will be important, so hold that thought, but for now, replace with the divisibility statement that n | a_1+\ldots+a_n.

I don’t think it’s worth playing with small examples until I have a better idea whether the answer is 5 or 295. So the natural first question is: “what does it mean to have (a_1,\ldots,a_{n-1}) such that you can’t pick a suitable a_n?”

It means that there is no integer k in \{1,\ldots,300\}\backslash\{a_1,\ldots,a_{n-1}\} such that n\,\big|\,(a_1+\ldots+a_{n-1})+k, which for now we write as

k\equiv -(a_1+\ldots+a_{n-1})\,\mod n.

Consider the congruence class of -(a_1+\ldots+a_{n-1}) modulo n. There are either \lfloor \frac{300}{n}\rfloor or \lceil \frac{300}{n}\rceil integers under consideration in this congruence class. If no such k exists, then all of the relevant integers in this congruence class must appear amongst \{a_1,\ldots,a_{n-1}\}. At this stage, we’re trying to get a feel for when this could happen, so lower bounds on n are most relevant. Therefore, if we get stuck when trying to find a_n, we have

\lfloor \frac{300}{n} \rfloor\text{ or }\lceil \frac{300}{n}\rceil \le n-1, (*)

which is summarised more succinctly as

\lfloor \frac{300}{n} \rfloor \le n-1. (**)

[Note, with this sort of bounding argument, I find it helpful to add intermediate steps like (*) in rough. The chance of getting the wrong direction, or the wrong choice of \pm 1 is quite high here. Of course, you don’t need to include the middle step in a final write-up.]

We can check that (**) is false when n\le 17 and true when n\ge 18. Indeed, both versions of (*) are true when n\ge 18.
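For the paranoid, this threshold is a one-line check (my own, obviously not needed in a write-up):

```python
# smallest n for which floor(300/n) <= n - 1, i.e. for which (**) holds
print(min(n for n in range(1, 301) if 300 // n <= n - 1))  # prints 18
```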

So we know the minimum failure length is at least 17. But is there a failing sequence of length 17? At a meta-level, it feels like there should be. That was a very natural bounding argument for 17 (which recall corresponds to n=18), and it’s easy to believe that might be part of an official solution. If we achieve equality throughout the argument, that’s most of the way to a construction as well. It won’t be so easy to turn this argument into a construction for n\ge 19 because there won’t be equality anywhere.

We have to hope there is a construction for n=18. What follows is a description of a process to derive (or fail to derive) such a construction. In a solution, one would not need to give this backstory.

Anyway, in such a construction, let \alpha\in\{1,2,\ldots,18\} describe the congruence class modulo 18 which is exhausted by \{a_1,\ldots,a_{17}\}. I’m going to hope that \alpha=18 because then the calculations will be easier since everything’s a multiple of 18. We haven’t yet used the fact that for a failure we need \alpha\equiv-(a_1+\ldots+a_{17}) modulo 18. We definitely have to use that. There are 16 multiples of 18 (ie relevant integers in the congruence class), so exactly one of the terms so far, say a_j, is not a multiple of 18. But then

0 \equiv 0+\ldots+0+a_j+0+\ldots+0,

which can’t happen. With a bit of experimentation, we find a similar problem making a construction using the other congruence classes with 16 elements, namely \alpha\in \{13,14,\ldots,18\}.

So we have to tackle a different class. If \alpha\le 12 then our sequence must be

\alpha,18+\alpha,2\times 18 +\alpha, \ldots, 16\times 18 + \alpha,

in some order. In fact, let’s add extra notation, so our sequence is

(a_1,\ldots,a_{17}) = (18\lambda_1+ \alpha,\ldots,18\lambda_{17}+\alpha),

where (\lambda_1,\ldots,\lambda_{17}) is a permutation of \{0,1,\ldots,16\}. And so we require

k \,\big|\, 18(\lambda_1+\ldots+\lambda_k) + k\alpha, (%)

for 1\le k\le 17. But clearly we can lop off that k\alpha, and could ignore the 18. Can we find a permutation \lambda such that

k \,\big|\, \lambda_1+\ldots+\lambda_k?

This was where I wasted a long time. I played around with lots of examples and kept getting stuck. Building it up one term at a time, I would typically get stuck around k=9,10. And I had some observations that in all the attempted constructions, the values of \frac{\lambda_1+\ldots+\lambda_k}{k} were around 8 and 9 too when I got stuck.

I became convinced this subproblem wasn’t possible, and decided that would be enough to show that n=18 wasn’t a possible failure length. I was trying to disprove the subproblem via a parity argument (how must the \lambda_is alternate odd/even to ensure all the even-indexed partial sums are even?) but parity turned out not to be the obstruction. Then I came up with a valid argument. We must have

\lambda_1+\ldots+\lambda_{17}=136= 16\times 8 + 8\quad\text{and}\quad 16\,\big|\,\lambda_1+\ldots+\lambda_{16},

which means \lambda_1+\ldots+\lambda_{16} must be 128 = 15×8 + 8, ie \lambda_{17}=8. But then we also have 15\,\big|\, \lambda_1+\ldots+\lambda_{15}, which forces \lambda_{16}=8 also. Which isn’t possible.

As if this hadn’t wasted enough time, I then tried to come up with a construction for n=19, which has lots more variables, and took a lot more time, and seemed to be suffering from similar problems, just in a more complicated way. So I became convinced I must have made a mistake, because I was forced down routes that were way too complicated for a 3.5 hour exam. Then I found it…

What did I do wrong? I’ll just say directly. I threw away the 18 after (%). This made the statement stronger. (And in fact false.) Suppose instead I’d thrown away a factor of 9 (or no factors at all, but it’s the residual 2 that’s important). Then I would be trying to solve

k\,\big|\,2(\lambda_1+\ldots+\lambda_k).

And now if you experiment, you will notice that taking \lambda_1=0,\lambda_2=1,\lambda_3=2,\ldots seems to work fine. And of course, we can confirm this, using the triangle number formula for the second time in the paper!

This had wasted a lot of time, but once that thought is present, we’re done, because we can go straight back and exhibit the sequence

(a_1,\ldots,a_{17}) = (1, 18+1,2\times 18 +1,\ldots, 16\times 18 +1).

Then the sum so far is congruent to -1 modulo 18, but we have exhausted all the available integers which would allow the sum of the first 18 terms to be a multiple of 18. This confirms that the answer to the question as stated is 17.
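Since the whole answer rests on this construction, it is also reassuring to verify it by brute force. The following sketch (my own check, not part of a model solution) confirms both that every partial average of (1, 19, 37, …, 289) is an integer, and that no unused card can be placed eighteenth.

```python
cards = [18 * i + 1 for i in range(17)]  # 1, 19, 37, ..., 289

# every partial average is an integer...
assert all(sum(cards[:n]) % n == 0 for n in range(1, 18))

# ...but no unused card from 1..300 can serve as a_18
remaining = set(range(1, 301)) - set(cards)
print([k for k in remaining if (sum(cards) + k) % 18 == 0])  # prints []
```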

At the start, I said that we should be cautious about generalising. In the end, this was wise advice. We definitely used the fact that 18 was even in the stage I over-reduced the first time. We also used the fact that there was at least one value of \alpha with an ‘extra’ member of the congruence class. So I’m pretty sure this proof wouldn’t have worked with 288 = 16×18 cards.

Footnotes

[1] – If shading were a weighted (or continuous or whatever you prefer) property, ie that each cell has a quantity of shading given by a non-negative real number, and we merely demand that the total shading per comb is at least one, then the bound 8N/3 is in fact correct for the total shading. We could look at a 2xN block, and give 1/3 shading to one cell of each colour in the block. Alternatively, we could be very straightforward and apply 2/(3N) shading to every cell in the grid. The fact that shading has to be (in this language) zero or one imposes meaningful extra constraints which involve the shape of the comb.

Symmedians and Balkan MO 2017 Q2

While I was away, I wrote about my latest approach to teaching geometry at olympiad camps. This post will end up being about Q2 from the Balkan MO which took place yesterday in Macedonia, but first there is quite a long prelude. My solution, and probably many solutions, to this problem made use of a standard configuration in triangle geometry, namely the symmedian. I want to introduce the configuration, give some simpler examples in practice, and along the way talk about my slightly patched-together philosophy about the merits of practising Euclidean geometry.

The symmedian

Draw a triangle ABC, with A at the top of the page, and extend the rays AB and AC. The median is the line from A through M, the midpoint of BC. Now take points D and E on AB and AC respectively. The following properties are equivalent:

  • DE is parallel to BC;
  • triangle ADE is similar to triangle ABC;
  • the median of ABC passes through the midpoint of DE, and thus is also the median of ADE.

I think it’s a little awkward to prove either of the first two from the third – ratios of areas works – but the rest of the equivalences are straightforward. Later I’m going to talk about the difference between an exercise and a problem. These are all, at best, exercises.

Now take B’ on the ray AC, and C’ on the ray AB such that triangle AB’C’ is similar to triangle ABC. One way to achieve this is to take B’ and C’ to be the reflections in the angle bisector of A of B and C respectively (so then AB’=AB and AC’=AC). We say the line B’C’ is antiparallel to BC, as is any other line DE parallel to B’C’. (Probably this should say ‘with respect to triangle ABC’ or similar, but the context here is very clear, and I want this to seem natural rather than opaque.) Note that DE is an antiparallel line iff BCED is a cyclic quadrilateral. We should remember that, as cyclic quadrilaterals are the signposts for progress in both exercises and problems.

The median of triangle AB’C’ obeys the same equivalences as described above, and so bisects any antiparallel segment. We call the median of triangle AB’C’ the symmedian of triangle ABC. Using the first set of equivalences, the symmedian of triangle ABC bisects any line antiparallel to BC. Furthermore, by construction, the symmedian is the image of the median of ABC under reflection in the bisector of the angle at A. We sometimes say that the symmedian is the isogonal conjugate of the median.

That’s my definition. Note that there was essentially one definition then a couple of easy equivalent definitions. At no point again will I discuss the equivalence of these definitions – we have to take that for granted if we want to get on to more interesting things.

Intersection of tangents + concurrency

Now, in triangle ABC, draw the tangents to the circumcircle at B and C. These meet at P. It turns out that AP is the symmedian. This could have been our definition of a symmedian, but it wasn’t, so let’s quickly prove this.

Trigonometric arguments are very accessible, but I’ll give a Euclidean argument. Draw the antiparallel DE through P, as shown. Our task is to show that EP=PD. At this point, I would again say that this is an exercise.

We colour the angle ABC in green. Two angles around point C share this measure by the alternate segment theorem. The angle at E shares this measure because DE is antiparallel. Therefore CPE is isosceles, and so EP=CP. But CP=BP, so by applying the same argument for the orange angles, we get EP=CP=BP=DP as required.

Pause to regroup. Proving this wasn’t hard, but it was perhaps surprising. If this was all new to you, and I told you to consider the reflection of the median in the angle bisector, you probably wouldn’t instantly exclaim “it goes through the tangent intersection!” So this is a useful piece of knowledge to have gained, in case we ever have to work with the intersection of two tangents like this. Maybe it won’t be useful, but maybe it will. Maybe the statement itself won’t but some extra insights from the proof will be useful, like the fact that we actually showed P is the centre of the circle BCED, and thus that the angles ECD=EBD=90°.
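Incidentally, this kind of configuration fact is easy to test numerically before committing to a proof. The sketch below (my own, using the further standard property that the symmedian from A cuts BC in the ratio AB^2 : AC^2) builds a random triangle, intersects the tangents at B and C, and checks that the line AP divides BC in exactly that ratio.

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = rng.normal(size=(3, 2))  # a random (a.s. non-degenerate) triangle

def rot90(v):
    return np.array([-v[1], v[0]])

def meet(p, d, q, e):
    """Intersection of the lines p + s*d and q + t*e."""
    s = np.linalg.solve(np.column_stack([d, -e]), q - p)[0]
    return p + s * d

# circumcentre O, from the two perpendicular-bisector equations
O = np.linalg.solve(2 * np.array([B - A, C - A]),
                    np.array([B @ B - A @ A, C @ C - A @ A]))

# tangents at B and C are perpendicular to OB and OC; they meet at P
P = meet(B, rot90(B - O), C, rot90(C - O))

# X = (line AP) meet (line BC); if AP is the symmedian, BX/XC = (AB/AC)^2
X = meet(A, P - A, B, C - B)
print(np.linalg.norm(X - B) / np.linalg.norm(X - C),
      (np.linalg.norm(B - A) / np.linalg.norm(C - A)) ** 2)  # agree to float error
```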

A second property is that in a triangle ABC, the symmedian from A, the symmedian from B and the symmedian from C intersect at, naturally, the symmedian point, which is usually denoted K. This comes from the fact that each symmedian is the isogonal conjugate of the respective median, and the medians are known to concur at the centroid. I’m not going to get into this now.

Configurations – an example

Here’s a problem. Take an isosceles trapezium ABCD as shown (ie throughout I don’t want to worry about alternative diagrams).

Let M be the midpoint of AD, and let E be the point on CM such that angle DBM = EBA. Prove that ABCDE is cyclic.

Well, certainly ABCD is cyclic. So we just need to show E also lies on that circle. And we have two equal angles, but they aren’t in the right place to conclude this immediately. However, we have angle MCA = DBM = EBA, so ABCE is cyclic, and the result follows.

Why is angle MCA = DBM? Well, the isosceles trapezium has an axis of (reflective) symmetry, and MCA is the image of DBM under that reflection. Simple. If we wanted to do it with congruent triangles, this would all be a bit more laborious. First we would have to show BD=AC using one set of congruent triangles, then CM=BM using another, finally finishing using DM=MA. This is much less interesting. The symmetry of the configuration is a higher-level observation which could be proved from the axioms of geometry if necessary, but gives us more information more quickly. When we use a configuration like the symmedian configuration, we are really doing a higher-again-level version of this.

Anyway, that problem is fine, but it’s not especially difficult.

Consider instead the following problem. (I saw this online, possibly with slightly different notation, a few days ago and can no longer find the link. If anyone can help, I will add the link.)

Let AB be a chord of a circle, with midpoint M, and let the tangents at A and B meet at P. Consider a line through P which meets the circle at C and D in that order. Extend CM to meet the circle again at E. Show DME is isosceles.

Here’s a diagram, though it includes some clues.

I thought this was a fun problem, and for a while I couldn’t do it because despite lots of equal angles and equal lengths, I couldn’t conjure any congruent triangles in the right places, and I didn’t care enough about solving it to get involved in trigonometry. Then came the moment of insight. We have a midpoint, and also the intersection of the tangents. So DP is the symmedian of triangle DAB, and DM is the median. This gives us the two equal orange angles. Cyclicity gives us an extra equal angle at E as well.

Note now that the situation is very very similar to the previous question (after changing some of the labels), only this time we know ACBDE is cyclic, but don’t know that ABDE is an isosceles trapezium. If ABDE is an isosceles trapezium, we are clearly finished, as then by the same symmetry argument, EM=DM. This direction is probably harder to prove than the direction of the previous problem. Again there are a couple of ways to proceed, but one way is to consider the point E’ such that ABDE’ is an isosceles trapezium, and arguing that E’ lies on the given circle, and the circle through BME, and thus must coincide with E, in a reverse reconstruction argument.

Anyway, this is all slightly a matter of taste, but I would say the second problem is much much more fun than the first problem, even though the second part of the solution is basically the first problem but in a more awkward direction. If you’re going to do Euclidean geometry at all (very much another question), I think you should do questions like the second question wherever possible. And the enjoyable ‘aha moment’ came from knowing about the symmedian configuration. Is it really plausible that you’d look at the original diagram (without the dashed orange lines) and think of the antiparallel to AB in triangle DAB through point P? Probably not. So knowing about the configuration gave access to the good bit of a nice problem.

‘Philosophy of this sort of thing’

If the goal was to solve the second problem in a competition, knowing about the symmedian configuration would be a big advantage. I’ve tried to justify a related alternative view that knowing about the configuration gave access to an enjoyable problem. The question is how many configurations to study, and how hard to study them?

We tend not to think of cyclic quadrilaterals as a special configuration, but that is what they are. We derived circle theorems from the definition of a circle so that we don’t always have to mark on the centre, every single time we have a cyclic quadrilateral. So becoming familiar with a few more is not unreasonable. In particular, there are times when proofs are more important than statements. In research (certainly mine), understanding how various proofs work is the most important aspect, for when you try to extend them or specialise. And in lots of competition problems, the interesting bit is normally finding the argument rather than basking in wonder at the statement (though sometimes the latter is true too!).

To digress briefly. In bridge, I don’t know enough non-obvious motifs in bidding or gameplay to play interesting hands well. I trust that if I thought about some of it very very carefully, I could come up with some of them, especially in gameplay, but not in real time. And it is supposed to be fun right?! Concentrating very very hard to achieve a basic level of competence is not so enjoyable, especially if it’s supposed to be a break from regular work. The end result of this is that I don’t play bridge, which is a shame, because I think the hurdles between where I am currently and a state where I enjoy playing bridge are quite low. If I knew I was going to play bridge regularly, a bit of time reading about conventions would be time well spent. And obviously this applies equally in pursuits which aren’t directly intellectual. Occasionally practising specific skills in isolation broadens overall enjoyment in sport, music, and probably everything. As anyone who’s played in an orchestra knows, there are standard patterns that come up all the time. If you practise these occasionally, you get to a stage where you don’t really need to concentrate that hard in the final movement of Beethoven 5, and instead can listen to the horns, make funny faces at the first violins, and save your mental energy for the handful of non-standard tricky bits. And of course, then move on to more demanding repertoire, where maybe the violas actually get a tune.

This is highly subjective, but my view is that all these examples are broadly similar to configurations in geometry, and that in all of them a little goes a long way.

How? In lots of the geometry configurations you might meet in, for example, a short session at a training camp, most of the conclusions about the configurations have proofs which, like in our symmedian case, are simple exercises. Once you’ve got over some low initial experience hurdles, you have to trust that you can normally solve any simple exercise if required. If you can’t, moving on and returning later, or asking for help is a good policy. The proof shown above that symmedians pass through tangent meet points (and especially a trigonometric alternative) really isn’t interesting enough to spend hours trying to find it. The statements themselves are more useful and interesting here. And it can often be summarised quite quickly: “symmedians are the isogonal conjugates of the medians, so they bisect antiparallels, meet at K, and pass through the alternate tangent meeting points.” Probably having a picture in your mind is even simpler.

There’s a separate question of whether this is worthwhile. I think solving geometry problems occasionally is quite fun, so I guess yes I do think it is worthwhile, but I understand others might not. And if you want to win maths competitions, in the current framework you have to solve geometry problems under time pressure. But from an educational point of view, even though the statements themselves have no real modern research value, I think a) that’s quite a high bar to set, and there’s no a priori reason why they should – >99.9% of things anyone encounters before university have no value to modern research maths; b) in terms of knowledge acquisition, it’s similar in spirit to lots of things that are relevant to later study. I don’t have to solve PDEs very often, but when I do, I hope they are equivalent or similar to one of the small collection of PDEs I do know how to solve. If I worked more with PDEs, the size of this collection would grow naturally, after some initial struggles, and might eventually match my collection of techniques for showing scaling limits of random processes, which is something I need to use often, so the collection is much larger. Maybe that similarity isn’t enough justification by itself, but I think it does mean it can’t be written off as educationally valueless.

Balkan MO 2017 Question Two

An acute angled triangle ABC is given, with AB<AC, and \omega is its circumcircle. The tangents t_B,t_C at B,C respectively meet at L. The line through B parallel to AC meets t_C at D. The line through C parallel to AB meets t_B at E. The circumcircle of triangle BCD meets AC internally at T. The circumcircle of triangle BCE meets AB extended at S. Prove that ST, BC and AL are concurrent.

Ok, so why have I already written 1500 words about symmedians as a prelude to this problem? Because AL is a symmedian. This was my first observation. This observation is then a route into non-Euclidean solutions. It means, for example, that you can describe the point of concurrency fairly explicitly with reference to triangle ABC. If you wish, you can then proceed using areal coordinates. One member of the UK team, whom I know is perfectly capable of finding a synthetic solution, did this. And why not? It’s a competition, and if you can see a method that will definitely work, and definitely take 45 minutes (or whatever) then that’s good.

I was taking a break from work in my office, and had no interest in spending the time evaluating determinants because that isn’t enjoyable at any level, so I focused on the geometry.

I think there’s a good moral from the diagram above, which is the first moderately correct one I drew. I often emphasise that drawing an accurate diagram is important, as it increases the chance that you’ll spot key properties. In this case though, where you’re trying to examine a known configuration, I think it’s more important what you choose to include on your diagram, than how accurately you draw it. (In a moment, we’ll see why it definitely wasn’t very accurate.)

In particular, what’s not on the diagram? E is not on the diagram, and S got added later (as did the equal length signs in TB and CS, which rather spoil what’s about to happen). My first diagram was wildly incorrect, but it also suggested to me that the line ST was hard to characterise, and that I should start by deducing as much as possible about S and T by themselves. So by symmetry, probably it was enough just to deduce as much as possible about T.

Label the angles of triangle ABC as <A, <B, <C. By the alternate segment theorem, the angle between the tangent t_C and the chord CB is <A, and since BD is parallel to AC, we also have <DBC = <C, whence <BDC = <B. But BDCT is cyclic, so the exterior angle <BTA equals <BDC = <B. And therefore TB is an antiparallel in triangle ABC. (Note this doesn’t look antiparallel on my diagram at all, but as I said, this didn’t really matter.) Obviously you then guess that CS is also an antiparallel, and on a different diagram I checked this, for essentially the same reasons.

We haven’t yet made any use of the symmedian, but this is clearly where it’ll be useful. Note that if we didn’t know about everything in the prelude, we might well have deduced all of this, but we might not have thought to prove that AL bisects TB unless we’d drawn a very accurate diagram.

At this point, we have to trust that we have enough information to delete most of the diagram, leaving just {A,B,C,S,T} and the line AL. There are a few ways to finish, including similar triangles if you try very hard or trigonometry if you do it right, but again knowledge of some standard configurations is useful. Probably the quickest way is to use Ceva’s theorem in triangle ACS. You can also use Menelaus’ theorem in ABC, so long as you know a little bit about where the symmedian meets the opposite side.

An alternative is the following. We have a complete quadrilateral here, namely BTCS, and the intersection of all its diagonals. One is A, one is the proposed point of concurrency, and one is the point at infinity, since TB || CS. You can chase that, but I found it more clear to let P be the intersection of ST and BC (which we want to prove lies on AL), then look at the complete quadrilateral ATPB. Then AT and BP meet at C, and AB and TP meet at S. So if we look at where the diagonals of ATPB meet the line CS, we have a harmonic range.

If I’d wanted, I could instead have written the prelude about harmonic ranges, but I had fewer ideas how to set these up in a slick Euclidean way. Also, it feels better to talk about the start, rather than the end of a proof, especially when there were alternative endings. Anyway, a harmonic range is a collection of two pairs of points on a line (A, B; C, D), satisfying the following ratio of directed lengths:

\frac{AC}{BC} = -\frac{AD}{BD}.

A classic example is when D is the point at infinity, the RHS is -1, and so C is the midpoint of AB. Being happy about using the point at infinity is a property of projective geometry, of which this is a good first example. Anyway, returning to the problem, we are looking at where the diagonals of ATPB meet line CS, and this pair of points forms a harmonic range with (C,S). TB meets CS at the point at infinity, and so AP meets CS at the midpoint of CS. But from the symmedian configuration, AL bisects CS, so AP and AL are in fact the same line, and so P lies on AL as required.

I think this was a brilliant example of when knowing a bit of theory is enjoyable. It wasn’t at all obvious initially how to use the symmedian property, but then the observation that TB is antiparallel felt like a satisfying breakthrough, even though it didn’t immediately kill the problem.

RMM 2017 – UK Team Blog

This is the customary and slightly frivolous account of a trip to Bucharest for the ninth edition of the Romanian Master of Mathematics, an annual competition for school students, widely recognised as the hardest of its kind.

I discuss the problems in two previous posts (here and here), and there is also a pdf with fewer pictures, which includes both the discussion and this diary, as well as some more formal comments about the competition itself, the results, and thanks.

Wednesday 22 February

Did you know that trains in Moldova use different width tracks to trains in Romania? Well, I didn’t know either, but I found out at 1am today, as my wagon lit from Chisinau was painstakingly jacked up to allow the transfer from ex-Soviet gauge to Western gauge. Outside, a man in a smart uniform and epaulettes shouted loudly and continuously at a group of men in smart uniforms without epaulettes. When their task was done, four sets of border and custom checks remained before the opportunity for another visit to the samovar, and finally a chance to sleep.

All of which is to say that I have arrived at maths competitions in better mental shape than 6am today at Gara de Nord. The UK students have a more conventional itinerary, but their flight from Luton doesn’t arrive until mid-afternoon. After my first Haifa ‘winter’, I’m craving pork and snow, and find both in the mountain town of Sinaia, an hour away by train in Transylvania. I also find a bear. The bear seems very scared.

I return in time to meet the UK students as well as James and MT. Some of our contestants are now into their fourth year of attending international competitions, and the labour of finding them fresh material resembles Hercules against the hydra, but some problems on combinatorial geometry with convexity seem to have kept everyone entertained on the flight. Dinner is at the Moxa campus of the University of Economics, and features chicken with one of two possible carbohydrates, as in fact do the next six meals. However, today is Thomas’s 18th birthday, and so his parents have arranged a delicious cake, which elicits considerably more enthusiasm. On the short walk back to our meeting, we notice it is possible both to buy fireworks and get a tattoo among other options, so Thomas is spoiled for choice about how to take advantage of his majority.

The team’s activities remain a mystery to James and me though, as we have to join the other leaders for the first meeting, to receive the proposed problems. We spend some time thinking about them separately then together, and our initial impression is that it’s a very suitable paper, that hopefully our team will enjoy.

Thursday 23 February

The leaders meet to finalise the choice and statement of the problems. With a bit more time this morning, I’ve solved Q1, Q2, Q5, and proved Q3 once I’d looked up the correct bound. James eats conics for breakfast and shows me a glorious range of interpretations of Q4. We feel happy that our students will have a chance at all of these, while Q6 may prove more restricting. Either way, it’s clearly an appropriate set for this competition, and is approved quickly. So it’s time to finalise the English version of the paper, or finalize the American version. Many alternatives to the word sieve are proposed. Andrea from Italy is clearly already craving home comforts, but his suggestion of cheese grater is not taken up. This time I’m sorting the LaTeX, so get to settle the commas, but also take the blame for inconsistently spacing the rubric between the two papers. I’m sure everyone noticed.

While all this has been happening, the students have been at a lecture by Sergiu Moroianu at the Institute of Mathematics. Joe Benton gives an account of what they learned in the longer pdf version of this report.

For all the charms of Chipping Norton, I sense MT is enjoying the grittier nature of Bucharest Sector 1, and has been shepherding the students round various sites in between attempts at practice problems. I join them for a brief visit to a geology museum. I am very cynical, but it slightly exceeds my expectations, and is infinitely better than the nearby Museum of the Romanian Peasant, which currently ties with the Hanoi Ethnology Museum as my least favourite olympiad excursion of all time.

The opening ceremony is held in the grand hall of the university, and includes several welcoming and thoughtful speeches from the Mayor of Bucharest and the headteacher of Tudor Vianu, the school which hosts this competition every year. Each team briefly presents themselves on stage. Joe and Neel have accumulated a large collection of UK flags from previous competitions, and should hereby consider themselves publicly shamed for forgetting their promise to bring them. It is over soon, and while the students enjoy a quiet evening and an early night, the leaders have to finalise markschemes for all the problems. The walk back takes us through Victory Square, and past the protesters whose fires and slogans have been on front pages around the world in the past months. It’s an interesting time, and the atmosphere of this city feels very different from my first visit, for the inaugural edition of this competition in 2008.

Friday 24 February

The first day of the contest starts at 9am. The British students seem fairly relaxed, and hopefully are aiming high. Contestants may ask questions of clarification during the first 30 minutes. Rosie does this, and I send my reply to her two queries back via the courier. Five minutes later it is returned to me with the explanation that the student does not understand the answer. Even under competition pressure this seems unlikely, given that my answers are, respectively ‘yes’, and putting a ring around one of three options she has listed. It turns out that actually the student courier did not understand what to do with the answer, and the situation is quickly corrected.

We approve more markschemes. The US deputy leader Po-Shen and I share our views on the challenge of correctly finding the bound in Q3, and our suggestion that this instead be worth 2 points is upheld. Various further discussions fill the morning, and we return just in time to meet the students at the end of the exam. Harvey claims all three problems with a relaxed grin, while Joe claims all three problems with the haunted look of a man whose twelfth espresso of the day has just worn off. Alexander and Thomas say that they spent most of the time making sure their solutions to Q1 were totally watertight, which, given the intricacy of the arguments, was clearly a very sensible strategy.

To provide a distraction, if not actually a break from time-pressured problem-solving, I’ve booked a pair of escape rooms for the UK students later in the afternoon. Bucharest is the home of these games, where the aim is to solve themed puzzles as part of a story in time to escape a locked room. I join one of the rooms, where there are some theatrical reveals involving wrenches, and clues hidden in combination-locked cabinets, where ability to add three-digit numbers proves useful. Someone’s carrying voice means we get to enjoy some of the drama and twists of the other room too. Anyway, this proved an ideal way to avoid useless post-mortems, and I highly recommend Vlad and his pair of rooms.

Later, James and I get to look at the students’ work from this morning. Their assessments are pretty accurate. Harvey’s solutions to everything are beautiful, while Neel’s bounding argument in Q2 is certainly the most vulgar (and, in fact, unnecessary) calculation of the year so far. Joe’s solution to Q3 bears such obvious resemblance to an official solution that his uncharacteristic abundance of small errors probably won’t matter, including the memorable set A_i\backslash\{i\}, where the two i’s mean different things. Some of the team might reflect that a moment of casualness in checking the n=2 case on Q2 is a frustrating way to lose a potential mark, but when I compare notes with James, it sounds like the slow and steady approach to Q1 has indeed paid off for everyone, so hopefully it will not be too painful to agree the scores tomorrow.

Saturday 25 February

It’s the second day of the competition, and the UK team look bright-eyed and positive at breakfast. They aren’t the only ones under pressure this morning, as James and I must settle the scores from yesterday’s questions with local markers, known as coordinators. It’s hard to guess in how much detail one will have to explain one’s contestants’ scripts, so it is safer to prepare almost line-by-line. On this occasion though, perhaps we have over-prepared, as every meeting ends quickly with offers of 7/7 exactly where we were hoping, and indeed in a couple of places where we were not hoping. The markschemes are very clear about certain omissions which carry a point deduction, so to ensure fairness and consistency, we insist that two scores are moved down. I’m confident that any British student would prefer an honourable 41/42 than an accidental 42/42.

No-one’s going to be scoring 41 nor 42 unless they solve the extremely challenging geometry Q6, and as we meet our students afterwards, it turns out they have not managed any progress there. However, they claim an almost full set of solutions to Questions 4 and 5, which, if accurate, is a very good return. Everyone is in a good mood, and after I explain a couple of approaches to Q6, no-one seems too disappointed that they didn’t spot these.

There are various schedules floating around, listing multiple locations and times for lunch, but our space-time trajectory intersects none of them, so we follow the Chinese team to a recommended cheap Szechuan restaurant round the corner. Various circle theorems are explored via the Lazy Susan, and there is a grand reveal of the marks we’ve recently confirmed. There’s time for another pair of escape rooms while the second day scripts arrive. As Rosie remarks, two in two days can lead to excessive outside-the-box thinking. Sometimes a radiator really isn’t a sinister prop, a device for encoding five-digit numbers, or a clue to a Templar tunnel; it’s just a radiator. Otherwise we’d be cold.

When the scripts arrive, as expected the cupboard is pretty bare on Q6. If there were marks for quantity, Neel might get some, and if there were marks for most uses of esoteric theory in a single page, Alexander might get one. No set of scripts for an international-level medium combinatorics problem will ever be perfect, but our Q5s come close. It’s therefore not a long evening, and we can join the students for dinner with the American team. For most of them it’s their first visit to Europe, and there’s much comparing of culture and maths training programmes. There’s also a long discussion of whether it’s sensible to teach maths in primary school. Those present who have small children or younger siblings weigh in on the mysteries of the ‘grid method’, and whether toddlers implicitly understand commutativity, even if they can’t spell it.

Sunday 26 February

The UK leaders gather early in the ‘philosophical anti-cafe’ opposite Vianu school, to ponder the final scripts with a coffee and a view of an artfully-arranged folio of Spinoza. James has a loyalty card here. Unfortunately two of our students have clear algebraic errors in Q4, but apart from that everything is very straightforward. Though following last night’s conversation, we note that maybe a revision clinic on mathematical spelling might prove useful. Anonymous student X thinks there’s one L in ‘ellipse’, counterbalanced by anonymous student Y who thinks there are two in ‘column’. The word ‘parallel’ comes in many disguises.

Of course, the coordinators couldn’t care less about that, and they don’t even mind Neel’s two-cases-at-once inductive step, so again we get what we ask for on Q5 immediately, and on Q4 in the time it takes James to draw a lozenge tiling representing Thomas’s shearing argument. For Q6, it turns out there clearly is a mark for most uses of esoteric theory in a single page, so Alexander gets it. They show us a diagram with over a hundred lines which suggests that the exotic equivalence he claims is actually true. There we go. Overall, the quality of our written solutions has been extremely high. It feels like I say this every time now, but it isn’t idle propaganda. We remember the horrors that used to emerge occasionally, and the effort to make this improvement permanent feels well worth it.

Meanwhile, to fill the day, the students have gone to Sinaia. Two of their guides went with them to help with tickets at the station, apparently under the impression that never having taken a train before wouldn’t be an obstacle to this role. Either way, they made it, and following my request for material for this report, I receive a trickle of presentable photos, though there is talk afterwards of some rather more informal versions which are apparently not suitable. The Transylvanian winter is thawing, but slowly and messily, and Harvey reports that several of the group spent more time horizontal than vertical. Irrespective of their preferred axis, there’s no comment on whether they saw my bear, or any other bear. But since my bear was scared of me, one wonders what it would make of MT’s telling-off face? (Last seen by me during the notorious ‘balcony incident’ at a summer school in 2005, but hardly forgotten.)

The students return in time for confirmation of the results and their medals. As so often, there is pleasure that we have done so well collectively, mixed with mild disappointment for those who ended up just short of a boundary, and that the UK was so close to placing first. Because of the strength of the invited countries, earning a medal of any colour is a very worthwhile achievement, and so Rosie is impressively sanguine about missing out so narrowly in such an unfortunate manner. Alexander was closer than it appears, and could have two more opportunities to take part.

The closing ceremony at Vianu school proceeds rapidly. There is the usual challenge of photographing the students receiving their prizes, but this time is easy. Thomas is about a foot taller than everyone else on the stage, while Neel is flanked by almost the entire Russian team, but his chutzpah trumps their numerical advantage, with laughter all round. Joe claims this year’s gold medal is substantially weightier. He hasn’t brought his previous pair, so the chance to verify this and recreate a Mark Spitz moment goes begging.

It’s 7pm, and UK student enthusiasm for the closing disco (not my words) is about as high as MT’s enthusiasm to chaperone the closing disco. Instead we find a Middle Eastern restaurant, and it’s refreshing to eat hummus in a place which doesn’t claim to be the ‘best in Israel’ though I don’t think Abu Said in Akko will be rushing to steal the recipe. Po-Shen outlines his vision of a year-long maths camp. I think present company are tired enough after five days here. Some are interested to view, if not actually participate in, the protests in Victory Square, but it seems tonight is a quiet one and nothing is being burned, so late-night cards and a perusal of each others’ scripts will have to do.

Monday 27th February

The rest of the group have a flight back to London later today which apparently cost 99p per person before tax. I don’t know how much less the 5am option was, but I think it’s probably worth it. My own flight is truly at 5am tomorrow and I plan to stay up all night. The students return to school tomorrow, doubtless to receive a glorious mix of adulation and apathy. Harvey asks whether next year this trip can be timed differently so that he can miss the whole of his local Eisteddfod, rather than just one day. I promise to ask the organisers, say goodbye, then head for the hills on a train journey long enough to write the entirety of this report.

3am at Bucharest airport, and thoughts can now turn to the future. Many of us will meet in five weeks’ time for another round of mathematics in the more tranquil setting of Cambridge. Meanwhile, I certainly enjoyed, admittedly through red eyes, the entertainment of a flight to Israel where baggage size regulations are actually enforced at the boarding gate, and apparently everyone else made it back safely too.

RMM 2017 – Problems 1, 4 and 5

I’ve recently taken a UK team to the 2017 edition of the Romanian Master of Mathematics competition in Bucharest. The British students did extremely well and we all enjoyed ourselves mathematically and generally. The customary diary may appear shortly, but this time I want to focus mainly on the questions, since that is after all the main point of these competitions! I hope that what follows is interesting, and at least slightly educational to potential future students.

I’ve split this into two posts based on my opinion on difficulty, which is subjective but probably correlates fairly positively with most people’s. The account of Q1 is guest-written by two British students, based on their solutions during the competition.

Problem 1

a) Prove that every positive integer n can be written uniquely in the form

n = \sum_{j=1}^{2k+1} (-1)^{j-1} 2^{m_j},

where k\geq 0 and 0 \leq m_1 < m_2 < \cdots < m_{2k+1} are integers. This number k is called the weight of n.

b) Find (in closed form) the difference between the number of positive integers at most 2^{2017} with even weight and the number of positive integers at most 2^{2017} with odd weight.

Rosie Cates and Neel Nanda:

a) We are trying to express n in terms of powers of 2, so it seems sensible to write in binary. As 2^{m_1} is the smallest power of 2, this term is responsible for the last 1 in the binary representation of n. Let x = n - 2^{m_1} (ie n with the last 1 removed from its binary expansion). Now if we pair up terms in the sum to get

x = (2^{m_{2k+1}} - 2^{m_{2k}}) + \ldots + (2^{m_3} - 2^{m_2}),

we can see that each bracket looks like 11…100…0 when written in binary. Also, the condition that m_i < m_{i+1} is equivalent to ensuring that we do not break any strings of consecutive 1s that were in the binary expansion of x (so for example 111110 = 110000 + 1110 is not allowed). So writing x in the desired form is the same as writing it as the sum of numbers of the form 11…100…0 without breaking any strings of 1s. For example

1110100110 = 1110000000 + 100000 + 110.

Clearly there is exactly one way of doing this for every x, so (as each n has exactly one x) there is exactly one way to do it for each n as well.

This approach allows k to be understood differently. Write n in binary and remove the last 1; now count the number of groups of consecutive 1s. This is equal to k.
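To make this concrete, the interpretation above is easy to implement. Here is a minimal Python sketch (the function name is mine, purely for illustration):

```python
def weight(n):
    """Weight k of n: write n in binary, remove the final 1,
    then count the runs of consecutive 1s that remain."""
    x = n & (n - 1)  # clears the lowest set bit, ie removes the last 1
    return sum(1 for run in bin(x)[2:].split('0') if run)

# 22 = 10110 -> remove the last 1 -> 10100 -> two runs of 1s -> weight 2,
# matching 22 = 2^5 - 2^4 + 2^3 - 2^2 + 2^1, which has k = 2.
print(weight(22))  # 2
```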

b) The second half of the problem becomes a lot simpler with the observation that n\leq 2^{m_{2k+1}}, as

n=2^{m_{2k+1}}-(2^{m_{2k}}-2^{m_{2k-1}})-\ldots-(2^{m_2}-2^{m_1}),

and the sequence m_n is increasing, so each bracket is positive. As each sequence of (m_n)s corresponds uniquely to an integer, this means we just want to count sequences of (m_n)s with greatest term at most 2017. The sequence is increasing, so each sequence corresponds to a subset of {0, 1, …, 2017} of size (2k+1). There are \binom{2018}{2k+1} subsets of size (2k+1), so the question reduces to finding a closed form for \sum_{k=0}^{1008} (-1)^k {{2018}\choose{2k+1}}.

This is reminiscent of a classic problem in combinatorics: using the binomial theorem to evaluate sums of binomial coefficients weighted by powers. The best example is

\sum_{k=0}^n (-1)^k \binom{n}{k} =(1-1)^n=0,

but here rather than (-1) we want something whose square is (-1), so we consider the complex number i. Using the same ideas, we get that

\sum_{r=0}^{2018} i^r \binom{2018}{r}=(1+i)^{2018},

which contains what we want, but also binomial coefficients with even r. But if r is even, i^r is real, and if r is odd, i^r is imaginary. So the sum we want appears as the imaginary part, that is

\mathrm{Im}\left((1+i)^{2018}\right)=\mathrm{Im}\left((\sqrt{2} \cdot e^{\frac{i\pi}{4}})^{2018}\right)=2^{1009}.
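As a sanity check, note that nothing in part b) is special to 2017: for positive integers at most 2^M the same argument gives \mathrm{Im}((1+i)^{M+1}). A brute force over small M (my own check, not part of the contest solution) confirms this:

```python
def weight(n):
    x = n & (n - 1)  # remove the last 1 from the binary expansion
    return sum(1 for run in bin(x)[2:].split('0') if run)

for M in range(1, 15):
    diff = sum(1 if weight(n) % 2 == 0 else -1 for n in range(1, 2**M + 1))
    assert diff == round(((1 + 1j) ** (M + 1)).imag), M
print("even minus odd weight counts agree with Im((1+i)^(M+1)) for M < 15")
```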

Dominic: note that in both parts, the respective authors find slightly more than what they were required to. That is, respectively, the interpretation of k, and a bound on m_{2k+1}. The latter is an excellent example of the general notion that sometimes it is better to use a stronger statement than what you actually require in an induction argument (here for existence). The stronger statement (which you guess from playing with examples) makes the inductive step easier, as it’s then clear that the new term you get is distinct from the terms you already have.

Problem 4

In the Cartesian plane, let \mathcal G_1, \mathcal G_2 be the graphs of the quadratic functions f_1(x) = p_1x^2 + q_1x + r_1, f_2(x) = p_2x^2 + q_2x + r_2, where p_1 > 0 > p_2. The graphs \mathcal G_1, \mathcal G_2 cross at distinct points A and B. The four tangents to \mathcal G_1, \mathcal G_2 at A and B form a convex quadrilateral which has an inscribed circle. Prove that the graphs \mathcal{G}_1 and \mathcal{G}_2 have the same axis of symmetry.

This question is quite unusual for an olympiad of this kind, and I was initially skeptical, but then it grew on me. Ultimately, I was unsurprised that many contestants attacked entirely with coordinate calculations. If you use this strategy, you will definitely get there in the end, but you have to accept that you aren’t allowed to make any mistakes. And because of the amount of symmetry in the configuration, even if you make a mistake, you might still get the required answer, and so not notice that you’ve made a mistake. But I decided I liked it because various levels of geometric insight either reduced or removed the nastier calculations.

Typically, one could gain geometric insight by carefully observing an accurate diagram, but an accurate parabola is hard to draw. However, even from a vague diagram, we might guess the key intermediate property of the configuration, which is that the line joining the other two points in the quadrilateral is parallel to the y-axis. This means that they have the same x-coordinate, and indeed this x-coordinate must in fact be the same for any parabola through A and B, so it is reasonable to guess that it is \frac{x_A+x_B}{2}, the mean of the x-coordinates of A and B.
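This guess is cheap to confirm symbolically. The following sympy sketch (variable names are mine) checks that the tangents to a single parabola at x=a and x=b meet at x-coordinate \frac{a+b}{2}, whatever the coefficients:

```python
import sympy as sp

x, a, b, p, q, r = sp.symbols('x a b p q r')
f = p*x**2 + q*x + r

def tangent_at(t):
    # the tangent line to the graph of f at x = t
    return f.subs(x, t) + sp.diff(f, x).subs(x, t) * (x - t)

# x-coordinate of the intersection of the tangents at a and b
sol = sp.solve(sp.Eq(tangent_at(a), tangent_at(b)), x)
print(sp.simplify(sol[0]))  # a/2 + b/2, independently of p, q, r
```

Since the answer does not involve p, q or r, the tangent pair from each of the two parabolas meets at the same x-coordinate \frac{x_A+x_B}{2}, which is exactly the vertical-diagonal property claimed above.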

Since you know this is the goal, it’s not too bad to calculate the equations of the tangent lines directly, and demonstrate this algebraically. But I was determined to use the focus-directrix definition of a parabola. Either recall, or digest the interesting new fact that a parabola may be defined as the locus of points which are the same distance from a fixed point P (the focus), and a fixed line \ell (the directrix). Naturally, the distance to the line is perpendicular distance.

To ensure the form given in the statement where y is a quadratic function of x, in this setting the directrix should be parallel to the x-axis. To define the tangent to the parabola at A, let A’ be the foot of the perpendicular from A onto \ell, so AA’=PA. I claim that the tangent at A is given by the perpendicular bisector of A’P. Certainly this passes through A, and it is easy to convince yourself that it can’t pass through any other point B on the parabola, since BA’> PB, as A’ is on \ell but is not the foot of the perpendicular from B to \ell. This final observation is truly a lot more obvious if you’re looking at a diagram.

We now want to finish geometrically too. In our quadrilateral, one diagonal is parallel to the y-axis, and it will suffice to show that the existence of an incircle implies that A and B must have the same y-coordinate. We have just shown A and B are the same (horizontal) distance from the other diagonal. So certainly if they have the same y-coordinate, then the quadrilateral is a kite, and the sums of opposite sides are equal, which is equivalent to the existence of an incircle. One could then finish by arguing that this ceases to be true if you move one of A and B in either direction, or by some short explicit calculation if such a perturbation argument leaves you ill at ease.

Problem 5

Fix an integer n \geq 2. An n×n sieve is an n×n array with n cells removed so that exactly one cell is removed from every row and every column. A stick is a 1×k or k×1 array for any positive integer k. For any sieve A, let m(A) be the minimal number of sticks required to partition A. Find all possible values of m(A), as A varies over all possible n×n sieves.

This is a fairly classic competition problem, and while in my opinion the statement isn’t particularly fascinating, it’s interesting that it admits such a wide range of approaches.

As ever, you need to start by playing around with the setup, and guessing that the answer is 2n-2, and not thinking ‘it can’t possibly be the same answer as Q3??’ Then think about reasons why you couldn’t do better than 2n-2. My very vague reason was that if you only use horizontal sticks, the answer is clearly 2n-2, and the same if you only use vertical sticks. But it feels like you can only make life harder for yourself if you try to use both directions of sticks in lots of places. Note that some sort of argument involving averaging over stick lengths is definitely doomed to fail unless it takes into account the Latin square nature of the location of holes! For example, if you were allowed to put all the holes in the first row, m(A) would be n-1.

Induction is tempting. That is, you remove some number of sticks, probably those corresponding to a given hole, to reduce the board to an (n-1)×(n-1) configuration. If you do this, you need to be clear about why you can remove what you want to remove (in particular, the number of sticks you want to remove), and whether it’s qualitatively different if the hole in question lies on the border of the board. In all of these settings, you want to be careful about 1×1 sticks, which it’s easy inadvertently to count as both horizontal and vertical. This is unlikely to affect the validity of any argument (just picking either an arbitrary or a canonical direction if it’s 1×1 should be fine) but does make it much harder to check the validity.

Joe exhibited directly a construction of 2n-2 cells which must be covered by different sticks. This approach lives or dies by the quality of the written argument. It must look general, even though any diagram you draw must, almost by definition, correspond to some particular case. Alternatively, since the problem is set on a grid, the cells correspond naturally to edges of a bipartite graph, where classes correspond to rows and columns. The holes form a perfect matching on this bipartite graph. But, as Harvey observed, if you split the rows and columns in two, on either side of the relevant hole (or not in the 2+2 cases where the hole is at the border), you have a (2n-2)+(2n-2) bipartite graph, and a perfect matching here corresponds to a set of cells which must be covered by different sticks. This is an ingenious idea, and if you’ve recently met Hall’s Marriage Theorem, which gives a verifiable criterion for the existence of such a perfect matching, there are few better uses of your next ten minutes than to check whether Hall’s condition a) should hold; b) can be proven to hold in this setting.
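If you would rather spend those ten minutes watching a machine do it, here is a sketch of Harvey’s construction (entirely my own code, with a hand-rolled augmenting-path matching rather than any particular library): it builds the row-segments and column-segments either side of each hole in a random sieve, and reports the size of a maximum matching, which should be the full 2n-2 whenever Hall’s condition holds.

```python
import random

def sieve_matching_size(hole):
    """hole[r] = column of the removed cell in row r (a permutation of 0..n-1)."""
    n = len(hole)
    hole_row = {c: r for r, c in enumerate(hole)}  # row of the hole in column c

    # each row/column splits into (at most) two segments either side of its hole
    def row_seg(r, c):
        return ('R', r, c < hole[r])
    def col_seg(r, c):
        return ('C', c, r < hole_row[c])

    # each remaining cell is an edge joining its row segment to its column segment
    adj = {}
    for r in range(n):
        for c in range(n):
            if c != hole[r]:
                adj.setdefault(row_seg(r, c), []).append(col_seg(r, c))

    # Kuhn's augmenting-path algorithm for maximum bipartite matching
    match = {}
    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if v not in match or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    return sum(augment(u, set()) for u in adj)

n = 8
hole = random.sample(range(n), n)  # a uniform random sieve
print(sieve_matching_size(hole), "(target:", 2*n - 2, ")")
```

A matched set of cells lies in pairwise distinct row segments and pairwise distinct column segments, so no two of them can be covered by the same stick, which is precisely the lower bound required.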

BMO1 2016 – the non-geometry

Here’s a link to yesterday’s BMO1 paper, and the video solutions for all the problems. I gave the video solution to the geometric Q5, and discuss aspects of this at some length in the previous post.

In these videos, for obvious educational reasons, there’s a requirement to avoid referencing theory and ideas that aren’t standard on the school curriculum or relatively obvious directly from first principles. Here, I’ve written down some of my own thoughts on the other problems in a way that might add further value for those students who already have some experience at olympiads and these types of problems. In particular, on problems you can do, it’s worth asking what you can learn from how you did them that might be applicable generally, and obviously for some of the harder problems, it’s worth knowing about solutions that do use a little bit of theory. Anyway, I hope it’s of interest to someone.

Question 1

Obviously we aren’t going to write out the whole list, but there’s a trade-off in time between coming up with neat ideas involving symmetry, and just listing and counting things. Any idea is going to formalise somehow the intuitive statement ‘roughly half the digits are odd’. The neat ideas involve formalising the statement ‘if we add leading zeros, then roughly half the digits are odd’. The level of roughness required is less in the first statement than the second statement.

Then there’s the trade-off. Trying to come up with the perfect general statement that is useful and true might lead to something like the following:

‘If we write the numbers from 0000 to N, with leading zeros, and all digits of N+1 are even, then half the total digits, ie 2N of them, are odd.’

This is false, and maybe the first three such things you try along these lines are also false. What you really want to do is control the numbers from 0000 to 1999, for which an argument by matching is clear, and gives you 2000 × 4 / 2 = 4000 odd digits. You can exploit the symmetry by matching k with 1999-k, or do it directly first with the units, then with the tens and so on.

The rest (that is, 2000 to 2016) can be treated by listing and counting. Of course, the question wants an actual answer, so we should be wary of getting it wrong by plus or minus one in some step. A classic error of this kind is that the number of integers between 2000 and 2016 inclusive is 17, not 16. I don’t know why the memory is so vivid, but I recall being upset in Year 2 about erring on a problem of this kind involving fences and fenceposts.
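Assuming the question is indeed asking for the total number of odd digits across the integers 1 to 2016 (which is how I have read the discussion above), a two-line brute force is a useful guard against exactly this fencepost trap:

```python
# leading zeros contribute no odd digits, so counting 1..2016 directly is fine
total = sum(1 for n in range(1, 2017) for d in str(n) if d in '13579')
print(total)  # 4000 from 1..1999, plus the hand count for 2000..2016
```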

Question 2

As with so many new types of equation, the recipe is to reduce to a type of equation you already know how to solve. Here, because {y} has a different form on different ranges, it makes sense to consider the three ranges

y\in[0,1/25],\, y\in[1/25,1/8],\, y\in [1/8,\infty),

as for each of these ranges, we can rewrite 5y\{8y\}\{25y\} in terms of standard functions without this bracket notation. On each range we can solve the corresponding equation. We then have to check that each solution does actually lie in the appropriate range, and in two cases it does, and in one case it doesn’t.

Question 3

Adding an appropriately-chosen value to each side allows you to factorise the quadratics. This might be very useful. But is it an invitation to do number theory and look at coprime factors and so on, or is a softer approach more helpful?

The general idea is that the set of values taken by any quadratic sequence with integer coefficients and leading coefficient one looks from a distance like the set of squares, or the set \{m(m+1), \,m\in\mathbb{N}\}, which you might think of as ‘half-squares’ or ‘double triangle numbers’ as you wish. And by ‘from a distance’, I mean ‘up to an additive constant’. If you care about limiting behaviour, then of course this additive constant might well not matter, but if you care about all solutions, you probably do care. To see why this holds, note that

n^2+2n = (n+1)^2 - 1,

so indeed up to an additive constant, the quadratic on the LHS gives the squares, and similarly

n^2 - 7n = (n-4)(n-3)-12,

and so on. To solve the equation n^2=m^2+6, over the integers, one can factorise, but another approach is to argue that the distance between adjacent squares is much more than 6 in the majority of cases, which leaves only a handful of candidates for n and m to check.
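For the toy equation, the sandwiching is what makes a finite search genuinely exhaustive: once |m|\ge 4 the gap to the next square already exceeds 6. A lazy check (my own):

```python
# any solution to n^2 = m^2 + 6 must have |m| <= 3, since the gap
# (m+1)^2 - m^2 = 2m + 1 exceeds 6 beyond that, so this window is exhaustive
sols = [(n, m) for n in range(-10, 11) for m in range(-10, 11) if n**2 == m**2 + 6]
print(sols)  # [] -- no integer solutions
```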

The same applies to this question. Adding on 9 gives

n^2-6n+9 = m^2 + m -1,

which is of course the same as

(n-3)^2 = m(m+1)-1.

Now, since we know that adjacent squares and ‘half-squares’ are more than one apart in all but a couple of cases, we know why there should only be a small number of solutions. I would call a method of this kind square-sandwiching, but I don’t see much evidence from Google that this term is generally used, except on this blog.

Of course, we have to be formal in an actual solution, and the easiest way to achieve this is to sandwich m(m+1)-1 between adjacent squares m^2 and (m+1)^2, since it is very much clear-cut that the only squares which differ by one are zero and one itself.
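Undoing the addition of 9, the equation in question is n^2-6n = m^2+m-10, and a brute force over a window (exhaustive by the same sandwiching, since m^2 < m(m+1)-1 < (m+1)^2 once m\ge 2) confirms the handful of solutions:

```python
sols = [(n, m) for n in range(-50, 51) for m in range(-50, 51)
        if n**2 - 6*n == m**2 + m - 10]
print(sols)  # n = 2 or 4, with m = 1 (and m = -2 if negatives are permitted)
```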

Question 4

I really don’t have much to say about this. It’s not on the school curriculum so the official solutions are not allowed to say this, but you have to use that all integers except those which are 2 modulo 4 can be written as a difference of two squares. The easiest way to show this is by explicitly writing down the appropriate squares, treating the cases of odds and multiples of four separately.
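In code, the explicit construction is short (a sketch; the function name is mine), with exactly the odd/multiple-of-four case split mentioned above:

```python
def as_difference_of_squares(k):
    """Return (a, b) with a^2 - b^2 = k, for any k not equal to 2 modulo 4."""
    if k % 2 == 1:                 # odd: k = ((k+1)/2)^2 - ((k-1)/2)^2
        return (k + 1) // 2, (k - 1) // 2
    if k % 4 == 0:                 # multiple of 4: k = (k/4 + 1)^2 - (k/4 - 1)^2
        return k // 4 + 1, k // 4 - 1
    raise ValueError("integers that are 2 mod 4 are not differences of two squares")

for k in [1, 3, 4, 8, 9, 12, 100]:
    a, b = as_difference_of_squares(k)
    assert a*a - b*b == k
```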

So you lose if after your turn the running total is 2 modulo 4. At this point, the combinatorics isn’t too hard, though as in Q1 one has to be mindful that making an odd number of small mistakes will lead to the wrong answer! As in all such problems, it’s best to try and give a concrete strategy for Naomi. And it’s best if there’s something inherent in the strategy which makes it clear that it’s actually possible to implement. (Eg, if you claim she should choose a particular number, ideally it’s obvious that number is available to choose.)

One strategy might be: Naomi starts by choosing a multiple of four. Then an even number of multiples of four remain, so Naomi can proceed as follows:

  • whenever Tom chooses a multiple of four, Naomi may choose another multiple of four;
  • whenever Tom chooses a number which is one (respectively three) modulo 4, Naomi may choose another which is three (respectively one) modulo 4.

Note that Naomi may always choose another multiple of four precisely because we’ve also specified the second condition. If sometimes Tom chooses an odd number and Naomi responds with a multiple of four out of an idle and illogical sense of caprice, then the first bullet point would not be true. One can avoid this problem by being more specific about exactly what the algorithm is, though there’s a danger that statements like ‘whenever Tom chooses k, Naomi should choose 100-k’ can introduce problems about avoiding the case k=50.

Question 6

I started this at the train station in Balatonfured with no paper and so I decided to focus on the case of just m, m+1 and n, n+2. This wasn’t a good idea in my opinion because it was awkward but guessable, and so didn’t give too much insight into actual methods. Also, it didn’t feel like inducting on the size of the sequences in question was likely to be successful.

If we know about the Chinese Remainder Theorem, we should know that we definitely want to use it here in some form. Here are some clearly-written notes about CRT with exercises and hard problems which a) I think are good; b) cite this blog in the abstract. (I make no comment on correlation or causality between a) and b)…)

CRT is about solutions to sets of congruence equations modulo various bases. There are two aspects to this, and it feels to me like a theorem where students often remember one aspect, and forget the other one, in some order. Firstly, the theorem says that subject to conditions on the values modulo any non-coprime bases, there exist solutions. In many constructive problems, especially when the congruences are not explicit, this is useful enough by itself.

But secondly, the theorem tells us what all the solutions are. There are two stages to this: finding the smallest solution, then finding all the solutions. Three comments: 1) the second of these is easy – we just add on all multiples of the LCM of the bases; 2) we don’t need to find the smallest solution – any solution will do; 3) if you understand CRT, you might well comment that the previous two comments are essentially the same. Anyway, finding the smallest solution, or any solution is often hard. When you give students an exercise sheet on CRT, finding an integer which is 3 mod 5, 1 mod 7 and 12 mod 13 is the hard part. Even if you’re given the recipe for the algorithm, it’s the kind of computation that’s more appealing if you are an actual computer.
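Being an actual computer about it: here is a hand-rolled CRT sketch via modular inverses (my own code, not any particular library routine; pow(a, -1, m) needs Python 3.8+), applied to exactly that exercise:

```python
def crt(residues, moduli):
    """Smallest non-negative x with x = r (mod m) for each pair; moduli coprime."""
    x, M = 0, 1
    for r, m in zip(residues, moduli):
        # solve x + M*t = r (mod m), using the inverse of M modulo m
        t = ((r - x) * pow(M, -1, m)) % m
        x, M = x + M * t, M * m
    return x

print(crt([3, 1, 12], [5, 7, 13]))  # 428; all solutions are 428 + 455k
```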

Ok, so returning to this problem, the key step is to phrase everything in a way which makes the application of CRT easy. We observe that taking n=2m satisfies the statement – the only problem of course is that 2m is not odd. But CRT then tells us what all solutions for n are, and it’s clear that 2m is the smallest, so we only need to add on the LCM (which is odd) to obtain the smallest odd solution.

DGFF 3 – Gibbs-Markov property for entropic repulsion

In the previous post, we saw that it isn’t much extra effort to define the DGFF with non-zero boundary conditions, by adding onto the zero-BC DGFF the unique (deterministic) harmonic function which extends the boundary values into the domain. We also saw how a Gibbs-Markov property applies, whereby the values taken by the field on some sub-region A\subset D depend on the values taken on D\backslash A only through values taken on \partial A.

In this post, we look at how this property and some other methods are applied by Deuschel [1] to study the probability that the DGFF on a large box in \mathbb{Z}^d is positive ‘everywhere’. This event can be interpreted in a couple of ways, all of which are referred to there as entropic repulsion. Everything which follows is either taken directly or paraphrased directly from [1]. I have tried to phrase this in a way which avoids repeating most of the calculations, instead focusing on the methods and the motivation for using them.

Fix dimension d\ge 2 throughout. We let P^0_N be the law of the DGFF on V_N:=[-N,N]^d\subset \mathbb{Z}^d with zero boundary conditions. Then for any subset A\subset \mathbb{Z}^d, in an intuitively-clear abuse of notation, we let

\Omega^+(A):= \{ h_x\ge 0, x\in A\},

be the event that some random field h takes only non-negative values on A. The goal is to determine P^0_N ( \Omega^+(V_N)). But for the purposes of this post, we will focus on showing bounds on the probability that the field is non-negative on a thin annulus near the boundary of V_N, since this is a self-contained step in the argument which contains a blog-friendly number of ideas.

We set (L_N) to be a sequence of integers greater than one (to avoid dividing by zero in the statement), for which \frac{L_N}{N}\rightarrow 0. We now define for each N, the annulus

W_N = \{v\in V_N: L_N\le d_{\mathbb{Z}^d}(v, V_N^c)\le 2L_N \}

with width L_N, set a distance L_N inside the box V_N. We aim to control P^0_N(\Omega^+(W_N)). This forms a middle step in Deuschel’s Propositions 2.5 and 2.9, which discuss P^0_N(\Omega^+(V_{N-L_N})). Clearly there is the upper bound

P^0_N(\Omega^+(V_{N-L_N})) \le P^0_N(\Omega^+(W_N)) (1)

since W_N\subset V_{N-L_N}, and a lower bound on P^0_N(\Omega^+(V_{N-L_N})) is obtained in the second proposition by considering the box as a union of annuli then combining the bounds on each annulus using the FKG inequality.

Upper bound via odds and evens

Once step (1) has been taken into account, this is Proposition 2.5:

\limsup_{N\rightarrow \infty} \frac{L_N}{N^{d-1} \log L_N} \log P^0_N(\Omega^+(W_N)) < 0. (2)

This is giving a limiting upper bound on the probability of the form L_N^{-CN^{d-1}/L_N}, though as with all LDP estimates, the form given at (2) is more instructive.

Morally, the reason why it is unlikely that the field should be non-negative everywhere within the annulus is that the distribution at each location is centred, and even though any pair of values are positively correlated, this correlation is not strong enough to avoid this event being unlikely. But this is hard to corral into an upper bound argument directly. In many circumstances, we want to prove upper bounds for complicated multivariate systems by projecting to get an unlikely event for a one-dimensional random variable, or a family of independent variables, even if we have to throw away some probability. We have plenty of tools for tail probabilities in both of these settings. Since the DGFF is normal, a one-dimensional RV that is a linear combination (eg the sum) of all the field heights is a natural candidate. But in this case we would have thrown away too much probability, since the only way we could dominate is to demand that the sum \sum_{x\in W_N}h^N_x\ge 0, which obviously has probability 1/2 by symmetry. (3)

So Deuschel splits W_N into W_N^o,W_N^e, where the former includes all vertices with odd total parity in W_N and the latter includes all the vertices with even total parity in the interior of W_N. (Recall that \mathbb{Z}^d is bipartite in exactly this fashion.) The idea is to condition on h^N\big|_{W^o_N}. But obviously each even vertex is exactly surrounded by odd vertices. So by the Gibbs-Markov property, conditional on the odd vertices, the values of the field at the even vertices are independent. Indeed, if for each v\in W_N^e we define \bar h_v to be the average of its neighbours (which is measurable w.r.t. the sigma-algebra generated by the odd vertices), then

\{h_v: v\in W_N^e \,\big|\, \sigma(h_w: w\in W_N^o)\},

is a collection of independent normals with variance one, where the mean of h_v is \bar h_v.
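This conditional independence is easy to verify numerically. The sketch below (entirely my own, in d=2 on a small box, and under the normalisation in which the covariance of the zero-boundary DGFF is the Green’s function (I-P)^{-1} of SRW killed on exiting the box, giving the unit conditional variances used here) computes the conditional covariance of the even sites given the odd sites, and it comes out as the identity matrix:

```python
import numpy as np

N = 4
side = range(-N, N + 1)
verts = [(x, y) for x in side for y in side]       # the box, zero BC outside
idx = {v: i for i, v in enumerate(verts)}

# SRW on the box, killed on exiting: DGFF covariance G = (I - P)^{-1}
P = np.zeros((len(verts), len(verts)))
for (x, y), i in idx.items():
    for nb in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
        if nb in idx:
            P[i, idx[nb]] = 0.25
G = np.linalg.inv(np.eye(len(verts)) - P)

evens = [i for (x, y), i in idx.items() if (x + y) % 2 == 0]
odds = [i for (x, y), i in idx.items() if (x + y) % 2 == 1]

# conditional covariance of the evens given the odds, via the usual Gaussian formula
S = G[np.ix_(evens, evens)] - G[np.ix_(evens, odds)] @ np.linalg.solve(
    G[np.ix_(odds, odds)], G[np.ix_(odds, evens)])
print(np.allclose(S, np.eye(len(evens))))  # True: independent, variance one
```

The identity matrix drops out because \mathbb{Z}^d is bipartite: the even-even block of the precision matrix I-P is exactly the identity.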

To start finding bounds, we fix some threshold m=m_N\gg 1 to be determined later, and consider the odd-measurable event A_N that at most half of the even vertices v have \bar h_v\ge m. So A_N^c\cap \Omega^+(W_N) says that all the odd vertices are non-negative and many are quite large. This certainly feels like a low-probability event, and unlike at (3), we might be able to obtain good tail bounds by projection into one dimension.

In the other case, conditional on A_N, there are a large number of even vertices with conditional mean at most m, and so we can control the probability that at least one is negative as a product

(1-\varphi(m))^{\frac12 |W_N^e|}. (4)

Note that for this upper bound, we can completely ignore the other even vertices (those with conditional mean greater than m).

So we’ll go back to A_N^c \cap \Omega^+(W_N). For computations, the easiest one-dimensional variable to work with is probably the mean of the \bar h_vs across v\in W_N^e, since on A_N^c\cap \Omega^+(W_N) this is at least \frac{m}{2}. Rather than focus on the calculations themselves involving

\bar S^e_N:= \frac{1}{|W_N^e|} \sum\limits_{v\in W_N^e} \bar h_v,

let us remark that it is certainly normal and centred, and so there are many methods to bound its tail, for example

P^0_N \left( \bar S^e_N \ge \frac{m}{2} \right) \le \exp\left( \frac{-m^2}{8\mathrm{Var}(\bar S^e_N)} \right), (5)

as used by Deuschel, which follows from an easy comparison argument within the integral of the pdf. We can tackle the variance using the Green’s function for the random walk (recall the first post in this set). But before that, it’s worth making an observation which is general and useful, namely that \bar S^e_N is the expectation of

S^e_N:= \frac{1}{|W_N^e|}\sum\limits_{v\in W_N^e} h_v

conditional on the odds. Directly from the law of total variance, the variance of any random variable X is always at least the variance of \mathbb{E}[X|Y].

So in this case, we can replace \mathrm{Var}(\bar S^e_N) in (5) with \mathrm{Var}(S^e_N), which can be controlled via the Green’s function calculation.

Finally, we choose m_N so that the probability at (4) matches the probability at (5) in scale, and this choice leads directly to (2).

In summary, we decomposed the event that everything is non-negative into two parts: either there are lots of unlikely local events in the field between an even vertex and its odd neighbours, or the field has to be atypically large at the odd sites. Tuning the parameter m_N allows us to control both of these probabilities in the sense required.

Lower bound via a sparse sub-lattice

To get a lower bound on the probability that the field is non-negative on the annulus, we need to exploit the positive correlations in the field. We use a similar idea to the upper bound. If we know the field is positive and fairly large in many places, then it is increasingly likely that it is positive everywhere. The question is how many places to choose.

We are going to consider a sub-lattice that lives in a slightly larger region than W_N itself, and condition the field to be larger than m=m_N everywhere on this lattice. We want the lattice to be sparse enough that even if we ignore positive correlations, the chance of this happening is not too small. But we also want the lattice to be dense enough that, conditional on this event, the chance that the field is actually non-negative everywhere in W_N is not too small either.

To achieve this, Deuschel chooses a sub-lattice of width \lfloor\epsilon L_N^{2/d}\rfloor, and sets \Lambda_N(\epsilon) to be the intersection of this with the annulus with radii [N-\frac{5}{2}L_N, N-\frac{1}{2}L_N], to ensure it lives in a slightly larger region than W_N itself. The scaling of this sub-lattice density is such that when a random walk is started at any v\in W_N, the probability that the RW hits \Lambda_N(\epsilon) before \partial V_N is asymptotically in (0,1). (Ie, not asymptotically zero or one – this requires some definitely non-trivial calculations.) In particular, for appropriate (ie large enough) choice of \epsilon, this probability is at least 1/2 for all v\in W_N. This means that after conditioning on event B_N:=\{h_v\ge m : v\in \Lambda_N(\epsilon)\}, the conditional expectation of h_w is at least \frac{m}{2} for all w\in W_N\backslash \Lambda_N(\epsilon). Again this uses the Gibbs-Markov property and the Gaussian nature of the field. In particular, this conditioning means we are left with the DGFF on V_N\backslash \Lambda_N(\epsilon), ie with boundary \partial V_N\cup \Lambda_N(\epsilon), and then by linearity, the mean at non-boundary points is given by the harmonic extension, which is linear (and so increasing) in the boundary values.

At this point, the route through the calculations is fairly clear. Since we are aiming for a lower bound on the probability of the event \Omega^+(W_N), it’s enough to find a lower bound on P^0_N(\Omega^+(W_N)\cap B_N).

Now, by positive correlation (or, formally, the FKG inequality) we can bound P^0_N(B_N) below by the product of the probabilities that the field exceeds the threshold at each individual site in \Lambda_N(\epsilon). Since the value of the field at each site is normal with variance at least 1 (by definition), this is straightforward.

Finally, we treat P^0_N(\Omega^+(W_N) \,\big|\, B_N). We’ve established that, conditional on B_N, the mean at each point of W_N\backslash \Lambda_N(\epsilon) is at least \frac{m}{2}, and we can bound the variance above too. Again, this is a conditional variance, and so is at most the corresponding original variance, which is bounded above by \sigma_N^2:=\mathrm{Var}(h^N_0). (This fact that the variance is maximised at the centre is intuitively clear when phrased in terms of occupation times, but the proof is non-obvious, or at least non-obvious to me.)

Since each of the events h_v^N\ge 0 for v\in W_N\backslash \Lambda_N(\epsilon) is positively correlated with B_N and with the others, we can bound the probability that they all hold below by the product of the probabilities that each holds individually. But having established that the conditional mean is at least \frac{m_N}{2} for each v, and the variance is uniformly bounded above (including in N), this gives an easy tail bound of the form we require.

Again it just remains to choose the sequence of thresholds m_N to maximise the lower bound on the probability that we’ve found in this way. In both cases, it turns out that taking m_N= \sqrt{C\log N} is sensible, and this turns out to be linked to the scaling of the maximum of the DGFF, which we will explore in the future.

References

[1] – J-D Deuschel, Entropic Repulsion of the Lattice Free Field, II. The 0-Boundary Case. Available at ProjectEuclid.

Questionable Statistics

My four years in the Statistics Department have been characterised by a continual feeling that I don’t know enough about statistics. That said, I can’t help but notice how statistics are used in non-specialist contexts, and of course there are times when eyebrows might reasonably be raised. Andrew Gelman (who gave a very enjoyable departmental seminar on this topic in May) has a notable blog critiquing poor statistical methodology, especially in social science; and the blogosphere is packed with compilations of egregious statistical oxymorons (“All schools *must* be above average” etc).

I’m aiming for more of a middle ground here. I’ve picked three examples of articles I found interesting at some level over the past few months. In each of them, some kind of data presentation or experiment design arose, and in all cases, I think it resulted in more questions rather than answers, and not really in a good way, since in all cases I did actually want to know more about what was going on.

Extreme rounding errors

This Guardian article about inequality in career trajectories for teachers in schools serving more advantaged and more deprived areas raises a lot of interesting questions, and actually proposes answers to several of them. The problem is the double bar chart halfway down.

First a superficial moan. Rounding to the nearest integer is fine, indeed sensible in most contexts, but not when you are treating small numbers. Comparing the extreme bars, we could have 11.5 v 12.5 or we could have 10.5 v 13.5. The headlines would probably read “N% more teachers leave disadvantaged schools”. In the first case N is 9, in the second it’s 29. So it’s not really a superficial moan. Data is supposed to offer a measure of the effect under consideration, and a discussion of how to treat this effect requires a veneer of reliability about the size of the effect. We don’t just need to be asking “how can we solve this”; we also need to be asking “is this worth solving?” I’m not saying the answer is no, but based on this graph alone the answer is not ‘definitely yes’. [1]

A question to ask in the context of this report is “why do more teachers leave the profession from schools in deprived areas?” But I think most people could speculate a broadly sensible answer to this. The data raises the more subtle question “why are the rates of leaving the profession so similar across all types of school?” Essentially I think this survey is a negative result – the effect just isn’t as strong as I’d have suspected, nor, I imagine, had the authors or the Guardian education editorship.

By contrast, even with the amateurish presentation, the rates of moving school clearly are significant and worth talking about based on this survey. Significant data demands further information though. Once an effect is significant, we need to know more details, like to what extent the teachers are moving to schools in the same band, or mostly towards the left end of the graph. Again though, there’s no comment on the magnitude of this effect. Assuming the rate is given per year [2], then among teachers who do not leave the profession, the average tenure of a given job is more than ten years, even in the most deprived category. Maybe this is all very heavy-tailed, and so the average isn’t a good thing to be using. But this doesn’t seem like an unreasonable number to me? If the claim is that this is a major problem for the teaching profession, then we should be told what the numbers are in vaguely comparable industries.

The final thing is pretty bad. Three paragraphs under the graph, it is claimed that the rate of leaving is 70% higher in most- than least-deprived categories. We decided that N was somewhere between 9 and 29. But not 70. I can only assume that this came from the ‘moving’ category instead…

[1] I’m assuming the sample pool was large enough that the effect is statistically significant, because there isn’t a direct link to the source data or even the source report unfortunately.

[2] and is it plausible that this is rate per year? Does the profession really experience ten percent annual turnover? Is it even plausible that the scale is the same? Maybe rate of leaving really is twice rate of moving school, and I have no idea about anything, but this seems questionable.

Many, many categories

The bank Halifax commissions an annual survey about pocket money, and this year’s received widespread coverage. My attention was caught by this article on BBC news. The short summary is that they surveyed about 1,200 children from about 600 families (so in many cases a direct comparison between siblings could have been possible), and it sounds like they restricted to the age range 8-15, and asked how much pocket money they received, and whether they wanted more.

The clickbait summary was that boys receive about 13% more than girls. While it doesn’t give the exact numbers, it also mentions that more boys than girls thought they deserved more money. A psychologist conjectures that this latter effect might explain the former effect, and indeed this seems plausible. The problem is that elsewhere in the article, it says that in the corresponding survey in 2015, boys receive on average only 2% more than girls.

We therefore have to consider a couple of things. 1) Is it plausible that the actual average pocket money rates (among the whole population) have fluctuated in such a gender-dependent way in 12 months? 2) If not, then can we still say something useful, even if our confidence in the methodology of the survey is reduced?

I think the answer to the first question is definitely ‘no’. So we come to the second question. Firstly, how could this have arisen? Well, there’s no indication how the children were chosen, but equally it’s hard to think of a way of choosing them that would increase artificially the boys’ pocket money. In this last sentence, I’m deliberately suggesting the null hypothesis that boys and girls are the same. This is misleading. For now, our null hypothesis is that 2015 should be the same as 2016. Indeed, it’s equally plausible that this year’s summary data was ‘more correct’, and so we are asking why the difference was excessively compressed in 2015.

This is entirely a matter of effect size, and it’s to the article’s credit that so much information is given further down the page that we can actually make the following remarks quantitatively. At the bottom, they show the average pay (across genders) in different regions. There is a lot of variation here. The rate of pay in East Anglia is only 60% what it is in London. At some level, I’m surprised it isn’t even higher. The article tells us that children between the ages of eight and fifteen were surveyed. By what factor did your pocket money change over that interval? Surely it doubled at the very least? Weekly pocket money at age eight is for sweets and an occasional packet of Pokemon cards (substitute 21st century equivalent freely…). Teenagers need to pay for outings and clothes etc, so there’s no reason at all why this should be comparable. For the purposes of definiteness in the next calculation, let’s say that fifteen-year-olds get paid four times as much as eight-year-olds on average, which I think is rather conservative.

So here’s a plausible reason for the fluctuations. Suppose between 2015 and 2016, the survey substituted 10 eight-year-old boys from the Midlands for 10 fifteen-year-old boys from London. The expected change in the average amongst the 600 boys surveyed is

\frac{10}{600} \times (\sim 6.5) \times (4 \times (0.6)^{-1}) \approx 0.7,

in pounds, which is on the order of the actual change from 2015 to 2016. Choosing the participants in a non-uniform way seems like too basic a mistake to make, but mild fluctuations in the distribution of ages and locations, as well as the occasional outlier (they want to use the mean, so one oligarch’s daughter could make a noticeable difference in a sample size of 600) seem more plausible as an explanation to me than a general change in the population rates. Choosing participants in a uniform way is just hard when there are loads and loads of categories.

I’m not really saying that this survey is bad – they have to report what they found, and without a lot more information, I have no idea whether this year’s is more or less plausible than last year’s. But if you add the phrase “especially in 2016” to everything the psychologist says, it suddenly seems a bit ridiculous. So it would be worth making the remark that even if this effect is statistically significant, that doesn’t mean the effect size is large relative to lots of other less controversial effect sizes visible in the data.

Comparing precious metals

I’ve recently returned from this year’s International Mathematical Olympiad, and now we are well into the swing of its sweatier namesake in Rio. At both events, there have been many opportunities to observe how different people with different ambitions and levels of extroversion display different levels of pleasure at the same medal outcomes. In the light of this, I was drawn to this article, which has been getting quite a lot of traction online.

The basic premise is simple. Silver medallists are less happy than bronze medallists (when – unlike at the IMO – there is only one of each medal), and it’s not hard to come up with a cheap narrative: silver-winners are disappointed to have missed out on gold; bronze-winners are glad to have got a medal at all. This is all pretty convincing, so maybe there’s no need actually to collect any data, especially since asking a crowd of undergraduates to rate faces on the 1-10 agony-ecstasy scale sounds like a bit of an effort. Let’s read a couple of sentences, describing a follow-up study on judo medallists at the Athens Olympics:

Altogether, they found that thirteen of the fourteen gold medal winners smiled immediately after they completed their winning match, while eighteen of the twenty-six bronze medalists smiled. However, none of the silver medalists smiled immediately after their match ended.

You might wonder how come there are almost twice as many bronzes as golds. Well, very full details about the judo repechage structure can be found here, but the key information is that all bronze-medallists won their final match, as, naturally, did the gold-medallists. The silver-medallists lost their final match, ie the gold-medal final. So this study is literally reporting that highly-driven competitive sportsmen are happier straight after they win, than straight after they lose. This doesn’t have much to do with medals, and isn’t very exciting, in my opinion. I would, however, be interested to know what was eating the one gold medallist who didn’t smile immediately.

This isn’t really a gripe about the statistics, more about the writing. They are testing for an effect which seems highly plausible, but is branded as surprising. The study for which they give more methodological details in fact seems to be testing for a related effect, which is not just plausible, but essentially self-evident. So naturally I want to know more about the original study, which is the only place left for there to be any interesting effects, for example if they looked at athletics events which don’t have a binary elimination structure. But we aren’t told enough to find these interesting effects, if they exist. How annoying. The only thing left to think about is my bronze medal at IMO 2008. They said eight years would be enough for the wounds to heal but I’m still not smiling.

IMO 2016 Diary – Part Four

A pdf of this report is also available here.

Thursday 14th July

I have now spent a while thinking about square-free n in Q3 after rescaling, and I still don’t know what the markscheme should award it. I therefore request that Joe and Warren receive the same score as each other, and any other contestant who has treated this case. In my opinion this score should be at most one, mainly as a consolation, but potentially zero. However, we are offered two, and after they assure me this is consistent, I accept.

There is brief but high drama (by the standards of maths competitions) when we meet Angelo the Australian leader, who confirms that he has just accepted one mark for almost the same thing by his student Johnny. A Polish contestant in a similar situation remains pending, so we all return for a further meeting. I’m unconvinced that many of the coordinators have read all the scripts in question, but they settle on two for everyone, which is consistent if generous. The only drama on Q5 is the ferocious storm that sets in while I’m making final notes in the plaza. Again though, coordinator Gabriele has exactly the same opinion on our work as Geoff and I, apart from offering an additional mark for Lawrence’s now slightly damp partial solution.

And so we are finished well before lunch, with a total UK score of 165 looking very promising indeed. I’m particularly pleased with the attention to detail – Jacob’s 6 on Q4 is the only mark ‘dropped’, which is brilliant, especially since it hasn’t come at the expense of the students’ usual styles. We’ll have to wait until later to see just how well we have done.

It would be nice to meet the students to congratulate them in person, but they are with Jill on the somewhat inaccessible Victoria Peak, so instead I take a brief hike along the trail down the centre of HK Island, ending up at the zoo. This turned out to be free and excellent, though I couldn’t find the promised jaguar. There was, however, a fantastic aviary, especially the striking flock of scarlet ibis. A noisy group of schoolchildren are surrounding the primates, and one lemur with an evil glint in his eye swings over and languidly starts an activity which elicits a yelp from the rather harried teacher, who now has some considerable explaining to do.

With 1000 people all returning to UST at roughly 6.30, dinner is not dissimilar to feeding time at the zoo, and afterwards various leaders lock horns during the final jury meeting. Two countries have brought an unresolved coordination dispute to the final meeting, and for the first time since I became deputy leader, one of them is successful. Congratulations to the Koreans, who now have a third student with a highly impressive perfect score. Andy Loo and Geoff chair the meeting stylishly and tightly, and although there are many technical things to discuss, it doesn’t drag for too long. Eventually it’s time to decide the medal boundaries, and the snazzy electronic voting system makes this work very smoothly. I feel the gold and bronze cutoffs at 29 and 16 are objectively correct, and the 50-50 flexibility at silver swings towards generosity at 22. We can now confirm the UK scores as:

[Table of UK scores]

This is pretty much the best UK result in the modern era, placing 7th and with a medal tally tying with the famous food-poisoning-and-impossible-geometry IMO 1996 in India. But obviously this is a human story rather than just a 6×6 matrix with some summary statistics, and Harvey in particular is probably looking at the world and thinking it isn’t fair, while Warren’s gold is the ideal end to his four years at the IMO, two of which have ended one mark short. The American team are pretty keen to let everyone know that they’ve placed first for the second year in succession, and their remarkable six golds will hopefully allow scope for some good headlines. There is much to talk about, celebrate and commiserate, and this continues late into the night.

Friday 15th July

Our morning copy of the IMO Newsletter includes an interview with Joe, with the headline ‘Meh’. Frank Morgan has rather more to say, which is good news, since he’s delivering the IMO lecture on Pentagonal Tilings. He discusses the motivation of regular tilings where the ratio Perimeter/Area is minimised, starting from questions about honeycombs raised by the Roman author Varro! We move onto more mathematical avenues, including the interesting result of L’Huilier that given a valid set of angles, the associated polygon with minimal Perimeter/Area has an incircle, and the corresponding result for in-n-spheres in higher dimension. A brief diversion to the beach on the way home is punctuated with attempts to project the hyperbolic plane onto the sand.

The day’s main event is the closing ceremony, held at the striking Hong Kong Convention Centre. As usual, the adults and our students have been vigorously separated for the journey. As I arrive, it seems the UK boys have been directing a massed gathering behind the EU flag on stage, while the non-European teams are divided into two sides in a giant paper aeroplane dogfight. All attempts by the organisers to quash this jocularity are being ignored, and after bringing everyone here two hours early, I have minimal sympathy. Geoff sits on a secluded bench, and agrees to the many selfie requests from various teams with regal if resigned tolerance.

The ceremony is started by a fantastically charismatic school brass band, and proceeds with some brief speeches, and more astonishing drumming. Then it’s time to award the medals. Lawrence and Jacob get to go up together among the clump of 24-scorers, while Kevin from Australia does an excellent job of untangling his flag and medal while keeping hold of the ubiquitous cuddly koala. Neel has been threatened with death if he appears on stage again with an untucked shirt, but no direction is required for his and Warren’s smiles as they receive the gold medallists’ applause.

Afterwards, there is a closing banquet. We get to join British coordinators James and Joseph for a climate-defying carrot soup, followed by a rare diversion onto Western carbohydrates accompanying what is, for many of us, a first taste of caviar. Both Geoff and the American team are forced to make speeches at no notice. It is all generally rather formal, and fewer photographs are taken than usual. An attempt to capture Joe and Harvey looking miserable results in one of the biggest grins of the evening. The UK and Australian teams have a thousand stickers and micro-koalas to give out as gifts, and some of the attempts at this descend into silliness. All clothing and body parts are fair game, and Jacob makes sure that Geoff is fully included. The UK and Australian leaders, variously coated, retreat from the carnage to the relative safety of our top-floor balcony as the IMO drifts to an end, until midnight, when it seems sensible to find out what the students are up to.

Saturday 16th July

This is what the students are up to. When we arrived at UST last week, everyone was given food vouchers to redeem at the campus’s various restaurants. Very very many of these are left over, and, despite the haute cuisine on offer earlier, people are hungry. They have therefore bought McDonalds. And I mean this literally. Animated by Jacob and American Michael, they have bought the entire stock of the nearest branch. If you want to know what 240 chicken nuggets looks like, come to common room IX.1, because now is your chance. Fortunately our team have made many friends and so after the Herculean task (I make no comment on which Herculean labour I feel this most resembles) of getting it to their common room, pretty much the entire IMO descends to help. Someone sets up a stopmotion of the slow erosion of the mountain of fries, while the usual card games start, and a group around a whiteboard tries to come up with the least natural valid construction for n=9 on Q2. Around 3.30am everything is gone, even the 30 Hello Kitties that came with the Happy Meals, and we’re pre-emptively well on the way to beating jetlag.

I wake up in time to wave Geoff off, but he’s been bumped to an earlier bus, so the only thing I see is Lawrence and colleagues returning from a suicidal 1500m round the seaside athletics track. Our own departure is mid-morning, and on the coach the contestants are discussing some problems they’ve composed during the trip. They’ll soon be able to submit these, and by the sounds of it, anyone taking BMO and beyond in 2018 has plenty to look forward to. Jacob has already mislaid his room key and phone, and at the airport he’s completed the hat-trick by losing one of the two essential passport insert pages. Fortunately, it turns out that he’s lost the less essential one, so we can clear security and turn thoughts towards lunch.

Jill has given me free licence to choose our dim sum, so the trip ends with pork knuckle and chicken feet. Our aim is to stay awake for the whole flight, and Neel helps by offering round copies of a Romanian contest from 2010, while I start proof-reading. By the time they finish their paper, many rogue commas have been mercilessly expunged. It should be daylight outside, but the windows are all shut, and by the ninth hour time starts to hang drowsily in a way that combinatorial geometry cannot fix, and so the mutual-waking-up pact kicks in, aided by Cathay Pacific’s unlimited Toblerone. Winding through Heathrow immigration, Joe unveils his latest airport trick of sleeping against vertical surfaces. We diverge into the non-humid night.

Reflection

There’s a great deal more to life and mathematics than problem-solving competitions, but our contestants and many other people have worked hard to prepare for IMO 2016 over the past months (and years). So I hope I’m allowed to say that I’m really pleased for and proud of our UK team for doing so well! The last three days of an IMO are very busy and I haven’t had as much time as I’d have liked to talk in detail about the problems. But I personally really liked them, and thought the team showed great taste in choosing this as the British annus mirabilis in which to produce lots of beautiful solutions.

But overall, this is really just the icing on the cake of a training programme that’s introduced lots of smart young people to each other, and to the pleasures of problem-solving, as well as plenty of interesting general mathematics. I have my own questions to address, and (unless I’m dramatically missing something) these can’t be completed in 4.5 hours, but as ever I’ve found the atmosphere of problem discussion totally infectious, so I hope we are doing something right.

Lawrence and Warren are now off to university. I’m sure they’ll thrive in every way at this next stage, and hopefully might enjoy the chance to contribute their energy and expertise to future generations of olympiad students. The other four remain eligible for IMO 2017 in Brazil, and while they will doubtless have high personal ambitions, I’m sure they’ll also relish the position as ideal role models for their younger colleagues over the year ahead. My own life will be rather different for the next two years, but our camp for new students is held in my no-longer-home-town Oxford in a few weeks’ time, and I’m certainly feeling excited about finding some new problems and doing as much as possible of the cycle all over again!