DGFF 4 – Properties of the Green’s function

I’m at UBC this month for the PIMS probability summer school. One of the long courses is being given by Marek Biskup about the Discrete Gaussian Free Field (notes and outline here) so this seems like a good moment to revive the sequence of posts about the DGFF. Here’s DGFF1, DGFF2, DGFF3 from November.

The first draft of this post was about the maximum of the DGFF in a large box V_N, and also about the Green’s function G^{V_N}(x,y), which specifies the covariance structure of the DGFF. This first draft also became too long, so I’m splitting it into two somewhat shorter ones. As we’ll see, some understanding and standard estimates of the Green’s function are enough to say quite a bit about the maximum. In this first post, we’ll explore some ‘low-hanging fruit’ concerning the Green’s function, as defined through a simple random walk, which are useful, but rarely explained in the DGFF literature.

Symmetry of Green’s function

We start with one of these low-hanging fruit. If G^{V_N} is to be a covariance matrix, it has to be symmetric. In the first post, showing that the definition of the DGFF as a random field with given Hamiltonian is equivalent to \mathcal{N}(0,G^{V_N}) can certainly be viewed as a proof of symmetry. However, it would be satisfying if there were a direct argument in the language of the definition of the Green’s function.

To make this self-contained, recall the random walk definition of G^{V_N}(x,y). Let (S_m)_{m\ge 0} be simple random walk on V_N, and \mathbb{P}_x,\,\mathbb{E}_x denote starting the random walk at x\in V_N. As usual, let \tau_y,\,\tau_A denote the hitting time of a vertex y or a set A respectively. Then

G^{V_N}(x,y):= \mathbb{E}_x \left[ \sum_{m=0}^{\tau_{\partial V_N}}1_{(S_m=y) }\right].

That is, G^{V_N}(x,y) is the expected number of visits to y by a random walk from x, before it exits V_N.
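To make this concrete, here is a minimal numerical sketch (my own, with a hypothetical small box size, and with the boundary convention that the walk is killed on its first step outside the box): the matrix of expected visit counts is just (I-Q)^{-1}, where Q is the killed transition kernel. This also gives a direct check of the symmetry property discussed in the next section.

```python
import numpy as np

# Sketch: G^{V_N} for SRW on a small 2D box, killed on stepping outside.
N = 6                                            # hypothetical small box side
sites = [(i, j) for i in range(N) for j in range(N)]
idx = {v: k for k, v in enumerate(sites)}

Q = np.zeros((len(sites), len(sites)))           # killed transition kernel
for (i, j) in sites:
    for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        if (i + di, j + dj) in idx:              # steps leaving the box are killed
            Q[idx[(i, j)], idx[(i + di, j + dj)]] = 0.25

G = np.linalg.inv(np.eye(len(sites)) - Q)        # G[x, y] = expected visits to y from x
print(np.abs(G - G.T).max())                     # ~ 1e-16: G is symmetric
```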

Let’s drop the superscript for now, as everything should hold for a more general subset of the lattice. I don’t think it’s immediately obvious at the level of Markov chains why G(x,y)=G(y,x). In particular, it’s not the case that

\mathbb{P}_x(\tau_y < \tau_{D^c}) = \mathbb{P}_y(\tau_x <\tau_{D^c}),

and it feels as if we can’t map between paths x \to \partial D and paths y\to \partial D in a way that preserves the number of visits to y and x, respectively. However, we can argue that for any m

\mathbb{P}_x(S_m=y, \tau_{D^c}>m) = \mathbb{P}_y(S_m=x, \tau_{D^c}>m),

by looking at suitable paths of (S_m). That is, if we have a path x=S_0,S_1,\ldots,S_m=y that stays within D, then the probability of seeing this path started from x and the probability of seeing its reversal started from y are equal. Why? Because

\mathbb{P}_x(S_0=x,S_1=v_1,\ldots,S_{m-1}=v_{m-1},S_m=y) = \prod_{\ell=0}^{m-1} \frac{1}{\mathrm{deg}(v_\ell)},\qquad \text{where } v_0:=x,\ v_m:=y,


\mathbb{P}_y(S_0=y,S_1=v_{m-1},\ldots,S_{m-1}=v_1, S_m=x) = \prod_{\ell=0}^{m-1} \frac{1}{\mathrm{deg}(v_{m-\ell})} = \prod_{\ell=1}^m \frac{1}{\mathrm{deg}(v_\ell)}.

Since D\subset \mathbb{Z}^d and x,y are in the interior of D, we must have \mathrm{deg}(x)=\mathrm{deg}(y), and so these two expressions are equal. Summing over all such two-way paths, and then over all m, gives the result.

Fixing one argument

We now focus on G^D(\cdot,y), where the second argument is fixed. This is the solution to the Poisson equation

\Delta G^D(\cdot,y) = -\delta_y(\cdot),\quad G^D(x,y)=0,\; \forall x\in \partial D.

To see this, one can use a standard hitting probability argument (as here) with the Markov property. This is harmonic in D\backslash \{y\}, and since we know

G^D(y,y)= \frac{1}{\mathbb{P}_y(\text{RW hits }\partial D\text{ before returning to }y)},

this uniquely specifies G^D(\cdot,y). Anyway, since harmonic functions achieve their maxima at the boundary, we have G(y,y)\ge G(x,y) for all x\in D. We can also see this from the SRW definition as

G(x,y)=G(y,x) = \mathbb{P}_y (\tau_x < \tau_{\partial D} ) G(x,x) \le G(x,x).

Changing the domain

Now we want to consider nested domains D\subset E, and compare G^D(\cdot,\cdot) and G^E(\cdot,\cdot) on D\times D. The idea is that for SRW started from x\in D, we have \tau_{\partial D}\le \tau_{\partial E}, since the walk must leave D before it can leave E. From this, we get

G^D(x,y)\le G^E(x,y),\quad \forall x,y\in D,

and we will use the particular case y=x.

For example, if x\in V_N, the box with width N, then the box with width 2N centred on x contains the whole of V_N. So, if we set \bar {V}_{2N}:= [-N,N]^d, then by translation invariance and the domain comparison above, we have

G^{V_N}(x,x)\le G^{\bar{V}_{2N}}(0,0),\quad x\in V_N.

As we’ll see when we study the maximum of the DGFF on V_N, uniform control over the pointwise variance will be a useful tool.

Maximising the Green’s function

The idea of bounding G^{V_N}(x,x) by G^{\bar V_{2N}}(0,0) for any x\in V_N is clever and useful. But a more direct approach would be to find the value of x that maximises G^{V_N}(x,x). We would conjecture that when V_N has a central vertex, then this is the maximiser.

We can prove this directly from the definition of the Green’s function in terms of random walk occupation times. Let’s assume we are working with \bar{V}_N for even N, so that 0 is the central vertex. Again, since

G^D(x,x)=\frac{1}{\mathbb{P}_x(\text{RW hits }\partial D\text{ before returning to }x)}, (*)

it would suffice to show that this probability is minimised when x=0. This feels right, since 0 is furthest from the boundary. But other points are closer to the boundary in some directions and further away in others, so we can’t argue this simply by conditioning on the maximum distance from its start point achieved by an excursion of SRW (by vertex-transitivity, these excursions look the same from all starting points): even allowing for the four possible rotations of such an excursion, when this maximum distance is slightly larger than N, starting at the centre is maximally bad.
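(Before doing any work, here is a quick numerical check of the conjecture, in the same spirit as the earlier sketch and again with a hypothetical small N: compute the diagonal of the Green’s function on \bar{V}_N exactly, and confirm that it is maximised at the central vertex.)

```python
import numpy as np

# Sketch: diagonal of the Green's function on the box [-N, N]^2, N even.
N = 4                                            # hypothetical small even N
interior = [(i, j) for i in range(-N + 1, N) for j in range(-N + 1, N)]
idx = {v: k for k, v in enumerate(interior)}

Q = np.zeros((len(interior), len(interior)))
for (i, j) in interior:
    for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        if (i + di, j + dj) in idx:              # moves onto the boundary are killed
            Q[idx[(i, j)], idx[(i + di, j + dj)]] = 0.25

G = np.linalg.inv(np.eye(len(interior)) - Q)
diag = {v: G[k, k] for v, k in idx.items()}
print(max(diag, key=diag.get))                   # expect the centre (0, 0)
```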

However, intuitively it does feel as if being closer to the boundary makes you more likely to escape earlier. In fact, with a bit more care, we can couple the SRW started from 0 and the SRW started from r=(r^x,r^y)\ne 0 such that the latter always exits at least as early. For convenience we’ll assume also that r^x,r^y are both even.

I couldn’t find any reference to this, so I don’t know whether it’s well-known or not. The following argument involves projecting into each axis, and doing separate couplings for transitions in the x-direction and transitions in the y-direction. We assume WLOG that r is in the upper-right quadrant as shown. Then, let 0=S_0,S_1,S_2,\ldots be SRW started from 0, and we will construct r=R_0,R_1,R_2,\ldots on the same probability space as (S_m)_{m\ge 0} as follows. For every m, we set the increment R_{m+1}-R_m to be \pm(S_{m+1}-S_m). It remains to specify the sign, which will be determined by the direction of the S-increment, and a pair of stopping times. The marginal is therefore again an SRW, started from r. Temporarily, we use the unusual notation S_m= (S^x_m,S^y_m) for the coordinates of S_m.

So, if S_{m+1}-S_m=(1,0), (-1,0), ie S moves left or right, then we set

R_{m+1}-R_m = \begin{cases} -(S_{m+1}-S_m) &\quad \text{if }m<T^x\\ S_{m+1}-S_m&\quad \text{if }m\ge T^x.\end{cases} (*)

where T^x:= \min\{m\,:\, R^x_m=S^x_m\}. That is, R^x moves in the opposing direction to S^x until the first time when they are equal (hence the parity requirement), and then they move together. WLOG assume that r^x>0. Then suppose S^x_m=\pm N and such m is minimal. Then by construction, if m\ge T^x, then R^x_m=\pm N also. If m<T^x, then we must have S^x_m=-N, and so since R^x’s trajectory is a mirror image of S^x’s, in fact R^x_m = N+r^x>N, so R^x hit +N first. In both cases, we see that R^x hits \pm N at the same time or before S^x.

In other words, up to time T^x the (lazy) coordinate process R^x is the mirror image of S^x, reflected in the level r^x/2; the parity condition on r ensures that r^x/2 is an integer, so the two coordinate processes genuinely meet, rather than jumping over one another. From time T^x onwards, that is once they have met, the two x-coordinates move together.

Similarly, if S_{m+1}-S_m =(0,1), (0,-1), ie S moves up or down, then we set

R_{m+1}-R_m = \begin{cases} -(S_{m+1}-S_m)&\quad \text{ if }m<T^y\\  S_{m+1}-S_m&\quad \text{if }m\ge T^y,\end{cases}

with corresponding definition of the stopping time T^y.

This completes the coupling. Since in each coordinate the projection of R hits \pm N no later than the corresponding projection of S, and the exit time of the box is the minimum of the two coordinate exit times, we have shown that the exit time for the walk started from zero dominates the exit time for the walk started from r. Recall that so far we are in the case where the box has even width and r=(r^x,r^y) has even coordinates.
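(Here is a small Monte Carlo sketch of the coupling just described, with hypothetical parameters N=10 and r=(4,2). It proves nothing, but it is a useful sanity check that the construction does what is claimed: the walk started from r never exits later than the walk started from 0.)

```python
import random

def coupled_exit_times(N, r, max_steps=10**6):
    """One realisation of the coupling: S starts at the origin, R at r (non-negative
    even coordinates); in each coordinate, R's increments mirror S's until the two
    coordinate processes first agree, and copy them afterwards. Returns the exit
    times of [-N, N]^2 for S and R, or None if neither run finishes in time."""
    S, R = [0, 0], list(r)
    met = [S[i] == R[i] for i in range(2)]
    tS = tR = None
    for m in range(1, max_steps + 1):
        axis = random.randrange(2)               # which coordinate the SRW moves in
        step = random.choice((-1, 1))
        S[axis] += step
        R[axis] += step if met[axis] else -step
        if R[axis] == S[axis]:
            met[axis] = True
        if tS is None and max(abs(S[0]), abs(S[1])) == N:
            tS = m
        if tR is None and max(abs(R[0]), abs(R[1])) == N:
            tR = m
        if tS is not None and tR is not None:
            return tS, tR
    return None

random.seed(1)
runs = [coupled_exit_times(10, (4, 2)) for _ in range(500)]
pairs = [p for p in runs if p is not None]
print(all(tR <= tS for tS, tR in pairs))         # True: R never exits later than S
```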

This exit time comparison isn’t exactly what we need to compare G^{\bar V_N}(0,0) and G^{\bar V_N}(r,r). It’s worth remarking at this stage that if all we cared about was the Green’s function on the integer line [-N,N], we would have an easier argument, as by the harmonic property of G^{[-N,N]}(\cdot,r)


G^{[-N,N]}(r,0) = \frac{N}{N+r}G^{[-N,N]}(r,r),

and since the same argument with the roles of 0 and r exchanged gives G^{[-N,N]}(r,0)=\frac{N-r}{N}G^{[-N,N]}(0,0), comparing the two expressions yields G(0,0)=\frac{N^2}{N^2-r^2}G(r,r)>G(r,r). To lift from 1D to 2D directly, we need a bit more than this. It’s possible that each coordinate of S returns (to 0) more often than the corresponding coordinate of R returns (to r^x or r^y), but never both at the same time. Fortunately, the coupling we defined slightly earlier does give us a bit more control.

Let \tau^x(S), \tau^x(R) be the first times that S^x, R^x hit \pm N. Under this coupling, for any m\ge 0

\mathbb{P}(S^x_m=0, m<T^x) = \mathbb{P}(R^x_m=r^x, m<T^x)

since these events are literally equal. Since we showed that \tau^x(R)\le \tau^x(S) almost surely, we can further deduce

\mathbb{P}(S^x_m=0,m<T^x\wedge \tau^x(S)) \ge \mathbb{P}(S^x_m=0,m<T^x\wedge \tau^x(R))

=\mathbb{P}(R^x_m=r^x, m <T^x \wedge \tau^x(R)).

To address the corresponding events for which m\ge T^x, we apply the strong Markov property at T^x, to obtain SRW Z_m started from r^x/2 (for this one-dimensional argument we now write r for r^x), and let \tau_{-N},\tau_{+N} be the hitting times of -N,+N respectively and \tau_{\pm N}=\tau_{-N}\wedge \tau_{+N}. It will now suffice to prove that

\mathbb{P}(Z_m=0, m< \tau_{\pm N}) \ge \mathbb{P}(Z_m=r,m<\tau_{\pm N}), (**)

as then we can apply the law of total probability and sum over values of T^x and m\ge 0.

To prove this result, we consider the following bijection between trajectories of length m from r/2 to {0,r}. We decompose the trajectories into excursions away from r/2, and then a final meander from r/2 to {0,r} that stays on the same side of r/2. We construct the new trajectory by preserving all the initial excursions, but reversing all the steps of the final meander. So if the original trajectory ended up at 0, the image ends up at r. Trivially, the initial excursions in the image only hit \pm N if the excursions in the original trajectory did this too. But it’s also easy to see, by a similar argument to the coupling at the start of this section, that if the original trajectory ends at r and does not hit \pm N, then the image (which ends at 0) does not hit \pm N either. However, the converse is not true. So we conclude (**), and thus

\mathbb{P}(S_m^x=0,\, m<\tau^x(S)) \ge \mathbb{P}(R_m^x=r^x,\, m<\tau^x(R))

for all m, by combining everything we have seen so far. And so, by considering both coordinates separately (conditional on the times at which the walk moves horizontally, the two coordinate projections are independent), we can now lift this to the corresponding statement about S_m itself.


The remaining cases for r require a little more care over the definition of T^x, though the same projection argument works, for fundamentally the same reason. (Note that in the above argument, if S^x_m=-N and m<T^x, then in fact R^x_m\ge N+2, and so it’s not hard to convince yourself that a sensible adjustment to the stopping time will allow a corresponding result with R^x_m\ge N+1 in the odd r^x case.) The case for N odd is harder, since in one dimension there are two median sites, and it’s clear by symmetry that we can’t couple them such that RW from one always exits at least as early as RW from the other. However, the distributions of exit times started from these two sites are the same (by symmetry), and so although we can’t find a coupling, we can use similar stopping times to obtain a result in probability.

In the next post, we’ll see how to apply this uniform bound on G^{V_N}(x,x) to control the maximum of the DGFF on V_N. In particular, we address how the positive correlations of DGFF influence the behaviour of the maximum by comparison with independent Gaussians at each site.

Generating uniform trees

A long time ago, I wrote quite a few things about uniform trees. That is, a uniform choice from the n^{n-2} unrooted trees with vertex set [n]. This enumeration, normally called Cayley’s formula, has several elegant arguments, including the classical Prufer bijection. But making a uniform choice from a large set is awkward, and so we seek more probabilistic methods to sample such a tree, which might also give insight into the structure of a ‘typical’ uniform tree.

In another historic post, I talked about the Aldous-Broder algorithm. Here’s a quick summary. We run a random walk on the complete graph K_n started from a uniformly-chosen vertex. Every time we arrive at a vertex we haven’t visited before, we record the edge just traversed. Eventually we have visited all n vertices, so have recorded n-1 edges. It’s easy enough to convince yourself that these n-1 edges form a tree (how could there be a cycle?) and a bit more complicated to decide that the distribution of this tree is uniform.

It’s worth noting that this algorithm works to construct a uniform spanning tree on any connected base graph.
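(For concreteness, here is a short sketch of the algorithm as just described, taking the base graph as an adjacency dictionary. The example at the end runs it on K_5, but, as noted, any connected graph works.)

```python
import random

def aldous_broder(adj):
    """Aldous-Broder: run a random walk on a connected graph (given as
    {vertex: list of neighbours}) from a uniform starting vertex, and record
    the edge along which each other vertex is first reached."""
    v = random.choice(list(adj))
    visited = {v}
    tree = []
    while len(visited) < len(adj):
        w = random.choice(adj[v])
        if w not in visited:
            visited.add(w)
            tree.append((v, w))                  # edge used to discover w
        v = w
    return tree

# A uniform spanning tree of K_5, i.e. a uniform labelled tree on 5 vertices.
n = 5
K_n = {v: [u for u in range(n) if u != v] for v in range(n)}
print(aldous_broder(K_n))
```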

This post is about a few alternative constructions and interpretations of the uniform random tree. The first construction uses a Galton-Watson process. We take a Galton-Watson process where the offspring distribution is Poisson(1), and condition that the total population size is n. The resulting random tree has a root but no labels; however, if we assign labels in [n] uniformly at random, the resulting rooted tree has the uniform distribution among rooted trees on [n].


This is all about moving from ordered trees to non-ordered trees. That is, when setting up a Galton-Watson tree, we distinguish between the following two trees, drawn extremely roughly in Paint:

That is, it matters which of the first-generation vertices have three children. Anyway, for such a (rooted) ordered tree T with n vertices, the probability that the Galton-Watson process ends up equal to T is

\mathbb{P}(GW = T) = \prod_{v\in T} \frac{e^{-1}}{C(v)!} = e^{-n} \prod_{v\in T}\frac{1}{C(v)!},

where C(v) is the number of children of a vertex v\in T. Then, since \mathbb{P}( |GW|=n ) is a function of n, we find

\mathbb{P}(GW=T \,\big|\, |GW|=n) = f(n)\prod_{v\in T} \frac{1}{C(v)!},

where f(n) is a function of n alone (ie depends on T only through its size n).

But given an unordered rooted tree t, labelled by [n], there are \prod_{v \in t} C(v)! ordered trees associated to t in the natural way. Furthermore, if we take the Poisson Galton-Watson tree conditioned to have total population size n, and label uniformly at random with [n], we obtain any one of these ordered trees with probability \frac{f(n)}{n!} \prod_{v\in t} \frac{1}{C(v)!}. So the probability that we have t after we forget about the ordering is \frac{f(n)}{n!}, which is a function of n alone, and so the distribution is uniform among the set of rooted unordered trees labelled by [n], exactly as required.
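(Here is a sketch of this construction in practice: rejection-sample a Poisson(1) Galton-Watson tree conditioned to have exactly n vertices, then attach uniformly random labels. The rejection step is crude, but fine for the small hypothetical n used here.)

```python
import math
import random

def poisson1():
    """Poisson(1) sample via the standard product-of-uniforms method."""
    threshold, k, p = math.exp(-1.0), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def conditioned_gw(n, max_tries=10**6):
    """Grow Poisson(1) GW trees and keep the first one with exactly n vertices.
    Vertices are numbered 0,1,2,... in order of creation, with 0 the root;
    the returned list kids has kids[v] = children of v."""
    for _ in range(max_tries):
        kids, total, v = [], 1, 0
        while v < total and total <= n:
            k = poisson1()
            kids.append(list(range(total, total + k)))
            total += k
            v += 1
        if v == total == n:
            return kids
    raise RuntimeError("no tree of the right size found")

# Attach uniformly random labels from [n] to get a uniform rooted labelled tree.
n = 8
tree = conditioned_gw(n)
labels = random.sample(range(1, n + 1), n)
print([(labels[v], [labels[c] for c in cs]) for v, cs in enumerate(tree)])
```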

Heuristic for Poisson offspring distribution

In this proof, the fact that \mathbb{P}(C(v)=k)\propto \frac{1}{k!} exactly balances the number of orderings of the k children explains why Poisson(1) works out. Indeed, you can see in the proof that Poisson(c) works equally well, though when c\ne 1, the event we are conditioning on (namely that the total population size is n) has probability decaying exponentially in n, whereas for c=1, the branching process is critical, and the probability decays polynomially.

We can provide independent motivation though, from the Aldous-Broder construction. Both the conditioned Galton-Watson construction and the A-B algorithm supply the tree with a root, so we’ll keep that, and look at the distribution of the degree of the root as constructed by A-B. Let \rho=v_1,v_2,v_3,\ldots be the vertices of [n], ordered by their discovery during the construction. Then \rho is definitely connected by an edge to v_2, but thereafter it follows by an elementary check that the probability \rho is connected to v_m is \frac{1}{n-1}, independently across all m. In other words, the distribution of the degree of \rho in the tree as constructed by A-B is

1+ \mathrm{Bin}\left(n-2,\frac{1}{n-1}\right) \approx 1+\mathrm{Poisson}(1).

Now, in the Galton-Watson process, conditioning the tree to have fixed, large size changes the offspring distribution of the root. Conveniently though, in a limiting sense it’s the same change as conditioning the tree to have size at least n. Since these events are monotone in n, it’s possible to take a limit of the conditioning events, and interpret the result as the Galton-Watson tree conditioned to survive. It’s a beautiful result that this interpretation can be formalised as a local limit. The limiting spine decomposition consists of an infinite spine, where the offspring distribution is a size-biased version of the original offspring distribution (and so in particular, always has at least one child) and where non-spine vertices have the original distribution.

In particular, the number of the offspring of the root is size-biased, and it is well-known and not hard to check that size-biasing Poisson(c) gives 1+Poisson(c) ! So in fact we have, in an appropriate limiting sense in both objects, a match between the degree distribution of the root in the uniform tree, and in the conditioned Galton-Watson tree.

This isn’t supposed to justify why a conditioned Galton-Watson tree is relevant a priori (especially the unconditional independence of degrees), but it does explain why Poisson offspring distributions are relevant.

Construction via G(N,p) and the random cluster model

The main reason uniform trees were important to my thesis was their appearance in the Erdos-Renyi random graph G(N,p). The probability that vertices {1, …, n} form a tree component in G(N,p) with some particular structure is

p^{n-1} (1-p)^{\binom{n}{2}-(n-1)} \times (1-p)^{n(N-n)}.

Here, the first two terms give the probability that the graph structure on {1, …, n} is correct, and the final term gives the probability of the (independent) event that these vertices are not connected to anything else in the graph. In particular, this has no dependence on the tree structure chosen on [n] (for example, whether it should be a path or a star – both examples of trees). So the conditional distribution is uniform among all trees.

If we work in some limiting regime, where pn\rightarrow 0 (for example if n is fixed and p=\frac{1}{N}\rightarrow 0), then we can get away asymptotically with less strong conditioning. Suppose we condition instead just that [n] form a component. Now, there are more ways to form a connected graph with one cycle on [n] than there are trees on [n], but the former all require an extra edge, and so the probability that a given one such tree-with-extra-edge appears as the restriction to [n] in G(N,p) is asymptotically negligible compared to the probability that the restriction to [n] of G(N,p) is a tree. Naturally, the local limit of components in G(N,c/N) is a Poisson(c) Galton-Watson branching process, and so this is all consistent with the original construction.

One slightly unsatisfying aspect to this construction is that we have to embed the tree on [n] within a much larger graph on [N] to see uniform trees. We can’t choose a scaling p=p(n) such that G(n,p) itself concentrates on trees. To guarantee connectivity with high probability, we need to take p> \frac{\log n}{n}, but by this threshold, the graph has (many) cycles with high probability.

At this PIMS summer school in Vancouver, one of the courses is focusing on lattice spin models, including the random cluster model, which we now briefly define. We start with some underlying graph G. From a physical motivation, we might take G to be \mathbb{Z}^d or some finite subset of it, or a d-ary tree, or the complete graph K_N. As in classical bond percolation (note G(N,p) is bond percolation on K_N), a random subset of the edges of G is included, or declared open. The probability of a given configuration w, with e open edges, is proportional to

p^e (1-p)^{|E(G)| - e} q^{k(w)}, (*)

where the edge-weight p\in(0,1) as usual, and cluster weight q\in (0,\infty), and k(w) counts the number of connected components in configuration w. When q=1, we recover classical bond percolation (including G(N,p) ), while for q>1, this cluster-reweighting favours having more components, and q<1 favours fewer components. Note that in the case q\ne 1, the normalising constant (or partition function) of (*) is generally intractable to calculate explicitly.

As in the Erdos-Renyi graph, consider fixing the underlying graph G, and taking p\rightarrow 0, but also taking \frac{q}{p}\rightarrow 0. So the resulting graph asymptotically ‘wants to have as few edges as possible, but really wants to have as few components as possible’. In particular, 1) all spanning trees of G are equally likely; 2) any configuration with more than one component has asymptotically negligible probability relative to any tree; 3) any graph with a cycle has #components + #edges greater than that of a tree, and so has asymptotically negligible probability relative to any tree.

In other words, the limit of the distribution is the uniform spanning tree of G, and so this (like Aldous-Broder) is a substantial generalisation, which constructs the uniform random tree in the special case where G=K_n.
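(This limit can be checked by brute force on a very small example. The sketch below enumerates every edge configuration on K_4, computes the weights (*) with the hypothetical choice q=p^2, so that p\rightarrow 0 and q/p\rightarrow 0 together, and confirms that the conditional mass piles up on the 16 spanning trees as p shrinks.)

```python
import itertools

def components(n, edges):
    """Number of connected components of the graph ([n], edges), via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n)})

def rc_weights(n, p, q):
    """Unnormalised random cluster weights p^e (1-p)^(|E|-e) q^k on K_n."""
    E = list(itertools.combinations(range(n), 2))
    weights = {}
    for e in range(len(E) + 1):
        for conf in itertools.combinations(E, e):
            weights[conf] = p**e * (1 - p)**(len(E) - e) * q**components(n, conf)
    return weights

for p in [0.1, 0.01, 0.001]:
    w = rc_weights(4, p, q=p**2)
    Z = sum(w.values())
    trees = {c: x for c, x in w.items() if len(c) == 3 and components(4, c) == 1}
    # fraction of mass on spanning trees -> 1, and there are 16 of them, all equal
    print(p, round(sum(trees.values()) / Z, 4), len(trees))
```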


Characterising fixed points in geometry problems

There’s a risk that this blog is going to become entirely devoted to Euclidean geometry, but for now I’ll take that risk. I saw the following question on a recent olympiad in Germany, and I enjoyed it as a problem, and set it on a training sheet for discussion with the ten British students currently in contention for our 2017 IMO team.

Given a triangle ABC for which AB\ne AC. Prove there exists a point D\ne A on the circumcircle satisfying the following property: for any points M,N outside the circumcircle on rays AB, AC respectively, satisfying BM=CN, the circumcircle of AMN passes through D.

Proving the existence of a fixed point/line/circle which has a common property with respect to some other variable points/lines/circles is a common style of problem. There are a couple of alternative approaches, but mostly what makes this style of problem enjoyable is the challenge of characterising what the fixed point should be. Sometimes an accurate diagram will give us everything we need, but sometimes we need to be clever, and I want to discuss a few general techniques through the context of this particular question. I don’t want to make another apologia for geometry as in the previous post, but if you’re looking for the ‘aha moment’, it’ll probably come from settling on the right characterisation.

At this point, if you want to enjoy the challenge of the question yourself, don’t read on!

Reverse reconstruction via likely proof method

At some point, once we’ve characterised D in terms of ABC, we’ll have to prove it lies on the circumcircle of any AMN. What properties do we need it to have? Well certainly we need the angle relation BDC = A, but because MDAN will be cyclic too, we also need the angle relation MDN = A. After subtracting, we require angles MDB = NDC.

Depending on your configuration knowledge, this is all quite suggestive. At the very least, when you have equal angles and equal lengths, you might speculate that the corresponding triangles are congruent. Here that would imply BD=CD, which characterises D as lying on the perpendicular bisector of BC. D is also on the circumcircle, so in fact it’s also on the angle bisector of BAC, here the external angle bisector. This is a very common configuration (normally using the internal bisector) in this level of problem, and if you see this coming up without prompting, it suggests you’re doing something right.

So that’s the conjecture for D. And we came up with the conjecture based on a likely proof strategy, so to prove it, we really just need to reverse the steps of the previous two paragraphs. We now know BD=CD. We also know angles ABD = ACD, so taking the supplementary angles (ie the obtuse bit in the diagram) we have angles DBM = DCN, so we indeed have congruent triangles. So we can read off angles MDB = NDC just as in our motivation, and recover that MDAN is cyclic.

Whatever other methods there are to characterise point D (to follow), all methods will probably conclude with an argument like the one in this previous paragraph, to demonstrate that D does have the required property.
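(If you want reassurance before writing anything up, the characterisation is also easy to check numerically: take D to be the second intersection of the external bisector at A with the circumcircle, and test that it lies on the circle AMN for a few equal lengths BM=CN. The coordinates below are just a hypothetical example triangle.)

```python
import numpy as np

def circumcentre(P, Q, R):
    """Circumcentre of triangle PQR, from |X-P|^2 = |X-Q|^2 = |X-R|^2."""
    coeff = 2 * np.array([Q - P, R - P])
    rhs = np.array([Q @ Q - P @ P, R @ R - P @ P])
    return np.linalg.solve(coeff, rhs)

A, B, C = map(np.array, [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)])   # any triangle with AB != AC
O = circumcentre(A, B, C)

u = (B - A) / np.linalg.norm(B - A)              # unit vector along ray AB
v = (C - A) / np.linalg.norm(C - A)              # unit vector along ray AC
ext = (u - v) / np.linalg.norm(u - v)            # direction of the external bisector at A
D = A + 2 * ((O - A) @ ext) * ext                # second meet of that line with the circumcircle

# For any equal lengths BM = CN = s, the circumcircle of AMN should pass through D.
for s in [0.7, 1.9, 3.3]:
    M, N = B + s * u, C + s * v
    O2 = circumcentre(A, M, N)
    print(abs(np.linalg.norm(D - O2) - np.linalg.norm(A - O2)))  # ~ 0
```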


We have one degree of freedom in choosing M and N. Remember that initially we don’t know what the target point D is. If we can’t see it immediately from drawing a diagram corresponding to general M and N, it’s worth checking some special cases. What special cases might be most relevant depends entirely on the given problem. The two I’m going to mention here both correspond to some limiting configuration. The second of these is probably more straightforward, and was my route to determining D. The first was proposed by one of my students.

First, we conjecture that maybe the condition that M and N lie outside the circumcircle isn’t especially important, but has been added to prevent candidates worrying about diagram dependency. The conclusion might well hold without this extra stipulation. Remember at this stage we’re still just trying to characterise D, so even if we have to break the rules to find it, this won’t damage the solution, since we won’t be including our method for finding D in our written-up solution!

Anyway, WLOG AC < AB. If we take N very close to A, then the distances BM and MA are b and c-b respectively. The circumcircle of AMN is almost tangent to line AC. At this point we stop talking about ‘very close’ and ‘almost tangent’ and just assume that N=A and so the circle AMN really is the circle through M, tangent to AC at A. We need to establish where this intersects the circumcircle for a second time.

To be clear, I found what follows moderately tricky, and this argument took a while to find and was not my first attempt at all. First we do some straightforward angle-chasing, writing A,B,C for the measures of the angles in triangle ABC, and D for this second point of intersection. Then the angle BDC is also A and angle BDA is 180-C. We also have the tangency relation from which the alternate segment theorem gives angle MDA = A. Then BDM = BDA – MDA = 180 – C – A = B. So we know the lengths and angles in the configuration BDAM.

At this point, I had to use trigonometry. There were a couple of more complicated options, but the following works. In triangle BDM, a length b is subtended by angle B, as is the case for the original triangle ABC. By the extended sine rule, BDM then has the same circumradius as ABC. But now the length BD is subtended by angle DMB in one of these circumcircles, and by DAB in the other. Therefore these angles are either equal or supplementary (ie they sum to 180). Clearly it must be the latter, from which we obtain that angles DMA = MAD = 90 – A/2. In other words, D lies on the external angle bisector of A, which is the characterisation we want.

Again to clarify, I don’t think this was a particularly easy or particularly natural argument for this exact problem, but it definitely works, and the idea of getting a circle tangent to a line as a limit when the points of intersection converge is a useful one. As ever, when an argument uses the sine rule, you can turn it into a synthetic argument with enough extra points, but of the options I can currently think of, I think this trig is the cleanest.

My original construction was this. Let M and N be very very far down the rays. This means triangle AMN is large and approximately isosceles. This means that the line joining A to the circumcentre of AMN is almost the internal angle bisector of MAN, which is, of course, also the angle bisector of BAC. Also, because triangle AMN is very large, its circumcircle looks, locally, like a line, and has to be perpendicular to the circumradius at A. In other words, the circumcircle of AMN is, near A, approximately the line through A perpendicular to the internal angle bisector of BAC, ie the external angle bisector of BAC. My ‘aha moment’ factor on this problem was therefore quite high.

Direct arguments

A direct argument for this problem might consider two pairs of points (M,N) and (M’,N’), and show directly that the circumcircles of ABC, AMN and AM’N’ concur at a second point, ie are coaxal. It seems unlikely to me that an argument along these lines wouldn’t involve some characterisation of the point of concurrency along the way.

Do bear in mind, however, that such an approach runs the risk of cluttering the diagram. Points M and N really weren’t very important in anything that’s happened so far, so having two pairs doesn’t add extra insight in any of the previous methods. If this had been your first reaction, ask yourself whether it would have been as straightforward or natural to find a description of D which led to a clean argument.

Another direct argument

Finally, a really neat observation, that enables you to solve the problem without characterising D. We saw that triangles DBM and DCN were congruent, and so we can obtain one from the other by rotating around D. We say D is the centre of the spiral similarity (here in fact with homothety factor 1 ie a spiral congruence) sending BM to CN. Note that in this sort of transformation, the direction of these segments matters. A different spiral similarity sends BM to NC.

But let’s take any M,N and view D as this spiral centre. The transformation therefore maps line AB to AC and preserves lengths. So in fact we’ve characterised D without reference to M and N ! Since everything we’ve said is reversible, this means as M and N vary, the point we seek, namely D, is constant.

This is only interesting as a proof variation if we can prove that D is the spiral centre without reference to one of the earlier arguments. But we can! In general a point D is the centre of spiral similarity mapping BM to CN iff it is also the centre of spiral similarity mapping BC to MN. And we can find the latter centre of spiral similarity using properties of the configuration. A is the intersection of MB and CN, so we know precisely that the spiral centre is the second intersection point of the two circumcircles, exactly as D is defined in the question.

(However, while this is cute, it’s somehow a shame not to characterise D as part of a solution…)

Random walks conditioned to stay positive

In this post, I’m going to discuss some of the literature concerning the question of conditioning a simple random walk to lie above a line with fixed gradient. A special case of this situation is conditioning to stay non-negative. Some notation first. Let (S_n)_{n\ge 0} be a random walk with IID increments, with distribution X. Take \mu to be the expectation of these increments, and we’ll assume that the variance \sigma^2 is finite, though at times we may need to enforce slightly stronger regularity conditions.

(Although simple symmetric random walk is a good example for asymptotic heuristics, in general we also assume that if the increments are discrete they don’t have parity-based support, or any other arithmetic property that prevents local limit theorems holding.)

We will investigate the probability that S_n\ge 0 for n=0,1,…,N, particularly for large N. For ease of notation we write T=\inf\{n\ge 0\,:\, S_n<0\} for the hitting time of the negative half-line. Thus we are interested in S_n conditioned on T>N, or T=N, mindful that these might not be the same. We will also discuss briefly to what extent we can condition on T=\infty.

In the first paragraph, I said that this is a special case of conditioning SRW to lie above a line with fixed gradient. Fortunately, all the content of the general case is contained in the special case. We can recast the question of S_n conditioned to stay above n\alpha until step N as the question of S_n-n\alpha (which, naturally, has drift \mu-\alpha) conditioned to stay non-negative until step N, by a direct coupling.


Simple random walk is a perfectly interesting object to study in its own right, and this is a perfectly natural question to ask about it. But lots of probabilistic models can be studied via naturally embedded SRWs, and it’s worth pointing out a couple of applications to other probabilistic settings (one of which is the reason I was investigating this literature).

In many circumstances, we can describe random trees and random graphs by an embedded random walk, such as an exploration process, as described in several posts during my PhD, such as here and here. The exploration process of a Galton-Watson branching tree is a particularly good example, since the exploration process really is simple random walk, unlike in, for example, the Erdos-Renyi random graph G(N,p), where the increments are only approximately IID. In this setting, the increments are given by the offspring distribution minus one, and the hitting time of -1 is the total population size of the branching process. So if the expectation of the offspring distribution is at most 1, then the event that the size of the tree is large is an atypical event, corresponding to delayed extinction, whereas if the expectation is greater than one, then it is an event with limiting positive probability. Indeed, with positive probability the exploration process never hits -1, corresponding to survival of the branching tree. There are plenty of interesting questions about the structure of a branching process tree conditional on having atypically large size, including the spine decomposition of Kesten [KS], but the methods described in this post can be used to quantify the probability, or at least the scale of the probability of this atypical event.

In my current research, I’m studying a random walk embedded in a construction of the infinite-volume DGFF pinned at zero, as introduced by Biskup and Louidor [BL]. The random walk controls the gross behaviour of the field on annuli with dyadically-growing radii. Anyway, in this setting the random walk has Gaussian increments. (In fact, there is a complication because the increments aren’t exactly IID, but that’s definitely not a problem at this level of exposition.) The overall field is decomposed as a sum of the random walk, plus independent DGFFs with Dirichlet boundary conditions on each of the annuli, plus asymptotically negligible corrections from a ‘binding field’. Conditioning that this pinned field be non-negative up to the Kth annulus corresponds to conditioning the random walk to stay above the magnitude of the minimum of each successive annular DGFF. (These minima are random, but tightly concentrated around their expectations.)

Conditioning on \{T > N\}

When we condition on \{T>N\}, obviously the resulting distribution (of the process) is a mixture of the distributions we obtain by conditioning on each of \{T=N+1\}, \{T=N+2\},\ldots. Shortly, we’ll condition on \{T=N\} itself, but first it’s worth establishing how to relate the two options. That is, conditional on \{T>N\}, what is the distribution of T?

Firstly, when \mu>0, this event always has positive probability, since \mathbb{P}(T=\infty)>0. So as N\rightarrow\infty, the distribution of the process conditional on \{T>N\} converges to the distribution of the process conditional on survival. So we’ll ignore this for now.

In the case \mu\le 0, everything is encapsulated in the tail of the probabilities \mathbb{P}(T=N), and these tails are qualitatively different in the cases \mu=0 and \mu<0.

When \mu=0, then \mathbb{P}(T=N) decays polynomially in N. In the special case where S_n is simple symmetric random walk (and N has the correct parity), we can check this just by an application of Stirling’s formula to count paths with this property. By contrast, when \mu<0, even demanding S_N=-1 is a large deviations event in the sense of Cramer’s theorem, and so the probability decays exponentially with N. Mogulskii’s theorem gives a large deviation principle for random walks to lie above a line defined on the scale N. The crucial fact here is that the probabilistic cost of staying positive until N has the same exponent as the probabilistic cost of being positive at N. Heuristically, we think of spreading the non-expected behaviour of the increments uniformly through the process, at only polynomial cost once we’ve specified the multiset of values taken by the increments. So, when \mu<0, we have

\mathbb{P}(T\ge(1+\epsilon)N) \ll \mathbb{P}(T= N).

Therefore, conditioning on \{T\ge N\} in fact concentrates T on N+o(N). By contrast, when \mu=0, conditioning on \{T\ge N\} gives a nontrivial limit in distribution for T/N, supported on [1,\infty).
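(The contrast between the two regimes is easy to see numerically. The sketch below computes \mathbb{P}(T>N) exactly for the walk with steps \pm 1, by propagating the sub-probability distribution killed on the negative half-line; p=1/2 gives \mu=0, and p=0.45 is a hypothetical example with negative drift.)

```python
def survival_prob(N, p):
    """P(S_n >= 0 for all n <= N) for the walk with IID steps +1 (prob p) and -1
    (prob 1-p), started from 0: propagate the distribution restricted to {0,1,2,...}."""
    dist = {0: 1.0}
    for _ in range(N):
        new = {}
        for x, mass in dist.items():
            new[x + 1] = new.get(x + 1, 0.0) + p * mass
            if x >= 1:                           # a step from 0 to -1 kills the mass
                new[x - 1] = new.get(x - 1, 0.0) + (1 - p) * mass
        dist = new
    return sum(dist.values())

# mu = 0 (p = 1/2): polynomial decay;  mu < 0 (p = 0.45): exponential decay.
for N in [100, 400, 1600]:
    print(N, survival_prob(N, 0.5), survival_prob(N, 0.45))
```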

A related problem is the value taken by S_N, conditional on {T>N}. It’s a related problem because the event {T>N} depends only on the process up to time N, and so given the value of S_N, even with the conditioning, after time N, the process is just an unconditioned RW. This is a classic application of the Markov property, beloved in several guises by undergraduate probability exam designers.

Anyway, Iglehart [Ig2] shows an invariance principle for S_N | T>N when \mu<0, without scaling. That is, S_N=\Theta(1), though the limiting distribution depends on the increment distribution in a sense that is best described through Laplace transforms. If we start a RW with negative drift from height O(1), then it hits zero in time O(1), so in fact this shows that conditional on \{T\ge N\}, we have T= N +O(1) with high probability. When \mu=0, we have fluctuations on a scale \sqrt{N}, as shown earlier by Iglehart [Ig1]. Again, thinking about the central limit theorem, this fits the asymptotic description of T conditioned on T>N.

Conditioning on T=N

In the case \mu=0, conditioning on T=N gives

\left[\frac{1}{\sqrt{N}}S(\lfloor Nt\rfloor ) ,t\in[0,1] \right] \Rightarrow W^+(t), (*)

where W^+ is a standard Brownian excursion on [0,1]. This is shown roughly simultaneously in [Ka] and [DIM]. This is similar to Donsker’s theorem for the unconditioned random walk, which converges after rescaling to Brownian motion in this sense, or Brownian bridge if you condition on S_N=0. Skorohod’s proof for Brownian bridge [Sk] approximates the event \{S_N=0\} by \{S_N\in[-\epsilon \sqrt{N},+\epsilon \sqrt{N}]\}, since the probability of this event is bounded away from zero. Similarly, but with more technicalities, a proof of convergence conditional on T=N can approximate by \{S_m\ge 0, m\in[\delta N,(1-\delta)N], S_N\in [-\epsilon \sqrt{N},+\epsilon\sqrt{N}]\}. The technicalities here emerge since T, the first return time to zero, is not continuous as a function of continuous functions. (Imagine a sequence of processes f^N for which f^N(x)\ge 0 on [0,1] and f^N(\frac12)=\frac{1}{N}.)

Once you condition on T=N, the mean \mu doesn’t really matter for this scaling limit. That is, so long as variance is finite, for any \mu\in\mathbb{R}, the same result (*) holds, although a different proof is in general necessary. See [BD] and references for details. However, this is particularly clear in the case where the increments are Gaussian. In this setting, we don’t actually need to take a scaling limit. The distribution of a Gaussian random walk bridge doesn’t depend on the mean of the increments. This is related to the fact that a linear transformation of a Gaussian is Gaussian, and can be seen by examining the joint density function directly.
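(The bridge claim can be checked directly from the usual Gaussian conditioning formulae. A minimal sketch, assuming unit-variance increments and a couple of hypothetical drift values: conditioning (S_1,\ldots,S_{N-1}) on S_N=0, the drift drops out.)

```python
import numpy as np

N = 6                                                                   # hypothetical small horizon
Sigma = np.fromfunction(lambda i, j: np.minimum(i, j) + 1.0, (N, N))   # Cov(S_i, S_j) = min(i, j)

def bridge_mean(mu):
    """Conditional mean of (S_1,...,S_{N-1}) given S_N = 0, for increment mean mu."""
    mean = mu * np.arange(1, N + 1)
    S12, S22 = Sigma[:-1, -1], Sigma[-1, -1]
    return mean[:-1] + S12 * (0.0 - mean[-1]) / S22

# (The conditional covariance never involved mu in the first place, so checking the
# conditional mean is enough to see the whole bridge law is drift-free.)
print(bridge_mean(0.0))
print(bridge_mean(-1.3))     # identical up to rounding: the drift has dropped out
```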

Conditioning on T=\infty

When \mu>0, the event \{T=\infty\} occurs with positive probability, so it is well-defined to condition on it. When \mu\le 0, this is not the case, and so we have to be more careful.

First, an observation. Just for clarity, let’s take \mu<0, and condition on \{T>N\}, and look at the distribution of S_{\epsilon N}, where \epsilon>0 is small. This is approximately given by

\frac{S_{\epsilon N}}{\sqrt{N}}\stackrel{d}{\approx}W^+(\epsilon).

Now take \epsilon\rightarrow 0 and consider the RHS. If instead of the Brownian excursion W^+ we had Brownian motion, we could specify the distribution exactly. But in fact, we can construct Brownian excursion as the solution to an SDE:

\mathrm{d}W^+(t) = \left[\frac{1}{W^+(t)} - \frac{W^+(t)}{1-t}\right] \mathrm{d}t + \mathrm{d}B(t),\quad t\in(0,1) (**)

for B a standard Brownian motion. I might return in the next post to why this is valid. For now, note that the first drift term pushes the excursion away from zero, while the second term brings it back to zero as t\rightarrow 1.

From this, the second drift term is essentially negligible if we care about scaling W^+(\epsilon) as \epsilon\rightarrow 0, and we can say that W^+(\epsilon)=\Theta(\sqrt{\epsilon}).

So, returning to the random walk, we have

\frac{S_{\epsilon N}}{\sqrt{\epsilon N}}\stackrel{d}{\approx} \frac{W^+(\epsilon)}{\sqrt{\epsilon}} = \Theta(1).

At a heuristic level, it’s tempting to try ‘taking N\rightarrow\infty while fixing \epsilon N’, to conclude that there is a well-defined scaling limit for the RW conditioned to stay positive forever. But we came up with this estimate by taking N\rightarrow\infty and then \epsilon\rightarrow 0 in that order. So while the heuristic might be convincing, this is not the outline of a valid argument in any way. However, the SDE representation of W^+ in the \epsilon\rightarrow 0 regime is useful. If we drop the second drift term in (**), we define the three-dimensional Bessel process, which (again, possibly the subject of a new post) is the correct scaling limit we should be aiming for.

Finally, it’s worth observing that the limit \{T=\infty\}=\lim_{N\rightarrow\infty} \{T>N\} is a monotone limit, and so further tools are available. In particular, if we know that the trajectories of the random walk satisfy the FKG property, then we can define this limit directly. It feels intuitively clear that random walks should satisfy the FKG inequality (in the sense that if a RW is large somewhere, it’s more likely to be large somewhere else). You can do a covariance calculation easily, but a standard way to show the FKG inequality applies is by verifying the FKG lattice condition, and unless I’m missing something, this is clear (though a bit annoying to check) when the increments are Gaussian, but not in general. Even so, defining this monotone limit does not tell you that it is non-degenerate (ie almost-surely finite), for which some separate estimates would be required.

A final remark: in a recent post, I talked about the Skorohod embedding, as a way to construct any centered random walk where the increments have finite variance as a stopped Brownian motion. One approach to conditioning a random walk to lie above some discrete function is to condition the corresponding Brownian motion to lie above some continuous extension of that function. This is a slightly stronger conditioning, and so any approach of this kind must quantify how much stronger. In Section 4 of [BL], the authors do this for the random walk associated with the DGFF conditioned to lie above a polylogarithmic curve.


[BD] – Bertoin, Doney – 1994 – On conditioning a random walk to stay nonnegative

[BL] – Biskup, Louidor – 2016 – Full extremal process, cluster law and freezing for two-dimensional discrete Gaussian free field

[DIM] – Durrett, Iglehart, Miller – 1977 – Weak convergence to Brownian meander and Brownian excursion

[Ig1] – Iglehart – 1974 – Functional central limit theorems for random walks conditioned to stay positive

[Ig2] – Iglehart – 1974 – Random walks with negative drift conditioned to stay positive

[Ka] – Kaigh – 1976 – An invariance principle for random walk conditioned by a late return to zero

[KS] – Kesten, Stigum – 1966 – A limit theorem for multidimensional Galton-Watson processes

[Sk] – Skorohod – 1955 – Limit theorems for stochastic processes with independent increments

Symmedians and Balkan MO 2017 Q2

While I was away, I wrote about my latest approach to teaching geometry at olympiad camps. This post will end up being about Q2 from the Balkan MO which took place yesterday in Macedonia, but first there is quite a long prelude. My solution, and probably many solutions, to this problem made use of a standard configuration in triangle geometry, namely the symmedian. I want to introduce the configuration, give some simpler examples in practice, and along the way talk about my slightly patched-together philosophy about the merits of practising Euclidean geometry.

The symmedian

Draw a triangle ABC, with A at the top of the page, and extend the rays AB and AC. The median is the line from A through M, the midpoint of BC. Now take points D and E on AB and AC respectively. The following properties are equivalent:

  • DE is parallel to BC;
  • triangle ADE is similar to triangle ABC;
  • the median of ABC passes through the midpoint of DE, and thus is also the median of ADE.

I think it’s a little awkward to prove either of the first two from the third – ratios of areas works – but the rest of the equivalences are straightforward. Later I’m going to talk about the difference between an exercise and a problem. These are all, at best, exercises.

Now take B’ on the ray AC, and C’ on the ray AB such that triangle AB’C’ is similar to triangle ABC. One way to achieve this is to take B’ and C’ to be the reflections in the angle bisector of A of B and C respectively (so then AB’=AB and AC’=AC). We say the line B’C’ is antiparallel to BC, as is any other line DE parallel to B’C’. (Probably this should say ‘with respect to triangle ABC’ or similar, but the context here is very clear, and I want this to seem natural rather than opaque.) Note that DE is an antiparallel line iff BCED is a cyclic quadrilateral. We should remember that, as cyclic quadrilaterals are the signposts for progress in both exercises and problems.

The median of triangle AB’C’ obeys the same equivalences as described above, and so bisects any antiparallel segment. We call the median of triangle AB’C’ the symmedian of triangle ABC. Using the first set of equivalences, the symmedian of triangle ABC bisects any line antiparallel to BC. Furthermore, by construction, the symmedian is the image of the median of ABC under reflection in the bisector of the angle at A. We sometimes say that the symmedian is the isogonal conjugate of the median.

That’s my definition. Note that there was essentially one definition then a couple of easy equivalent definitions. At no point again will I discuss the equivalence of these definitions – we have to take that for granted if we want to get on to more interesting things.

Intersection of tangents + concurrency

Now, in triangle ABC, draw the tangents to the circumcircle at B and C. These meet at P. It turns out that AP is the symmedian. This could have been our definition of a symmedian, but it wasn’t, so let’s quickly prove this.

Trigonometric arguments are very accessible, but I’ll give a Euclidean argument. Draw the antiparallel DE through P, as shown. Our task is to show that EP=PD. At this point, I would again say that this is an exercise.

We colour the angle ABC in green. Two angles around point C share this measure by the alternate segment theorem. The angle at E shares this measure because DE is antiparallel. Therefore CPE is isosceles, and so EP=CP. But CP=BP, so by applying the same argument for the orange angles, we get EP=CP=BP=DP as required.

Pause to regroup. Proving this wasn’t hard, but it was perhaps surprising. If this was all new to you, and I told you to consider the reflection of the median in the angle bisector, you probably wouldn’t instantly exclaim “it goes through the tangent intersection!” So this is a useful piece of knowledge to have gained, in case we ever have to work with the intersection of two tangents like this. Maybe it won’t be useful, but maybe it will. Maybe the statement itself won’t but some extra insights from the proof will be useful, like the fact that we actually showed P is the centre of the circle BCED, and thus angles ECD=EBD=90.
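(If you'd rather not take even this on trust, the statement is quick to check with coordinates: the sketch below, on an arbitrary example triangle, verifies that the line from A to the intersection of the tangents makes the same angle with AC as the median makes with AB, ie that it really is the reflection of the median in the angle bisector at A.)

```python
import numpy as np

def circumcentre(P, Q, R):
    coeff = 2 * np.array([Q - P, R - P])
    rhs = np.array([Q @ Q - P @ P, R @ R - P @ P])
    return np.linalg.solve(coeff, rhs)

def ang(u, v):
    """Unsigned angle between the vectors u and v."""
    c = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

A, B, C = map(np.array, [(0.0, 0.0), (5.0, 0.0), (1.0, 4.0)])   # an arbitrary example
O = circumcentre(A, B, C)
M = (B + C) / 2

# Tangents at B and C are perpendicular to OB and OC; solve for their intersection P.
normals = np.array([B - O, C - O])
P = np.linalg.solve(normals, np.array([(B - O) @ B, (C - O) @ C]))

# The median and AP should make equal angles with AB and AC respectively.
print(ang(B - A, M - A), ang(C - A, P - A))      # the two printed angles agree
```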

A second property is that in a triangle ABC, the symmedian from A, the symmedian from B and the symmedian from C intersect at, naturally, the symmedian point, which is usually denoted K. This comes from the fact that each symmedian is the isogonal conjugate of the respective median, and the medians are known to concur at the centroid. I’m not going to get into this now.

Configurations – an example

Here’s a problem. Take an isosceles trapezium ABCD as shown (ie throughout I don’t want to worry about alternative diagrams).

Let M be the midpoint of AD, and let E be the point on CM such that angle DBM = EBA. Prove that ABCDE is cyclic.

Well, certainly ABCD is cyclic. So we just need to show E also lies on that circle. And we have two equal angles, but they aren’t in the right place to conclude this immediately. However, we have angle MCA = DBM = EBA, so ABCE is cyclic, and the result follows.

Why is angle MCA = DBM? Well, the isosceles trapezium has an axis of (reflective) symmetry, and MCA is the image of DBM under that reflection. Simple. If we wanted to do it with congruent triangles, this would all be a bit more laborious. First we would have to show BD=AC using one set of congruent triangles, then CM=BM using another, finally finishing using DM=MA. This is much less interesting. The symmetry of the configuration is a higher-level observation which could be proved from the axioms of geometry if necessary, but gives us more information more quickly. When we use a configuration like the symmedian configuration, we are really doing a higher-again-level version of this.

Anyway, that problem is fine, but it’s not especially difficult.

Consider instead the following problem. (I saw this online, possibly with slightly different notation, a few days ago and can no longer find the link. If anyone can help, I will add the link.)

Let AB be a chord of a circle, with midpoint M, and let the tangents at A and B meet at P. Consider a line through P which meets the circle at C and D in that order. Extend CM to meet the circle again at E. Show DME is isosceles.

Here’s a diagram, though it includes some clues.

I thought this was a fun problem, and for a while I couldn’t do it because despite lots of equal angles and equal lengths, I couldn’t conjure any congruent triangles in the right places, and I didn’t care enough about solving it to get involved in trigonometry. Then came the moment of insight. We have a midpoint, and also the intersection of the tangents. So DP is the symmedian of triangle DAB, and DM is the median. This gives us the two equal orange angles. Cyclicity gives us an extra equal angle at E as well.

Note now that the situation is very very similar to the previous question (after changing some of the labels), only this time we know ACBDE is cyclic, but don’t know that ABDE is an isosceles trapezium. If ABDE is an isosceles trapezium, we are clearly finished, as then by the same symmetry argument, EM=DM. This direction is probably harder to prove than the direction of the previous problem. Again there are a couple of ways to proceed, but one way is to consider the point E’ such that ABDE’ is an isosceles trapezium, and arguing that E’ lies on the given circle, and the circle through BME, and thus must coincide with E, in a reverse reconstruction argument.

Anyway, this is all slightly a matter of taste, but I would say the second problem is much much more fun than the first problem, even though the second part of the solution is basically the first problem but in a more awkward direction. If you’re going to do Euclidean geometry at all (very much another question), I think you should do questions like the second question wherever possible. And the enjoyable ‘aha moment’ came from knowing about the symmedian configuration. Is it really plausible that you’d look at the original diagram (without the dashed orange lines) and think of the antiparallel to AB in triangle DAB through point P? Probably not. So knowing about the configuration gave access to the good bit of a nice problem.

‘Philosophy of this sort of thing’

If the goal was to solve the second problem in a competition, knowing about the symmedian configuration would be a big advantage. I’ve tried to justify a related alternative view that knowing about the configuration gave access to an enjoyable problem. The question is how many configurations to study, and how hard to study them?

We tend not to think of cyclic quadrilaterals as a special configuration, but that is what they are. We derived circle theorems from the definition of a circle so that we don’t always have to mark on the centre, every single time we have a cyclic quadrilateral. So becoming familiar with a few more is not unreasonable. In particular, there are times when proofs are more important than statements. In research (certainly mine), understanding how various proofs work is the most important aspect, for when you try to extend them or specialise. And in lots of competition problems, the interesting bit is normally finding the argument rather than basking in wonder at the statement (though sometimes the latter is true too!).

To digress briefly. In bridge, I don’t know enough non-obvious motifs in bidding or gameplay to play interesting hands well. I trust that if I thought about some of it very very carefully, I could come up with some of them, especially in gameplay, but not in real time. And it is supposed to be fun right?! Concentrating very very hard to achieve a basic level of competence is not so enjoyable, especially if it’s supposed to be a break from regular work. The end result of this is that I don’t play bridge, which is a shame, because I think the hurdles between where I am currently and a state where I enjoy playing bridge are quite low. If I knew I was going to play bridge regularly, a bit of time reading about conventions would be time well spent. And obviously this applies equally in pursuits which aren’t directly intellectual. Occasionally practising specific skills in isolation broadens overall enjoyment in sport, music, and probably everything. As anyone who’s played in an orchestra knows, there are standard patterns that come up all the time. If you practise these occasionally, you get to a stage where you don’t really need to concentrate that hard in the final movement of Beethoven 5, and instead can listen to the horns, make funny faces at the first violins, and save your mental energy for the handful of non-standard tricky bits. And of course, then move on to more demanding repertoire, where maybe the violas actually get a tune.

This is highly subjective, but my view is that all these examples are broadly similar to configurations in geometry, and in all of them a little goes a long way.

How? In lots of the geometry configurations you might meet in, for example, a short session at a training camp, most of the conclusions about the configurations have proofs which, like in our symmedian case, are simple exercises. Once you’ve got over some low initial experience hurdles, you have to trust that you can normally solve any simple exercise if required. If you can’t, moving on and returning later, or asking for help is a good policy. The proof shown above that symmedians pass through tangent meet points (and especially a trigonometric alternative) really isn’t interesting enough to spend hours trying to find it. The statements themselves are more useful and interesting here. And it can often be summarised quite quickly: “symmedians are the isogonal conjugates of the medians, so they bisect antiparallels, meet at K, and pass through the alternate tangent meeting points.” Probably having a picture in your mind is even simpler.

There’s a separate question of whether this is worthwhile. I think solving geometry problems occasionally is quite fun, so I guess yes I do think it is worthwhile, but I understand others might not. And if you want to win maths competitions, in the current framework you have to solve geometry problems under time pressure. But from an educational point of view, even though the statements themselves have no real modern research value, I think a) that’s quite a high bar to set, and there’s no a priori reason why they should – >99.9% of things anyone encounters before university have no value to modern research maths; b) in terms of knowledge acquisition, it’s similar in spirit to lots of things that are relevant to later study. I don’t have to solve PDEs very often, but when I do, I hope they are equivalent or similar to one of the small collection of PDEs I do know how to solve. If I worked more with PDEs, the size of this collection would grow naturally, after some initial struggles, and might eventually match my collection of techniques for showing scaling limits of random processes, which is something I need to use often, so the collection is much larger. Maybe that similarity isn’t enough justification by itself, but I think it does mean it can’t be written off as educationally valueless.

Balkan MO 2017 Question Two

An acute angled triangle ABC is given, with AB<AC, and \omega is its circumcircle. The tangents t_B,t_C at B,C respectively meet at L. The line through B parallel to AC meets t_C at D. The line through C parallel to AB meets t_B at E. The circumcircle of triangle BCD meets AC internally at T. The circumcircle of triangle BCE meets AB extended at S. Prove that ST, BC and AL are concurrent.

Ok, so why have I already written 1500 words about symmedians as a prelude to this problem? Because AL is a symmedian. This was my first observation. This observation is then a route into non-Euclidean solutions. It means, for example, that you can describe the point of concurrency fairly explicitly with reference to triangle ABC. If you wish, you can then proceed using areal coordinates. One member of the UK team, whom I know is perfectly capable of finding a synthetic solution, did this. And why not? It’s a competition, and if you can see a method that will definitely work, and definitely take 45 minutes (or whatever) then that’s good.

I was taking a break from work in my office, and had no interest in spending the time evaluating determinants because that isn’t enjoyable at any level, so I focused on the geometry.

I think there’s a good moral from the diagram above, which is the first moderately correct one I drew. I often emphasise that drawing an accurate diagram is important, as it increases the chance that you’ll spot key properties. In this case though, where you’re trying to examine a known configuration, I think it’s more important what you choose to include on your diagram, than how accurately you draw it. (In a moment, we’ll see why it definitely wasn’t very accurate.)

In particular, what’s not on the diagram? E is not on the diagram, and S got added later (as did the equal length signs in TB and CS, which rather spoil what’s about to happen). My first diagram was wildly incorrect, but it also suggested to me that the line ST was hard to characterise, and that I should start by deducing as much as possible about S and T by themselves. So by symmetry, probably it was enough just to deduce as much as possible about T.

Label the angles of triangle ABC as <A, <B, <C as usual. A short angle chase in the cyclic quadrilateral BTCD, using the tangent t_C and the parallel BD, gives <BTA = <B, and therefore TB is an antiparallel in triangle ABC. (Note this doesn’t look antiparallel on my diagram at all, but as I said, this didn’t really matter.) Obviously you then guess that CS is also an antiparallel, and on a different diagram I checked this, for essentially the same reasons.

We haven’t yet made any use of the symmedian, but this is clearly where it’ll be useful. Note that if we didn’t know about everything in the prelude, we might well have deduced all of this, but we might not have thought to prove that AL bisects TB unless we’d drawn a very accurate diagram.

At this point, we have to trust that we have enough information to delete most of the diagram, leaving just {A,B,C,S,T} and the line AL. There are a few ways to finish, including similar triangles if you try very hard or trigonometry if you do it right, but again knowledge of some standard configurations is useful. Probably the quickest way is to use Ceva’s theorem in triangle ACS. You can also use Menelaus’ theorem in ABC, so long as you know a little bit about where the symmedian meets the opposite side.

An alternative is the following. We have a complete quadrilateral here, namely BTCS, and the intersection of all its diagonals. One is A, one is the proposed point of concurrency, and one is the point at infinity, since TB || CS. You can chase that, but I found it clearer to let P be the intersection of ST and BC (which we want to prove lies on AL), then look at the complete quadrilateral ATPB. Then AT and BP meet at C, and AB and TP meet at S. So if we look at where the diagonals of ATPB meet the line CS, we have a harmonic range.

If I’d wanted, I could instead have written the prelude about harmonic ranges, but I had fewer ideas how to set these up in a slick Euclidean way. Also, it feels better to talk about the start, rather than the end of a proof, especially when there were alternative endings. Anyway, a harmonic range is a collection of two pairs of points on a line (A, B; C, D), satisfying the following ratio of directed lengths:

\frac{AC}{BC} = -\frac{AD}{BD}.

A classic example is when D is the point at infinity, the RHS is -1, and so C is the midpoint of AB. Being happy about using the point at infinity is a property of projective geometry, of which this is a good first example. Anyway, returning to the problem, we are looking at where the diagonals of ATPB meet line CS, and this pair of points forms a harmonic range with (C,S). TB meets CS at the point at infinity, and so AP meets CS at the midpoint of CS. But from the symmedian configuration, AL bisects CS, so AP and AL are in fact the same line, and so P lies on AL as required.
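
If you like sanity checks, here is a throwaway numerical one in Python (my own addition, nothing deep): it confirms that (0, 3; 1, -3) satisfies the directed-length relation above, and that as D is pushed off towards infinity, the harmonic conjugate C converges to the midpoint of AB, exactly as in the classic example.

```python
def harmonic(a, b, c, d):
    """Check the directed-length relation AC/BC = -AD/BD for four collinear
    points given by real coordinates a, b, c, d."""
    return abs((c - a) / (c - b) + (d - a) / (d - b)) < 1e-9

print(harmonic(0, 3, 1, -3))   # True: (0, 3; 1, -3) is a harmonic range

# Push D off towards infinity: the harmonic conjugate C of D with respect to
# (A, B) converges to the midpoint of AB.
a, b = 0.0, 3.0
for d in [10.0, 1000.0, 10.0**6]:
    r = -(d - a) / (d - b)        # required value of (c - a)/(c - b)
    c = (a - r * b) / (1 - r)     # solve (c - a)/(c - b) = r for c
    print(d, c)                   # c -> 1.5, the midpoint of AB
```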

I think this was a brilliant example of when knowing a bit of theory is enjoyable. It wasn’t at all obvious initially how to use the symmedian property, and the observation that TB is antiparallel felt like a satisfying breakthrough, yet didn’t immediately kill the problem.

Balkan MO 2017 – Qs 1, 3 and 4

The UK is normally invited to participate as a guest team at the Balkan Mathematical Olympiad, an annual competition between eleven countries from South-Eastern Europe. I got to take part in Rhodes almost exactly ten years ago, and this year the competition was held in Ohrid, in Macedonia. There’s one paper, comprising four questions, normally one from each of the agreed olympiad topic areas, with 4.5 hours for students to address them. The contest was sat this morning, and I’m going to say quite a bit about the geometric Q2, and a little bit about Qs 1 and 3 also. In all cases, this discussion will include most of a solution, with some commentary, so don’t read these if you are planning to try the problems yourself.

I’m not saying anything about Q4, because I haven’t solved it. (Edit: I have solved it now, so will postpone Q2 until later today.)

Question One

Find all ordered pairs of positive integers (x,y) such that

x^3+y^3 = x^2+42xy+y^2.

The first thought is that if either of x or y is ‘large’, then the LHS is bigger than the RHS, and so equality can’t hold. That is, there are only finitely many solutions. The smallest possible value of y is, naturally, 1, and substituting y=1 is convenient as then y^2=y^3, and it’s straightforward to derive x=7 as a solution.

Regarding the non-existence of large solutions, you can make this precise by factorising the LHS as

(x+y)(x^2-xy+y^2) = x^2+42xy+y^2.

There are 44 terms of degree two on the RHS (counting the coefficients), while the second bracket on the LHS is itself a sum of terms of degree two. With a bit of AM-GM, you can see then that if x+y>44, you get a contradiction, as the LHS will be greater than the RHS. But that’s still a lot of possibilities to check.

It struck me that I could find ways to reduce the burden by reducing modulo various primes. 2, 3 and 7 all divide 42, and furthermore cubes are nice modulo 7 and squares are nice modulo 3, so maybe that would bring the number of possibilities down. But my instinct was that this wasn’t the right way to use the fact that we were solving over positive integers.

The second bracket in the factorisation looks enough like the RHS that it’s worth exploring. If we move x^2-xy+y^2 from the right to the left, we get

(x+y-1)(x^2-xy+y^2) = 43xy. (1.1)

Now it suddenly does look useful that we are solving over positive integers, because 43 is a prime, so has to appear as a factor somewhere on the LHS. But it’s generally quite restrictive that x^2-xy+y^2 | 43xy. This definitely looks like something that won’t hold often. If x and y are coprime, then certainly x^2-xy+y^2 and y are coprime also. But actually if x and y have a non-trivial common factor d, we can divide both sides by d^2, and it still holds. Let’s write

x=dm,\quad y=dn,\quad\text{where }d=\mathrm{gcd}(x,y).

Then m^2 -mn+n^2 really does divide 43, since it is coprime to both m and n. This is now very restrictive indeed, since it requires that m^2-mn+n^2 be equal to 1 or 43. A square-sandwiching argument gives m^2-mn+n^2=1 iff m=n=1. The case 43 requires a little bit more work, with (at least as I did it) a few cases to check by hand, and gives the solutions (m,n)=(7,1) and (7,6), up to swapping m and n.

We now need to add the common divisor d back into the mix. In the first case, (1.1) reduces to (2d-1)=43, which gives (x,y)=(22,22). When (m,n)=(7,1), after cancelling a couple of factors, (1.1) reduces to (8d-1)=7, from which (x,y)=(7,1),(1,7) emerges. When (m,n)=(7,6), it reduces to (13d-1)=42, which has no solution in positive integers, and so these must be all the solutions.
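
As reassurance (certainly not necessary for the argument), the size argument caps x+y at 44, so the remaining search is tiny and a couple of lines of Python confirm the list:

```python
# Brute force x^3 + y^3 = x^2 + 42xy + y^2 over the range allowed by the
# size argument above (x + y <= 44).
solutions = [(x, y) for x in range(1, 44) for y in range(1, 44)
             if x + y <= 44 and x**3 + y**3 == x**2 + 42*x*y + y**2]
print(solutions)   # [(1, 7), (7, 1), (22, 22)]
```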

The moral here seemed to be that divisibility was a stronger tool than case-reduction. But that was just this question. There are other examples where case-reduction is probably more useful than chasing divisibility.

Question Three

Find all functions f:\mathbb{N}\rightarrow\mathbb{N} such that

n+f(m) \,\big|\, f(n)+nf(m)

for all m,n\in\mathbb{N}.

What would be useful here? There are two variables, and a function. It would be useful if we could reduce the number of variables, or the number of occurrences of f. We can reduce the number of variables by taking m=n, to get

n+f(n) \,\big|\, f(n) [1+n]. (3.1)

From this, we might observe that f(n)\equiv 1 is a solution. Of course we could analyse this much more, but this doesn’t look like a 10/10 insight, so I tried other things first.

In general, the statement that a\,|\,b also tells us that a\,|\, b-ka. That is, we can subtract arbitrary multiples of the divisor, and the result is still true. A recurring trope is that the original b is elegant, but an adjusted b-ka is useful. I don’t think we can do much of that usefully with (3.1) itself, but by subtracting n^2 +nf(m) from the problem statement, we get

n+f(m) \,\big|\, n^2-f(n). (3.2)

There’s now no m on the RHS, but this relation has to hold for all m. One option is that f(n)=n^2 everywhere, then what we’ve deduced always holds since the RHS is zero. But if there’s a value of n for which f(n)\ne n^2, then (3.2) is a very useful statement. From now on, we assume this. Because then as we fix n and vary m, we need n+f(m) to remain a divisor of the RHS, which is fixed, and so has finitely many divisors. So f(m) takes only finitely many values, and in particular is bounded.

This ties in with the observation that f\equiv 1 is a solution, which we made around (3.1), so let’s revisit that: (Note, there might be more elegant ways to finish from here, but this is what I did. Also note, n is no longer fixed as in the previous paragraph.)

n+f(n) \,\big|\, f(n) [1+n]. (3.1)

Just to avoid confusion between the function itself, and one of the finite collection of values it might take, let’s say b is a value taken by f. So there are values of n for which

n+b \,\big|\, b(1+n).

By thinking about linear equations, you might be able to convince yourself that there are only finitely many solutions (in n) to this relation. There are certainly only finitely many solutions where LHS=RHS (well, at most one solution), and only finitely many where 2·LHS=RHS etc etc. But why do something complicated, when we can actually repeat the trick from the beginning, and subtract b(n+b), to obtain

n+b \,\big|\, b^2-b.

For similar reasons to before, this is a great deduction, because it means if b\ne 1, then the RHS is positive, which means only finitely many n can satisfy this relation. Remember we’re trying to show that any value b\ne 1 is taken by f only finitely often, so this is definitely massive progress!
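
To make the ‘only finitely many n’ step concrete, here’s a tiny illustration (mine, and not needed for the argument): for a fixed value b\ne 1, the valid n are just those for which n+b is a divisor of b^2-b, and there are very few of them.

```python
# For a fixed value b != 1 taken by f, list the n >= 1 with (n + b) | (b^2 - b).
# Only finitely many n qualify, since n + b must be a divisor of b^2 - b > 0.
def admissible_n(b):
    target = b * b - b
    return [n for n in range(1, target + 1) if target % (n + b) == 0]

for b in [2, 3, 5, 10]:
    print(b, admissible_n(b))
# b = 2 already gives the empty list, since n + 2 > 2 for every n >= 1.
```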

If any of what’s already happened looked like magic, I hope we can buy into the idea that subtracting multiples of the divisor from the RHS is the only tool we used, and that making the RHS fixed gives a lot of information about the LHS as the free variable varies. The final step is not magic either. We know that f is eventually 1; if you prefer, “for large enough n, f(n)=1”, since all other values appear only finitely often. I could write this with quantifiers, but I don’t want to, because that makes it seem more complicated than it is. We genuinely don’t care when the last non-1 value appears.

Anyway, since we’ve deduced this, we absolutely have to substitute this into something we already have. Why not the original problem statement? Fix m, then for all large enough n

n+f(m) \,\big|\, 1+nf(m). (3.3)

To emphasise, (3.3) has to hold for all large enough n. Is it possible that f(m)=2? Again, it’s easy to convince yourself not. But, yet again, why not use the approach we’ve used so profitably before to clear the RHS? In fact, we already did this, and called it (3.2), and we can make that work [3.4], but in this setting, because f(m) is fixed and we’re working with variable large n, it’s better to eliminate n, to get

n+f(m)\,\big|\, f(m)^2-1,

again for all large enough n. By the same size argument as before, this is totally impossible unless f(m)=1. Which means that in fact f(m)=1 for all m. Remember ages ago we assumed that f(n) was not n^2 everywhere, so this gives our two solutions: f(n)=1,\, f(n)=n^2.
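
And, for what it’s worth, the cheap computational check that both candidates really do satisfy the original divisibility (the one-line algebraic reasons are in the comments; the code only tests a finite window):

```python
# Check n + f(m) | f(n) + n*f(m) for the two claimed solutions on a finite window.
def satisfies(f, N=200):
    return all((f(n) + n * f(m)) % (n + f(m)) == 0
               for n in range(1, N + 1) for m in range(1, N + 1))

print(satisfies(lambda n: 1))       # True: n + 1 divides 1 + n
print(satisfies(lambda n: n * n))   # True: n + m^2 divides n^2 + n*m^2 = n(n + m^2)
```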

Moral: choosing carefully which expression to work with can make life much more interesting later. Eliminating as many variables or difficult things from one side is a good choice. Playing with small values can help you understand the problem, but here you need to think about soft properties of the expression, in particular what happens when you take one variable large while holding another fixed.

[3.4] – if you do use the original approach, you get n^2-1 on the RHS. There’s then the temptation to kill the divisibility by taking n to be the integer in the middle of a large twin prime pair. Unfortunately, the existence of such an n is still just a conjecture.

Question Four

(Statement copied from Art of Problem Solving. I’m unsure whether this is the exact wording given to the students in the contest.)

On a circular table sit n>2 students. First, each student has just one candy. At each step, each student chooses one of the following actions:

(A) Gives a candy to the student sitting on his left or to the student sitting on his right.

(B) Separates all its candies in two, possibly empty, sets and gives one set to the student sitting on his left and the other to the student sitting on his right.

At each step, students perform the actions they have chosen at the same time. A distribution of candy is called legitimate if it can occur after a finite number of steps.
Find the number of legitimate distributions.

My moral for this question is this: I’m glad I thought about this on the bus first. What I found hardest here was getting the right answer. My initial thoughts:

  • Do I know how to calculate the total number of possibilities, irrespective of the algorithm? Fortunately yes I do. Marbles-in-urns = barriers between marbles on a line (maybe add one extra marble per urn first). [4.1]
  • What happens if you just use option (A)? Well first you can get into trouble because what happens if you have zero sweets? But fine, let’s temporarily say you can have a negative number of sweets. If n is even, then there’s a clear parity situation developing, as if you colour the children red and blue alternately, at every stage you have n/2 sweets moving from red children to blue and vice versa, so actually the total number of sweets among the red children is constant through the process.
  • What happens if you just use option (B)? This felt much more promising.
  • Can you get all the sweets to one child? I considered looking at the child directly opposite (or almost-directly opposite) and ‘sweeping’ all the sweets away from them. It felt like this would work, except if for some parity reason we couldn’t prevent the final child having one (or more, but probably exactly one) sweets at the crucial moment when all the other sweets got passed to him.

Then I got home, and with some paper, I felt I could do all possibilities with n=5, and all but a few when n=6. My conjecture was that all are possible with n odd, and all are possible with n even, except those where none of the red kids or none of the blue kids get a sweet. I tried n=8, and there were a few more that I couldn’t construct, but this felt like my failure to be a computer rather than a big problem. Again there’s a trade-off between confirming your answer, and trying to prove it.
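
Being a computer is, of course, exactly what a computer is good at. Here’s a rough search (mine, and it bakes in one reading of the rules, namely that a child with no sweets simply performs (B) with two empty sets) which enumerates the reachable distributions for small n, to compare against the conjecture and against the marbles-in-urns total from [4.1].

```python
from itertools import product
from math import comb

def moves(k):
    """Options for a child holding k sweets, as (give_left, give_right) pairs.
    (A): pass exactly one sweet left or right, keeping the rest (needs k >= 1).
    (B): split all k sweets between the two neighbours, keeping none.
    Assumption: a child with no sweets performs (B) with two empty sets."""
    opts = {(j, k - j) for j in range(k + 1)}        # option (B)
    if k >= 1:
        opts |= {(1, 0), (0, 1)}                     # option (A)
    return opts

def reachable(n):
    start = tuple([1] * n)
    seen, stack = {start}, [start]
    while stack:
        state = stack.pop()
        for choice in product(*(moves(k) for k in state)):
            new = [0] * n
            for i, (left, right) in enumerate(choice):
                new[i] += state[i] - left - right    # what child i keeps
                new[(i - 1) % n] += left
                new[(i + 1) % n] += right
            new = tuple(new)
            if new not in seen:
                seen.add(new)
                stack.append(new)
    return seen

for n in [3, 4, 5, 6]:
    print(n, len(reachable(n)), comb(2 * n - 1, n - 1))   # reachable vs all distributions
```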

Claim: If n is even, you can’t achieve the configurations where either the red children or the blue children have no sweets.

Proof: Suppose you can. That means there’s a first time that all the sweets were on one colour. Call this time T. Without loss of generality, all the sweets are on red at T. Where could the sweets have been at time T-1? I claim they must all have been on blue, which contradicts minimality. Why? Because if at least one red child had at least one sweet at time T-1, then whichever action they chose, they must have passed at least one sweet to a blue neighbour, and so some blue child would have had a sweet at time T.

Now it remains to give a construction for all other cases. In the end, my proof has two stages:

Step One: Given a configuration, in two steps, you can move a candy two places to the right, leaving everything else unchanged.

This is enough to settle the n odd case. For the even case, we need an extra step, which really corresponds to an initial phase of the construction.

Step Two: We can make some version of the ‘sweeping’ move precise, to end up in some configuration where the red children have any given number of sweets except 0 or n.

Step one is not so hard. Realising that step one would be a useful tool to have was probably the one moment where I shifted from feeling like I hadn’t got into the problem to feeling that I’d mostly finished it. As ever in constructions, working out how to do a small local adjustment, which you plan to do lots of times to get a global effect, is great. (Think of how you solve a Rubik’s cube for example.)

Step two is notationally fiddly, and I would think very carefully before writing it up. In the end I didn’t use the sweeping move. Instead, with the observation that you can take an adjacent pair and continually swap their sweets, it’s possible to set up an induction.

Actual morals: Observing the possibility to make a small change in a couple of moves (Step one above) was crucial. My original moral does still hold slightly. Writing lots of things down didn’t make life easier, and in the end the ideas on the bus were pretty much everything I needed.

[4.1] – one session to a group of 15 year olds is enough to teach you that the canon is always ‘marbles in urns’ never ‘balls’ nor ‘bags’, let alone both.

EGMO 2017 – Paper One – Geometric subconfigurations

I’ve recently been in Cambridge, running the UK’s annual training and selection camp for the International Mathematical Olympiad. My memories of living and studying in Cambridge are very pleasant, and it’s always nice to be back.

Within olympiad mathematics, the UK has traditionally experienced a weakness at geometry. By contrast to comparable nations, for example those from Eastern Europe, our high school curriculum does not feature much Euclidean geometry, except for the most basic of circle theorems and angle equalities, which normally end up as calculation exercises, rather than anything more substantial. So to arrive at the level required to be in with a chance of solving even the easier such questions at international competitions, our students have to do quite a lot of work for themselves.

I’ve spent a bit of time in the past couple of years thinking about this, and how best to help our students achieve this. The advice “go away and do as many problems as you can, building up to IMO G1, then a bit further” is probably good advice, but we have lots of camps and correspondence training, and I want to offer a bit more.

At a personal level, I’m coming from a pragmatic point of view. I don’t think Euclidean geometry is particularly interesting, even though it occasionally has elegant arguments. My main concern is taming it, and finding strategies for British students (or anyone else) to tame it too [1].

Anyway, I’m going to explain my strategy and thesis as outlined at the camp, then talk about Question 1 from EGMO 2017, a competition held in Zurich this year, the first paper of which was sat earlier today (at time of writing). The UK sent a strong team of four girls, and I’m looking forward to hearing all about their solutions and their adventures, but later. I had intended to talk about the other two questions too, but I can’t think of that much to say, so have put this at the end.

My proposed strategy

Before explaining my proposed strategy, let me discuss a couple of standard approaches that sometimes, but rarely, work at this level:

  • Angle chase (or length chase) forwards directly from the configuration. Consider lots of intersection points of lines. Consider angles and lengths as variables, and try to find relations.
  • Exactly as above, but working back from the conclusion.
  • Doing both, and attempting to meet in the middle.

The reason why this doesn’t work is that by definition competitions are competitive, and all participants could probably do this. For similar reasons competition combinatorics problems tend not to reduce instantly to an exhaustive search.

It’s also not very interesting. I’m certainly unlikely to set a problem if it’s known to yield to such an approach. When students do try this approach, common symptoms and side-effects involve a lot of chasing round conditions that are trivially equivalent to conditions given in the statement. For example, if you’re given a cyclic quadrilateral, and you mark on opposing supplementary angles, then chase heavily, you’ll probably waste a lot of time deducing other circle theorems which you already knew.

So actually less is more. You should trust that if you end up proving something equivalent to the required conclusion, you’ll notice. And if you are given a cyclic quadrilateral, you should think about what’s the best way to use it, rather than what are all the ways to use it.

On our selection test, we used a problem which essentially had two stages. In the first stage, you proved that a particular quadrilateral within the configuration was cyclic; and in the second stage, you used this to show the result. Each of these stages by themselves would have been an easy problem, suitable for a junior competition. What made this an international-level problem was that you weren’t told that these were the two stages. This is where a good diagram is useful. You might well guess from an accurate figure that TKAD was cyclic, even if you hadn’t constructed it super-accurately with ruler and compasses.

So my actual strategy is to think about the configuration and the conclusion separately, and try and conjecture intermediate results which might be true. Possibly such an intermediate result might involve an extra point or line. This is a standard way to compose problems. Take a detailed configuration, with some interesting properties within it, then delete as much as possible while keeping the properties. Knowing some standard configurations will be useful for this. Indeed, recognising parts of the original diagram which resemble known configurations (possibly plus or minus a point or line) is a very important first step in many settings.

Cyclic quadrilaterals and isosceles triangles are probably the simplest examples of such configurations. Think about how you often use properties of cyclic quadrilaterals without drawing in either the circle or its centre. The moral is that you don’t need every single thing that’s true about the configuration to be present on the diagram to use it usefully. If you know lots of configurations, you can do this sort of thing in other settings too. Some configurations I can think up off the top of my head include: [2]

  • Parallelograms. Can be defined by corresponding angles, or by equal opposite lengths, or by midpoint properties of the centre. Generally if you have one of these definitions, you should strongly consider applying one of the other definitions!
  • The angle bisector meets the opposite perpendicular bisector on the circumcircle.
  • Simson’s line: the feet of the three perpendiculars from a point to the sides (extended if necessary) of a triangle are collinear precisely when the point is on the circumcircle.
  • The incircle touch point and the excircle touch point are reflections of each other in the corresponding midpoint. Indeed, all the lengths in this diagram can be described easily.
  • The spiral similarity diagram.
  • Pairs of isogonal conjugates, especially altitudes and radii; and medians and symmedians.

Note, all of these can be investigated by straightforward angle/length-chasing. We will see how one configuration turned out to be very useful in EGMO. In particular, the configuration is simple, and its use in the problem is simple, but it’s the idea to focus on the configuration as often as possible that is key. It’s possible but unlikely you’d go for the right approach just by angle-analysis alone.

EGMO 2017 Question 1

Let ABCD be a convex quadrilateral with <DAB=<BCD=90, and <ABC > <CDA. Let Q and R be points on segments BC and CD, respectively, such that line QR intersects lines AB and AD at points P and S, respectively. It is given that PQ=RS. Let the midpoint of BD be M, and the midpoint of QR be N. Prove that the points M, N, A and C lie on a circle.

First point: as discussed earlier, we understand cyclic quadrilaterals well, so hopefully it will be obvious once we know enough to show these four points are concyclic. There’s no point guessing at this stage whether we’ll do it by eg opposite angles, or by power of a point, or by explicitly finding the centre.

But let’s engage with the configuration. Here are some straightforward deductions.

  • ABCD is cyclic.
  • M is the centre.

We could at this stage draw in dozens of equal lengths and matching angles, but let’s not do that. We don’t know yet which ones we’ll need, so we again have to trust that we’ll use the right ones when the time comes.

What about N? If we were aiming to prove <AMC = <ANC, this might seem tricky, because we don’t know very much about this second angle. Since R and Q are defined (with one degree of freedom) by the equal length condition, it’s hard to pin down N in terms of C. However, we do know that N is the midpoint of the hypotenuse QR in triangle QCR, which has a right angle at C. Is this useful? Well, maybe it is, but certainly it’s reminiscent of the other side of the diagram. We have four points making up a right-angled triangle, and the midpoint of the hypotenuse here, but also at (A,B,D,M). Indeed, also at (C,B,D,M). And now also at (C,Q,R,N). This must be a useful subconfiguration right?

If you draw this subdiagram separately, you have three equal lengths (from the midpoint to every other point), and thus two pairs of equal angles. This is therefore a very rich subconfiguration. Again, let’s not mark on everything just yet – we trust we’ll work out how best to use it later.

Should we start angle-chasing? I think we shouldn’t. Even though we have now identified lots of potential extra pairs of equal angles, we haven’t yet dealt with the condition PQ=RS at all.

Hopefully as part of our trivial equivalences phase, we said that PQ=RS is trivially equivalent to PR=QS. Perhaps we also wrote down RN=NQ, and so it’s also trivially equivalent to PN=NS. Let’s spell this out: N is the midpoint of PS. Note that this isn’t how N was defined. Maybe this is more useful than the actual definition? (Or maybe it isn’t. This is the whole point of doing the trivial equivalences early.)

Well, we’ve already used the original definition of N in the subconfiguration (C,Q,R,N), but we can actually also use the subconfiguration (A,P,S,N) too. This is very wordy and makes it sound complicated. I’ve coloured my diagram to try and make this less scary. In summary, the hypotenuse midpoint configuration appears four times, and this one is the least obvious. If you found it, great; if not, I hope this gave quite a lot of motivation. Ultimately, even with all the motivation, you still had to spot it.

Why is this useful? Because a few paragraphs earlier, I said “we don’t know very much about this second angle <ANC”. But actually, thanks to this observation about the subconfiguration, we can decompose <ANC into two angles, namely <ANP+<QNC, which are the apex angles in two isosceles triangles. Now we can truly abandon ourselves to angle-chasing, and the conclusion follows after a bit of work.

I’m aware I’ve said it twice in the prelude, and once in this solution, but why not labour my point? The key here was that spotting that a subconfiguration appeared twice led you to spot that it appeared a further two times, one of which wasn’t useful, and one of which was very useful. The subconfiguration itself was not complicated. To emphasise its simplicity, I can even draw it in the snow:

Angle-chasing within the configuration is easy, even with hiking poles instead of a pen, but noticing it could be applied to point N was invaluable.

Other questions

Question 2 – My instinct suggested the answer was three. I find it hard to explain why. I was fairly sure they wouldn’t have asked if it was two. Then I couldn’t see any reason why k would be greater than 3, but still finite. I mean, is it likely that k=14 is possible, but k=13 is not?

In any case, coming up with a construction for k=3 is a nice exercise, and presumably carried a couple of marks in the competition. My argument to show that k=2 is not possible, and most arguments I discussed with others, were not overwhelmingly difficult, but didn’t really have any key steps or insight, so aren’t very friendly in a blog context, and I’ll probably say nothing more.

Question 3 – Again, I find it hard to say anything very useful, because the first real thing I tried worked, and it’s hard to motivate why. I was confused how the alternating turn-left / turn-right condition might play a role, so I ignored it initially. I was also initially unconvinced that it was possible to return to any edge in any direction (ie it must escape off to infinity down some ray), but I was aware that both of these were too strong a loosening of the problem to be useful, in all likelihood.

Showing that you can go down an edge in one direction but not another feels like you’re looking for some binary invariant, or perhaps a two-colouring of the directed edges. I couldn’t see any way to colour the directed edges, so I tried two-colouring the faces, and there’s only one way to do this. Indeed, on the rare occasions (ahem) I procrastinate, drawing some lines then filling in the regions they form in this way is my preferred doodle. Here’s what it looks like:

and it’s clear that if the path starts with a shaded region on its right, it must always have a shaded region on its right. As I say, this just works, and I find it hard to motivate further.

A side remark is that it turns out that my first loosening is actually valid. The statement remains true with arbitrary changes of direction, rather than alternating changes. The second loosening is not true. There are examples where the trajectory is periodic. I don’t think they’re hugely interesting though, so won’t digress.


[1] – “To you, I am nothing more than a fox like a hundred thousand other foxes. But if you tame me, then we shall need each other. To me, you will be unique in all the world. To you, I shall be unique in all the world,” said the Fox to the Little Prince. My feelings on taming Euclidean geometry are not this strong yet.

[2] – Caveat. I’m not proposing learning a big list of standard configurations. If you do a handful of questions, you’ll meet all the things mentioned in this list several times, and a few other things too. At this point, your geometric intuition for what resembles what is much more useful than exhaustive lists. And if you’re anxious about this from a pedagogical point of view, it doesn’t seem to me to be a terribly different heuristic from lots of non-geometry problems, including in my own research. “What does this new problem remind me of?” is not unique to this area at all!

RMM 2017 – UK Team Blog

This is the customary and slightly frivolous account of a trip to Bucharest for the ninth edition of the Romanian Master of Mathematics, an annual competition for school students, widely recognised as the hardest of its kind.

I discuss the problems in two previous posts (here and here), and there is also a pdf with fewer pictures, which includes both the discussion and this diary, as well as some more formal comments about the competition itself, the results, and thanks.

Wednesday 22 February

Did you know that trains in Moldova use different width tracks to trains in Romania? Well, I didn’t know either, but I found out at 1am today, as my wagon lit from Chisinau was painstakingly jacked up to allow the transfer from ex-Soviet gauge to Western gauge. Outside, a man in a smart uniform and epaulettes shouted loudly and continuously at a group of men in smart uniforms without epaulettes. When their task was done, four sets of border and custom checks remained before the opportunity for another visit to the samovar, and finally a chance to sleep.

All of which is to say that I have arrived at maths competitions in better mental shape than 6am today at Gara de Nord. The UK students have a more conventional itinerary, but their flight from Luton doesn’t arrive until mid-afternoon. After my first Haifa ‘winter’, I’m craving pork and snow, and find both in the mountain town of Sinaia, an hour away by train in Transylvania. I also find a bear. The bear seems very scared.

I return in time to meet the UK students as well as James and MT. Some of our contestants are now into their fourth year of attending international competitions, and the labour of finding them fresh material resembles Hercules against the hydra, but some problems on combinatorial geometry with convexity seem to have kept everyone entertained on the flight. Dinner is at the Moxa campus of the University of Economics, and features chicken with one of two possible carbohydrates, as in fact do the next six meals. However, today is Thomas’s 18th birthday, and so his parents have arranged a delicious cake, which elicits considerably more enthusiasm. On the short walk back to our meeting, we notice it is possible both to buy fireworks and get a tattoo among other options, so Thomas is spoiled for choice about how to take advantage of his majority.

The team’s activities remain a mystery to James and me though, as we have to join the other leaders for the first meeting, to receive the proposed problems. We spend some time thinking about them separately then together, and our initial impression is that it’s a very suitable paper, that hopefully our team will enjoy.

Thursday 23 February

The leaders meet to finalise the choice and statement of the problems. With a bit more time this morning, I’ve solved Q1, Q2, Q5, and proved Q3 once I’d looked up the correct bound. James eats conics for breakfast and shows me a glorious range of interpretations of Q4. We feel happy that our students will have a chance at all of these, while Q6 may prove more restricting. Either way, it’s clearly an appropriate set for this competition, and is approved quickly. So it’s time to finalise the English version of the paper, or finalize the American version. Many alternatives to the word sieve are proposed. Andrea from Italy is clearly already craving home comforts, but his suggestion of cheese grater is not taken up. This time I’m sorting the LaTeX, so get to settle the commas, but also take the blame for inconsistently spacing the rubric between the two papers. I’m sure everyone noticed.

While all this has been happening, the students have been at a lecture by Sergiu Moroianu at the Institute of Mathematics. Joe Benton gives an account of what they learned in the longer pdf version of this report.

For all the charms of Chipping Norton, I sense MT is enjoying the grittier nature of Bucharest Sector 1, and has been shepherding the students round various sites in between attempts at practice problems. I join them for a brief visit to a geology museum. I am very cynical, but it slightly exceeds my expectations, and is infinitely better than the nearby Museum of the Romanian Peasant, which currently ties with the Hanoi Ethnology Museum as my least favourite olympiad excursion of all time.

The opening ceremony is held in the grand hall of the university, and includes several welcoming and thoughtful speeches from the Mayor of Bucharest and the headteacher of Tudor Vianu, the school which hosts this competition every year. Each team briefly presents themselves on stage. Joe and Neel have accumulated a large collection of UK flags from previous competitions, and should hereby consider themselves publicly shamed for forgetting their promise to bring them. It is over soon, and while the students enjoy a quiet evening and an early night, the leaders have to finalise markschemes for all the problems. The walk back takes us through Victory Square, and past the protesters whose fires and slogans have been on front pages around the world in the past months. It’s an interesting time, and the atmosphere of this city feels very different from my first visit, for the inaugural edition of this competition in 2008.

Friday 24 February

The first day of the contest starts at 9am. The British students seem fairly relaxed, and hopefully are aiming high. Contestants may ask questions of clarification during the first 30 minutes. Rosie does this, and I send my reply to her two queries back via the courier. Five minutes later it is returned to me with the explanation that the student does not understand the answer. Even under competition pressure this seems unlikely, given that my answers are, respectively ‘yes’, and putting a ring around one of three options she has listed. It turns out that actually the student courier did not understand what to do with the answer, and the situation is quickly corrected.

We approve more markschemes. The US deputy leader Po-Shen and I share our views on the challenge of correctly finding the bound in Q3, and our suggestion that this instead be worth 2 points is upheld. Various further discussions fill the morning, and we return just in time to meet the students at the end of the exam. Harvey claims all three problems with a relaxed grin, while Joe claims all three problems with the haunted look of a man whose twelfth espresso of the day has just worn off. Alexander and Thomas say that they spent most of the time making sure their solutions to Q1 were totally watertight, which, given the intricacy of the arguments, was clearly a very sensible strategy.

To provide a distraction, if not actually a break from time-pressured problem-solving, I’ve booked a pair of escape rooms for the UK students later in the afternoon. Bucharest is the home of these games, where the aim is to solve themed puzzles as part of a story in time to escape a locked room. I join one of the rooms, where there are some theatrical reveals involving wrenches, and clues hidden in combination-locked cabinets, where ability to add three-digit numbers proves useful. Someone’s carrying voice means we get to enjoy some of the drama and twists of the other room too. Anyway, this proved an ideal way to avoid useless post-mortems, and I highly recommend Vlad and his pair of rooms.

Later, James and I get to look at the students’ work from this morning. Their assessments are pretty accurate. Harvey’s solutions to everything are beautiful, while Neel’s bounding argument in Q2 is certainly the most vulgar (and, in fact, unnecessary) calculation of the year so far. Joe’s solution to Q3 bears such obvious resemblance to an official solution that his uncharacteristic abundance of small errors probably won’t matter, including the memorable set A_i\backslash\{i\}, where the two is mean different things. Some of the team might reflect that a moment of casualness in checking the n=2 case on Q2 is a frustrating way to lose a potential mark, but when I compare notes with James, it sounds like the slow and steady approach to Q1 has indeed paid off for everyone, so hopefully it will not be too painful to agree the scores tomorrow.

Saturday 25 February

It’s the second day of the competition, and the UK team look bright-eyed and positive at breakfast. They aren’t the only ones under pressure this morning, as James and I must settle the scores from yesterday’s questions with local markers, known as coordinators. It’s hard to guess in how much detail one will have to explain your contestants’ scripts, so it is safer to prepare almost line-by-line. On this occasion though, perhaps we have over-prepared, as every meeting ends quickly with offers of 7/7 exactly where we were hoping, and indeed in a couple of places where we were not hoping. The markschemes are very clear about certain omissions which carry a point deduction, so to ensure fairness and consistency, we insist that two scores are moved down. I’m confident that any British student would prefer an honourable 41/42 than an accidental 42/42.

No-one’s going to be scoring 41 nor 42 unless they solve the extremely challenging geometry Q6, and as we meet our students afterwards, it turns out they have not managed any progress there. However, they claim an almost full set of solutions to Questions 4 and 5, which, if accurate, is a very good return. Everyone is in a good mood, and after I explain a couple of approaches to Q6, no-one seems too disappointed that they didn’t spot these.

There are various schedules floating around, listing multiple locations and times for lunch, but our space-time trajectory intersects none of them, so we follow the Chinese team to a recommended cheap Szechuan restaurant round the corner. Various circle theorems are explored via the Lazy Susan, and there is a grand reveal of the marks we’ve recently confirmed. There’s time for another pair of escape rooms while the second day scripts arrive. As Rosie remarks, two in two days can lead to excessive outside-the-box thinking. Sometimes a radiator really isn’t a sinister prop, a device for encoding five-digit numbers, or a clue to a Templar tunnel; it’s just a radiator. Otherwise we’d be cold.

When the scripts arrive, as expected the cupboard is pretty bare on Q6. If there were marks for quantity, Neel might get some, and if there were marks for most uses of esoteric theory in a single page, Alexander might get one. No set of scripts for an international-level medium combinatorics problem will ever be perfect, but our Q5s come close. It’s therefore not a long evening, and we can join the students for dinner with the American team. For most of them it’s their first visit to Europe, and there’s much comparing of culture and maths training programmes. There’s also a long discussion of whether it’s sensible to teach maths in primary school. Those present who have small children or younger siblings weigh in on the mysteries of the ‘grid method’, and whether toddlers implicitly understand commutativity, even if they can’t spell it.

Sunday 26 February

The UK leaders gather early in the ‘philosophical anti-cafe’ opposite Vianu school, to ponder the final scripts with a coffee and a view of an artfully-arranged folio of Spinoza. James has a loyalty card here. Unfortunately two of our students have clear algebraic errors in Q4, but apart from that everything is very straightforward. Though following last night’s conversation, we note that maybe a revision clinic on mathematical spelling might prove useful. Anonymous student X thinks there’s one L in ‘ellipse’, counterbalanced by anonymous student Y who thinks there are two in ‘column’. The word ‘parallel’ comes in many disguises.

Of course, the coordinators couldn’t care less about that, and they don’t even mind Neel’s two-cases-at-once inductive step, so again we get what we ask for on Q5 immediately, and on Q4 in the time it takes James to draw a lozenge tiling representing Thomas’s shearing argument. For Q6, it turns out there clearly is a mark for most uses of esoteric theory in a single page, so Alexander gets it. They show us a diagram with over a hundred lines which suggests that the exotic equivalence he claims is actually true. There we go. Overall, the quality of our written solutions has been extremely high. It feels like I say this every time now, but it isn’t idle propaganda. We remember the horrors that used to emerge occasionally, and the effort to make this improvement permanent feels well worth it.

Meanwhile, to fill the day, the students have gone to Sinaia. Two of their guides went with them to help with tickets at the station, apparently under the impression that never having taken a train before wouldn’t be an obstacle to this role. Either way, they made it, and following my request for material for this report, I receive a trickle of presentable photos, though there is talk afterwards of some rather more informal versions which are apparently not suitable. The Transylvanian winter is thawing, but slowly and messily, and Harvey reports that several of the group spent more time horizontal than vertical. Irrespective of their preferred axis, there’s no comment on whether they saw my bear, or any other bear. But since my bear was scared of me, one wonders what it would make of MT’s telling-off face? (Last seen by me during the notorious ‘balcony incident’ at a summer school in 2005, but hardly forgotten.)

The students return in time for confirmation of the results and their medals. As so often, there is pleasure that we have done so well collectively, mixed with mild disappointment for those who ended up just short of a boundary, and that the UK was so close to placing first. Because of the strength of the invited countries, earning a medal of any colour is a very worthwhile achievement, and so Rosie is impressively sanguine about missing out so narrowly in such an unfortunate manner. Alexander was closer than it appears, and could have two more opportunities to take part.

The closing ceremony at Vianu school proceeds rapidly. There is the usual challenge of photographing the students receiving their prizes, but this time is easy. Thomas is about a foot taller than everyone else on the stage, while Neel is flanked by almost the entire Russian team, but his chutzpah trumps their numerical advantage, with laughter all round. Joe claims this year’s gold medal is substantially weightier. He hasn’t brought his previous pair, so the chance to verify this and recreate a Mark Spitz moment goes begging.

It’s 7pm, and UK student enthusiasm for the closing disco (not my words) is about as high as MT’s enthusiasm to chaperone the closing disco. Instead we find a Middle Eastern restaurant, and it’s refreshing to eat hummus in a place which doesn’t claim to be the ‘best in Israel’ though I don’t think Abu Said in Akko will be rushing to steal the recipe. Po-Shen outlines his vision of a year-long maths camp. I think present company are tired enough after five days here. Some are interested to view, if not actually participate in, the protests in Victory Square, but it seems tonight is a quiet one and nothing is being burned, so late-night cards and a perusal of each others’ scripts will have to do.

Monday 27 February

The rest of the group have a flight back to London later today which apparently cost 99p per person before tax. I don’t know how much less the 5am option was, but I think it’s probably worth it. My own flight is truly at 5am tomorrow and I plan to stay up all night. The students return to school tomorrow, doubtless to receive a glorious mix of adulation and apathy. Harvey requests whether next year this trip can be timed differently so that he can miss the whole of his local Eisteddfod, rather than just one day. I promise to ask the organisers, say goodbye, then head for the hills on a train journey long enough to write the entirety of this report.

3am at Bucharest airport, and thoughts can now turn to the future. Many of us will meet in five weeks’ time for another round of mathematics in the more tranquil setting of Cambridge. Meanwhile, I certainly enjoyed, admittedly through red eyes, the entertainment of a flight to Israel where baggage size regulations are actually enforced at the boarding gate, and apparently everyone else made it back safely too.

RMM 2017 – Problems 2, 3 and 6

In the previous post, I discussed Problems 1, 4 and 5 from this year’s Romanian Master of Mathematics competition. In this post, I discuss the harder problems (modulo my subjective appreciation of difficulty).

Problem 2

Determine all positive integers n satisfying the following condition: for every monic polynomial P of degree at most n with integer coefficients, there exists a positive integer k \leq n, and (k+1) distinct integers x_1,\ldots,x_{k+1} such that

P(x_1) + P(x_2) + \cdots + P(x_k) = P(x_{k+1}).

Parsing this question deserves at least a moment. Straight after a first reading, I find it worth writing down any key quantifiers which I might forget later. Here, it’s the words at most. If you want to show the statement holds for n=2, you need to investigate monic polynomials with degree zero, one and two. You should also make sure that any instances of x_i really are always distinct.

This matters in competitions! Two of our contestants failed to get the mark for showing n=2 works, precisely because of not checking the linear case, and a third could have lost it for using examples which are sometimes not distinct. On hard papers, one mark actually is the difference between triumph and frustration. And of course it matters outside competitions too, since small cases are exactly what your reader might examine first, to check they understand the problem posed, so it’s not a good place for awkward errors.

I started by trying to show that it couldn’t possibly happen that every polynomial with degree at most n had this property, for some combinatorial reason. For example, that if every set of distinct integers could only be a solution set for a small number of polynomials, then we would end up with not enough polynomials. But I couldn’t make this work at all; every bound ended up heavily in the wrong direction.

The next natural question is, does a typical polynomial of degree at most n have this property? But choosing a typical polynomial is hard, so in fact I asked, do the simplest polynomials of degree at most n have this property? I think the simplest polynomials of degree at most n are \{1,x,x^2,\ldots,x^n\}. Under what circumstances does

x_1^m + \ldots + x_k^m = x_{k+1}^m, (1)

have solutions in distinct integers? Famously, when k=2 and m\ge 3 this is a very very hard problem indeed. So the first point is that though it might be useful to use Fermat’s Last Theorem, it would be foolish to pursue a strategy which, if successful, would have a proof of FLT as a sub-problem. At least, it would be foolish if the aim was to finish this strategy within a few hours.

So my main comment on this question is meta-mathematical. If lots of attempts at general arguments don’t work, there must be some special example that does it. And what properties do I want this special example to have? Maybe one might have thought of this from scratch, but my motivation came from (1) in the case m=p-1. Then, by Fermat’s Little Theorem, all the summands are equal to 1 or 0 modulo p. If k>p, then after discounting any uniform factors of p, we obtain a congruence equation which is, in informal terms,

\left(0\text{ or }1\right)+\ldots+\left(0\text{ or }1\right) \equiv \left(0\text{ or }1\right).

This looks really promising because it’s quite restrictive, but it’s still just a bit annoying: there are quite a few solutions. But it does give us the right idea, which is to find a polynomial P for which P(x)\equiv 1 modulo n. The equation 1+\ldots+1\equiv 1 modulo n has solutions only if the number of summands on the LHS is 1 modulo n, and since 1\le k\le n, this forces k=1. So in this context, this reduces to showing that P is, additionally, injective on the integers, ie that P(x)=P(y) only when x=y.

It’s a nice exercise to show the existence of polynomials which are constant modulo n, and a good problem to work out how to force injectivity. If a polynomial is increasing everywhere, then it is certainly injective, and so the problem ends up being slightly easier in the case where the degree is odd than when the degree is even, but this is a nice conclusion to a nice problem, so I’ll save it for any interested readers to finish themselves.
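
If you want to experiment before doing that exercise on paper, one natural candidate (my suggestion, and I make no claim it is the intended construction) is P(x)=x(x+1)\cdots(x+n-1)+1, which is monic of degree n and is congruent to 1 modulo n, because a product of n consecutive integers is divisible by n. The snippet below only checks the congruence; injectivity is the part left to you.

```python
from math import prod

def P(x, n):
    """The candidate x(x+1)...(x+n-1) + 1: monic of degree n."""
    return prod(x + i for i in range(n)) + 1

# Verify P(x) == 1 (mod n) on a window of integers, for several n.
for n in range(2, 9):
    assert all(P(x, n) % n == 1 for x in range(-50, 51))
print("constant 1 modulo n on the tested window")
```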

Problem 3

Let n be an integer greater than 1 and let X be an n-element set. A non-empty collection of subsets A_1,\ldots, A_k of X is tight if the union A_1 \cup \dots \cup A_k is a proper subset of X and no element of X lies in exactly one of the A_is. Find the largest cardinality of a collection of proper non-empty subsets of X, no non-empty subcollection of which is tight.

Note. A subset A of X is proper if A\neq X. The sets in a collection are assumed to be distinct. The whole collection is assumed to be a subcollection.

By Neel Nanda:

If |X|=n, there are 2^n possible subsets, so at first glance the answer could be a variety of things, from a linear to an exponential function of n, each of which would suggest a different approach. So the first step is to conjecture an answer, and by examining small cases it seems impossible to do better than 2n-2. There are several natural constructions for this bound, such as n subsets of size (n-1) and (n-2) subsets of size 1, so we guess this to be our answer (which later turns out to be right!).

From here, a solution is deceptively simple, though empirically the five full solutions in the contest show that it was by no means easy to find. We proceed by induction on the size of X, and want to show that any collection of subsets S has size at most (2n-2). By assumption all subcollections are not tight, so if the union of a subcollection is not the whole set X, then there is an element which appears in exactly one subset. This is a useful result, so we’d like to force a subcollection whose union is not the whole set X.

One way to guarantee that the union of a subcollection is not X is by taking the subcollection of all subsets not containing some element b. So there is some element c which appears in only one subset not containing b. If we choose b so that it’s the element contained in the fewest subsets of S, c is in at least as many subsets of S, but in only one subset not containing b. This means that at most one subset containing b doesn’t contain c. This is useful, because after removing at most 2 subsets (the coefficient of n in 2n-2, importantly!), we now have that every subset in S either contains both b and c or neither. This means that we can replace the pair (b,c) with a new element d, to get a new collection of subsets S’ of a set X’, of size n-1, so by induction |S| \le |S'|+2\le 2n-2.

There is also the case where all subsets contain b, but we can create an equivalent collection of subsets of X \ {b} by removing b from all subsets. So again by induction we are done.
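
Since examining small cases is mentioned above, it’s also a pleasant thing to automate. The sketch below (with my own encoding of ‘tight’, so treat it as a sanity check rather than evidence) brute-forces the maximum for n=3, and verifies that the construction of n subsets of size (n-1) together with (n-2) singletons has no tight subcollection for slightly larger n.

```python
from itertools import combinations

def is_tight(sub, X):
    """A non-empty subcollection is tight iff its union is a proper subset of X
    and no element lies in exactly one of its members."""
    union = set().union(*sub)
    if union == X:
        return False
    return all(sum(x in A for A in sub) != 1 for x in union)

def valid(coll, X):
    """True if no non-empty subcollection of coll is tight."""
    return not any(is_tight(sub, X)
                   for r in range(1, len(coll) + 1)
                   for sub in combinations(coll, r))

# Exhaustive maximum for n = 3 (tiny, so brute force is fine): should print 4 = 2n-2.
X = set(range(3))
subsets = [frozenset(c) for r in range(1, 3) for c in combinations(range(3), r)]
print(max(r for r in range(1, len(subsets) + 1)
          for coll in combinations(subsets, r) if valid(coll, X)))

# The construction: all (n-1)-subsets plus (n-2) singletons, for slightly larger n.
for n in [4, 5]:
    X = set(range(n))
    coll = [frozenset(X - {i}) for i in X] + [frozenset({i}) for i in range(n - 2)]
    print(n, len(coll), valid(coll, X))   # 2n-2 sets, and True
```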

Problem 6

Let ABCD be any convex quadrilateral and let P, Q, R, S be points on the segments AB, BC, CD, and DA, respectively. It is given that the segments PR and QS dissect ABCD into four quadrilaterals, each of which has perpendicular diagonals. Show that the points P, Q, R, S are concyclic.

I thought this problem was extremely hard. The official solution starts with a ‘magic lemma’, that isn’t quite so magic if you then read how it’s used. The overall claim is that PQ, RS and AC are concurrent (or parallel), and this is proved using the fact that the radical axis of the two circles with diameters PQ and RS also passes through this point of concurrency. Hunting for key properties of subsets of points in the diagram is an important skill in hard olympiad geometry, since it exactly reflects how problem-setters produce the problems. All the more so when there is lots of symmetry in the construction. But this is a hard example – there are a lot of potentially relevant subsets of the configuration.

When you’re really stuck with how to get involved in a synthetic configuration, you might consider using coordinates. Some of the UK students have been reading some chapters of a book (Euclidean Geometry in Mathematical Olympiads by Evan Chen. I’ve only had my own copy for a couple of days, but my initial impression is very positive – it fills a gap in the literature in a style that’s both comprehensive and readable.) focusing on various analytic approaches, so James and I felt it was safer to make sure we knew what the best settings were, and how far we could take them.

You almost certainly want the intersection of PR and QS to be your origin. I wanted to set up the configuration using the language of vectors, referenced by (P,Q,R,S). This was because PQ\perp BO and so on, hence \mathbf{b}\cdot (\mathbf{q}-\mathbf{p})=0 and so on. An alternative is to use complex numbers, which makes this condition a bit more awkward, but is more promising for the conclusion. Concyclity is not a natural property in vectors unless you can characterise the centre of the circle, but can be treated via cross-ratios in \mathbb{C}. You also have to decide whether to describe the collinearity of A, B and P by expressing \mathbf{p}=\lambda_{\mathbf{p}} \mathbf{a}+(1-\lambda_{\mathbf{p}})\mathbf{b}, or via something more implicit. There definitely are not four degrees of freedom here, since specifying A certainly defines at most one valid set of (B,C,D), so one is mindful we’ll have to eliminate many variables later. We also have to account for the fact that \mathbf{r} is a negative scalar multiple of \mathbf{p}, and it’s not clear whether it’s better to break symmetry immediately, or use this towards the end of a calculation.

The point of writing this was that if your initial thought was ‘this looks promising via coordinate methods’, then I guess I agree. But there’s a difference between looking promising, and actually working, and there are lots of parameterisation options. It’s certainly worth thinking very carefully about which to choose, and in this case, challenging though they were, the synthetic or synthetic-trigonometric methods probably were better.