Sharpness of Phase Transition in Percolation

On the flight to Romania a couple of weeks ago, I read this very nice paper by Duminil-Copin and Tassion in which, as a corollary to a more general result in a companion paper, they provide a short proof of a result about percolation on \mathbb{Z}^d. Mitch talked about this paper at our final Junior Probability Seminar of this term yesterday, so it seemed appropriate to write up something about this nice argument. I must confess I know relatively little about these problems, and in particular know nothing about how the original authors, Aizenman and Barsky (1987) and Menshikov (1986), approached this problem, but experts have said that this new proof is substantially easier to digest.

Rather than reference the first paper repeatedly, I remark now that everything which follows comes from there.

We consider conventional bond percolation on the edges of \mathbb{Z}^d, for d\ge 2, and are interested in whether the origin percolates with positive probability, that is, whether it is contained in an infinite component. As usual we define p_c=\sup\{p: \mathbb{P}_p(0\leftrightarrow\infty)=0\} to be the critical probability above which percolation happens with positive probability. Defining \theta(p)=\mathbb{P}_p(0\leftrightarrow\infty), we do not know whether \theta(p_c)=0 for some values of d, notably d=3.

If the origin is connected to infinity, it is by definition connected to the boundary of every \Lambda_n:=[-n,n]^d. The number of distinct paths from the origin to \partial \Lambda_n is bounded by the number of self-avoiding walks of length n on the lattice starting from 0, which grows at most as fast as (2d-1)^n. In particular, we know that p_c\ge \frac{1}{2d-1}, but also, for any p<\frac{1}{2d-1}, the probability \mathbb{P}_p[0\leftrightarrow \partial\Lambda_n] decays exponentially in n. We would expect this in fact to hold for all p<p_c, and this is the first thing the authors prove; call it Item 1. They also show that the percolation probability grows at least linearly beyond p_c, specifically (call this Item 2)

\theta(p)\ge \frac{p-p_c}{p(1-p_c)}.
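As a quick sanity check on the counting bound (my own, not from the paper), one can enumerate self-avoiding walks on \mathbb{Z}^2 by brute force and compare against 2d(2d-1)^{n-1}:

```python
def count_saws(n, pos=(0, 0), visited=None):
    """Count self-avoiding walks of length n on Z^2 starting at pos."""
    if visited is None:
        visited = {pos}
    if n == 0:
        return 1
    x, y = pos
    total = 0
    for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
        if nxt not in visited:
            total += count_saws(n - 1, nxt, visited | {nxt})
    return total

# After the first step (2d choices), each subsequent step has at most
# 2d - 1 choices, since the walk cannot immediately backtrack.
for n in range(1, 7):
    print(n, count_saws(n), 4 * 3 ** (n - 1))
```

The counts 4, 12, 36, 100, 284, 780 sit comfortably under the bound; the connective constant for \mathbb{Z}^2 is in fact around 2.64, noticeably below 2d-1=3.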

The proof here proceeds by considering the function of subsets S which contain the origin:

\varphi_p(S):= p\sum_{(x,y)\in\Delta S} \mathbb{P}_p[0\stackrel{S}{\leftrightarrow} x],\quad S\subset \mathbb{Z}^d.

In words, this gives the expected number of edges across the boundary which are connected to the origin by a path within S. So this gives a measure of how likely we are to escape S, and in particular, an upper bound on the probability that an open path exists from 0 to outside S. The authors then define the alternative critical probability

\tilde p_c := \sup\{p:\ \exists\text{ finite }S\ni 0\text{ with }\varphi_p(S)<1\}.

They will show that \tilde p_c satisfies the statements of both Item 1 and Item 2. Item 2 for \tilde p_c implies \tilde p_c\le p_c, and Item 1 for \tilde p_c implies p_c\le \tilde p_c, so this is exactly what we need.
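To get a feel for \varphi_p, here is a rough Monte Carlo estimate (my own sketch, not from the paper) for S a small box in \mathbb{Z}^2:

```python
import random

def phi_p(p, m=2, trials=2000, seed=0):
    """Monte Carlo estimate of phi_p(S) for S = [-m, m]^2 in Z^2:
    p times the sum, over boundary edges (x, y) with x in S, y not in S,
    of P_p[0 <-> x by an open path using only edges inside S]."""
    rng = random.Random(seed)
    S = {(i, j) for i in range(-m, m + 1) for j in range(-m, m + 1)}
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    internal = [(x, y) for x in S for dx, dy in steps
                for y in [(x[0] + dx, x[1] + dy)] if y in S and x < y]
    # one entry per boundary edge: the endpoint x inside S
    boundary = [x for x in S for dx, dy in steps
                if (x[0] + dx, x[1] + dy) not in S]
    connected = dict.fromkeys(S, 0)
    for _ in range(trials):
        adj = {}
        for a, b in internal:
            if rng.random() < p:       # each internal edge open with prob p
                adj.setdefault(a, []).append(b)
                adj.setdefault(b, []).append(a)
        seen, stack = {(0, 0)}, [(0, 0)]
        while stack:                   # cluster of the origin within S
            v = stack.pop()
            for w in adj.get(v, []):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        for v in seen:
            connected[v] += 1
    return p * sum(connected[x] / trials for x in boundary)

print(phi_p(0.1), phi_p(0.9))  # well below 1, and well above 1
```

For small p the estimate falls below 1, exhibiting a set S of the kind the definition of \tilde p_c asks for.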

They show Item 1 first. Consider a set S for which \varphi_p(S)<1, and take some box \Lambda_L which strictly contains S. Now we consider the probability of escaping from a box of size kL. The reason this definition of S works so nicely is that it makes it possible to split the event of escaping from \Lambda_{kL} into events involving various disjoint sets of edges being open, so we can use independence.

We decompose the path from 0 to \partial\Lambda_{kL} based on the first time it leaves S. We are mindful that there might be lots of paths from 0 to this boundary. The way we are bounding means it doesn’t matter if we have lots of suitable paths, but they should all spend a maximal number of steps in S, in the sense that whenever the path re-enters S, say at vertex z, there is no open path from 0 to z contained in S. Let x be the vertex on \partial S from which the path leaves S for the first time. Then, for all vertices y later in the path, 0\stackrel{S}{\not\leftrightarrow}y.

So under any suitable path, now take y to be the vertex directly following x, hence (x,y)\in\Delta S. If we take \mathcal{C} to be the set of vertices z for which 0\stackrel{S}{\leftrightarrow}z, we can split the expression based on S to obtain:

\mathbb{P}_p[0\leftrightarrow \partial \Lambda_{kL}]\le p \sum_{(x,y)\in\Delta S}\sum_{C\subset S} \mathbb{P}_p[0\stackrel{S}{\leftrightarrow} x,\mathcal{C}=C] \mathbb{P}_p[y\stackrel{\mathbb{Z}^d\backslash C}{\leftrightarrow}\partial\Lambda_{kL}].

Splitting based on C gives us independence between all of these sets of edges, but then we immediately forget about it. Irrespective of the choice of y (recall y is a neighbour of S, and S is strictly contained in \Lambda_L, so y\in\Lambda_L), this final probability is certainly bounded by \mathbb{P}_p[0\leftrightarrow \partial \Lambda_{(k-1)L}], while the p and the first term can be summed over C to give \varphi_p(S). They obtain:

\mathbb{P}_p[0\leftrightarrow \partial \Lambda_{kL}] \le \varphi_p(S)\,\mathbb{P}_p[0\leftrightarrow \partial \Lambda_{(k-1)L}] \le \varphi_p(S)^{k-1},

where the final relation holds by induction, and clearly gives exponential decay as required.
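The exponential decay is easy to observe numerically; here is a crude Monte Carlo sketch (mine, with d=2 and p=0.3 chosen arbitrarily below p_c=\frac12):

```python
import random

def crossing_prob(p, n, trials=2000, seed=1):
    """Monte Carlo estimate of P_p[0 <-> boundary of Lambda_n] on Z^2."""
    rng = random.Random(seed)
    box = {(i, j) for i in range(-n, n + 1) for j in range(-n, n + 1)}
    hits = 0
    for _ in range(trials):
        adj = {}
        for (i, j) in box:                       # sample the open edges
            for nbr in [(i + 1, j), (i, j + 1)]:
                if nbr in box and rng.random() < p:
                    adj.setdefault((i, j), []).append(nbr)
                    adj.setdefault(nbr, []).append((i, j))
        seen, stack, reached = {(0, 0)}, [(0, 0)], False
        while stack:                             # search from the origin
            v = stack.pop()
            if max(abs(v[0]), abs(v[1])) == n:   # hit the boundary of the box
                reached = True
                break
            for w in adj.get(v, []):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        hits += reached
    return hits / trials

for n in [2, 5, 8]:
    print(n, crossing_prob(0.3, n))  # falls off rapidly in n
```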

For Item 2 we use Russo’s formula. Here we have a slightly simpler setting than the most general version, since the event under consideration A_n=\{0\leftrightarrow \partial\Lambda_n\} is increasing with respect to adding edges, and is a function of only finitely many edges. Then we can consider \frac{d}{dp}\mathbb{P}_p[A] under the coupling which adds each edge independently as a Poisson process with (locally) rate \frac{1}{1-t}. (We take this rate to avoid having to reparameterise between time and probability: with it, an edge is open at time t with probability exactly t, so we may identify t with p.)

Just for ease, we only consider the right-derivative at p. Then with \mathbb{P} as the law of the coupled process:

\frac{d}{dp}\mathbb{P}_p[A] \approx \frac{1}{\epsilon} \mathbb{P}[A\text{ holds at }p+\epsilon,\text{ but not at }p]

= \frac{1}{\epsilon} \sum_{e\in E}\mathbb{P}[e\text{ added between }p,p+\epsilon\text{ and }e\text{ completes event }A]

+ \frac{1}{\epsilon}\mathbb{P}[\text{two or more edges relevant to }A\text{ added}].

Since the number of edges whose states determine A is finite, this second term vanishes as \epsilon\rightarrow 0. So

=\frac{1}{\epsilon}\sum_{e\in E} \frac{\epsilon}{1-p}\, \mathbb{P}(A \text{ doesn't hold at }p\text{, and edge }e\text{ pivotal at }p).

Taking the limit in \epsilon in this example gives

\frac{d}{dp}\mathbb{P}_p[0\leftrightarrow \partial\Lambda_n] = \frac{1}{1-p} \sum_{e\in\Lambda_n} \mathbb{P}_p[e\text{ pivotal, }0\not\leftrightarrow \partial \Lambda_n].
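On a tiny example one can check this numerically (my illustration: the event \{0\leftrightarrow\partial\Lambda_1\} in \mathbb{Z}^2, which depends only on the four edges at the origin, since each neighbour of 0 already lies on \partial\Lambda_1):

```python
from itertools import product

def prob_A(p):
    """P_p[0 <-> boundary of Lambda_1] by exact enumeration: the event
    holds iff at least one of the four incident edges is open."""
    total = 0.0
    for config in product([0, 1], repeat=4):
        if any(config):
            weight = 1.0
            for bit in config:
                weight *= p if bit else 1 - p
            total += weight
    return total

def pivotal_sum(p):
    """sum_e P_p[e pivotal]: e is pivotal iff the other three edges are
    closed.  Note P[e pivotal, A fails] = (1-p) P[e pivotal], since
    pivotality does not depend on the state of e itself -- which is
    exactly where the 1/(1-p) prefactor in the text comes from."""
    return 4 * (1 - p) ** 3

# Russo: d/dp P_p[A] = sum_e P_p[e pivotal]; check at p = 0.4
p, h = 0.4, 1e-6
deriv = (prob_A(p + h) - prob_A(p - h)) / (2 * h)
print(deriv, pivotal_sum(p))  # both approximately 4 * 0.6^3 = 0.864
```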

The argument then proceeds in a similar way to Item 1, decomposing \{0\not\leftrightarrow \partial \Lambda_n\} by conditioning on the set of vertices \mathcal{S} from which it is not possible to get to \partial \Lambda_n. In particular, this set is an excellent candidate to view as S, since on this event it contains 0 by definition. Once we have specified \mathcal{S} we know which edges might be pivotal, namely those across the boundary of \mathcal{S}. Crucially, the event \{\mathcal{S}=S\} only depends on those edges between the boundary of S and \partial \Lambda_n, so is independent of the event \{0\stackrel{S}{\leftrightarrow}x\} whenever x\in\partial \mathcal{S}. So applying this version of Russo gives

\frac{d}{dp}\mathbb{P}_p[0\leftrightarrow \partial\Lambda_n] = \frac{1}{1-p}\sum_{0\in S\subset \Lambda_n} \sum_{(x,y)\in\Delta S} \mathbb{P}_p[0\stackrel{S}{\leftrightarrow} x]\,\mathbb{P}_p[\mathcal{S}=S].

It is clear where \varphi_p(S) might turn up within the sum (after removing a factor of p), so for a bound we can take \inf_S \varphi_p(S) outside the sum, and arrive at

\frac{d}{dp}\mathbb{P}_p[0\leftrightarrow \partial\Lambda_n] \ge \frac{1}{p(1-p)}\inf_{0\in S\subset \Lambda_n} \varphi_p(S)\left(1-\mathbb{P}_p[0\leftrightarrow \partial \Lambda_n]\right).

It wasn’t immediately clear to me that this implied the given form of Item 2 (though it certainly is consistent). I think perhaps I was trying to be too complicated and thinking about Gronwall’s lemma, when in fact everything really follows from bounding \inf \varphi_p(S) below by 1 (since we have assumed p>\tilde p_c here, no finite set S\ni 0 has \varphi_p(S)<1), then integrating the differential inequality

\frac{d}{dp}\left[ \frac{p}{1-p}f(p) \right] = \frac{p}{1-p}f'(p)+\frac{1}{(1-p)^2}f(p) \ge \frac{1}{(1-p)^2}.
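Explicitly, writing f(p)=\mathbb{P}_p[0\leftrightarrow\partial\Lambda_n] and integrating both sides between \tilde p_c and p gives

\frac{p}{1-p}f(p) - \frac{\tilde p_c}{1-\tilde p_c}f(\tilde p_c) \ge \int_{\tilde p_c}^p \frac{du}{(1-u)^2} = \frac{1}{1-p}-\frac{1}{1-\tilde p_c} = \frac{p-\tilde p_c}{(1-p)(1-\tilde p_c)},

and since the second term on the left is non-negative, we may drop it and rearrange to obtain

f(p) \ge \frac{p-\tilde p_c}{p(1-\tilde p_c)}.

Letting n\rightarrow\infty gives Item 2 for \tilde p_c, and since the two critical values coincide, the stated bound on \theta(p).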

I include this not because it’s an interesting part of the argument (I don’t think it is really) but because I struggled to digest it first time round.

What is interesting is how well this choice to consider \varphi_p(S) works out. In both parts of the argument, sets which work well for splitting the crossing probabilities into disjoint edge events mesh nicely with considering this function after conditioning on sub- or super-criticality with respect to \tilde p_c.


100k Views

When I started this blog, I implicitly made a promise to myself that I would be aiming for quality of posts rather than quantity of posts. Evaluating quality is hard, whereas one always feels some vague sense of purpose by seeing some measure of output increase. Nonetheless, I feel I have mostly kept this promise, and haven’t written that much solely for the sake of getting out another post. This post is something of an exception, since I noticed in the office on Friday that this blog was closing in on 100,000 page views. Some of these were not actually just me pressing F5 repeatedly. Obviously this is no more relevant an integer to be a threshold than any other, and one shouldn’t feel constrained by base 10, but it still feels like a good moment for a quick review.

Here are some interesting statistics over the (3+\epsilon) years of this website’s existence.

  • 175 posts, not including this one, or the three which are currently in draft status. This works out at about 4.7 posts a month. By some margin the most prolific period was May 2012, when I was revising for my Part III exams in Cambridge, and a series of posts about the fiddliest parts of stochastic calculus and network theory seemed a good way to consolidate this work. I’ve learned recently that PhDs are hard, and in fact it’s been a year since I last produced at least five posts in a month, if you discount the series of olympiad reports, which though enjoyable, don’t exactly require a huge amount of mathematical engagement.
  • By at least one order of magnitude, the most viewed day was 17th August 2014, when several sources simultaneously linked to the third part of my report on IMO 2014 in Cape Town. An interesting point to note is that WordPress counts image views separately to page views, so the rare posts which have a gallery attached count well in a high risk / high return sense. In any case, the analytics consider that this day resulted in 2,366 views by 345 viewers. During a typical period, the number of views per visitor fluctuates between roughly 1.5 and 1.8, so clearly uploading as many photos of maths competitions as possible is the cheap route to lots of hits, at least by the WordPress metric.

[Chart: page views per month since late 2012]

  • One might well expect the distributions involved in such a setup to follow a power-law. It’s not particularly clear from the above data about views per month since late 2012 whether this holds. One anomalously large data point (corresponding to the interest in IMO 2014) does not indicate a priori a power law tail… In addition, there is a general upward trend. Since a substantial proportion of traffic arrives from Google, one might naively assume that traffic rate might be proportional to amount of content, which naturally will grow in time, though it seems impractical to test this. One might also expect more recent posts to be more popular, though in practice this seems not to have been the case.
  • The variance in popularity of the various posts has been surprisingly large. At some level, I guess I thought there would be more viewers who browse through lots of things, but such people would probably do so from the home page, so it doesn’t show up as viewing lots of different articles. There is some internal linking between some pages, but not enough to be a major effect.
  • At either end of the spectrum, a post about coupling and the FKG inequality has received only 16 views in 2.5 years, while a guide to understanding the Levy-Khintchine formula has, in slightly less time, racked up 2,182 hits. There are direct reasons for this. Try googling Levy-Khintchine formula and see what comes up. In a sense, this is not enough, since you also need people to be googling the term in question, and picking topics that are hard but not super-hard at a masters level is probably maximising interest. But I don’t have a good underlying reason for why some posts should end up being more Google-friendly than others.
  • In particular, quality of article seems at best loosely correlated with number of views. This is perhaps worrying, both for my reputation, and for the future of written media, but we will see. Indeed, two articles on the Dubins-Schwarz theorem and a short crib sheet for convergence of random variables, both get a regular readership, despite seeming to have been written (in as much as a blog post can be) on the back of an envelope. I also find it amusing that the Dubins-Schwarz theorem is always viewed at the same time of the year, roughly mid-February, as presumably it comes up in the second term of masters courses, just like it did in my own.
  • By contrast, I remain quite pleased with the Levy-Khintchine article. It’s the sort of topic which is perfectly suited to this medium, since most books on Levy processes seem to assume implicitly that their readers will already be familiar with this statement. So it seemed like a worthwhile enterprise to summarise this derivation, and it’s nice to see that others clearly feel the same, and indeed I still find some posts of this flavour useful as revision for myself.

[Plot: log of page views for the mathematical articles with at least 200 hits]

  • This seemed like a particularly good data set in which to go hunting for power-laws. I appreciate that taking a print-screen of an Excel chart will horrify many viewers, but never mind. The above plot shows the log of page view values for those mathematical articles with at least 200 hits. You can see the Levy-Khintchine as a mild anomaly at the far left. While I haven’t done any serious analysis, this looks fairly convincing.
  • I haven’t engaged particularly seriously in interaction with other blogs and other websites. Perhaps I should have done? I enjoy reading such things, but networking in this fashion seems like a zero-sum game overall except for a few particularly engaged people, even if one gets a pleasing spike in views from a reciprocal tweet somewhere. As a result, the numbers of comments and out-going clicks here are essentially negligible.
  • Views from the UK narrowly outnumber views from the US, but at present rates this will be reversed very soon. I imagine if I discounted the olympiad posts, which are sometimes linked from UK social media, this would have happened already.
  • From purely book-keeping curiosity, WordPress currently thinks the following countries (and territories – I’m not sure how the division occurs…) have recorded exactly one view: Aaland Islands, Afghanistan, Belize, Cuba, Djibouti, El Salvador, Fiji, Guatemala, Guernsey, Laos, Martinique, Mozambique, New Caledonia, Tajikistan, US Virgin Islands, and Zambia. Visiting all of those would be a fun post-viva trip…

Conclusion

As I said, we all know that 100,000 is just a number, but taking half an hour to write this has been a good chance to reflect on what I’ve written here in the past three years. People often ask whether I would recommend that they start something similar. My answer is ‘probably yes’, so long as the writer is getting something out of most posts they produce in real time. When writing about anything hard and technical, you have to accept that until you become very famous, interest in what you produce is always going to be quite low, so the satisfaction has to be gained from the process of understanding and digesting the mathematics itself. None of us will be selling the musical rights any time soon.

I have two pieces of advice to anyone in such a position. 1) Wait until you’ve written five posts before publishing any of them. This is a good test of whether you actually want to do it, and you’ll feel much more plausible linking to a website with more than two articles on it. 2) Don’t set monthly post count targets. Tempting though it is to try this to prevent your blog dying, it doesn’t achieve anything in the long term. If you have lots to say, say lots; if you don’t, occasionally saying something worthwhile feels a lot better when you look back on it than producing your target number of articles which later feel underwhelming.

I don’t know whether this will make it to 6+2\epsilon years, but for now, I’m still enjoying the journey through mathematics.

Real Trees – Root Growth and Regrafting

Two weeks ago in our reading group meeting, Raphael told us about Chapter Five which introduces root growth and regrafting. One of the points of establishing the Gromov-Hausdorff topology in this book was to provide a more natural setting for a discussion of tree-valued processes. Indeed in what follows, one can imagine how to start the construction of a similar process for the excursions which can be used to encode real trees, involving cutting off sub-excursions above one-sided local minima, then glueing them back in elsewhere. But taking account of the equivalence structure will be challenging, and it is much nicer to be able to describe cutting a tree in two by removing a single point without having to worry about quotient maps.

We have seen in Chapter Two an example of a process defined on the family of rooted trees with n labelled vertices which has the uniform rooted tree as an invariant distribution. Given a rooted tree with root p, we choose uniformly at random a vertex p’ in [n] to be the new root. Then if p’=p we do nothing, otherwise we remove the unique first edge in the path from p’ to p, giving two trees. Adding an edge from p to p’ completes the step and gives a new tree with p’ as root. We might want to take a metric limit of these processes as n grows and see whether we end up with a stationary real tree-valued process whose marginals are the BCRT.
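In the parent-pointer representation this chain is pleasantly simple to implement (a sketch of my own): cutting the first edge on the path from p’ to p and attaching the old root below p’ amounts to exactly two pointer updates.

```python
import random

def regraft_step(parent, root, rng):
    """One step of the chain: pick a uniform vertex p'; if p' != root,
    cut the edge from p' to its parent (the first edge on the path from
    p' to the root) and add an edge from the old root to p'.  With
    parent pointers aimed at the root, this is two assignments."""
    new_root = rng.randrange(len(parent))
    if new_root != root:
        parent[root] = new_root     # the added edge: old root hangs off p'
        parent[new_root] = None     # the cut edge: p' becomes the root
    return new_root

rng = random.Random(0)
parent = [None, 0, 1, 2]   # the path 0 - 1 - 2 - 3, rooted at 0
root = 0
for _ in range(20):
    root = regraft_step(parent, root, rng)

# Still a tree: exactly one root, and every vertex reaches it.
assert sum(v is None for v in parent) == 1
for v in range(len(parent)):
    w, steps = v, 0
    while parent[w] is not None:
        w, steps = parent[w], steps + 1
        assert steps < len(parent)
    assert w == root
```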

To see non-trivial limiting behaviour, it is most interesting to consider the evolution of a particular subtree (which includes the root) through this process. If the vertex chosen for cutting lies in our observed subtree, then the subtree undergoes a prune and regraft operation. On the other hand, if the vertex chosen for cutting does not lie in the subtree, then we do not see any effect of the pruning, except the addition of a new vertex below the original root, which becomes the new root. So essentially, from the point of view of our observed subtree, the root is growing.

Now we can think about interpreting the dynamics of a natural limit process acting on real trees. The key idea is that we don’t change the set on which the tree is supported much, but instead just change the metric. In particular, we will keep the original tree, and add on length at unit rate. Of course, where this length gets added on entirely determines the metric structure of the tree, but that doesn’t stop us giving a simple ‘name’ for the extra length. If we consider a process X^T starting from a particular finite subtree T, then at time t, the tree X^T_t has vertex set T \coprod (0,t]. (Finite subtree here means that it has finite total length.)

Root regrafting should happen at a rate proportional to the total length of the current observed tree. This is reasonable since after all it is supported within a larger tree, so in the discrete case the probability of a prune-regrafting event happening within a given observed subtree is proportional to the number of vertices in that subtree, which scales naturally as length in the real tree limit. It turns out that to get unit rate root growth with \Theta(1) rate prune-regrafting, we should consider subtrees of size \sqrt{n} within a host tree of size n as n\rightarrow\infty. We also rescale the lengths by \frac{1}{\sqrt{n}}, and time by \sqrt{n} so we actually see prune-regraft events.

Furthermore, if the subtree is pruned, the location of the pruning is chosen uniformly by length of the current observed subtree. So we can view the pruning process as being driven by a Poisson point process with intensity given by the instantaneous length measure of the tree, which at time t has vertex set T\coprod (0,t]. It will turn out to be consistent that there is a ‘piecewise isometry’ for want of a better phrase between the metric (and thus length measure) on X^T_t and the canonical induced measure on T\coprod (0,t], so we can describe the instances and locations of the pruning events via a pair of PPPs. The first is supported on T \times [0,\infty), and the second on \{(t,x): 0 \le x \le t\}, since we only ‘notice’ pruning at the point labelled x if the pruning happens at some time t after x was created.
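Here is how one might sample this pair of PPPs for a starting tree of total length L, up to time t_max (entirely my own sketch; for simplicity, locations on the original tree are flattened to a coordinate in [0, L]):

```python
import math
import random

def poisson(rng, lam):
    """Knuth's method for a Poisson(lam) sample; fine for small lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def pruning_events(length, t_max, rng):
    """Sample the times and locations of pruning events up to t_max.

    First PPP: on T x [0, infinity) with intensity (length measure on T)
    x Lebesgue -- a Poisson(length * t_max) number of points, uniform in
    the box.  Second PPP: Lebesgue intensity on the triangle
    {(t, x): 0 <= x <= t}, since the point labelled x only exists from
    time x onwards."""
    events = []
    for _ in range(poisson(rng, length * t_max)):
        events.append((rng.uniform(0, t_max),
                       ('original', rng.uniform(0, length))))
    for _ in range(poisson(rng, t_max ** 2 / 2)):
        u, v = rng.uniform(0, t_max), rng.uniform(0, t_max)
        events.append((max(u, v), ('grown', min(u, v))))  # uniform in triangle
    return sorted(events)

rng = random.Random(0)
evs = pruning_events(1.0, 2.0, rng)
assert all(x <= t for t, (kind, x) in evs if kind == 'grown')
```

The expected number of events up to t_max is L * t_max + t_max^2 / 2, reflecting the root growing at unit rate.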

If we start from a compact tree T, then the total intensity of this pair is finite up to some time t, and so we have a countable sequence \tau_0=0<\tau_1<\tau_2<\ldots of times for pruning events. It is easy to describe (but a bit messy to notate) the evolution of the metric between these pruning times. Essentially the distance between any pair of points in the observed tree at time \tau_m with root \rho_{\tau_m} is constant between times \tau_m,\tau_{m+1}, and new points are added so that the distance between \rho_{\tau_m} and any new point a\in(\tau_m,\tau_{m+1}] is a-\tau_m, and everything else follows from straightforward consideration of geodesics.

When a pruning event happens at point x_m at time \tau_m, distances are preserved within the subtree above x_m in X^T_{\tau_m -}, and within the rest of the tree. Again, an expression for the cross distances is straightforward but requires a volume of notation not ideally suited to this medium.

The natural thing to consider is the coupled processes started from different subtrees (again both must contain the original root) of the same host tree. Say T^1,T^2\le T, then it is relatively easy to check that X^{T^1}_t,X^{T^2}_t \le X^T_t \,\forall t, when we drive the processes by consistent coupled Poisson processes. Furthermore, it is genuinely obvious that the Hausdorff distance between X^{T^1}_t,X^{T^2}_t, here viewed as compact subsets of (X^T_t, d^T_t), remains constant during the root growth phases.

Less obvious but more important is that the Hausdorff distance decreases during regrafting events. Suppose that just before a regrafting event, the two subtrees are T’ and T”, and the Hausdorff distance between them is \epsilon. This Hausdorff distance is with respect to the metric on the whole tree T. [Actually this is a mild abuse of notation – I’m now taking T to be the whole tree just before the regraft, rather than the tree at time 0.]

So for any a\in T', we can choose b\in T'' such that d_T(a,b)\le \epsilon. This is preserved under the regraft unless the pruning point lies on the geodesic segment (in T) between a and b. But in that case, the distance between a and the pruning point is again at most \epsilon, and so after the regrafting, a is at most \epsilon away from the new root, which is in both subtrees, and in particular the regrafted version of T”.

This is obviously a useful first step on the path to proving any kind of convergence result. There are some technicalities which we have skipped over. It is fairly natural that this leads to a Markov process when the original tree is finite, but it is less clear how to define these dynamics when the total tree length is infinite, as we don’t want regrafting events to be happening continuously unless we can bound their net effect in some sense.

Last week, Franz showed us how to introduce the BCRT into matters. Specifically, that the BCRT is the unique stationary distribution for this process. After a bit more work, the previous result says that for convergence properties it doesn’t matter too much what tree we start from, so it is fine to start from a single point. Then, the cut points and growth mechanism correspond very well to the Poisson line-breaking construction of the BCRT. With another ‘grand coupling’ we can indeed construct them simultaneously. Furthermore, we can show weak convergence of the discrete-world Markov chain tree algorithm to the process with these root growth with regrafting dynamics.

It does seem slightly counter-intuitive that a process defined on the whole of the discrete tree converges to a process defined through subtrees. Evans remarks in the introduction to the chapter that this is a consequence of having limits described as compact real trees. Then in the limit almost all vertices are close to leaves, so in a Hausdorff sense, considering only \sqrt{n} of the vertices (ie a subtree) doesn’t really make any difference after rescaling edge lengths. I feel I don’t understand exactly why it’s ok to take the limits in this order, but I can see why this might work after more checking.

Tomorrow, we will have our last session, probably discussing subtree prune-and-regraft, where the regrafting does not necessarily happen at the root.

RMM 2015 – UK Team Blog Part Two

Saturday 28th February

There is much to squeeze into the programme, and so the second paper starts an hour earlier. James and I spend the day based at Tudor Vianu, the specialist maths and computing school that has hosted this competition since its first incarnation in 2008. Even coming from schools which regularly send students to such international competitions, we find it remarkable to see how much explicit emphasis they place on academic excellence here. Where British observers might expect lists of prefects and photos of glorious football teams, here instead we see students posing with medals and Romanian flags from contests around the world.

Our first task is to coordinate yesterday’s problems. Qs 1 and 3 are agreed extremely rapidly, with the problem captains very complimentary of the UK boys’ number theory solutions. Q2 has all the ingredients to suggest a long slog, but coordinators Lucian Turea and Radu Gologan have clearly thought very carefully about the UK scripts. Everything they say is sensible and easy to make consistent, so we are finished and happy comfortably within our 20 minute slot.

James and I also have to supervise the Romanian scripts for Qs 2 and 3 as these are British submissions. My schoolboy Latin helps a little bit, and we eventually agree on how to pair up comments in the solutions with points on the markscheme just in time to meet our students as they finish. Joe is full of excitement, having completed all three in the closing moments, while others are disappointed, having been slightly thrown by the geometry, but overall spirits are high.

The team are squirrelled off by the guides, and I have the afternoon to engage with the Q5 scripts. We have five solutions, and all are fine, but might not appear fine to a casual observer. Liam has opted for the Jackson Pollock approach to truth, where statements of various levels of interest and veracity are independently sprayed freely across three pages, though after a while I am convinced that every line does follow from something somewhere else on the page.

While working, I realise that having a ground floor room in an Eastern European hostel has its drawbacks. That said, opening the window gives the chance for an experiment to determine exactly which genre of music is found least appealing by lingering smokers. Enescu’s sonate dans le caractere populaire roumain proves successful, despite its local heritage.

I have a better idea what’s going on in all our students’ arguments in time to venture down to the slightly baffling Bucharest metro towards the farewell dinner, which retains its name despite not falling on the final evening. The students are deliberately separated from the leaders, but no attempt is made to enforce this and everyone mingles freely. This year the Chinese team comes from the Shanghai area, and their leader teaches at their high school. He has recently spent a term in Reading and, together with Warren and Harvey, we have a highly enthusiastic conversation about differences in education systems between our countries.

The UK students’ table seems to have been chosen for especially ponderous service, but the 30 seconds they are given between their desserts arriving and the bus arriving proves sufficient. I feel judged when I arrive back at my room and find the Hungarian deputy leader still working on his problematic geometry, so make sure to have at least a nominal further glance at our Q5s before setting an early alarm.

Sunday 1st March

The only people in Victory Square at 6.30am are stray dogs, and stray leaders heading to the school to prepare for their coordinations. I’m apprehensive about being asked to go through Harry’s solution to Q5 line-by-line, but though the effort to understand everything felt purposeful, it wasn’t necessary, as we get what we request almost immediately. Exactly the same thing happens on the other second day questions, so James and I are kicking our heels by 9.30, and the possibility of a return to bed feels very inviting while we wait for the other countries’ scores to clear.

Joe writes: Meanwhile, we are at the mysterious ‘Hostel X’, in order to visit an ‘escape room’. This was not actually as dubious as it first sounds. As Dominic explained, we were to be locked in a room for an hour during which we would have to solve a number of puzzles in order to escape. [DY: think of The Crystal Maze but with less leopard-print.] This turns out to be extremely enjoyable, as we gradually discover the collective significance of some masks, a chessboard and a couple of UV torches, but also quite difficult. Sam, Harvey, Andrei and I manage to escape with barely five minutes to spare but the others do not quite finish, although they assure us that their room was by far the more difficult of the two…

We meet the students, who are discussing Morse decoding and similar things with great enthusiasm. En route home, James thinks that we have been too hasty to accept a flaw in Joe’s solution to Q6, since it suddenly dawns that it could be fixed with the addition of a single \ge sign. Our original coordinators have gone home, but chief coordinator Mihai Baluna graciously takes a second look, and agrees with our re-assessment, so James’ defibrillator can go back in the box.

The bronze and silver medal boundaries have settled naturally, and after brief discussion the jury decides to round up the number of golds to ten, which is surely the right decision. This leaves the UK with three honourable mentions, two silvers, and a gold. Though some of our students might be disappointed to lie just below a boundary, they all recognise that this contest features challenging problems and experienced contestants, many from countries with far more strenuous training programmes than ours. By any measure, this is a fantastic team performance, and James and I are very proud of them.

The closing ceremony is held in the atrium of Vianu school, and after an encouraging speech from the headteacher, the medals are awarded fairly swiftly. Joe reports that the hardest aspect of winning a gold at RMM is the necessity to smile on stage continuously for three minutes. Russia is announced as the winner of the team competition, with a very impressive set of performances, closely followed by the USA.

With the entire evening clear, the UK and USA teams head to Piata Romana to celebrate each other’s successes. The Romanian guides and the UK leadership have slightly different views about what constitutes an appropriate venue for this, but in the end everyone is entirely happy to gather in the common area at James’ hotel. This has been an excellent competition, and it is wonderful to see students, guides and leaders from all teams finding so much in common and much to learn from one another.

Monday 2nd March

My roommate departs for a train to Budapest at 4am, and the accommodation staff are enthusiastically dismantling the bunkbeds in the adjacent room at 6am, so it is fair to say I might have slept better. Two cars take us out to the airport, precisely one of which thinks we are having a race through the rush hour traffic. Suffice it to say, I would probably like to cycle round the Arcul de Triumf even less than its Parisian counterpart.

The students’ recently acquired metalwork doesn’t quite take us over the baggage weight limit. The Wizzair boarding procedure leaves a little to be desired, but the party looks keen for little except sleep, so it makes no difference to do this sparsely. As ever, the arrivals barriers at Luton only just manage to hold back the legions of adoring fans. Goodbyes are exchanged before we head our separate ways, though we will meet again for more worthwhile mathematics in just over three weeks at our next training camp in Cambridge.