RMM 2017 – Problems 1, 4 and 5

I’ve recently taken a UK team to the 2017 edition of the Romanian Master of Mathematics competition in Bucharest. The British students did extremely well and we all enjoyed ourselves mathematically and generally. The customary diary may appear shortly, but this time I want to focus mainly on the questions, since that is after all the main point of these competitions! I hope that what follows is interesting, and slightly educational to potential future students.

I’ve split this into two posts based on my opinion on difficulty, which is subjective but probably correlates fairly positively with most people’s. The account of Q1 is guest-written by two British students, based on their solutions during the competition.

Problem 1

a) Prove that every positive integer n can be written uniquely in the form

$n = \sum_{j=1}^{2k+1} (-1)^{j-1} 2^{m_j},$

where $k\geq 0$ and $0 \leq m_1 < m_2 < \cdots < m_{2k+1}$ are integers. This number k is called the weight of n.

b) Find (in closed form) the difference between the number of positive integers at most $2^{2017}$ with even weight and the number of positive integers at most $2^{2017}$ with odd weight.

Rosie Cates and Neel Nanda:

a) We are trying to express n in terms of powers of 2, so it seems sensible to write n in binary. As $2^{m_1}$ is the smallest power of 2, this term is responsible for the last 1 in the binary representation of n. Let $x = n - 2^{m_1}$ (ie n with the last 1 removed from its binary expansion). Now if we pair up terms in the sum to get

$x = (2^{m_{2k+1}} - 2^{m_{2k}}) + \ldots + (2^{m_3} - 2^{m_2}),$

we can see that each bracket looks like 11…100…0 when written in binary. Also, the condition that $m_i < m_{i+1}$ is equivalent to ensuring that we do not break any strings of consecutive 1s that were in the binary expansion of x (so for example 111110 = 110000 + 1110 is not allowed). So writing x in the desired form is the same as writing it as the sum of numbers of the form 11…100…0 without breaking any strings of 1s. For example

1110100110 = 1110000000 + 100000 + 110.

Clearly there is exactly one way of doing this for every x, so (as each n has exactly one x) there is exactly one way to do it for each n as well.

This approach allows k to be understood differently. Write n in binary and remove the last 1; now count the number of groups of consecutive 1s. This is equal to k.

b) The second half of the problem becomes a lot simpler with the observation that $n\leq 2^{m_{2k+1}}$, as

$n=2^{m_{2k+1}}-(2^{m_{2k}}-2^{m_{2k-1}})-\ldots-(2^{m_2}-2^{m_1}),$

and the sequence $m_n$ is increasing, so each bracket is positive. As each sequence of $(m_n)$s corresponds uniquely to an integer, this means we just want to count sequences of $(m_n)$s with greatest term at most 2017. The sequence is increasing, so each sequence corresponds to a subset of {0, 1, …, 2017} of size (2k+1). There are $\binom{2018}{2k+1}$ subsets of size (2k+1), so the question reduces to finding a closed form for $\sum_{k=0}^{1008} (-1)^k {{2018}\choose{2k+1}}$.

This is reminiscent of a classic problem in combinatorics: using the binomial theorem to evaluate sums of binomial coefficients weighted by powers. The best example is

$\sum_{k=0}^n (-1)^k \binom{n}{k} =(1-1)^n=0,$

but here rather than (-1) we want something whose square is $(-1)$, so we consider the complex number i. Using the same ideas, we get that

$\sum_{r=0}^{2018} i^r \binom{2018}{r}=(1+i)^{2018},$

which contains what we want, but also binomial coefficients with even r. But if r is even, $i^r$ is real, and if r is odd, $i^r$ is imaginary. So the sum we want appears as the imaginary part, that is

$\mathrm{Im}\left((1+i)^{2018}\right)=\mathrm{Im}\left((\sqrt{2} \cdot e^{\frac{i\pi}{4}})^{2018}\right)=2^{1009}.$

Dominic: note that in both parts, the respective authors find slightly more than what they were required to. That is, respectively, the interpretation of k, and a bound on $m_{2k+1}$. The latter is an excellent example of the general notion that sometimes it is better to use a stronger statement than what you actually require in an induction argument (here for existence). The stronger statement (which you guess from playing with examples) makes the inductive step easier, as it’s then clear that the new term you get is distinct from the terms you already have.
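Neither part needs a computer, but both claims are easy to sanity-check with a short script. The following is my own sketch, not part of the contest solution: the weight is computed via the characterisation above (remove the last 1 from the binary expansion, then count blocks of consecutive 1s), and the even-minus-odd difference up to $2^E$ is compared against $\mathrm{Im}\left((1+i)^{E+1}\right)$ for small E.

```python
def weight(n):
    """Weight of n: blocks of consecutive 1s in binary(n) after removing the last 1."""
    x = n & (n - 1)                       # clear the lowest set bit
    b = bin(x)[2:] if x else ""
    return sum(1 for i, c in enumerate(b) if c == "1" and (i == 0 or b[i - 1] == "0"))

def weight_difference(E):
    """(# of n <= 2^E with even weight) minus (# with odd weight)."""
    return sum(1 if weight(n) % 2 == 0 else -1 for n in range(1, 2 ** E + 1))

# compare the brute-force count against Im((1+i)^(E+1)) for small E
for E in range(1, 12):
    assert weight_difference(E) == round(((1 + 1j) ** (E + 1)).imag)
```

For E = 2017 the closed form gives $2^{1009}$, matching the solution above; note the difference is not always positive (e.g. E = 4 gives -4).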

Problem 4

In the Cartesian plane, let $\mathcal G_1, \mathcal G_2$ be the graphs of the quadratic functions $f_1(x) = p_1x^2 + q_1x + r_1, f_2(x) = p_2x^2 + q_2x + r_2$, where $p_1 > 0 > p_2$. The graphs $\mathcal G_1, \mathcal G_2$ cross at distinct points A and B. The four tangents to $\mathcal G_1, \mathcal G_2$ at A and B form a convex quadrilateral which has an inscribed circle. Prove that the graphs $\mathcal{G}_1$ and $\mathcal{G}_2$ have the same axis of symmetry.

This question is quite unusual for an olympiad of this kind, and I was initially skeptical, but then it grew on me. Ultimately, I was unsurprised that many contestants attacked entirely with coordinate calculations. If you use this strategy, you will definitely get there in the end, but you have to accept that you aren’t allowed to make any mistakes. And because of the amount of symmetry in the configuration, even if you make a mistake, you might still get the required answer, and so not notice that you’ve made a mistake. But I decided I liked it because various levels of geometric insight either reduced or removed the nastier calculations.

Typically, one could gain geometric insight by carefully observing an accurate diagram, but an accurate parabola is hard to draw. However, even from a vague diagram, we might guess the key intermediate property of the configuration, which is that the line joining the other two points in the quadrilateral is parallel to the y-axis. This means that they have the same x-coordinate, and indeed this x-coordinate must in fact be the same for any parabola through A and B, so it is reasonable to guess that it is $\frac{x_A+x_B}{2}$, the mean of the x-coordinates of A and B.
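This guessed property is easy to check symbolically or numerically. The following sketch (with arbitrary made-up coefficients) verifies that the tangents to $y=px^2+qx+r$ at $x=a$ and $x=b$ meet at $x=\frac{a+b}{2}$, independently of p, q, r:

```python
from fractions import Fraction as F

def tangent(p, q, r, a):
    """(slope, intercept) of the tangent to y = p x^2 + q x + r at x = a."""
    slope = 2 * p * a + q
    intercept = (p * a * a + q * a + r) - slope * a
    return slope, intercept

def intersection_x(line1, line2):
    # x-coordinate where two non-parallel lines (slope, intercept) meet
    (m1, c1), (m2, c2) = line1, line2
    return (c2 - c1) / (m1 - m2)

p, q, r = F(3), F(-2), F(5)     # made-up coefficients for illustration
a, b = F(-1), F(4)              # the x-coordinates of A and B
x = intersection_x(tangent(p, q, r, a), tangent(p, q, r, b))
assert x == (a + b) / 2         # the tangents meet on the line x = (x_A + x_B)/2
```

Since the answer does not involve p, q or r, the same x works for both parabolas through A and B, which is exactly the claimed property of the other diagonal.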

Since you know this is the goal, it’s not too bad to calculate the equations of the tangent lines directly, and demonstrate this algebraically. But I was determined to use the focus-directrix definition of a parabola. Either recall, or digest the interesting new fact that a parabola may be defined as the locus of points which are the same distance from a fixed point P (the focus), and a fixed line $\ell$ (the directrix). Naturally, the distance to the line is perpendicular distance.

To ensure the form given in the statement where y is a quadratic function of x, in this setting the directrix should be parallel to the x-axis. To define the tangent to the parabola at A, let A’ be the foot of the perpendicular from A onto $\ell$, so AA’=PA. I claim that the tangent at A is given by the perpendicular bisector of A’P. Certainly this passes through A, and it is easy to convince yourself that it can’t pass through any other point B on the parabola, since BA’ > PB, as A’ is on $\ell$ but is not the foot of the perpendicular from B to $\ell$. This final observation is truly a lot more obvious if you’re looking at a diagram.

We now want to finish geometrically too. In our quadrilateral, one diagonal is parallel to the y-axis, and it will suffice to show that the existence of an incircle implies that A and B must have the same y-coordinate. We have just shown A and B are the same (horizontal) distance from the other diagonal. So certainly if they have the same y-coordinate, then the quadrilateral is a kite, and the sums of opposite sides are equal, which is equivalent to the existence of an incircle. One could then finish by arguing that this ceases to be true if you move one of A and B in either direction, or by some short explicit calculation if such a perturbation argument leaves you ill at ease.

Problem 5

Fix an integer $n \geq 2$. An n x n sieve is an n x n array with n cells removed so that exactly one cell is removed from every row and every column. A stick is a 1 x k or k x 1 array for any positive integer k. For any sieve A, let m(A) be the minimal number of sticks required to partition A. Find all possible values of m(A), as A varies over all possible n x n sieves.

This is a fairly classic competition problem, and while in my opinion the statement isn’t particularly fascinating, it’s interesting that it admits such a wide range of approaches.

As ever, you need to start by playing around with the setup, and guessing that the answer is 2n-2, and not thinking ‘it can’t possibly be the same answer as Q3?’. Then think about reasons why you couldn’t do better than 2n-2. My very vague reason was that if you only use horizontal sticks, the answer is clearly 2n-2, and the same if you only use vertical sticks. But it feels like you can only make life harder for yourself if you try to use both directions of sticks in lots of places. Note that some sort of argument involving averaging over stick lengths is definitely doomed to fail unless it takes into account that the holes form a permutation, with exactly one in each row and each column! For example, if you were allowed to put all the holes in the first row, m(A) would be n-1.
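The horizontal-sticks-only observation can be made concrete: since the holes form a permutation, exactly one hole lies in the first column and exactly one in the last, so splitting each row on either side of its hole always uses exactly 2n-2 sticks. A quick sketch of my own, with a random sieve:

```python
import random

def horizontal_partition_size(holes):
    """Number of horizontal sticks needed when row i has its hole in column holes[i]."""
    n = len(holes)
    # a hole in the first or last column leaves one stick in that row, otherwise two
    return sum(1 if c in (0, n - 1) else 2 for c in holes)

for n in range(2, 10):
    perm = list(range(n))
    random.shuffle(perm)         # a random sieve: one hole per row and per column
    assert horizontal_partition_size(perm) == 2 * n - 2
```

The count is 2n minus the number of border-column holes, and a permutation has exactly one hole in column 0 and one in column n-1, hence 2n-2 regardless of the sieve.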

Induction is tempting. That is, you remove some number of sticks, probably those corresponding to a given hole, to reduce the board to an (n-1)x(n-1) configuration. If you do this, you need to be clear about why you can remove what you want to remove (in particular, the number of sticks you want to remove), and whether it’s qualitatively different if the hole in question lies on the border of the board. In all of these settings, you want to be careful about 1×1 sticks, which it’s easy inadvertently to count as both horizontal and vertical. This is unlikely to affect the validity of any argument (just picking either an arbitrary or a canonical direction if it’s 1×1 should be fine) but does make it much harder to check the validity.

Joe exhibited directly a construction of 2n-2 cells which must be covered by different sticks. This approach lives or dies by the quality of the written argument. It must look general, even though any diagram you draw must, almost by definition, correspond to some particular case. Alternatively, since the problem is set on a grid, the cells correspond naturally to edges of a bipartite graph, where classes correspond to rows and columns. The holes form a perfect matching on this bipartite graph. But, as Harvey observed, if you split the rows and columns in two, on either side of the relevant hole (or not in the 2+2 cases where the hole is at the border), you have a (2n-2)+(2n-2) bipartite graph, and a perfect matching here corresponds to a set of cells which must be covered by different sticks. This is an ingenious idea, and if you’ve recently met Hall’s Marriage Theorem, which gives a verifiable criterion for the existence of such a perfect matching, there are few better uses of your next ten minutes than to check whether Hall’s condition a) should hold; b) can be proven to hold in this setting.

Doob inequalities and Doob-Meyer decomposition

The first post I wrote on this blog was about martingales, way back in 2012 at a time when I had known what a martingale was for about a month. I now don’t have this excuse. So I’m going to write about a couple of properties of (discrete-time) martingales that came up while adjusting a proof which my thesis examiners suggested could be made much shorter as part of their corrections.

Doob’s submartingale inequality

When we prove that some sequence of processes converges to some other process, we typically want to show that this holds in some sense uniformly over a time-interval, rather than just at some fixed time. We don’t lose much at this level of vagueness by taking the limit process to be identically zero. Then, if the convergent processes are martingales or closely similar, we want to be able to bound $\sup_{k\le n} |Z_k|$ in some sense.

Doob’s submartingale inequality allows us to do this. Recall that a submartingale has almost-surely non-negative conditional increments. You might think of it heuristically as ‘more increasing than a martingale’. If $Z_n$ is a martingale, then $|Z_n|$ is a submartingale. This will be useful almost immediately.

The statement is that for $(Z_n)$ a non-negative submartingale,

$\mathbb{P}\left( \sup_{k\le n} Z_k \ge \lambda\right) \le \frac{\mathbb{E}\left[Z_n\right]}{\lambda}.$

The similarity of the statement to the statement of Markov’s inequality is no accident. Indeed the proof is very similar. We consider whether the event in question happens, and find lower bounds on the expectation of $Z_n$ under both possibilities.

Formally, for ease of notation, let $Z_n^*$ be the running maximum $\sup_{k\le n}Z_k$. Then, we let $T:= n\wedge \inf\{k\le n\,:\, Z_k\ge \lambda\}$ and apply the optional stopping theorem for submartingales at T, which is by construction at most n. That is

$\mathbb{E}[Z_n]\ge \mathbb{E}[Z_T]=\mathbb{E}\left[Z_T\mathbf{1}_{Z_n^*<\lambda}\right] + \mathbb{E}\left[Z_T \mathbf{1}_{Z_n^*\ge \lambda}\right].$

The first of these summands is non-negative, and the second is at least $\lambda \mathbb{P}\left( Z_n^* \ge \lambda \right)$, from which the result follows.
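As an illustration (a Monte Carlo simulation, not a proof), take $Z_k=|S_k|$ for a simple random walk $S$, which is a non-negative submartingale, and compare both sides of the inequality:

```python
import random

def simulate(n, lam, trials=20000, seed=1):
    """Estimate P(max_{k<=n} |S_k| >= lam) and E|S_n| / lam for a simple random walk."""
    random.seed(seed)
    exceed, total_zn = 0, 0.0
    for _ in range(trials):
        s, running_max = 0, 0
        for _ in range(n):
            s += random.choice((-1, 1))
            running_max = max(running_max, abs(s))
        exceed += running_max >= lam
        total_zn += abs(s)
    return exceed / trials, (total_zn / trials) / lam

lhs, rhs = simulate(n=100, lam=25)
assert lhs <= rhs   # Doob: P(sup Z_k >= lam) <= E[Z_n]/lam, here with a wide margin
```

For these (arbitrary) parameters the bound is far from tight, which is typical: the inequality trades sharpness for complete generality.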

We’ve already said that for any martingale $Z_n$, $|Z_n|$ is a submartingale, but in fact $f(Z_n)$ is a submartingale whenever f is convex, and $\mathbb{E}|f(Z_n)|<\infty$ for each n. Naturally, this continues to hold when $Z_n$ is itself a submartingale.

[Note that $Z_n^*$ is also a submartingale, but this probably isn’t as interesting.]

A particularly relevant such function f is $f(x)=x^p$, for p>1. If we take $Z_n$ a non-negative submartingale which is uniformly bounded in $L^p$, then by applying Hölder’s inequality and this submartingale inequality, we obtain

$\mathbb{E}\left( \sup_{k\le n}Z_k^p \right) \le \left(\frac{p}{p-1}\right)^p \mathbb{E}\left[ Z_n^p \right].$

Since $Z_n^p$ is a submartingale, the RHS is monotone in n, and certainly the LHS is monotone in n, so we can take limits to extend this to

$\mathbb{E}\left( \sup_{k}Z_k^p \right) \le \left(\frac{p}{p-1}\right)^p \mathbb{E}\left[ Z_\infty^p \right].$

Initially, we have to define $\mathbb{E}\left[ Z_\infty^p \right]$ through this limit, but in fact this result, Doob’s $L^p$ inequality, shows that $Z_\infty:= \lim Z_n$ exists almost surely as well.

Naturally, we will often apply this in the case p=2, and in the third of these three sections, we will see why it might be particularly straightforward to calculate $\mathbb{E}\left[Z_\infty^2\right].$

Remark: as in the case of Markov’s inequality, it’s hard to say much if the submartingale is not taken to be non-negative. Indeed, this effect can be seen even if the process is only defined for a single time step, for which the statement really is then Markov’s inequality.

Doob-Meyer decomposition

Unfortunately, most processes are not martingales. Given a discrete-time process $X_n$ adapted to $\mathcal{F}=(\mathcal{F}_n)$, it is a martingale if the conditional expectations of the increments are all almost surely zero. But given a general adapted process $X_n$ which is integrable (so the increments have well-defined finite expectation), we can iteratively construct a new process $M_n$, whose increments are centred versions of $X_n$‘s increments. That is,

$M_{n+1}-M_n:= X_{n+1}-X_n - \mathbb{E}\left[ X_{n+1}-X_n \,\big|\, \mathcal{F}_n\right] = X_{n+1}-\mathbb{E}\left[X_{n+1} \,\big|\, \mathcal{F}_n\right].$ (*)

Then it’s immediately clear from the definition that $M_n$ is a martingale.

There’s a temptation to tie oneself up in knots with the dependence. We might have that increments of the original process $X_n$ depend on the current value of the process. And is it necessarily clear that we can recover the current value of the original process from the current value of $M_n$? Well, this is why we demand that everything be adapted, rather than just Markov. It’s not the case that $M_n$ should be Markov, but it clearly is adapted.

Now we look at the middle expression in (*), and in particular the term we are subtracting, namely the conditional expectation. If we define, in the standard terminology, $A_0=0$ and

$A_{n+1}-A_n:= \mathbb{E}\left[ X_{n+1}-X_n \,\big|\, \mathcal{F}_n\right],$

then we have decomposed the original process $X_n$ as the sum of a martingale $M_n$, and this new process $A_n$. In particular, note that the increment $A_{n+1}-A_n$ given above is $\mathcal{F}_n$-measurable, which is a stronger condition than the $\mathcal{F}_{n+1}$-measurability we would expect a priori. This property of the process $(A_n)$ is called predictability (or possibly previsibility).

This decomposition $X_n=X_0+M_n+A_n$ as just defined is called the Doob-Meyer decomposition, and there is a unique such decomposition where $M_n$ is a martingale, and $A_n$ is predictable. The proof of uniqueness is very straightforward. We look at the equalities given above as definitions of $M_n,A_n$, but then work in the opposite direction to show that they must hold if the decomposition holds.
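A minimal worked example (my own illustration, not from any particular source): for $X_n=S_n^2$ with $S$ a simple random walk, the conditional increment of $X$ is identically 1, so $A_n=n$ and $M_n=S_n^2-n$ is the martingale part. The martingale property can be checked by exact enumeration over all histories:

```python
from itertools import product

N = 6
for steps in product((-1, 1), repeat=N - 1):        # every possible history up to time N-1
    s = sum(steps)                                  # S_{N-1}
    m_now = s * s - (N - 1)                         # M_{N-1} = S_{N-1}^2 - (N-1)
    # E[M_N | F_{N-1}]: average over the two equally likely next steps.
    # The conditional increment of X_n = S_n^2 is ((s+1)^2 + (s-1)^2)/2 - s^2 = 1,
    # which is exactly the increment A_N - A_{N-1} of the compensator A_n = n.
    m_next = sum((s + e) ** 2 - N for e in (-1, 1)) / 2
    assert m_next == m_now
```

Here the drift is deterministic, which is special; in general $A_{n+1}-A_n$ is random but known at time n.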

I feel a final heuristic is worthwhile, using the term drift, more normally encountered in the continuous-time setting to describe infinitesimal expected increments. The increments of $A_n$ represent the drift of $X_n$, and the increments of $M_n$ are what remains from $X_n$ after subtracting the drift. In general, the process to be subtracted to turn a non-martingale into a martingale is called a compensator, and the existence or otherwise of such processes is important but challenging for some classes of continuous-time processes.

In particular, note that when $X_n$ is itself a martingale, then $A_n\equiv 0$. However, probably the most useful case is when $X_n$ is a submartingale, as then the drift is always non-negative, and so $A_n$ is almost surely increasing. The converse holds too.

This is relevant because the Doob-Meyer decomposition is only a useful tool for treating $X_n$ if we can handle the two processes $M_n,A_n$ easily. We have tools to bound the martingale term, but the previsible term might in general be tricky. So the case where $X_n$ is a submartingale is good: increasing processes are much easier to handle than general processes, since in many contexts bounding the whole process involves only bounding the final term.

A particularly relevant example is the square of a martingale, that is $X_n=M_n^2$, where $M_n$ is a martingale. By the convexity condition discussed earlier, $X_n$ is a submartingale (provided it is integrable, ie $M_n$ is square-integrable), and so the process $A_n$ in its Doob-Meyer decomposition is increasing. This is often called the (predictable) quadratic variation of the martingale $(M_n)$.

This predictable quadratic variation is sometimes denoted $\langle M\rangle_n$. This differs from the (regular) quadratic variation, which is defined as the sum of the squares of the increments, that is $[M]_n:= \sum_{k=0}^{n-1} (M_{k+1}-M_k)^2$. Note that this is adapted, but obviously not previsible. The distinction between these two processes is more important in continuous time. There, they are almost surely equal for a continuous local martingale, but not for eg a Poisson process. (For a Poisson process, the predictable quadratic variation is deterministic, indeed linear, while the regular quadratic variation is almost surely equal to the Poisson process itself.) In the discrete-time setting, the regular quadratic variation is not relevant very often, while the predictable quadratic variation is useful, precisely because of this decomposition.

Whenever we have random variables which we then centre, there is a standard trick to apply when treating their variance. That is

$A_{n+1}-A_n= \mathbb{E}\left[ M^2_{n+1}-M^2_n \,\big|\, \mathcal{F}_n\right]$
$= \mathbb{E}\left[ M^2_{n+1}\,\big|\, \mathcal{F}_n\right] - 2M_n^2 +M_n^2$
$= \mathbb{E}\left[ M^2_{n+1}\,\big|\, \mathcal{F}_n\right] - 2M_n \mathbb{E}\left[ M_{n+1}\,\big|\, \mathcal{F}_n\right] + M_n^2$
$= \mathbb{E}\left[ \left(M_{n+1}-M_n\right)^2\,\big|\, \mathcal{F}_n\right].$

One consequence is seen by taking an ‘overall’ expectation. Because $M_n^2-A_n$ is a martingale,

$\mathbb{E}\left[M_n^2\right] = \mathbb{E}\left[M_0^2\right] + \mathbb{E}\left[A_n\right] = \mathbb{E}\left[M_0^2\right] + \sum_{k=0}^{n-1} \mathbb{E}\left[A_{k+1}-A_k\right]$
$= \mathbb{E}\left[ M_0^2\right] + \sum_{k=0}^{n-1}\mathbb{E}\left[ \left(M_{k+1}-M_k\right)^2 \right].$ (**)

This additive (Pythagorean) property of the square of a martingale is useful in applications where there is reasonably good control on each increment separately.

We can also see this final property without the Doob-Meyer decomposition. For a martingale it is not the case that the increments on disjoint intervals are independent. However, following Williams 12.1 [1], increments over disjoint intervals are orthogonal, in the sense that

$\mathbb{E}\left[(M_t-M_s)(M_v-M_u)\right]=0,$

whenever $s\le t\le u\le v$. Then, when we square the expression $M_n=M_0+\sum_{k=0}^{n-1}(M_{k+1}-M_k)$, and take expectations, all the cross terms vanish, leaving precisely (**).

References

[1] Williams – Probability with Martingales

I also followed the notes I made in 2011/12 while attending Perla Sousi’s course on Advanced Probability, and Arnab Sen’s subsequent course on Stochastic Calculus, though I can’t find any evidence online for the latter now.

Antichains in the grid

In the previous post on this topic, we discussed Dilworth’s theorem on chains and antichains in a general partially ordered set. In particular, whatever the size of the largest antichain in a poset, it is possible to partition the poset into exactly that many chains. So for various specific posets, or the directed acyclic graphs associated to them, we are interested in the size of this largest antichain.

The following example turned out to be more interesting than I’d expected. At a conventional modern maths olympiad, there are typically three questions on each paper, and for reasons lost in the mists of time, each student receives an integer score between 0 and 7 per question. A natural question to ask is “how many students need to sit a paper before it’s guaranteed that one will score at least as highly as another on every question?” (I’m posing this as a straight combinatorial problem – the correlation between scores on different questions will be non-zero and presumably positive, but that is not relevant here.)

The set of outcomes is clearly $\{0,1,\ldots,7\}^3$, with the usual weak domination partial order inherited from $\mathbb{R}^3$. Then an antichain corresponds to a set of triples of scores such that no triple dominates another triple. So the answer to the question posed is: “the size of the largest antichain in this poset, plus one.”

In general, we might ask about $\{1,2,\ldots,n\}^d$, again with the weak domination ordering. This directed graph, which generalises the hypercube as well as our example, is called the grid.

Heuristics for the largest antichain

Retaining the language of test scores on multiple questions is helpful. In the previous post, we constructed a partition of the poset into antichains, indexed by the elements of some maximal chain, by starting with the sources, then looking at everything descended only from sources, and so on. (Recall that the statement that this is possible was referred to as the dual of Dilworth’s theorem.) In the grid, there’s a lot of symmetry (in particular under the mapping $x\mapsto n+1-x$ in every coordinate), and so you end up with the same family of antichains whether you work upwards from the sources or downwards from the sinks. (Or vice versa depending on how you’ve oriented your diagram…) The layers of antichains also have a natural interpretation – each layer corresponds to a given total score. It’s clear a priori why each of these is an antichain. If A scores the same as B overall, but strictly more on the first question, this must be counterbalanced by a strictly lower score on another question.

So a natural guess for the largest antichain is the largest antichain corresponding to some fixed total score. Which total score should this be? It ought to be the middle layer, that is total score $\frac{(n+1)d}{2}$, or the two values directly on either side if this isn’t an integer. My intuition was probabilistic. The uniform distribution on the grid is achieved by IID uniform distributions in each coordinate, which you can think of as a random walk, especially if you subtract off the mean first. It feels that any symmetric random walk should have mode zero or next-to-zero. Certainly this works asymptotically in a rescaled sense by CLT, and in a slightly stronger sense by local CLT, but we don’t really want asymptotics here.

When I started writing the previous paragraph, I assumed there would be a simple justification for the claim that the middle layer(s) was largest, whether by straight enumeration, or some combinatorial argument, or even generating functions. Perhaps there is, and I didn’t spot it. Induction on d definitely works though, with a slightly stronger hypothesis that the layer sizes are symmetric around the median, and monotone on either side of the median. The details are simple and not especially interesting, so I won’t go into them.
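For what it’s worth, the symmetry and unimodality of the layer sizes can at least be confirmed by brute force in small cases (a sketch, checking all small n and d):

```python
from itertools import product
from collections import Counter

def layer_sizes(n, d):
    """Sizes of the total-score layers of the grid {1,...,n}^d."""
    counts = Counter(sum(v) for v in product(range(1, n + 1), repeat=d))
    return [counts[s] for s in range(d, n * d + 1)]

for n in range(2, 6):
    for d in range(1, 5):
        sizes = layer_sizes(n, d)
        assert sizes == sizes[::-1]          # symmetric about the middle
        half = (len(sizes) + 1) // 2
        # monotone non-decreasing up to the middle (hence non-increasing after, by symmetry)
        assert all(a <= b for a, b in zip(sizes, sizes[1:half]))
```

This is only evidence, of course; the induction on d mentioned above is what actually proves it.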

From now on, the hypothesis is that this middle layer of the grid is the largest antichain. Why shouldn’t it, for example, be some mixture of middle-ish layers? (*) Well, heuristically, any score sequence in one layer removes several possibilities from a directly adjacent layer, and it seems unlikely that this effect is going to cancel out if you take some intermediate number of score sequences in the first layer. Also, the layers get smaller as you go away from the middle, so because of the large amount of symmetry (coordinates are exchangeable etc), it feels reasonable that there should be surjections between layers in the outward direction from the middle. The union of all these surjections gives a decomposition into chains.

This result is in fact true, and its proof by Bollobas and Leader, using shadows and compression, can be found in the very readable Sections 0 and 1 of [1].

Most of the key ideas to a compression argument are present in the case n=2, for which some notes by Leader can be found here, starting with Proof 1 of Theorem 3, the approach of which is developed over subsequent sections. We treat the case n=2, but focusing on a particularly slick approach that does not generalise as successfully. We also return to the original case d=3 without using anything especially exotic.

Largest antichain in the hypercube – Sperner’s Theorem

The hypercube $\{0,1\}^d$ is the classical example. There is a natural correspondence between the vertices of the hypercube, and subsets of $[d]$. The ordering on the hypercube corresponds to the ordering given by containment on $\mathcal{P}([d])$. Almost by definition, the k-th layer corresponds to subsets of size k, and thus includes $\binom{d}{k}$ subsets. The claim is that the size of the largest antichain is $\binom{d}{\lfloor d/2 \rfloor}$, corresponding to the middle layer if d is even, and one of the two middle layers if d is odd. This result is true, and is called Sperner’s theorem.

I know a few proofs of this from the Combinatorics course I attended in my final year at Cambridge. As explained, I’m mostly going to ignore the arguments using compression and shadows, even though these generalise better.

As in the previous post, one approach is to exhibit a covering family of exactly this number of disjoint chains. Indeed, this can be done layer by layer, working outwards from the middle layer(s). The tool here is Hall’s Marriage Theorem, and we verify the relevant condition by double-counting. Probably the hardest case is demonstrating the existence of a matching between the middle pair of layers when d is odd.

Take d odd, and let $d':= \lfloor d/2\rfloor$. Now consider any subset S of the d’-th layer $\binom{[d]}{d'}$. We now let the upper shadow of S be

$\partial^+(S):= \{A\in \binom{[d]}{d'+1}\,:\, \exists B\in S, B\subset A\},$

the sets in the (d’+1)-th layer which lie above some set in S. To apply Hall’s Marriage theorem, we have to show that $|\partial^+(S)|\ge |S|$ for all choices of S.

We double-count the number of edges in the hypercube from $S$ to $\partial^+(S)$. Firstly, since $d=2d'+1$, every element $B\in S$ has exactly $d'+1$ edges upwards, all of which land in $\partial^+(S)$. Secondly, every element $A\in\partial^+(S)$ has exactly $d'+1$ edges down to elements of $\binom{[d]}{d'}$, and so in particular there are at most $d'+1$ edges to elements of S. Thus

$(d'+1) |S|=|\text{edges }S\leftrightarrow\partial^+(S)| \le (d'+1) |\partial^+(S)|,$

which is exactly what we require for Hall’s MT. The argument for the matching between other layers is the same, with a bit more notation, but also more flexibility, since it isn’t a perfect matching.

The second proof looks at maximal chains. Recall, in this context, a maximal chain is a sequence $\mathcal{C}=B_0\subset B_1\subset\ldots\subset B_d$ where each $B_k\in \binom{[d]}{k}$. We now consider some largest-possible antichain $\mathcal{A}$, and count how many maximal chains include an element $A\in\mathcal{A}$. If $|A|=k$, it’s easy to convince yourself that there are $k!\,(d-k)!$ such maximal chains: $k!$ choices below A and $(d-k)!$ above. However, given $A\ne A'\in\mathcal{A}$, the set of maximal chains containing A and the set of maximal chains containing A’ are disjoint, since $\mathcal{A}$ is an antichain. As there are $d!$ maximal chains in total, we obtain

$\sum_{A\in\mathcal{A}} |A|!\,(d-|A|)! \le d!.$ (**)

Normally after a change of notation, so that we are counting the size of the intersection of the antichain with each layer, this is called the LYM inequality after Lubell, Yamamoto and Meshalkin. The heuristic is that the sum of the proportions of layers taken up by the antichain is at most one. This is essentially the same as earlier at (*). This argument can also be phrased probabilistically, by choosing a *random* maximal chain, and considering the probability that it intersects the proposed largest antichain, which is, naturally, at most one. Of course, the content is the same as this deterministic combinatorial argument.

Either way, from (**), the statement of Sperner’s theorem follows rapidly, since we know that $\binom{d}{|A|}\le \binom{d}{\lfloor d/2\rfloor}$ for all A.
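In a tiny case, the whole statement can be confirmed by brute force over every family of subsets (feasible only for very small d; here d=4, where the answer should be $\binom{4}{2}=6$):

```python
from itertools import combinations

d = 4
subsets = list(range(1 << d))          # subsets of [d] encoded as bitmasks

def is_antichain(family):
    # no member of the family contains another
    return not any((a & b) == a or (a & b) == b
                   for a, b in combinations(family, 2))

best = 0
for r in range(1 << len(subsets)):     # every family of subsets (2^16 of them)
    family = [s for i, s in enumerate(subsets) if (r >> i) & 1]
    if is_antichain(family):
        best = max(best, len(family))
assert best == 6                       # = C(4, 2), the middle layer, as Sperner predicts
```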

Largest antichain in the general grid

Instead of attempting a proof or even a digest of the argument in the general case, I’ll give a brief outline of why the previous arguments don’t transfer immediately. It’s pretty much the same reason for both approaches. In the hypercube, there is a lot of symmetry within each layer. Indeed, almost by definition, any vertex in the k-th layer can be obtained from any other vertex in the k-th layer just by permuting the labels (or permuting the coordinates if thinking as a vector).

The hypercube ‘looks the same’ from every vertex, but that is not true of the grid. Consider for clarity the n=8, d=3 case we discussed right at the beginning, and compare the scores (7,0,0) and (2,2,3). The number of maximal chains through (7,0,0) is $\binom{14}{7}$, while the number of maximal chains through (2,2,3) is $\binom{7}{2, 2,3}\binom{14}{4,5,5}$, and the latter is a lot larger, which means any attempt to use the second argument is going to be tricky, or at least require an extra layer of detail. Indeed, exactly the same problem arises when we try and use Hall’s condition to construct the optimal chain covering directly. In the double-counting section, it’s a lot more complicated than just multiplying by d’, as was the case in the middle of the hypercube.

Largest antichain in the d=3 grid

We can, however, do the d=3 case. As we will see, the main reason we can do the d=3 case is that the d=2 case is very tractable, and we have lots of choices for the chain coverings, and can choose one which is well-suited to the move to d=3. Indeed, when I set this problem to some students, an explicit listing of a maximal chain covering was the approach some of them went for, and the construction wasn’t too horrible to state.

[Another factor is that it is computationally feasible to calculate the size of the middle layer, which is much more annoying in d>3.]

[I’m redefining the grid here as $\{0,1,\ldots,n-1\}^d$ rather than $\{1,2,\ldots,n\}^d$.]

The case distinction between n even and n odd is going to make both the calculation and the argument annoying, so I’m only going to treat the even case, since n=8 was the original problem posed. I should be honest and confess that I haven’t checked the n odd case, but I assume it’s similar.

So when n is even, there are two middle layers, namely $\frac{3n}{2}-2$ and $\frac{3n}{2}-1$ (corresponding to total score 10 and total score 11 in the original problem). I calculated the number of elements in the $\frac{3n}{2}-1$ layer by splitting based on the value of the first coordinate. I found it helpful to decompose the resulting sum as

$\sum_{k=0}^{n-1} = \sum_{k=0}^{\frac{n}{2}-1} + \sum_{k=\frac{n}{2}}^{n-1},$

based on whether there is an upper bound, or a lower bound on the value taken by the second coordinate. This is not very interesting, and I obtained the answer $\frac{3n^2}{4}$, and of course this is an integer, since n is even.
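The calculation is easy to sanity-check by brute force; the sketch below (function name my own) enumerates the layer directly and compares with $\frac{3n^2}{4}$.

```python
from itertools import product

def middle_layer_size(n):
    """Count points of {0,...,n-1}^3 with coordinate sum 3n/2 - 1 (n even)."""
    target = 3 * n // 2 - 1
    return sum(1 for p in product(range(n), repeat=3) if sum(p) == target)

for n in [2, 4, 6, 8]:
    assert middle_layer_size(n) == 3 * n * n // 4
```

For n=8 this gives 48, the number of triples with total score 11 in the original problem.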

Now to show that any antichain has size at most $\frac{3n^2}{4}$. Here we use our good control on the chain coverings in the case d=2. We note that there is a chain covering of the (n,d=2) grid where the chains have 2n-1, 2n-3,…, 3, 1 elements (%). We get this by starting with a maximal chain, then taking a maximal chain on what remains etc. It’s pretty much the first thing you’re likely to try.
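This greedy covering can be written down explicitly: peel off the bottom row and right column of whatever subgrid remains, giving ‘hooks’ of sizes 2n-1, 2n-3, …, 1. A sketch of the construction (names my own), with checks that the hooks really do partition the grid into chains:

```python
def hook_chains(n):
    """Greedy chain covering of {0,...,n-1}^2: repeatedly peel off a maximal
    'hook' (bottom row plus right column) of the remaining subgrid."""
    chains = []
    for k in range(n):
        hook = [(x, k) for x in range(n - k)] + \
               [(n - 1 - k, y) for y in range(k + 1, n)]
        chains.append(hook)
    return chains

n = 6
chains = hook_chains(n)
# lengths are 2n-1, 2n-3, ..., 3, 1
assert [len(c) for c in chains] == [2 * n - 1 - 2 * k for k in range(n)]
# the hooks partition the grid
assert sorted(p for c in chains for p in c) == \
       sorted((x, y) for x in range(n) for y in range(n))
# each hook is totally ordered coordinatewise, hence a chain
for c in chains:
    assert all(a[0] <= b[0] and a[1] <= b[1] for a, b in zip(c, c[1:]))
```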

Consider an antichain with size A in the (n,d=3) grid, and project onto the second and third coordinates. The image points are distinct, because otherwise a non-trivial pre-image would be a chain. So we have A points in the (n,d=2) grid. How many can lie in each chain of the decomposition (%)? Well, if there are more than n in any chain of (%), then two must have been mapped from elements of the (n,d=3) grid with the same first coordinate, and so satisfy a containment relation. So in fact there are at most n image points in any of the chains of (%). This gives a bound of $n^2$. But of course, some of the chains in (%) have length less than n, so we are throwing away information. Indeed, the number of image points in a given chain is at most

$\min(n,\text{length of chain}),$

and so the number of image points in total is bounded by

$n+\ldots+n+ (n-1)+(n-3)+\ldots+1,$

where there are n/2 copies of n in the first half of the sum. Evaluating this sum gives $\frac{3n^2}{4}$, exactly as we wanted.

References

[1] – Bollobás, Leader (1991) – Compressions and Isoperimetric Inequalities.

BMO2 2017

The second round of the British Mathematical Olympiad was taken yesterday by about 100 invited participants, and about the same number of open entries. To qualify at all for this stage is worth celebrating. For the majority of the contestants, this might be the hardest exam they have ever sat; indeed, relative to current age and experience, it might well be the hardest exam they ever sit. And so I thought it was particularly worth writing about this year’s set of questions. Because, at least in my opinion, the gap between finding every question very intimidating and solving two or three is smaller, and more down to mindset, than one might suspect.

A key over-arching point at this kind of competition is the following: the questions have been carefully chosen, and carefully checked, to make sure they can be solved, checked and written up by school students in an hour. That’s not to say that many, or indeed any, will take that little time, but in principle it’s possible. That’s also not to say that there aren’t valid but more complicated routes to solutions, but in general people often spend a lot more time writing than they should, and a bit less time thinking. Small insights along the lines of “what’s really going on here?” often get you a lot further into the problem than complicated substitutions or lengthy calculations at this level.

So if some of the arguments below feel slick, then I guess that’s intentional. When I received the paper and had a glance in my office, I was only looking for slick observations, partly because I didn’t have time for detailed analysis, but also because I was confident that there were slick observations to be made, and I felt it was just my task to find them.

Anyway, these are the questions: (note that the copyright to these is held by BMOS – reproduced here with permission.)

Question One

I immediately tried the example where the perpendicular sides are parallel to the coordinate axes, and found that I could generate all multiples of 3 in this way. This seemed a plausible candidate for an answer, so I started trying to find a proof. I observed that if you have lots of integer points on one of the equal sides, you have lots of integer points on the corresponding side, and these exactly match up, and then you also have lots of integer points on the hypotenuse too. In my first example, these exactly matched up too, so I became confident I was right.

Then I tried another example ( (0,0), (1,1), (-1,1) ) which has four integer points, and could easily be generalised to give any multiple of four as the number of integer points. But I was convinced that this matching up approach had to be the right thing, and so I continued, trusting that I’d see where this alternate option came in during the proof.

Good setup makes life easy. The apex of the isosceles triangle might as well be at the origin, and then your other vertices can be $(m,n), (n,-m)$ or similar. Since integral points are preserved under the rotation which takes one equal side to the other, the example I had does generalise, but we really need to start enumerating. The number of integer points on the side from (0,0) to (m,n) is G+1, where G is the greatest common divisor of m and n. But thinking about the hypotenuse as a vector (if you prefer, translate it so one vertex is at the origin), the number of integral points on this line segment must be $\mathrm{gcd}(m+n,m-n) +1$.

To me, this felt highly promising, because this is a classic trope in olympiad problem-setting. Even without this experience, we can check that this gcd is equal to G when m/G and n/G have different parities (ie one odd, one even), and equal to 2G when m/G and n/G are both odd (they cannot both be even, as we have divided out the gcd).

So we’re done. Being careful not to double-count the vertices, we have 3G integral points when m/G and n/G have opposite parities, and 4G integral points when they are both odd, which exactly fits the pair of examples I had. But remember that we already had a pair of constructions, so (after adjusting the hypothesis to allow the second example!) all we had to prove was that the number of integral points is divisible by at least one of 3 and 4. And we’ve just done that. Counting how many integers less than 2017 have this property can be done easily, checking that we don’t double-count multiples of 12, and that we don’t accidentally include or fail to include zero as appropriate, which would be an annoying way to perhaps lose a mark after totally finishing the real content of the problem.
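None of this is hard to check computationally. The sketch below (helper names my own) counts boundary lattice points of the triangle by direct enumeration, and compares with 3G or 4G according to the parities of m/G and n/G:

```python
from math import gcd

def boundary_lattice_points(m, n):
    """Brute-force count of lattice points on the boundary of the isosceles
    right triangle with vertices (0,0), (m,n), (n,-m)."""
    verts = [(0, 0), (m, n), (n, -m)]

    def on_segment(p, a, b):
        # p is on segment [a, b]: collinear with, and between, the endpoints
        cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
        return (cross == 0
                and min(a[0], b[0]) <= p[0] <= max(a[0], b[0])
                and min(a[1], b[1]) <= p[1] <= max(a[1], b[1]))

    xs = [v[0] for v in verts]
    ys = [v[1] for v in verts]
    return sum(1
               for x in range(min(xs), max(xs) + 1)
               for y in range(min(ys), max(ys) + 1)
               if any(on_segment((x, y), a, b)
                      for a, b in zip(verts, verts[1:] + verts[:1])))

for m in range(1, 8):
    for n in range(0, m + 1):
        G = gcd(m, n)
        predicted = 3 * G if (m // G + n // G) % 2 == 1 else 4 * G
        assert boundary_lattice_points(m, n) == predicted
```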

Question Two

(Keen observers will note that this problem first appeared on the shortlist for IMO 2006 in Slovenia.)

As n increases, obviously $\frac{1}{n}$ decreases, but the bracketed expression increases. Which of these effects is more substantial? Well $\lfloor \frac{n}{k}\rfloor$ is the number of multiples of k which are at most n, and so as a function of n, this increases precisely when n is a multiple of k. So, we expect the bracketed expression to increase substantially when n has lots of factors, and to increase less substantially when n has few factors. An extreme case of the former might be when n is a large factorial, and certainly the extreme case of the latter is n a prime.

It felt easier to test a calculation on the prime case first, even though this was more likely to lead to an answer for b). When n moves from p-1 to p, the bracketed expression goes up by exactly two, as the first floor increases, and there is a new final term. So, we start with a fraction, and then increase the numerator by two and the denominator by one. Provided the fraction was initially greater than two, it stays greater than two, but decreases. This is the case here (for reasons we’ll come back to shortly), and so we’ve done part b). The answer is yes.

Then I tried to do the calculation when n was a large factorial, and I found I really needed to know the approximate value of the bracketed expression, at least for this value of n. And I do know that when n is large, the bracketed expression should be approximately $n\log n$, with a further correction of size at most n to account for the floor functions, but I wasn’t sure whether I was allowed to know that.

But surely you don’t need to engage with exactly how large the correction due to the floors is in various cases? This seemed potentially interesting (we are after all just counting factors), but also way too complicated. An even softer version of what I’ve just said is that the harmonic series (the sum of the first n reciprocals) diverges. So in fact we have all the ingredients we need. The bracketed expression grows faster than any multiple of n (you might want to formalise this by dividing by n before analysing the floors), and so the $a_n$s get arbitrarily large. Therefore, there must certainly be an infinite number of points of increase.
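A quick numerical check of both parts (the helper a is my own definition):

```python
def a(n):
    """a_n = (1/n) * (floor(n/1) + floor(n/2) + ... + floor(n/n))."""
    return sum(n // k for k in range(1, n + 1)) / n

# part a): the bracketed sum grows like n log n, so a_n -> infinity,
# which forces infinitely many points of increase
increases = [n for n in range(1, 500) if a(n + 1) > a(n)]
assert len(increases) > 50

# part b): at a prime p the bracketed sum goes up by exactly 2,
# so a_p < a_{p-1} once a_{p-1} > 2
for p in [7, 11, 13, 17, 19]:
    assert a(p) < a(p - 1)
```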

Remark: a few people have commented to me that part a) can be done easily by treating the case $n=2^k-1$, possibly after some combinatorial rewriting of the bracketed expression. I agree that this works fine. Possibly this is one of the best examples of the difference between doing a problem leisurely as a postgraduate, and actually under exam pressure as a teenager. Thinking about the softest possible properties of a sequence (roughly how quickly does it grow, in this case) is a natural first thing to do in all circumstances, especially if you are both lazy and used to talking about asymptotics, and certainly if you don’t have paper.

Question 3

I only drew a very rough diagram for this question, and it caused no problems whatsoever, because there aren’t really that many points, and it’s fairly easy to remember what their properties are. Even in the most crude diagram, we see R and S lie on AC and AD respectively, and so the conclusion about parallel lines is really about similarity of triangles ARS and ACD. This will follow either from some equal angles, or by comparing ratios of lengths.

Since angle bisectors by definition involve equal angles, the first attack point seems promising. But actually the ratios of lengths is better, provided we know the angle bisector theorem, which is literally about ratios of lengths in the angle bisector diagram. Indeed

$\frac{AR}{RC}=\frac{AQ}{CQ},\quad \frac{AS}{SD}=\frac{AP}{PD},$     (1)

and so it only remains to show that these quantities are in fact all equal. Note that there’s some anti-symmetry here – none of these expressions use B at all! We could for example note that AP/PD = BP/PC, from which

$\left(\frac{AS}{SD}\right)^2 = \frac{AP.BP}{PC.PD},$     (2)

and correspondingly for R and Q, and work with symmetric expressions. I was pretty sure that there was a fairly well-known result that in a cyclic quadrilateral, where P is the intersection of the diagonals

$\frac{AP}{PC} = \frac{AD.AB}{DC.BC},$     (3)

(I was initially wondering whether there was a square on the LHS, but an example diagram makes the given expression look correct.)

There will be a corresponding result for Q, and then we would be almost done by decomposing (2) slightly differently, and once we’d proved (3) of course. But doing this will turn out to be much longer than necessary. The important message from (3) is that in a very simple diagram (only five points), we have a result which is true, but which is not just similar triangles. There are two pairs of similar triangles in the diagram, but they aren’t in the right places to get this result. What you do have is some pairs of triangles with one pair of equal angles, and one pair of supplementary angles (that is, $\theta$ in one, and $180-\theta$ in the other). This is a glaring invitation to use the sine rule, since the sines of supplementary angles are equal.

But, this is also the easiest way to prove the angle bisector theorem. So maybe we should just try this approach directly on the original ratio-of-lengths statement that we decided at (1) was enough, namely $\frac{AQ}{CQ}=\frac{AP}{PD}$. And actually it drops out rapidly. Using natural but informal language referencing my diagram

$\frac{AP}{PD} = \frac{\sin(\mathrm{Green})}{\sin(\mathrm{Pink})},\quad\text{and}\quad \frac{AQ}{CQ}= \frac{\sin(\mathrm{Green})}{\sin(180-\mathrm{Pink})}$

and we are done. But whatever your motivation for moving to the sine rule, this is crucial. Unless you construct quite a few extra cyclic quadrilaterals, doing this with similar triangles and circle theorems alone is going to be challenging.

Remark: If you haven’t seen the angle bisector theorem before, that’s fine. Both equalities in (1) are a direct statement of the theorem. It’s not an intimidating statement, and it would be a good exercise to prove either of these statements in (1). Some of the methods just described will be useful here too!

Question 4

You might as well start by playing around with methodical strategies. My first try involved testing 000, 111, … , 999. After this, you know which integers appear as digits. Note that at this stage, it’s not the same as the original game with only three digits, because we can test using digits which we know are wrong, so that answers are less ambiguous. If the three digits are different, we can identify the first digit in two tests, and then the second in a further test, and so identify the third by elimination. If only two integers appear as digits, we identify each digit separately, again in three tests overall. If only one integer appears, then we are already done. So this is thirteen tests, and I was fairly convinced that this wasn’t optimal, partly because it felt like testing 999 was a waste. But even with lots of case tries I couldn’t do better. So I figured I’d try to prove some bound, and see where I got.

A crucial observation is the following: when you run a test, the outcome eliminates some possibilities. One of the outcomes eliminates at least half the codes, and the other outcome eliminates at most half the codes. So, imagining you get unlucky every time, after k tests, you might have at least $1000\times 2^{-k}$ possible codes remaining. Since $1000\times 2^{-9}>1$, we know that we need at least 10 tests.

For this bound to be tight, each test really does need to split the options roughly in two. But this certainly isn’t the case for the first test, which splits the options into 729 (no digit agreements) and 271 (at least one agreement). Suppose the first test reduces it to 729 options; then, since $729\times 2^{-9}>1$, by the same argument as above we still need 10 further tests. We now know we need at least 11 tests, and so the original guess of 13 is starting to come back into play.

We now have to make a meta-mathematical decision about what to do next. We could look at how many options might be left after the second test, which has quite a large number of cases (depending on how much overlap there is between the first test number and the second test number). It’s probably going to be less than 512 in at least one of the cases, so this won’t improve the bound unless we then consider the third test too. This feels like a poor route to take for now, as the tree of options has branching at rate 3 (or 4 if you count obviously silly things) per turn, so gets unwieldy quickly. Another thought is that this power of two argument is strong when the set of remaining options is small, so it’s easier for a test to split the field roughly in two.

Now go back to our proposed original strategy. When does the strategy work faster than planned? It works faster than planned if we find all the digits early (eg if they are all 6 or less). So the worst case scenario is if we find the correct set of digits fairly late. But the fact that we were choosing numbers of the form aaa is irrelevant, as the digits are independent (consider adding 3 to the middle digit modulo 10 at all times in any strategy – it still works!).

This is key. For $k\le 9$, after k tests, it is possible that we fail every test, which means that at least $(10-k)$ options remain for each digit, and so at least $(10-k)^3$ options in total. [(*) Note that it might actually be even worse if eg we get a ‘close’ on exactly one test, but we are aiming for a lower bound, so at this stage considering an outcome sequence which is tractable is more important than getting the absolute worst case outcome sequence if it’s more complicated.] Bearing in mind that I’d already tried finishing from the case of reduction to three possibilities, and I’d tried hard to sneak through in one fewer test, and failed, it seemed sensible to try k=7.

After 7 tests, we have at least 27 options remaining, which by the powers-of-two argument requires at least 5 further tests to separate. So 12 in total, which is annoying, because now I need to decide whether this is really the answer and come up with a better construction, or enhance the proof.

Clearly though, before aiming for either of these things, I should actually try some other values of k, since this takes basically no time at all. And k=6 leaves 64 options, from which the power of two argument is tight; and k=5 leaves 125, which is less tight. So attacking k=6 is clearly best. We just need to check that the 7th move can’t split the options exactly into 32 + 32. Note that in the example, where we try previously unseen digits in every position, we split into 27 + 37 [think about (*) again now!]. Obviously, if we have more than four options left for any digit, we are done as then we have strictly more than 4x4x4=64 options. So it remains to check the counts if we try previously unseen digits in zero, one or two positions. Zero is silly (gives no information), and one and two can be calculated, and don’t give 32 + 32.
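The final enumeration is small enough to write out in full. The sketch below assumes, as the counts 27 + 37 above suggest, that a test’s two outcomes are ‘no position shows its correct digit’ and ‘at least one does’, that each position retains 4 live digits after six worst-case tests, and that (up to symmetry, since live digits are interchangeable) a test is described by the number a of positions in which it plays a live digit; this parametrisation is mine.

```python
# 64 = 4*4*4 codes remain after six worst-case tests
def split(a):
    """Split of the 64 codes by the 7th test, which plays a live digit in
    `a` positions and a known-dead digit in the other 3 - a positions."""
    fail = 3 ** a * 4 ** (3 - a)   # codes consistent with 'no position correct'
    return (fail, 64 - fail)

splits = [split(a) for a in range(4)]
assert splits == [(64, 0), (48, 16), (36, 28), (27, 37)]
# no test achieves 32 + 32, so some outcome always keeps at least 33 codes,
# and 33 or more options need 6 further tests: 7 + 6 = 13 in total
assert all(max(s) >= 33 for s in splits)
```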

So this is a slightly fiddly end to the solution, and relies upon having good control over what you’re trying to do, and what tools you currently have. The trick to solving this is resisting calculations and case divisions that are very complicated. In the argument I’ve proposed, the only real case division is right at the end, by which point we are just doing an enumeration in a handful of cases, which is not really that bad.

Chains and antichains

I’ve recently been at the UK-Hungary winter olympiad camp in Tata, for what is now my sixth time. As well as doing some of my own work, I have enjoyed the rare diversion of some deterministic combinatorics. It seems to be a local variant of the pigeonhole principle that given six days at a mathematical event in Hungary, at least one element from {Ramsey theory, Erdos-Szekeres, antichains in the hypercube} will be discussed, with probability one. On this occasion, all were discussed, so I thought I’d write something about at least one of them.

Posets and directed acyclic graphs

This came up on the problem set constructed by the Hungarian leaders. The original formulation asked students to show that among any 17 positive integers, there are either five such that no one divides any other, or five such that among any pair, one divides the other.

It is fairly clear why number theory plays little role. We assign the given integers to the vertices of a graph, and whenever a divides b, we add a directed edge from the vertex corresponding to a to the vertex corresponding to b. Having translated the given situation into a purely combinatorial statement, fortunately we can translate the goal into the same language. If we can find a chain of four directed edges (hence five vertices – beware confusing use of the word ‘length’ here) then we have found the second possible option. Similarly, if we can find an antichain, a set of five vertices with no directed edges between them, then we have found the first possible option.

It’s worth noting that the directed graph we are working with is transitive. That is, whenever there is an edge a->b and b->c, there will also be an edge a->c. This follows immediately from the divisibility condition. There are also no directed cycles in the graph, since otherwise there would be a cycle of integers where each divided its successor. But of course, when a divides b and these are distinct positive integers, this means that b is strictly larger than a, and so this relation cannot cycle.

In fact, among a set of positive integers, divisibility defines a partial order, which we might choose to define as any ordering for which the associated directed graph is transitive and acyclic, although obviously we could use language more naturally associated with orderings. Either way, from now on we consider posets and the associated DAGs (directed acyclic graphs) interchangeably.

Dilworth’s theorem

In the original problem, we are looking for either a large chain, or a large antichain. We are trying to prove that it’s not possible to have largest chain size at most four, and largest antichain size at most four when there are 17 vertices, so we suspect there may be some underlying structure: in some sense perhaps the vertex set is the ‘product’ of a chain and an antichain, or at least a method of producing antichains from a single vertex.

Anyway, one statement of Dilworth’s theorem is as follows:

Statement 1: in a poset with nm+1 elements, there is either a chain of size n+1, or an antichain of size m+1.

Taking n=m=4 immediately finishes the original problem about families of divisors. While this is the most useful statement here, it’s probably not the original, which says the following:

Statement 2: in a poset, there exists $\mathcal{C}$ a decomposition into chains, and an antichain $A$ such that $|\mathcal{C}|=|A|$.

Remark 1: Note that for any decomposition into chains and any antichain, we have $|\mathcal{C}|\ge |A|$, since you can’t have more than one representative from any chain in the antichain. So Statement 2 is saying that equality does actually hold.

Remark 2: Statement 1 follows immediately from Statement 2. Suppose, in a poset with nm+1 elements, that every antichain has size at most m and every chain has size at most n. By Statement 2 there is a decomposition into at most m chains, each of size at most n, so the total size of the poset is at most mn. Contradiction.

Unsuccessful proof strategies for Dilworth

Since various smart young people who didn’t know the statement or proof of Dilworth’s theorem attempted to find it (in the form of Statement 1, and in a special case) under timed conditions, it’s easy to talk about what doesn’t work, and try to gain intellectual value by explaining why.

• Forgetting directions: in general one might well attack a problem by asking whether we have more information than we need. But ignoring the directions of the edges is throwing away too much information. After doing this, antichains are fine, but maybe you need to exhibit some undirected ‘chains’. Unless these undirected chains are much longer than you are aiming for, you will struggle to reconstruct directed chains out of them.
• Where can the final vertex go?: in a classic trope, one might exhibit a directed graph on nm vertices with neither a chain of size n+1 nor an antichain of size m+1. We attempt to argue that this construction is essentially unique, and that it goes wrong when we add an extra vertex. As a general point, it seems unlikely to be easier to prove that exactly one class of configurations has a given property in the nm case, than to prove that no configuration has the same property in the nm+1 case. A standalone proof of uniqueness is likely to be hard, or a disguised rehash of an actual proof of the original statement.
• Removing a chain: if you remove a chain of maximal length, which for the sake of contradiction we may assume has size n, then what you have left is n(m-1)+1 vertices. If you have a long chain left, then you’re done, although maximality has gone wrong somewhere. So, inductively, you have an antichain of size m in what remains. But it’s totally unclear why it should be possible to extend the antichain with one of the vertices you’ve just removed.

An actual proof of Dilworth (Statement 1), and two consequences

This isn’t really a proof, but rather a way of classifying the vertices in the directed graph from which this version of Dilworth can be read off. As we said earlier, we imagine there may be some product structure. In particular, we expect to be able to find a maximal chain, and a nice antichain associated to each element of the maximal chain.

We start by letting $V_0$ consist of all the vertices which are sources, that is, have zero indegree. These are minima in the partial ordering setting. Now let $V_1$ consist of all vertices whose in-neighbourhood is entirely contained in $V_0$, that is, they are descendants only of $V_0$. Then let $V_2$ consist of all remaining vertices whose in-neighbourhood is entirely contained in $V_0\cup V_1$ (but not entirely in $V_0$, otherwise it would have already been treated), and so on. We end up with what one might call an onion decomposition of the vertices based on how far they are from the sources. This gives $V_0,V_1,\ldots,V_k$, and then we can find a chain of size k+1 by starting with any vertex in $V_k$ and constructing backwards towards the source. However, this is also the largest possible size of a chain, because every time we move up a level in the chain, we must move from $V_i$ to $V_j$ where j>i.

It’s easy to check that each $V_i$ is an antichain, and thus we can read off Statement 1. A little more care, and probably an inductive argument is required to settle Statement 2.
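The onion decomposition is easy to run on a divisibility example. In the sketch below (names my own), peeling off the minima of what remains is exactly the condition that a vertex’s in-neighbourhood lies in earlier levels:

```python
def onion_levels(vertices, less):
    """Peel a finite poset into levels V_0, V_1, ...: V_0 is the set of minima,
    and V_i collects elements all of whose strict lower bounds lie in earlier
    levels, i.e. the minima of whatever remains."""
    remaining, levels = set(vertices), []
    while remaining:
        peeled = {v for v in remaining
                  if not any(less(u, v) for u in remaining)}
        levels.append(peeled)
        remaining -= peeled
    return levels

divides = lambda a, b: b % a == 0 and a != b
levels = onion_levels(range(1, 17), divides)
# each level is an antichain
for V in levels:
    assert not any(divides(a, b) for a in V for b in V)
# the number of levels equals the longest chain length: 1 | 2 | 4 | 8 | 16
assert len(levels) == 5
assert levels[0] == {1}
```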

We have however proved what is often called the dual of Dilworth’s theorem, namely that in a poset there exists a chain C, and a decomposition into a collection $\mathcal{A}$ of antichains, for which $|C|=|\mathcal{A}|$.

Finally, as promised, we return to Erdos-Szekeres, if not to positive integers. We apply Dilworth Statement 1, with n=m, to a sequence of $m^2+1$ real numbers $a_0,a_1,\ldots,a_{m^2}$, with the ordering $a_i\rightarrow a_j$ if $i\le j$ and $a_i\le a_j$. Chains correspond to increasing subsequences, and antichains to decreasing subsequences, so we have shown that there is a monotone subsequence of length m+1.
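This is cheap to verify numerically; the sketch below (a quadratic DP, names my own) checks that sequences of $m^2+1$ random reals always contain a monotone subsequence of length m+1:

```python
import random

def longest_monotone(seq):
    """Length of the longest monotone subsequence, by quadratic DP."""
    def longest(cmp):
        best = [1] * len(seq)
        for i in range(len(seq)):
            for j in range(i):
                if cmp(seq[j], seq[i]):
                    best[i] = max(best[i], best[j] + 1)
        return max(best)
    return max(longest(lambda x, y: x <= y), longest(lambda x, y: x >= y))

m = 5
for _ in range(100):
    seq = [random.random() for _ in range(m * m + 1)]
    assert longest_monotone(seq) >= m + 1   # Erdos-Szekeres guarantee
```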

Skorohod embedding

Background

Suppose we are given a standard Brownian motion $(B_t)$, and a stopping time T. Then, so long as T satisfies one of the regularity conditions under which the Optional Stopping Theorem applies, we know that $\mathbb{E}[B_T]=0$. (See here for a less formal introduction to OST.) Furthermore, since $B_t^2-t$ is a martingale, $\mathbb{E}[B_T^2]=\mathbb{E}[T]$, so if the latter is finite, so is the former.

Now, using the strong Markov property of Brownian motion, we can come up with a sequence of stopping times $0=T_0, T_1, T_2,\ldots$ such that the increments $T_k-T_{k-1}$ are IID with the same distribution as T. Then $0,B_{T_1},B_{T_2},\ldots$ is a centered random walk. By taking T to be the hitting time of $\{-1,+1\}$, it is easy to see that we can embed simple random walk in a Brownian motion using this approach.

Embedding simple random walk in Brownian motion.

The Skorohod embedding question asks: can all centered random walks be constructed in this fashion, by stopping Brownian motion at a sequence of stopping times? With the strong Markov property, this immediately reduces to the question of whether every centered, finite-variance distribution X can be expressed as $B_T$ for some integrable stopping time T.

The answer to this question is yes, and much of what follows is drawn from, or at least prompted by, Obloj’s survey paper, which details the problem and the rich history of the many approaches to its solution over the past seventy years.

Applications and related things

The relationship between random walks and Brownian motion is a rich one. Donsker’s invariance principle asserts that Brownian motion appears as the scaling limit of a random walk. Indeed, one can construct Brownian motion itself as the limit of a sequence of consistent random walks with normal increments on an increasingly dense set of times. Furthermore, random walks are martingales, and we know that continuous, local martingales can be expressed as a (stochastically) time-changed Brownian motion, from the Dubins-Schwarz theorem.

The Skorohod embedding theorem can be used to prove results about random walks with general distribution by proving the corresponding result for Brownian motion, and checking that the construction of the sequence of stopping times has the right properties to allow the result to be carried back to the original setting. It obviously also gives a coupling between an individual random walk and a Brownian motion which may be useful in some contexts, as well as a coupling between any pair of random walks. This is useful in proving results for random walks which are much easier for special cases of the distribution, for example when the increments are Gaussian, or when there are combinatorial approaches to a problem about simple random walk. At the moment no aspect of this blog schedule is guaranteed, but I plan to talk about the law of the iterated logarithm shortly, whose proof is approachable in both of these settings, as well as for Brownian motion, and Skorohod embedding provides the route to the general proof.

At the end, we will briefly compare some other ways to couple a random walk and a Brownian motion.

One thing we could do is sample a copy of X independently from the Brownian motion, then declare $T= \tau_{X}:= \inf\{t\ge 0: B_t=X\}$, the hitting time of (random value) X. But recall that unfortunately $\tau_x$ has infinite expectation for all non-zero x, so this doesn’t fit the conditions required to use OST.

Skorohod’s original method is described in Section 3.1 of Obloj’s notes linked above. The idea is to pair up positive values taken by X with negative values taken by X in a clever way. If we have a positive value b and a negative value a, then $\tau_{a,b}$, the first hitting time of $\mathbb{R}\backslash (a,b)$, is integrable. Then we choose one of these positive-negative pairs according to the projection of the distribution of X onto the pairings, and let T be the hitting time of this pair of values. The probability of hitting b conditional on hitting {a,b} is easy to compute (it’s $\frac{-a}{b-a}$), so we need to have chosen our pairs so that the ‘probability’ of hitting b (ie the density) comes out right. In particular, this method has to start from continuous distributions X, and treat atoms in the distribution of X separately.

The case where the distribution X is symmetric (that is $X\stackrel{d}=-X$) is particularly clear, as then the pairs should be $(-x,x)$.

However, it feels like there is enough randomness in Brownian motion already, and subsequent authors showed that indeed it wasn’t necessary to introduce extra randomness to provide a solution.

One might ask whether it’s possible to generate the distribution on the set of pairs (as above) out of the Brownian motion itself, but independently from all the hitting times. It feels like it might be possible to make the distribution on the pairs measurable with respect to

$\mathcal{F}_{0+} = \bigcap\limits_{t>0} \mathcal{F}_t,$

the sigma-algebra of events determined by limiting behaviour as $t\rightarrow 0$ (which is independent of hitting times). But of course, unfortunately $\mathcal{F}_{0+}$ has a zero-one law, so it’s not possible to embed non-trivial distributions there.

Dubins solution

The exemplar for solutions without extra randomness is due to Dubins, shortly after Skorohod’s original argument. The idea is to express the distribution X as the almost sure limit of a martingale. We first use the hitting time of a pair of points to ‘decide’ whether we will end up positive or negative, and then given this information look at the hitting time (after this first time) of two subsequent points to ‘decide’ which of four regions of the real interval we end up in.

I’m going to use different notation to Obloj, corresponding more closely with how I ended up thinking about this method. We let

$a_+:= \mathbb{E}[X \,|\, X>0], \quad a_- := \mathbb{E}[X\,|\, X<0],$ (*)

and take $T_1 = \tau_{\{a_-,a_+\}}$. We need to check that

$\mathbb{P}\left( B_{T_1}=a_+\right) = \mathbb{P}\left(X>0\right),$

for this to have a chance of working. But we know that

$\mathbb{P}\left( B_{T_1}=a_+\right) = \frac{-a_-}{a_+-a_-},$

and we can also attack the other side using (*) and the fact that $\mathbb{E}[X]=0$, using the law of total expectation:

$0=\mathbb{E}[X]=\mathbb{E}[X\,|\, X>0] \mathbb{P}(X>0) + \mathbb{E}[X\,|\,X<0]\mathbb{P}(X<0) = a_+ \mathbb{P}(X>0) + a_- \left(1-\mathbb{P}(X>0) \right),$

$\Rightarrow\quad \mathbb{P}(X>0)=\frac{-a_-}{a_+-a_-}.$

Now we define

$a_{++}=\mathbb{E}[X \,|\, X>a_+],\quad a_{+-}=\mathbb{E}[X\,|\, 0<X<a_+],$

and similarly $a_{-+},a_{--}$. So then, conditional on $B_{T_1}=a_+$, we take

$T_2:= \inf_{t\ge T_1}\left\{ B_t\not\in (a_{+-},a_{++}) \right\},$

and similarly conditional on $B_{T_1}=a_-$. By an identical argument to the one we have just deployed, we have $\mathbb{E}\left[B_{T_2} \,|\,\mathcal{F}_{T_1} \right] = B_{T_1}$ almost surely. So, although the $a_{+-+}$ notation now starts to get very unwieldy, it’s clear we can keep going in this way to get a sequence of stopping times $0=T_0,T_1,T_2,\ldots$ where $B_{T_n}$ determines which of the $2^n$ regions of the real line any limit $\lim_{m\rightarrow\infty} B_{T_m}$ should lie in.

A bit of work is required to check that the almost sure limit $T_n\rightarrow T$ is almost surely finite, but once we have this, it is clear that $B_{T_n}\rightarrow B_T$ almost surely, and $B_T$ has the distribution required.
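Since the construction is so recursive, it is easy to sanity-check numerically. Here's a minimal Python sketch (the toy distribution and all the names are mine, not Dubins' or Obloj's) which builds the tree of conditional means for a discrete centered distribution, and verifies at every split that the gambler's-ruin hitting probability from the current mean matches the conditional mass above it, which is exactly the computation we did for $T_1$.

```python
from fractions import Fraction

# A centered discrete distribution X (toy example): values -> probabilities
dist = {Fraction(-2): Fraction(1, 4), Fraction(-1): Fraction(1, 4),
        Fraction(1): Fraction(1, 4), Fraction(2): Fraction(1, 4)}

def mean(d):
    """Mean of a (sub-)distribution, normalising by its total mass."""
    return sum(x * p for x, p in d.items()) / sum(d.values())

def split(d, centre):
    """Split a (sub-)distribution at its mean, as in Dubins' construction."""
    lower = {x: p for x, p in d.items() if x < centre}
    upper = {x: p for x, p in d.items() if x > centre}
    return lower, upper

def check_level(d):
    """Check P(BM started at the mean hits the upper child first)
    equals the conditional mass above the mean, recursively."""
    c = mean(d)
    lower, upper = split(d, c)
    if not lower or not upper:
        return True  # leaf: an atom, the martingale has converged here
    a_minus, a_plus = mean(lower), mean(upper)
    hit_prob = (c - a_minus) / (a_plus - a_minus)  # gambler's ruin from c
    mass_up = sum(upper.values()) / sum(d.values())
    return hit_prob == mass_up and check_level(lower) and check_level(upper)

ok = check_level(dist)
```

Using exact rationals means the identity at each node holds with equality, not just numerically, which is reassuring since it is precisely the law-of-total-expectation argument above.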

We want to know how close we can make this coupling between a centered random walk with variance 1, and a standard Brownian motion. Here, ‘close’ means uniformly close in probability. For large times, the typical difference between one of the stopping times $0,T_1,T_2,\ldots$ in the Skorohod embedding and its expectation (recall $\mathbb{E}[T_k]=k$) is of order $\sqrt{n}$. So, constructing the random walk $S_0,S_1,S_2,\ldots$ from the Brownian motion via Skorohod embedding leads to

$\left |S_k - B_k \right| \asymp n^{1/4},$

for most values of $k\le n$. Strassen (1966) shows that the true scale of the maximum

$\max_{k\le n} \left| S_k - B_k \right|$

is slightly larger than this, with some extra powers of $\log n$ and $\log\log n$ as one would expect.
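For a concrete (if crude) illustration of this coupling, here is a toy simulation, with discretisation choices entirely my own: embed a simple $\pm 1$ random walk into a discretised Brownian path via hitting times of unit levels (the classical Skorohod embedding for Rademacher increments), and measure the discrepancy between the walk and the BM at matching integer times.

```python
import random

random.seed(1)
steps_per_unit = 1000            # BM grid: 1000 steps per unit of time
dt = 1.0 / steps_per_unit
N = 200 * steps_per_unit         # simulate BM on the time interval [0, 200]

B = [0.0]
for _ in range(N):
    B.append(B[-1] + random.gauss(0.0, 1.0) * dt ** 0.5)

# Embedded walk: S_k = B(T_k), where T_k is the k-th time the path has
# moved one unit from the previous embedding level (hitting time of +-1).
S, level = [0.0], 0.0
for i in range(1, N + 1):
    if abs(B[i] - level) >= 1.0:
        level += 1.0 if B[i] > level else -1.0
        S.append(level)

n = len(S) - 1                   # number of embedded walk steps (about 200)
# Discrepancy between the walk and the BM at matching integer times
disc = max(abs(S[k] - B[k * steps_per_unit])
           for k in range(n + 1) if k * steps_per_unit <= N)
```

One run of course only shows a single value of the discrepancy; to see the $n^{1/4}$ scaling you would repeat this across a range of horizons (and note the hitting times slightly overshoot on the discrete grid).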

The Komlos-Major-Tusnady coupling is a way to do a lot better than this, in the setting where the distribution of the increments has a finite MGF near 0. Then, there exists a coupling of the random walk and the Brownian motion such that

$\max_{k\le n}\left|S_k- B_k\right| = O(\log n).$

That is, there exists C such that

$\left[\max_{k\le n} \left |S_k-B_k\right| - C\log n\right] \vee 0$

is a tight family of distributions, indeed with uniform exponential tail. To avoid digressing infinitely far from my original plan to discuss the proof of the law of iterated logarithm for general distributions, I’ll stop here. I found it hard to find much coverage of the KMT result apart from the challenging original paper, and many versions expressed in the language of empirical processes, which are similar to random walks in many ways relevant to convergence and this coupling, but not for Skorohod embedding. So, here is a link to some slides from a talk by Chatterjee which I found helpful in getting a sense of the history, and some of the modern approaches to this type of normal approximation problem.

BMO1 2016 – the non-geometry

Here’s a link to yesterday’s BMO1 paper, and the video solutions for all the problems. I gave the video solution to the geometric Q5, and discuss aspects of this at some length in the previous post.

In these videos, for obvious educational reasons, there’s a requirement to avoid referencing theory and ideas that aren’t standard on the school curriculum or relatively obvious directly from first principles. Here, I’ve written down some of my own thoughts on the other problems in a way that might add further value for those students who already have some experience of olympiads and these types of problems. In particular, on problems you can do, it’s worth asking what you can learn from how you did them that might be applicable generally, and obviously for some of the harder problems, it’s worth knowing about solutions that do use a little bit of theory. Anyway, I hope it’s of interest to someone.

Obviously we aren’t going to write out the whole list, but there’s a trade-off in time between coming up with neat ideas involving symmetry, and just listing and counting things. Any idea is going to formalise somehow the intuitive statement ‘roughly half the digits are odd’. The neat ideas involve formalising the statement ‘if we add leading zeros, then roughly half the digits are odd’. The level of roughness required is less in the first statement than the second statement.

Then there’s the trade-off. Trying to come up with the perfect general statement that is useful and true might lead to something like the following:

‘If we write the numbers from 0000 to N, with leading zeros, and all digits of N+1 are even, then half the total digits, ie 2N of them, are odd.’

This is false, and maybe the first three such things you try along these lines are also false. What you really want to do is control the numbers from 0000 to 1999, for which an argument by matching is clear, and gives you 2000 x 4 / 2 = 4000 odd digits. You can exploit the symmetry by matching k with 1999-k, or do it directly first with the units, then with the tens and so on.

The rest (that is, 2000 to 2016) can be treated by listing and counting. Of course, the question wants an actual answer, so we should be wary of getting it wrong by plus or minus one in some step. A classic error of this kind is that the number of integers between 2000 and 2016 inclusive is 17, not 16. I don’t know why the memory is so vivid, but I recall being upset in Year 2 about erring on a problem of this kind involving fences and fenceposts.
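Both steps are easily checked by brute force, which is also a good guard against exactly these off-by-one errors. A quick Python sketch (assuming, as above, that we are counting the odd digits used in writing the integers up to 2016; note that including 0, or adding leading zeros, changes nothing since zeros are even):

```python
def odd_digits(n):
    """Number of odd digits in the decimal representation of n."""
    return sum(int(c) % 2 for c in str(n))

# The matching argument: 0000-1999 (leading zeros are even, so they do not
# affect the count) should give 2000 * 4 / 2 = 4000 odd digits.
block = sum(odd_digits(k) for k in range(2000))

# The rest, 2000 to 2016 inclusive, by listing and counting.
tail = sum(odd_digits(k) for k in range(2000, 2017))
total = block + tail
```

The brute force confirms the 4000 from the pairing $k \leftrightarrow 1999-k$, and the tail is small enough to check by hand against the program.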

As with so many new types of equation, the recipe is to reduce to a type of equation you already know how to solve. Here, because the fractional parts $\{8y\}$ and $\{25y\}$ have a different form on different ranges, it makes sense to consider the three ranges

$y\in[0,1/25],\, y\in[1/25,1/8],\, y\in [1/8,\infty),$

as for each of these ranges, we can rewrite $5y\{8y\}\{25y\}$ in terms of standard functions without this bracket notation. On each range we can solve the corresponding equation. We then have to check that each solution does actually lie in the appropriate range, and in two cases it does, and in one case it doesn’t.

Adding an appropriately-chosen value to each side allows you to factorise the quadratics. This might be very useful. But is it an invitation to do number theory and look at coprime factors and so on, or is a softer approach more helpful?

The general idea is that the set of values taken by any quadratic sequence with integer coefficients and leading coefficient one looks from a distance like the set of squares, or the set $\{m(m+1), \,m\in\mathbb{N}\}$, which you might think of as ‘half-squares’ or ‘double triangle numbers’ as you wish. And by ‘from a distance’, I mean ‘up to an additive constant’. If you care about limiting behaviour, then of course this additive constant might well not matter, but if you care about all solutions, you probably do care. To see why this holds, note that

$n^2+2n = (n+1)^2 - 1,$

so indeed up to an additive constant, the quadratic on the LHS gives the squares, and similarly

$n^2 - 7n = (n-4)(n-3)-12,$

and so on. To solve the equation $n^2=m^2+6$, over the integers, one can factorise, but another approach is to argue that the distance between adjacent squares is much more than 6 in the majority of cases, which leaves only a handful of candidates for n and m to check.

The same applies to this question. Adding on 9 gives

$n^2-6n+9 = m^2 + m -1,$

which is of course the same as

$(n-3)^2 = m(m+1)-1.$

Now, since we know that adjacent squares and ‘half-squares’ are more than one apart in all but a couple of cases, we know why there should only be a small number of solutions. I would call a method of this kind square-sandwiching, but I don’t see much evidence from Google that this term is generally used, except on this blog.

Of course, we have to be formal in an actual solution, and the easiest way to achieve this is to sandwich $m(m+1)-1$ between adjacent squares $m^2$ and $(m+1)^2$, since it is clear-cut that the only squares which differ by one are zero and one.
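The sandwiching can itself be checked mechanically. Here's a brute-force sketch (the search window is my own choice) confirming both the sandwich inequality and that the only solutions to the rearranged equation in that window are the expected ones:

```python
# Search a window of positive integers for (n-3)^2 = m(m+1) - 1.
solutions = [(n, m) for n in range(1, 300) for m in range(1, 300)
             if (n - 3) ** 2 == m * (m + 1) - 1]

# Sandwiching: for m >= 2, m^2 < m(m+1)-1 < (m+1)^2 strictly,
# so m(m+1)-1 can only be a square when m = 1.
sandwich_ok = all(m * m < m * (m + 1) - 1 < (m + 1) ** 2
                  for m in range(2, 300))
```

Of course, the brute force is no substitute for the sandwich argument itself, which is what certifies there are no solutions outside any finite window.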

I really don’t have much to say about this. It’s not on the school curriculum so the official solutions are not allowed to say this, but you have to use that all integers except those which are 2 modulo 4 can be written as a difference of two squares. The easiest way to show this is by explicitly writing down the appropriate squares, treating the cases of odds and multiples of four separately.
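Here is a short sketch of those explicit constructions (the function name is mine), together with a brute-force confirmation that nothing which is 2 modulo 4 is a difference of two squares:

```python
def two_squares(n):
    """Explicit a^2 - b^2 = n, treating odd n and multiples of four
    separately, as described above; None when n = 2 mod 4."""
    if n % 2 == 1:            # 2k+1 = (k+1)^2 - k^2
        k = (n - 1) // 2
        return (k + 1, k)
    if n % 4 == 0:            # 4k = (k+1)^2 - (k-1)^2
        k = n // 4
        return (k + 1, k - 1)
    return None               # a-b and a+b have the same parity, so a^2-b^2
                              # is odd or divisible by 4, never 2 mod 4

# Check the representations on a range...
ok = all(
    two_squares(n) is None
    or two_squares(n)[0] ** 2 - two_squares(n)[1] ** 2 == n
    for n in range(1, 200)
)
# ...and independently that n = 2 mod 4 really has no representation:
# writing a = b + d, we need d * (2b + d) = n for some d >= 1, b >= 0.
no_rep = all(
    all(d * (2 * b + d) != n for d in range(1, n + 1) for b in range(n + 1))
    for n in range(2, 100, 4)
)
```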

So you lose if after your turn the running total is 2 modulo 4. At this point, the combinatorics isn’t too hard, though as in Q1 one has to be mindful that making an odd number of small mistakes will lead to the wrong answer! As in all such problems, it’s best to try and give a concrete strategy for Naomi. And it’s best if there’s something inherent in the strategy which makes it clear that it’s actually possible to implement. (Eg, if you claim she should choose a particular number, ideally it’s obvious that number is available to choose.)

One strategy might be: Naomi starts by choosing a multiple of four. Then there is an even number of multiples of four remaining, so Naomi’s strategy is:

• whenever Tom chooses a multiple of four, Naomi may choose another multiple of four;
• whenever Tom chooses a number which is one (respectively three) modulo 4, Naomi may choose another which is three (respectively one) modulo 4.

Note that Naomi may always choose another multiple of four precisely because we’ve also specified the second condition. If sometimes Tom chooses an odd number and Naomi responds with a multiple of four out of an idle and illogical sense of caprice, then the first bullet point would not be true. One can avoid this problem by being more specific about exactly what the algorithm is, though there’s a danger that statements like ‘whenever Tom chooses k, Naomi should choose 100-k’ can introduce problems about avoiding the case k=50.

I started this at the train station in Balatonfured with no paper and so I decided to focus on the case of just m, m+1 and n, n+2. This wasn’t a good idea in my opinion because it was awkward but guessable, and so didn’t give too much insight into actual methods. Also, it didn’t feel like inducting on the size of the sequences in question was likely to be successful.

If we know about the Chinese Remainder Theorem, we should know that we definitely want to use it here in some form. Here are some clearly-written notes about CRT with exercises and hard problems which a) I think are good; b) cite this blog in the abstract. (I make no comment on correlation or causality between a) and b)…)

CRT is about solutions to sets of congruence equations modulo various bases. There are two aspects to this, and it feels to me like a theorem where students often remember one aspect, and forget the other one, in some order. Firstly, the theorem says that subject to conditions on the values modulo any non-coprime bases, there exist solutions. In many constructive problems, especially when the congruences are not explicit, this is useful enough by itself.

But secondly, the theorem tells us what all the solutions are. There are two stages to this: finding the smallest solution, then finding all the solutions. Three comments: 1) the second of these is easy – we just add on all multiples of the LCM of the bases; 2) we don’t need to find the smallest solution – any solution will do; 3) if you understand CRT, you might well comment that the previous two comments are essentially the same. Anyway, finding the smallest solution, or any solution is often hard. When you give students an exercise sheet on CRT, finding an integer which is 3 mod 5, 1 mod 7 and 12 mod 13 is the hard part. Even if you’re given the recipe for the algorithm, it’s the kind of computation that’s more appealing if you are an actual computer.
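For what it's worth, the computation really is more appealing to an actual computer. Here's a sketch of the naive incremental recipe (names mine; it assumes pairwise coprime bases, as in the example just given): satisfy the congruences one at a time, stepping through multiples of the modulus accumulated so far.

```python
from math import gcd

def crt(congruences):
    """Combine pairs (r, m) meaning x = r (mod m), one at a time.
    Assumes pairwise coprime moduli."""
    x, M = 0, 1
    for r, m in congruences:
        assert gcd(M, m) == 1
        # adding multiples of M preserves all earlier congruences,
        # so step until the new one also holds
        while x % m != r % m:
            x += M
        M *= m
    return x, M   # smallest non-negative solution, and the combined modulus

# The exercise from the text: 3 mod 5, 1 mod 7, 12 mod 13
x, M = crt([(3, 5), (1, 7), (12, 13)])
```

All other solutions are then obtained by adding multiples of M, exactly as in comment 1) above. (The stepping loop is the slow-but-transparent version; extended Euclid gives the same answer faster.)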

Ok, so returning to this problem, the key step is to phrase everything in a way which makes the application of CRT easy. We observe that taking n=2m satisfies the statement – the only problem of course is that 2m is not odd. But CRT then tells us what all solutions for n are, and it’s clear that 2m is the smallest, so we only need to add on the LCM (which is odd) to obtain the smallest odd solution.

BMO1 2016 Q5 – from areas to angles

For the second year in a row Question 5 has been a geometry problem; and for the second year in a row I presented the video solution; and for the second year in a row I received the question(s) while I was abroad. You can see the video solutions for all the questions here (for now). I had a think about Q5 and Q6 on the train back from a day out at Lake Balaton in Western Hungary, so in keeping with last year’s corresponding post, here are some photos from those sunnier days.

I didn’t enjoy this year’s geometry quite as much as last year’s, but I still want to say some things about it. At the time of writing, I don’t know who proposed Q5, but in contrast to most geometry problems, where you can see how the question might have emerged by tweaking a standard configuration, I don’t have a good intuition for what’s really going on here. I can, however, at least offer some insight into why the ‘official’ solution I give on the video has the form that it does.

The configuration given is very classical, with only five points, and lots of equal angles. The target statement is also about angles, indeed we have to show that a particular angle is a right-angle. So we might suspect that the model approach might well involve showing some other tangency relation, where one of the lines AC and BC is a radius and the other a tangent to a relevant circle. I think it’s worth emphasising that throughout mathematics, the method of solving a problem is likely to involve similar objects to the statement of the problem itself. And especially so in competition problems – it seemed entirely reasonable that the setter might have found a configuration with two corresponding tangency relations and constructed a problem by essentially only telling us the details of one of the relations.

There’s the temptation to draw lots of extra points or lots of extra lines to try and fit the given configuration into a larger configuration with more symmetry, or more suggestive similarity [1]. But, at least for my taste, you can often make a lot of progress just by thinking about what properties you want the extra lines and points to have, rather than actually drawing them. Be that as it may, for this question, I couldn’t initially find anything suitable along these lines [2]. So we have to think about the condition.

But then the condition we’ve been given involves areas, which feels at least two steps away from giving us lots of information about angles. It doesn’t feel likely that we are going to be able to read off some tangency conditions immediately from the area equality we’ve been given. So before thinking about the condition too carefully, it makes sense to return to the configuration and think in very loose terms about how we might prove the result.

How do we actually prove that an angle is a right-angle? (*) I was trying to find some tangency condition, but a right-angle is also the angle subtended by a diameter of a circle. You could aim for the Pythagoras relation on a triangle which includes the proposed right-angle, or possibly it might be easier to know one angle and two side-lengths in such a triangle, and conclude with some light trigonometry? We’ve been given a condition in terms of areas, so perhaps we can use the fact that the area of a right-angled triangle is half the product of the shorter side-lengths? Getting more exotic, if the configuration is suited to description via vectors, then a dot product might be useful, but probably this configuration isn’t.

The conclusion should be that it’s not obvious what sort of geometry we’re going to need to do to solve the problem. Maybe everything will come out from similar triangles with enough imagination, but maybe it won’t. So that’s why in the video, I split the analysis into an analysis of the configuration itself, and then an analysis of the area condition. What really happens is that we play with the area condition until we get literally anything that looks at all like one of the approaches discussed in paragraph (*). To increase our chances, we need to know as much about the configuration as possible, so any deductions from the areas are strong.

The configuration doesn’t have many points, so there’s not much ambiguity about what we could do. There are two tangents to the circle. We treat APC with equal tangents and the alternate segment theorem to show the triangle is isosceles and that the base angles are equal to the angle at B in ABC. Then point Q is ideally defined in terms of ABC to use power of a point, and add some further equal angles into the diagram. (Though it turns out we don’t need the extra equal angle except through power of a point.)

So we have some equal angles, and also some length relations. One of the length relations is straightforward (AP=CP) and the other less so (power of a point $CQ^2 = AQ\cdot BQ$). The really key observation is that the angle-chasing has identified

$\angle PAQ = 180^\circ - \hat C,$

which gives us an alternative goal: maybe it will be easier to show that PAQ is a right-angle.

Anyway, that pretty much drinks the configuration dry, and we have to use the area condition. I want to emphasise how crucial this phase is for this type of geometry problem. Thinking about how to prove the goal, and getting a flavour for the type of relation that comes out of the configuration is great, but now we need to watch like a hawk when we play with the area condition for relations which look similar to what we have, and where we might be going, as that’s very likely to be the key to the problem.

We remarked earlier that we’re aiming for angles, and are given areas. A natural middle ground is lengths. All the more so since the configuration doesn’t have many points, and so several of the triangles listed as having the same area also have the same or similar bases. You might have noticed that ABC and BCQ share height above line AQ, from which we deduce AB=BQ. It’s crucial then to identify that this is useful because it supports the power of a point result from the configuration itself. It’s also crucial to identify that we are doing a good job of relating lots of lengths in the diagram. We have two pairs of equal lengths, and (through Power of a Point) a third length which differs from one of them by a factor of $\sqrt{2}$.

If we make that meta-mathematical step, we are almost home. We have a relation between a triple of lengths, and between a pair of lengths. These segments make up the perimeter of triangle APQ. So if we can relate one set of lengths and the other set of lengths, then we’ll know the ratios of the side lengths of APQ. And this is excellent, since much earlier we proposed Pythagoras as a possible method for establishing that an angle is a right-angle, and this is exactly the information we’d need for that approach.

Can we relate the two sets of lengths? We might guess yes, that with a different comparison of triangles areas (since we haven’t yet used the area of APC) we can find a further relation. Indeed, comparing APC and APQ gives CQ = 2PC by an identical argument about heights above lines.

Now we know all the ratios, it really is just a quick calculation…

[1] – I discussed the notion of adding extra points when the scripts for the recording were being shared around. It was mentioned that for some people, the requirement to add extra points (or whatever) marks a hard division between ‘problems they can do’ and ‘problems they can’t do’. While I didn’t necessarily follow this practice while I was a contestant myself, these days the first thing I do when I see any angles or an angle condition in a problem is to think about whether there’s a simple way to alter the configuration so the condition is more natural. Obviously this doesn’t always work (see [2]), but it’s on my list of ‘things to try during initial thinking’, and certainly comes a long way before approaches like ‘place in a Cartesian coordinate system’.

[2] – Well, I could actually find something suitable, but I couldn’t initially turn it into a solution. The most natural thing is to reflect P in AC to get P’, and Q in BC to get Q’. The area conditions [AP’C]=[ABC]=[BCQ’] continue to hold, but now P’ and B are on the same side of AC, hence P’B || AC. Similarly AQ’ || BC. I see no reason not to carry across the equal length deductions from the original diagram, and we need to note that angles P’AC, ACP’, CBA are equal and angles Q’AB and BAC are equal. In the new diagram, there are many things it would suffice to prove, including that CP’Q’ are collinear. Note that unless you draw the diagram deliberately badly, it’s especially easy accidentally to assume that CP’Q’ are collinear while playing around, so I wasted quite a bit of time. Later, while writing up this post, I could finish it [3].

[3] – In the double-reflected diagram, BCQ’ is similar to P’BA, and since Q’C=2P’C = P’A, and Q’B=AB, you can even deduce that the scale factor is $\sqrt{2}$. There now seemed to be a few options:

• focus on AP’BC, where we now know three of the lengths, and three of the angles are equal, so we can solve for the measure of this angle. I had to use a level of trigonometry rather more exotic than the Pythagoras of the original solution, so this doesn’t really serve the purpose.
• Since BCQ’ is similar to P’BA and ABQ’ similar to CP’A, we actually have Q’BCA similar to AP’BC. In particular, $\angle CBP' = \angle ACB$, and thus both are 90. Note that for this, we only needed the angle deductions in the original configuration, and the pair of equal lengths.
• There are other ways to hack this final stage, including showing that BP’ meets AQ’ at the latter’s midpoint, to give CP’Q’ collinear.

DGFF 3 – Gibbs-Markov property for entropic repulsion

In the previous post, we saw that it isn’t much extra effort to define the DGFF with non-zero boundary conditions, by adding onto the zero-BC DGFF the unique (deterministic) harmonic function which extends the boundary values into the domain. We also saw how a Gibbs-Markov property applies, whereby the values taken by the field on some sub-region $A\subset D$ depend on the values taken on $D\backslash A$ only through values taken on $\partial A$.

In this post, we look at how this property and some other methods are applied by Deuschel [1] to study the probability that the DGFF on a large box in $\mathbb{Z}^d$ is positive ‘everywhere’. This event can be interpreted in a couple of ways, all of which are referred to there as entropic repulsion. Everything which follows is either taken directly or paraphrased directly from [1]. I have tried to phrase this in a way which avoids repeating most of the calculations, instead focusing on the methods and the motivation for using them.

Fix dimension $d\ge 2$ throughout. We let $P^0_N$ be the law of the DGFF on $V_N:=[-N,N]^d\subset \mathbb{Z}^d$ with zero boundary conditions. Then for any subset $A\subset \mathbb{Z}^d$, in an intuitively-clear abuse of notation, we let

$\Omega^+(A):= \{ h_x\ge 0, x\in A\},$

be the event that some random field h takes only non-negative values on A. The goal is to determine $P^0_N ( \Omega^+(V_N))$. But for the purposes of this post, we will focus on showing bounds on the probability that the field is non-negative on a thin annulus near the boundary of $V_N$, since this is a self-contained step in the argument which contains a blog-friendly number of ideas.

We set $(L_N)$ to be a sequence of integers greater than one (to avoid dividing by zero in the statement), for which $\frac{L_N}{N}\rightarrow 0$. We now define for each N, the annulus

$W_N = \{v\in V_N: L_N\le d_{\mathbb{Z}^d}(v, V_N^c)\le 2L_N \}$

with width $L_N$, set a distance $L_N$ inside the box $V_N$. We aim to control $P^0_N (\Omega^+(W_N))$. This forms middle steps of Deuschel’s Propositions 2.5 and 2.9, which discuss $P^0_N(\Omega^+(V_{N-L_N}))$. Clearly there is the upper bound

$P^0_N(\Omega^+(V_{N-L_N})) \le P^0_N(\Omega^+(W_N))$ (1)

and a lower bound on $P^N_0(\Omega^+(V_{N-L_N}))$ is obtained in the second proposition by considering the box as a union of annuli then combining the bounds on each annulus using the FKG inequality.

Upper bound via odds and evens

After removing step (1), this is Proposition 2.5:

$\limsup_{N\rightarrow \infty} \frac{L_N}{N^{d-1} \log L_N} \log P^0_N(\Omega^+(W_N)) < 0.$ (2)

This is giving a limiting upper bound on the probability of the form $L_N^{-CN^{d-1}/L_N}$, though as with all LDP estimates, the form given at (2) is more instructive.

Morally, the reason why it is unlikely that the field should be non-negative everywhere within the annulus is that the distribution at each location is centred, and even though any pair of values are positively correlated, this correlation is not strong enough to avoid this event being unlikely. But this is hard to corral into an upper bound argument directly. In many circumstances, we want to prove upper bounds for complicated multivariate systems by projecting to get an unlikely event for a one-dimensional random variable, or a family of independent variables, even if we have to throw away some probability. We have plenty of tools for tail probabilities in both of these settings. Since the DGFF is normal, a one-dimensional RV that is a linear combination (eg the sum) of all the field heights is a natural candidate. But in this case we would have thrown away too much probability, since the only way we could dominate is to demand that the sum $\sum_{x\in W_N}h^N_x\ge 0$, which obviously has probability 1/2 by symmetry. (3)

So Deuschel splits $W_N$ into $W_N^o,W_N^e$, where the former includes all vertices with odd total parity in $W_N$ and the latter includes all the vertices with even total parity in the interior of $W_N$. (Recall that $\mathbb{Z}^d$ is bipartite in exactly this fashion). The idea is to condition on $h^N\big|_{W^o_N}$. But obviously each even vertex is exactly surrounded by odd vertices. So by the Gibbs-Markov property, conditional on the odd vertices, the values of the field at the even vertices are independent. Indeed, if for each $v\in W_N^e$ we define $\bar h_v$ to be the average of its neighbours (which is measurable w.r.t to the sigma-algebra generated by the odd vertices), then

$\{h_v: v\in W_N^e \,\big|\, \sigma(h_w: w\in W_N^o)\},$

is a collection of independent normals with variance one, and where the mean of $h_v$ is $\bar h_v$.
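This conditional independence really is just bipartiteness: in Gaussian terms, the precision matrix of the field is supported on the diagonal and on nearest-neighbour pairs, so its even-even block is diagonal, because no two even-parity vertices are adjacent. Here is a tiny sketch checking the combinatorial fact (the box size and names are mine, and I am deliberately not fixing a normalisation of the field):

```python
from itertools import product

d, N = 2, 5                         # small box [0, N]^d in Z^d (toy example)
V = list(product(range(N + 1), repeat=d))

def parity(v):
    return sum(v) % 2

def adjacent(u, v):
    """Nearest neighbours in Z^d: L1 distance exactly 1."""
    return sum(abs(a - b) for a, b in zip(u, v)) == 1

evens = [v for v in V if parity(v) == 0]

# No two even-parity vertices are neighbours, so the even-even block of any
# nearest-neighbour precision matrix is diagonal: conditional on the odd
# vertices, the even vertices are independent.
no_even_edges = not any(adjacent(u, v) for u in evens for v in evens if u != v)
```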

To start finding bounds, we fix some threshold $m=m_N\gg 1$ to be determined later, and consider the odd-measurable event $A_N$ that at most half of the even vertices v have $\bar h_v\ge m$. So $A_N^c\cap \Omega^+(W_N)$ says that all the odd vertices are non-negative and many are quite large. This certainly feels like a low-probability event, and unlike at (3), we might be able to obtain good tail bounds by projection into one dimension.

In the other case, conditional on $A_N$, there are at least $\frac12 |W_N^e|$ even vertices with conditional mean at most m, each of which is independently negative with conditional probability at least $\varphi(m)$, the probability that a normal with mean m and unit variance is negative. So we can bound the probability that all of these vertices are non-negative above by the product

$(1-\varphi(m))^{\frac12 |W_N^e|}$. (4)

Note that for this upper bound, we can completely ignore the other even vertices (those with conditional mean greater than m).

So we’ll go back to $A_N^c \cap \Omega^+(W_N)$. For computations, the easiest one-dimensional variable to work with is probably the mean of the $\bar h_v$s across $v\in W_N^e$, since on $A_N^c\cap \Omega^+(W_N)$ this is at least $\frac{m}{2}$. Rather than focus on the calculations themselves involving

$\bar S^e_N:= \frac{1}{|W_N^e|} \sum\limits_{v\in W_N^e} \bar h_v,$

let us remark that it is certainly normal and centered, and so there are many methods to bound its tail, for example

$P^0_N \left( \bar S^e_N \ge \frac{m}{2} \right) \le \exp\left( \frac{-m^2}{8\mathrm{Var}(\bar S^e_N)} \right),$ (5)

as used by Deuschel just follows from an easy comparison argument within the integral of the pdf. We can tackle the variance using the Green’s function for the random walk (recall the first post in this set). But before that, it’s worth making an observation which is general and useful, namely that $\bar S^e_N$ is the expectation of

$S^e_N:= \frac{1}{|W_N^e|}\sum\limits_{v\in W_N^e} h_v$

conditional on the odds. Directly from the law of total variance, $\mathrm{Var}(X)=\mathbb{E}[\mathrm{Var}(X|Y)]+\mathrm{Var}(\mathbb{E}[X|Y])$, the variance of any random variable X is always at least the variance of $\mathbb{E}[X|Y]$.

So in this case, we can replace $\mathrm{Var}(\bar S^e_N)$ in (5) with $\mathrm{Var}(S^e_N)$, which can be controlled via the Green’s function calculation.
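The bound at (5) itself is just the standard Gaussian tail (Chernoff) estimate $\mathbb{P}(Z\ge x)\le e^{-x^2/2}$ applied at $x = \frac{m}{2\sqrt{\mathrm{Var}}}$. A quick numerical check, with parameter values of my own choosing:

```python
from math import erfc, exp, sqrt

def gauss_tail(x):
    """P(Z >= x) for a standard normal Z, via the complementary error function."""
    return 0.5 * erfc(x / sqrt(2.0))

# Check P(S >= m/2) <= exp(-m^2 / (8 Var S)) for a centered normal S,
# across a few (threshold, variance) pairs -- the bound used at (5).
ok = all(
    gauss_tail((m / 2.0) / sqrt(var)) <= exp(-m * m / (8.0 * var))
    for m in (0.5, 1.0, 2.0, 5.0)
    for var in (0.5, 1.0, 3.0)
)
```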

Finally, we choose $m_N$ so that the probability at (4) matches the probability at (5) in scale, and this choice leads directly to (2).

In summary, we decomposed the event that everything is non-negative into two parts: either there are lots of unlikely local events in the field between an even vertex and its odd neighbours, or the field has to be atypically large at the odd sites. Tuning the parameter $m_N$ allows us to control both of these probabilities in the sense required.

Lower bound via a sparse sub-lattice

To get a lower bound on the probability that the field is non-negative on the annulus, we need to exploit the positive correlations in the field. We use a similar idea to the upper bound. If we know the field is positive and fairly large in many places, then it is increasingly likely that it is positive everywhere. The question is how many places to choose?

We are going to consider a sub-lattice that lives in a slightly larger region than $W_N$ itself, and condition the field to be larger than $m=m_N$ everywhere on this lattice. We want the lattice to be sparse enough that even if we ignore positive correlations, the chance of this happening is not too small. But we also want the lattice to be dense enough that, conditional on this event, the chance that the field is actually non-negative everywhere in $W_N$ is not too small either.

To achieve this, Deuschel chooses a sub-lattice of width $\lfloor\epsilon L_N^{2/d}\rfloor$, and sets $\Lambda_N(\epsilon)$ to be the intersection of this with the annulus with radii $[N-\frac{5}{2}L_N, N-\frac{1}{2}L_N]$, to ensure it lives in a slightly larger region than $W_N$ itself. The scaling of this sub-lattice density is such that when a random walk is started at any $v\in W_N$, the probability that the RW hits $\Lambda_N(\epsilon)$ before $\partial V_N$ is asymptotically in (0,1). (Ie, not asymptotically zero or one – this requires some definitely non-trivial calculations.) In particular, for appropriate (ie large enough) choice of $\epsilon$, this probability is at least 1/2 for all $v\in W_N$. This means that after conditioning on event $B_N:=\{h_v\ge m : v\in \Lambda_N(\epsilon)\}$, the conditional expectation of $h_w$ is at least $\frac{m}{2}$ for all $w\in W_N\backslash \Lambda_N(\epsilon)$. Again this uses the Gibbs-Markov property and the Gaussian nature of the field. In particular, this conditioning means we are left with the DGFF on $V_N\backslash \Lambda_N(\epsilon)$, ie with boundary $\partial V_N\cup \Lambda_N(\epsilon)$, and then by linearity, the mean at non-boundary points is given by the harmonic extension, which is linear (and so increasing) in the boundary values.

At this point, the route through the calculations is fairly clear. Since we are aiming for a lower bound on the probability of the event $\Omega^+(W_N)$, it’s enough to find a lower bound on $P^0_N(\Omega^+(W_N)\cap B)$.

Now, by positive correlation (or, formally, the FKG inequality), we can bound $P^0_N(B)$ from below by the product of the probabilities that the field exceeds the threshold at each individual site of $\Lambda_N(\epsilon)$. Since the value of the field at each site is normal with variance at least 1 (by definition), this is straightforward.
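As a sanity check on the direction of this FKG-style bound, here is a minimal numerical sketch. This is not the actual DGFF: a toy exchangeable Gaussian vector with unit variances and correlation $\rho\ge 0$ stands in for the field on the sub-lattice, and the threshold m is purely illustrative.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)

# Toy positively-correlated Gaussian vector standing in for the field on
# the sparse sub-lattice: unit variances, common correlation rho >= 0.
n, rho, m = 4, 0.3, 0.5
cov = (1 - rho) * np.eye(n) + rho * np.ones((n, n))

# FKG-style lower bound: product of the one-site tail probabilities.
tail = 0.5 * erfc(m / sqrt(2))          # P(N(0,1) >= m)
product_bound = tail ** n

# Monte Carlo estimate of the joint probability P(h_v >= m for all v).
samples = rng.multivariate_normal(np.zeros(n), cov, size=200_000)
mc = np.mean(np.all(samples >= m, axis=1))

print(product_bound, mc)   # the joint probability exceeds the product bound
```

With positive correlations the joint event is more likely than independence would suggest, which is exactly why the product is a usable lower bound.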

Finally, we treat $P^0_N(\Omega^+(W_N) \,\big|\, B)$. We’ve established that, conditional on B, the mean at each point of $W_N\backslash \Lambda_N(\epsilon)$ is at least $\frac{m}{2}$, and we can bound the variance above too. Again, this is a conditional variance, and so is at most the corresponding original variance, which is bounded above by $\sigma_N^2:=\mathrm{Var}(h^N_0)$. (The fact that the variance is maximised at the centre is intuitively clear when phrased in terms of occupation times, but the proof is non-obvious, or at least non-obvious to me.)

Since each of the events $\{h_v^N\ge 0\}$, for $v\in W_N\backslash \Lambda_N(\epsilon)$, is positively correlated with B and with the others, we can bound the probability that they all hold from below by the product of the probabilities that each holds individually. But having established that the conditional mean is at least $\frac{m_N}{2}$ for each v, and that the variance is uniformly bounded above (including in N), this gives an easy tail bound of the form we require.
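To see the shape of this final estimate numerically, here is a back-of-envelope sketch. All the constants (the variance bound, the constant C in the threshold, the lattice size and site count) are illustrative assumptions, not values from [1].

```python
from math import erfc, sqrt, log

def gaussian_upper_tail(z):
    """P(N(0,1) >= z) via the complementary error function."""
    return 0.5 * erfc(z / sqrt(2))

# Conditional on B, each h_v has mean >= m/2 and variance <= sigma2, so
# P(h_v < 0 | B) <= P(N(m/2, sigma2) < 0) = gaussian_upper_tail(m / (2*sigma)).
sigma2 = 1.0                  # assumed uniform variance bound
N = 100                       # illustrative lattice scale
m = sqrt(48 * log(N))         # threshold scaling m_N ~ sqrt(C log N), C = 48
p_neg = gaussian_upper_tail(m / (2 * sqrt(sigma2)))   # one-site failure prob

# Product bound over, say, |W_N| = N^3 sites, using positive correlation:
p_all_nonneg = (1 - p_neg) ** N**3
print(p_neg, p_all_nonneg)
```

The point of the $\sqrt{C\log N}$ scaling is visible here: the one-site failure probability decays polynomially in N, so for C large enough it beats the polynomially-growing number of sites and the product stays close to 1.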

Again it just remains to choose the sequence of thresholds $m_N$ to maximise the lower bound on the probability that we’ve found in this way. In both cases, taking $m_N= \sqrt{C\log N}$ turns out to be sensible, and this is linked to the scaling of the maximum of the DGFF, which we will explore in the future.

References

[1] – J-D Deuschel, Entropic Repulsion of the Lattice Free Field, II. The 0-Boundary Case. Available at ProjectEuclid.

DGFF 2 – Boundary conditions and Gibbs-Markov property

In the previous post, we defined the Discrete Gaussian Free Field, and offered some motivation via the discrete random walk bridge. In particular, when the increments of the random walk are chosen to be Gaussian, many natural calculations are straightforward, since Gaussian processes are well-behaved under conditioning and under linear transformations.

Non-zero boundary conditions

In the definition of the DGFF given last time, we demanded that $h\equiv 0$ on $\partial D$. But the model is perfectly well-defined under more general boundary conditions.

It’s helpful to recall again the situation with random walk and Brownian bridge. If we want a Brownian motion which passes through (0,0) and (1,s), we could repeat the construction of the Brownian bridge, taking a standard Brownian motion and conditioning (modulo probability-zero technicalities) on passing through level s at time 1. But alternatively, we could set

$B^{\mathrm{drift-br}}(t) = B(t)+ t(s-B(1)),\quad t\in[0,1],$

or equivalently

$B^{\mathrm{drift-br}}(t)=B^{\mathrm{br}}(t)+ st, \quad t\in[0,1].$

That is, a Brownian bridge with drift can be obtained from a centred Brownian bridge by a linear transformation, and so certainly remains a Gaussian process. Exactly the same holds for a discrete Gaussian bridge: if we want non-zero values at the endpoints, we can obtain this distribution by taking the standard centred bridge and applying a linear transformation.
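In the discrete setting this linear transformation is a one-liner. The following sketch builds a centred Gaussian bridge by pinning a Gaussian random walk at both ends, then adds the linear function $st$; the step count N and endpoint value s are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

N, s = 100, 3.0   # number of steps; target endpoint value

# Centred discrete Gaussian bridge: pin a Gaussian random walk at both ends.
steps = rng.standard_normal(N)
S = np.concatenate(([0.0], np.cumsum(steps)))        # walk S_0, ..., S_N
t = np.arange(N + 1) / N
Z = S - t * S[-1]                                    # bridge, Z_0 = Z_N = 0

# Bridge with drift: add the linear function t -> s*t.
Y = Z + s * t                                        # Y_0 = 0, Y_N = s
print(Y[0], Y[-1])
```

Since Y is an affine function of the Gaussian vector S, it is itself Gaussian, exactly as in the continuum argument above.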

We can see how this works directly at the level of density functions. If we take $0=Z_0,Z_1,\ldots,Z_{N-1},Z_N=0$ to be a centred Gaussian bridge, then the density of Z at $\mathbf{z}\in \mathbb{R}^{N+1}$ is proportional to

$\mathbf{1}\{z_0=z_N=0\}\exp\left( -\frac12 \sum_{i=1}^N (z_i-z_{i-1})^2 \right).$ (3)

So rewriting $z_i= y_i- ki$ (where we might want $k=s/N$ to fit the previous example), the sum within the exponent rearranges as

$-\frac12 \sum_{i=1}^N (y_i-y_{i-1} - k)^2 = -\frac12 \left[ \sum_{i=1}^N (y_i-y_{i-1})^2 - 2k(y_N-y_0)+ Nk^2 \right].$

So when the values at the endpoints $z_0,z_N,y_0,y_N$ are fixed, this middle term is a constant, as is the final term, and thus the density of the linearly transformed bridge has exactly the same form as the original one.

In two or more dimensions, the analogue of adding a linear function is to add a harmonic function. First, some notation. Let $\varphi$ be any function on $\partial D$. Then there is a unique harmonic extension of $\varphi$, for which $\nabla \varphi=0$ everywhere on D, the interior of the domain. Recall that $\nabla$ is the discrete graph Laplacian defined up to a constant by

$(\nabla \varphi)_x = \sum\limits_{y\sim x} \left( \varphi_x - \varphi_y \right).$
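For a concrete instance, on the path graph $\{0,\ldots,N\}$ with boundary $\{0,N\}$ the interior system $(\nabla\varphi)_x=0$ is tridiagonal, and its solution is just linear interpolation of the boundary values. A short check (boundary values chosen arbitrarily):

```python
import numpy as np

# Harmonic extension on the path 0-1-...-N with boundary {0, N}:
# solve (Laplacian phi)_x = sum_{y~x} (phi_x - phi_y) = 0 at interior x.
N = 10
phi0, phiN = 2.0, 7.0      # boundary values

# Interior equations: 2*phi_x - phi_{x-1} - phi_{x+1} = 0 for x = 1..N-1,
# with the known boundary values moved to the right-hand side.
A = 2 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)
b = np.zeros(N - 1)
b[0], b[-1] = phi0, phiN
phi = np.concatenate(([phi0], np.linalg.solve(A, b), [phiN]))

print(phi)   # on a path, the harmonic extension is linear interpolation
```

In higher dimensions the same linear solve works, with the tridiagonal matrix replaced by the graph Laplacian restricted to the interior.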

If we want $h^D$ instead to have boundary values $\varphi$, it’s enough to replace $h^D$ with $h^D+\varphi$. Then, in the density for the DGFF ((1) in the previous post), the term in the exponential becomes (ignoring the $\frac{1}{4d}$)

$-\sum\limits_{x\sim y} \left[ (h^D_x-h^D_y)^2 + (\varphi_x-\varphi_y)^2 +2(h^D_x - h^D_y)(\varphi_x-\varphi_y)\right].$

For each $x\in D$, on taking this sum over its neighbours $y\in \bar D$, the final term vanishes (since $\varphi$ is harmonic), while the second term is just a constant. So the density of the transformed field, which we’ll call $h^{D,\varphi}$ is proportional to (after removing the constant arising from the second term above)

$\mathbf{1}\left\{h^{D,\varphi}_x = \varphi_x,\, x\in\partial D\right\} \exp\left( -\frac{1}{4d} \sum\limits_{x\sim y} \left( h^{D,\varphi}_x - h^{D,\varphi}_y \right)^2 \right).$

So $h^{D,\varphi}:= h^D + \varphi$ satisfies the conditions for the DGFF on D with non-zero boundary conditions $\varphi$.
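A minimal simulation of this recipe on the path graph (so effectively the discrete Gaussian bridge, with the $\frac{1}{4d}$ normalisation ignored): sample the zero-boundary field from its covariance, which is the inverse of the interior Laplacian, and add the harmonic extension of the boundary values. All the concrete values here are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# DGFF with boundary conditions phi, via h^{D,phi} = h^D + (harmonic ext of phi),
# sketched on the path {0,...,N} (d = 1, normalisation exp(-1/2 * sum of
# squared gradients)): the zero-BC field is multivariate normal with
# covariance equal to the inverse of the interior Laplacian.
N = 10
L = 2 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)
cov = np.linalg.inv(L)

phi0, phiN = -1.0, 3.0
phi = np.linspace(phi0, phiN, N + 1)     # harmonic extension: linear on a path

h_zero = np.concatenate(([0.0],
                         rng.multivariate_normal(np.zeros(N - 1), cov),
                         [0.0]))
h = h_zero + phi                         # field with boundary values phi0, phiN
print(h[0], h[-1])
```

The shifted field takes exactly the prescribed boundary values, while its fluctuations around the harmonic extension are those of the zero-boundary field.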

Harmonic functions and RW – a quick review

Like the covariances in the DGFF, harmonic functions on D are related to simple random walk on D stopped on $\partial D$. (I’m not claiming a direct connection right now.) We can define the harmonic extension $\varphi$ at an interior point x by taking $\mathbb{P}_x$ to be the law of SRW $x=Z_0,Z_1,Z_2,\ldots$ started from x, and then setting

$\varphi(x):= \mathbb{E}_x\left[ \varphi_{Z_{\tau_{\partial D}}} \right],$

where $\tau_{\partial D}$ is the first time that the random walk hits the boundary.
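This representation is easy to check by simulation. On the path $\{0,\ldots,N\}$ it reduces to the gambler's ruin calculation, and the exact harmonic extension is linear; the trial count and seed below are arbitrary.

```python
import random

random.seed(0)

# phi(x) = E_x[ phi at the boundary point hit first ], illustrated on the
# path {0,...,N} with boundary {0, N}.
N = 10
phi_boundary = {0: 2.0, N: 7.0}

def harmonic_via_srw(x, trials=20_000):
    """Monte Carlo estimate of the harmonic extension at interior point x."""
    total = 0.0
    for _ in range(trials):
        z = x
        while z not in phi_boundary:
            z += random.choice((-1, 1))   # simple random walk step
        total += phi_boundary[z]
    return total / trials

est = harmonic_via_srw(4)
print(est)   # exact value at x = 4 is 2 + (7-2)*4/10 = 4.0
```

This agrees with the linear-system solve sketched earlier; the random-walk representation is just a probabilistic way of inverting the interior Laplacian.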

Inverse temperature – a quick remark

In the original definition of the density of the DGFF, there is the option to add a constant $\beta>0$ within the exponential term so the density is proportional to

$\exp\left(-\beta \sum\limits_{x\sim y} (h_x-h_y)^2 \right).$

With zero boundary conditions, the effect of this is straightforward, as varying $\beta$ just rescales the values taken by the field. But with non-zero boundary conditions, the effect is instead to vary the magnitude of the fluctuations of the values of the field around the (unique) harmonic function on the domain with those BCs. In particular, when $\beta\rightarrow \infty$, the field is ‘reluctant to be far from harmonic’, and so $h^D \Rightarrow \varphi$.

This parameter $\beta$ is called inverse temperature. So low temperature corresponds to high $\beta$, and high stability, which fits some physical intuition.

A Markov property

For a discrete (Gaussian) random walk, the Markov property says that conditional on a given value at a given time, the trajectory of the process before this time is independent of the trajectory afterwards. The discrete Gaussian bridge is similar. Suppose we have as before $0=Z_0,Z_1,\ldots, Z_N=0$ a centred Gaussian bridge, and condition on $Z_k=y$, for $k\in\{1,\ldots,N-1\}$, and $y\in\mathbb{R}$. With this conditioning, the density (3) splits as a product

$\mathbf{1}\{z_0=z_N=0, z_k=y\}\exp\left(-\frac12 \sum\limits_{i=1}^N (z_i-z_{i-1})^2 \right) =$

$\mathbf{1}\{z_0=0,z_k=y\} \exp\left(-\frac12 \sum\limits_{i=1}^k (z_i-z_{i-1})^2 \right) \cdot \mathbf{1}\{z_k=y,z_N=0\} \exp\left(-\frac12 \sum\limits_{i=k+1}^N (z_i-z_{i-1})^2 \right).$

Therefore, with this conditioning, the discrete Gaussian bridge splits into a pair of independent discrete Gaussian bridges with drift. (The same would hold if the original process had drift too.)
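Since everything is Gaussian, this conditional independence can also be verified directly from the bridge covariance $\mathrm{Cov}(Z_i,Z_j)=\min(i,j)-ij/N$ via the standard Gaussian conditioning formula; a quick numerical check (indices chosen arbitrarily, one on each side of k):

```python
import numpy as np

# Markov property of the centred Gaussian bridge: conditional on Z_k, the
# covariance between a point before k and a point after k vanishes.
N, k = 20, 8
idx = np.arange(1, N)                       # interior points 1, ..., N-1

# Bridge covariance matrix: Cov(Z_i, Z_j) = min(i,j) - i*j/N.
I, J = np.meshgrid(idx, idx, indexing="ij")
C = np.minimum(I, J) - I * J / N

def cond_cov(C, i, j, k):
    """Gaussian conditional covariance of (Z_i, Z_j) given Z_k."""
    a, b, c = i - 1, j - 1, k - 1           # interior point -> matrix index
    return C[a, b] - C[a, c] * C[b, c] / C[c, c]

val = cond_cov(C, 3, 15, k)                 # i = 3 < k = 8 < j = 15
print(val)
```

For jointly Gaussian variables, zero conditional covariance is equivalent to conditional independence, so this is the density-splitting argument in covariance form.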

The situation for the DGFF is similar, though rather than focusing on the condition, it makes sense to start by focusing on the sub-domain of interest. Let $A\subset D$, and take $B=\bar D\backslash A$. So in particular $\partial A\subset B$.

Then we have that conditional on $h^D\big|_{\partial A}$, the restricted fields $h^D\big|_{B\backslash \partial A}$ and $h^D\big|_A$ are independent. Furthermore, $h^D\big|_A$ has the distribution of the DGFF on A, with boundary condition given by $h^D\big|_{\partial A}$. As in the discrete bridge, this follows just by splitting the density. Every gradient term corresponds to an edge in the underlying graph that lies either entirely inside $\bar A$ or entirely inside B. This holds for a general class of Gibbs models where the Hamiltonian depends only on the sum of some function of the heights (taken to be constant in this ‘free’ model) and the sum of some function of their nearest-neighbour gradients.

One additional and useful interpretation is that if we only care about the field on the restricted region A, the dependence of $h^D\big|_A$ on $h^D\big|_{D\backslash A}$ comes only through $h^D\big|_{\partial A}$. But more than that, it comes only through the (random) harmonic function which extends the (random) values taken on the boundary of A to the whole of A. So, if $h^A$ is an independent DGFF on A with zero boundary conditions, we can construct the DGFF $h^D$ from its value on $D\backslash A$ via

$h^D_x \stackrel{d}{=} h^A_x + \varphi^{h^D\big|_{\partial A}}_x,$

where $\varphi^{h^D\big|_{\partial A}}$ is the unique harmonic extension of the (random) values taken by $h^D$ on $\partial A$ to $\bar A$.
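As a check of this decomposition in the simplest case, take D to be the path $\{0,\ldots,N\}$ (with normalisation $\exp(-\frac12\sum(\text{gradients})^2)$, so the field is the centred Gaussian bridge) and A a sub-path: the covariance on D should split into the covariance of the zero-boundary field on A plus the covariance of the harmonic (here linear) extension of the boundary values.

```python
# Gibbs-Markov check on the path {0,...,N} with sub-path A = {a,...,b}:
# Cov_D(i,j) = Cov_A(i,j) + Cov(harmonic extension of h at {a, b}).
N, a, b = 20, 5, 15

def cov_D(i, j):                 # zero-BC field (bridge) on {0,...,N}
    return min(i, j) - i * j / N

def cov_A(i, j):                 # zero-BC field on the sub-path {a,...,b}
    return min(i, j) - a - (i - a) * (j - a) / (b - a)

def cov_harm(i, j):              # linear (harmonic) interpolation of h_a, h_b
    wi, wj = (i - a) / (b - a), (j - a) / (b - a)
    return ((1 - wi) * (1 - wj) * cov_D(a, a)
            + wi * wj * cov_D(b, b)
            + ((1 - wi) * wj + wi * (1 - wj)) * cov_D(a, b))

i, j = 8, 12
lhs = cov_D(i, j)
rhs = cov_A(i, j) + cov_harm(i, j)
print(lhs, rhs)   # the two sides agree
```

The independence of the two pieces is what makes the covariances add; the same identity, with the Green's function in place of the bridge covariance, holds on general domains.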

This Markov property is crucial to much of the analysis to come. There are several choices of the restricted domain which come up repeatedly. In the next post we’ll look at how much one can deduce by taking A to be the even vertices in D (recalling that every integer lattice $\mathbb{Z}^d$ is bipartite), and then taking A to be a finer sublattice within D. We’ll use this to get some good bounds on the probability that the DGFF is positive on the whole of D. Perhaps later we’ll look at a ring decomposition of $\mathbb{Z}^d$ consisting of annuli spreading out from a fixed origin. Then the distribution of the field at this origin can be considered, via the final idea discussed above, as the limit of an infinite sequence of random harmonic functions given by the values taken by the field at increasingly large radius from the origin. Defining the DGFF on the whole lattice depends on the existence or otherwise of this local limit.