RMM 2017 – Problems 1, 4 and 5

I’ve recently taken a UK team to the 2017 edition of the Romanian Master of Mathematics competition in Bucharest. The British students did extremely well and we all enjoyed ourselves mathematically and generally. The customary diary may appear shortly, but this time I want to focus mainly on the questions, since that is after all the main point of these competitions! I hope that what follows is interesting, and at least slightly educational to potential future students.

I’ve split this into two posts based on my opinion on difficulty, which is subjective but probably correlates fairly positively with most people’s. The account of Q1 is guest-written by two British students, based on their solutions during the competition.

Problem 1

a) Prove that every positive integer n can be written uniquely in the form

n = \sum_{j=1}^{2k+1} (-1)^{j-1} 2^{m_j},

where k\geq 0 and 0 \leq m_1 < m_2 < \cdots < m_{2k+1} are integers. This number k is called the weight of n.

b) Find (in closed form) the difference between the number of positive integers at most 2^{2017} with even weight and the number of positive integers at most 2^{2017} with odd weight.

Rosie Cates and Neel Nanda:

a) We are trying to express n in terms of powers of 2, so it seems sensible to write n in binary. As 2^{m_1} is the smallest power of 2, this term is responsible for the last 1 in the binary representation of n. Let x = n - 2^{m_1} (i.e. n with the last 1 removed from its binary expansion). Now if we pair up terms in the sum to get

x = (2^{m_{2k+1}} - 2^{m_{2k}}) + \ldots + (2^{m_3} - 2^{m_2}),

we can see that each bracket looks like 11\ldots 100\ldots 0 when written in binary. Also, the condition that m_i < m_{i+1} is equivalent to ensuring that we do not break any strings of consecutive 1s that were in the binary expansion of x (so for example 111110 = 110000 + 1110 is not allowed). So writing x in the desired form is the same as writing it as the sum of numbers of the form 11\ldots 100\ldots 0 without breaking any strings of 1s. For example

1110100110 = 1110000000 + 100000 + 110.

Clearly there is exactly one way of doing this for every x, so (as each n has exactly one x) there is exactly one way to do it for each n as well.

This approach allows k to be understood differently. Write n in binary and remove the last 1; now count the number of groups of consecutive 1s. This is equal to k.
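
If you want to see this interpretation in action, here is a quick Python check (mine, not part of the contest solution): it strips the last binary 1, peels off the remaining maximal blocks of 1s, verifies that the resulting exponents are strictly increasing and reproduce n, and compares the weight with the run-counting description of k.

```python
def decomposition(n):
    """Recover the alternating representation from part a): strip the last
    binary 1 of n, then peel off the maximal blocks 11...100...0 of what is left."""
    m = [(n & -n).bit_length() - 1]          # m_1 = exponent of the last 1
    x = n - (1 << m[0])
    while x:
        low = (x & -x).bit_length() - 1      # lowest bit of the lowest block of 1s
        length = 0
        y = x >> low
        while y & 1:                         # measure the length of that block
            y >>= 1
            length += 1
        m += [low, low + length]             # the block equals 2^{low+length} - 2^{low}
        x -= (1 << (low + length)) - (1 << low)
    return m

def weight_via_runs(n):
    """k = number of maximal runs of 1s in the binary expansion of n with its last 1 removed."""
    x = n - (n & -n)
    return ('0' + bin(x)[2:]).count('01')

for n in range(1, 2000):
    m = decomposition(n)
    assert m == sorted(set(m)) and len(m) % 2 == 1                    # strictly increasing, odd length
    assert n == sum((-1) ** j * 2 ** mj for j, mj in enumerate(m))    # reproduces n
    assert (len(m) - 1) // 2 == weight_via_runs(n)                    # matches the run-counting k
```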

b) The second half of the problem becomes a lot simpler with the observation that n\leq 2^{m_{2k+1}}, as

n=2^{m_{2k+1}}-(2^{m_{2k}}-2^{m_{2k-1}})-\ldots-(2^{m_2}-2^{m_1}),

and the sequence (m_j) is increasing, so each bracket is positive. As each sequence (m_j) corresponds uniquely to an integer, this means we just want to count sequences (m_j) with greatest term at most 2017. The sequence is increasing, so each sequence corresponds to a subset of \{0, 1, \ldots, 2017\} of size (2k+1). There are \binom{2018}{2k+1} subsets of size (2k+1), so the question reduces to finding a closed form for \sum_{k=0}^{1008} (-1)^k \binom{2018}{2k+1}.

This is reminiscent of a classic problem in combinatorics: using the binomial theorem to evaluate sums of binomial coefficients weighted by powers. The best example is

\sum_{k=0}^n (-1)^k \binom{n}{k} =(1-1)^n=0,

but here rather than (-1) we want something whose square is (-1), so we consider the complex number i. Using the same ideas, we get that

\sum_{r=0}^{2018} i^r \binom{2018}{r}=(1+i)^{2018},

which contains what we want, but also binomial coefficients with even r. But if r is even, i^r is real, and if r is odd, i^r is imaginary. So the sum we want appears as the imaginary part, that is

\mathrm{Im}\left((1+i)^{2018}\right)=\mathrm{Im}\left((\sqrt{2} \cdot e^{\frac{i\pi}{4}})^{2018}\right)=2^{1009}.
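
As a quick sanity check (again not part of the solution above), both the identity \mathrm{Im}\left((1+i)^N\right)=\sum_k (-1)^k\binom{N}{2k+1} and the final value can be verified exactly in Python, computing (1+i)^N with integer arithmetic rather than floating point:

```python
from math import comb

def gauss_power(N):
    """Compute (1+i)^N exactly as a pair of integers (real part, imaginary part)."""
    a, b = 1, 0
    for _ in range(N):
        a, b = a - b, a + b                  # (a + bi)(1 + i) = (a - b) + (a + b)i
    return a, b

for N in (10, 11, 2018):
    re, im = gauss_power(N)
    alt = sum((-1) ** k * comb(N, 2 * k + 1) for k in range((N + 1) // 2))
    assert im == alt                         # Im((1+i)^N) = sum_k (-1)^k C(N, 2k+1)

assert gauss_power(2018) == (0, 2 ** 1009)   # so the required difference is 2^1009
```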

Dominic: note that in both parts, the respective authors find slightly more than what they were required to. That is, respectively, the interpretation of k, and a bound on m_{2k+1}. The latter is an excellent example of the general principle that sometimes it is better to prove a stronger statement than the one you actually require in an induction argument (here for existence). The stronger statement (which you guess from playing with examples) makes the inductive step easier, as it’s then clear that the new term you get is distinct from the terms you already have.

Problem 4

In the Cartesian plane, let \mathcal G_1, \mathcal G_2 be the graphs of the quadratic functions f_1(x) = p_1x^2 + q_1x + r_1, f_2(x) = p_2x^2 + q_2x + r_2, where p_1 > 0 > p_2. The graphs \mathcal G_1, \mathcal G_2 cross at distinct points A and B. The four tangents to \mathcal G_1, \mathcal G_2 at A and B form a convex quadrilateral which has an inscribed circle. Prove that the graphs \mathcal{G}_1 and \mathcal{G}_2 have the same axis of symmetry.

This question is quite unusual for an olympiad of this kind, and I was initially skeptical, but then it grew on me. Ultimately, I was unsurprised that many contestants attacked entirely with coordinate calculations. If you use this strategy, you will definitely get there in the end, but you have to accept that you aren’t allowed to make any mistakes. And because of the amount of symmetry in the configuration, even if you make a mistake, you might still get the required answer, and so not notice that you’ve made a mistake. But I decided I liked it because various levels of geometric insight either reduced or removed the nastier calculations.

Typically, one could gain geometric insight by carefully observing an accurate diagram, but an accurate parabola is hard to draw. However, even from a vague diagram, we might guess the key intermediate property of the configuration, which is that the line joining the other two points in the quadrilateral is parallel to the y-axis. This means that they have the same x-coordinate, and indeed this x-coordinate must in fact be the same for any parabola through A and B, so it is reasonable to guess that it is \frac{x_A+x_B}{2}, the mean of the x-coordinates of A and B.
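
Before committing to a method, it’s easy to check this guess numerically. The sketch below uses arbitrarily chosen coefficients with p_1>0>p_2 (my choice, purely for illustration), computes the other two vertices of the quadrilateral, and confirms that both have x-coordinate \frac{x_A+x_B}{2}.

```python
import math

def tangent(p, q, r, a):
    """(slope, intercept) of the tangent to y = p x^2 + q x + r at x = a."""
    slope = 2 * p * a + q
    return slope, (p * a * a + q * a + r) - slope * a

def intersect(line1, line2):
    (m1, c1), (m2, c2) = line1, line2
    x = (c2 - c1) / (m1 - m2)
    return x, m1 * x + c1

# arbitrary example with p1 > 0 > p2 and two crossing points
p1, q1, r1 = 1.0, -2.0, -1.0
p2, q2, r2 = -0.5, 1.0, 3.0

# x-coordinates of A and B: roots of (p1 - p2) x^2 + (q1 - q2) x + (r1 - r2) = 0
a, b, c = p1 - p2, q1 - q2, r1 - r2
disc = math.sqrt(b * b - 4 * a * c)
xA, xB = (-b - disc) / (2 * a), (-b + disc) / (2 * a)

# tangents to G1 at A, B meet at C; tangents to G2 at A, B meet at D
xC, _ = intersect(tangent(p1, q1, r1, xA), tangent(p1, q1, r1, xB))
xD, _ = intersect(tangent(p2, q2, r2, xA), tangent(p2, q2, r2, xB))
print(xC, xD, (xA + xB) / 2)   # all three agree (up to rounding), so the diagonal CD is vertical
```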

Since you know this is the goal, it’s not too bad to calculate the equations of the tangent lines directly, and demonstrate this algebraically. But I was determined to use the focus-directrix definition of a parabola. Either recall, or digest the interesting new fact that a parabola may be defined as the locus of points which are the same distance from a fixed point P (the focus), and a fixed line \ell (the directrix). Naturally, the distance to the line is perpendicular distance.

To ensure the form given in the statement where y is a quadratic function of x, in this setting the directrix should be parallel to the x-axis. To define the tangent to the parabola at A, let A’ be the foot of the perpendicular from A onto \ell, so AA’=PA. I claim that the tangent at A is given by the perpendicular bisector of A’P. Certainly this passes through A, and it is easy to convince yourself that it can’t pass through any other point B on the parabola, since BA’> PB, as A’ is on \ell but is not the foot of the perpendicular from B to \ell. This final observation is truly a lot more obvious if you’re looking at a diagram.

We now want to finish geometrically too. In our quadrilateral, one diagonal is parallel to the y-axis, and it will suffice to show that the existence of an incircle implies that A and B must have the same y-coordinate. We have just shown A and B are the same (horizontal) distance from the other diagonal. So certainly if they have the same y-coordinate, then the quadrilateral is a kite, and the sums of opposite sides are equal, which is equivalent to the existence of an incircle. One could then finish by arguing that this ceases to be true if you move one of A and B in either direction, or by some short explicit calculation if such a perturbation argument leaves you ill at ease.

Problem 5

Fix an integer n \geq 2. An n \times n sieve is an n \times n array with n cells removed so that exactly one cell is removed from every row and every column. A stick is a 1 \times k or k \times 1 array for any positive integer k. For any sieve A, let m(A) be the minimal number of sticks required to partition A. Find all possible values of m(A), as A varies over all possible n \times n sieves.

This is a fairly classic competition problem, and while in my opinion the statement isn’t particularly fascinating, it’s interesting that it admits such a wide range of approaches.

As ever, you need to start by playing around with the setup, and guessing that the answer is 2n-2, and not thinking `it can’t possibly be the same answer as Q3??’ Then think about reasons why you couldn’t do better than 2n-2. My very vague reason was that if you only use horizontal sticks, the answer is clearly 2n-2, and the same if you only use vertical sticks. But it feels like you can only make life harder for yourself if you try to use both directions of sticks in lots of places. Note that some sort of argument involving averaging over stick lengths is definitely doomed to fail unless it takes into account the Latin square nature of the location of holes! For example, if you were allowed to put all the holes in the first row, m(A) would be n-1.

Induction is tempting. That is, you remove some number of sticks, probably those corresponding to a given hole, to reduce the board to an (n-1) \times (n-1) configuration. If you do this, you need to be clear about why you can remove what you want to remove (in particular, the number of sticks you want to remove), and whether it’s qualitatively different if the hole in question lies on the border of the board. In all of these settings, you want to be careful about 1 \times 1 sticks, which it’s easy inadvertently to count as both horizontal and vertical. This is unlikely to affect the validity of any argument (just picking either an arbitrary or a canonical direction if it’s 1 \times 1 should be fine) but does make it much harder to check the validity.

Joe exhibited directly a construction of 2n-2 cells which must be covered by different sticks. This approach lives or dies by the quality of the written argument. It must look general, even though any diagram you draw must, almost by definition, correspond to some particular case. Alternatively, since the problem is set on a grid, the cells correspond naturally to edges of a bipartite graph, where classes correspond to rows and columns. The holes form a perfect matching on this bipartite graph. But, as Harvey observed, if you split the rows and columns in two, on either side of the relevant hole (or not in the 2+2 cases where the hole is at the border), you have a (2n-2)+(2n-2) bipartite graph, and a perfect matching here corresponds to a set of cells which must be covered by different sticks. This is an ingenious idea, and if you’ve recently met Hall’s Marriage Theorem, which gives a verifiable criterion for the existence of such a perfect matching, there are few better uses of your next ten minutes than to check whether Hall’s condition a) should hold; b) can be proven to hold in this setting.
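
If you’d like to experiment with Harvey’s idea, here is a rough Python sketch (mine): for a random sieve it builds the bipartite graph on row-segments and column-segments and finds a maximum matching with a basic augmenting-path routine. A perfect matching exhibits 2n-2 cells, no two of which can be covered by the same stick. This is only an illustration for small n, not a proof that Hall’s condition always holds.

```python
import random

def sieve_matching_size(n, seed=0):
    """For a random n x n sieve, build the bipartite graph on non-empty
    row-segments and column-segments (edge when two segments share a cell),
    and return the size of a maximum matching."""
    rng = random.Random(seed)
    hole_col = list(range(n))
    rng.shuffle(hole_col)                      # hole in row i sits in column hole_col[i]
    hole_row = {c: i for i, c in enumerate(hole_col)}

    row_segs, col_segs = [], []
    for i in range(n):
        for seg in ([(i, j) for j in range(hole_col[i])],
                    [(i, j) for j in range(hole_col[i] + 1, n)]):
            if seg:
                row_segs.append(seg)
    for j in range(n):
        for seg in ([(i, j) for i in range(hole_row[j])],
                    [(i, j) for i in range(hole_row[j] + 1, n)]):
            if seg:
                col_segs.append(seg)
    assert len(row_segs) == len(col_segs) == 2 * n - 2

    col_cells = [set(c) for c in col_segs]
    adj = [[ci for ci, cells in enumerate(col_cells) if cells & set(r)]
           for r in row_segs]

    match_of_col = [-1] * len(col_segs)        # Kuhn's augmenting-path algorithm

    def augment(r, seen):
        for ci in adj[r]:
            if ci not in seen:
                seen.add(ci)
                if match_of_col[ci] == -1 or augment(match_of_col[ci], seen):
                    match_of_col[ci] = r
                    return True
        return False

    return sum(augment(r, set()) for r in range(len(row_segs)))

for n in range(2, 9):
    print(n, sieve_matching_size(n, seed=n), 2 * n - 2)   # matching is perfect in these examples
```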

Antichains in the grid

In the previous post on this topic, we discussed Dilworth’s theorem on chains and antichains in a general partially ordered set. In particular, whatever the size of the largest antichain in a poset, it is possible to partition the poset into exactly that many chains. So for various specific posets, or the directed acyclic graphs associated to them, we are interested in the size of this largest antichain.

The following example turned out to be more interesting than I’d expected. At a conventional modern maths olympiad, there are typically three questions on each paper, and for reasons lost in the mists of time, each student receives an integer score between 0 and 7 per question. A natural question to ask is “how many students need to sit a paper before it’s guaranteed that one will score at least as highly as another on every question?” (I’m posing this as a straight combinatorial problem – the correlation between scores on different questions will be non-zero and presumably positive, but that is not relevant here.)

The set of outcomes is clearly \{0,1,\ldots,7\}^3, with the usual weak domination partial order inherited from \mathbb{R}^3. Then an antichain corresponds to a set of triples of scores such that no triple dominates another triple. So the answer to the question posed is: “the size of the largest antichain in this poset, plus one.”

In general, we might ask about \{1,2,\ldots,n\}^d, again with the weak domination ordering. This directed graph, which generalises the hypercube as well as our example, is called the grid.

Heuristics for the largest antichain

Retaining the language of test scores on multiple questions is helpful. In the previous post, we constructed a partition of the poset into antichains, indexed by the elements of some maximal chain, by starting with the sources, then looking at everything descended only from sources, and so on. (Recall that the statement that this is possible was referred to as the dual of Dilworth’s theorem.) In the grid, there’s a lot of symmetry (in particular under the mapping x\mapsto n+1-x in every coordinate), and so you end up with the same family of antichains whether you work upwards from the sources or downwards from the sinks. (Or vice versa depending on how you’ve oriented your diagram…) The layers of antichains also have a natural interpretation – each layer corresponds to a given total score. It’s clear a priori why each of these is an antichain. If A scores the same as B overall, but strictly more on the first question, this must be counterbalanced by a strictly lower score on another question.

So a natural guess for the largest antichain is the largest antichain corresponding to some fixed total score. Which total score should this be? It ought to be the middle layer, that is, total score \frac{(n+1)d}{2}, or the two values directly on either side if this isn’t an integer. My intuition was probabilistic. The uniform distribution on the grid is achieved by IID uniform distributions in each coordinate, which you can think of as a random walk, especially if you subtract off the mean first. It feels like any symmetric random walk should have mode zero or next-to-zero. Certainly this works asymptotically in a rescaled sense by CLT, and in a slightly stronger sense by local CLT, but we don’t really want asymptotics here.

When I started writing the previous paragraph, I assumed there would be a simple justification for the claim that the middle layer(s) was largest, whether by straight enumeration, or some combinatorial argument, or even generating functions. Perhaps there is, and I didn’t spot it. Induction on d definitely works though, with a slightly stronger hypothesis that the layer sizes are symmetric around the median, and monotone on either side of the median. The details are simple and not especially interesting, so I won’t go into them.
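
For what it’s worth, the layer sizes are cheap to compute for small parameters, and the symmetry and unimodality claims can be checked mechanically, for example:

```python
from itertools import product
from collections import Counter

def layer_sizes(n, d):
    """Sizes of the layers (fixed total score) of the grid {1,...,n}^d."""
    counts = Counter(sum(v) for v in product(range(1, n + 1), repeat=d))
    return [counts[s] for s in range(d, n * d + 1)]

for n, d in [(8, 3), (5, 4), (3, 5), (4, 4)]:
    sizes = layer_sizes(n, d)
    mid = (len(sizes) - 1) // 2
    assert sizes == sizes[::-1]                                   # symmetric about the middle
    assert all(sizes[i] <= sizes[i + 1] for i in range(mid))      # monotone up to the middle
    assert max(sizes) == sizes[mid]                               # middle layer(s) largest
```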

From now on, the hypothesis is that this middle layer of the grid is the largest antichain. Why shouldn’t it, for example, be some mixture of middle-ish layers? (*) Well, heuristically, any score sequence in one layer removes several possibilities from a directly adjacent layer, and it seems unlikely that this effect is going to cancel out if you take some intermediate number of score sequences in the first layer. Also, the layers get smaller as you go away from the middle, so because of the large amount of symmetry (coordinates are exchangeable etc), it feels reasonable that there should be surjections between layers in the outward direction from the middle. The union of all these surjections gives a decomposition into chains.

This result is in fact true, and its proof by Bollobas and Leader, using shadows and compression, can be found in the very readable Sections 0 and 1 of [1].

Most of the key ideas behind a compression argument are present in the case n=2, for which some notes by Leader can be found here, starting with Proof 1 of Theorem 3, the approach of which is developed over subsequent sections. We treat the case n=2 below, but focus on a particularly slick approach that does not generalise as successfully. We also return to the original case d=3 without using anything especially exotic.

Largest antichain in the hypercube – Sperner’s Theorem

The hypercube \{0,1\}^d is the classical example. There is a natural correspondence between the vertices of the hypercube, and subsets of [d]. The ordering on the hypercube corresponds to the ordering given by containment on \mathcal{P}([d]). Almost by definition, the k-th layer corresponds to subsets of size k, and thus includes \binom{d}{k} subsets. The claim is that the size of the largest antichain is \binom{d}{\lfloor d/2 \rfloor}, corresponding to the middle layer if d is even, and one of the two middle layers if d is odd. This result is true, and is called Sperner’s theorem.
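
For very small d, Sperner’s theorem can even be confirmed by exhaustive search, which is a useful sanity check on the statement (brute-force code of mine, nothing clever):

```python
from itertools import combinations
from math import comb

def largest_antichain(d):
    """Largest antichain in the power set of [d], by brute force (sensible only for d <= 4)."""
    subsets = [frozenset(c) for k in range(d + 1) for c in combinations(range(d), k)]
    for size in range(len(subsets), 0, -1):          # try the largest families first
        for family in combinations(subsets, size):
            if all(not (a < b or b < a) for a, b in combinations(family, 2)):
                return size
    return 0

for d in (1, 2, 3, 4):
    assert largest_antichain(d) == comb(d, d // 2)   # Sperner's theorem for tiny d
```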

I know a few proofs of this from the Combinatorics course I attended in my final year at Cambridge. As explained, I’m mostly going to ignore the arguments using compression and shadows, even though these generalise better.

As in the previous post, one approach is to exhibit a covering family of exactly this number of disjoint chains. Indeed, this can be done layer by layer, working outwards from the middle layer(s). The tool here is Hall’s Marriage Theorem, and we verify the relevant condition by double-counting. Probably the hardest case is demonstrating the existence of a matching between the middle pair of layers when d is odd.

Take d odd, and let d':= \lfloor d/2\rfloor. Now consider any subset S of the d’-th layer \binom{[d]}{d'}. We now let the upper shadow of S be

\partial^+(S):= \{A\in \binom{[d]}{d'+1}\,:\, \exists B\in S, B\subset A\},

the sets in the (d’+1)-th layer which lie above some set in S. To apply Hall’s Marriage theorem, we have to show that |\partial^+(S)|\ge |S| for every choice of S.

We double-count the number of edges in the hypercube from S to \partial^+(S). Firstly, for every element B\in S, there are exactly d’+1 relevant edges, one for each element of [d]\setminus B. Secondly, for every element A\in\partial^+(S), there are exactly d’+1 edges down to elements of \binom{[d]}{d'}, and so in particular there are at most d’+1 edges to elements of S. Thus

(d'+1) |S|=|\text{edges }S\leftrightarrow\partial^+(S)| \le (d'+1) |\partial^+(S)|,

which is exactly what we require for Hall’s MT. The argument for the matching between other layers is the same, with a bit more notation, but also more flexibility, since it isn’t a perfect matching.
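
The double-count is easy to verify empirically. The sketch below (illustration only) samples random subsets S of the middle layer for d=7 and checks the exact edge counts and the resulting Hall condition.

```python
from itertools import combinations
import random

d = 7
dp = d // 2                                        # d' in the notation above
layer_lo = [frozenset(c) for c in combinations(range(d), dp)]
layer_hi = [frozenset(c) for c in combinations(range(d), dp + 1)]

rng = random.Random(0)
for _ in range(200):
    S = rng.sample(layer_lo, rng.randint(1, len(layer_lo)))
    shadow = [A for A in layer_hi if any(B < A for B in S)]      # upper shadow of S
    edges = sum(1 for B in S for A in shadow if B < A)
    assert edges == (dp + 1) * len(S)              # each B in S has d'+1 upward edges
    assert edges <= (dp + 1) * len(shadow)         # each A above has d'+1 downward edges
    assert len(shadow) >= len(S)                   # Hall's condition
```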

The second proof looks at maximal chains. Recall, in this context, a maximal chain is a sequence \mathcal{C}=B_0\subset B_1\subset\ldots\subset B_d where each B_k\in \binom{[d]}{k}, that is, |B_k|=k. We now consider some largest-possible antichain \mathcal{A}, and count how many maximal chains include an element A\in\mathcal{A}. If |A|=k, it’s easy to convince yourself that there are k!\,(d-k)! = \frac{d!}{\binom{d}{k}} such maximal chains. However, given A\ne A'\in\mathcal{A}, the set of maximal chains containing A and the set of maximal chains containing A’ are disjoint, since \mathcal{A} is an antichain. From this, we obtain

\sum_{A\in\mathcal{A}} \frac{d!}{\binom{d}{|A|}} \le d!, \quad \text{equivalently} \quad \sum_{A\in\mathcal{A}} \binom{d}{|A|}^{-1} \le 1. (**)

Normally after a change of notation, so that we are counting the size of the intersection of the antichain with each layer, this is called the LYM inequality after Lubell, Yamamoto and Meshalkin. The heuristic is that the sum of the proportions of layers taken up by the antichain is at most one. This is essentially the same as earlier at (*). This argument can also be phrased probabilistically, by choosing a *random* maximal chain, and considering the probability that it intersects the proposed largest antichain, which is, naturally, at most one. Of course, the content is the same as this deterministic combinatorial argument.

Either way, from (**), the statement of Sperner’s theorem follows rapidly: since \binom{d}{|A|}\le \binom{d}{\lfloor d/2 \rfloor} for all A, every summand in the second form of (**) is at least 1\big/\binom{d}{\lfloor d/2\rfloor}, and so |\mathcal{A}|\le \binom{d}{\lfloor d/2\rfloor}.
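
The LYM inequality is also pleasant to see numerically: the sketch below (mine) builds maximal antichains greedily from random orderings of the subsets and checks both the inequality and the Sperner bound.

```python
from itertools import combinations
from math import comb
import random

def random_maximal_antichain(d, seed):
    """Greedy maximal antichain in the power set of [d], scanning subsets in random order."""
    rng = random.Random(seed)
    subsets = [frozenset(c) for k in range(d + 1) for c in combinations(range(d), k)]
    rng.shuffle(subsets)
    antichain = []
    for s in subsets:
        if all(not (s <= a or a <= s) for a in antichain):   # incomparable to everything so far
            antichain.append(s)
    return antichain

d = 8
for seed in range(50):
    A = random_maximal_antichain(d, seed)
    assert sum(1 / comb(d, len(a)) for a in A) <= 1 + 1e-12   # LYM inequality
    assert len(A) <= comb(d, d // 2)                          # Sperner's bound
```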

Largest antichain in the general grid

Instead of attempting a proof or even a digest of the argument in the general case, I’ll give a brief outline of why the previous arguments don’t transfer immediately. It’s pretty much the same reason for both approaches. In the hypercube, there is a lot of symmetry within each layer. Indeed, almost by definition, any vertex in the k-th layer can be obtained from any other vertex in the k-th layer just by permuting the labels (or permuting the coordinates if thinking as a vector).

The hypercube ‘looks the same’ from every vertex, but that is not true of the grid. Consider for clarity the n=8, d=3 case we discussed right at the beginning, and compare the scores (7,0,0) and (2,2,3). The number of maximal chains through (7,0,0) is \binom{14}{7}, while the number of maximal chains through (2,2,3) is \binom{7}{2,2,3}\binom{14}{4,5,5}, and the latter is a lot larger, which means any attempt to use the second argument is going to be tricky, or at least require an extra layer of detail. Indeed, exactly the same problem arises when we try and use Hall’s condition to construct the optimal chain covering directly. In the double-counting section, it’s a lot more complicated than just multiplying by d’+1, as was the case in the middle of the hypercube.

Largest antichain in the d=3 grid

We can, however, do the d=3 case. As we will see, the main reason we can do the d=3 case is that the d=2 case is very tractable, and we have lots of choices for the chain coverings, and can choose one which is well-suited to the move to d=3. Indeed, when I set this problem to some students, an explicit listing of a maximal chain covering was the approach some of them went for, and the construction wasn’t too horrible to state.

[Another factor is that it is computationally feasible to calculate the size of the middle layer, which is much more annoying for d>3.]

[I’m redefining the grid here as \{0,1,\ldots,n-1\}^d rather than \{1,2,\ldots,n\}^d.]

The case distinction between n even and n odd is going to make both the calculation and the argument annoying, so I’m only going to treat the even case, since n=8 was the original problem posed. I should be honest and confess that I haven’t checked the n odd case, but I assume it’s similar.

So when n is even, there are two middle layers, namely \frac{3n}{2}-2 and \frac{3n}{2}-1 (corresponding to total scores 10 and 11 in the original problem). I calculated the number of elements in the \frac{3n}{2}-1 layer by splitting based on the value of the first coordinate. I found it helpful to decompose the resulting sum as

\sum_{k=0}^{n-1} = \sum_{k=0}^{\frac{n}{2}-1} + \sum_{k=\frac{n}{2}}^{n-1},

based on whether there is an upper bound, or a lower bound on the value taken by the second coordinate. This is not very interesting, and I obtained the answer \frac{3n^2}{4}, and of course this is an integer, since n is even.
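
This count is easy to confirm by brute force; the value for n=8 is also the size of the claimed largest antichain in the original scores problem.

```python
from itertools import product

def middle_layer_size(n):
    """Points of {0,...,n-1}^3 with coordinate sum 3n/2 - 1 (n even)."""
    target = 3 * n // 2 - 1
    return sum(1 for v in product(range(n), repeat=3) if sum(v) == target)

for n in range(2, 13, 2):
    assert middle_layer_size(n) == 3 * n * n // 4
print(middle_layer_size(8))   # 48; granted the matching upper bound below, 49 students force a comparable pair
```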

Now to show that any antichain has size at most \frac{3n^2}{4}. Here we use our good control on the chain coverings in the case d=2. We note that there is a chain covering of the (n,d=2) grid where the chains have 2n-1, 2n-3,…, 3, 1 elements (%). We get this by starting with a maximal chain, then taking a maximal chain on what remains etc. It’s pretty much the first thing you’re likely to try.
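
One explicit choice, with the same chain sizes as the greedy description above (though I haven’t checked it is literally what the greedy procedure produces), is to let the k-th chain run along row k and then along the rest of column n-1-k. The sketch below checks that this really is a partition into chains of sizes 2n-1, 2n-3, \ldots, 1.

```python
def chain_cover_2d(n):
    """A chain covering of {0,...,n-1}^2 with chain sizes 2n-1, 2n-3, ..., 1:
    chain k runs along row k and then along the rest of column n-1-k."""
    return [[(k, j) for j in range(n - k)] + [(i, n - 1 - k) for i in range(k + 1, n)]
            for k in range(n)]

def is_chain(cells):
    """Consecutive cells dominate coordinatewise, so the whole list is a chain."""
    return all(a[0] <= b[0] and a[1] <= b[1] for a, b in zip(cells, cells[1:]))

for n in range(2, 9):
    chains = chain_cover_2d(n)
    assert sorted(len(c) for c in chains) == list(range(1, 2 * n, 2))   # sizes 1, 3, ..., 2n-1
    assert all(is_chain(c) for c in chains)
    covered = sorted(cell for c in chains for cell in c)
    assert covered == [(i, j) for i in range(n) for j in range(n)]      # every cell exactly once
```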

Consider an antichain with size A in the (n,d=3) grid, and project into the second and third coordinates. The image points are distinct, because two elements with the same image differ only in the first coordinate, and so would form a chain. So we have A points in the (n,d=2) grid. How many of them can lie in each chain of the decomposition (%)? Well, if there are more than n in any chain of (%), then two must have been mapped from elements of the (n,d=3) grid with the same first coordinate, and so satisfy a containment relation, contradicting the antichain property. So in fact there are at most n image points in any of the chains of (%). So we now have a bound of n^2. But of course, some of the chains in (%) have length less than n, so we are throwing away information. Indeed, the number of image points in a given chain is at most

\min(n,\text{length of chain}),

and so the number of image points in total is bounded by

n+\ldots+n+ (n-1)+(n-3)+\ldots+1,

where there are n/2 copies of n in the first half of the sum. Evaluating this sum gives \frac{3n^2}{4}, exactly as we wanted.

References

[1] – Bollobas, Leader (1991) – Compressions and Isoperimetric Inequalities. Available open-access here.

Hall’s Marriage Theorem

Hall’s Marriage Theorem gives conditions on when the vertices of a bipartite graph can be split into pairs of vertices corresponding to disjoint edges such that every vertex in the smaller class is accounted for. Such a set of edges is called a matching. If the sizes of the vertex classes are equal, then the matching naturally induces a bijection between the classes, and such a matching is called a perfect matching.

The number of possible perfect matchings of K_{n,n} is n!, which is a lot to check. Since it’s useful to have bijections, it’s useful to have matchings, so we would like a simple way to check whether a bipartite graph has a matching. Hall’s Marriage Theorem gives a way to reduce the number of things to check to 2^n, which is still large. However, much more importantly, the condition for the existence of a matching has a form which is much easier to check in many applications. The statement is as follows:

Given bipartite graph G with vertex classes X and Y, there is a matching of X into Y precisely when for every subset A\subset X, |\Gamma(A)|\ge |A|, where \Gamma(A) is the set of vertices joined to some vertex in A, called the neighbourhood of A.

Taking A=X, it is clear that |Y|\ge |X| is a necessary condition for the result to hold, unsurprisingly. Perhaps the most elementary standard proof proceeds by induction on the size of X, taking the smallest A to give a contradiction, then using the induction hypothesis to lift smaller matchings up to the original graph. This lifting is based on the idea that a subset relation between sets induces a subset relation between their neighbourhoods.
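
In code, the ‘2^n things to check’ version of the condition looks like the following sketch (the function name and toy example are mine, purely for illustration); when the condition fails it returns a violating set A.

```python
from itertools import combinations

def hall_condition(X, edges):
    """Check |Gamma(A)| >= |A| for every A subset of X by brute force over all 2^|X| subsets.
    Returns (True, None) if the condition holds, else (False, a violating A)."""
    nbrs = {x: {y for (a, y) in edges if a == x} for x in X}
    for r in range(1, len(X) + 1):
        for A in combinations(X, r):
            gamma = set().union(*(nbrs[x] for x in A))
            if len(gamma) < len(A):
                return False, set(A)
    return True, None

# toy example: x1, x2, x3 all only know y1 and y2, so no matching of X exists
edges = [('x1', 'y1'), ('x1', 'y2'), ('x2', 'y1'), ('x2', 'y2'), ('x3', 'y1'), ('x3', 'y2')]
print(hall_condition(['x1', 'x2', 'x3'], edges))   # False, witnessed by A = {x1, x2, x3}
```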

In this post, I want to consider this theorem as a special case of the Max-Flow-Min-Cut Theorem, as this will support useful generalisations much more easily. The latter theorem is a bit complicated notationally to set up, and I don’t want it to turn into the main point of this post, so I will summarise. The Wikipedia article, and lots of sets of lecture notes are excellent sources of more detailed definitions.

The setting is a weakly-connected directed graph, with two identified vertices, the source, with zero indegree, and the sink, with zero outdegree. Every other vertex lies on a path (not necessarily unique) between the source and the sink. Each edge has a positive capacity, which should be thought of as the maximum volume allowed to flow down the edge. A flow is a way of assigning values to each edge so that they do not exceed the capacity, and there is volume conservation at each interior vertex. That is, the flow into the vertex is equal to flow out of the vertex. The value of the flow is the sum of the flows out of the source, which is necessarily equal to the sum of flows into the sink.

A cut is a partition of the vertices into two classes, with the source in one and the sink in the other. The value of a cut is then the sum of the capacities of any edges going from the class containing the source to the class containing the sink. In most examples, the classes will be increasing, in the sense that any path from source to sink changes class exactly once.

The Max-Flow-Min-Cut Theorem asserts that the maximum value of a flow through the system is equal to the minimum value of a cut. The proof is elementary, though it relies on defining a sensible algorithm to construct a minimal cut from a maximal flow, which is not going to be interesting to explain without more precise notation available.

First we explain why Hall’s Marriage Theorem is a special case of this result. Suppose we are given the setup of HMT, with edges directed from X to Y with infinite capacity. We add edges of capacity 1 from some new vertex x_0 to each vertex of X, and from each vertex of Y to a new vertex y_0. The aim is to give necessary and sufficient conditions for the existence of a flow of value |X|. Note that one direction of HMT is genuinely trivial: if there is a matching, then the neighbourhood size condition must hold. We focus on the other direction. If the maximum flow is less than |X|, then by Max-Flow-Min-Cut there is a cut of value less than |X| as well. We can parameterise a cut by the class of vertices containing the source, say S. Let A=S\cap X. If \Gamma(A)\not\subset S, then there will be an infinite capacity edge in the cut. So if we are looking for a minimal cut, this should not happen, hence \Gamma(A)\subset S if S is minimal. Moreover, for a minimal cut we may as well take S\cap Y=\Gamma(A), since enlarging S\cap Y beyond \Gamma(A) only adds extra unit-capacity edges into y_0 to the cut. The value of the cut is then

|\Gamma(A)|+|X|-|A|, which is at least |X| whenever the neighbourhood size condition |\Gamma(A)|\ge |A| holds. So under Hall’s condition every cut, and hence the maximum flow, has value at least |X|, and an integer-valued flow of value |X| is exactly a matching of X into Y. Note we can use the same method with the original edges having capacity one, but we have to track slightly more quantities.
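
Here is a minimal sketch of this reduction using networkx (assuming it is available); the ‘infinite’ capacities are simulated by |X|+1, which a minimal cut never wants to use.

```python
import networkx as nx

def matching_exists(X, Y, edges):
    """HMT via Max-Flow-Min-Cut: unit capacities x0 -> X and Y -> y0,
    effectively infinite capacity on the X -> Y edges; a matching of X into Y
    exists iff the maximum flow value equals |X|."""
    G = nx.DiGraph()
    for x in X:
        G.add_edge('x0', x, capacity=1)
    for y in Y:
        G.add_edge(y, 'y0', capacity=1)
    for x, y in edges:
        G.add_edge(x, y, capacity=len(X) + 1)
    value, _ = nx.maximum_flow(G, 'x0', 'y0')
    return value == len(X)

X, Y = ['x1', 'x2', 'x3'], ['y1', 'y2', 'y3']
print(matching_exists(X, Y, [('x1', 'y1'), ('x2', 'y1'), ('x2', 'y2'), ('x3', 'y3')]))  # True
print(matching_exists(X, Y, [('x1', 'y1'), ('x2', 'y1'), ('x3', 'y1')]))                # False
```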

This topic came up because I’ve been thinking about fragmentation chains over this holiday. I have a specific example concerning forests of unrooted trees in mind, but won’t go into details right now. The idea is that we often have distributions governing random partitions of some kind, let’s say of [n]. Conditioning on having a given number of classes might give a family of distributions P_{n,k} for the partitions of [n] into k parts. We would be interested to know how easy it is to couple these distributions in a nice way. One way would be via a coalescence or fragmentation process. In the latter, we start with [n] itself, then at each step, split one of the parts into two according to some (random, Markovian) rule. We are interested in finding out whether such a fragmentation process exists for a given distribution.

It suffices to split the problem into single steps. Can we get from P_{n,k} to P_{n,k+1}?

The point I want to make is that this is just a version of Hall’s Marriage Theorem again, at least in terms of proof method. We can take X to be the set of partitions of [n] into k parts, and Y the set of partitions into (k+1) parts. Then we add a directed edge with infinite capacity between x\in X and y\in Y if y can be constructed from x by breaking a part into two. Finally, we connect a fresh vertex x_0 to each vertex in X, only now we insist that the capacity of the edge to x is equal to P_{n,k}(x), and similarly an edge from each y\in Y to y_0 with capacity equal to P_{n,k+1}(y). The existence of a fragmentation chain over this step is then equivalent to the existence of a flow of value 1 in this network.
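
As a toy illustration of this construction (my code, with uniform distributions P_{n,k} and P_{n,k+1} chosen purely for concreteness), here is the single step from partitions of [5] into 2 parts to partitions into 3 parts, checked with networkx’s max-flow:

```python
import networkx as nx

def set_partitions(elems, k):
    """All partitions of the list elems into exactly k non-empty blocks."""
    if k == 0:
        if not elems:
            yield ()
        return
    if len(elems) < k:
        return
    first, rest = elems[0], elems[1:]
    for p in set_partitions(rest, k - 1):                 # first element in its own block
        yield (frozenset({first}),) + p
    for p in set_partitions(rest, k):                      # or first joins an existing block
        for i in range(len(p)):
            yield p[:i] + (p[i] | {first},) + p[i + 1:]

def one_split(x, y):
    """True if y (k+1 blocks) is obtained from x (k blocks) by splitting one block in two."""
    return all(any(b <= a for a in x) for b in y)

n, k = 5, 2
X = list({frozenset(p) for p in set_partitions(list(range(n)), k)})
Y = list({frozenset(p) for p in set_partitions(list(range(n)), k + 1)})

G = nx.DiGraph()
for x in X:
    G.add_edge('x0', x, capacity=1.0 / len(X))             # P_{n,k}: uniform, for illustration
for y in Y:
    G.add_edge(y, 'y0', capacity=1.0 / len(Y))             # P_{n,k+1}: uniform
for x in X:
    for y in Y:
        if one_split(x, y):
            G.add_edge(x, y, capacity=2.0)                  # effectively infinite

value, _ = nx.maximum_flow(G, 'x0', 'y0')
print(abs(value - 1.0) < 1e-9)   # True here: a one-step fragmentation coupling exists
```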

Although in many cases this remains challenging to work with (something I will perhaps explore in a future post), it is nonetheless a useful idea to have in mind when it comes to deciding whether such a construction is possible for specific examples.