As part of Part III, instead of sitting an extra exam paper I am writing an essay. I have chosen the topic of ‘Multiplicative Coalescence’. I want to avoid contravening the plagiarism rules, which don’t allow you to quote your own words without a proper citation (which I figure is tricky on a blog), nor the open publishing of anything you intend to submit. So, just to be absolutely sure, I’m going to suppress this series of posts until after May 4th, when everything has to be handed in.

———–

**Informal Description**

*Coalescence* refers to a process in which particles join together over time. An example might be islands of foam on the surface of a cup of coffee. When two clumps meet, they join, and will never split. In this example, a model would need to take into account the shape of all the islands, their positions, their velocities, and boundary properties. To make things tractable, we need to distance ourselves from the idea that particles merge through collisions, which are highly physical and complicated, and instead just consider that they merge.

**Description of the Model**

When two particles coalesce, it is natural to assume that mass is conserved, as this will be necessary in any physical application. With this in mind, it makes sense to set up the entire model using only the masses of particles. Define the *kernel* *K(x,y)*, which describes the relative rate or likelihood of the coalescence *{x,y} -> x+y*. This has a different precise meaning in different contexts. Effectively, we are making a mean-field assumption that all the complications of a physical model as described above can be absorbed into this coalescent kernel, either because the number of particles is large, or because the other effects are small.

When there is initially a finite number of particles, the process is stochastic. Coalescences almost surely happen one at a time, and so we can view the process as a continuous-time Markov chain whose state space is the set of relevant partitions of the total mass present. The transition rate *p(A,B)* is given by *K(x,y)* when the coalescence *{x,y} -> x+y* transforms partition *A* into *B*, and 0 otherwise. One observation: the process recording the number of *{x,y} -> x+y* coalescences is an inhomogeneous Poisson process with local intensity *n(x,t)n(y,t)K(x,y)*, where *n(x,t)* is the number of particles with mass *x* at time *t*.
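The finite Markov chain above can be simulated directly: each unordered pair *{x,y}* coalesces at rate *K(x,y)*, so the next jump time is exponential with the total rate, and the merging pair is chosen with probability proportional to its rate. A minimal Gillespie-style sketch (the function name and loop structure are illustrative choices, not from the essay):

```python
import random

def simulate_coalescent(masses, K, t_max, rng=random.Random(0)):
    """Simulate the finite coalescent Markov chain up to time t_max.

    masses: list of particle masses; K: kernel function.
    Each unordered pair {x, y} coalesces at rate K(x, y).
    Returns the list of (time, sorted partition) states visited.
    """
    t = 0.0
    masses = list(masses)
    history = [(t, sorted(masses))]
    while len(masses) > 1:
        # total jump rate = sum of K over all unordered pairs
        pairs = [(i, j) for i in range(len(masses)) for j in range(i + 1, len(masses))]
        rates = [K(masses[i], masses[j]) for i, j in pairs]
        total = sum(rates)
        t += rng.expovariate(total)          # exponential holding time
        if t > t_max:
            break
        # pick the coalescing pair with probability proportional to its rate
        u = rng.random() * total
        acc = 0.0
        for (i, j), r in zip(pairs, rates):
            acc += r
            if u <= acc:
                break
        x, y = masses[i], masses[j]
        masses = [m for k, m in enumerate(masses) if k not in (i, j)]
        masses.append(x + y)                 # the coalescence {x, y} -> x + y
        history.append((t, sorted(masses)))
    return history

# multiplicative kernel K(x, y) = xy, starting from ten particles of mass 1
hist = simulate_coalescent([1] * 10, lambda x, y: x * y, t_max=10.0)
```

Note that mass conservation is automatic: every transition replaces *{x,y}* by *x+y*, so each recorded partition sums to the initial total mass.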

This motivates the move to an infinite-volume setting. Suppose there are infinitely many particles, so that coalescences occur continuously. The rate of *{x,y} -> x+y* coalescences is still *n(x,t)n(y,t)K(x,y)*, but now *n(x,t)* specifies the density of particles with mass *x* at time *t*. Furthermore, because of the continuum framework, this rate is now *deterministic* rather than *stochastic*. This is extremely important, as by removing the probability from a probabilistic model, it can be treated as a large but structurally simple system of ODEs.
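In the continuous-mass setting, this deterministic system is usually written as Smoluchowski's coagulation equation for the density *n(x,t)* (conventions for the factor of 1/2 vary):

```latex
\frac{\partial}{\partial t} n(x,t)
  = \frac{1}{2}\int_0^{x} K(y,\,x-y)\,n(y,t)\,n(x-y,t)\,\mathrm{d}y
  \;-\; n(x,t)\int_0^{\infty} K(x,y)\,n(y,t)\,\mathrm{d}y
```

The first term counts pairs *{y, x-y}* coalescing to create a particle of mass *x* (the 1/2 avoids double counting each unordered pair), while the second counts mass-*x* particles lost by coalescing with anything else.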

**Two Remarks**

1) Once this introduction is finished, we shall be bringing our focus onto multiplicative coalescence, where *K(x,y) = xy*. In particular, this is a homogeneous function, as are the other canonical kernels. This means that working with *K(x,y) = cxy* is the same as applying a constant-factor time-change to the process with *K(x,y) = xy*. Similarly, it is not important how the density *n(x,t)* is scaled, as this too can be absorbed into a time-change. In some contexts it will be natural and useful to demand that the total density be 1, but this will not always be possible. In general it is convenient to absorb as much as possible into the time parameter, particularly initial conditions, as will be discussed.
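Both time-change claims in the remark above can be checked in a line each. Writing the evolution abstractly as *∂t n = Q_K(n,n)*, where *Q_K* is the bilinear coalescence operator built from the kernel (this abstract notation is my shorthand, not from the essay):

```latex
\partial_t n = Q_K(n, n)
\quad\Longrightarrow\quad
\begin{cases}
\tilde n(x,t) := n(x, ct) & \text{solves } \partial_t \tilde n = Q_{cK}(\tilde n, \tilde n),\\[4pt]
\hat n(x,t) := \lambda\, n(x, \lambda t) & \text{solves } \partial_t \hat n = Q_{K}(\hat n, \hat n),
\end{cases}
```

since *Q_K* is linear in *K* and quadratic in *n*. So multiplying the kernel by *c*, or the density by *λ*, simply speeds up time by that factor.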

2) Working with an infinite volume of particles means that mass is no longer constrained to finitely many values. Generally, the masses are assumed to be either *discrete*, taking values in the positive integers, or *continuous*, taking values in the positive reals. In the continuous case, the rate of coalescences between particles with masses in *(x, x+dx)* and *(y, y+dy)* is *n(x,t)n(y,t)K(x,y)dxdy*. The main difference between the two will arise when we try to view the process as a limit of finite processes.
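In the discrete case, the densities *n(x,t)* for integer masses satisfy a countable system of ODEs, which can be integrated numerically once the masses are truncated. A minimal forward-Euler sketch with the multiplicative kernel, starting from unit density of mass-1 particles (the truncation level *M*, the step size, and the function name are illustrative choices):

```python
def smoluchowski_step(n, K, dt):
    """One forward-Euler step of the truncated discrete Smoluchowski equations.

    n[x] is the density of particles of integer mass x (index 0 unused).
    Gain: pairs {y, x-y} merging to mass x (the 1/2 avoids double counting);
    loss: mass-x particles merging with anything up to the truncation M.
    """
    M = len(n) - 1
    new = n[:]
    for x in range(1, M + 1):
        gain = 0.5 * sum(K(y, x - y) * n[y] * n[x - y] for y in range(1, x))
        loss = n[x] * sum(K(x, y) * n[y] for y in range(1, M + 1))
        new[x] = n[x] + dt * (gain - loss)
    return new

# multiplicative kernel, monodisperse start n(1, 0) = 1, masses truncated at M
M, dt = 50, 0.001
n = [0.0] * (M + 1)
n[1] = 1.0
for _ in range(500):  # integrate to t = 0.5
    n = smoluchowski_step(n, lambda x, y: x * y, dt)
```

Truncating at mass *M* discards coalescences that would produce mass above *M*, so the total mass Σ *x·n(x)* is conserved only up to the flux past the truncation, which is negligible for short times.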