Coalescence 2: Marcus-Lushnikov Processes and a neat equivalence

Last time, we discussed the Smoluchowski equations, which define an infinite volume mean-field model for coalescence. Now we consider a stochastic coalescent model, the Marcus-Lushnikov process. Here, we start with a finite collection of finite mass particles, with total mass N. Then we define a process by demanding that any given pair of particles with masses x and y coalesces into a single particle with mass x + y at rate K(x,y)/N.

This can be formalised as a continuous-time Markov chain in different ways. The underlying state space consists of the set of unordered multisets of positive integers which sum to N. But rather than considering the configurations of masses themselves, it is sometimes more convenient to take the state space to be:

\{n=(n_1,\ldots,n_N): \sum xn_x=N\}.

Here n_x records the number of particles with mass x, and the constraint enforces conservation of mass. Writing out a transition of this Markov chain is notationally annoying, but the transition rates are simple. The rate of transition from (n_1,\ldots,n_N) to the state where a particle of mass x and a particle of mass y have coalesced is given by

N^{-1}K(x,y)n_xn_y.
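These rates can be simulated directly with a Gillespie-style scheme: at each step, compute the total coalescence rate, draw an exponential holding time, and pick a pair of particles proportionally to K(x,y)/N. The sketch below (function names are my own; a naive O(m^2)-per-step implementation, fine for small N) starts from monodisperse initial conditions and returns the counts n_x.

```python
import random
from collections import Counter

def marcus_lushnikov(N, K, t_max, rng=random):
    """Gillespie simulation of the Marcus-Lushnikov process started from
    N unit-mass particles: each unordered pair of particles with masses
    (x, y) coalesces at rate K(x, y) / N. Returns the counts n_x at the
    first event time exceeding t_max (or when one particle remains)."""
    masses = [1] * N            # current list of particle masses
    t = 0.0
    while len(masses) > 1:
        # total event rate: sum of K(x, y)/N over unordered pairs
        total = sum(K(masses[i], masses[j]) / N
                    for i in range(len(masses))
                    for j in range(i + 1, len(masses)))
        t += rng.expovariate(total)   # exponential holding time
        if t > t_max:
            break
        # choose a pair with probability proportional to its rate
        r = rng.random() * total
        acc = 0.0
        for i in range(len(masses)):
            for j in range(i + 1, len(masses)):
                acc += K(masses[i], masses[j]) / N
                if acc >= r:
                    x, y = masses[i], masses[j]
                    del masses[j], masses[i]   # j > i, so delete j first
                    masses.append(x + y)       # coalesced particle
                    break
            else:
                continue
            break
    return Counter(masses)      # maps mass x to the count n_x
```

Total mass is conserved at every step, so \sum_x x n_x = N throughout, matching the constraint on the state space above.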

Therefore, the Marcus-Lushnikov process ML^{(N)}_t is the process of component sizes in the finite particle setting of coalescence. The existence and mode of convergence of these processes to the deterministic solutions to Smoluchowski’s equation are of particular interest in many models.

As discussed in the previous post, there is an obvious heuristic link between the multiplicative coalescent and random graph processes. An easy but interesting explicit equivalence can be drawn between the Marcus-Lushnikov process with kernel K(x,y) = xy and monodisperse (that is, starting with unit mass particles) initial conditions and a specific random graph structure.

Proposition: The process ML^{(N)}_t with the conditions above is equivalent to the process of component sizes in \mathcal{G}(N,1-e^{-t/N}).

Proof: First, observe that we can couple the latter process in the obvious way by associating a U[0,1] random variable U_e with each of the \binom{N}{2} edges. The edge e is included at time t iff U_e\leq 1-e^{-t/N}. In this process, the appearances of different edges are independent and

\mathbb{P}(\text{edge } e \text{ appears after time } t)=e^{-t/N}.

Therefore the waiting times for the edges to appear are independent \exp(1/N) RVs. In particular, the edge process is memoryless, hence the component size process is Markov. Then it is genuinely obvious that the rate at which an edge joining two given distinct components of sizes x and y appears is N^{-1}xy, since there are xy potential edges between them, each appearing at rate 1/N. So the evolution is exactly the same as the Marcus-Lushnikov process, and the initial configuration is monodisperse.
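The coupling in the proof translates directly into code: give each potential edge an independent \exp(1/N) appearance time and keep those that have appeared by time t, then read off component sizes. A minimal sketch (the function name is my own; union-find is just one convenient way to extract components):

```python
import random
from collections import Counter

def graph_component_sizes(N, t, rng=random):
    """Component-size counts of G(N, 1 - e^{-t/N}), built via the
    coupling in the proof: each of the binom(N, 2) edges carries an
    independent Exp(rate 1/N) appearance time, and we keep exactly the
    edges that have appeared by time t."""
    parent = list(range(N))                  # union-find over vertices

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]    # path halving
            v = parent[v]
        return v

    for i in range(N):
        for j in range(i + 1, N):
            if rng.expovariate(1.0 / N) <= t:   # edge present by time t
                parent[find(i)] = find(j)       # merge the two components
    # maps component size x to the number of components of that size,
    # i.e. the same statistic n_x as the Marcus-Lushnikov state
    return Counter(Counter(find(v) for v in range(N)).values())
```

By the proposition, for the multiplicative kernel K(x,y) = xy this has the same law at each fixed t as the counts n_x of ML^{(N)}_t with monodisperse initial conditions.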
