Loss Networks and Erlang’s Fixed Point

Loss Networks

In Erlang’s telephone line model discussed in the previous post, we considered users competing for a finite set of resources. When insufficient resources are available, the call is lost. A loss network generalises this situation to more complicated resource configurations. We think of links 1, …, J, each with some integer capacity c_j. Each incoming call requires one unit of capacity on each link in some subset of the links, and lasts for a time distributed as an exponential random variable with parameter 1, independently of everything else in the model. We call this subset the route, and denote by A_{jr} the incidence matrix of links on routes. Calls arrive as PP(\nu_r) independently for each route r: no queueing occurs – a call is lost if some link on its route is operating at full capacity. We call the probability of this event L_r, the loss probability. Observe that (n_r), the number of calls on each route r, is a Markov chain on the truncated state space \{n: An\leq c\}.

By checking the detailed balance equations, it is clear that the equilibrium distribution (ED) for this Markov chain is proportional to the ED for the Markov chain without the capacity constraint, with state space restricted to the truncated space. But without capacity constraints, the system is a linear migration process, for which we derived the form of the ED in the previous post. If we write H(c)=\mathbb{P}(An\leq c) under the linear migration process, we can compute the acceptance probability for the finite-capacity system as:

1-L_r=\frac{H(c-Ae_r)}{H(c)}
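To make this formula concrete, here is a brute-force sketch on a tiny hypothetical network (the incidence matrix, rates and capacities are invented for illustration). Since the normalising constant of the product-form distribution cancels in the ratio, we work with an unnormalised version Z of H:

```python
from itertools import product
from math import factorial

# Hypothetical tiny network: 2 links, 3 routes.
# Route 0 uses link 0, route 1 uses link 1, route 2 uses both.
A = [[1, 0, 1],          # A[j][r] = 1 if route r uses link j
     [0, 1, 1]]
nu = [1.0, 1.5, 0.5]     # illustrative Poisson arrival rates per route
c = [3, 3]               # link capacities

def Z(cap):
    """Unnormalised mass of {n : An <= cap} under the product-form
    weights prod_r nu_r^{n_r}/n_r!  (normalisation cancels below)."""
    total = 0.0
    for n in product(range(max(c) + 1), repeat=len(nu)):
        if all(sum(A[j][r] * n[r] for r in range(len(nu))) <= cap[j]
               for j in range(len(cap))):
            w = 1.0
            for n_r, rate in zip(n, nu):
                w *= rate ** n_r / factorial(n_r)
            total += w
    return total

def acceptance(r):
    """1 - L_r = H(c - A e_r) / H(c), computed via the unnormalised Z."""
    reduced = [c[j] - A[j][r] for j in range(len(c))]
    return Z(reduced) / Z(c)
```

Monotonicity of Z in the capacity vector means route 2, which uses both links, is accepted with probability at most that of either single-link route.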

Approximating Blocking Probabilities

We want to calculate B_j, the equilibrium probability that a given link j is full. We have two methods: first, to find the distribution for (n_r) with maximum probability, in which the blocking probabilities appear as shadow prices; and second, to make a reasonable approximation of blocking independence and solve explicitly. We want to show that these methods give the same answers.

To maximise the probability \pi(n)\propto \prod_r \frac{\nu_r^{n_r}}{n_r!} on \{An\leq c\}, we take logs and maximise using Stirling’s approximation, which is reasonable as we are implicitly working under a regime where the throughput tends to infinity while preserving ratios.

The primal problem is

\max\quad \sum_r(x_r\log \nu_r-x_r\log x_r+x_r),\quad\text{s.t. }Ax\leq c

which, after introducing slack variables z\geq 0 satisfying Ax+z=c, has Lagrangian

L(x,y,z)=\sum_r x_r+\sum_r x_r(\log \nu_r-\log x_r-\sum_j y_jA_{jr})+\sum_j y_jc_j-\sum_j y_jz_j

We observe that complementary slackness here takes the form y\cdot z=0, and recall that by Strong Duality, which applies here because everything relevant is convex, this equality holds at the primal optimum. Differentiating the Lagrangian at the optimum allows us to express the optimal x in terms of y:

\bar{x}_r=\nu_r e^{-\sum y_jA_{jr}}

The dual problem is then

\min\quad \sum_r \nu_re^{-\sum_jy_jA_{jr}}+\sum_j y_jc_j

At this point, we make the suggestive substitution e^{-y_j}=1-B_j, observing that this gives B non-negative by default since y is non-negative. After further work, we will deduce that these B do indeed have a sensible interpretation as blocking probabilities, but it should be stressed that this is in no way obvious yet. Now complementary slackness asserts:

\sum_rA_{jr}\nu_r\prod_i(1-B_i)^{A_{ir}}\left\{\begin{array}{l l}=c_j& \quad B_j>0\\ \leq c_j & \quad B_j=0\\ \end{array} \right.

Note that the primal objective function is strictly concave, so \bar{x} as discussed is the unique optimum. The dual objective is strictly convex as a function of A^Ty, so if A has full rank J, this induces a unique optimum in y. We assume A has full rank (since, for example, we can perturb slightly) and that there is no degeneracy in the blocking.
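Since the dual is smooth and the feasible region is simply y\geq 0, the optimum can be found by elementary projected gradient descent. The following is a sketch on an invented overloaded instance (the network and step size are illustrative choices, not anything from the post):

```python
import math

# Hypothetical instance: 2 links, 3 routes; both links are overloaded.
A = [[1, 0, 1],
     [0, 1, 1]]
nu = [50.0, 60.0, 20.0]
c = [60.0, 60.0]

def xbar_of(y):
    """x_r(y) = nu_r exp(-sum_j y_j A_{jr}), from the stationarity condition."""
    return [nu[r] * math.exp(-sum(y[j] * A[j][r] for j in range(len(c))))
            for r in range(len(nu))]

def dual_grad(y):
    """Gradient of sum_r nu_r e^{-(A^T y)_r} + y.c, namely c - A x(y)."""
    x = xbar_of(y)
    return [c[j] - sum(A[j][r] * x[r] for r in range(len(nu)))
            for j in range(len(c))]

y = [0.0] * len(c)
for _ in range(20000):               # projected gradient descent on y >= 0
    g = dual_grad(y)
    y = [max(0.0, yj - 0.005 * gj) for yj, gj in zip(y, g)]

B = [1 - math.exp(-yj) for yj in y]  # the substitution e^{-y_j} = 1 - B_j
xbar = xbar_of(y)
```

At the optimum, complementary slackness should hold: every link with B_j > 0 is exactly full under \bar{x}.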

Now we consider a sequence of networks with proportionally increasing arrival rates and capacities

\frac{1}{N}\nu_r(N)\rightarrow \nu_r,\quad \frac{1}{N}c_j(N)\rightarrow c_j

Then we show that B(N)\rightarrow B and \frac{1}{N}\bar{x}(N)\rightarrow\bar{x}. To do this, we observe that B(N)\in [0,1]^J, which is compact. Let B' be the limit of some convergent subsequence of (B(N)). For N large enough along this subsequence, the case of the slackness condition satisfied by B(N) is determined by whether B_j'>0 or B_j'=0. Concretely,

\sum_rA_{jr}\nu_r(N)\prod_i(1-B_i(N))^{A_{ir}}\left\{\begin{array}{l l}=c_j& \quad B_j'>0\\ \leq c_j & \quad B_j'=0\\ \end{array} \right.

Dividing by N and applying uniqueness gives B’=B. In the usual way, this implies that the whole sequence B(N)\rightarrow B. The limit for \bar{x} follows at once.

We can further prove that u_r(N)=\frac{1}{\sqrt{N}}(n_r(N)-\bar{x}(N))\stackrel{d}{\rightarrow}u, where the components u_r\sim N(0,\bar{x}_r) are independent, conditioned on A_{\mathcal{B}}u=0, with \mathcal{B} the set of indices j for which B_j>0.

By Little’s Law, a consequence is:

1-L_r(N)=\frac{\mathbb{E}(n_r(N))}{\nu_r(N)}=\frac{\mathbb{E}(\frac{n_r(N)}{N})}{\frac{\nu_r(N)}{N}}\rightarrow \frac{\bar{x}_r}{\nu_r}=\prod_j(1-B_j)^{A_{jr}}

Note that this is the first concrete result showing that B represents blocking, and it explains the significance of everything that came before.

Erlang’s Fixed Point

Alternatively, we assume blocking is independent across links. A link receives traffic on a route only when the other links on that route are unblocked, so by Erlang’s formula we can approximate B_j by

B_j=E\left(\sum_rA_{jr}\nu_r\prod_{i\in r\setminus\{j\}}(1-B_i),\;c_j\right)

This is a continuous map [0,1]^J\rightarrow[0,1]^J, so by Brouwer’s Fixed Point Theorem, there is a solution, called the Erlang Fixed Point.
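A fixed point can often be located in practice by repeated (here damped) substitution; this is a heuristic, not part of the existence proof. The sketch below uses the standard stable recursion for Erlang's formula E(\nu,c) on an invented 3-link, 2-route network:

```python
def erlang_b(rho, c):
    """Erlang's formula E(rho, c) via the stable recursion
    E_0 = 1,  E_k = rho*E_{k-1} / (k + rho*E_{k-1})."""
    e = 1.0
    for k in range(1, c + 1):
        e = rho * e / (k + rho * e)
    return e

# Hypothetical network: route 0 uses links 0,1; route 1 uses links 1,2.
A = [[1, 0],
     [1, 1],
     [0, 1]]
nu = [40.0, 30.0]
c = [45, 60, 35]

def reduced_load(j, B):
    """Traffic offered to link j, thinned by blocking on the other links."""
    total = 0.0
    for r in range(len(nu)):
        if A[j][r]:
            p = 1.0
            for i in range(len(c)):
                if i != j and A[i][r]:
                    p *= 1 - B[i]
            total += nu[r] * p
    return total

B = [0.0] * len(c)
for _ in range(500):   # damped substitution for stability
    newB = [erlang_b(reduced_load(j, B), c[j]) for j in range(len(c))]
    B = [0.5 * (b + nb) for b, nb in zip(B, newB)]
```

On this instance the middle link carries both routes and is overloaded, so it ends up with the dominant blocking probability.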

We in fact consider the more general equations (i.e. for general A):

E_j=E\left[(1-E_j)^{-1}\sum_rA_{jr}\nu_r\prod_i(1-E_i)^{A_{ir}},c_j\right]

We prove these have a unique solution, again making use of optimisation arguments.

Define U(z,c) implicitly by

U(-\log(1-E(\nu,c)),c)=\nu(1-E(\nu,c))

which is strictly increasing in z. This has a utilisation interpretation as the mean number of circuits in use for an Erlang model with blocking probability 1-e^{-z}. The function \int_0^y U(z,c)dz is strictly convex in y, and so

\min\quad \sum_r \nu_re^{-\sum_j y_jA_{jr}}+\sum_j\int_0^{y_j}U(z,c_j)dz has a unique minimum.

The first-order conditions for this minimum are precisely the equations above, so (E_j) is unique. Furthermore, U(y,c)=c-(e^y-1)^{-1}+o(1) uniformly on compact sets in y as c\rightarrow\infty. This is because if c,\nu\rightarrow\infty for an isolated link, with c/\nu\rightarrow 1-B:

\pi(c)=\left(1+\frac{c}{\nu}+\frac{c(c-1)}{\nu^2}+\ldots\right)^{-1}\rightarrow \left(1+(1-B)+(1-B)^2+\ldots\right)^{-1}=B

and \mathbb{E}[\text{free circuits}]=\sum_{m=0}^c m\pi(c-m)\rightarrow(e^y-1)^{-1} after a bit of calculation. Finally, since the revised dual converges uniformly to the original dual, the minima converge also: E_j(N)\rightarrow B_j as we would hope.
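The implicit definition of U can be made computational: since E(\nu,c) is increasing in \nu, one can solve -\log(1-E(\nu,c))=z for \nu by bisection. The sketch below does this and compares against the asymptotic c-(e^y-1)^{-1} at a moderate, invented capacity:

```python
import math

def erlang_b(rho, c):
    """Erlang's formula E(rho, c) via the stable recursion."""
    e = 1.0
    for k in range(1, c + 1):
        e = rho * e / (k + rho * e)
    return e

def U(z, c):
    """U(z, c): the utilisation nu*(1 - E(nu, c)) at the nu for which
    -log(1 - E(nu, c)) = z, found by bisection (E increases in nu)."""
    lo, hi = 1e-9, 1e12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if -math.log(1 - erlang_b(mid, c)) < z:
            lo = mid
        else:
            hi = mid
    nu = 0.5 * (lo + hi)
    return nu * (1 - erlang_b(nu, c))

# Compare with the asymptotic c - 1/(e^y - 1) at c = 100, y = 0.5:
approx = 100 - 1 / math.expm1(0.5)
exact = U(0.5, 100)
```

Already at capacity 100 the two quantities agree closely, consistent with the o(1) error term.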

As a note of caution about the non-uniqueness of these Erlang-type equation sets, consider a model on the complete graph K_n, where calls are routed directly if possible, and otherwise an attempt is made to reroute through a randomly chosen second vertex. Then, assuming independence, \mathbb{P}(\text{call accepted})=(1-B)+B(1-B)^2. So B=E(\nu(1+2B(1-B)),c), after thinking carefully about where calls come from.

Letting c,\nu\rightarrow\infty, we obtain

B=[1-\frac{c}{\nu}[1+2B(1-B)]^{-1}]^+

So if, e.g., \nu=c, we have multiple solutions for B: both B=0 and B=1-\tfrac{1}{\sqrt{2}} satisfy the equation.
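The two fixed points at \nu=c can be checked directly: substituting into the limiting equation and clearing denominators gives B(2B^2-4B+1)=0, whose root in [0,1) besides 0 is 1-1/\sqrt{2}. A quick numerical confirmation:

```python
import math

def rhs(B, ratio=1.0):
    """RHS of B = [1 - ratio/(1 + 2B(1-B))]^+ , where ratio = c/nu."""
    return max(0.0, 1 - ratio / (1 + 2 * B * (1 - B)))

# With nu = c (ratio 1), both B = 0 and B = 1 - 1/sqrt(2) are fixed points,
# the nonzero one solving the quadratic 2B^2 - 4B + 1 = 0.
candidates = [0.0, 1 - 1 / math.sqrt(2)]
residuals = [abs(b - rhs(b)) for b in candidates]
```

(The third root of the quadratic, 1+1/\sqrt{2}, lies outside [0,1] and is discarded.)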
