# Loss Networks and Erlang’s Fixed Point

## Loss Networks

In Erlang’s telephone line model discussed in the previous post, we considered users competing for a finite set of resources. When insufficient resources are available, the call is lost. A loss network generalises this situation to more complicated resource configurations. We think of links $1,\ldots,J$, each with some integer capacity $c_j$. Each incoming call requires one unit of capacity on each link in some subset of the links, and lasts for a time distributed as an exponential random variable with parameter 1, independent of everything else in the model. We call this subset the route, and denote by $A=(A_{jr})$ the incidence matrix of links on routes. Calls on route r arrive as a Poisson process PP($\nu_r$), independently for each route; no queueing occurs: a call is lost if some link it requires is operating at full capacity. We call the probability of this event $L_r$, the loss probability. Observe that $(n_r)$, the number of calls on each route r, is a Markov chain on the truncated space $\{An\leq c\}$.
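To make the dynamics concrete, here is a minimal simulation sketch. The two-link network, routes, capacities and rates below are all invented for illustration; the code runs the Markov chain just described and reports the empirical loss fraction per route.

```python
import random

# Toy loss network (all numbers illustrative): two links of capacity 2,
# three routes; route 2 needs one unit on both links.
capacity = [2, 2]
routes = [[0], [1], [0, 1]]        # links used by each route (1 unit each)
nu = [1.0, 1.0, 0.5]               # Poisson arrival rate nu_r per route

def simulate(T=5000.0, seed=1):
    """Simulate the loss network CTMC: arrivals PP(nu_r), Exp(1) holding
    times, blocked calls lost. Returns the empirical loss fraction L_r."""
    rng = random.Random(seed)
    R, J = len(routes), len(capacity)
    n = [0] * R                    # calls currently in progress per route
    lost, offered = [0] * R, [0] * R
    t, sum_nu = 0.0, sum(nu)
    while t < T:
        rate = sum_nu + sum(n)     # total arrival rate + departure rate
        t += rng.expovariate(rate)
        if rng.uniform(0.0, rate) < sum_nu:
            # arrival: choose a route with probability nu_r / sum_nu
            r = rng.choices(range(R), weights=nu)[0]
            offered[r] += 1
            used = [sum(n[s] for s in range(R) if j in routes[s])
                    for j in range(J)]
            if all(used[j] < capacity[j] for j in routes[r]):
                n[r] += 1          # accept: every link has a spare unit
            else:
                lost[r] += 1       # block: some required link is full
        elif sum(n) > 0:
            # departure: each call ends at rate 1, so pick prop. to n_r
            n[rng.choices(range(R), weights=n)[0]] -= 1
    return [l / max(o, 1) for l, o in zip(lost, offered)]
```

Note that the route using both links sees the blocking of each, so its empirical loss fraction exceeds that of the single-link routes.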

By checking the detailed balance equations (DBEs), it is clear that an equilibrium distribution (ED) for this Markov chain is proportional to the ED for the Markov chain without the capacity constraint, with state space restricted to the truncated space. But without capacity constraints, the system is a linear migration process, for which we discovered the form of the ED in the previous section. If we write $H(c)=\mathbb{P}(An\leq c)$ in the linear migration process, we can compute the acceptance probability for the finite-capacity system as:

$1-L_r=\frac{H(c-Ae_r)}{H(c)}$
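This ratio can be checked by brute force on a small instance. The sketch below (data invented, same shape as the toy example above) enumerates the truncated state space and sums the product-form weights; the normalising constant of the unconstrained ED cancels in the ratio, so it is omitted.

```python
from itertools import product
from math import factorial

# Illustrative instance: 2 links, 3 routes.
A = [[1, 0, 1],        # A[j][r] = 1 if route r uses link j
     [0, 1, 1]]
c = [2, 2]
nu = [1.0, 1.0, 0.5]

def H(cap):
    """Unnormalised H(cap) = P(An <= cap) under the unconstrained
    product-form equilibrium; the normalising constant cancels in the
    ratio H(c - A e_r) / H(c)."""
    if any(x < 0 for x in cap):
        return 0.0
    R, J = len(nu), len(cap)
    # n_r is at most the smallest capacity among the links it uses
    bounds = [min(cap[j] for j in range(J) if A[j][r]) for r in range(R)]
    total = 0.0
    for n in product(*(range(b + 1) for b in bounds)):
        if all(sum(A[j][r] * n[r] for r in range(R)) <= cap[j]
               for j in range(J)):
            w = 1.0
            for r in range(R):
                w *= nu[r] ** n[r] / factorial(n[r])
            total += w
    return total

def acceptance(r):
    """1 - L_r = H(c - A e_r) / H(c)."""
    return H([c[j] - A[j][r] for j in range(len(c))]) / H(c)
```

By symmetry of this instance the two single-link routes have equal acceptance probability, and the two-link route has a strictly smaller one.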

## Approximating Blocking Probabilities

We want to calculate $B_j$, the equilibrium probability that a given link j is full, called the blocking probability. We have two methods: first, to find the distribution for $(n_r)$ with maximum probability, in which the blocking probabilities appear as shadow prices; and second, to make a reasonable approximation that links block independently, and solve explicitly. We want to show that these methods give the same answers.

To maximise the probability $\pi(n)\propto \prod_r \frac{\nu_r^{n_r}}{n_r!}$ on $\{An\leq c\}$, we take logs and maximise using Stirling’s approximation, which is reasonable as we are implicitly working under a regime where the throughput tends to infinity while preserving ratios.
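To spell out the step: up to a constant not depending on n, $\log\pi(n)=\sum_r(n_r\log\nu_r-\log n_r!)$, and Stirling’s approximation $\log n!\approx n\log n-n$ gives

$\log \pi(n)\approx \text{const}+\sum_r\left(n_r\log \nu_r-n_r\log n_r+n_r\right)$

Relaxing the integer vector n to a real vector x yields the primal objective.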

The primal problem is

$\max\quad \sum_r(x_r\log \nu_r-x_r\log x_r+x_r),\quad\text{s.t. }Ax\leq c$

which, introducing slack variables $z=c-Ax\geq 0$, has Lagrangian

$L(x,y,z)=\sum_r x_r+\sum_r x_r\left(\log \nu_r-\log x_r-\sum_j y_jA_{jr}\right)+\sum_j y_jc_j-\sum_j y_jz_j$

We observe that complementary slackness here has the form $y\cdot z=0$, and remember that by strong duality, which applies here because we are maximising a concave function subject to affine constraints, this equality holds at the primal optimum. Differentiating the Lagrangian at the optimum allows us to specify the optimal x in terms of y:
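Explicitly, differentiating term by term,

$\frac{\partial L}{\partial x_r}=1+\Big(\log \nu_r-\log x_r-\sum_j y_jA_{jr}\Big)-1=\log \nu_r-\log x_r-\sum_j y_jA_{jr}$

and setting this to zero and exponentiating gives the optimal $\bar{x}_r$.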

$\bar{x}_r=\nu_r e^{-\sum_j y_jA_{jr}}$

The dual problem is then

$\min\quad \sum_r \nu_re^{-\sum_jy_jA_{jr}}+\sum_j y_jc_j$
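The dual objective is smooth and convex, so it can be minimised numerically. A sketch using projected gradient descent (keeping $y\geq 0$) on an invented heavily-loaded two-link instance; at the end we read off $B_j=1-e^{-y_j}$, anticipating the substitution made below. For these numbers the optimum solves $x^2+3x-2=0$ with $x=1-B_j$, giving $B_j\approx 0.438$.

```python
from math import exp

# Illustrative two-link instance (all numbers invented): routes 0 and 1
# each use one link, route 2 uses both; the links are heavily loaded.
A = [[1, 0, 1],        # A[j][r] = 1 if route r uses link j
     [0, 1, 1]]
c = [2.0, 2.0]
nu = [3.0, 3.0, 1.0]

def dual_grad(y):
    """Gradient of the dual objective sum_r nu_r e^{-(A^T y)_r} + y.c;
    the j-th component is c_j - sum_r A_jr nu_r e^{-(A^T y)_r}."""
    J, R = len(c), len(nu)
    g = []
    for j in range(J):
        gj = c[j]
        for r in range(R):
            if A[j][r]:
                gj -= nu[r] * exp(-sum(A[i][r] * y[i] for i in range(J)))
        g.append(gj)
    return g

def minimise_dual(steps=20000, lr=0.02):
    """Projected gradient descent on the dual, keeping y >= 0."""
    y = [0.0] * len(c)
    for _ in range(steps):
        g = dual_grad(y)
        y = [max(0.0, yj - lr * gj) for yj, gj in zip(y, g)]
    return y

y = minimise_dual()
B = [1.0 - exp(-yj) for yj in y]   # the substitution e^{-y_j} = 1 - B_j
```

The step size is chosen well below the curvature bound $\sum_r \nu_r$ of the dual Hessian, so the iteration converges for this instance.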

At this point, we make the suggestive substitution $e^{-y_j}=1-B_j$, observing that since y is non-negative this forces $B_j\in[0,1)$. After further work, we will deduce that these $B_j$ do indeed have a sensible interpretation as blocking probabilities, but it should be stressed that this is in no way obvious yet. Now complementary slackness asserts:

$\sum_rA_{jr}\nu_r\prod_i(1-B_i)^{A_{ir}}\left\{\begin{array}{l l}=c_j& \quad B_j>0\\ \leq c_j & \quad B_j=0\\ \end{array} \right.$
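These conditions can be solved by a natural fixed-point iteration. Since each call uses exactly one unit of every link on its route, each term in the sum for link j carries a factor $(1-B_j)$; pulling it out, the equality branch reads $(1-B_j)\tilde\rho_j=c_j$, i.e. $B_j=1-c_j/\tilde\rho_j$, where $\tilde\rho_j$ is the offered load thinned by blocking on the *other* links only. A minimal sketch iterating this on an invented heavily-loaded instance:

```python
# Illustrative two-link instance (all numbers invented).
A = [[1, 0, 1],        # A[j][r] = 1 if route r uses link j
     [0, 1, 1]]
c = [2.0, 2.0]
nu = [3.0, 3.0, 1.0]

def solve_blocking(iters=200):
    """Iterate B_j = max(0, 1 - c_j / rho_j), where rho_j is the load
    offered to link j thinned by blocking on the other links of each
    route; a fixed point satisfies both branches of the condition."""
    B = [0.0] * len(c)
    for _ in range(iters):
        newB = []
        for j in range(len(c)):
            rho = 0.0
            for r in range(len(nu)):
                if A[j][r]:
                    thin = nu[r]
                    for i in range(len(c)):
                        if i != j and A[i][r]:
                            thin *= 1.0 - B[i]
                    rho += thin
            newB.append(max(0.0, 1.0 - c[j] / rho) if rho > 0 else 0.0)
        B = newB
    return B
```

With these numbers both links behave identically and the iteration converges to $B_j\approx 0.438$; on a lightly loaded instance it returns $B_j=0$, the $\leq$ branch of the condition.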

Note that the primal objective function is strictly concave, so $\bar{x}$ as discussed is the unique optimum. The dual objective is strictly convex in $yA$, so if A has full rank J, this induces a unique optimum in terms of y. We assume A has full rank (since, for example, we can perturb it slightly) and that there is no degeneracy in the blocking.

Now we consider a sequence of networks with proportionally increasing arrival rates and capacities.