Dispersion in Social Networks

This post is based on a paper that appeared a couple of weeks ago on the Computer Science section of arXiv. You can find it here. I’m going to write a few things about the ideas behind the paper, and avoid pretty much entirely the convincing data the authors present, as well as questions of implementing the algorithms discussed.

The setting is a social network, which we can describe as a graph. Nodes stand for people, and an edge represents that the two associated people have some social connection. This paper focuses on edges corresponding to friendship in the Facebook graph.

A key empirical feature of the graph topology of such social networks, as compared to most mathematical models of random graphs, is the prevalence of short cycles, and so-called clustering. Loosely speaking, in an Erdos-Renyi random graph, each potential edge appears independently of the rest of the configuration. But this does not accord well with our experience of our own Facebook friend circle. In general, knowledge that A is friends with both B and C increases the likelihood that B and C are themselves friends. Clustering is somewhat more present in other models, such as Preferential Attachment and the Configuration Model, but there it is really more a consequence of the degree sequence being less concentrated.

The reason for this phenomenon appearing in social networks is clear. People meet other people generally by sharing common activities, whether that be choice of school, job or hobbies. The question of how readily people choose to add others on Facebook is a worthwhile one, but not something I have the time or the sociological credibility to consider! In any case, it is not a controversial idea that for some typical activity, it is entirely possible that almost all the participants will end up as friends, leading to a large (almost-) ‘clique’ in the graph. Recall a clique is a copy of a complete graph embedded in a larger graph – that is, a set of nodes all of which are pairwise connected.

We could think of much of the structure of this sort of network as being generated in the following way. Suppose we were able to perform the very unlikely-sounding task of listing every conceivable activity or shared attribute that might engender a friendship. Each activity corresponds to a set of people. We then construct a graph on the set of people by declaring that a pair of nodes are connected by an edge precisely if the people corresponding to these nodes can both be found in some activity set.

Another way of thinking about this setup is to consider a bipartite graph, with people as one class of vertices, and activities as the other. Predictably, we join a person to an activity if they engage in that activity. The edges within the class of people are then induced by the bipartite edges. Obviously, under this interpretation, we could equally well construct a graph on the set of activities. Here, two activities would be joined if there is a person who does them both. Graphs formed in this way can be called Intersection Graphs, and there is lots of interest in investigating various models of Random Intersection Graphs.
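To make this concrete, here’s a minimal sketch in Python (using networkx, with a toy activity list I’ve invented purely for illustration) of both the person-graph and the dual activity-graph:

```python
import itertools
import networkx as nx

# Hypothetical toy data: each activity maps to the set of people who do it.
activities = {
    "office":    {"alice", "bob", "carol"},
    "book club": {"carol", "dave"},
    "football":  {"bob", "dave", "erin"},
}

# Graph on people: an edge whenever two people share at least one activity.
G = nx.Graph()
for members in activities.values():
    G.add_edges_from(itertools.combinations(members, 2))

# The dual graph on activities: an edge whenever two activities share a person.
H = nx.Graph()
for a, b in itertools.combinations(activities, 2):
    if activities[a] & activities[b]:
        H.add_edge(a, b)
```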

The question addressed by the authors of the paper can be summarised as follows. A social network graph tells us whether two people are ‘friends’, but it does not directly tell us how close their relationship is. It is certainly an interesting question to ask whether the (local) network topology can give us a more quantitative measure of the strength of a friendship.

As the authors explain, a first approach might be to consider how many mutual friends two people have. (We consider only pairs of people who are actually friends. It seems reasonable to take this as a prerequisite for a strong relationship among people who do actually use Facebook.) However, this can fail because of the way these social networks organise themselves around shared attributes and activities. The size of one of these cliques (which are termed social foci in parts of the literature) is not especially likely to be well correlated with the strength of the friendships within it. In particular, the clique corresponding to someone’s workplace is likely to grow over time, especially once people reach an age where, on average, they move jobs much less. So a naive count of mutual friends would likely predict that a person’s strongest friend is someone they work with, who perhaps by chance also shares some other activity with them.

The authors phrase this problem slightly differently. They examine algorithms for establishing a person’s spouse or long-term partner with good accuracy from only the local network structure.

Heuristically we might expect that a husband knows many of his wife’s work colleagues, and vice versa. Not all of these ties will be so strong that they actually lead to friendship, in the Boolean sense of Facebook, but we might expect that some noticeable proportion have this property. Naturally, there will be cliques to which both belong. One or more of these might be the reason they met in the first place, and others (e.g. parents at children’s schools) might have developed over the course of their relationship. However, as we’ve explained, this doesn’t narrow things down much.

(We need not be constrained by this heteronormative scenario. However, as the authors point out in a footnote, there are challenges in collecting data because of the large number of ironic relationship listings on Facebook, mainly among the undergraduate and younger community. This problem is particularly obstructive in the case of same-sex marriage, owing to the smaller numbers of genuine pairings, and larger numbers of false listings for this setting.)


The crucial observation is that if we look at the couple’s mutual friends, we expect to see large parts of the most important cliques from both the husband’s and the wife’s lives. Among these mutual friends there will be some overlap, that is, cliques of which both are integral members. But among the rest, there will be a natural partition into friends who really originate from the husband, and friends who were introduced via the wife. So the induced graph on these mutual friends is likely to split into three classes of vertices, with very poor connectivity between two of them.

This is, up to sorting out scaling and so on, precisely the definition of dispersion, introduced by the authors. The dispersion between two vertices is high if the induced graph on their mutual neighbourhood has poor connectivity. Modulo the exact choice of definition, they then exhibit data showing that this is indeed a good metric for determining marriages from the network topology, with a success rate of around 50% over a wide range of users.
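Here’s a rough sketch of the underlying counting rule, next to the naive mutual-friends measure it improves on. To be clear, this is my unnormalised reading of the idea, not the authors’ exact definition, which involves scaling and recursive refinements I’m not reproducing:

```python
import itertools
import networkx as nx

def embeddedness(G, u, v):
    """The naive measure: how many mutual friends do u and v have?"""
    return len(set(G[u]) & set(G[v]))

def dispersion(G, u, v):
    """Count pairs of mutual friends of u and v that are 'far apart':
    not friends themselves, and with no common friend besides u and v."""
    mutual = set(G[u]) & set(G[v])
    score = 0
    for s, t in itertools.combinations(mutual, 2):
        if G.has_edge(s, t):
            continue
        if (set(G[s]) & set(G[t])) - {u, v}:   # another common friend exists
            continue
        score += 1
    return score
```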


Beyond Erdos-Renyi: more realistic models of networks

The claim is often made that the study of random graphs such as the Erdos-Renyi model is worthwhile because it gives us information about complex systems which exist in the real world. The internet and social networks provide the examples du jour, but it’s equally plausible to think about traffic flows, electrical systems or interacting biological processes too.

If this were entirely true, it would be great for two reasons. Firstly, in my opinion at least, it is a beautiful subject in its own right, and to have a concrete applicable reason to continue studying it would make it even better. (Not to mention the dreaded competition for funding…) Secondly, Erdos-Renyi is so simple. After all, it involves little more than adding some simple topology to a collection of IID Bernoulli random variables, and so it would surely be possible to draw some significant conclusions about how complicated real-world objects interact without too much mathematical effort.

Unfortunately, but unsurprisingly, this simplicity is a drawback as far as applications go. It is fairly clear that most real-world systems do not come close to satisfying the independent, equal-probability edges condition. But rather than consign E-R to the ‘pretty but useless’ category of mathematical structures, we should think carefully about exactly why it fails to be a good model for real-world networks, and see whether there are any small adjustments that could be made to improve it.

This is something I’ve been meaning to read up about for ages and ages. What follows is based heavily on the Albert and Barabasi 2002 review paper. I suspect that many of the open problems and intuitive calculations have since been finished and formalised, but for an overview I hope that doesn’t matter hugely. I’ve also leafed through the relevant chapters of Remco Van der Hofstad’s notes, but am setting the details and the exercises aside for the holidays when I have a bit more time!

Problems with Erdos-Renyi

Recall that G(n,p) takes n vertices, and adds an edge between each pair of vertices independently with probability p.
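Since everything below is compared against this baseline, here’s a quick sketch of sampling G(n,p) and inspecting the two statistics that keep recurring, degree and diameter (parameters invented for illustration):

```python
import networkx as nx

n, p = 1000, 0.01                      # mean degree np = 10
G = nx.gnp_random_graph(n, p, seed=42)

# Degrees concentrate around np, with thin Poisson-like tails.
print(sum(d for _, d in G.degree()) / n)

# Diameter of the largest component: of order log n, so small.
giant = G.subgraph(max(nx.connected_components(G), key=len))
print(nx.diameter(giant))
```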

One property shared by most real-world networks is the scale-free phenomenon, which says that the degree distribution has a power-law tail. The Albert-Barabasi paper gives a comprehensive survey of data verifying this claim. By contrast, G(n,p) has a degree distribution which is approximately Poisson as n grows. This is concentrated near the average degree with a thin exponential tail, so does not satisfy this requirement. I was, and still am, a bit confused by the term ‘scale-free’. The idea is certainly that the local structure is independent of the size of the system, which seems to be true for the degree distributions in sparse ER, that is where p = O(1/n). But I think the correct heuristic is that it doesn’t matter how far zoomed in you are – the macroscopic structure looks similar whether there are n vertices or n^2 vertices. This certainly fails to be true for sparse ER, where no vertex has order n neighbours, whereas with a power-law tail this does hold.

The main consequence of this is that there are a few vertices with very high degree. These are often called ‘hubs’, and parallels are drawn to the internet, where key websites and servers connect lots of traffic and pages from different areas. The idea is that the hubs are almost certainly well-connected to each other, and this offers a step towards a small-world phenomenon, where the shortest path between any two vertices is very small relative to the size of the system. This notion was introduced to mainstream culture by Stanley Milgram’s ‘six degrees of separation’ experiment in the 1960s, where it became clear that subjects were able to deliver a package to a complete stranger on the other side of the USA, using only personal contacts, in about six stages. The graph-theoretic notion for this is the diameter, defined as the maximal graph distance between two vertices, where the graph distance means the length of the shortest path between them. This definition, with its max-min formalism, looks rather complicated, but isn’t really. The diameter of a sparse Erdos-Renyi graph, say p = c/n with c > 1, restricted to its giant component, grows like log n, which is small relative to n, and so this property holds.

A quick glance at your list of Facebook friends will confirm that the independent edges condition in an Erdos-Renyi random graph is not a plausible model for social networks. How many friends do you have? Let’s say about 1000, more to make the calculation easier than because you’re necessarily very popular. How many does your friend Tom have? Let’s say 1000 again. As was in the news a few months ago, there are now over a billion people on Facebook. Let’s say exactly a billion (that is, 10^9 for these purposes). So both you and Tom are friends with 1/10^6 of the total membership of the network. How large would you expect the overlap of your friendships to be, if they were all chosen independently at random? Well, the probability that you are both friends with Alice is 10^-12, and so the expected number of your mutual friends is 10^-12 x 10^9 = 10^-3, which is substantially less than 1. Yet I imagine that if you substituted names suitably, you and Tom might well have over 50 mutual friends if you were, say, in the same year at school or university and haven’t yet purged your list.
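The back-of-envelope calculation, as a short sanity check:

```python
n_users = 10**9                       # a billion users
your_friends = toms_friends = 10**3

# Under independence, a given person is friends with both of you with
# probability (10^3 / 10^9)^2 = 10^-12.
p_both = (your_friends / n_users) * (toms_friends / n_users)
print(n_users * p_both)               # 0.001, versus 50+ in reality
```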

We want a statistic that records this idea quantitatively, and there are various candidates for such a clustering coefficient. The underlying notion, that we might expect greater connectivity between neighbours of some fixed vertex v than in the graph as a whole, suggests a possible definition: compare the proportion of triangles in the graph to the cube of the proportion of edges. When this ratio is large, there is a lot of clustering. In the E-R case we would expect the ratio to be close to 1, since a triangle is present precisely when its three edges are all present, which happens with probability p^3 by independence.
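In code, this candidate clustering coefficient might look like the following sketch; the normalisation by binomial coefficients is my choice of how to read ‘proportion’:

```python
from math import comb
import networkx as nx

def clustering_ratio(G):
    """Triangle density divided by the cube of edge density:
    close to 1 for Erdos-Renyi, much larger for clustered networks."""
    n = G.number_of_nodes()
    n_triangles = sum(nx.triangles(G).values()) / 3   # each counted at 3 vertices
    triangle_density = n_triangles / comb(n, 3)
    edge_density = G.number_of_edges() / comb(n, 2)
    return triangle_density / edge_density**3
```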

So we have three properties of real networks that we would like to incorporate into a model: small diameter, power-law degree distribution, and high clustering. To avoid this turning into a book, I’m going to write a paragraph about each of the possibilities discussed by Albert and Barabasi.

Generalised Random Graph

The degree distribution will typically emerge as a consequence of the construction of a given model. The general idea here is to condition on the degree distribution having the form we want, and see what this does to the structure. Of course, the choice of how to do this conditioning is absolutely key. It certainly isn’t obvious what it means to ‘condition G(n,p) to have a power-law degree distribution’, since the very idea of a power-law vs exponential tail requires the number of vertices to be large.

The first idea for achieving this gives the vertices ‘stubs’, which are joined up in pairs to form edges. We decide on the distribution of stubs according to this power law, then pair them up uniformly at random. Obviously, there is a possibility of getting some self-loops and multiple edges, but this is not going to happen often enough to be a genuine problem in the limit. This construction is similarly open to the branching-process exploration ideas well covered for the E-R random graph, though we have to be careful to size-bias the degree distributions where necessary. There is still an underlying independence in the location of edges though, so it is reasonably clear that the amount of clustering may be closer to E-R than to the real examples cited.
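A sketch of the stub-pairing construction; the power-law degree sampling via Pareto variables is an invented choice for illustration:

```python
import random
import networkx as nx

n = 10_000
# I.i.d. degrees with a power-law tail P(D > k) ~ k^(-2).
degrees = [int(random.paretovariate(2)) for _ in range(n)]
if sum(degrees) % 2:                          # total number of stubs must be even
    degrees[0] += 1

G = nx.configuration_model(degrees, seed=42)  # pair stubs uniformly at random
G = nx.Graph(G)                               # collapse the few multi-edges
G.remove_edges_from(nx.selfloop_edges(G))     # and discard self-loops
```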

The other possibility suggested is to retain the independent-edge property, but give the vertices weights, and let the probability of an edge between two vertices be some sensible function of the weights. In the end it turns out to make little difference whether the weights are chosen deterministically or randomly; by taking the weights i.i.d. with a heavy, power-law tail, we can generate a so-called generalised random graph whose degree distribution also has a power-law tail.
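A sketch of the weighted version, using the connection probability w_i w_j / (l_n + w_i w_j) from van der Hofstad’s notes, where l_n is the total weight; the weight distribution here is again an invented illustration:

```python
import itertools
import random
import networkx as nx

n = 2000
# Heavy-tailed i.i.d. weights: heavier vertices attract more edges.
w = [random.paretovariate(1.5) for _ in range(n)]
total = sum(w)

G = nx.Graph()
G.add_nodes_from(range(n))
for i, j in itertools.combinations(range(n), 2):
    p_ij = w[i] * w[j] / (total + w[i] * w[j])  # always in (0,1)
    if random.random() < p_ij:
        G.add_edge(i, j)
```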

Watts-Strogatz

In the WS model, the idea is to interpolate between a graph with maximal clustering and a random graph. A d-regular graph on a ring, where every vertex is connected to its d nearest neighbours, has high clustering but large diameter: for example, it takes roughly n/(2d) steps to get to the other side of the ring. Whereas in the standard E-R model we add edges with some fixed probability p, here we rewire edges with some fixed probability p. That is, we take an edge of the regular graph and, with some small probability, remove it and instead add an edge between two vertices chosen uniformly at random. The theoretical motivation is that removing a few edges doesn’t destroy the high clustering evident in the regular graph, but even a sparse random graph has small diameter, so adding a few ‘long-range’ edges should be enough to decrease the diameter significantly.
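networkx has this interpolation built in, which makes it easy to watch the two statistics as p grows (a sketch; the parameters are arbitrary):

```python
import networkx as nx

n, d = 1000, 10
for p in [0.0, 0.01, 0.1]:
    # Ring where each vertex joins its d nearest neighbours, then each edge
    # is rewired to a uniformly random endpoint with probability p.
    G = nx.connected_watts_strogatz_graph(n, d, p, seed=42)
    print(p, nx.average_clustering(G), nx.diameter(G))
```

Running this shows the diameter collapsing while the clustering barely moves, which is exactly the check described in the next paragraph.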

It obviously needs to be checked that a substantial drop in diameter occurs before a substantial decrease in clustering, and there is a calculation and diagram to support this intuitive idea in the paper. The one drawback of this model is that it fails to provide the power-law degree distributions we want. After all, an E-R graph has a concentrated degree distribution, and a d-regular graph has all degrees the same, so we would expect some interpolation between the two to have a concentrated distribution as well. Nonetheless, this model accords well with an idea of how complex networks might form, particularly if there is some underlying geometry. It is reasonable to assume that an initial setup for a network would be that people are connected to those closest to them, and then slowly acquire distant contacts as time progresses.

Preferential Attachment – Barabasi-Albert model

Most of our intuition for networks can be extended to an intuition for the formation of networks. The idea of prescribing a degree distribution is neat, but it doesn’t give any account of the mechanism of formation. Complexity emerges over time, and a good model should be able to describe why this happens. The Barabasi-Albert model takes this as its starting point, with the aim of producing a highly clustered system dynamically. Recall that we can describe G(n,p) as a process by coupling, then increasing p from 0 to 1, and seeing edges emerge. The independence assumption survives the coupling, and so which edge appears next is independent of the current state of the system.

This is what we need to relax. Recall the motivating idea of ‘hubs’, where a small collection of vertices have very high connectivity across the whole system, as observed in several real situations. A consequence of this is that a new edge is more likely to attach to a hub than to a poorly connected vertex elsewhere. But it turns out that this idea of preferential attachment isn’t enough by itself. As a network forms, it is not just the connectivity that increases, but also the size of the system itself. So in fact it makes sense to add vertices rather than edges, and to join each new vertex to existing vertices with probability proportional to the degrees of the existing vertices. This combination of growth and preferential attachment is key to the scale-free graphs that the Barabasi-Albert model generates: relaxing either mechanism returns us to the case of exponential tails. There are methods in the literature for generating such graphs without the need for a dynamic model, but they are harder to understand and describe, and none I have seen so far has a high clustering coefficient.
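networkx ships this construction as nx.barabasi_albert_graph, but the growth mechanism is simple enough to sketch directly; the seed graph and the repeated-vertex list are implementation choices of mine:

```python
import random
import networkx as nx

def preferential_attachment(n, m):
    """Grow a graph: each new vertex attaches to m existing vertices,
    chosen with probability proportional to their current degree."""
    G = nx.complete_graph(m + 1)            # small seed graph to start from
    # Each vertex appears in this list once per unit of degree, so a uniform
    # choice from the list is exactly degree-proportional sampling.
    repeated = [v for e in G.edges() for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:             # sample m distinct targets
            targets.add(random.choice(repeated))
        for t in targets:
            G.add_edge(new, t)
            repeated += [new, t]
    return G
```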

Hubs are effectively a way to reduce the diameter. Recall the description of Milgram’s experiment where he encouraged randomly chosen people to send a package to Harvard. For the purposes of this model, an undergraduate from Wyoming or a husband from Alabama moving in with his wife in Boston are clear hubs, as for very many people near their previous home, they represent a good connection to Harvard. So it is unsurprising that BA, which reinforces hubs, has a sub-logarithmic small diameter.

Conclusions

I’m not entirely sure what conclusions I should draw from my reading. Probably the main one is that I should read more, as there is plenty of interesting stuff going on in this area. Intuitively, it seems unlikely that there is going to be a single model which unites the descriptions of all relevant real-world networks. As ever, it is pleasant to find structures that are both mathematically interesting in their own right and relevant to applied problems. So it is reassuring to observe how similar many of the models discussed above are to the standard random graph.