
Who will your friends be next week? The link prediction problem

Sitting in the student center of our university, I am surrounded by hundreds of students enjoying their lunch and socializing. They’re strengthening (and in some cases weakening) their social ties. Given the ability to observe this social network over time, we would see that some relationships flourish, while others disappear altogether.

This situation is not unique to university students. In fact, whether we’re studying the spreading of an infectious disease, or the growth of an organized crime network, the reality is that the relationships in these social networks are dynamic. They change. Being able to describe the current state of a network is important. Our goal is to predict the future state of the network by forecasting who will connect to whom. If we could make these sorts of predictions, then we may be able to better recommend or warn of future probable links, as well as come to understand mechanisms which may be driving the evolution of the network.

This brings us to the link prediction problem. Liben-Nowell and Kleinberg defined it as follows:

The link prediction problem asks, “Given a snapshot of a network at time t, can we predict new links which will occur at time t+1?”

In our work, we explore how topological similarity indices of a network can be combined with node-specific data to develop a link prediction tool. Rather than presupposing that all similarity indices are equally important, we employ an evolutionary algorithm to evolve the coefficients of a linear combination of these indices. Our approach has the advantage of detecting which similarity indices are the most salient predictors, and it requires no prior knowledge of the type of network one is working with.
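To make the idea concrete, here is a minimal sketch in Python. The particular indices (common neighbors, Jaccard, Adamic/Adar), the fitness measure, and the mutation scheme are illustrative stand-ins, not the exact setup from the paper:

```python
import random

import networkx as nx
from sklearn.metrics import roc_auc_score


def similarity_features(G, pairs):
    """Compute a few standard topological similarity indices for candidate pairs."""
    cn = {(u, v): len(list(nx.common_neighbors(G, u, v))) for u, v in pairs}
    jc = {(u, v): s for u, v, s in nx.jaccard_coefficient(G, pairs)}
    aa = {(u, v): s for u, v, s in nx.adamic_adar_index(G, pairs)}
    return [[cn[p], jc[p], aa[p]] for p in pairs]


def fitness(weights, features, labels):
    """Score pairs with a weighted sum of indices; reward weightings whose
    ranking separates pairs that form links (1) from those that do not (0)."""
    scores = [sum(w * f for w, f in zip(weights, row)) for row in features]
    return roc_auc_score(labels, scores)


def evolve_weights(features, labels, pop_size=30, generations=50, sigma=0.1):
    """Bare-bones evolutionary loop: keep the fitter half, mutate to refill."""
    n = len(features[0])
    population = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda w: fitness(w, features, labels), reverse=True)
        parents = population[: pop_size // 2]
        children = [[w + random.gauss(0, sigma) for w in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=lambda w: fitness(w, features, labels))
```

The evolved weights then serve double duty: they rank candidate links, and their relative sizes hint at which indices matter most for the network at hand.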

To test our method, we begin by examining one of the largest social networks in existence, namely Twitter. Below, we visualize the network of reciprocal replies for users who interacted within a single week in late 2008.

Given information about interactions in a given week, we seek to predict links in a future week.

Our evolved predictors exhibit a thousand-fold improvement over random link prediction, and a substantial improvement over individual indices used in isolation. The predictor also suggests possible factors which may be driving the evolution of Twitter reciprocal reply networks during the timespan of this study.

Our predictor reveals which topological and user-specific indices are most important in link detection for a given network. For example, index B is most often the top-ranking index detected by the predictor, while indices E and I also rank highly. This type of output can be helpful in detecting the salient features which may be driving the network’s evolution over time.

Returning to our original question, we ask: Given a snapshot of a Twitter reciprocal reply network, is it possible to predict the links which will occur in the future?

One approach is to predict a link between all of the people in the network, but clearly this is not the approach we wish to take. For example, in the case of a modestly sized network (say N = 30,000 nodes), the number of potential links is nearly half a billion! If we’re trying to recommend books to a shopper or potential dates to a person using a dating service, we certainly would not want to put every possible candidate in front of them. It would be nice if we could recommend 10 books (or dates) and have the majority of those suggested links be successful.
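To see where that count comes from, every unordered pair of the N nodes is a candidate link:

```python
# Every unordered pair of the N nodes is a potential link.
N = 30_000
candidate_links = N * (N - 1) // 2
print(f"{candidate_links:,}")  # 449,985,000 -- nearly half a billion pairs
```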

In the language of signal processing, we hope to have a true positive rate (i.e., the percentage of time you’re actually right about the links that you predict) that is greater than the false positive rate (i.e., the percentage of time you’ve issued a false alarm). This relationship can be captured by the receiver operating characteristic (ROC) curve shown below.

The receiver operating characteristic (ROC) curve compares the true positive rate (TPR) and the false positive rate (FPR). The ROC curve shown here depicts a classifier (our link predictor) for which TPR > FPR.

The area under the curve (AUC) provides one way of measuring the relative success of one’s method. An AUC of 0.50 means that the true positive rate equals the false positive rate (no better than chance), while an AUC > 0.50 indicates that the true positive rate exceeds the false positive rate. As shown in the picture above, our AUC is well above 0.50; in fact, it is approximately 0.72, which is good!
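If you want to reproduce this kind of measurement yourself, scikit-learn makes it a one-liner; the labels and predictor scores below are made up purely for illustration:

```python
from sklearn.metrics import roc_auc_score, roc_curve

# 1 = the link actually formed in the later week, 0 = it did not.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
# Scores assigned to each candidate pair by some link predictor.
y_score = [0.90, 0.40, 0.65, 0.80, 0.50, 0.10, 0.30, 0.20, 0.45, 0.05]

fpr, tpr, _ = roc_curve(y_true, y_score)       # points on the ROC curve
print("AUC:", roc_auc_score(y_true, y_score))  # 0.5 = chance, 1.0 = perfect
```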

These results are exciting! We’ve put together a research paper in which we describe our analysis and algorithm in more detail (http://arxiv.org/abs/1304.6257). Although we focused on Twitter in this investigation, our methodology is general and may apply to link prediction in many other types of time-varying networks, such as disease or crime networks. It could also improve the “friends you may know” feature offered by many social networking services.

Here is a short video summarizing our work on link prediction:

What’s the Most Important Theorem?

Mathematical truths are organized in an incredibly structured manner. We start with the basic properties of the natural numbers, called axioms, and slowly, painfully work our way up, reaching the real numbers, the joys of calculus, and far, far beyond. To prove new theorems, we make use of old theorems, creating a network of interconnected results—a mathematical house of cards.

So what’s the big-picture view of this web of theorems? Here, we take a first look at a part of the “Theorem Network” and uncover surprising facts about which theorems turn out to be important. This is blatantly fun for us. Really.

Let’s go through an example starting with the real numbers. Mathematicians like to write these numbers as ℝ, and here we’ll start by bravely assuming that they exist. One result that follows from the existence of ℝ: given a real number a belonging to ℝ, we can find a natural number n (e.g., 1, 2, 3, …) such that n > a. This is known as the Archimedean property. To visualize this relationship, we draw an arrow from the existence of ℝ to the Archimedean property:

[Figure: an arrow from the existence of ℝ to the Archimedean property]

Now, the fact that real numbers satisfy the Archimedean property tells us something about sets that contain them. For example, more than a century ago, two guys named Heine and Borel used the Archimedean property to help prove their glorious, eponymous theorem. We’ll now add an arrow leading from the Archimedean property to the Heine–Borel theorem, and we’ll include the one other component Heine and Borel needed:

[Figure: arrows leading into the Heine–Borel theorem from the Archimedean property and De Morgan’s laws]

All right: who is this De Morgan and what are his laws? Back in the mid-1800s, Augustus De Morgan dropped this bit of logical wizardry on the masses: “the negation of a conjunction is the disjunction of the negations.” We know, really exciting words. If it’s not true that both A and B are true, then this is the same as saying that at least one of A or B is not true; in symbols, not (A and B) is equivalent to (not A) or (not B). Better?
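And if you’d rather let a computer do the convincing, here is a brute-force check over all four truth assignments (Python, just for fun):

```python
from itertools import product

# De Morgan's law: not (A and B) is equivalent to (not A) or (not B).
for A, B in product([True, False], repeat=2):
    assert (not (A and B)) == ((not A) or (not B))
print("De Morgan's law holds for every combination of A and B.")
```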

Before diving into a larger network, let’s think some more about these links. One could prove the Fundamental Theorem of Calculus (which sounds important, but that could just be good branding) with nothing more than the axioms of ZFC set theory. But such a proof would be so long and tedious that any hope of conveying a clear understanding would be lost. Imagine taking all the atoms that make up a duck and trying to stick them together to create a duck; this would be the worst Lego kit ever. And so in any mathematical analysis textbook, the theorems contain small stories of logic that are meaningful to mathematicians, and theorems that are connected are neither too close nor too far apart.

For this post, what we’ve done is take all of the theorems contained in the third edition of Walter Rudin’s Principles of Mathematical Analysis and display them as nodes in a network. As with our simple networks above, a directed edge is drawn from Theorem A to Theorem B if the proof of B explicitly relies on A. Here’s the full network:

[Figure: the full theorem network for Rudin’s Principles of Mathematical Analysis]

Node size weighted by total incoming degree, colored by chapter, and laid out by Gephi’s Force Atlas.
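For the curious, building such a network takes only a few lines with networkx; the theorem names below are a tiny made-up subset standing in for Rudin’s full list:

```python
import networkx as nx

# A directed edge A -> B means the proof of theorem B explicitly uses theorem A.
G = nx.DiGraph()
G.add_edges_from([
    ("Existence of R",       "Archimedean property"),
    ("Archimedean property", "Heine-Borel"),
    ("De Morgan's laws",     "Heine-Borel"),
    ("Heine-Borel",          "Extreme value theorem"),
])

# Degrees drive the node sizes in the figures: incoming degree counts how many
# proofs use a theorem directly, outgoing degree counts how many it feeds into.
for theorem in G:
    print(theorem, G.in_degree(theorem), G.out_degree(theorem))
```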

We find that Lebesgue theory (capstoned by Lebesgue Dominated Convergence) lives on the fringe, not nearly as tied up with the properties of the real numbers as the Riemann–Stieltjes integral or the integration of differential forms. Visually, it appears that the integration of differential forms and functions of several variables rely the most on prior results. Over on the right, we’ve got things going on with sequences and series, where the well-known Cauchy convergence criterion is labeled. By sizing the nodes proportional to their outgoing degree (i.e., the number of theorems they lead to), we observe that the basic properties of ℝ, of sets, and of topology (purple) lie at the core.

By considering the difference between outgoing and incoming degrees, we can find the most fundamental result (the largest excess of outgoing over incoming degree, or net outgoing degree) and the most important, “end of the road” result (the largest excess of incoming over outgoing degree, or net incoming degree). In Rudin’s text, the most fundamental result is De Morgan’s Laws, and the most important result is the Multivariate Change of Variables in Integration Theorem (MCVIT, and that’s a mouthful).

So the Fundamental Theorem of Calculus falls short of the mark, with a net incoming degree of 19, not even half of MCVIT’s net incoming degree of 45. And it is not the axioms of the real numbers that are the most fundamental: the Existence of ℝ has a net outgoing degree of 94, while the properties of sets shown by De Morgan have a whopping net outgoing degree of 122. Larry Page’s PageRank (the original algorithm behind Google) and Jon Kleinberg’s HITS algorithm both also rate MCVIT as the most important result.
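These rankings are straightforward to recompute on any such graph. Here is a small sketch with networkx, again on a toy stand-in for the full Rudin network (the real data, of course, gives the rankings quoted above):

```python
import networkx as nx

# Toy theorem graph: an edge A -> B means the proof of B uses A.
G = nx.DiGraph([
    ("De Morgan's laws", "Set identities"),
    ("De Morgan's laws", "Heine-Borel"),
    ("Existence of R", "Archimedean property"),
    ("Archimedean property", "Heine-Borel"),
    ("Heine-Borel", "MCVIT"),
    ("FTC", "MCVIT"),
])

# Net outgoing degree: "most fundamental"; net incoming degree: "end of the road".
net_out = {t: G.out_degree(t) - G.in_degree(t) for t in G}
print("most fundamental:", max(net_out, key=net_out.get))
print("most important:  ", min(net_out, key=net_out.get))

# Two alternative importance rankings mentioned above.
pr = nx.pagerank(G)
print("PageRank winner: ", max(pr, key=pr.get))
hubs, authorities = nx.hits(G)
print("HITS authority:  ", max(authorities, key=authorities.get))
```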

Would you agree that MCVIT is the most “important” result in Rudin’s text? It could just be the most technical. We have only used a few lenses through which one might choose to evaluate the importance of theorems, so let us know what you think, or give it a try. Here’s a link to the Gephi files, containing all of the data used here.

Lastly, the network itself can be built differently by changing which theorems are included, or which are used in proofs. The resulting structures combine historical development with the author’s understanding. The goal of new textbooks is, in part, to organize the results in the most understandable fashion. With this view, we can start to think of the Theorem Network as the natural structuring of complex mathematical ideas for the human mind.

Now, one might idly think of extending this analysis to all of human knowledge. In that direction, Griff over at Griff’s Graphs has been making some very nice pictures leveraging the work of all those who edit Wikipedia.