Category Archives: networks

How our storytelling nature means we deeply misunderstand the mechanics of fame (and much else…)

Should the Mona Lisa be our most famous painting?

Was Harry Potter destined to (repeatedly) sweep the globe?

What would happen to everyone and everything famous if we ran the experiment that is our world over again?

Find out why fame is truly unpredictable, how it lives and dies entirely in our social stories, and why “… there is no such thing as fate, only the story of fate” in a current Nautilus Magazine piece by the Computational Story Lab’s co-team leader Peter Dodds:

“Homo Narrativus and the Trouble with Fame: We think that fame is deserved. We are wrong.”  

Nautilus is a new, design-driven publication on science published both online (free) and in print (unfree).  The Nautilus team is creating a beautiful showcase for scientific knowledge, and we encourage you to explore everything they have on offer.

Now online: the Dow Jones Index of Happiness

Total excitement, people: our website hedonometer.org has gone live.  We’re measuring Twitter’s happiness in real time.  Please check it out!

If you’re still here, here’s the blurb from the site’s about page:

Happiness: It’s what most people say they want. So how do we know how happy people are? You can’t improve or understand what you can’t measure. In a blow to happiness, we’re very good at measuring economic indices, and this means we tend to focus on them. With hedonometer.org we’ve created an instrument that measures the happiness of large populations in real time.

Our hedonometer is based on people’s online expressions, capitalizing on data-rich social media, and we’re measuring how people present themselves to the outside world. For our first version of hedonometer.org, we’re using Twitter as a source but in principle we can expand to any data source in any language. We’ll also be adding an API soon.
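
For the curious: at its core, the instrument averages the happiness scores of individual words. Here’s a minimal sketch of that averaging step, assuming a hypothetical word-score table (the real instrument uses the labMT word list, with scores running from 1 to 9; the scores below are invented placeholders):

```python
# Minimal sketch of word-level happiness averaging, hedonometer-style.
# The scores below are invented placeholders, not real labMT values.
happiness = {"happy": 8.3, "laughter": 8.5, "rain": 5.0, "traffic": 3.7}

def text_happiness(text):
    """Average the happiness scores of the scored words in a text."""
    words = text.lower().split()
    scores = [happiness[w] for w in words if w in happiness]
    return sum(scores) / len(scores) if scores else None

print(text_happiness("laughter in the rain"))  # (8.5 + 5.0) / 2 = 6.75
```

Averaged over the many millions of words posted each day, this simple signal settles down enough to track a whole population’s mood.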

So this is just a start – we invite you to explore the Twitter time series, let us know what you think, and follow the daily updates through the hedonometer Twitter feed.

A data-driven study of the patterns of life for 180,000 people

Here at the Computational Story Lab, some of us commute by foot, some by car, and a few deliver themselves by bike, even in the middle of our cold, snowful Vermont winter.  Occasionally, we transport ourselves over very long distances in magic flying tubes with wings to attend conferences, to see family, or for travel.  So what do our movement patterns look like over time?  Are there distinct kinds of movement patterns as we look across populations, or are they variations on a single theme?

Inspired by an analysis of mobile phone data by Marta Gonzalez at MIT, James Bagrow at Northwestern, and colleagues, we used 37 million geotagged tweets to characterize the movement patterns of 180,000 people during their 2011 travels. We used the standard deviation of their position, a.k.a. the radius of gyration, as a reflection of their movement. As an example, below we plot a dot for each geotagged tweet we found posted in the San Francisco Bay area, colored by the author’s radius of gyration.
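
For concreteness, here’s a small sketch of the radius of gyration computation, assuming tweet positions have already been projected into kilometers (the function and its inputs are illustrative; raw latitude/longitude would first need a geographic projection or haversine distances):

```python
import numpy as np

def radius_of_gyration(points_km):
    """Standard deviation of positions about the centroid, in km.

    points_km: (N, 2) array of a single person's tweet positions,
    already projected into kilometers.
    """
    points = np.asarray(points_km, dtype=float)
    center = points.mean(axis=0)  # the person's average location
    return np.sqrt(np.mean(np.sum((points - center) ** 2, axis=1)))

# A commuter bouncing between two spots 10 km apart:
print(radius_of_gyration([[0, 0], [10, 0], [0, 0], [10, 0]]))  # -> 5.0
```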

The Bay Area is shown with a dot for each tweet, colored by the radius of gyration of its author. The color scale is logarithmic, so we can compare people with very different habits.

You can see from the picture that there are many people with a radius near 100 km tweeting from downtown San Francisco. This pattern could reflect a concentration of tourists visiting the area, or individuals who live downtown and travel for work or pleasure. Images for New York City, Chicago, and Los Angeles are also quite beautiful.

In the image below, we rotated every individual’s movement pattern so that the origin represents their average location, and the horizontal line heading to the left represents their principal axis (most likely the path from home to work). We also stretched or shrunk the vertical and horizontal axes for each individual, so that everyone could fit on the same picture. Basically, we have a heatmap of collective movement, with each individual in their own intrinsic reference frame.  The immediate good news for these kinds of data-driven studies is that we see a very similar form to those found for mobile phone data sets.  Apart from being a different social signal, geotagged tweets also have much better spatial resolution than mobile phone calls, which are referenced to the nearest cellphone tower.
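
Here’s a sketch of that rotation-and-rescaling step for a single person, using the eigenvectors of their position covariance to find the principal axis. This is a simplified stand-in for the full procedure (it ignores, for instance, the sign convention that places home to the right of the origin):

```python
import numpy as np

def intrinsic_frame(points_km):
    """Map one person's positions into their intrinsic reference frame."""
    X = np.asarray(points_km, dtype=float)
    X = X - X.mean(axis=0)                # origin = average location
    evals, evecs = np.linalg.eigh(np.cov(X.T))
    order = np.argsort(evals)[::-1]       # largest-variance axis first
    Y = X @ evecs[:, order]               # rotate principal axis onto x
    return Y / Y.std(axis=0)              # stretch/shrink so everyone fits
```

Stacking these rescaled point clouds across all 180,000 individuals yields a collective heatmap like the one below.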

Movement pattern exhibited by 180,000 individuals in 2011, as inferred from 37 million geolocated tweets. Colormap shows the probability density in log10. Note that despite the resemblance, this image is neither a nested rainbow horseshoe crab, nor the Mandelbrot set.

Several features of the map reveal interesting patterns. First, the teardrop shape of the contours demonstrates that people travel predominantly along their principal axis, with deviations becoming shorter and less frequent as they move farther away. Second, the appearance of two spatially distinct yellow regions suggests that people spend the vast majority of their time near two locations. We refer to these locations as the work and home locales, where the home locale is centered on the dark red region right of the origin, and the work locale is centered just left of the origin.

Finally, we see a clear horizontal asymmetry: variation in movement around the home locale is more isotropic than variation around the work locale. We suspect this reflects the tendency to be more familiar with the surroundings of one’s home, and to explore those surroundings in a more social context. The up-down symmetry demonstrates the remarkable consistency of the movement patterns revealed by the data.

We see a clear separation between the most likely and second most likely position.

Looking just at the messages posted along the work-home corridor, we see a left-skewed distribution: movement from home in the direction opposite work is highly unlikely.

The isotropy ratio shows the change in the probability density’s shape as a function of radius.

Above we see that individuals who move around a lot have a much larger variation in their positions along their principal axis, exhibiting a less circular pattern of life than people who stay close to home. Remarkably, the isotropy ratio decays logarithmically with radius.
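
To make “decays logarithmically” concrete: with each person’s radius and isotropy ratio in hand, a linear fit in log10(r) captures the trend (the numbers below are invented for illustration, not our measured values):

```python
import numpy as np

# Hypothetical measurements: radius of gyration (km) and isotropy ratio
# (spread along the minor axis divided by spread along the principal axis).
radii = np.array([1.0, 10.0, 100.0, 1000.0])
iso = np.array([0.72, 0.55, 0.41, 0.26])

# Logarithmic decay means iso ~ a + b * log10(r) with b < 0.
b, a = np.polyfit(np.log10(radii), iso, 1)
print(f"iso = {a:.2f} {b:+.2f} * log10(r)")
```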

Finally, we grabbed messages from the most prolific tweople, those 300 champions who had posted more than 10,000 geotagged messages in 2011. We received 10% of these messages through our gardenhose feed from Twitter. Below, we plot the times during the week that they post from their most frequently visited location. These folks most likely have the geotag switch on for all messages, and exhibit a very regular routine.

A robust diurnal cycle is observed in the hourly time of day at which statuses are updated, with those from the mode location (black curve) occurring more often than other locations (red curve) in the morning and evening.

Peaks in activity are seen in the morning (8-10am) and evening (10pm-midnight), separated by lulls in the afternoon (2-4pm) and overnight (2-4am) hours.  As we and our friend Captain Obvious would expect, people tend to tweet more from their home locale than any other locale (red curve) in the morning and evening.
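
Extracting that cycle amounts to binning timestamps by local hour of day; here’s a toy sketch (the timestamps are made up):

```python
from collections import Counter
from datetime import datetime

def hourly_counts(timestamps):
    """Count status updates by local hour of day (0-23)."""
    return Counter(t.hour for t in timestamps)

times = [datetime(2011, 5, 2, 8, 15), datetime(2011, 5, 2, 9, 40),
         datetime(2011, 5, 3, 3, 5)]
print(hourly_counts(times))  # Counter({8: 1, 9: 1, 3: 1})
```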

Bottom line: Despite our seemingly different patterns of life, we are remarkably similar in the way we move around. Our walks are a far cry from random.

Next up: We’ll examine the emotional content of tweets as a function of distance.  Is home where the heart is?

For more details on these results, see our paper Happiness and the Patterns of Life: A Study of Geolocated Tweets.

Who will your friends be next week? The link prediction problem

Sitting in the student center of our university, I am surrounded by hundreds of students enjoying their lunch and socializing. They’re strengthening (and in some cases weakening) their social ties. Given the ability to observe this social network over time, we would see that some relationships flourish, while others disappear altogether.

This situation is not unique to university students. In fact, whether we’re studying the spreading of an infectious disease, or the growth of an organized crime network, the reality is that the relationships in these social networks are dynamic. They change. Being able to describe the current state of a network is important. Our goal is to predict the future state of the network by forecasting who will connect to whom. If we could make these sorts of predictions, then we may be able to better recommend or warn of future probable links, as well as come to understand mechanisms which may be driving the evolution of the network.

This brings us to the link prediction problem. Liben-Nowell and Kleinberg defined it as follows:

The link prediction problem asks, “Given a snapshot of a network at time t, can we predict new links which will occur at time t+1?”

In our work, we explore how topological similarity indices of a network can be combined with node specific data to develop a link prediction tool. Rather than pre-supposing that all similarity indices are of equal importance, we employ an evolutionary algorithm to evolve coefficients to be used in a linear combination of these similarity indices. Our approach has the advantage of being able to detect which similarity indices are more salient predictors and doesn’t require any knowledge of the type of network one may be working with.
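
As a rough illustration of the idea (and emphatically not our exact algorithm), the sketch below evolves a coefficient vector with a bare-bones keep-the-best-half-and-mutate loop, scoring each candidate by a rank-based AUC. The names index_matrix (similarity scores for candidate links) and labels (whether each link actually appeared) are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_auc(scores, labels):
    """Mann-Whitney AUC: how well scores separate true links from non-links."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels.astype(bool)
    n_pos, n_neg = pos.sum(), len(labels) - pos.sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def evolve_weights(index_matrix, labels, generations=200, pop_size=50):
    """Evolve coefficients for a linear combination of similarity indices."""
    pop = rng.normal(size=(pop_size, index_matrix.shape[1]))
    for _ in range(generations):
        fitness = [rank_auc(index_matrix @ w, labels) for w in pop]
        keep = pop[np.argsort(fitness)[-(pop_size // 2):]]  # best half survive
        pop = np.vstack([keep, keep + rng.normal(scale=0.1, size=keep.shape)])
    return max(pop, key=lambda w: rank_auc(index_matrix @ w, labels))
```

The evolved weights then double as a readout: indices with large coefficients are the ones doing the predictive work.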

To test our method, we begin by examining one of the largest social networks in existence, namely Twitter. Below, we visualize the network of reciprocal replies for users who interacted within a single week in late 2008.

Given information about interactions in a given week, we seek to predict links in a future week.

Our evolved predictors exhibit a thousandfold improvement over random link prediction, and a substantial improvement over individual indices used in isolation. The predictor also suggests possible factors which may be driving the evolution of Twitter reciprocal reply networks during the timespan of this study.

Our predictor reveals which topological or user-specific indices are most important in link detection for a given network. For example, index B is most often the top-ranking index detected by the predictor, while indices E and I are also important. This type of output can be helpful in detecting the salient features which may be driving the network’s evolution over time.

Returning to our original question, we ask: Given a snapshot of a Twitter reciprocal reply network, is it possible to predict the links which will occur in the future?

One approach is to predict a link between every pair of people in the network, but clearly this is not the approach we wish to take. For example, in the case of a modestly sized network (say N=30,000 nodes), the number of potential links is roughly half a billion! If we’re trying to recommend books to a shopper or potential dates to a person using a dating service, we certainly would not want to suggest half a billion possibilities. It would be nice if we could recommend 10 books (or dates) and have the majority of those suggested links be successful.
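
A quick sanity check on that count:

```python
from math import comb

# Potential undirected links among N = 30,000 nodes: N * (N - 1) / 2.
print(f"{comb(30_000, 2):,}")  # 449,985,000 -- roughly half a billion
```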

In the language of signal processing, we hope to have a true positive rate (i.e., the % of time you’re actually right about the links that you predict) that is greater than the false positive rate (i.e., the % of time you’ve issued a false alarm). This relationship can be captured by the receiver operating characteristic (ROC) curve shown below.

The receiver operating characteristic (ROC) curve compares the true positive rate (TPR) and the false positive rate (FPR). The ROC curve shown here depicts a classifier (our link predictor) for which TPR>FPR.

The area under the curve (AUC) provides one way of measuring the relative success of one’s method. An AUC of 0.50 suggests that the true positive rate is equal to the false positive rate, while an AUC > 0.50 indicates that the true positive rate is greater than the false positive rate. As shown in the picture above, our AUC is well above 0.50; in fact it is approximately 0.72, which is good!
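
If you’d like to play with AUC yourself, here’s a toy computation using scikit-learn (the labels and scores below are made up, not from our study):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# True links (1) tend to score higher than non-links (0), so AUC > 0.5.
labels = np.array([1, 1, 1, 0, 0, 0, 0, 0])
scores = np.array([0.9, 0.7, 0.4, 0.6, 0.3, 0.2, 0.2, 0.1])
print(roc_auc_score(labels, scores))  # 0.933...
```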

These results are exciting! We’ve put together a research paper in which we describe our analysis and algorithm in more detail (http://arxiv.org/abs/1304.6257). Although we focused on Twitter in this investigation, our methodology is general and may apply to link prediction in many other types of time-varying networks, such as disease or crime networks. It could also improve the “friends you may know” feature offered by many social networking services.

Here is a short video summarizing our work on link prediction:

What’s the Most Important Theorem?

Mathematical truths are organized in an incredibly structured manner. We start with the basic properties of the natural numbers, called axioms, and slowly, painfully work our way up, reaching the real numbers, the joys of calculus, and far, far beyond. To prove new theorems, we make use of old theorems, creating a network of interconnected results—a mathematical house of cards.

So what’s the big picture view of this web of theorems? Here, we take a first look at a part of the “Theorem Network”, and uncover surprising facts about the ones that are important.  This is blatantly fun for us. Really.

Let’s go through an example starting with the real numbers.  Mathematicians like to write these numbers as $\mathbb{R}$, and here we’ll start by bravely assuming that they exist. One result that follows from the existence of $\mathbb{R}$: Given a real number $a$ belonging to $\mathbb{R}$, we can find a natural number $n$ (e.g. 1, 2, 3 …) such that $n>a$. This is known as the Archimedean property.  To visualize this relationship, we draw an arrow from the existence of $\mathbb{R}$ to the Archimedean property:

[Figure: arrow from the existence of $\mathbb{R}$ to the Archimedean property]
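
In symbols, the Archimedean property reads:

$$\forall\, a \in \mathbb{R} \;\; \exists\, n \in \mathbb{N} \ \text{such that} \ n > a.$$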

Now, the fact that real numbers satisfy the Archimedean property tells us something about sets that contain them. For example, more than a century ago, two guys named Heine and Borel used the Archimedean property to help prove their glorious, eponymous theorem.  We’ll now add an arrow leading from the Archimedean property to the Heine–Borel theorem, and we’ll include the one other component Heine and Borel needed:

[Figure: arrows from the Archimedean property and De Morgan’s Laws to the Heine–Borel theorem]

All right: who is this De Morgan and what are his laws?  Back in the mid-1800s, Augustus De Morgan dropped this bit of logical wizardry on the masses: “the negation of a conjunction is the disjunction of the negation.” We know, really exciting words.  If it’s not true that both A and B are true, then this is the same as saying either A or B or both are not true.  Better?

Before diving into a larger network, let’s think some more about these links.  One could prove the Fundamental Theorem of Calculus (which sounds important but could be just good branding) with nothing more than the axioms of ZFC set theory. But such a proof would be so long and tedious that any hope of conveying a clear understanding would be lost.  Imagine taking all the atoms that make up a duck and trying to stick them together to create a duck; this would be the worst Lego kit ever.  And so in any mathematical analysis textbook, the theorems contain small stories of logic that are meaningful to mathematicians, and theorems that are connected are neither too close nor too far apart.

For this post, what we’ve done is to take all of the theorems contained in the third edition of Walter Rudin’s Principles of Mathematical Analysis, and displayed them as nodes in a network. As with our simple networks above, directed edges are drawn from Theorem A to Theorem B if the proof of B relies on A explicitly. Here’s the full network:

Node size weighted by total incoming degree, colored by chapter, and laid out by Gephi’s Force Atlas.

We find that Lebesgue theory (capstoned by Lebesgue Dominated Convergence) lives on the fringe, not nearly as tied up with the properties of the real numbers as the Riemann-Stieltjes integral or the integration of differential forms. Visually, it appears that the integration of differential forms and functions of several variables rely the most on prior results. Over on the right, we’ve got things going on with sequences and series, where the well-known Cauchy convergence criterion is labeled. By sizing the nodes proportional to their outgoing degree (i.e., the number of theorems they lead to), we observe that the basic properties of $\mathbb{R}$, of sets, and of topology (purple) lie at the core.

By considering the difference between outgoing and incoming degrees, we can find the most fundamental result (highest net outgoing degree, i.e., outgoing minus incoming) and the most important or “end of the road” result (highest net incoming degree, i.e., incoming minus outgoing).  In Rudin’s text, the most fundamental result is De Morgan’s Laws, and the most important result is the Multivariate Change of Variables in Integration Theorem (MCVIT, that’s a mouthful).

So the Fundamental Theorem of Calculus falls short of the mark with a net incoming degree of 19, not even half of MCVIT’s net incoming degree of 45. And it is not the axioms of the real numbers that are the most fundamental, with the Existence of $\mathbb{R}$ having a net outgoing degree of 94, but instead the properties of sets shown by De Morgan, with a whopping net outgoing degree of 122. Larry Page’s PageRank (the original algorithm behind Google) and Jon Kleinberg’s HITS algorithm also both rate the MCVIT as the most important result.
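
If you’d like to tinker with these rankings yourself, here’s a sketch of the degree bookkeeping on a toy five-theorem network using networkx (the edges below are illustrative, not Rudin’s actual dependency list):

```python
import networkx as nx

# An edge A -> B means theorem A is used explicitly in the proof of B.
G = nx.DiGraph([
    ("De Morgan", "Heine-Borel"),
    ("De Morgan", "MCVIT"),
    ("Existence of R", "Archimedean"),
    ("Archimedean", "Heine-Borel"),
    ("Heine-Borel", "MCVIT"),
])

net_out = {v: G.out_degree(v) - G.in_degree(v) for v in G}
print(max(net_out, key=net_out.get))  # most fundamental: 'De Morgan'
print(min(net_out, key=net_out.get))  # "end of the road": 'MCVIT'
print(nx.pagerank(G))                 # PageRank as an alternative ranking
```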

Would you agree that MCVIT is the most “important” result in Rudin’s text? It could just be the most technical.  We have only used a few lenses through which one might choose to evaluate the importance of theorems, so let us know what you think, or give it a try. Here’s a link to the Gephi files, containing all of the data used here.

Lastly, the network itself can be built differently by changing which theorems are included, or which are used in proofs. The resulting structures combine historical development with the author’s understanding. The goal of new textbooks is, in part, to organize the results in the most understandable fashion. With this view, we can start to think of the Theorem Network as the natural structuring of complex mathematical ideas for the human mind.

Now, one might idly think of extending this analysis to all of human knowledge. In that direction, Griff over at Griff’s Graphs has been making some very nice pictures leveraging the work of all those who edit Wikipedia.