Samuel V. Scarpino, Ph.D.

Complex Systems

University of Vermont

Thursday, May 5, 2016

4:00 - 5:00 pm

Perkins 101

**Abstract:**

The primary tool for modern population genetic inference is coalescent theory, which provides a retrospective, mathematical framework for relating genetic variation to historical evolutionary processes. Because many pathogens mutate so rapidly, their evolutionary and population-level processes are inextricably linked. Therefore, studying epidemics requires models able to connect evolution to ecology. The emerging field of phylodynamics seeks to leverage the genetic variation of pathogens to investigate their complex epidemiological dynamics through the use of mathematical transmission models. Linking these models with the genetic sequence data now routinely collected during disease outbreaks provides an unprecedented opportunity to advance our scientific understanding of epidemics and pathogen establishment. In this talk, I will present new results on the expected rate of coalescence for diseases spreading through social networks and demonstrate that an unbiased estimate of the coalescent rate can be obtained when only a subset of cases is reported. With these results, we will explore the utility of coalescent models during the 2014-15 Ebola outbreak in West Africa and the ongoing whooping cough outbreak in the USA.
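As a back-of-the-envelope illustration of the coalescent framework (the classical Kingman model with constant population size N, not the network-structured model of the talk), one can simulate the waiting times between coalescence events, whose rates depend only on the number of surviving lineages:

```python
import random

def time_to_mrca(n, N, rng):
    """Kingman coalescent: with k lineages the waiting time to the next
    coalescence is exponential with rate k(k-1)/(2N) per generation."""
    t, k = 0.0, n
    while k > 1:
        t += rng.expovariate(k * (k - 1) / (2 * N))
        k -= 1
    return t

rng = random.Random(1)
n, N = 10, 1000
sims = [time_to_mrca(n, N, rng) for _ in range(20000)]
mean_t = sum(sims) / len(sims)
expected = 2 * N * (1 - 1 / n)   # textbook result: E[T_MRCA] = 2N(1 - 1/n)
print(round(mean_t, 1), expected)
```

The simulated mean time to the most recent common ancestor should approach the textbook value 2N(1 - 1/n); the talk's contribution concerns how this rate changes when transmission happens over a social network.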

ADA: Individuals requiring accommodations please contact Doreen Taylor at (802) 656-3166.


Monday, May 2, 2016

6:30 PM

Perkins 101

Thanks for a great semester of Math Club! We hope to see you at our last meeting of the semester. There will be FREE PIZZA, origami and games.

If you know you will be coming, please contact the Math Club President at rbayersd@uvm.edu so we can get an idea of how many pizzas to order. But if you decide to come at the last minute, that's okay too.

We hope to see you there!

Nick Allgaier, PhD

Postdoctoral Associate in Psychiatry, University of Vermont

Friday, April 29, 2016

4:00 - 5:00 PM

Kalkin 003

**Abstract:**

The neurological mechanisms underlying addiction in humans are still not well understood. Though the reward-based reinforcement learning circuitry of the limbic system has been implicated, it is not clear why many people who partake in recreational drug and alcohol use avoid getting hooked, while others succumb to addiction. In this talk we describe a neurodiagnostic methodology comprised of nonlinear functional mapping (NFM), a procedure developed at UVM for applying an evolutionary algorithm to functional magnetic resonance imaging (fMRI) data, and subsequent classification by support vector machine (SVM). NFM is a symbolic regression algorithm that searches for models relating the activity in different regions of interest (ROI) in the brain, as represented by fMRI signal, without assuming linearity. Summary statistics of the models inferred by NFM indicate levels of pairwise coordination among ROI, and diagnosis of addiction is accomplished by SVM based on these coordination levels.

We apply this methodology to resting-state fMRI time series from a cohort of 25 addicted cigarette smokers and 30 control subjects. The resulting cross-validated classifier correctly diagnoses all 25 smokers while misdiagnosing only 5 control subjects. Further, many of the top-ranking SVM features represent coordination among ROI in the prefrontal cortex, as well as coordination between these ROI and both cortical and subcortical ROI involved in the limbic system. The importance of coordination among these particular ROI in addiction diagnosis hints at a mechanism of prefrontal executive control over the limbic system, whose efficacy may be a key determining factor in a subject's risk of addiction.
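The shape of the pipeline (per-subject coordination statistics feeding a classifier) can be sketched with synthetic data. This is a toy stand-in, not NFM or the study's fMRI features, and a simple perceptron replaces the SVM:

```python
import random

rng = random.Random(0)

def subject(mean_coord):
    # one synthetic "subject": six pairwise-coordination scores in [0, 1]
    return [min(1.0, max(0.0, rng.gauss(mean_coord, 0.1))) for _ in range(6)]

# hypothetical group difference: one group shows higher mean coordination
smokers  = [subject(0.7) for _ in range(25)]
controls = [subject(0.4) for _ in range(30)]
data = [(x, 1) for x in smokers] + [(x, -1) for x in controls]

# train a linear classifier (perceptron) on the coordination features
w, b = [0.0] * 6, 0.0
for _ in range(200):
    for x, y in data:
        if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
            w = [wi + y * xi for wi, xi in zip(w, x)]
            b += y

correct = sum(1 for x, y in data
              if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) > 0)
print(correct, "of", len(data), "classified correctly")
```

In the study the features come from NFM's symbolic-regression models and the classifier is a cross-validated SVM; the sketch only shows why well-separated coordination levels make linear classification possible.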

ADA: Individuals requiring accommodations please contact Doreen Taylor at (802) 656-3166.

**Abstract:** From the right perspective, everything is mathematical – even juggling. In this talk, Greg Warrington from the University of Vermont will describe how juggling relates to traditional mathematical fields such as graph theory and look at what happens when a person starts juggling randomly. In doing so, he’ll illustrate how these mathematical underpinnings can be useful even for accomplished jugglers. There will be numerous live demonstrations with the juggling of 0 to 7 objects.

We hope to see you there!

Generalizations and Applications

Sagun Chanillo, Rutgers University

Friday, April 8, 2016, 4:00 PM

Kalkin 004

**Abstract:**

The Fundamental Theorem of Calculus can be generalized in two ways in higher dimensions: via the Gagliardo-Nirenberg inequality, with its close connections to the isoperimetric inequality, and via the Moser-Trudinger inequality, which has powerful ramifications in conformal geometry. Recently a third inequality was discovered by Bourgain-Brezis which is very close to the Fundamental Theorem of Calculus in one dimension. We discuss generalizations of this inequality to nilpotent Lie groups and Riemannian symmetric spaces, and also a new proof of the original inequality of Bourgain-Brezis that allows us to extend its scope. Lastly, we provide applications of the Bourgain-Brezis inequality to the two-dimensional Navier-Stokes equations of fluid mechanics and the Maxwell equations of electromagnetism. This is joint work with Jean Van Schaftingen and Po-Lam Yung.


Mary Beth Ruskai, Emeritus Prof. of Mathematics, University of Massachusetts Lowell

Research Prof., Tufts University

Associate member, Institute for Quantum Computing, Waterloo, Canada

Friday, April 15, 2016, 4:00 PM

Kalkin 003

**Abstract:**

Entanglement is both one of the most puzzling aspects of quantum theory and a key component of powerful new methods of computation and communication. In contrast to classical systems, the conditional information of a quantum system can be negative. It is now known that this can be interpreted in terms of quantum correlations that can be used to transmit information by such mechanisms as quantum teleportation.

In quantum information theory, quantities like mutual and conditional information are defined using the von Neumann entropy. Key properties of the von Neumann entropy are closely associated with operator and trace inequalities, which can be proved using extremely elementary methods.

This talk will give an overview of, and introduction to, the concepts mentioned above. No prior knowledge of quantum theory is required and the mathematics is accessible to anyone with a good knowledge of linear algebra.
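The negative conditional information mentioned above can be checked numerically for a maximally entangled pair of qubits; a minimal sketch using only linear algebra:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop zero eigenvalues (0 log 0 = 0)
    return float(-np.sum(evals * np.log2(evals)))

# Bell state |phi+> = (|00> + |11>)/sqrt(2)
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_ab = np.outer(psi, psi)

# reduced state of subsystem B: partial trace over A
rho_b = rho_ab.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

S_ab = von_neumann_entropy(rho_ab)   # 0: the joint state is pure
S_b = von_neumann_entropy(rho_b)     # 1: the marginal is maximally mixed
print("S(A|B) =", S_ab - S_b)        # negative conditional entropy: -1 bit
```

For the Bell state the joint entropy S(AB) is 0 while the marginal entropy S(B) is 1 bit, so the conditional entropy S(A|B) = S(AB) - S(B) is -1 bit, something impossible for classical systems.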


A Panel Discussion with UVM Alumni and Community Professionals

Jeffrey Young,

Brian Orleans,

John Stanton-Geddes,

Polly Ramsey,

Leah Shulman,

Wednesday, April 6, 2016, 7:00 - 8:15 pm

Waterman Memorial Lounge

The UVM Statistics program will host a panel presentation of careers in Statistics and Data Science on Wednesday, April 6 from 7:00 - 8:15 pm in Waterman Memorial Lounge. Five speakers will talk about career opportunities at Dealer.com, ICF International, Cigna Insurance, and the Vermont Health Department. A question and answer session will follow. Refreshments will be served.

Save the date for the last few meetings of the semester:

April 2 - Hudson River Undergraduate Math Conference at Saint Michael's College

April 4 - Jim Bozeman's talk on gerrymandering

April 18 - Greg Warrington's talk

May 2 - End of year celebration

We will be building a modular origami creation as well as playing games such as Set.

Feel free to bring your favorite board game to play. We will provide some as well.

All are welcome. Bring a friend!

**When:** Thursday, February 4, 2016, 12:00 - 1:00 PM

**Place:** Math Conference Room

Bring your lunch.

**Speaker:** Ian M. Wanless, Monash University

**Title:** *Embedding small partial Latin squares in Cayley tables*

**Abstract:**

Sudoku puzzles are examples of a combinatorial object called a partial latin square (PLS). In 1974, Keedwell asked, for each n, what is the smallest PLS that cannot be embedded in any group table of order n. In 2012, Hirsch and Jackson asked what is the smallest PLS that can be embedded in an infinite group but not in any finite group. We answer both of these questions and a couple of related ones.
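To make the notion of embedding concrete, here is a hedged brute-force sketch, restricted to the cyclic groups Z_n and in no way the method of the talk: a PLS, given as (row, column, symbol) triples, embeds in the Cayley table of Z_n if rows, columns and symbols can be injectively mapped into Z_n so that every filled cell satisfies f(r) + g(c) = h(s) mod n.

```python
from itertools import product

def embeds_in_Zn(cells, n):
    """Brute-force check that a PLS (list of (row, col, symbol) triples)
    embeds in the Cayley table of Z_n via injective maps f, g, h with
    f(row) + g(col) = h(symbol) (mod n)."""
    rows = sorted({r for r, _, _ in cells})
    cols = sorted({c for _, c, _ in cells})
    for f in product(range(n), repeat=len(rows)):
        if len(set(f)) < len(rows):
            continue                       # f must be injective
        for g in product(range(n), repeat=len(cols)):
            if len(set(g)) < len(cols):
                continue                   # g must be injective
            h, ok = {}, True
            for r, c, s in cells:
                v = (f[rows.index(r)] + g[cols.index(c)]) % n
                if h.setdefault(s, v) != v or \
                   any(k != s and w == v for k, w in h.items()):
                    ok = False             # inconsistent or h not injective
                    break
            if ok:
                return True
    return False

# a 2x2 intercalate embeds in Z_2 (it *is* the Cayley table of Z_2)
intercalate = [(0, 0, 'a'), (0, 1, 'b'), (1, 0, 'b'), (1, 1, 'a')]
print(embeds_in_Zn(intercalate, 2))
```

For instance, any PLS with three distinct symbols cannot embed in a group of order 2, but may embed in Z_3; the talk's questions concern the smallest configurations for which all such embeddings fail.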

Other events planned for the semester include guest talks about actuarial math careers, cryptography, and juggling, as well as a trip to Saint Michael's College for the Hudson River Undergraduate Math Conference.


Nathan Dowlin

Princeton University

Thursday, December 10, 2015

4:00 – 5:00 PM

Kalkin 003

**Abstract:**

The last two decades have seen the construction of many knot invariants. The most powerful have been homology theories, of which there are two main types: those constructed using symplectic techniques, such as knot Floer homology, and those related to the representation theory of quantum groups, such as Khovanov homology and HOMFLY-PT homology. Despite the fundamental differences in these theories, they seem to be closely related. I will discuss both the known and conjectured relationships between them, with special attention to the conjectured spectral sequences from HOMFLY-PT and Khovanov homology to knot Floer homology.

ADA: Individuals requiring accommodations, please contact Doreen Taylor (802) 656-3166

Talk: Fake Simple Random Walks.

Speaker: Ewa Infeld, Dartmouth

Abstract: If we let two tokens do simple random walks on a connected graph, they will collide. Can we set them up so that they don't, but if you only see either one of them it looks like it's doing a simple random walk? What does that even mean? The answer is: sometimes. Let's see when.
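A small simulation (a toy example, not the construction of the talk) shows both halves of the phenomenon on a 6-cycle: tokens started at even distance do collide, while tokens started at odd distance never can, because synchronous steps preserve the parity of their distance, even though each token separately is a perfectly ordinary simple random walk.

```python
import random

rng = random.Random(42)

# a 6-cycle; both tokens move at every step
nbrs = {v: [(v - 1) % 6, (v + 1) % 6] for v in range(6)}

def steps_until_collision(a, b, max_steps=10_000):
    """Advance two independent simple random walks simultaneously;
    return the step at which they first share a vertex, else None."""
    for t in range(1, max_steps + 1):
        a, b = rng.choice(nbrs[a]), rng.choice(nbrs[b])
        if a == b:
            return t
    return None

# tokens starting at even distance collide
times = [steps_until_collision(0, 2) for _ in range(1000)]
print("even start, all collided:", all(t is not None for t in times))

# tokens at odd distance on this bipartite graph can never meet:
# each synchronous step changes their distance by -2, 0 or +2,
# so the parity of the distance is preserved forever
odd = [steps_until_collision(0, 1) for _ in range(100)]
print("odd start, collisions:", sum(t is not None for t in odd))
```

The interesting question in the talk is subtler: whether non-colliding tokens can be arranged on graphs where parity does not come to the rescue, while each still looks marginally like a simple random walk.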

**Robert D. Tortora, Ph.D.**

**Senior Fellow of Survey Methodology & Chief Methodologist**

**Survey Research, ICF International**

**Wednesday, September 23**

**12:00 PM**

**Perkins 101**

**Abstract:**

This talk will use the Total Survey Error (TSE) model to highlight a few of my experiences as a statistician working on surveys for the federal government and in the private sector. The TSE model has four major components: sampling error, coverage error, nonresponse error and measurement error. After a brief overview of each component of error, I will use the model to discuss my work experience conducting agriculture surveys for the USDA, dissecting the biggest and most expensive statistical undertaking in the world, viz., the US Census of Population and Housing, serving as an expert witness in a case against the NCAA, and starting and working on a worldwide survey conducted in over 125 countries each year, with a special emphasis on sub-Saharan Africa. A brief review of my current research on coverage error and the use of non-probability surveys will conclude the talk.

ADA: Individuals requiring accommodations, please contact the Office of Affirmative Action & Equal Opportunity at 656-3368

Title: Impact of Local Testing in a Targeted Therapy Setting with Companion Diagnostic Development

Abstract: In the pharmaceutical industry, one goal using pharmacogenomics information is to select patients who have certain characteristics that may respond to your treatment over the standard of care, or therapy of a competitor. Identification of patients is made through assays that are commonly DNA, RNA or protein based, may be multi-marker and/or may even be derived through a combination of technologies. To the health authority, assigning treatment to specific patients represents significant risk and requires the generation of a high quality companion diagnostic assay to ensure that a consistent patient population is robustly selected over time. This requires running a series of experiments to both verify and eventually validate that the assay performs according to strict specifications. In a clinical setting, there are several challenging factors that may negatively impact the quality of the diagnostic data one collects. In addition, the popularity of genomic screening presents new challenges that require open dialogue among academia, industry and regulators. These challenges, accompanied by actual examples, will be the focus of my initial presentation.

For the second part of my presentation, the focus will shift toward the role of the statistician in the pharmaceutical industry. There are many roles one may aspire to, including clinical trial statistician in early- or late-phase trials, clinical pharmacology, biomarker and diagnostics, and methods development. Although these roles are varied, there are commonalities in terms of training, the ability to communicate and collaborate, etc., that make one successful. A question and answer period will follow.

Faculty are welcome, too!

Dr. Galia Dafni

Department of Mathematics

Concordia University

Montreal, Quebec

Thursday, April 30, 2015, 4:00 PM

Kalkin 003

**Abstract:**

The mathematical field of harmonic analysis, named for "harmonics" in music, is also known as Fourier analysis, after the Frenchman Joseph Fourier. It is concerned with decomposing signals into their frequency components and reconstructing the signal from this data, and has had numerous applications, from describing the motion of a vibrating string or heat propagation (as studied by Fourier) to modern-day techniques in image processing and medical imaging, such as wavelets and compressed sensing.

Current research in harmonic analysis is much broader and has deep connections to other fields, especially complex analysis, partial differential equations, mathematical physics and number theory.

We will focus on the theory of Hardy spaces and how it illustrates the ideas of decomposition and reconstruction.

ADA: Individuals requiring accommodations, please contact Doreen Taylor at (802) 656-3166

This time, though, the mission was different. Cole and his fellow members of the Oncologic Drugs Advisory Committee would forge a new route through the realm of regulatory review of drugs in the United States. They would assist in the FDA’s first evaluation of a “biosimilar” product. Biosimilars are close copies of biological drugs, which are derived from living cells instead of the cocktail of chemicals that make up most medicines.

When chemically based drugs lose their patents, generic versions with identical components can easily reach the marketplace, typically lowering prices. Until now, the FDA had no mechanism to approve close copies of biologics, deemed too complex to ensure that similar but inexact alternatives were equally safe and reliable. The most popular biologics, used to treat cancer and autoimmune diseases, are also some of the most expensive drugs.

Under a mandate of the Affordable Care Act, with a goal to encourage competing products that could help lower costs, the FDA now has a way to evaluate biosimilars. The federal law permits approval of a product shown to be “highly similar” to a specific drug on the market, with “no clinically meaningful differences” in safety or effectiveness from that existing drug.

In March -- relying on the recommendation of Cole and the advisory panel, which applied those federal guidelines for the first time -- the FDA approved Zarxio. The new drug mimics the well-established Neupogen to fight infections in cancer patients undergoing chemotherapy and other treatment.

The FDA’s Center for Drug Evaluation and Research approves more than 100 new medications each year. Most never go through advisory committees.

Cole’s group only sees the tough cases, those that lack a “favorable risk profile” or raise alarm bells, making it unclear whether the potential benefits outweigh the risks, he says. “They only bring stuff to us if the question is difficult.”

Cole has participated in four reviews on the 13-member panel. His fellow members include oncologists and other medical professionals, a cancer patient or survivor and a consumer advocate. Cole is the only biostatistician.

“The level of the work isn’t that bad, but the level of responsibility is huge,” he says. “It would be terrible for a drug company to get a drug approved that doesn’t work out” or that causes harm, he says. Equally terrible is the prospect of denying, because of perceived risks, a drug that is actually safe and could help millions of sick people or maybe save lives.

“I think of it from the overarching public health perspective,” Cole says of his role. “Every time you make a drug available, you’re altering public policy.”

For 23 years, Cole has tied his biostatistics background to cancer. During a post-doctoral fellowship at the Dana-Farber Cancer Institute in Boston, he found he liked collaborating with oncologists and studying patient outcomes. It put him at the cross-section of numerical science and human medicine, where he could not only advance the analysis of data but also answer questions for cancer patients.

“I look for a well-designed study,” Cole says of his approach on the FDA panel. “The famous line is: ‘All studies have warts.’ None of them are perfect.”

Cole has his own system after reviewing the briefings provided by the FDA and the drug company. After he studies the information, he writes two statements -- one in support of approval, and one against it -- explaining his reasoning for each. The position that sounds most convincing tells him where to lean.

Then, Cole gets on a plane and heads to FDA headquarters in Silver Spring, Md., for the meeting. It’s more like a courtroom trial, and the committee is the jury.

Each side, the drug company and the FDA staff, presents its argument. Cole and the other committee members sit at a U-shaped table and can ask questions, and he revises his previous statements. The meeting, which usually lasts a day, also includes time for public comments.

“Oftentimes, they’re patients who come and tell us stories about what they’ve been through,” Cole says. “Hearing those statements really puts some perspective on what we’re doing.”

Unlike a courtroom jury, the panelists don’t come to a consensus on a verdict at the end of the day. They simply answer “yes” or “no,” to the FDA’s question: “Should we approve this drug?” They also can comment at that time, and Cole says he sometimes reads his written statement.

With Zarxio, he says, the answer was relatively clear. The vote in favor was unanimous.

The committee played a significant role in the biosimilars review process, ensuring its transparency to the public and bringing expertise to vet critical information, says Tim Irvin, a spokesman for the Center for Drug Evaluation and Research. For now, the committee will look at every upcoming application for a new biosimilar, he says.

“We appreciate getting their feedback,” Irvin says. “It gives everybody a chance to look at the data and make sure everything is as it should be.”

Cole doesn’t expect every biosimilar review to go as smoothly. A key difference is the narrower scope of the clinical trials to show that the characteristics of the new drug and the outcomes for patients are close to those of the existing drug. It’s foreseeable that the measurements might not line up, he says.

“If there’s one chink in the armor, it opens up a question,” he says. “And then you might need a big clinical trial to answer the question.”

The committee only makes recommendations, but the FDA usually follows them. Cole says he believes all the committee members feel the weight of their decision.

“You get a collection of people together, you have a collective wisdom,” he says. “So it’s not all on one person’s shoulders.”

Professor Mike Wilson

Department of Mathematics & Statistics

University of Vermont

Thursday, April 9, 2015, 4:00 PM

Kalkin 004

**Abstract:** The astrolabe is a special purpose analog computer. One can think of it as a "circular slide rule" for astronomical calculations. In this talk I will say a little about its history, demonstrate its parts, and show how it can be used to: tell time, predict the length of the day or night, predict the positions of stars (including the sun) and find ascendants (very important in casting horoscopes—just ask Geoffrey Chaucer). I will show how to make one and describe the mathematics behind it.

ADA: Individuals requiring accommodations, please contact Doreen Taylor at (802) 656-3166

Dr. John Voight

Department of Mathematics

Dartmouth College

Thursday, April 2, 2015, 4:00PM

Kalkin 004

**Abstract:**

There is a marvelous and deep connection between:

- surfaces obtained by gluing together copies of a triangle,

- triples of permutations whose product is the identity,

- finite index subgroups of triangle groups,

- bicolored graphs equipped with a cyclic orientation, and

- three-point branched covers of the complex projective line.

The consequences of these equivalences are myriad for geometry, arithmetic, combinatorics, and group theory.

In the first part of the talk, we introduce these connections via examples and pictures.
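The second item in the list, triples of permutations whose product is the identity, is easy to experiment with; a quick enumeration over S_3 (an illustrative aside, not part of the talk):

```python
from itertools import permutations, product

def compose(p, q):
    """(p . q)(i) = p[q[i]] for permutations stored as tuples."""
    return tuple(p[i] for i in q)

S3 = list(permutations(range(3)))
identity = tuple(range(3))

triples = [(a, b, c) for a, b, c in product(S3, repeat=3)
           if compose(compose(a, b), c) == identity]
print(len(triples))   # 36 = |S_3|^2, since c is forced to equal (a.b)^(-1)
```

Each such triple determines a cover of the projective line branched over three points; the talk explains how to pass between these descriptions.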

In the second part of the talk, we discuss their algorithmic aspects: specifically, we exhibit a numerical method that, given a permutation triple, computes equations for the associated three-point branched cover.

This is joint work with previous UVM graduate students Michael Klug, Michael Musty, and Sam Schiavone.

ADA: Individuals requiring accommodations, please contact Doreen Taylor at (802) 656-3166

We know it can be hard to travel to visit a workplace, or to do research on your own to get the answers you need to find the right career ... so we are coming to you!

- Kiran MacCormick, Dealer.com
- Dave Lansky, Precision Bioassay, Inc.
- Don Holly, Vermont Smoke and Cure
- Wendy Geller, Data Administration Director at the Vermont Agency of Education
- Amy Fowler, Vermont Deputy Secretary of Education
- Eric Sanberg, National Life Insurance Company

Presented by the Department of Mathematics and Statistics at the University of Vermont on:

**March 24, 2015**

**7 to 8:15 pm**

**Memorial Lounge, Waterman Building**


Archdeacon served the university in many leadership roles, including as director of the Mathematics Graduate Program and as a longtime member and chair of the Professional Standards Committee of the Faculty Senate. Archdeacon was named a University Scholar for the 2003-2004 academic year, was a Fulbright Teaching Fellow at the Riga Commerce School and held numerous visiting professorships at other universities, including the University of Auckland, Yokohama National University, the Technical University of Denmark and the Open University.

A passionate and highly accomplished mathematician, Archdeacon focused his research on graph theory, combinatorics, theoretical computer science and, in particular, topological graph theory. He published over 70 articles and was an invited speaker at mathematics conferences around the world, including this past January in Slovenia. He served as a reviewer and referee for more than 30 journals and served on the boards of the *Journal of Combinatorial Theory B* and the *Journal of Graph Theory*.

“Professor Archdeacon was a gifted mathematician and researcher whose work was applauded around the world, a skilled teacher admired by undergraduate and graduate students alike and a beloved colleague,” said UVM president Tom Sullivan. “We deeply appreciate his contributions to the life of our university over so many years and will greatly miss him.”

“Dan was an amazing guy who I had the great fortune of knowing for 40 years,” said Jeff Dinitz, a professor in mathematics who chaired the department for many years. “He was a world-class mathematician with many important theorems to his name. He was an invited lecturer at conferences and universities around the world and was the editor-in-chief of a major journal in his research area of graph theory. Dan loved UVM and was a great teacher who motivated his students and showed them the beauty and magic in mathematics. He was witty and just plain fun to be with. He deeply loved his role as a father and was an exceptional husband. He will be missed by his friends and colleagues here at UVM, as well as by the world-wide mathematics community.”

A campus memorial service will be held March 10 at 4 p.m. at Ira Allen Chapel.

**Title:** Causal Hazard Ratio Estimation Using Principal Stratification

**Abstract**

In randomized trials, the most commonly reported estimator of treatment effect is the intention-to-treat (ITT). Other commonly reported estimators are the as-treated and per-protocol. The ITT is preferred because it is an unbiased estimator of the effect of treatment assignment. If there is any non-adherence, however, the ITT is a biased estimate of the treatment effect, defined as the contrast between the potential outcome if treated versus the potential outcome if not treated. The as-treated and per-protocol estimators are biased estimates of both the effect of treatment assignment and the treatment effect. Principal stratification is a framework for estimating treatment effects that combines potential outcomes and latent adherence strata. It yields an unbiased estimator of the complier average causal effect (CACE) for a difference in means or proportions in the setting of all-or-nothing adherence. In this talk we propose two estimators of the causal hazard ratio, as well as estimators of the hazards in the treated and untreated compliers. Both approaches are operationalized using a weighted estimation approach in which some of the weights are negative. We report the results of simulations, varying the amount of adherence and selection bias, which show that the hazard ratio estimators we propose have little bias, unlike the ITT, as-treated and per-protocol estimators. We demonstrate the approach using a randomized controlled trial with all-or-nothing adherence and a right-censored time-to-event endpoint.
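As a simplified numerical illustration (a difference in means rather than the hazard ratios of the talk; the population parameters and the baseline selection bias below are invented), one can watch the ITT dilute the treatment effect under all-or-nothing non-adherence and the classical CACE correction recover it:

```python
import random

rng = random.Random(7)
true_effect, p_complier, n = 2.0, 0.6, 200_000

itt_num, arm = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
for _ in range(n):
    z = rng.randint(0, 1)                  # randomized assignment
    complier = rng.random() < p_complier   # latent adherence stratum
    treated = z and complier               # all-or-nothing adherence
    # compliers are sicker at baseline (selection bias for as-treated)
    y = rng.gauss(-1.0 if complier else 0.0, 1.0) + true_effect * treated
    itt_num[z] += y
    arm[z] += 1

itt = itt_num[1] / arm[1] - itt_num[0] / arm[0]
cace = itt / p_complier          # complier average causal effect
print(round(itt, 2), round(cace, 2))
```

Here a naive as-treated comparison would also be biased, because compliers differ at baseline; dividing the ITT by the compliance rate recovers the complier average causal effect in this all-or-nothing setting. The talk extends this idea to censored time-to-event outcomes via weighted estimation.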

Now a team of scientists at the University of Vermont and The MITRE Corporation has applied a Big Data approach—using a massive data set of many billions of words, based on actual usage, rather than "expert" opinion—to confirm the 1960s guess.

Movie subtitles in Arabic, Twitter feeds in Korean, the famously dark literature of Russia, books in Chinese, music lyrics in English, and even the war-torn pages of *The New York Times*—the researchers found that these, and probably all human language, skew toward the use of happy words.

“We looked at ten languages,” says UVM mathematician Peter Dodds who co-led the study, “and in every source we looked at, people use more positive words than negative ones.”

But doesn’t our global torrent of cursing on Twitter, horror movies, and endless media stories on the disaster *du jour* mean this can’t be true? No. This huge study of the “atoms of language—individual words,” Dodds says, indicates that language itself—perhaps humanity’s greatest technology—has a positive outlook. And, therefore, “it seems that positive social interaction,” Dodds says, is built into its fundamental structure.

The new study, "Human Language Reveals a Universal Positivity Bias," appeared in the Feb. 9 online edition of the *Proceedings of the National Academy of Sciences*.

To deeply explore this Pollyanna possibility, the team of scientists at UVM’s Computational Story Lab—with support from the National Science Foundation and The MITRE Corporation—gathered billions of words from around the world using 24 types of sources including books, news outlets, social media, websites, television and movie subtitles, and music lyrics. For example, “we collected roughly 100 billion words written in tweets,” says UVM mathematician Chris Danforth, who co-led the new research.

From these sources, the team then identified about 10,000 of the most frequently used words in each of 10 languages including English, Spanish, French, German, Brazilian Portuguese, Korean, Chinese, Russian, Indonesian and Arabic. Next, they paid native speakers to rate all these frequently used words on a nine-point scale from a deeply frowning face to a broadly smiling one. From these native speakers, they gathered five million individual human scores of the words. Averaging these, in English for example, “laughter” rated 8.50, “food” 7.44, “truck” 5.48, “the” 4.98, “greed” 3.06 and “terrorist” 1.30.
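Averaging these per-word scores over a text is the basic measurement step; a minimal sketch using the ratings just quoted (the example sentence is invented):

```python
# ratings quoted in the article (nine-point scale, from native speakers)
scores = {"laughter": 8.50, "food": 7.44, "truck": 5.48,
          "the": 4.98, "greed": 3.06, "terrorist": 1.30}

def happiness(text):
    """Average the scores of the rated words that appear in the text."""
    rated = [scores[w] for w in text.lower().split() if w in scores]
    return sum(rated) / len(rated) if rated else None

print(round(happiness("the truck brought food and laughter"), 2))   # 6.6
```

The team's actual instrument works with roughly 10,000 rated words per language and frequency-weights them over enormous corpora, but the averaging idea is the same.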

A Google Web crawl of Spanish-language sites had the highest average word happiness, and a search of Chinese books had the lowest, but—and here’s the point—all 24 sources of words that they analyzed skewed above the neutral score of five on their one-to-nine scale—regardless of the language. In every language, neutral words like “the” scored just where you would expect: in the middle, near five. And when the team translated words between languages and then back again they found that “the estimated emotional content of words is consistent between languages.”

In all cases, the scientists found “a usage-invariant positivity bias,” as they write in the study. In other words, by looking at the words people actually use most often they found that, on average, we—humanity—“use more happy words than sad words,” Danforth says.

This new research study also describes a larger project that the team of 14 scientists has developed to create “physical-like instruments” for both real-time and offline measurements of the happiness in large-scale texts—“basically, huge bags of words,” Danforth explains.

They call this instrument a “hedonometer”—a happiness meter. It can now trace the global happiness signal from English-language Twitter posts on a near-real-time basis and show differing happiness signals between days. For example, a big drop was noted on the day of the terrorist attack on Charlie Hebdo in Paris, but the signal rebounded over the following three days. The hedonometer can also discern different happiness signals in U.S. states and cities: Vermont currently has the happiest signal, while Louisiana has the saddest. And the latest data puts Boulder, Colo., in the number one spot for happiness, while Racine, Wis., is at the bottom.

But, as the new paper describes, the team is working to apply the hedonometer to explore happiness signals in many other languages—the French signal will be up soon—and from many sources beyond Twitter. For example, the team has applied its technique to more than 10,000 books, inspired by Kurt Vonnegut’s “shapes of stories” idea. Visualizations of the emotional ups and downs of these books can be seen on the hedonometer website; they rise and fall like a stock-market ticker. The new study shows that *Moby Dick*, at 170,914 words, has four or five major valleys that correspond to low points in the story, and the hedonometer signal drops off dramatically at the end, revealing this classic novel’s darkly enigmatic conclusion. In contrast, Dumas’s *Count of Monte Cristo* (100,081 words in French) ends on a jubilant note, shown by a strong upward spike on the meter.

The new research “in no way asserts that all natural texts will skew positive,” the researchers write, as these various books reveal. But at a more elemental level, the study brings evidence from Big Data to a long-standing debate about human evolution: our social nature appears to be encoded in the building blocks of language.

*The new study as well as the hedonometer is based on the research of Peter Dodds and Chris Danforth and their team in the University of Vermont’s Computational Story Lab, including visualization by Andy Reagan, at UVM’s Complex Systems Center, and the technology of Brian Tivnan, Matt McMahon and their team from The MITRE Corporation.*