
R Code for Chapter 13

Hypothesis Tests Applied to Means: Two Related Samples


The first thing that I have done is to run a t-test on the data from Table 13.1. I have done this two ways to illustrate that they are exactly equivalent. The first time I ran t.test() with the arguments After and Before. I named them in that order so that the output would show a positive mean difference, indicating that the participants gained weight. If I had named them in reverse order, the mean would be negative, but everything else would be the same. I next ran the t.test() function on the Gain scores, and you can see that the printout is identical. Either way will do.

Below the t-tests you will see the commands to plot the data and provide legends. I used par() to make the figure look better. You might prefer to drop that. I ran the linear model to get my regression equation. Notice that it only prints out the two coefficients. I took those coefficients and used them to create the first legend. That is one way to do it if you know what the coefficients are. But below that I calculated r and then pasted it into the second legend in such a way that it will print out even if I don't know what that actual r value is. It is worth a bit of discussion to understand what that is about.

In an earlier chapter I noted that when you run something like a regression, R stores all sorts of information away in an object called (here) reg. If you ask for summary(reg) you get even more. When I typed "names(summary(reg))" I was asking for the names of the things stored there. One of those was named "r.squared". So I grabbed that by typing "summary(reg)$r.squared" and setting the square root of that equal to a variable named r. Then in the legend I could tell it to print out r. But that isn't all the sneaky stuff. I used paste() so that I could paste together some text ("r = ") and the value of the variable r. You can see the result if you look at the plot. You might also notice how I rounded r to a manageable number of decimal places.

In the last part of this section of code I compute a measure of effect size (Cohen's d, stored in the variable delta) and calculate confidence limits on the mean weight gain. I carried out these calculations somewhat more step-by-step than necessary because I wanted you to be able to see the parallels with the calculations in the text.


### Chapter 13
### Analysis and plot for Section 13.2 -- Everitt's data

Everitt <- read.table("http://www.uvm.edu/~dhowell/fundamentals9/DataFiles/Tab13-1.dat",
 header = TRUE)
names(Everitt)
attach(Everitt)

### t-test on Before and After
t.test(After, Before, paired = TRUE, conf.level = .95)

### t-test on Gain
t.test(Gain, conf.level = .95)

par(mfrow = c(2,1))
reg <- lm(After ~ Before)
print(reg)
names(summary(reg))
plot(After ~ Before, main = "Relationship between Before and After Weights")
abline(reg, col = "red")
legend(85, 90, "Y = 0.909*Before + 14.820", bty = "n")
r <- round(sqrt(summary(reg)$r.squared), digits = 3)
legend(85, 85, paste("r = ",r), bty = "n")
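The same paste() trick can build the first legend as well, so you never need to type the coefficients by hand. A minimal sketch, using made-up data as a stand-in for the Everitt file (the point is only the string-building from coef()):

```r
### Sketch: build the regression-equation legend from coef() rather than
### typing the numbers. Toy data here -- substitute the real Before/After.
set.seed(1)
Before <- rnorm(17, mean = 85, sd = 5)
After  <- 0.9 * Before + 15 + rnorm(17, sd = 2)
reg <- lm(After ~ Before)
b <- round(coef(reg), digits = 3)      # b[1] = intercept, b[2] = slope
eqn <- paste("Y = ", b[2], "*Before + ", b[1], sep = "")
eqn                                    # the string you would hand to legend()
```

With that in place, legend(85, 90, eqn, bty = "n") prints the correct equation even if the data change.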

### Table 13.2
VulPashler <- read.table("http://www.uvm.edu/~dhowell/fundamentals/DataFiles/Tab13-2.dat", header = TRUE)
names(VulPashler)
attach(VulPashler)
t.test(ErrorAvGuess, AverError, paired = TRUE)


### Section 13.5
### Data from Exercise 12.11
Everitt <- read.table("http://www.uvm.edu/~dhowell/fundamentals9/DataFiles/Tab13-1.dat", header = TRUE)
names(Everitt)
attach(Everitt)
N <- length(Gain)
delta <- mean(Gain)/sd(Before)
### Confidence limits on change
meanChange <- mean(Gain)
sdChange <- sd(Gain)
tLower <- qt(p = .025, df = N - 1)
tUpper <- qt(p = .975, df = N - 1)
CI95Lower <- meanChange + tLower * sdChange/sqrt(N)
CI95Upper <- meanChange + tUpper * sdChange/sqrt(N)
CI95Lower; CI95Upper    ### print the limits
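As a check on the step-by-step arithmetic, the same limits come straight out of t.test() as the conf.int component. A small sketch with made-up gain scores (the comparison works identically on the Everitt data):

```r
### Hand-computed 95% limits should match t.test()'s conf.int exactly.
### Toy gain scores standing in for the real Gain variable.
set.seed(2)
Gain <- rnorm(17, mean = 7.26, sd = 7.16)
N <- length(Gain)
lower <- mean(Gain) + qt(.025, df = N - 1) * sd(Gain)/sqrt(N)
upper <- mean(Gain) + qt(.975, df = N - 1) * sd(Gain)/sqrt(N)
builtin <- t.test(Gain, conf.level = .95)$conf.int
all.equal(c(lower, upper), as.vector(builtin))   # TRUE -- they agree
```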


In the section below I have placed the code for several of the exercises. This is just to make it easier for you to cut and paste. I needed to use the MBESS package to obtain the statistics that I needed. There is nothing particularly new here, but notice in Exercise 13.22 the way that I pulled out the effect that I wanted. It might help you to type "names(ci.effectSize)" to see the names of the variables that are stored in that object. Those names appear in the command that I give, although they seem like awkward names. (If you want more information than you get from "names(ci.effectSize)", try "str(ci.effectSize)". For a more extensive object, try "str(t)" to see the object created when you ran the t-test.)
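To see what this exploration looks like in practice, here is a quick sketch on a toy t-test (any t.test() result has the same structure):

```r
### What does a t-test object actually store? Toy data; the component
### names are the same for any result returned by t.test().
set.seed(3)
tt <- t.test(rnorm(20, mean = 1))
names(tt)         # "statistic" "parameter" "p.value" "conf.int" ...
tt$conf.int       # pull out a single piece by name
# str(tt)         # uncomment for the full structure of the object
```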


### Exercise 13.6
beta.endorph <- read.table("http://www.uvm.edu/~dhowell/fundamentals9/DataFiles/Ex13-6.dat", header = TRUE)
### This shows a way to use those variables without attaching the dataframe.
names(beta.endorph)
t.test(beta.endorph$First, beta.endorph$Second, paired = TRUE)
plot(beta.endorph$First, beta.endorph$Second)
reg <- lm(beta.endorph$Second ~ beta.endorph$First)
abline(reg)
correl <- sqrt(summary(reg)$r.squared)
cat("The correlation between the first and second measurements is = ",correl)
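Taking the square root of r.squared works because, for a simple regression, that square root is just the (absolute value of the) Pearson correlation, which cor() returns directly. A sketch with toy data standing in for the beta-endorphin measurements:

```r
### sqrt(R^2) from a one-predictor regression equals |cor(x, y)|.
### Made-up First/Second scores; the identity holds for any data.
set.seed(4)
First  <- rnorm(19, mean = 10, sd = 3)
Second <- 0.6 * First + rnorm(19, sd = 2)
r.from.reg <- sqrt(summary(lm(Second ~ First))$r.squared)
r.from.cor <- cor(First, Second)
all.equal(r.from.reg, abs(r.from.cor))   # TRUE
```

Note that the square root loses the sign, so cor() is the safer choice when the relationship might be negative.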

### Exercise 13.10  Effect size
attach(beta.endorph)
delta <- (mean(Second) - mean(First))/sd(First)

### Exercise 13.22
data <- read.table("http://www.uvm.edu/~dhowell/fundamentals9/DataFiles/Tab13-1.dat", header = TRUE)
attach(data)
library(MBESS) 
t <- t.test(After, Before, paired = TRUE)   ### paired, as elsewhere in this chapter
print(t)
cat("lower CI on mean = ", t$conf.int[1], "\nupper CI on mean = ", t$conf.int[2],"\n")
sm <- (mean(After) - mean(Before))/sd(Before)    
cat("Standard mean difference (d) = ", sm,"\n")
ci.effectSize <- ci.sm(sm = sm, N = length(Before))   ### use the computed sm rather than typing 1.45
cat("Lower CI on effect size = ", ci.effectSize$Lower.Conf.Limit.Standardized.Mean,"\n")
cat("Upper CI on effect size = ", ci.effectSize$Upper.Conf.Limit.Standardized.Mean, "\n")  
