This set of pages is intended to serve two purposes. Originally it was written to accompany a set of Windows© programs that I have written. Since then I have put together another set of programs written in R. These can be downloaded from ???? and run using R, which is freely available at www.r-project.org/. I prefer using R, even though it is not as pretty as the Windows program above, because it is very much quicker to write and to modify.
The main Windows program is named Resampling.exe, and can be downloaded from www.uvm.edu/~dhowell/StatPages/Resampling/ResamplingPackage.zip. The file that you will download is a zipped file, but it can be decompressed with WinZip or any other file compression package. It contains a setup file that will install Resampling.exe on your machine, along with the help files. It will also install a few dll files that are needed to run the programs. (I have had reports of people having trouble installing this package. The problem seems to be with the Microsoft dll files, and not with the program itself. This version was first compiled under Windows XP, and you may have trouble installing it on other operating systems, although I am happily running it on Windows 7, so it can be done. Beyond that, I'm afraid I am out of ideas on how to solve the problem.) I have also included a few sample data files, which are referred to in the help files. You may have to move these files manually to the Resampling directory once it is created.
You will find that two of the procedures will not load when you select them from the menu. This is intentional, because I am not satisfied with those procedures as written.
The sections of this document are organized in line with the (current) menu choices in that program, and the discussion is primarily of the statistics behind the programmed procedures, rather than how to use the procedures themselves. For a brief statement of how the data files need to be set up for each procedure, see the help menu for the procedure in question.
The second purpose of these pages is to elaborate on resampling techniques and the theory behind them. The theory is not particularly difficult, but it is quite different from the theory behind parametric statistics, and this difference is rarely spelled out in detail. Working my way through some of these procedures has been a real, but fascinating, trial, with stumbling blocks in the most unexpected places. One of the reasons for this is that discussion of these methods has stayed largely within literature aimed at the professional statistician (titles notwithstanding). Perhaps the best coverage is given in Lunneborg (2000) Data analysis by resampling: Concepts and applications, Belmont, CA: Duxbury Press. Lunneborg writes very well and clearly, but even he is writing for an audience who feel more comfortable with probability concepts than do many research workers, and he focuses heavily on bootstrapping. A secondary, or perhaps primary, reason for difficulty is that the philosophy behind such an approach is actually quite different from, and perhaps more appropriate than, the philosophy behind traditional parametric statistics. I have tried to think my way through these issues, and my current thinking can be found at
Because I am working through this material, looking for clear ways to explain the concepts, you may on occasion note that I am talking more to myself than to you. That's the way I work things out, and I'll try not to do it too often. It is a sign that I don't know all the answers. If I did, I'd be off working on something else where there would be a greater intellectual challenge; I like puzzling about things I don't know. But there is a nice challenge here, so I'll hang around for a while.
The idea of resampling is actually quite an old one in statistics, dating to at least 1935, but the application of such techniques had to wait until faster computers came along. Resampling procedures are highly computer-intensive. While we can discuss such tests in the abstract, and can actually carry them out with pencil and paper on tiny data sets, the practical application requires thousands of resampled data sets.
When we speak of "resampling," we are talking about procedures for either drawing many samples from some (pseudo-)population (bootstrapping), or constructing many rearrangements of the obtained sample values (randomization). For each sample or rearrangement, we compute a test statistic. The resulting set of test statistics constitutes the sampling distribution (often called a reference distribution) of that statistic, and we can use that sampling (reference) distribution to draw inferences about the model underlying the data.
Resampling procedures fall into a number of different categories, but the discussion here will be limited to Randomization and Bootstrap procedures. Bootstrap procedures take the combined samples as a representation of the population from which the data came, and create 1000 or more bootstrapped samples by drawing, with replacement, from that pseudo-population. Randomization procedures also start with the original data, but, instead of drawing samples with replacement, these procedures systematically or randomly reorder (shuffle) the data 1000 or more times, and calculate the appropriate test statistic on each reordering. Since shuffling data amounts to sampling without replacement, the issue of replacement is one distinction between the two approaches. There are other distinctions, including the fundamental purpose.
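The replacement distinction is easy to see in a few lines of code. The sketch below is in Python rather than R, and is purely illustrative; it is not taken from any of the programs described on these pages. The data values are made up.

```python
import random

data = [12, 15, 9, 22, 18, 14]  # hypothetical sample values
rng = random.Random(42)

# Randomization: shuffle (reorder) the data -- sampling WITHOUT
# replacement, so every original value appears exactly once.
shuffled = data[:]
rng.shuffle(shuffled)

# Bootstrap: draw a sample of the same size WITH replacement;
# some values may appear more than once, others not at all.
boot_sample = rng.choices(data, k=len(data))

print(sorted(shuffled) == sorted(data))  # True: same six values, new order
print(boot_sample)                       # typically contains duplicates
```

A shuffled data set always contains exactly the original values; a bootstrapped sample usually does not, and that difference is what separates the two families of procedures.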
Aside from the replacement issue, the two approaches differ in a very fundamental way. Bootstrapping is primarily focused on estimating population parameters, and it attempts to draw inferences about the population(s) from which the data came. Randomization approaches, on the other hand, are not particularly concerned about populations and/or their parameters. Instead, randomization procedures focus on the underlying mechanism that led to the data being distributed between groups in the way that they are. Consider, for example, two groups of participants who have been randomly assigned to view a stimulus either monocularly or binocularly, and to estimate its distance. The bootstrap approach would focus primarily on estimating population differences in distance perception between the two conditions, and would probably result in a confidence interval on the mean or median difference in estimated distance. A randomization test, on the other hand, would ask if it is likely that we would obtain a difference as large as the one we obtained if the monocular/binocular condition had no effect on the apparent distance. Notice that the randomization approach is not concerned with what the estimated distances (or differences in mean distance) were, nor is it even particularly concerned about population parameters. The bootstrap approach, on the other hand, is primarily concerned with parameter estimation. It turns out that these differences have very important implications.
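To make that contrast concrete, here is a minimal sketch of both analyses applied to the distance-estimation example. The numbers are invented purely for illustration (no actual data are given in the text), and the code is plain Python, not anything from my programs:

```python
import random

rng = random.Random(1)

# Hypothetical estimated distances; invented purely for illustration.
monocular = [14.2, 16.5, 15.1, 17.8, 16.0, 15.6, 18.1, 14.9]
binocular = [12.0, 11.4, 13.2, 12.8, 10.9, 12.5, 11.8, 13.0]

def mean(xs):
    return sum(xs) / len(xs)

obs_diff = mean(monocular) - mean(binocular)
pooled = monocular + binocular
n_mono = len(monocular)
n_reps = 5000

# Randomization test: shuffle the pooled scores, split them into two
# pseudo-groups of the original sizes, and count how often the shuffled
# difference is at least as extreme as the observed one.
extreme = 0
for _ in range(n_reps):
    rng.shuffle(pooled)
    diff = mean(pooled[:n_mono]) - mean(pooled[n_mono:])
    if abs(diff) >= abs(obs_diff):
        extreme += 1
p_value = extreme / n_reps

# Bootstrap: resample each group WITH replacement, collect the mean
# differences, and take the middle 95% as a percentile confidence interval.
boot_diffs = sorted(
    mean(rng.choices(monocular, k=len(monocular)))
    - mean(rng.choices(binocular, k=len(binocular)))
    for _ in range(n_reps)
)
ci_low = boot_diffs[int(0.025 * n_reps)]
ci_high = boot_diffs[int(0.975 * n_reps)]

print(f"observed difference: {obs_diff:.2f}")
print(f"randomization p-value: {p_value:.4f}")
print(f"95% bootstrap CI: ({ci_low:.2f}, {ci_high:.2f})")
```

Notice that the randomization half never estimates anything about a population; it only asks how unusual the obtained arrangement of scores is under shuffling. The bootstrap half produces an interval estimate of the population mean difference, which is exactly the difference in purpose described above.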
The following links go straight to the material on the programs that I have written and the ideas that lie behind the various procedures (either in my programs or in those of others). I have chosen to start with the randomization tests. I find them fascinating and important. From there I move to the bootstrapping procedures. I want to be clear that the programs that I have written only scratch the surface of what is possible. I suggest that you first go to the main page on either Randomization procedures or Bootstrapping procedures before you go on to the specific tests. Even if you are not using the software, the material on the individual tests should be useful.
One source of software is the program written to accompany these pages, available from the link given below. This is a Windows program, and, unfortunately, I haven't written a version for the Mac. The most recent update of the software is available, along with a number of sample data files, for free, from http://www.uvm.edu/~dhowell/StatPages/Resampling/ResamplingPackage.zip. I have put a zipped file there, so you will need something like WinZip to open it. If you don't have WinZip, you can download an evaluation version for free from their website at http://www.winzip.com. (It is such a good program that you should pay the modest registration fee to support them.) A free zip utility, JustZIPit, is also available, though it doesn't have all of the bells and whistles of WinZip.
Another program that I recommend is Resampling Stats, by Simon and Bruce, which is available from www.resample.com. This program is not free, but there is an inexpensive student version. The program was originally written as a stand-alone, but it is now designed to be used as an add-in to Excel. I have not used the newer version, but I liked the old one a great deal. While I am on the subject of Resampling Stats, I should point out that their web page contains a wealth of neat stuff. They have an excellent bibliography of material on resampling, and a good list of the major books.
Bryan Manly, author of Manly, B. (1997) Randomization, Bootstrap, and Monte Carlo Methods in Biology (2nd ed.), London: Chapman & Hall, has written a program for randomization testing. An examination copy can be downloaded, along with the Fortran code that lies behind it. Lunneborg, whose book I mentioned above, has an excellent text that illustrates many of the computations using Resampling Stats, S-Plus, and other packages. S-Plus is very similar to R, which is a free program (or programming environment) available from www.r-project.org. R is powerful, with libraries covering a huge range of statistical procedures, but it is not simple to use. Normand Peladeau has a traditional statistical package called Simstat, which is relatively inexpensive. It contains bootstrap commands that allow you to apply a bootstrap approach to many of the procedures. An examination copy is available at www.simstat.com. You can run any procedure you wish, and resample the data as often as you wish. You can then extract any of the resulting statistics from those bootstrapped results, and plot the distribution of the statistic. It is very slick, but I need to play with it some more. (For example, in a factorial design, do they resample rows, or columns, or cells? It makes a difference.)
I am running out of time to incorporate all of the necessary references to good sources for resampling statistics. A surprisingly good list can be found at the Wikipedia site under "resampling." I have not yet read that article closely, so I can't say how good it is, but the list of references is excellent. The books by Lunneborg, Efron and Tibshirani, Manly, Edgington, and Sprent are all very good. Lunneborg is more concerned with bootstrapping than randomization, but you will get a lot from his book if you read carefully, with a pencil in hand. Edgington has been writing on randomization tests for many years, and is a very readable source. He provides some Fortran programs that you can translate into other languages, but they are so tightly written to improve efficiency that they can drive you crazy when you try to figure out what they are doing. (That's not his fault; that's what he needed to do in the days before gigahertz computers.) Good's two books on this topic are good, though he occasionally skips over the details you are looking for.
For bootstrapping, a nice introduction is a Sage Publications 1993 monograph by Mooney and Duval. It is a good place to start. Another good place to start is to go to the Resampling Stats website and look at some of their references. The program Resampling Stats itself can be addictive.
Last revised: 3/31/2007
David C. Howell
University of Vermont