**NR 245**

**Lab 3: Spatial autocorrelation**

**Due Feb 8**

- Download the BG_GF_Census feature class from the download.mdb geodatabase in the NR245 folder of the share drive account. When you load the resulting layer in ArcMap you’ll notice it is missing one polygon, which happens to be mostly park land.
- Now we’ll run some simple SA analyses. Open Arc toolbox and go to spatial statistics tools>>analyzing patterns>>spatial autocorrelation. Double click on it, choose BG_GF_census as the input and P_coarseveg as the input field, check “generate report,” choose “row” as the type of standardization, and accept all the other defaults. Click OK. To see the results, open the geoprocessing results window (on the top menu, click Geoprocessing>>Results). To get the full graphical report, click “HTML Report File.” This should open a web browser window with the results.
**[Q1] What is the Moran’s index score, z score, and p value? Interpret these results.** Keep in mind that the critical values for a z statistic at the 95% confidence level are -1.96 and +1.96, because 95% of the area under a standard normal distribution lies between those values. At the 99% confidence level the thresholds are -2.58 and +2.58.

- Now try running SA for some other variables and briefly report each result, including the confidence level at which it is significant, for:
- P_baresoil
- P_water
- Median household income (MED_HH_INC)
- Robbery rate (Robb05)

**[Q2] Report the results for P_water. How are they different from those of P_coarseveg, and why does that make sense?**
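For reference, the global Moran’s I that the tool reports can be sketched in a few lines of Python. Everything below is a toy illustration: the 4-polygon chain and its weights matrix are hypothetical stand-ins for the weights ArcGIS builds from your chosen conceptualization and distance band.

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I given a spatial weights matrix W."""
    x = np.asarray(values, dtype=float)
    z = x - x.mean()                     # deviations from the mean
    n = len(x)
    # I = (n / sum of all weights) * (weighted cross-products / variance)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# Toy weights: 4 polygons in a chain, each neighboring the adjacent ones.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W /= W.sum(axis=1, keepdims=True)        # "row" standardization, as in the lab

clustered = [10, 9, 1, 2]                # neighbors have similar values
alternating = [10, 1, 10, 1]             # neighbors have dissimilar values
print(morans_i(clustered, W))            # positive: clustered
print(morans_i(alternating, W))          # negative: dispersed
```

A positive I means nearby polygons tend to have similar values (clustering), a negative I means neighbors tend to differ (dispersion), and values near zero suggest a random pattern; the z score and p value in the report test whether the observed I differs significantly from the random case.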

- Now we’ll do a local hotspot analysis using the local Moran statistic, aka LISA. Before doing this, create a new folder in your account called SA in Arc Catalog. In it, create a new geodatabase called SA as well (File>>new>>personal geodatabase). Then close Arc Catalog. This hotspot analysis looks at how the Moran statistic varies across space. Click on spatial statistics tools>>mapping clusters>>cluster and outlier analysis (Anselin). Choose BG_GF_Census as the input, Robb05 (robbery rate) as the input field, and save the output as a feature class in your SA geodatabase, calling it LISA_rob1. Under standardization, choose “row.” Set the distance band to 5000. Leave everything else at the default values. You should see the new feature class added at the end with a customized symbology showing HH, LL, LH, and HL. Save the symbology as a .lyr file by right clicking on the layer and clicking “save as layer file.” The default name will be something like “LISA_rob.lyr.”
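The HH/LL/LH/HL categories come from Anselin’s local Moran statistic, which can be sketched as below. This is a toy illustration, not the tool’s exact implementation: the 4-polygon chain and weights are hypothetical, and ArcGIS additionally applies a significance test before assigning a polygon to any cluster class.

```python
import numpy as np

def local_moran_labels(values, W):
    """Anselin's local Moran I_i plus HH/LL/LH/HL cluster labels (no sig. test)."""
    x = np.asarray(values, dtype=float)
    z = x - x.mean()
    m2 = (z @ z) / len(z)                # variance term
    lag = W @ z                          # weighted average of neighbors' deviations
    I = z * lag / m2                     # local Moran's I for each polygon
    labels = []
    for zi, Ii in zip(z, I):
        if Ii > 0:                       # polygon agrees with its neighbors
            labels.append("HH" if zi > 0 else "LL")
        else:                            # polygon is an outlier among its neighbors
            labels.append("HL" if zi > 0 else "LH")
    return I, labels

# Toy chain of 4 polygons with row-standardized contiguity weights.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W /= W.sum(axis=1, keepdims=True)

_, labels = local_moran_labels([10, 8, 1, 2], W)   # a high pair next to a low pair
print(labels)
```

HH and LL mark polygons whose values agree with high or low neighbors; HL and LH mark outliers, a high value surrounded by lows or vice versa.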
**Take a screencapture.** Next, change the symbology on that layer to map out LMiPValue IDW using “Quantities>>graduated color” mapping and choosing only 2 classes. Set the upper break of the first class to 0.05 (so that the first class goes from 0 to 0.05 and the second class from 0.051 and higher), then map it out and **take a screencapture.**

**[Q3] Interpret both maps you just screencaptured. Describe what the legend items (e.g. HH, HL, etc.) mean in the first map, and where there is similarity in nearby values and where there is not. Explain what the second map is showing and how it differs from the previous map.**

- Try this now using the Getis method (spatial statistics tools>>mapping clusters>>hot spot analysis). Choose the same input layer and field as last time, and for the output also save it in SA.mdb, but call it Getis_rob_contig. First, choose “polygon contiguity” as the conceptualization. Then rerun the analysis using the “Fixed distance band” conceptualization, choosing 2000 as the band.
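The hot spot tool computes the Getis-Ord Gi* statistic, which, unlike Moran’s I, is returned directly as a z score for each polygon. Here is a sketch under toy assumptions: binary weights where 1 means “within the distance band” (including the polygon itself), applied to a hypothetical 6-polygon chain.

```python
import numpy as np

def getis_ord_gi_star(values, W):
    """Getis-Ord Gi*: a z score per feature (positive = hot, negative = cold)."""
    x = np.asarray(values, dtype=float)
    n = len(x)
    xbar = x.mean()
    s = np.sqrt((x ** 2).mean() - xbar ** 2)
    gi = np.empty(n)
    for i in range(n):
        w = W[i]                          # binary weights for feature i's band
        num = w @ x - xbar * w.sum()      # local sum vs. the expected sum
        den = s * np.sqrt((n * (w @ w) - w.sum() ** 2) / (n - 1))
        gi[i] = num / den
    return gi

# Toy band: each polygon "sees" itself and its immediate neighbors in a chain.
values = [9, 8, 9, 1, 2, 1]               # a high cluster followed by a low cluster
n = len(values)
W = np.array([[1.0 if abs(i - j) <= 1 else 0.0 for j in range(n)]
              for i in range(n)])

gi = getis_ord_gi_star(values, W)
print(gi)
```

This is why the output legend is binned by z score: the bins follow the same ±1.96 and ±2.58 critical values introduced earlier, with positive scores marking clusters of high values and negative scores marking clusters of low values.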
**Take a screencapture of both.** Try the last one with a band of 3000 instead and look at the difference (no screencapture). **[Q4] Note the differences from the Moran output and explain what the z scores in the output legend mean. For instance, how is a polygon with a score of -2.58 different from one with +2.58? Also explain why you think the different conceptualization methods resulted in different outputs.** Again, save layer files so you can use this symbology again if you want.

- Now try the same thing using the parcel
layer from last week. Again choose the cluster and outlier analysis, but this time choose parcels_R as the input layer and assessed property value (assessed_tot) as the input variable, with “row” standardization and Inverse Distance as the spatial method. Leave the distance band blank. Output this again to your geodatabase.
**Screencapture** the output and compare it to a graduated color map of assessed total value. **[Q5] What does the hotspot analysis appear to be showing you?** If you have time, try the same parcel-level analysis with the local Getis tool (spatial statistics tools>>mapping clusters>>hot spot analysis). Choose the same inputs, but change the name of the output to Getis_price, choosing inverse distance, row standardization, and a distance band threshold of 500 meters.

- Now we’re going to run a regression in S-Plus and look at the residuals to see if they are spatially autocorrelated. Load the BG_GF_census layer into S-Plus using File>>import data>>from database (using the instructions from week 1). Go to statistics>>regression>>linear. Input the following model to predict the variation in tree cover, using BG.GF.census as your data set: P.coarseveg ~ AVE.HH.SZ + P.HS. + POP00.SQMI + MED.HH.INC + P.SFDH + P.Protland + d2ramp. Hit apply. You’ll note that one variable (plus the intercept, which you don’t need to worry about) is clearly not significant at the 95% confidence level (you may get another variable that is just barely insignificant; you can leave that in). You should get a good R-squared for this.
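If you want to sanity-check what S-Plus is doing, the same OLS fit and saved residuals can be sketched with numpy. The data below are synthetic stand-ins, not the BG.GF.census fields:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
X = rng.normal(size=(n, 2))               # two toy predictors
y = 3.0 + 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=n)

A = np.column_stack([np.ones(n), X])      # design matrix with an intercept
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
residuals = y - A @ beta                  # the column S-Plus appends to the table

ss_res = residuals @ residuals
ss_tot = ((y - y.mean()) ** 2).sum()
r_squared = 1.0 - ss_res / ss_tot
print(beta, r_squared)
```

The residuals are what you will join back onto the block-group polygons below; if the model has captured the spatial structure in tree cover, they should show little spatial autocorrelation.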
**Copy the table of results from S-Plus into your document and [Q6] report which variable you plan to drop and how you know to drop it.** Rerun the model without that variable, this time also clicking on the results tab and checking the “residuals” box under “saved results,” choosing BG_GF_Census as the table to save them in. In the results, note that your formerly marginal variable is now highly significant. If you look at the BG.GF.census table after hitting apply, you should see a column at the far end called residuals.

- Now let’s output these back to Arc Map. Go to file>>export data>>to database. Choose MS Access as the data target, browse to your NR245 directory and download geodatabase, choose resid as the output table name, and click on the filter tab, where you will check the “preview columns” checkbox and then select (by shift-clicking) only BKGKEY and residuals to export. Then click OK. If it doesn’t work, it may be because your geodatabase is already open in Arc Catalog or Arc Map.
- Now, open Arc Map and load BG_GF_Census as well as the “resid” table you just created. Do a tabular join to join resid to BG_GF_Census. Then run a quick global Moran test on those residuals (you could also do this in S-Plus, but it’s easier in Arc Map) by going to spatial statistics tools>>analyzing patterns>>spatial autocorrelation. Choose BG_GF_Census as the input table and Residual as the input field. You can leave the distance band blank.
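The p value this test reports can also be approximated by permutation: shuffle the residuals across polygons (destroying any spatial arrangement) and see how extreme the observed Moran’s I is against that reference distribution. A sketch with toy “residuals” on a hypothetical 5x5 grid, deliberately made spatially smooth:

```python
import numpy as np

def morans_i(values, W):
    z = np.asarray(values, dtype=float) - np.mean(values)
    return len(z) / W.sum() * (z @ W @ z) / (z @ z)

rng = np.random.default_rng(1)
# Toy residuals: a smooth gradient over a 5x5 grid plus a little noise,
# i.e. deliberately autocorrelated.
grid = np.add.outer(np.arange(5.0), np.arange(5.0))
resid = (grid - grid.mean()).ravel() + rng.normal(scale=0.5, size=25)

# Rook-contiguity weights between grid cells, row-standardized.
n = 25
W = np.zeros((n, n))
for i in range(n):
    r, c = divmod(i, 5)
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        if 0 <= r + dr < 5 and 0 <= c + dc < 5:
            W[i, (r + dr) * 5 + (c + dc)] = 1.0
W /= W.sum(axis=1, keepdims=True)

observed = morans_i(resid, W)
# Shuffle 999 times; the pseudo p-value is the share of permutations at least
# as clustered as the observed statistic.
perms = [morans_i(rng.permutation(resid), W) for _ in range(999)]
p_value = (1 + sum(pi >= observed for pi in perms)) / (999 + 1)
print(observed, p_value)
```

If your actual residuals give a small p value like this toy example does, they are spatially autocorrelated, which violates the regression assumption that errors are independent.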
**[Q7] Interpret the results. Are your residuals independent and random, or spatially autocorrelated?** Now run a cluster and outlier analysis using the local Moran statistic, as we did above (spatial statistics tools>>mapping clusters>>cluster and outlier analysis). Choose the same table and field as you just did. Then click OK and **take a screencapture of the map. [Q8] Describe any patterns you see.**

- Assemble your screencaptures and text into a file and upload it.