Rapid Exploitation of Commercial Remotely Sensed Imagery for Disaster Response & Recovery Task List

Task 1: Creation of a Technical Advisory Committee

Task description:
A Technical Advisory Committee will be formed at the beginning of the project, composed of representatives from VTrans and other regional state DOTs, a member of the Metropolitan Planning Organization for Chittenden County or another metropolitan area in the study region, a US DOT representative, a disaster management specialist, an industry representative (e.g. from GeoEye or Trimble), and a remote sensing specialist, potentially among others. The US DOT project manager will be consulted before finalizing membership. This group will meet in person or by video conference twice per year or on an as-needed basis.

Output/Deliverables:
The Technical Advisory Committee, comprising 6 to 8 members, will provide specific technical and policy recommendations that the team will take into consideration during implementation. Notes will be taken at each meeting and provided to members as a brief summary report.

Task 2: Creation of a project website

Task description:
The project web site will serve as the main portal from which collaborators, the funding agency, and the general public can find out more about the project, obtain up-to-date information, and download products.

Output/Deliverables:
A project web site will be created on the University of Vermont domain (www.uvm.edu) containing a password protected section for internal documents and data products that have access/use restrictions associated with them (e.g. commercial satellite imagery) as well as access to regularly updated public documents.

Task 3: Damage detection system methods development

Task description:
Our proposed workflow, described in detail in the Technical Proposal document, consists of three phases: (1) data preparation, (2) damage detection/feature extraction, and (3) decision support tools.

The first phase of the workflow, data preparation, occurs once the CRS satellite imagery has been obtained, typically via an online portal such as the USGS Hazard Data Distribution System (HDDS). Due to the rapid tasking and posting of CRS imagery to HDDS, the data are typically not georeferenced with sufficient accuracy to support automated damage detection. To overcome this limitation, the first phase will employ automated approaches to image registration using the AutoSync module within ERDAS IMAGINE. In this phase pre-event imagery, primarily from the National Agriculture Imagery Program (NAIP), for which there is nationwide 1-meter coverage, will serve as the reference data set. Post-event CRS imagery will be registered to the pre-event imagery and placed on a server.

Once on the server, an automated import routine will load the data into an object-based system for feature extraction and damage detection. The automated import routine will be built using the eXtensible Markup Language (XML). It will assemble the pre- and post-event imagery and the existing state transportation GIS vector road data into data stacks within the object-based system. The automated import routine will assign appropriate alias names to imagery bands and GIS vector data sets in addition to clipping the vector data to the extent of the imagery.
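As a rough illustration of the data-preparation logic only (the production routine will be an XML import definition executed within the object-based system), the sketch below shows how road vectors might be clipped to a scene footprint and band aliases recorded. The file paths, alias names, and use of the geopandas/rasterio libraries are assumptions made for this sketch, not project specifications.

    # Rough illustration only: clip the state road vectors to the post-event
    # scene footprint and record band aliases. File names, alias names, and
    # the geopandas/rasterio libraries are assumptions for this sketch.
    import geopandas as gpd
    import rasterio
    from shapely.geometry import box

    BAND_ALIASES = {1: "post_red", 2: "post_green", 3: "post_blue", 4: "post_nir"}

    def assemble_stack(post_image_path, roads_path, clipped_roads_out):
        """Clip road vectors to the post-event image footprint and return the
        alias assigned to each image band."""
        with rasterio.open(post_image_path) as src:
            footprint = box(*src.bounds)       # image extent as a polygon
            crs = src.crs
        roads = gpd.read_file(roads_path).to_crs(crs)
        clipped = gpd.clip(roads, footprint)   # keep only roads inside the scene
        clipped.to_file(clipped_roads_out)
        return dict(BAND_ALIASES)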

In the second phase the object-based image analysis system will automatically extract features and detect damage using a combination of pre- and post-event imagery in conjunction with transportation GIS data sets. A knowledge engineering approach will be used in which an expert system applies a series of segmentation, morphology, and classification algorithms to determine areas in which damage has likely occurred. These damaged areas are then presented to the user who has the option of either confirming or denying the presence of damage prior to sending the results on to the decision support tools (phase 3).

The first feature extraction process uses expert knowledge to extract road areas from the pre-event imagery and GIS transportation data sets. The expert system will be designed such that precise agreement between the vector GIS data and the pre-event imagery is not required. This will be accomplished by first using segmentation and classification routines to extract linear features resembling roads, then assigning those features to the appropriate road segment based on proximity and orientation measures. The feature extraction for the post-event imagery will be less specific, centering largely on extracting object primitives. Using the roads identified from the pre-event imagery in combination with the object primitives from the post-event imagery, damage areas will be identified. This will be accomplished using object-fate analysis. The underlying theory behind object-fate analysis is that features in imagery collected at two different times will always differ in some way, due to a range of factors from sensor collection parameters to actual modifications of the landscape. By comparing the roads with the object primitives, the expert system will be able to compute the magnitude of change between the two objects using spectral, textural, geometric, and contextual information. Fuzzy logic will then be employed to determine those areas for which damage has likely occurred based on both the magnitude and type of change.
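To illustrate how fuzzy logic can turn change measures into a damage likelihood, the following sketch is a conceptual stand-in for the rules that will actually be written in CNL within eCognition; the change measures and membership thresholds shown are hypothetical placeholders.

    # Conceptual stand-in for the CNL rules; measures and thresholds are hypothetical.
    def fuzzy_membership(x, low, high):
        """Linear fuzzy membership: 0 at or below `low`, 1 at or above `high`."""
        if x <= low:
            return 0.0
        if x >= high:
            return 1.0
        return (x - low) / (high - low)

    def damage_likelihood(spectral_change, overlap_loss, edge_roughness):
        """Combine magnitude-of-change measures for a road object into a
        single damage score in [0, 1]."""
        m_spectral = fuzzy_membership(spectral_change, 0.10, 0.40)  # relative brightness change
        m_overlap  = fuzzy_membership(overlap_loss,    0.15, 0.60)  # fraction of the road object lost
        m_shape    = fuzzy_membership(edge_roughness,  0.20, 0.50)  # geometric irregularity
        # require spectral evidence AND either overlap or shape evidence
        return min(m_spectral, max(m_overlap, m_shape))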

The expert system will purposely be designed in such a manner that it reduces errors of omission to as close to zero as possible, even if this means increasing errors of commission. The system will therefore include an optional tool that presents pre- and post-event image chips to the user, allowing him/her to tag false positives.

The object-based image analysis system will be built using Trimble’s eCognition software platform. The expert system will be built using the Cognition Network Language (CNL) within eCognition Developer and deployed using eCognition Server. CNL provides the largest and most robust set of segmentation, classification, and morphology algorithms that we are aware of.

Testing and validation of this approach will occur for the areas of Vermont, New York, and New Hampshire using data collected in support of the Tropical Storm Irene response. The expert system will be developed on a set of CRS scenes reserved specifically for development, and then validated on an entirely separate set of CRS images. We will employ standard remote sensing accuracy assessment protocols to assess the producer’s, user’s, and overall accuracy of the damage detection. Once we have a validated system, we will select at least one other geographic area outside of the Northeast to test its effectiveness. Preference will be given to a recent disaster, and the test will be coordinated with the AmericaView network, a national remote sensing consortium with expertise in disaster response on whose board of directors Co-PI O’Neil-Dunne serves. Thanks to the generosity of GeoEye, we will have access to large amounts of pre- and post-event imagery with which to test the system within the United States.
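The standard accuracy-assessment measures referenced above can be computed from an error (confusion) matrix as in the short sketch below; the example counts are placeholders, not project results.

    # Producer's, user's, and overall accuracy from an error matrix.
    def accuracy_metrics(confusion):
        """confusion[i][j] = samples of reference class i labeled as class j."""
        k = len(confusion)
        total = sum(sum(row) for row in confusion)
        overall = sum(confusion[i][i] for i in range(k)) / total
        producers = [confusion[i][i] / sum(confusion[i]) for i in range(k)]
        users = [confusion[j][j] / sum(row[j] for row in confusion) for j in range(k)]
        return overall, producers, users

    # classes: 0 = damaged road, 1 = undamaged road (placeholder counts)
    example_matrix = [[45,   5],
                      [ 8, 142]]
    overall, producers, users = accuracy_metrics(example_matrix)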

In the third phase, the damage data from the object-based image analysis system are fed into the decision support tools. A geoprocessing routine will intersect the damage location with existing transportation GIS data sets to extract relevant attributes. This relatively simple procedure has the advantage that it will work for standard (e.g. national road databases) and more complex (e.g. state transportation asset data) data sets. The end result of this operation will be a point location representing the center of the damage, an image chip of the damage from the post-event imagery, any information about the damage that can be extracted by the object-based image analysis system, and any information from the transportation GIS database (e.g. road name, mile marker, etc.). The point information will be uploaded to a web-based decision support portal using Google Fusion Tables, as described in Task 5.
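A minimal sketch of this attribute-extraction step is shown below, assuming the geopandas library and hypothetical field names in the transportation GIS layer; the production routine will be delivered as an ESRI geoprocessing or standalone utility (see the Task 3 deliverables).

    # Sketch only: the centroid of each damage polygon is attributed from the
    # nearest road segment. Field names (ROAD_NAME, MILE_MARKER) are hypothetical.
    import geopandas as gpd

    def build_damage_records(damage_polygons_path, roads_path):
        damage = gpd.read_file(damage_polygons_path)
        roads = gpd.read_file(roads_path).to_crs(damage.crs)
        records = []
        for idx, poly in damage.iterrows():
            center = poly.geometry.centroid              # point representing the damage
            nearest = roads.distance(center).idxmin()    # closest road segment
            road = roads.loc[nearest]
            records.append({
                "x": center.x, "y": center.y,
                "road_name": road.get("ROAD_NAME"),
                "mile_marker": road.get("MILE_MARKER"),
                "image_chip": "chips/damage_{}.png".format(idx),  # chip produced by the OBIA system
            })
        return records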

Output/Deliverables:
We will develop, validate, and assess the accuracy of a methodology for automating the identification of large road damage. This methodology will result in the development of a “knowledge base” of expert classification rules that remote sensing technicians can then reuse in other locations. This knowledge base will be made available on our website along with documentation and tutorials on using it (see Task 6). We will also create and post an ESRI geoprocessing utility or standalone utility that extracts the geographic coordinates of the center of each damage polygon and then sends that coordinate to a web server (see Task 5).

Task 4: Fill calculation system methods development

Task description:
We intend to approach the second objective through the use of commercial, lightweight, deployable unmanned aerial vehicles (UAVs). We will make use of the Gatewing UAV, which flies low enough to be exempt from FAA regulations. Using preprogrammed
flight paths that take it over the same feature from two slightly different angles, the Gatewing is capable of acquiring stereo imagery.

The first phase in developing the UAV approach will be calibrating our measurements over void areas for which precise volumes are already known. We plan to conduct these calibrations over empty swimming pools and/or quarries. 3D surface models will be extracted from the stereo imagery using cost-based image-matching techniques. These techniques are widely available in both open-source and commercial software packages. For this project, we will employ the Inpho software package. 3D surface models will be distributed in the GeoTIFF format, an open format that is supported by all open-source and commercial GIS, mapping, and computer-aided design (CAD) software packages.

In conceptual terms, the automated fill calculation algorithm will work by generating a digital surface model that includes the eroded or damaged void and its immediate surroundings. Interpolation will be used to create an artificial plane representing where the bottom of the pavement meets underlying fill. Using standard 3D GIS functionality, the volume of the area between that plane and the bottom of the void (as estimated by the digital surface model) will be calculated.  This calculation will yield a total compacted fill quantity. However, a roadbed is made up of different layers, including the surface course, the base course, and the subbase course, each using different materials. Embankment materials may also be required on the side slopes of the roadway. Therefore, additional calculations will estimate the volume of each design layer in the void based on a specific thickness of each material, which will then help yield an estimated amount of material needed by type.
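A minimal sketch of the volume calculation follows, assuming the surface model and the interpolated pavement-bottom plane have been read into same-shaped NumPy arrays with a known cell size; this is a conceptual stand-in for the 3D GIS functionality described above.

    # Void-volume sketch: both grids are elevation arrays (meters) on the same
    # cells; cell_size is the ground resolution of the surface model.
    import numpy as np

    def compacted_fill_volume(dsm, pavement_plane, cell_size):
        """Volume (cubic meters) between the interpolated pavement-bottom plane
        and the surface model of the void."""
        depth = pavement_plane - dsm        # positive where material is missing
        depth = np.clip(depth, 0.0, None)   # ignore cells above the plane
        return float(depth.sum() * cell_size ** 2)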

There are, however, many types of fill (4 types of stone fill are commonly used in Vermont, for instance) and embankment material, and the appropriate type and thickness varies with many site-related factors, such as the surrounding hydrology and the use of the road. Many states currently use a program called DARWin, which takes basic information input by the user, cross-references that information with state standards and the 1993 AASHTO Guide for Design of Pavement Structures, and outputs the roadway design parameters needed to conduct the repair. In this case, our model would allow users to take the outputs of DARWin and use them as inputs to our fill calculation module, thereby eliminating the need for a site visit. In the second phase of this task we will work with our DOT counterparts to incorporate fill type into our algorithm and ensure that the appropriate information passes between our algorithm and DARWin. Cost savings will thus be realized both from a more cost-effective design and from the elimination of the need for a site visit.
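The layer decomposition described above might look conceptually like the sketch below; the layer thicknesses and material names are hypothetical placeholders that in practice would come from DARWin or state design standards.

    # Layer decomposition sketch; thicknesses and materials are illustrative only.
    def fill_by_layer(void_area_m2, void_depth_m, layers):
        """layers: list of (material, thickness_m) pairs, ordered top to bottom.
        Returns estimated volume of each material in cubic meters."""
        quantities = {}
        remaining = void_depth_m
        for material, thickness in layers:
            t = min(thickness, max(remaining, 0.0))
            quantities[material] = void_area_m2 * t
            remaining -= t
        if remaining > 0:                    # depth below the designed layers
            quantities["embankment fill"] = void_area_m2 * remaining
        return quantities

    example = fill_by_layer(
        void_area_m2=120.0, void_depth_m=2.5,
        layers=[("surface course", 0.10), ("base course", 0.30), ("subbase course", 0.45)],
    )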

Whether the user is a field engineer trying to match pre-existing conditions or a design engineer using a program such as DARWin to calculate the roadway design, the interface will be the same. The third phase of this task will involve adapting the fill quantity and type algorithms for use in a reusable geoprocessing model interface. The model will consist of a series of geoprocessing tools within the commonly used ESRI ArcGIS platform and will utilize objects from the ArcGIS 3D Analyst toolbox. When users click on the model, it will bring up an interface asking the user to draw a polygon around the segment of road for which they would like a fill estimate. Next, the user will be prompted to input the roadway design criteria by identifying the pavement surface, base, and subbase thicknesses and material types, as well as embankment armoring depth and material type. The module will take the typical cross section for the roadway, align it with the selected roadway segment centerline, apply it to the void identified in the 3D surface model, and provide an estimate of the type and quantity of fill required. If possible, it will also output a typical cross section for the roadway design. The geoprocessing tool will include functionality that allows users to automatically upload the outputs to the online database, which in turn will populate the damage-point web map (Task 5).
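As a hedged sketch of how the script-tool wrapper could collect the user’s design criteria in ArcGIS (parameter order and the simple thickness-based split are illustrative assumptions; the production tool will use the 3D Analyst surface tools for the volume itself):

    # Hedged sketch of the ArcGIS script-tool wrapper; parameters are hypothetical.
    import arcpy

    def split_by_thickness(total_volume_m3, layers):
        """Apportion a total void volume across design layers by thickness fraction."""
        total_thickness = sum(t for _, t in layers)
        return {material: total_volume_m3 * t / total_thickness for material, t in layers}

    total_volume = float(arcpy.GetParameterAsText(0))   # void volume from the surface-model step
    surface_thk  = float(arcpy.GetParameterAsText(1))   # pavement surface thickness (m)
    base_thk     = float(arcpy.GetParameterAsText(2))   # base course thickness (m)
    subbase_thk  = float(arcpy.GetParameterAsText(3))   # subbase course thickness (m)

    quantities = split_by_thickness(total_volume, [
        ("surface course", surface_thk),
        ("base course", base_thk),
        ("subbase course", subbase_thk),
    ])
    for material, volume in quantities.items():
        arcpy.AddMessage("{}: {:.1f} cubic meters".format(material, volume))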

In any given year, spring floods generally cause at least minor erosive damage to the transportation network. Working in collaboration with incident commanders at our partner state transportation departments, we will receive notification when damage occurs and program our UAVs to fly these areas before they are repaired. Field crews will measure the volumes using ground-based measurement and photographic methods so that we can validate our estimates.

Output/Deliverables:
We will develop, validate, assess the accuracy of, and document a methodology for automating the calculation of the quantity of fill, by type, for road damage voids caused by flooding. We will produce a technical document and tutorial that outlines this methodology (see Task 6). We will also produce and make available an ESRI geoprocessing tool capable of performing the fill calculations.

Task 5: Development of web portal decision support tool

Task description:
We will work with our primary state DOT partner, VTrans, to develop a web portal that helps deliver incident information faster, more accurately, and with greater detail. Our proposed decision support system will feed information on damage locations and fill quantity/type into a common web portal from the desktop tools described in Tasks 3 and 4. The decision support system targets two main audiences. The first is the general public, which needs to know about the status of damage, road closures, surface conditions, and delays, and the resulting impact on their transportation routes. The second audience is state DOT personnel, who have a similar need to know about the damage on roadways and the resulting impact on the transportation network, but who also need technical information about road repair.

This decision support tool will allow DOT personnel to objectively detect most large damage sites after a major event, which in turn will allow for better strategizing about the response, particularly in terms of prioritizing sites for repair. The information on fill types and volumes available from this website will allow incident managers to more quickly and precisely determine the amount and type of materials that will be needed to make repairs, which will boost the efficiency of repair crews. It will also save costs by resulting in more accurate orders of fill materials, eliminating the need to over-order more expensive fill types due to uncertainty. Furthermore, by visualizing where the fill needs are located, it will make it much easier for incident managers to design routes to deliver crews and materials to multiple sites in an efficient sequence. Having all of this in an easy-to-use web portal will simplify and speed up the incident management process, eliminating many communication bottlenecks.

While public users will just have access to information on the location and severity of individual damage sites, DOT officials will have access through restricted login to greater amounts of information, including fill quantity and type estimates and void dimensions, as well as contextual information (e.g. grade, hydrology, etc.), all by geographic coordinate. Point incident and attribute level controls will be implemented so that the same underlying data can feed into both the public and access-restricted versions of the web-based decision support system, with different data fields tagged by their access level.

This web portal will consist of a basic zoomable map interface (e.g. Google Maps, similar to what is currently used for Vermont’s travel information web service) with damage points overlaid on it. A protocol will be developed to feed data from our damage identification analysis into a back-end database relying on Google Fusion Tables. Google Fusion Tables is a cloud computing service that allows one to upload, display, and extract geospatial data and have any number of authorized users modify those data. One key advantage of this technology is that it is available to any organization or individual at no cost. Moreover, the multi-user editing functionality of Google Fusion Tables will allow incident commanders or other authorized personnel to update the information, for example, changing the status from “closed” to “delays” for a damaged section of road. The approach has the additional advantage that the information can be made accessible for direct ingest into other web-based mapping portals without the need to download the data or perform any conversion.
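A rough sketch of the upload protocol is shown below, assuming the Fusion Tables SQL query endpoint, a previously obtained OAuth access token, and hypothetical table and column names; the production protocol will be specified during development.

    # Rough sketch only: table id, column names, and the pre-acquired OAuth
    # token are assumptions for illustration.
    import requests

    FUSION_TABLES_QUERY_URL = "https://www.googleapis.com/fusiontables/v1/query"

    def post_damage_point(table_id, oauth_token, road_name, lat, lon, status):
        sql = "INSERT INTO {} (RoadName, Location, Status) VALUES ('{}', '{} {}', '{}')".format(
            table_id, road_name, lat, lon, status)
        response = requests.post(
            FUSION_TABLES_QUERY_URL,
            params={"sql": sql},
            headers={"Authorization": "Bearer " + oauth_token},
        )
        response.raise_for_status()
        return response.json()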

The portal will pull data directly from Google Fusion Tables, resulting in a geocoded point on a map. Using automated GIS overlay analysis, each point can be populated with a series of relevant attributes (e.g. proximity to stream, stream order, grade, soil type, road type, etc.) that will “pop up” as a callout when users click on a damage point. We envision that the product will look similar to the one developed by the Vermont Agency of Transportation following Hurricane Irene. Login-access users of the damage-point web map will be able to click on the damage points and then click on a number of links, including one to the fill volumes and type calculations, one to contextual information (e.g. slope or hydrology), and one to background information about how the fill calculation was done and what assumptions were used. If feasible, the map interface will also allow users to query for damage locations based on certain criteria, such as location within a town or county, network/Euclidean distance to a certain facility or location, or type of road. Recognizing that there is a need to have the actual GIS data, the web portal will also include a link to download the point information in industry-standard KML and shapefile formats.

State DOTs without any existing decision support system will be able to implement our system with very few modifications. The reliance on open-source and freely available technology will keep the barrier to entry low. For state DOTs with existing web-based decision support systems, our use of Open Geospatial Consortium (OGC) compliant formats will likely allow them to seamlessly integrate the information into their existing systems.

We will also experiment with using social media to report information on damage locations. Social media is increasingly being used as a mechanism for disseminating information during a disaster. As part of the damage detection geoprocessing operation we will incorporate a routine that tweets the location of the damage via the Twitter social media platform. This capability is available within the FME software package, a commercial software package used for data translation and conversion that runs on both the desktop and server.
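The tweeting step will be configured within FME; purely as a conceptual illustration, an equivalent standalone routine might resemble the following sketch, which assumes the tweepy library and placeholder credentials.

    # Conceptual stand-in for the FME tweeting routine; assumes the tweepy
    # library and placeholder credentials.
    import tweepy

    def tweet_damage_location(road_name, lat, lon, portal_url):
        auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
        auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
        api = tweepy.API(auth)
        message = "Possible road damage on {} near {:.5f}, {:.5f}. Details: {}".format(
            road_name, lat, lon, portal_url)
        api.update_status(status=message, lat=lat, long=lon)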

Output/Deliverables:
Outputs will include development of a front-end website prototype, hosted on our own servers, that will pull data from the cloud-based Google Fusion Tables platform. We will then work with our VTrans partners to make these data sets and web resources available to them so that they can freely integrate them into their online information systems. We will document the process of developing the portal and will write manuals for both users and website administrators.

Task 6: Project outreach and communication

Task description:
We intend to make the methods and technologies developed in this project easily transferable to other state DOTs and to professionals in the fields of incident management and remote sensing. Toward this end we will make publicly available all of the documentation, computer models, and support materials described in the tasks above. In partnership with our state DOT colleagues, we will explore whether the damage-point web map could potentially be expanded to include all states in the region. This expansion would require extensive coordination of information flows. Finally, this task will include presentations at conferences/professional meetings and publication of scholarly articles.

Output/Deliverables:
We will complete, make available and disseminate all outreach materials. For the damage-detection methodology, this will include our knowledge base of classification/detection rules, which can then be ported and reused in object-based image-classification software using different imagery, as well as a detailed methodological document and video tutorial that will assist technicians in replicating this system. For the fill calculation task, it will include the ArcGIS geoprocessing tool files and user manual, a methodological document, and a set of video tutorials. For the decision support portal development, we will include a methodological document about setting up the interface and serving the data from Google Fusion Tables, as well as guides for users and administrators. We will hold a focus group meeting with select partners to get feedback on our outputs and determine what additional information or clarification may be needed for subsequent adopters to make use of the project’s methods. We will also follow up with VTrans and, if applicable, other New England DOTs, to determine if and how the methods we developed were actually employed and what improvements could potentially be made. Finally, we will write a final report (draft and revised versions), give presentations on the project at professional meetings and prepare manuscripts on the project for publication.

For an update on where the project is with respect to output/deliverables, please see the most recent Quarterly Report.
