Landscape Explorer Detailed Description

Detailed description and technical information about the development of the imagery used in the Landscape Explorer.

Learn how WLFW and partners sourced and processed the imagery behind this powerful interactive web application.

Historical aerial imagery is widely used to give us glimpses of past environments. Both federal and state agencies maintain extensive archives of aerial imagery, but these images are typically served as stand-alone pictures with minimal geographic context.

With the advent and expansion of web mapping capabilities, presenting historical imagery in a unified environment alongside modern aerial and satellite imagery (such as Google Maps) has been a goal of many researchers, cartographers, and land managers. Traditionally, processing these images to produce web-ready map layers was time-consuming and required technicians to analyze each photo individually. However, recent innovations in photogrammetry and computer vision now allow us to take a semi-automated approach to processing.

These advances in automation allowed for the development of this map, which comprises more than 180,000 aerial images from across the western United States.

The majority of photos used for this project were collected by the U.S. Army after World War II to prepare for mobilization in case of future conflict. These Army photos comprise more than 90% of all imagery used for this project. Special thanks go to the United States Geological Survey (USGS) Earth Resources Observation and Science (EROS) team who, as part of their Single Frame Archive program, have digitized these photos and serve them at no cost from their EarthExplorer application.

To produce the map product, we developed a semi-automated pipeline to clean and process the imagery. This pipeline used a combination of commercial and open-source software, including MathWorks MATLAB, Agisoft Metashape, QGIS, and GDAL. We also used Google Earth Engine to combine our regional projects and produce the final set of web tiles used in our mapping application.
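
We won't reproduce our full Earth Engine workflow here, but the sketch below shows the general shape of a tile export with the Earth Engine Python API. The asset ID and bucket name are hypothetical placeholders, and the parameters are illustrative rather than our production settings.

    import ee

    ee.Initialize()

    # Hypothetical asset ID for one regional orthomosaic.
    mosaic = ee.Image('users/example/historical_orthomosaic')

    # Export 8-bit visualized web tiles to a (hypothetical) Cloud Storage bucket.
    task = ee.batch.Export.map.toCloudStorage(
        image=mosaic.visualize(min=0, max=255),
        description='historical-imagery-tiles',
        bucket='example-tile-bucket',
        maxZoom=16,                      # roughly 2 m per pixel at mid-latitudes
        region=mosaic.geometry(),
    )
    task.start()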

Our basic approach to building this map product was to automate as much as possible. We used the USGS bulk download tool to ingest more than 24 TB of imagery and ran a set of automated algorithms in MATLAB to crop and clean the images. Orthorectification was performed with a semi-automated process in Metashape 1.7.4, and we used QGIS, Google satellite imagery, and the USGS Elevation Point Query Service to develop control points (locations with known coordinates in the historical aerial imagery).
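
As an illustration of the control-point workflow, the sketch below queries the Elevation Point Query Service for the elevation at a candidate point. The endpoint and response fields reflect the public EPQS API at the time of writing and may change; the coordinates are arbitrary examples.

    import requests

    def epqs_elevation(lon, lat):
        """Return ground elevation in meters for a WGS84 lon/lat pair."""
        resp = requests.get(
            'https://epqs.nationalmap.gov/v1/json',
            params={'x': lon, 'y': lat, 'units': 'Meters', 'wkid': 4326},
            timeout=30,
        )
        resp.raise_for_status()
        # The 'value' field holds the elevation in the requested units.
        return float(resp.json()['value'])

    # Example: a control point digitized in QGIS over Google satellite imagery.
    print(epqs_elevation(-113.99, 46.87))  # near Missoula, MT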

The quality of the scanned historical aerial imagery was generally quite good, but producing a consistent product was challenging when it was impossible to inspect and accept or reject each image individually. We commonly found problems with exposure levels and vignetting. Vignetting, for example, was nearly ubiquitous in the historical imagery and introduced significant visual artifacts, particularly when viewing landscapes at moderate zoom levels. To address these issues, we applied contrast-limited adaptive histogram equalization (CLAHE) or a haze-reduction algorithm in MATLAB.
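
These corrections were applied in MATLAB; the sketch below shows an equivalent CLAHE step using Python's scikit-image, with an illustrative clip limit rather than our production value. The input filename is a placeholder.

    import numpy as np
    from skimage import exposure, io

    frame = io.imread('scan.tif')  # hypothetical scanned frame

    # CLAHE equalizes contrast locally rather than across the whole frame,
    # which evens out exposure and suppresses vignetting near the photo
    # edges. Output is float in [0, 1].
    cleaned = exposure.equalize_adapthist(frame, clip_limit=0.02)

    io.imsave('scan_clahe.tif', (cleaned * 255).astype(np.uint8))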

After image cleaning, we imported the imagery into Metashape to align the project images. Image alignment relies on a technique called Structure from Motion (SfM), which identifies common features across multiple images and then uses those features to calculate the relative position of each image center point. The slide below, from Rob Fergus’s lecture notes, illustrates the relationship between the observations of a common feature in each image (X1j, X2j, and X3j), the true position of the feature (Xj), and the image center points (P1, P2, and P3).
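
In the notation of that slide, the alignment can be summarized as a standard least-squares problem (bundle adjustment), shown here in its general textbook form rather than as Metashape's specific implementation:

    \min_{\{P_i\},\,\{X_j\}} \sum_{i,j} \| X_{ij} - P_i(X_j) \|^2

That is, the camera parameters Pi and the three-dimensional feature positions Xj are jointly adjusted so that each predicted feature location Pi(Xj) lands as close as possible to its observed location Xij in image i.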

In practice, the SfM approach lets users align the source imagery semi-automatically, without cumbersome pre-processing. The image below, from Torres and colleagues, shows an example of the relative alignment of the source aerial images above the point features used to generate the alignment. Generally, each image is aligned using hundreds or thousands of features that are shared across multiple images.
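
For readers who want to script this step, the sketch below shows a minimal alignment run with the Metashape Python API (the project used Metashape 1.7.4). The input path and parameter values are illustrative, not our production settings.

    import glob
    import Metashape

    doc = Metashape.Document()
    chunk = doc.addChunk()
    chunk.addPhotos(glob.glob('frames/*.tif'))  # hypothetical input folder

    # Detect features in each photo and match them across overlapping frames.
    chunk.matchPhotos(downscale=1, generic_preselection=True)

    # Solve for the relative position and orientation of each image center.
    chunk.alignCameras()

    doc.save('project.psx')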

Once the relative location of each image center is known, the software can orthorectify the imagery using a digital elevation model to correct for terrain displacement. The elevation model can be generated by the software itself, using a process called Multi-View Stereo (MVS), or taken from an outside source, such as the USGS. Below we show an example of a digital elevation model produced by MVS processing. By colorizing the data, you can see the mountainous terrain in the center of the imagery, with higher elevations shaded in yellow and orange; lower-elevation valleys and foothills are shown in green.

In generating the historical imagery product, we used both internally produced elevation models and elevation models from the USGS. We found that using an elevation model generated by our software generally produced a better product in mountainous regions, while the external model worked well in regions with less topography, such as the Great Plains.
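
Continuing the Metashape sketch above, a minimal version of this step builds depth maps with MVS, interpolates them into an elevation model, and orthorectifies against it. The settings are illustrative rather than our production values, and a USGS elevation model can be substituted for the MVS-derived one.

    import Metashape

    doc = Metashape.Document()
    doc.open('project.psx')  # project saved after the alignment step
    chunk = doc.chunk

    # MVS: estimate per-photo depth maps from the aligned cameras, then
    # interpolate them into a digital elevation model.
    chunk.buildDepthMaps(downscale=4)
    chunk.buildDem(source_data=Metashape.DepthMapsData)

    # Orthorectify: project each photo onto the elevation surface to remove
    # terrain displacement, then mosaic the results, blending across seam
    # lines to minimize edge artifacts.
    chunk.buildOrthomosaic(surface_data=Metashape.ElevationData,
                           blending_mode=Metashape.MosaicBlending)

    doc.save()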

The orthorectification process not only performs terrain correction but also optimizes how the images are mosaicked together. Seam lines between images are often placed along topographic breaks, such as ridgelines, or where there’s a sharp textural change in the imagery. Image selection is also optimized so that the mosaic uses the nadir (straight-down) portion of each photo rather than the oblique areas near its edges. Generally, we needed roughly 30% overlap between images to produce a high-quality orthorectified image. Below, you’ll see roughly 50 images mosaicked together, with the seam lines shown in black.

Along the seam lines, the images are blended to minimize artifacts along the photo edges. Below, we show the same scene with and without seam lines. Can you identify where the seam lines are located?

The final orthomosaic map was generated at a 1.0-meter resolution, meaning that each pixel has a nominal dimension of 1.0 x 1.0 meters. The resolution of the input imagery varied between 0.7 meters and 2.0 meters. In our map application, we provide zoom levels down to roughly 2.0-meter resolution. If you need the 1.0-meter data, you can download the higher-resolution imagery from our file server.
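
For reference, the ground resolution of a standard 256-pixel Web Mercator tile follows a simple formula. Assuming the application uses the usual XYZ tiling scheme, the sketch below shows how zoom level maps to meters per pixel.

    import math

    def ground_resolution(zoom, lat_deg=0.0):
        """Meters per pixel at a given zoom level and latitude."""
        earth_circumference = 2 * math.pi * 6378137  # WGS84 equatorial, meters
        return earth_circumference * math.cos(math.radians(lat_deg)) / (256 * 2 ** zoom)

    for z in (15, 16, 17):
        print(z, round(ground_resolution(z, lat_deg=45.0), 2))
    # At 45° N: zoom 15 ≈ 3.38 m, zoom 16 ≈ 1.69 m, zoom 17 ≈ 0.84 m per pixel.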

Interested users are free to download and use the imagery for non-commercial purposes. We encourage the use of these data in research to inform and improve the management of our natural resources. We’ve used these data in a recent analysis of tree cover change in Montana; read about it here. Contact Scott Morford if you have additional questions.

This project took roughly two years to complete, with contributions from multiple technicians. We thank Sean Carter, Eric Jensen, Kate Kolwicz, Kris Mueller, and Caitlyn Wade for their assistance. The project was conceived by Scott Morford and Brady Allred; Scott Morford served as project manager and technical lead. The work was completed by the Working Lands for Wildlife Science Team, housed within the Numerical Terradynamic Simulation Group at the University of Montana.