This lesson is in the early stages of development (Alpha version)

Introduction to Geospatial Raster and Vector Data with Python

Introduction to Raster Data

Overview

Teaching: 15 min
Exercises: 10 min
Questions
  • What format should I use to represent my data?

  • What are the main data types used for representing geospatial data?

  • What are the main attributes of raster data?

Objectives
  • Describe the difference between raster and vector data.

  • Describe the strengths and weaknesses of storing data in raster format.

  • Distinguish between continuous and categorical raster data and identify types of datasets that would be stored in each format.

This episode introduces the two primary types of geospatial data: rasters and vectors. After briefly introducing these data types, this episode focuses on raster data, describing some major features and types of raster data.

Data Structures: Raster and Vector

The two primary types of geospatial data are raster and vector data. Raster data is stored as a grid of values which are rendered on a map as pixels. Each pixel value represents an area on the Earth’s surface. Vector data structures represent specific features on the Earth’s surface, and assign attributes to those features. Vector data structures will be discussed in more detail in the next episode.

This lesson will focus on how to work with both raster and vector data sets, therefore it is essential that we understand the basic structures of these types of data and the types of data that they can be used to represent.

About Raster Data

Raster data is any pixelated (or gridded) data where each pixel is associated with a specific geographical location. The value of a pixel can be continuous (e.g. elevation) or categorical (e.g. land use). If this sounds familiar, it is because this data structure is very common: it’s how we represent any digital image. A geospatial raster is only different from a digital photo in that it is accompanied by spatial information that connects the data to a particular location. This includes the raster’s extent and cell size, the number of rows and columns, and its coordinate reference system (or CRS).
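Conceptually, a raster is just a grid of values plus the spatial metadata described above. A minimal sketch in numpy (all values and metadata below are made up for illustration; real files bundle this information together, as we will see later):

```python
import numpy as np

# A tiny 3 x 4 "raster": each cell holds an elevation value (continuous data).
elevation = np.array([
    [310.2, 311.0, 312.5, 313.1],
    [309.8, 310.5, 311.9, 312.4],
    [308.9, 310.1, 311.2, 311.8],
])

# What makes it *geospatial* is the accompanying metadata (values made up):
metadata = {
    "crs": "EPSG:32618",                  # coordinate reference system
    "cell_size": (1.0, 1.0),              # pixel width and height, in CRS units
    "upper_left": (731453.0, 4713838.0),  # coordinate of the top-left corner
}

# The extent follows from the corner, the cell size, and the array shape.
rows, cols = elevation.shape
xmin = metadata["upper_left"][0]
xmax = xmin + cols * metadata["cell_size"][0]
ymax = metadata["upper_left"][1]
ymin = ymax - rows * metadata["cell_size"][1]
print((xmin, ymin, xmax, ymax))
```

Without the metadata dictionary, the array is just a digital image; with it, every cell maps to a location on the Earth's surface.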

Raster Concept

Source: National Ecological Observatory Network (NEON)

Some examples of continuous rasters include:

  1. Precipitation maps.
  2. Maps of tree height derived from LiDAR data.
  3. Elevation values for a region.

A map of elevation for Harvard Forest derived from the NEON AOP LiDAR sensor is below. Elevation is represented as a continuous numeric variable in this map. The legend shows the continuous range of values in the data, from around 300 to 420 meters.

Some rasters contain categorical data where each pixel represents a discrete class such as a landcover type (e.g., “forest” or “grassland”) rather than a continuous value such as elevation or temperature. Some examples of classified maps include:

  1. Landcover / land-use maps.
  2. Tree height maps classified as short, medium, and tall trees.
  3. Elevation maps classified as low, medium, and high elevation.

USA landcover classification

The map above shows the contiguous United States with landcover as categorical data. Each color is a different landcover category. (Source: Homer, C.G., et al., 2015, Completion of the 2011 National Land Cover Database for the conterminous United States-Representing a decade of land cover change information. Photogrammetric Engineering and Remote Sensing, v. 81, no. 5, p. 345-354)

The map above shows elevation data for the NEON Harvard Forest field site. We will be working with data from this site later in the workshop. In this map, the elevation data (a continuous variable) has been divided up into categories to yield a categorical raster.
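This kind of classification can be sketched with numpy: np.digitize bins continuous values into discrete classes. The elevation values and thresholds below are made up for illustration:

```python
import numpy as np

# Continuous elevation values in metres (made up for this example).
elevation = np.array([305.0, 342.5, 358.0, 371.2, 402.9, 416.0])

# Bin edges: below 340 m -> class 0 (low), 340-380 m -> class 1 (medium),
# 380 m and above -> class 2 (high).
bins = [340.0, 380.0]
classes = np.digitize(elevation, bins)
print(classes)  # [0 1 1 1 2 2]
```

Note that the classified result is categorical: the class numbers label bins, and arithmetic on them (e.g. averaging class 0 and class 2) is not meaningful.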

Advantages and Disadvantages

With your neighbor, brainstorm potential advantages and disadvantages of storing data in raster format. Add your ideas to the Etherpad. The Instructor will discuss and add any points that weren’t brought up in the small group discussions.

Solution

Raster data has some important advantages:

  • representation of continuous surfaces
  • potentially very high levels of detail
  • data is ‘unweighted’ across its extent - the geometry doesn’t implicitly highlight features
  • cell-by-cell calculations can be very fast and efficient

The downsides of raster data are:

  • very large file sizes as cell size gets smaller
  • currently popular formats don’t embed metadata well (more on this later!)
  • can be difficult to represent complex information

Important Attributes of Raster Data

Extent

The spatial extent is the geographic area that the raster data covers. The spatial extent of a spatial object represents the geographic edges or locations that are the furthest north, south, east, and west. In other words, extent represents the overall geographic coverage of the spatial object.

Spatial extent image

(Image Source: National Ecological Observatory Network (NEON))

Extent Challenge

In the image above, the dashed boxes around each set of objects seem to imply that the three objects have the same extent. Is this accurate? If not, which object(s) have a different extent?

Solution

The lines and polygon objects have the same extent. The extent for the points object is smaller in the vertical direction than the other two because there are no points on the line at y = 8.

Resolution

The resolution of a raster represents the area on the ground that each pixel of the raster covers. The image below illustrates the effect of changes in resolution.

Resolution image

(Source: National Ecological Observatory Network (NEON))
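The relationship shown in the image can be written down directly: for a fixed extent, more pixels means a finer resolution. A small sketch with made-up numbers:

```python
# Resolution is the extent divided by the number of pixels along each axis.
# All numbers here are made up for illustration.
xmin, xmax = 0.0, 400.0    # raster spans 400 m east-west
ymin, ymax = 0.0, 300.0    # and 300 m north-south
n_cols, n_rows = 40, 30    # pixel counts in each direction

x_res = (xmax - xmin) / n_cols   # pixel width in CRS units (metres here)
y_res = (ymax - ymin) / n_rows   # pixel height
print(x_res, y_res)  # 10.0 10.0
```

Doubling the pixel counts for the same extent would halve both numbers, i.e. each pixel would cover a smaller area on the ground.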

Raster Data Format for this Workshop

Raster data can come in many different formats. For this workshop, we will use the GeoTIFF format, which has the extension .tif. A .tif file stores metadata, or attributes, about the file as embedded TIFF tags. For instance, your camera might store a tag that describes the make and model of the camera, or the date the photo was taken, when it saves a .tif. A GeoTIFF is a standard .tif image format with additional spatial (georeferencing) information embedded in the file as tags. These tags should include the following raster metadata:

  1. Extent
  2. Resolution
  3. Coordinate Reference System (CRS) - we will introduce this concept in a later episode
  4. Values that represent missing data (NoDataValue) - we will introduce this concept in a later lesson.

We will discuss these attributes in more detail in a later lesson. In that lesson, we will also learn how to use Python to extract raster attributes from a GeoTIFF file.

More Resources on the .tif format

Multi-band Raster Data

A raster can contain one or more bands. One type of multi-band raster dataset that is familiar to many of us is a color image. A basic color image consists of three bands: red, green, and blue. Each band represents light reflected from the red, green, or blue portions of the electromagnetic spectrum. The pixel brightness for each band, when composited, creates the colors that we see in an image.

RGB multi-band raster image

(Source: National Ecological Observatory Network (NEON).)

We can plot each band of a multi-band image individually.

Or we can composite all three bands together to make a color image.

In a multi-band dataset, the rasters will always have the same extent, resolution, and CRS.
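The compositing described above can be sketched with plain numpy arrays (the pixel values below are made up). Each band is a single 2-D array; stacking them produces a (band, y, x) array, the same layout that rioxarray uses:

```python
import numpy as np

# Three single-band arrays of 2 x 2 pixels, values 0-255 (made up).
red   = np.array([[255, 0], [0, 30]], dtype=np.uint8)
green = np.array([[0, 255], [0, 30]], dtype=np.uint8)
blue  = np.array([[0, 0], [255, 30]], dtype=np.uint8)

# Stack into a (band, y, x) array.
rgb = np.stack([red, green, blue])
print(rgb.shape)  # (3, 2, 2)

# For display, matplotlib's imshow expects the bands last, i.e. (y, x, band).
rgb_for_plot = rgb.transpose(1, 2, 0)
print(rgb_for_plot.shape)  # (2, 2, 3)
```

The top-left pixel of the composite is pure red ([255, 0, 0]), the bottom-right a dark grey ([30, 30, 30]): the three band values at each pixel combine into one displayed color.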

Other Types of Multi-band Raster Data

Multi-band raster data might also contain:

  1. Time series: the same variable, over the same area, over time. We will be working with time series data in the Plot Raster Data in Python episode.
  2. Multi or hyperspectral imagery: image rasters that have 4 or more (multi-spectral) or more than 10-15 (hyperspectral) bands. We won’t be working with this type of data in this workshop, but you can check out the NEON Data Skills Imaging Spectroscopy HDF5 in R tutorial if you’re interested in working with hyperspectral data cubes.

Key Points

  • Raster data is pixelated data where each pixel is associated with a specific location.

  • Raster data always has an extent and a resolution.

  • The extent is the geographical area covered by a raster.

  • The resolution is the area covered by each pixel of a raster.


Introduction to Vector Data

Overview

Teaching: 10 min
Exercises: 5 min
Questions
  • What are the main attributes of vector data?

Objectives
  • Describe the strengths and weaknesses of storing data in vector format.

  • Describe the three types of vectors and identify types of data that would be stored in each.

About Vector Data

Vector data structures represent specific features on the Earth’s surface, and assign attributes to those features. Vectors are composed of discrete geometric locations (x, y values) known as vertices that define the shape of the spatial object. The organization of the vertices determines the type of vector that we are working with: point, line or polygon.
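A rough sketch of this idea in plain Python (all coordinates are made up): the same kind of (x, y) vertices build all three vector types, and only the polygon, because it closes on itself, encloses an area:

```python
# A point is a single vertex; a line is an ordered sequence of vertices;
# a polygon is a closed ring of vertices (the last joins back to the first).
point = (3.0, 5.0)
line = [(0.0, 0.0), (2.0, 1.0), (4.0, 0.0)]
polygon = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]

def ring_area(ring):
    """Area enclosed by a closed ring of vertices (shoelace formula)."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(ring, ring[1:] + ring[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

print(ring_area(polygon))  # 12.0 -- a 4 x 3 rectangle
```

Real geospatial libraries (e.g. shapely) provide these geometry types ready-made; the sketch only illustrates that the organization of the vertices is what distinguishes point, line, and polygon.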

Types of vector objects

Image Source: National Ecological Observatory Network (NEON)

Data Tip

Sometimes, boundary layers such as states and countries, are stored as lines rather than polygons. However, these boundaries, when represented as a line, will not create a closed object with a defined area that can be filled.

Identify Vector Types

The plot below includes examples of two of the three types of vector objects. Use the definitions above to identify which features are represented by which vector type.

Solution

Vector Type Examples

State boundaries are polygons. The Fisher Tower location is a point. There are no line features shown.

Vector data has some important advantages:

The downsides of vector data include:

Vector datasets are in use in many industries besides geospatial fields. For instance, computer graphics are largely vector-based, although the data structures in use tend to join points using arcs and complex curves rather than straight lines. Computer-aided design (CAD) is also vector-based. The difference is that geospatial datasets are accompanied by information tying their features to real-world locations.

Vector Data Format for this Workshop

Like raster data, vector data can also come in many different formats. For this workshop, we will use the Shapefile format, which has the extension .shp. A .shp file stores the geographic coordinates of each vertex in the vector, as well as metadata including:

Because the structure of points, lines, and polygons are different, each individual shapefile can only contain one vector type (all points, all lines or all polygons). You will not find a mixture of point, line and polygon objects in a single shapefile.

More Resources

More about shapefiles can be found on Wikipedia

Why not both?

Very few formats can contain both raster and vector data - in fact, most are even more restrictive than that. Vector datasets are usually locked to one geometry type, e.g. points only. Raster datasets can usually only encode one data type, for example you can’t have a multiband GeoTIFF where one layer is integer data and another is floating-point. There are sound reasons for this - format standards are easier to define and maintain, and so is metadata. The effects of particular data manipulations are more predictable if you are confident that all of your input data has the same characteristics.

Key Points

  • Vector data structures represent specific features on the Earth’s surface along with attributes of those features.

  • Vector objects are either points, lines, or polygons.


Coordinate Reference Systems

Overview

Teaching: 15 min
Exercises: 10 min
Questions
  • What is a coordinate reference system and how do I interpret one?

Objectives
  • Name some common schemes for describing coordinate reference systems.

  • Interpret a PROJ4 coordinate reference system description.

Coordinate Reference Systems

A data structure cannot be considered geospatial unless it is accompanied by coordinate reference system (CRS) information, in a format that geospatial applications can use to display and manipulate the data correctly. CRS information connects data to the Earth’s surface using a mathematical model.

CRS vs SRS

CRS (coordinate reference system) and SRS (spatial reference system) are synonyms and are commonly interchanged. We will use only CRS throughout this workshop.

The CRS associated with a dataset tells your mapping software (for example Python) where the raster is located in geographic space. It also tells the mapping software what method should be used to flatten or project the raster in geographic space.

Maps of the United States in different projections

The above image shows maps of the United States in different projections. Notice the differences in shape associated with each projection. These differences are a direct result of the calculations used to flatten the data onto a 2-dimensional map. (Source: opennews.org)

There are lots of great resources that describe coordinate reference systems and projections in greater detail. For the purposes of this workshop, what is important to understand is that data from the same location but saved in different projections will not line up in any GIS or other program. Thus, it’s important when working with spatial data to identify the coordinate reference system applied to the data and retain it throughout data processing and analysis.

Components of a CRS

CRS information has three components:

  • Datum: a model of the shape of the Earth. It has angular units (i.e. degrees) and defines the starting point (i.e. where is [0, 0]?), so the angles reference a meaningful spot on the Earth. Common global datums are WGS84 and NAD83.

  • Projection: a mathematical transformation of the angular measurements on a round Earth to a flat surface (i.e. paper or a computer screen).

  • Additional Parameters: additional information required to support the projection, such as a definition of the center of the map.

Orange Peel Analogy

A common analogy employed to teach projections is the orange peel analogy. If you imagine that the earth is an orange, how you peel it and then flatten the peel is similar to how projections get made.

  • A datum is the choice of fruit to use. Is the earth an orange, a lemon, a lime, a grapefruit?

Datum Fruit Example

Image source

A projection is how you peel your orange and then flatten the peel.

Projection Citrus Peel Example

Image source

  • An additional parameter could include a definition of the location of the stem of the fruit. What other parameters could be included in this analogy?

Which projection should I use?

To decide if a projection is right for your data, answer these questions:

The University of Colorado’s Map Projections resource and the Department of Geo-Information Processing have good discussions of these aspects of projections. Online tools like Projection Wizard can also help you discover projections that might be a good fit for your data.

Data Tip

Take the time to identify a projection that is suited for your project. You don’t have to stick to the ones that are popular.

Describing Coordinate Reference Systems

There are several common systems in use for storing and transmitting CRS information, as well as translating among different CRSs. These systems generally comply with ISO 19111. Common systems for describing CRSs include EPSG codes, OGC WKT, and PROJ strings.

EPSG

The EPSG system is a database of CRS information maintained by the International Association of Oil and Gas Producers. The dataset contains both CRS definitions and information on how to safely convert data from one CRS to another. Using EPSG is easy, as every CRS has an integer identifier, e.g. WGS84 is EPSG:4326. The downside is that you can only use the CRSs EPSG defines and cannot customise them (some datasets do not have EPSG codes). Detailed information on the structure of the EPSG dataset is available on their website. epsg.io is an excellent website for finding suitable projections by location or for finding information about a particular EPSG code.

Well-Known Text

The Open Geospatial Consortium WKT standard is used by a number of important geospatial apps and software libraries. WKT is a nested list of geodetic parameters. The structure of the information is defined on their website. WKT is valuable in that the CRS information is more transparent than in EPSG, but it can be more difficult to read and compare than PROJ strings, since it is designed to represent more complex CRS information. Additionally, the WKT standard is implemented inconsistently across various software platforms, and the spec itself has some known issues.

PROJ

PROJ is an open-source library for storing, representing and transforming CRS information. PROJ strings continue to be used, but the format has been deprecated by the PROJ maintainers because of inaccuracies that can arise when converting to the WKT format. The data and Python libraries we will be working with in this workshop use different underlying representations of CRSs under the hood for reprojecting. CRS information can still be represented with EPSG, WKT, or PROJ strings without consequence, but it is best to use PROJ strings only as a format for viewing CRS information, not for reprojecting data.

PROJ represents CRS information as a text string of key-value pairs, which makes it easy to read and interpret.

A PROJ4 string includes the following information:

  • proj=: the projection of the data

  • zone=: the zone of the data (this is specific to the UTM projection)

  • datum=: the datum used

  • units=: the units for the coordinates of the data

  • ellps=: the ellipsoid (how the Earth’s roundness is calculated) for the data

Note that the zone is unique to the UTM projection. Not all CRSs will have a zone.

The UTM zones across the continental United States.

Image source: Chrismurf at English Wikipedia, via Wikimedia Commons (CC-BY).

Reading a PROJ4 String

Here is a PROJ4 string for one of the datasets we will use in this workshop:

+proj=utm +zone=18 +datum=WGS84 +units=m +no_defs +ellps=WGS84 +towgs84=0,0,0

  • What projection, zone, datum, and ellipsoid are used for this data?
  • What are the units of the data?
  • Using the map above, what part of the United States was this data collected from?

Solution

  • Projection is UTM, zone 18, datum is WGS84, ellipsoid is WGS84.
  • The data is in meters.
  • The data comes from the eastern US seaboard.
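Because a PROJ4 string is just whitespace-separated key-value pairs, it can be pulled apart with ordinary string operations. A minimal sketch, using the string from the challenge above (flags like +no_defs carry no value, so we record them as True):

```python
# The PROJ4 string from the challenge above.
proj4 = "+proj=utm +zone=18 +datum=WGS84 +units=m +no_defs +ellps=WGS84 +towgs84=0,0,0"

# Split on whitespace, strip the leading '+', and split each token at '='.
params = {}
for token in proj4.split():
    key, _, value = token.lstrip("+").partition("=")
    params[key] = value if value else True

print(params["proj"], params["zone"], params["datum"], params["units"])
# utm 18 WGS84 m
```

In practice you would hand the whole string to a CRS-aware library rather than parse it by hand; the sketch just shows how directly the answers to the challenge can be read out of the string's structure.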

Format interoperability

Many existing file formats were invented by GIS software developers, often in a closed-source environment. This led to the large number of formats on offer today, and considerable problems transferring data between software environments. The Geospatial Data Abstraction Library (GDAL) is an open-source answer to this issue.

GDAL is a set of software tools that translate between almost any geospatial format in common use today (and some not so common ones). GDAL also contains tools for editing and manipulating both raster and vector files, including reprojecting data to different CRSs. GDAL can be used as a standalone command-line tool, or built in to other GIS software. Several open-source GIS programs use GDAL for all file import/export operations.

Metadata

Spatial data is useless without metadata. Essential metadata includes the CRS information, but proper spatial metadata encompasses more than that. History and provenance of a dataset (how it was made), who is in charge of maintaining it, and appropriate (and inappropriate!) use cases should also be documented in metadata. This information should accompany a spatial dataset wherever it goes. In practice this can be difficult, as many spatial data formats don’t have a built-in place to hold this kind of information. Metadata often has to be stored in a companion file, and generated and maintained manually.

More Resources on CRS

Key Points

  • All geospatial datasets (raster and vector) are associated with a specific coordinate reference system.

  • A coordinate reference system includes datum, projection, and additional parameters specific to the dataset.


The Geospatial Landscape

Overview

Teaching: 10 min
Exercises: 0 min
Questions
  • What programs and applications are available for working with geospatial data?

Objectives
  • Describe the difference between various approaches to geospatial computing, and their relative strengths and weaknesses.

  • Name some commonly used GIS applications.

  • Name some commonly used Python packages that can access and process spatial data.

  • Describe pros and cons for working with geospatial data using a command-line versus a graphical user interface.

Standalone Software Packages

Most traditional GIS work is carried out in standalone applications that aim to provide end-to-end geospatial solutions. These applications are available under a wide range of licenses and price points. Some of the most common are listed below.

Open-source software

The Open Source Geospatial Foundation (OSGEO) supports several actively managed GIS platforms:

Commercial software

Online + Cloud computing

Private companies have released SDK platforms for large-scale GIS analysis:

Publicly funded open-source platforms for large-scale GIS analysis:

GUI vs CLI

The earliest computer systems operated without a graphical user interface (GUI), relying only on the command-line interface (CLI). Since mapping and spatial analysis are strongly visual tasks, GIS applications benefited greatly from the emergence of GUIs and quickly came to rely heavily on them. Most modern GIS applications have very complex GUIs, with all common tools and procedures accessed via buttons and menus.

Benefits of using a GUI include:

Downsides of using a GUI include:

In scientific computing, the lack of reproducibility in point-and-click software has come to be viewed as a critical weakness. As such, scripted CLI-style workflows are again becoming popular, which leads us to another approach to doing GIS: via a programming language. This is the approach we will be using throughout this workshop.

GIS in programming languages

A number of powerful geospatial processing libraries exist for general-purpose programming languages like Java and C++. However, the learning curve for these languages is steep and the effort required is excessive for users who only need a subset of their functionality.

Higher-level scripting languages like Python and R are easier to learn and use. Both now have their own packages that wrap up those geospatial processing libraries and make them easy to access and use safely. A key example is the Java Topology Suite (JTS), which is implemented in C++ as GEOS. GEOS is accessible in Python via the shapely package (and geopandas, which makes use of shapely) and in R via sf. R and Python also have interface packages for GDAL, and for specific GIS apps.

This last point is a huge advantage for GIS-by-programming; these interface packages give you the ability to access functions unique to particular programs, but have your entire workflow recorded in a central document - a document that can be re-run at will. Below are lists of some of the key spatial packages for Python, which we will be using in the remainder of this workshop.

These packages, along with the matplotlib package, are all we need for spatial data visualisation. Python also has many fundamental scientific packages that are relevant in the geospatial domain. Below is a list of particularly fundamental packages. numpy, scipy, and scikit-image are all excellent options for working with rasters as arrays.

An overview of these and other Python spatial packages can be accessed here.

As a programming language, Python can be a CLI tool. However, using Python together with an IDE (Integrated Development Environment) application allows some GUI features to become part of your workflow. IDEs allow the best of both worlds. They provide a place to visually examine data and other software objects, interact with your file system, and draw plots and maps, but your activities are still command-driven - recordable and reproducible. There are several IDEs available for Python. JupyterLab is well-developed and the most widely used option for data science in Python. VSCode and Spyder are other popular options for data science.

Traditional GIS apps are also moving back towards providing a scripting environment for users, further blurring the CLI/GUI divide. ESRI have adopted Python into their software, and QGIS is both Python and R-friendly.

Key Points

  • Many software packages exist for working with geospatial data.

  • Command-line programs allow you to automate and reproduce your work.

  • JupyterLab provides a user-friendly interface for working with Python.


Intro to Raster Data in Python

Overview

Teaching: 40 min
Exercises: 20 min
Questions
  • What is a raster dataset?

  • How do I work with and plot raster data in Python?

  • How can I handle missing or bad data values for a raster?

Objectives
  • Describe the fundamental attributes of a raster dataset.

  • Explore raster attributes and metadata using Python.

  • Read rasters into Python using the rioxarray package.

Things You’ll Need To Complete This Episode

See the lesson homepage for detailed information about the software, data, and other prerequisites you will need to work through the examples in this episode.

In this episode, we will introduce the fundamental principles, packages and metadata/raster attributes that are needed to work with raster data in Python. We will discuss some of the core metadata elements that we need to understand to work with rasters, including Coordinate Reference Systems, no data values, and resolution. We will also explore missing and bad data values as stored in a raster and how Python handles these elements.

We will use 1 package in this episode to work with raster data - rioxarray, which is based on the popular rasterio package for working with rasters and xarray for working with multi-dimensional arrays. Make sure that you have rioxarray installed and imported.

import rioxarray

Introduce the Data

A brief introduction to the datasets can be found on the Geospatial workshop setup page.

For more detailed information about the datasets, check out the Geospatial workshop data page.

Open a Raster and View Raster File Attributes

We will be working with a series of GeoTIFF files in this lesson. The GeoTIFF format contains a set of embedded tags with metadata about the raster data. We can use the function rioxarray.open_rasterio() to read the GeoTIFF file and then inspect this metadata. By evaluating the variable name in a Jupyter notebook, we can get a quick look at the shape and attributes of the data.

surface_HARV = rioxarray.open_rasterio("data/NEON-DS-Airborne-Remote-Sensing/HARV/DSM/HARV_dsmCrop.tif")
surface_HARV
<xarray.DataArray (band: 1, y: 1367, x: 1697)>
[2319799 values with dtype=float64]
Coordinates:
  * band         (band) int64 1
  * y            (y) float64 4.714e+06 4.714e+06 ... 4.712e+06 4.712e+06
  * x            (x) float64 7.315e+05 7.315e+05 ... 7.331e+05 7.331e+05
    spatial_ref  int64 0
Attributes:
    transform:     (1.0, 0.0, 731453.0, 0.0, -1.0, 4713838.0)
    _FillValue:    -9999.0
    scales:        (1.0,)
    offsets:       (0.0,)
    grid_mapping:  spatial_ref

The first call to rioxarray.open_rasterio() opens the file and returns an object that we store in a variable, surface_HARV.

The output tells us that we are looking at an xarray.DataArray, with 1 band, 1367 rows, and 1697 columns. We can also see the number of pixel values in the DataArray, and the type of those pixel values, which is floating point (float64). The DataArray also stores different values for the coordinates of the DataArray. When using rioxarray, the term coordinates refers to spatial coordinates like x and y, but also the band coordinate. Each of these sequences of values has its own data type, like float64 for the spatial coordinates and int64 for the band coordinate. The transform represents the conversion between array coordinates (non-spatial) and spatial coordinates.
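The six transform numbers in the output form an affine mapping from array indices to spatial coordinates. A sketch of that arithmetic in plain Python, using the transform values printed above (the helper function is ours, not part of rioxarray):

```python
# The transform tuple from the output above, in (a, b, c, d, e, f) order:
# x = c + a * col + b * row
# y = f + d * col + e * row
a, b, c, d, e, f = (1.0, 0.0, 731453.0, 0.0, -1.0, 4713838.0)

def index_to_coords(row, col):
    """Spatial coordinates of a pixel's upper-left corner (our own helper)."""
    x = c + a * col + b * row
    y = f + d * col + e * row
    return x, y

print(index_to_coords(0, 0))        # (731453.0, 4713838.0) -- top-left corner
print(index_to_coords(1366, 1696))  # upper-left corner of the bottom-right pixel
```

Note that e is negative: row numbers increase downward through the array while y coordinates increase northward, so moving down a row decreases y.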

This DataArray object also has a number of attributes, accessed through the .rio accessor, such as .rio.crs, .rio.nodata, and .rio.bounds(), which contain the metadata for the file we opened. Note that many of the metadata values are accessed as attributes without (), but bounds() is a method and needs parentheses.

print(surface_HARV.rio.crs)
print(surface_HARV.rio.nodata)
print(surface_HARV.rio.bounds())
print(surface_HARV.rio.width)
print(surface_HARV.rio.height)
EPSG:32618
-9999.0
(731453.0, 4712471.0, 733150.0, 4713838.0)
1697
1367

The Coordinate Reference System, or surface_HARV.rio.crs, is reported as the string EPSG:32618. The nodata value is encoded as -9999.0 and the bounding box corners of our raster are represented by the output of .bounds() as a tuple (like a list but you can’t edit it). The height and width match what we saw when we printed the DataArray, but by using .rio.width we can access these values if we need them in calculations.
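As a quick sanity check (with the values hardcoded from the output above rather than read via .rio), dividing the extent by the pixel counts recovers the raster's resolution:

```python
# Bounds and dimensions as printed by the .rio calls above.
xmin, ymin, xmax, ymax = (731453.0, 4712471.0, 733150.0, 4713838.0)
width, height = 1697, 1367

x_res = (xmax - xmin) / width    # pixel width in CRS units
y_res = (ymax - ymin) / height   # pixel height
print(x_res, y_res)  # 1.0 1.0 -- a 1 m x 1 m pixel size
```

In real code you would use surface_HARV.rio.bounds(), .rio.width, and .rio.height in place of the hardcoded values, which is exactly why having those attributes available for calculations is useful.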

We will be exploring this data throughout this episode. By the end of this episode, you will be able to understand and explain the metadata output.

Data Tip - Object names

To improve code readability, file and object names should be used that make it clear what is in the file. The data for this episode were collected from Harvard Forest so we’ll use a naming convention of datatype_HARV.

After viewing the attributes of our raster, we can examine the raw values of the array with .values:

surface_HARV.values
array([[[408.76998901, 408.22998047, 406.52999878, ..., 345.05999756,
         345.13998413, 344.97000122],
        [407.04998779, 406.61999512, 404.97998047, ..., 345.20999146,
         344.97000122, 345.13998413],
        [407.05999756, 406.02999878, 403.54998779, ..., 345.07000732,
         345.08999634, 345.17999268],
        ...,
        [367.91000366, 370.19000244, 370.58999634, ..., 311.38998413,
         310.44998169, 309.38998413],
        [370.75997925, 371.50997925, 363.41000366, ..., 314.70999146,
         309.25      , 312.01998901],
        [369.95999146, 372.6000061 , 372.42999268, ..., 316.38998413,
         309.86999512, 311.20999146]]])

This can give us a quick view of the values of our array, but only at the corners. Since our raster is loaded in Python as a DataArray type, we can plot it in one line, similar to a pandas DataFrame, with DataArray.plot().

surface_HARV.plot()

Raster plot of the surface model using the viridis color scale

Nice plot! Notice that rioxarray helpfully allows us to plot this raster with spatial coordinates on the x and y axis (this is not the default in many cases with other functions or libraries).

Plotting Tip

For more aesthetically pleasing plots, matplotlib allows you to customize the style with plt.style.use. However, if you want more control of the look of your plot, matplotlib has many more functions to change the position and appearance of plot elements.

Show plot

Here is the result of using a ggplot-like style for our surface model plot.

import matplotlib.pyplot as plt
plt.style.use("ggplot")
surface_HARV.plot()

Surface model plot with the ggplot style applied

This map shows the elevation of our study site in Harvard Forest. From the legend, we can see that the maximum elevation is ~400, but we can’t tell whether this is 400 feet or 400 meters because the legend doesn’t show us the units. We can look at the metadata of our object to see what the units are. Much of the metadata that we’re interested in is part of the CRS, and it can be accessed with .rio.crs. We introduced the concept of a CRS in an earlier lesson (TODO replace link).

Now we will see how features of the CRS appear in our data file and what meanings they have.

View Raster Coordinate Reference System (CRS) in Python

We can view the CRS string associated with our Python object using the .rio.crs attribute.

print(surface_HARV.rio.crs)
EPSG:32618

You can convert the EPSG code to a PROJ4 string with earthpy.epsg, a Python dictionary that maps EPSG codes (keys) to PROJ4 strings (values):

import earthpy
earthpy.epsg['32618']
'+proj=utm +zone=18 +datum=WGS84 +units=m +no_defs'

Challenge

What units are our data in?

Answers

+units=m tells us that our data is in meters. We could also get this information from the attribute surface_HARV.rio.crs.linear_units.

Understanding CRS in Proj4 Format

Let’s break down the pieces of a PROJ4 string. The string contains all of the individual CRS elements that Python or another GIS might need. Each element is prefixed with a + sign, similar to how a .csv file is delimited, or broken up, by a ,. After each + we see the CRS element being defined, for example the projection (proj=) and the datum (datum=).

UTM Proj4 String

Our projection string for surface_HARV specifies the UTM projection as follows:

'+proj=utm +zone=18 +datum=WGS84 +units=m +no_defs'

Note that the zone is unique to the UTM projection. Not all CRSs will have a zone. Image source: Chrismurf at English Wikipedia, via Wikimedia Commons (CC-BY).

The UTM zones across the continental United States. From: https://upload.wikimedia.org/wikipedia/commons/8/8d/Utm-zones-USA.svg
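Since the PROJ4 string is just delimited text, we can sketch its structure in plain Python (no GIS libraries needed; the string is the one shown above):

```python
# A minimal sketch of splitting a PROJ4 string into its "+key=value" elements.
proj4 = '+proj=utm +zone=18 +datum=WGS84 +units=m +no_defs'
elements = {}
for token in proj4.split('+'):
    token = token.strip()
    if not token:
        continue  # skip the empty piece before the first '+'
    key, _, value = token.partition('=')
    elements[key] = value  # flags like 'no_defs' get an empty value
print(elements['proj'], elements['zone'], elements['units'])
```

In practice you would read these elements from the CRS object itself, but seeing the string pulled apart makes the +element structure described above concrete.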

Calculate Raster Min and Max Values

It is useful to know the minimum and maximum values of a raster dataset. In this case, since we are working with elevation data, these values represent the min/max elevation range at our site.

We can compute these and other descriptive statistics with the min and max methods:

print(surface_HARV.min())
print(surface_HARV.max())
<xarray.DataArray ()>
array(305.07000732)
Coordinates:
    spatial_ref  int64 0
<xarray.DataArray ()>
array(416.06997681)
Coordinates:
    spatial_ref  int64 0

The output above shows the minimum and maximum values as zero-dimensional DataArrays, along with their coordinate metadata. By default these statistics are computed over the whole array, rather than for each row in the array.

You could also get each of these values one by one using numpy. What if we wanted to calculate the 25th and 75th percentiles?

import numpy
print(numpy.percentile(surface_HARV, 25))
print(numpy.percentile(surface_HARV, 75))
345.5899963378906
374.2799987792969

Data Tip - Set min and max values

You may notice that numpy.percentile didn’t require an axis=None argument. This is because axis=None is the default for most numpy functions. It’s always good to check out the docs on a function to see what the default arguments are, particularly when working with multi-dimensional image data. To do so, we can use help(numpy.percentile) or ?numpy.percentile if you are using Jupyter Notebook or JupyterLab.
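To see the difference the axis argument makes, here is a small synthetic example (a made-up 2×2 array, not our raster):

```python
import numpy

# With the default axis=None, the statistic is computed over the
# flattened array; with axis=0, we get one value per column.
arr = numpy.array([[1.0, 2.0], [3.0, 4.0]])
whole = numpy.percentile(arr, 50)               # median of all four values
per_column = numpy.percentile(arr, 50, axis=0)  # one median per column
print(whole)       # 2.5
print(per_column)  # [2. 3.]
```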

We can see that the elevation at our site ranges from 305.0700073m to 416.0699768m.

Raster Bands

The Digital Surface Model that we’ve been working with is a single band raster. This means that there is only one dataset stored in the raster: surface elevation in meters for one time period. However, a raster dataset can contain one or more bands.

Multi-band raster image

We can view the number of bands in a raster by looking at the .shape attribute of the DataArray. The band number comes first when GeoTIFFs are read with the open_rasterio() function.

rgb_HARV = rioxarray.open_rasterio("data/NEON-DS-Airborne-Remote-Sensing/HARV/RGB_Imagery/HARV_RGB_Ortho.tif")
rgb_HARV.shape
(3, 2317, 3073)

It’s always a good idea to examine the shape of the raster array you are working with and make sure it’s what you expect. Many functions, especially ones that plot images, expect a raster array to have a particular shape.
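As a sketch of why shape matters, here is how a synthetic band-first array (a small stand-in for a real raster, not the NEON data) can be rearranged into the band-last layout that many plotting functions expect:

```python
import numpy

# open_rasterio returns data with the band axis first: (band, y, x).
raster = numpy.zeros((3, 4, 5))        # synthetic 3-band raster
# Many image-plotting functions expect bands last: (y, x, band).
image = numpy.moveaxis(raster, 0, -1)
print(raster.shape, image.shape)
```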

Jump to a later episode in this series for information on working with multi-band rasters: Work with Multi-band Rasters in Python.

Dealing with Missing Data

Raster data often has a “no data value” associated with it and for raster datasets read in by rioxarray this value is referred to as nodata. This is a value assigned to pixels where data is missing or no data were collected. However, there can be different cases that cause missing data, and it’s common for other values in a raster to represent different cases. The most common example is missing data at the edges of rasters.

By default the shape of a raster is always rectangular, so if we have a dataset whose shape isn’t rectangular, some pixels at the edge of the raster will have no data values. This often happens when the data were collected by a sensor which only flew over part of a defined region.

In the RGB image below, the pixels that are black have no data values. The sensor did not collect data in these areas. When the dataset is read, rioxarray assigns the missing data value from the file’s own metadata to the .rio.nodata attribute. Here, the GeoTIFF’s nodata attribute is set to the value -1.7e+308, and in order to run calculations on this image that ignore these edge values, or plot the image without the nodata values being displayed on the color scale, rioxarray masks them out.

rgb_HARV.plot.imshow()

RGB plot of the HARV site, with black pixels along the edges where no data were collected

From this plot we see something interesting: while our no data values were masked along the edges, the color channels’ no data values don’t all line up. The colored pixels at the edges between white and black result from there being no data in only one or two channels at a given pixel. Since 0 could conceivably represent a valid value for reflectance (the units of our pixel values), it’s good to make sure we are masking values at the edges and not valid data values within the image.

While this plot tells us where we have no data values, the color scale looks strange, because our plotting function expects image values to be normalized within a certain range (0-1 or 0-255). By using rgb_HARV.plot.imshow with the robust=True argument, we can stretch our data between its 2nd and 98th percentiles so that it fits the correct range for plotting purposes.

rgb_HARV.plot.imshow(robust=True)

RGB plot of the HARV site with robust=True, showing the image with nodata values masked

The value that is conventionally used to take note of missing data (the no data value) varies by the raster data type. For floating-point rasters, the figure -3.4e+38 is a common default, and for integers, -9999 is common. Some disciplines have specific conventions that vary from these common values.

In some cases, other nodata values may be more appropriate. A nodata value should be a) outside the range of valid values, and b) a value that fits the data type in use. For instance, if your data ranges continuously from -20 to 100, 0 is not an acceptable nodata value! Or, for categories numbered 1-15, 0 might be fine for nodata, but using -.000003 would force you to save the GeoTIFF on disk as a floating-point raster, resulting in a bigger file.
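A quick way to sanity-check rule a) is sketched below with a hypothetical array (the helper function is illustrative, not part of rioxarray):

```python
import numpy

def is_valid_nodata(candidate, data):
    """A nodata value should fall outside the range of valid data."""
    return not (data.min() <= candidate <= data.max())

# Hypothetical data spanning -20 to 100, as in the example above.
data = numpy.array([-20.0, 0.0, 55.0, 100.0])
print(is_valid_nodata(0.0, data))      # False: 0 lies inside the valid range
print(is_valid_nodata(-9999.0, data))  # True: safely outside it
```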

Key Points

  • The GeoTIFF file format includes metadata about the raster data.

  • rioxarray stores CRS information as a CRS object that can be converted to an EPSG code or PROJ4 string.

  • The GeoTIFF file may or may not store the correct no data value(s).

  • We can find the correct value(s) in the raster’s external metadata or by plotting the raster.

  • rioxarray and xarray are for working with multidimensional arrays as pandas is for working with tabular data.


Reproject Raster Data with Rioxarray

Overview

Teaching: 60 min
Exercises: 20 min
Questions
  • How do I work with raster data sets that are in different projections?

Objectives
  • Reproject a raster in Python using rasterio.

  • Accomplish the same task with rioxarray and xarray.

Things You’ll Need To Complete This Episode

See the lesson homepage for detailed information about the software, data, and other prerequisites you will need to work through the examples in this episode.

Sometimes we encounter raster datasets that do not “line up” when plotted or analyzed. Rasters that don’t line up are most often in different Coordinate Reference Systems (CRS), otherwise known as “projections”. This episode explains how to line up rasters in different, known CRSs.

Raster Projection in Python

If you loaded two rasters with different projections in QGIS 3 or ArcMap/ArcGIS Pro, they would appear to align because this software reprojects them “on-the-fly”. But in R or Python, you’ll need to reproject your data yourself in order to plot or use these rasters together in calculations.

For this episode, we will be working with the Harvard Forest Digital Terrain Model (DTM). This differs from the surface model data we’ve been working with so far: the digital terrain model (DTM) shows the ground level beneath the tree canopy, while the digital surface model (DSM) includes the tops of trees.

Our goal is to get these data into the same projection with the rio.reproject() method so that we can use both rasters to calculate tree canopy height, also called a Canopy Height Model (CHM).

First, we need to read in the DSM and DTM rasters.

Reading in the data with rioxarray looks similar to using rasterio directly, but the output is an xarray object called a DataArray. You can use an xarray.DataArray in calculations just like a numpy array. Calling the variable name of the DataArray also prints out all of its metadata information. Note that the geospatial metadata is not read in if you don’t import rioxarray before calling the open_rasterio function.

import rioxarray

surface_HARV = rioxarray.open_rasterio("data/NEON-DS-Airborne-Remote-Sensing/HARV/DSM/HARV_dsmCrop.tif")
terrain_HARV = rioxarray.open_rasterio("data/NEON-DS-Airborne-Remote-Sensing/HARV/DTM/HARV_dtmCrop_WGS84.tif")

surface_HARV
<xarray.DataArray (band: 1, y: 1367, x: 1697)>
[2319799 values with dtype=float64]
Coordinates:
  * band     (band) int64 1
  * y        (y) float64 4.714e+06 4.714e+06 4.714e+06 ... 4.712e+06 4.712e+06
  * x        (x) float64 7.315e+05 7.315e+05 7.315e+05 ... 7.331e+05 7.331e+05
Attributes:
    transform:      (1.0, 0.0, 731453.0, 0.0, -1.0, 4713838.0)
    crs:            +init=epsg:32618
    res:            (1.0, 1.0)
    is_tiled:       0
    nodatavals:     (-3.4e+38,)
    scales:         (1.0,)
    offsets:        (0.0,)
    AREA_OR_POINT:  Area

We can use the CRS attribute from one of our datasets to reproject the other dataset so that they are both in the same projection. The only argument that is required is the dst_crs argument, which takes the CRS of the result of the reprojection.

terrain_HARV_UTM18 = terrain_HARV.rio.reproject(dst_crs=surface_HARV.rio.crs)

terrain_HARV_UTM18 
<xarray.DataArray (band: 1, y: 1493, x: 1796)>
array([[[-9999., -9999., -9999., ..., -9999., -9999., -9999.],
        [-9999., -9999., -9999., ..., -9999., -9999., -9999.],
        [-9999., -9999., -9999., ..., -9999., -9999., -9999.],
        ...,
        [-9999., -9999., -9999., ..., -9999., -9999., -9999.],
        [-9999., -9999., -9999., ..., -9999., -9999., -9999.],
        [-9999., -9999., -9999., ..., -9999., -9999., -9999.]]])
Coordinates:
  * x            (x) float64 7.314e+05 7.314e+05 ... 7.332e+05 7.332e+05
  * y            (y) float64 4.714e+06 4.714e+06 ... 4.712e+06 4.712e+06
  * band         (band) int64 1
    spatial_ref  int64 0
Attributes:
    transform:      (1.001061424448915, 0.0, 731402.3156760389, 0.0, -1.00106...
    scales:         (1.0,)
    offsets:        (0.0,)
    AREA_OR_POINT:  Area
    _FillValue:     -9999.0
    grid_mapping:   spatial_ref

Data Tip

You might wonder why the result of terrain_HARV.rio.reproject() shows -9999 at the edges, whereas when we read in the data, surface_HARV did not show the -9999 values. This is because xarray by default waits until the last necessary moment before actually running the computations on an xarray DataArray. This form of evaluation is called lazy, as opposed to eager, where functions are computed as soon as they are called. If you ever want a lazy DataArray to reveal its underlying values, you can use the .compute() function. Note that, to keep the output short, only the values in the corners of the array are shown.
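As a plain-Python analogy (generators, not rioxarray itself), lazy evaluation defers work until the results are actually requested:

```python
# A generator expression is "lazy" -- nothing is computed until values
# are requested -- while a list comprehension is "eager".
lazy = (x ** 2 for x in range(5))    # no squares computed yet
eager = [x ** 2 for x in range(5)]   # computed immediately
forced = list(lazy)                  # forcing evaluation, like .compute()
print(forced == eager)  # True
```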

Show code

surface_HARV.compute()
<xarray.DataArray (band: 1, y: 1367, x: 1697)>
array([[[408.76998901, 408.22998047, 406.52999878, ..., 345.05999756,
  345.13998413, 344.97000122],
[407.04998779, 406.61999512, 404.97998047, ..., 345.20999146,
  344.97000122, 345.13998413],
[407.05999756, 406.02999878, 403.54998779, ..., 345.07000732,
  345.08999634, 345.17999268],
...,
[367.91000366, 370.19000244, 370.58999634, ..., 311.38998413,
  310.44998169, 309.38998413],
[370.75997925, 371.50997925, 363.41000366, ..., 314.70999146,
  309.25      , 312.01998901],
[369.95999146, 372.6000061 , 372.42999268, ..., 316.38998413,
  309.86999512, 311.20999146]]])
Coordinates:
* band     (band) int64 1
* y        (y) float64 4.714e+06 4.714e+06 4.714e+06 ... 4.712e+06 4.712e+06
* x        (x) float64 7.315e+05 7.315e+05 7.315e+05 ... 7.331e+05 7.331e+05
Attributes:
    transform:      (1.0, 0.0, 731453.0, 0.0, -1.0, 4713838.0)
    crs:            +init=epsg:32618
    res:            (1.0, 1.0)
    is_tiled:       0
    nodatavals:     (-3.4e+38,)
    scales:         (1.0,)
    offsets:        (0.0,)
    AREA_OR_POINT:  Area

And we can also save the DataArray that we created with rioxarray to a file.

reprojected_path = "data/NEON-DS-Airborne-Remote-Sensing/HARV/DTM/HARV_dtmCrop_UTM18.tif"
terrain_HARV_UTM18.rio.to_raster(reprojected_path)

Exercise

Inspect the metadata for terrain_HARV_UTM18 and surface_HARV. Are the projections the same? What metadata attributes are different? How might this affect calculations we make between arrays?

Solution

# view crs for DTM
print(terrain_HARV_UTM18.rio.crs)

# view crs for DSM
print(surface_HARV.rio.crs)
EPSG:32618
EPSG:32618

Good, the CRSs are the same. But …

# view nodata value for DTM
print(terrain_HARV_UTM18.rio.nodata)

# view nodata value for DSM
print(surface_HARV.rio.nodata)
-9999.0
-3.4e+38

The nodata values are different. Before we plot or calculate both of these DataArrays together, we should make sure they have the same nodata value. Furthermore …

# view shape for DTM
print(terrain_HARV_UTM18.shape)

# view shape for DSM
print(surface_HARV.shape)
(1, 1492, 1801)
(1, 1367, 1697)

The shapes are not the same which means these data cover slightly different extents and locations. In the next episode we will need to align these DataArrays before running any calculations. rioxarray provides functionality to align multiple geospatial DataArrays.

Let’s plot our handiwork so far! We can use the xarray.DataArray.plot function to show the DTM. But if we run the following code, something doesn’t look right …

import matplotlib.pyplot as plt
plt.figure()
terrain_HARV_UTM18.plot(cmap="viridis")
plt.title("Harvard Forest Digital Terrain Model")

Plot of the reprojected DTM with a washed-out color scale

Challenge

Whoops! What did we forget to do to the DTM DataArray before plotting?

Answers

Our array has a nodata value, -9999.0, which causes the color of our plot to be stretched over too wide a range. We’d like to only display valid values, so before plotting we can filter out the nodata values using the where() function and the .rio.nodata attribute of our DataArray.

terrain_HARV_UTM18_valid = terrain_HARV_UTM18.where(
    terrain_HARV_UTM18 != terrain_HARV_UTM18.rio.nodata)
plt.figure()
terrain_HARV_UTM18_valid.plot(cmap="viridis")
plt.title("Harvard Forest Digital Terrain Model")

Plot of the DTM with nodata values filtered out

If we had saved terrain_HARV_UTM18 to a file and then read it in with open_rasterio’s masked=True argument, the raster’s nodata value would have been masked, and we would not need to use the where() function to do the masking before plotting.

Challenge: Reproject, then Plot a Digital Terrain Model

Create 2 maps in a UTM projection of the San Joaquin Experimental Range field site, using the SJER_dtmCrop.tif and SJER_dsmCrop_WGS84.tif files. Use rioxarray and matplotlib.pyplot (to add a title). Reproject the data as necessary to make sure each map is in the same UTM projection, and save the reprojected file with the file name “data/NEON-DS-Airborne-Remote-Sensing/SJER/DSM/SJER_dsmCrop_WGS84.tif”.

Answers

If we read in these files with the argument masked=True, then the nodata values will be masked automatically and set to numpy.nan, or Not a Number. This can make plotting easier, since only valid raster values will be shown. However, it’s important to remember that numpy.nan values still take up space in our raster just like nodata values, and thus they still affect the shape of the raster. Rasters need to be the same shape for raster math to work in Python. In the next lesson, we will examine how to prepare rasters of different shapes for calculations.
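A small synthetic illustration of this point (made-up values, not the NEON data):

```python
import numpy

# numpy.nan marks missing values but still occupies cells,
# so the array's shape is unchanged.
vals = numpy.array([1.0, numpy.nan, 3.0])
print(vals.shape)            # (3,) -- the nan still takes up a cell
print(numpy.nanmean(vals))   # 2.0 -- nan-aware statistics skip missing values
```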

terrain_SJER = rioxarray.open_rasterio("data/NEON-DS-Airborne-Remote-Sensing/SJER/DTM/SJER_dtmCrop.tif", masked=True)
surface_SJER = rioxarray.open_rasterio("data/NEON-DS-Airborne-Remote-Sensing/SJER/DSM/SJER_dsmCrop_WGS84.tif", masked=True)
reprojected_surface_model = surface_SJER.rio.reproject(dst_crs=terrain_SJER.rio.crs)
plt.figure()
reprojected_surface_model.plot()
plt.title("SJER Reprojected Surface Model")
reprojected_surface_model.rio.to_raster("data/NEON-DS-Airborne-Remote-Sensing/SJER/DSM/SJER_dsmCrop_WGS84.tif")
plt.figure()
terrain_SJER.plot()
plt.title("SJER Terrain Model")

Plots of the reprojected SJER surface model and the SJER terrain model

Key Points

  • In order to plot or do calculations with two raster data sets, they must be in the same CRS.

  • rioxarray and xarray provide simple syntax for accomplishing fundamental geospatial operations.

  • rioxarray is built on top of rasterio, and you can use rasterio directly to accomplish fundamental geospatial operations.


Raster Calculations in Python

Overview

Teaching: 40 min
Exercises: 20 min
Questions
  • How do I subtract one raster from another and extract pixel values for defined locations?

Objectives
  • Perform a subtraction between two rasters using python’s builtin math operators to generate a Canopy Height Model (CHM).

  • Calculate a classified raster using the CHM values.

Things You’ll Need To Complete This Episode

See the lesson homepage for detailed information about the software, data, and other prerequisites you will need to work through the examples in this episode.

We often want to combine values of and perform calculations on rasters to create a new output raster. This episode covers how to subtract one raster from another using basic raster math. It also covers how to extract pixel values from a set of locations - for example a buffer region around locations at a field site.

Raster Calculations in Python & Canopy Height Models

We often want to perform calculations on two or more rasters to create a new output raster. For example, if we are interested in mapping the heights of trees across an entire field site, we might want to calculate the difference between the Digital Surface Model (DSM, tops of trees) and the Digital Terrain Model (DTM, ground level). The resulting dataset is referred to as a Canopy Height Model (CHM) and represents the actual height of trees, buildings, etc. with the influence of ground elevation removed.

Source: National Ecological Observatory Network (NEON)
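The subtraction itself is simple elementwise math; here is a toy illustration with made-up elevation values (not the NEON data):

```python
import numpy

# Subtracting ground elevation (DTM) from surface elevation (DSM)
# leaves the height of objects above the ground (CHM).
dsm = numpy.array([[410.0, 408.0], [407.0, 406.0]])  # tops of trees, meters
dtm = numpy.array([[390.0, 395.0], [400.0, 401.0]])  # ground level, meters
chm = dsm - dtm
print(chm)  # heights above ground
```

The same elementwise subtraction works on xarray DataArrays, which is what we do below with the real rasters.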

More Resources

Load the Data

For this episode, we will use the DTM and DSM from the NEON Harvard Forest Field site and San Joaquin Experimental Range, which we already have loaded from previous episodes. Let’s load them again with open_rasterio using the argument masked=True.

import rioxarray

surface_HARV = rioxarray.open_rasterio("data/NEON-DS-Airborne-Remote-Sensing/HARV/DSM/HARV_dsmCrop.tif", masked=True)
terrain_HARV_UTM18 = rioxarray.open_rasterio("data/NEON-DS-Airborne-Remote-Sensing/HARV/DTM/HARV_dtmCrop_UTM18.tif", masked=True)

Raster Math

We can perform raster calculations by subtracting (or adding, multiplying, etc.) two rasters. In the geospatial world, we call this “raster math”, and typically it refers to operations on rasters that have the same width and height (including nodata pixels). We saw from the last episode’s challenge that this is not the case with our DTM and DSM. Even though the reproject function gets our rasters into the same CRS, they have slightly different extents. We can now use the reproject_match function, which both reprojects and clips a raster to the CRS and extent of another raster.

terrain_HARV_matched = terrain_HARV_UTM18.rio.reproject_match(surface_HARV)

In fact, we could have used reproject_match on the original DTM model, “HARV_dtmCrop_WGS84.tif”. If we had, our DTM would have been interpolated by reprojection one fewer time, though this has a negligible impact on the data for our purposes.

Let’s subtract the DTM from the DSM to create a Canopy Height Model. We’ll use rioxarray so that we can easily plot our result and keep track of the metadata for our CHM.

canopy_HARV = surface_HARV - terrain_HARV_matched
canopy_HARV.compute()

We can now plot the output CHM. If we use the argument robust=True, our plot’s color values are stretched between the 2nd and 98th percentiles of the data, which results in clearer distinctions between forested and non-forested areas.

import matplotlib.pyplot as plt # in case it has not been imported recently
canopy_HARV.plot(cmap="viridis")
plt.title("Canopy Height Model for Harvard Forest, Z Units: Meters")

Plot of the Canopy Height Model for Harvard Forest

Notice that the range of values for the output CHM is between 0 and 30 meters. Does this make sense for trees in Harvard Forest?

Maps are great, but it can also be informative to plot histograms of values to better understand the distribution. We can accomplish this using a built-in xarray method we have already been using: plot.

plt.figure()
plt.style.use('ggplot') # adds a style to improve the aesthetics
canopy_HARV.plot.hist()
plt.title("Histogram of Canopy Height in Meters")

Challenge: Explore CHM Raster Values

It’s often a good idea to explore the range of values in a raster dataset just like we might explore a dataset that we collected in the field. The histogram we just made is a good start but there’s more we can do to improve our understanding of the data.

  1. What are the minimum and maximum values for the Harvard Forest Canopy Height Model (canopy_HARV) that we just created?
  2. Plot a histogram with 100 bins instead of 8. What do you notice that wasn’t clear before?
  3. Plot the canopy_HARV raster using breaks that make sense for the data. Include an appropriate color palette for the data, plot title and no axes ticks / labels.

Answers

1) Recall, if there were nodata values in our raster like -9999.0, we would need to filter them out with .where().

canopy_HARV.min().values
canopy_HARV.max().values
array(-1.)
array(38.16998291)

2) Increasing the number of bins gives us a much clearer view of the distribution.

canopy_HARV.plot.hist(bins=100)

Classifying Continuous Rasters in Python

Now that we have a sense of the distribution of our canopy height raster, we can reduce the complexity of our map by classifying it. Classification involves sorting raster values into unique classes, and in python, we can accomplish this using the numpy.digitize function.

import numpy as np

# Defines the bins for pixel values
class_bins = [canopy_HARV.min().values, 2, 10, 20, np.inf]

# Classifies the original canopy height model array
canopy_height_classified = np.digitize(canopy_HARV, class_bins)
print(type(canopy_height_classified))
<class 'numpy.ndarray'>

The result is a numpy.ndarray, but we can put this into a DataArray along with the spatial metadata from our canopy_HARV, so that our resulting plot shows the spatial coordinates.

import xarray
canopy_height_classified = xarray.DataArray(canopy_height_classified, coords = canopy_HARV.coords)
plt.style.use("default")
plt.figure()
canopy_height_classified.plot()

Plot Tip

This plot looks nice but its legend could be improved. matplotlib.pyplot has all the tools needed to create a custom legend with unique labels for our classified map. See the Earth Lab’s lesson for more details.

Reassigning Geospatial Metadata and Exporting a GeoTIFF

When we computed the CHM, the output no longer contains a reference to a nodata value, like -9999.0, which was associated with the DTM and DSM. Some calculations, like numpy.digitize, can remove all geospatial metadata. Of what can be lost, the CRS and nodata value are particularly important to keep track of. Before we export the product of our calculation to a GeoTIFF with the to_raster function, we need to reassign this metadata.

canopy_HARV.rio.write_crs(surface_HARV.rio.crs, inplace=True)
canopy_HARV.rio.set_nodata(-9999.0, inplace=True)

When we write this raster object to a GeoTIFF file we’ll name it CHM_HARV.tif. This name allows us to quickly remember both what the data contains (CHM data) and for where (HARVard Forest). The to_raster() function by default writes the output file to your working directory unless you specify a full file path.

import os
os.makedirs("./data/outputs/", exist_ok=True)
canopy_HARV.rio.to_raster("./data/outputs/CHM_HARV.tif")

Challenge: Explore the NEON San Joaquin Experimental Range Field Site

Data are often more interesting and powerful when we compare them across various locations. Let’s compare some data collected over Harvard Forest to data collected in Southern California. The NEON San Joaquin Experimental Range (SJER) field site located in Southern California has a very different ecosystem and climate than the NEON Harvard Forest Field Site in Massachusetts.

Import the SJER DSM and DTM raster files and create a Canopy Height Model. Then compare the two sites. Be sure to name your Python objects and outputs carefully, as follows: objectType_SJER (e.g. surface_SJER). This will help you keep track of data from different sites!

  1. You should have the DSM and DTM data for the SJER site already loaded from the Reproject Raster Data with Rioxarray episode. Don’t forget to check the CRSs and units of the data.
  2. Create a CHM from the two raster layers and check to make sure the data are what you expect.
  3. Plot the CHM from SJER.
  4. Export the SJER CHM as a GeoTIFF.
  5. Compare the vegetation structure of the Harvard Forest and San Joaquin Experimental Range.

Answers

1) Read in the data again if you haven’t already with masked=True.

surface_SJER = rioxarray.open_rasterio("data/NEON-DS-Airborne-Remote-Sensing/SJER/DSM/SJER_dsmCrop.tif", masked=True)
terrain_SJER_UTM18 = rioxarray.open_rasterio("data/NEON-DS-Airborne-Remote-Sensing/SJER/DTM/SJER_dtmCrop_WGS84.tif", masked=True)
print(terrain_SJER_UTM18.shape)
print(surface_SJER.shape)

2) Reproject and clip one raster to the extent of the smaller raster using reproject_match. Your output raster may have nodata values at the border; these are fine and can be removed for later calculations if needed. Then, calculate the CHM.

terrain_SJER_UTM18_matched = terrain_SJER_UTM18.rio.reproject_match(surface_SJER)
canopy_SJER = surface_SJER - terrain_SJER_UTM18_matched

3) Plot the CHM with the same color map as HARV and save the CHM to the outputs folder.

plt.figure()
canopy_SJER.plot(robust=True, cmap="viridis")
plt.title("Canopy Height Model for San Joaquin Experimental Range, Z Units: Meters")
plt.savefig("fig/03-SJER-CHM-map-05.png")
canopy_SJER.rio.to_raster("./data/outputs/CHM_SJER.tif")

4) Compare the SJER and HARV CHMs. Tree heights are much shorter in SJER. You can confirm this by looking at the histograms of the two CHMs.

fig, ax = plt.subplots(figsize=(9, 6))
canopy_HARV.plot.hist(ax=ax, bins=50, color="green")
canopy_SJER.plot.hist(ax=ax, bins=50, color="brown")

Key Points

  • Python’s built in math operators are fast and simple options for raster math.

  • numpy.digitize can be used to classify raster values in order to generate a less complicated map.

  • DataArrays can be created from scratch from numpy arrays as well as read in from existing files.


Work With Multi-Band Rasters in Python FIXME

Overview

Teaching: 40 min
Exercises: 20 min
Questions
  • How can I visualize individual and multiple bands in a raster object?

Objectives
  • Identify a single vs. a multi-band raster file.

  • Import multi-band rasters into Python using the rasterio package.

  • Plot multi-band color image rasters in Python using the earthpy package.

FIXME

Key Points

  • A single raster file can contain multiple bands or layers.

  • Individual bands within a DataArray can be accessed, analyzed, and visualized using the same plot function as single bands.


Open and Plot Shapefiles in Python

Overview

Teaching: 20 min
Exercises: 10 min
Questions
  • How can I distinguish between and visualize point, line and polygon vector data?

Objectives
  • Know the difference between point, line, and polygon vector elements.

  • Load point, line, and polygon shapefiles with geopandas.

  • Access the attributes of a spatial object with geopandas.

Things You’ll Need To Complete This Episode

See the lesson homepage for detailed information about the software, data, and other prerequisites you will need to work through the examples in this episode.

Starting with this episode, we will be moving from working with raster data to working with vector data. In this episode, we will open and plot point, line and polygon vector data stored in shapefile format in Python. These data refer to the NEON Harvard Forest field site, which we have been working with in previous episodes. In later episodes, we will learn how to work with raster and vector data together and combine them into a single plot.

Import Shapefiles

We will use the geopandas package to work with vector data in Python. We will also use the rioxarray package.

import geopandas as gpd

The shapefiles that we will import are:

  • a polygon shapefile representing our field site boundary (HarClip_UTMZ18),

  • a line shapefile representing roads (HARV_roads), and

  • a point shapefile representing the location of the flux tower (HARVtower_UTM18N).

The first shapefile that we will open contains the boundary of our study area (or our Area Of Interest or AOI, hence the name aoi_boundary). To import shapefiles we use the geopandas function read_file().

Let’s import our AOI:

aoi_boundary_HARV = gpd.read_file(
  "data/NEON-DS-Site-Layout-Files/HARV/HarClip_UTMZ18.shp")

Shapefile Metadata & Attributes

When we import the HarClip_UTMZ18 shapefile layer into Python (as our aoi_boundary_HARV object) it comes in as a DataFrame, specifically a GeoDataFrame. read_file() also automatically stores geospatial information about the data. We are particularly interested in describing the format, CRS, extent, and other components of the vector data, and the attributes which describe properties associated with each individual vector object.

Data Tip

The Explore and Plot by Shapefile Attributes episode provides more information on both metadata and attributes and using attributes to subset and plot data.

Spatial Metadata

Key metadata for all shapefiles include:

  1. Object Type: the class of the imported object.
  2. Coordinate Reference System (CRS): the projection of the data.
  3. Extent: the spatial extent (i.e. geographic area that the shapefile covers) of the shapefile. Note that the spatial extent for a shapefile represents the combined extent for all spatial objects in the shapefile.

Each GeoDataFrame has a "geometry" column that contains geometries. In the case of our aoi_boundary_HARV, this geometry is represented by a shapely.geometry.Polygon object. geopandas uses the shapely library to represent polygons, lines, and points, so the types are inherited from shapely.

We can view shapefile metadata using the .crs, .bounds and .type attributes. First, let’s view the geometry type for our AOI shapefile using the .type attribute of the GeoDataFrame, aoi_boundary_HARV.

aoi_boundary_HARV.type
0    Polygon
dtype: object

To view the CRS metadata:

aoi_boundary_HARV.crs
{'init': 'epsg:32618'}
import earthpy
earthpy.epsg['32618']
'+proj=utm +zone=18 +datum=WGS84 +units=m +no_defs'

Our data is in the CRS UTM zone 18N. The CRS is critical to interpreting the object’s extent values, as it specifies units. To find the extent of our AOI in projected coordinates, we can use the .bounds attribute:

aoi_boundary_HARV.bounds
            minx          miny           maxx          maxy
0  732128.016925  4.713209e+06  732251.102892  4.713359e+06

The spatial extent of a shapefile or shapely spatial object represents the geographic “edge” or location that is the furthest north, south, east and west. Thus it represents the overall geographic coverage of the spatial object. Image Source: National Ecological Observatory Network (NEON).

Extent image

We can convert these coordinates to a bounding box, or index the dataframe to access the geometry directly. Either of these polygons can be used to clip rasters (more on that later).
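For instance, the four corner points of such a bounding box can be derived from the bounds in plain Python (coordinates below are illustrative, rounded from the output above):

```python
# Deriving the corner coordinates of a bounding box from
# (minx, miny, maxx, maxy) bounds, counter-clockwise from the lower left.
minx, miny, maxx, maxy = 732128.0, 4713209.0, 732251.0, 4713359.0
corners = [(minx, miny), (maxx, miny), (maxx, maxy), (minx, maxy)]
print(corners)
```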

Reading a Shapefile from a csv

So far we have been loading file formats that were specifically built to hold spatial information. But often, point data is stored in table format, with a column for the x coordinates and a column for the y coordinates. The easiest way to get this type of data into a GeoDataFrame is with the geopandas function geopandas.points_from_xy, which takes list-like sequences of x and y coordinates. In this case, we can get these list-like sequences from columns of a pandas DataFrame that we get from read_csv.

# we get the projection of the point data from our Canopy Height Model,
# after examining the pandas DataFrame and seeing that the CRSs are the same
import rioxarray
CHM_HARV = rioxarray.open_rasterio(
  "data/NEON-DS-Airborne-Remote-Sensing/HARV/CHM/HARV_chmCrop.tif")

# plotting locations in CRS coordinates using CHM_HARV's CRS
plot_locations_HARV = pd.read_csv(
  "data/NEON-DS-Site-Layout-Files/HARV/HARV_PlotLocations.csv")
plot_locations_HARV = gpd.GeoDataFrame(plot_locations_HARV, 
                    geometry=gpd.points_from_xy(plot_locations_HARV.easting, plot_locations_HARV.northing), 
                    crs=CHM_HARV.rio.crs)
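Conceptually, points_from_xy pairs each x value with its corresponding y value. A stdlib-only sketch of the same idea, with made-up plot IDs and coordinates:

```python
# Toy rows mirroring the structure of the plot locations CSV
# (plot IDs and coordinate values are made up for illustration).
rows = [
    {"plot_id": "A", "easting": 731405.3, "northing": 4713456.0},
    {"plot_id": "B", "easting": 731934.3, "northing": 4713415.0},
]

# points_from_xy essentially zips the two columns into (x, y) pairs
points = [(row["easting"], row["northing"]) for row in rows]
```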

Plotting a Shapefile

Any GeoDataFrame can be plotted in CRS units to view the shape of the object with .plot().

aoi_boundary_HARV.plot()

We can customize our boundary plot by setting the figsize, edgecolor, and color. Making some polygons transparent will come in handy when we need to add multiple spatial datasets to a single plot.

aoi_boundary_HARV.plot(figsize=(5,5), edgecolor="purple", facecolor="None")

Under the hood, geopandas is using matplotlib to generate this plot. In the next episode we will see how we can add DataArrays and other shapefiles to this plot to start building an informative map of our area of interest.

Spatial Data Attributes

We introduced the idea of spatial data attributes in an earlier lesson. Now we will explore how to use spatial data attributes stored in our data to plot different features.

Challenge: Import Line and Point Shapefiles

Using the steps above, import the HARV_roads and HARVtower_UTM18N layers into Python using geopandas. Name the HARV_roads shapefile as the variable lines_HARV and the HARVtower_UTM18N shapefile point_HARV.

Answer the following questions:

  1. What type of Python spatial object is created when you import each layer?

  2. What is the CRS and extent (bounds) for each object?

  3. Do the files contain points, lines, or polygons?

  4. How many spatial objects are in each file?

Answers

First we import the data:

lines_HARV = gpd.read_file("data/NEON-DS-Site-Layout-Files/HARV/HARV_roads.shp")
point_HARV = gpd.read_file("data/NEON-DS-Site-Layout-Files/HARV/HARVtower_UTM18N.shp")

Then we check the types:

lines_HARV.type
point_HARV.type

We also check the CRS and extent of each object:

print(lines_HARV.crs)
print(lines_HARV.bounds)
print(point_HARV.crs)
print(point_HARV.bounds)

To see the number of objects in each file, we can look at the output printed when we display the results in a Jupyter notebook, or call len() on a GeoDataFrame. lines_HARV contains 13 features (all lines) and point_HARV contains only one point.

Key Points

  • Shapefile metadata include geometry type, CRS, and extent.

  • Load spatial objects into Python with the geopandas.read_file() method.

  • Spatial objects can be plotted directly with geopandas.GeoDataFrame.plot().


Explore and Plot by Shapefile Attributes

Overview

Teaching: 40 min
Exercises: 20 min
Questions
  • How can I compute on the attributes of a spatial object?

Objectives
  • Query attributes of a spatial object.

  • Subset spatial objects using specific attribute values.

  • Plot a shapefile, colored by unique attribute values.

# learners will have this data loaded from previous episodes
point_HARV = gpd.read_file("data/NEON-DS-Site-Layout-Files/HARV/HARVtower_UTM18N.shp")
lines_HARV = gpd.read_file("data/NEON-DS-Site-Layout-Files/HARV/HARV_roads.shp")
aoi_boundary_HARV = gpd.read_file(
  "data/NEON-DS-Site-Layout-Files/HARV/HarClip_UTMZ18.shp")

Things You’ll Need To Complete This Episode

See the lesson homepage for detailed information about the software, data, and other prerequisites you will need to work through the examples in this episode.

This episode continues our discussion of shapefile attributes and covers how to work with shapefile attributes in Python. It covers how to identify and query shapefile attributes, as well as how to subset shapefiles by specific attribute values. Finally, we will learn how to plot a shapefile according to a set of attribute values.

Load the Data

We will continue using the geopandas, rioxarray, and matplotlib.pyplot packages in this episode. Make sure that you have these packages loaded. We will continue to work with the three shapefiles that we loaded in the Open and Plot Shapefiles in Python episode.

Query Shapefile Metadata

As we discussed in the Open and Plot Shapefiles in Python episode, we can view metadata associated with a GeoDataFrame using its attributes.

We started to explore our point_HARV object in the previous episode. We can view the object with point_HARV or print a summary of the object itself to the console.

point_HARV

We can view the columns in lines_HARV with .columns to count the number of attributes associated with a spatial object. Note that the geometry is just another column and counts towards the total.

lines_HARV.columns

Challenge: Attributes for Different Spatial Classes

Explore the attributes associated with the point_HARV and aoi_boundary_HARV spatial objects.

  1. How many attributes does each have?
  2. Who owns the site in the point_HARV data object?
  3. Which of the following is NOT an attribute of the point_HARV data object?

    A) Latitude B) County C) Country

Answers

1) To find the number of attributes, we call len() on the .columns attribute:

print(len(point_HARV.columns))
print(len(aoi_boundary_HARV.columns))

2) Ownership information is in a column named Ownership:

point_HARV.Ownership

3) To see a list of all of the attributes, we can use the .columns attribute:

point_HARV.columns

“Country” is not an attribute of this object.

Explore Values within One Attribute

We can explore individual values stored within a particular attribute. Comparing attributes to a spreadsheet or a data frame, this is similar to exploring values in a column. We did this with the gapminder dataframe in an earlier lesson. For GeoDataFrames, we can use the same syntax: GeoDataFrame.attributeName or GeoDataFrame["attributeName"].

We can see the contents of the TYPE field of our lines shapefile:

lines_HARV.TYPE

To see only unique values within the TYPE field, we can use the np.unique() function for extracting the possible values of a categorical (or numerical) variable (make sure numpy is loaded with import numpy as np).

np.unique(lines_HARV.TYPE)
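If numpy is not at hand, the same result can be sketched with the standard library (toy TYPE values for illustration):

```python
# np.unique returns the sorted unique values; sorted(set(...)) does
# the same for a plain list (toy TYPE values for illustration).
types = ["footpath", "woods road", "footpath", "boardwalk", "stone wall"]
unique_types = sorted(set(types))
```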

Subset Shapefiles

We can select a subset of features from a spatial object in Python using boolean indexing, just like with pandas data frames.

For example, we might be interested only in features that are of TYPE “footpath”. Once we subset out this data, we can use it as input to other code so that code only operates on the footpath lines.

footpath_HARV = lines_HARV[lines_HARV.TYPE == "footpath"]
len(footpath_HARV)
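Under the hood, the expression inside the square brackets builds a boolean mask, and only the rows where the mask is True are kept. A stdlib-only sketch of the same idea, with made-up TYPE values:

```python
# Toy stand-in for the TYPE column (values made up for illustration).
types = ["woods road", "footpath", "stone wall", "footpath", "boardwalk"]

# Build a boolean mask, then keep only the rows where it is True.
mask = [t == "footpath" for t in types]
footpaths = [t for t, keep in zip(types, mask) if keep]
```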

Our subsetting operation reduces the features count to 2. This means that only two feature lines in our spatial object have the attribute TYPE == footpath. We can plot only the footpath lines:

footpath_HARV.plot()

There are two features in our footpaths subset. Why does the plot look like there is only one feature? Let’s adjust the colors used in our plot. If we have 2 features in our vector object, we can plot each using a unique color by assigning a color map, or cmap, to the geometries/rows in our GeoDataFrame. We can also alter the default line thickness by using the linewidth parameter, as the default value can be hard to see.

footpath_HARV.plot(cmap="viridis", linewidth=4)

Now, we see that there are in fact two features in our plot!

Challenge: Subset Spatial Line Objects Part 1

Subset out all woods road lines from the lines layer and plot them. There are many more color maps to use, so if you’d like, do a web search to find a matplotlib cmap that works better for this plot than viridis.

Answers

First we will save an object with only the woods road lines:

woods_road_HARV = lines_HARV[lines_HARV.TYPE == "woods road"]

Let’s check how many features there are in this subset:

len(woods_road_HARV)

Now let’s plot that data:

woods_road_HARV.plot(cmap="viridis", linewidth=3)

Adjust Line Width

We adjusted the line color earlier by applying an arbitrary color map. If we want a unique line color for each attribute category in our GeoDataFrame, we can use the column argument, as well as some style arguments to improve the visuals.

We already know that we have four different TYPE levels in the lines_HARV object, so we will set four different line colors.

import matplotlib.pyplot as plt
plt.style.use("ggplot")
lines_HARV.plot(column="TYPE", linewidth=3, legend=True, figsize=(16,10))

Our map is starting to come together; in the next lesson we will add the Canopy Height Model that we calculated in an earlier episode.

Challenge: Plot Polygon by Attribute

  1. Create a map of the state boundaries in the United States using the data located in your downloaded data folder: NEON-DS-Site-Layout-Files/US-Boundary-Layers/US-State-Boundaries-Census-2014. Apply a fill color to each state using its region value. Add a legend.

Answers

First we read in the data and check how many levels there are in the region column:

state_boundary_US = gpd.read_file(
  "data/NEON-DS-Site-Layout-Files/US-Boundary-Layers/US-State-Boundaries-Census-2014.shp")

np.unique(state_boundary_US.region)

Now we can create our plot:

state_boundary_US.plot(column = "region", linewidth = 2, legend = True, figsize=(20,5))

Key Points

  • A GeoDataFrame in geopandas is similar to standard pandas data frames and can be manipulated using the same functions.

  • Almost any feature of a plot can be customized using the various functions and options in the matplotlib package.


Plot Multiple Shapefiles with Geopandas FIXME

Overview

Teaching: 40 min
Exercises: 20 min
Questions
  • How can I create map compositions with custom legends using geopandas?

  • How can I plot raster and vector data together?

Objectives
  • Plot multiple shapefiles in the same plot.

  • Apply custom symbols to spatial objects in a plot.

  • Create a multi-layered plot with raster and vector data.

Things You’ll Need To Complete This Episode

See the lesson homepage for detailed information about the software, data, and other prerequisites you will need to work through the examples in this episode.

This episode explains how to crop a raster using the extent of a vector shapefile. We will also cover how to extract values from a raster that occur within a set of polygons, or in a buffer (surrounding) region around a set of points.

# Learners will have these data and libraries loaded from earlier episodes

import rioxarray
import geopandas as gpd
import matplotlib.pyplot as plt
import pandas as pd

# shapefiles
point_HARV = gpd.read_file("data/NEON-DS-Site-Layout-Files/HARV/HARVtower_UTM18N.shp")
lines_HARV = gpd.read_file("data/NEON-DS-Site-Layout-Files/HARV/HARV_roads.shp")
aoi_boundary_HARV = gpd.read_file("data/NEON-DS-Site-Layout-Files/HARV/HarClip_UTMZ18.shp")

# CHM
CHM_HARV = rioxarray.open_rasterio(
  "data/NEON-DS-Airborne-Remote-Sensing/HARV/CHM/HARV_chmCrop.tif")

plot_locations_HARV = pd.read_csv(
  "data/NEON-DS-Site-Layout-Files/HARV/HARV_PlotLocations.csv")
plot_locations_HARV = gpd.GeoDataFrame(plot_locations_HARV, 
                    geometry=gpd.points_from_xy(plot_locations_HARV.easting, plot_locations_HARV.northing), 
                    crs=CHM_HARV.rio.crs)

Crop a Raster to Vector Extent

We often work with spatial layers that have different spatial extents. The spatial extent of a shapefile or spatial object represents the geographic “edge” or location that is the furthest north, south, east, and west. Thus it represents the overall geographic coverage of the spatial object.

Extent illustration Image Source: National Ecological Observatory Network (NEON)

The graphic below illustrates the extent of several of the spatial layers that we have worked with in this workshop:

# code not shown, for demonstration purposes only 


Frequent use cases of cropping a raster file include reducing file size and creating maps. Sometimes we have a raster file that is much larger than our study area or area of interest. It is often more efficient to crop the raster to the extent of our study area to reduce file sizes as we process our data. Cropping a raster can also be useful when creating pretty maps so that the raster layer matches the extent of the desired vector layers.

Crop a Raster Using Vector Extent

We can use the crop() function to crop a raster to the extent of another spatial object. To do this, we need to specify the raster to be cropped and the spatial object that will be used to crop the raster. The extent of the spatial object is used as the cropping boundary.
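Conceptually, cropping keeps only the grid cells that fall inside the cropping extent. A stdlib-only sketch with a toy grid and a toy extent:

```python
# Toy 10x10 grid of cell-center coordinates standing in for a raster.
cells = [(x, y) for x in range(10) for y in range(10)]

# Toy cropping extent (xmin, xmax, ymin, ymax), made up for illustration.
xmin, xmax, ymin, ymax = 2, 5, 3, 7

# Cropping keeps only the cells whose centers fall inside the extent.
cropped = [(x, y) for (x, y) in cells
           if xmin <= x <= xmax and ymin <= y <= ymax]
```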

To illustrate this, we will crop the Canopy Height Model (CHM) to only include the area of interest (AOI). Let’s start by plotting the full extent of the CHM data and overlay where the AOI falls within it. The boundaries of the AOI will be colored blue, and we use fill = NA to make the area transparent.

```{r crop-by-vector-extent}
ggplot() +
  geom_raster(data = CHM_HARV_df, aes(x = x, y = y, fill = HARV_chmCrop)) +
  scale_fill_gradientn(name = "Canopy Height", colors = terrain.colors(10)) +
  geom_sf(data = aoi_boundary_HARV, color = "blue", fill = NA) +
  coord_sf()
```


Now that we have visualized the area of the CHM we want to subset, we can
perform the cropping operation. We are going to create a new object with only
the portion of the CHM data that falls within the boundaries of the AOI. The function `crop()` is from the raster package and doesn't know how to deal with `sf` objects. Therefore, we first need to convert `aoi_boundary_HARV` from an `sf` object to a "Spatial" object.

```{r}
CHM_HARV_Cropped <- crop(x = CHM_HARV, y = as(aoi_boundary_HARV, "Spatial"))
```

Now we can plot the cropped CHM data, along with a boundary box showing the full CHM extent. However, remember, since this is raster data, we need to convert it to a data frame in order to plot using ggplot. To get the boundary box from CHM, the st_bbox() function will extract the 4 corners of the rectangle that encompass all the features contained in this object. The st_as_sfc() function converts these 4 coordinates into a polygon that we can plot:

```{r show-cropped-area}
CHM_HARV_Cropped_df <- as.data.frame(CHM_HARV_Cropped, xy = TRUE)

ggplot() +
  geom_sf(data = st_as_sfc(st_bbox(CHM_HARV)), fill = "green", color = "green", alpha = .2) +
  geom_raster(data = CHM_HARV_Cropped_df, aes(x = x, y = y, fill = HARV_chmCrop)) +
  scale_fill_gradientn(name = "Canopy Height", colors = terrain.colors(10)) +
  coord_sf()
```


The plot above shows that the full CHM extent (plotted in green) is much larger
than the resulting cropped raster. Our new cropped CHM now has the same extent
as the `aoi_boundary_HARV` object that was used as a crop extent (blue border
below).

```{r view-crop-extent}
ggplot() +
  geom_raster(data = CHM_HARV_Cropped_df,
              aes(x = x, y = y, fill = HARV_chmCrop)) + 
  geom_sf(data = aoi_boundary_HARV, color = "blue", fill = NA) + 
  scale_fill_gradientn(name = "Canopy Height", colors = terrain.colors(10)) + 
  coord_sf()
```

We can look at the extent of all of our other objects for this field site.

```{r view-extent}
st_bbox(CHM_HARV)
st_bbox(CHM_HARV_Cropped)
st_bbox(aoi_boundary_HARV)
st_bbox(plot_locations_sp_HARV)
```


Our plot location extent is not the largest but is larger than the AOI Boundary.
It would be nice to see our vegetation plot locations plotted on top of the
Canopy Height Model information.

Challenge: Crop to Vector Points Extent

  1. Crop the Canopy Height Model to the extent of the study plot locations.

  2. Plot the vegetation plot location points on top of the Canopy Height Model.

Answers

```{r challenge-code-crop-raster-points}
CHM_plots_HARVcrop <- crop(x = CHM_HARV, y = as(plot_locations_sp_HARV, "Spatial"))

CHM_plots_HARVcrop_df <- as.data.frame(CHM_plots_HARVcrop, xy = TRUE)

ggplot() + 
  geom_raster(data = CHM_plots_HARVcrop_df, aes(x = x, y = y, fill = HARV_chmCrop)) + 
  scale_fill_gradientn(name = "Canopy Height", colors = terrain.colors(10)) + 
  geom_sf(data = plot_locations_sp_HARV) + 
  coord_sf()
```

In the plot above, created in the challenge, all the vegetation plot locations
(black dots) appear on the Canopy Height Model raster layer except for one. One is
situated on the blank space to the left of the map. Why?

A modification of the first figure in this episode is below, showing the
relative extents of all the spatial objects. Notice that the extent for our
vegetation plot layer (black) extends further west than the extent of our CHM
raster (bright green). The `crop()` function will make a raster extent smaller; it
will not expand the extent in areas where there are no data. Thus, the extent of our
vegetation plot layer will still extend further west than the extent of our
(cropped) raster data (dark green).

```{r, echo = FALSE}
# code not shown, demonstration only
# create CHM_plots_HARVcrop as a shape file
CHM_plots_HARVcrop_sp <- st_as_sf(CHM_plots_HARVcrop_df, coords = c("x", "y"), crs = utm18nCRS)
# approximate the boundary box with random sample of raster points
CHM_plots_HARVcrop_sp_rand_sample = sample_n(CHM_plots_HARVcrop_sp, 10000)
```

```{r repeat-compare-data-extents, ref.label="compare-data-extents", echo = FALSE}
```


Define an Extent

So far, we have used a shapefile to crop the extent of a raster dataset.
Alternatively, we can also use the `extent()` function to define an extent to be
used as a cropping boundary. This creates a new object of class extent. Here we
will provide the `extent()` function our xmin, xmax, ymin, and ymax (in that
order).

```{r}
new_extent <- extent(732161.2, 732238.7, 4713249, 4713333)
class(new_extent)
```

Data Tip

The extent can be created from a numeric vector (as shown above), a matrix, or a list. For more details see the extent() function help file (?raster::extent).

Once we have defined our new extent, we can use the crop() function to crop our raster to this extent object.

```{r crop-using-drawn-extent}
CHM_HARV_manual_cropped <- crop(x = CHM_HARV, y = new_extent)
```


To plot this data using `ggplot()` we need to convert it to a dataframe. 

```{r}
CHM_HARV_manual_cropped_df <- as.data.frame(CHM_HARV_manual_cropped, xy = TRUE)
```

Now we can plot this cropped data. We will show the AOI boundary on the same plot for scale.

```{r show-manual-crop-area}
ggplot() +
  geom_sf(data = aoi_boundary_HARV, color = "blue", fill = NA) +
  geom_raster(data = CHM_HARV_manual_cropped_df, aes(x = x, y = y, fill = HARV_chmCrop)) +
  scale_fill_gradientn(name = "Canopy Height", colors = terrain.colors(10)) +
  coord_sf()
```


Extract Raster Pixel Values Using Vector Polygons

Often we want to extract values from a raster layer for particular locations -
for example, plot locations that we are sampling on the ground. We can extract all pixel values within 20m of our x,y point of interest. These can then be summarized into some value of interest (e.g. mean, maximum, total).

![Extract raster information using a polygon boundary. From https://www.neonscience.org/sites/default/files/images/spatialData/BufferSquare.png](../images/BufferSquare.png)

To do this in R, we use the `extract()` function. The `extract()` function
requires:

* The raster that we wish to extract values from,
* The vector layer containing the polygons that we wish to use as a boundary or
boundaries,
* Optionally, the argument `df = TRUE`, which stores the output values in a
data frame (the default is to return a list, NOT a data frame).
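The core of a polygon extraction can be sketched in plain Python, here simplified to an axis-aligned rectangle standing in for the polygon (all coordinates and height values are made up):

```python
# Toy raster: pixel-center coordinates mapped to canopy height values
# (coordinates and heights are made up for illustration).
pixels = {(0, 0): 18.0, (1, 1): 22.0, (5, 5): 30.0}

# Toy extraction boundary, simplified to an axis-aligned rectangle.
xmin, ymin, xmax, ymax = 0, 0, 2, 2

# Keep the values of pixels whose centers fall inside the boundary.
inside = [v for (x, y), v in pixels.items()
          if xmin <= x <= xmax and ymin <= y <= ymax]
```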

We will begin by extracting all canopy height pixel values located within our
`aoi_boundary_HARV` polygon which surrounds the tower located at the NEON Harvard
Forest field site.

```{r extract-from-raster}
tree_height <- extract(x = CHM_HARV,
                       y = as(aoi_boundary_HARV, "Spatial"),
                       df = TRUE)

str(tree_height)
```

When we use the extract() function, R extracts the value for each pixel located within the boundary of the polygon being used to perform the extraction - in this case the aoi_boundary_HARV object (a single polygon). Here, the function extracted values from 18,450 pixels.

We can create a histogram of tree height values within the boundary to better understand the structure or height distribution of trees at our site. We will use the HARV_chmCrop column from our data frame as our x values, as this column represents the tree heights for each pixel.

```{r view-extract-histogram}
ggplot() +
  geom_histogram(data = tree_height, aes(x = HARV_chmCrop)) +
  ggtitle("Histogram of CHM Height Values (m)") +
  xlab("Tree Height") +
  ylab("Frequency of Pixels")
```


We can also use the `summary()` function to view descriptive statistics
including min, max, and mean height values. These values help us better
understand vegetation at our field site.

```{r}
summary(tree_height$HARV_chmCrop)
```

Summarize Extracted Raster Values

We often want to extract summary values from a raster. We can tell R the type of summary statistic we are interested in using the `fun` argument. Let’s extract a mean height value for our AOI. Because we are extracting only a single number, we will not use the `df = TRUE` argument.

```{r summarize-extract}
mean_tree_height_AOI <- extract(x = CHM_HARV,
                                y = as(aoi_boundary_HARV, "Spatial"),
                                fun = mean)

mean_tree_height_AOI
```


It appears that the mean height value, extracted from our LiDAR-derived
canopy height model, is 22.43 meters.

Extract Data Using x,y Locations

We can also extract pixel values from a raster by defining a buffer or area
surrounding individual point locations using the `extract()` function. To do this
we define the summary argument (`fun = mean`) and the buffer distance (`buffer = 20`)
which represents the radius of a circular region around each point. By default, the units of the
buffer are the same units as the data's CRS. All pixels that are touched by the buffer region are included in the extract.
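The buffered extract can be sketched in plain Python: keep the pixels whose centers fall within the buffer radius of the point, then summarize them (all coordinates and values are made up for illustration):

```python
import math

# Toy raster: pixel-center coordinates mapped to height values
# (coordinates and heights are made up for illustration).
pixels = {(0, 0): 10.0, (1, 0): 12.0, (3, 3): 30.0}

# Point of interest and buffer radius, in CRS units.
px, py, radius = 0.0, 0.0, 1.5

# Keep pixels whose centers fall within the buffer, then summarize.
inside = [v for (x, y), v in pixels.items()
          if math.hypot(x - px, y - py) <= radius]
mean_height = sum(inside) / len(inside)
```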

![Extract raster information using a buffer region. From: https://www.neonscience.org/sites/default/files/images/spatialData/BufferCircular.png](../images/BufferCircular.png)

Source: National Ecological Observatory Network (NEON).

Let's put this into practice by figuring out the mean tree height in the
20m around the tower location (`point_HARV`). Because we are extracting only a single number, we
will not use the `df = TRUE` argument. 

```{r extract-point-to-buffer}
mean_tree_height_tower <- extract(x = CHM_HARV,
                                  y = as(point_HARV, "Spatial"),
                                  buffer = 20,
                                  fun = mean)

mean_tree_height_tower
```

Challenge: Extract Raster Height Values For Plot Locations

1) Use the plot locations object (plot_locations_sp_HARV) to extract an average tree height for the area within 20m of each vegetation plot location in the study area. Because there are multiple plot locations, there will be multiple averages returned, so the df = TRUE argument should be used.

2) Create a plot showing the mean tree height of each area.

Answers

```{r hist-tree-height-veg-plot}
# extract data at each plot location
mean_tree_height_plots_HARV <- extract(x = CHM_HARV,
                                       y = as(plot_locations_sp_HARV, "Spatial"),
                                       buffer = 20,
                                       fun = mean,
                                       df = TRUE)

# view data
mean_tree_height_plots_HARV

# plot data
ggplot(data = mean_tree_height_plots_HARV, aes(ID, HARV_chmCrop)) +
  geom_col() +
  ggtitle("Mean Tree Height at each Plot") +
  xlab("Plot ID") +
  ylab("Tree Height (m)")
```

Key Points

  • Use the matplotlib Axes object to add multiple layers to a plot.

  • Multi-layered plots can combine raster and vector datasets.


Convert from .csv to a Shapefile in Python FIXME

Overview

Teaching: 40 min
Exercises: 20 min
Questions
  • How can I import CSV files as shapefiles in Python?

Objectives
  • Import .csv files containing x,y coordinate locations as a GeoDataFrame.

  • Export a spatial object to a .geojson file.

FIXME

Key Points

  • Know the projection (if any) of your point data prior to converting to a spatial object.


Intro to Raster Data in Python FIXME

Overview

Teaching: 40 min
Exercises: 20 min
Questions
  • What do I do when vector data don’t line up?

Objectives
  • Plot vector objects with different CRSs in the same plot.

FIXME

Key Points

  • In order to plot two vector data sets together, they must be in the same CRS.

  • Use the GeoDataFrame.to_crs() method to convert between CRSs.


Manipulate Raster Data in Python FIXME

Overview

Teaching: 40 min
Exercises: 20 min
Questions
  • How can I crop raster objects to vector objects, and extract the summary of raster pixels?

Objectives
  • Crop a raster to the extent of a vector layer with earthpy.

  • Extract values from a raster that correspond to a vector file overlay with rasterstats.

FIXME

Key Points


Raster Time Series Data in Python FIXME

Overview

Teaching: 40 min
Exercises: 20 min
Questions
  • How can I view and plot data for different times of the year?

Objectives
  • Understand the format of a time series raster dataset.

  • Work with time series rasters.

  • Import a set of rasters stored in a single directory.

  • Create a multi-paneled plot.

  • Convert character data to datetime format.

FIXME

Key Points


Derive Values from Raster Time Series FIXME

Overview

Teaching: 40 min
Exercises: 20 min
Questions
  • How can I calculate, extract, and export summarized raster pixel data?

Objectives
  • Extract summary pixel values from a raster.

  • Save summary values to a .csv file.

  • Plot summary pixel values using pandas.plot().

  • Compare NDVI values between two different sites.

FIXME

Key Points


Create Publication-quality Graphics FIXME

Overview

Teaching: 40 min
Exercises: 20 min
Questions
  • How can I create a publication-quality graphic and customize plot parameters?

Objectives

FIXME

Key Points