The ability of policymakers to monitor the extent and condition of cropland is vital to ensuring food security. Events like the Iowa derecho in August 2020 show just how necessary this capability is: the storm damaged corn crops so severely that the share of the state's crops rated "good or excellent" dropped from 73% to 45% between the beginning and end of August. Even excluding disaster events, the USDA's Census of Agriculture found that US farmland shrank by 2% (14.3 million acres) between 2012 and 2017. Recognizing that loss, and the production ramifications that may follow from it, is important to ensuring domestic availability of crops.
Despite its necessity, identifying agricultural fields and quantifying their extent is a resource- and labor-intensive process. Since the launch of the first Earth observation (EO) satellites in the 1970s, however, this task has become easier. And in the last two decades, the explosion of freely available EO data at medium to fine spatial scales and high temporal resolutions has only expanded the kinds of monitoring possible. With such wide availability, the focus now is on comparing different EO platforms and determining which is best suited to a given task.
Harvest's Dr. Matt Hansen recently co-authored a paper in the journal Science of Remote Sensing seeking to answer this question. Hansen and his co-authors Dr. Xiao-Peng Song (Texas Tech University), Dr. Wenli Huang (Wuhan University), and Dr. Peter Potapov (University of Maryland) compared freely available data from the US and EU: medium-resolution optical data from the Landsat 7 and 8 sensors and the Sentinel-2 sensors; medium-resolution radar imagery from Sentinel-1; and coarse-resolution optical imagery from MODIS. They performed two sets of tests.
The first test found that all sensors were highly accurate when used for crop classification. Even the least accurate input, coarse-resolution MODIS imagery, yielded 92% accuracy for both corn and soybeans, while the other three platforms ranged from 94.8% to 96.8%. Combining all sensors improved classification only marginally, to 97% for both crops.
[Figure: Satellite sensor and image acquisition date of the most important feature in the per-block decision tree classification models, shown for the 60 blocks from the high and medium strata. (a) Most important satellite sensor for mapping soybean. (b) Most important satellite sensor for mapping corn. (c) Most important month for mapping soybean. (d) Most important month for mapping corn.]
The second test looked at how including and excluding different data layers in the classification model affected accuracy. The data layers of interest were the individual optical and radar bands from each sensor, along with the date each image was collected. By examining the individual impact of each band and its acquisition date, the team was able to determine not only which bands were most useful, but also the optimal collection times.
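The idea behind this kind of analysis can be illustrated with a simplified, hypothetical sketch: score each candidate feature, here a (band, month) pair, by the Gini impurity reduction a decision tree would achieve with its best single split. This is not the paper's actual pipeline, and all band names, months, and reflectance values below are synthetic.

```python
# Toy feature-importance ranking via single-split Gini gain.
# Not the paper's method; all data below is synthetic.

def gini(labels):
    """Gini impurity of a list of 0/1 class labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)  # fraction of class 1
    return 2 * p * (1 - p)

def best_split_gain(values, labels):
    """Largest Gini reduction achievable with one threshold on `values`."""
    base = gini(labels)
    best = 0.0
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        best = max(best, base - weighted)
    return best

# Synthetic reflectance samples: 0 = corn, 1 = soybean.
labels = [0, 0, 0, 0, 1, 1, 1, 1]
features = {
    ("SWIR", "July"):      [0.30, 0.28, 0.31, 0.29, 0.12, 0.11, 0.13, 0.10],
    ("NIR", "June"):       [0.40, 0.20, 0.41, 0.19, 0.39, 0.21, 0.38, 0.22],
    ("red_edge", "August"): [0.25, 0.24, 0.26, 0.23, 0.18, 0.27, 0.17, 0.22],
}

# Rank features by how cleanly one split separates the two crops.
ranking = sorted(features, key=lambda f: best_split_gain(features[f], labels),
                 reverse=True)
print(ranking[0])  # → ('SWIR', 'July'), the pair with the cleanest split
```

A full decision tree repeats this scoring at every node and accumulates the gains per feature, which is what produces the per-block importance maps in the figure above.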
This analysis found that soybeans were most accurately classified by Landsat, specifically its two shortwave infrared (SWIR) bands. Corn, on the other hand, was most accurately classified using Sentinel-2's red edge, near infrared (NIR), and SWIR bands. These results show that optical data was more useful for crop identification; however, the authors note that radar data also produced highly accurate results and would in fact be the first choice in frequently cloudy regions. As for timing, corn was identified most accurately in July, while soybean's optimal window spanned July and August.
[Figure: The number of data acquisitions per satellite sensor from June 1 to September 30, 2017–2019, over the conterminous United States. Maps were generated in Google Earth Engine.]
Given these optimal time windows and the rapid phenological changes that crops undergo, accurate crop mapping requires EO data with high temporal resolution. The authors point out that optical imagery can meet this requirement, since Landsat and Sentinel-2 can be combined into a single time series, effectively reducing satellite revisit time to 3–5 days. Current radar EO data, they note, cannot match this temporal resolution, and the authors argue that extending radar temporal resolution is important, especially given radar's capability in cloudy areas.
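The effect of interleaving two constellations can be sketched with hypothetical acquisition calendars: one series on Landsat's roughly 8-day combined (L7+L8) cycle and one on Sentinel-2's roughly 5-day combined (S2A+S2B) cycle. The start dates and counts are illustrative assumptions, not the paper's data.

```python
# Sketch: merging two satellites' acquisition dates shortens the worst
# revisit gap. Dates are hypothetical, chosen only to mimic the ~8-day
# Landsat and ~5-day Sentinel-2 combined cycles.

from datetime import date, timedelta

def series(start, step_days, count):
    """Evenly spaced acquisition dates for one constellation."""
    return [start + timedelta(days=step_days * i) for i in range(count)]

landsat = series(date(2019, 6, 1), 8, 16)    # ~8-day combined L7/L8 cycle
sentinel2 = series(date(2019, 6, 3), 5, 25)  # ~5-day combined S2A/S2B cycle

# Interleave both calendars into one time series.
combined = sorted(set(landsat + sentinel2))
gaps = [(b - a).days for a, b in zip(combined, combined[1:])]

print(max(gaps))  # → 5: no gap in the merged series exceeds 5 days
```

With these assumed calendars, neither constellation alone observes more often than its own cycle, but the merged series never leaves a gap longer than 5 days, which is the intuition behind the 3–5 day figure cited above.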
Overall, the study shows that current, freely available EO data is fully capable of mapping large agricultural zones with high accuracy (~95%). While optical imagery tends to be slightly more accurate than radar imagery (by ~2%), radar's ability to see through cloud cover makes it the preferred data source in cloudy environments. The full open-access paper can be found here.