PLOS One. 2022 Nov 28;17(11):e0277425. doi: 10.1371/journal.pone.0277425

Using Sentinel-1, Sentinel-2, and Planet satellite data to map field-level tillage practices in smallholder systems

Yin Liu 1, Preeti Rao 1,2, Weiqi Zhou 1, Balwinder Singh 3,4, Amit K Srivastava 5, Shishpal P Poonia 3, Derek Van Berkel 1, Meha Jain 1,*
Editor: Jaishanker Raghunathan Nair
PMCID: PMC9704639  PMID: 36441682

Abstract

Remote sensing can be used to map tillage practices at large spatial and temporal scales. However, detecting such management practices in smallholder systems is challenging because field sizes are smaller than the resolution of historically available satellite imagery. In this study we used newer, higher-resolution satellite data from Sentinel-1, Sentinel-2, and Planet to map tillage practices in the Eastern Indo-Gangetic Plains in India. We specifically tested the classification performance of single sensor and multiple sensor random forest models, and the impact of spatial, temporal, or spectral resolution on classification accuracy. We found that when considering a single sensor, the model that used Planet imagery (3 m) had the highest classification accuracy (86.55%) while the model that used Sentinel-1 data (10 m) had the lowest classification accuracy (62.28%). When considering sensor combinations, the model that used data from all three sensors achieved the highest classification accuracy (87.71%), though this model was not statistically different from the Planet only model when considering 95% confidence intervals from bootstrap analyses. We also found that high levels of accuracy could be achieved by only using imagery from the sowing period. Considering the impact of spatial, temporal, and spectral resolution on classification accuracy, we found that improved spatial resolution from Planet contributed the most to improved classification accuracy. Overall, it is possible to use readily-available, high spatial resolution satellite data to map tillage practices of smallholder farms, even in heterogeneous systems with small field sizes.

Introduction

Conventional tillage (CT) is typically used to prepare agricultural fields for planting as it controls weeds, removes most plant residue from the previous crop, and creates a fine seedbed [1]. Yet, CT negatively impacts long-term soil health due to increased erosion, the loss of organic matter, and reduced water retention [2]. Zero tillage (ZT), where seeds are planted directly into untilled fields that typically contain previous crop residue, has been shown to provide agronomic and economic benefits, including minimizing soil disturbance, reducing soil erosion, and maintaining soil cover [3–5]. Globally, the amount of land area under ZT has increased since the 1990s [6], yet quantifying the exact area under ZT has been challenging given that typical methods used to collect such information, such as censuses, are not implemented in all regions of the world due to financial, accessibility, and labor constraints [7]. This is particularly true in smallholder systems, where small-scale studies have suggested that ZT adoption rates have increased steadily in recent years [8]. Understanding ZT adoption in smallholder systems is critically important given that ZT has been shown to be an important way to sustainably intensify cereal production in these systems [9–11]. Remote sensing can offer an alternative and low-cost way to quantify ZT adoption at large spatial and temporal scales.

Numerous detection techniques have been developed to map tillage practices using remote sensing [12, 13], yet these approaches may not be suitable for detecting tillage practices in smallholder systems [14, 15]. This is largely because the size of smallholder fields (< 2 ha) is typically smaller than the spatial resolution of historically-available satellite data that were used in previous studies, such as Landsat (30 m) [14, 16, 17]. Over the last five years, new higher-spatial resolution satellites, such as Sentinel-1 (10 m), Sentinel-2 (10 m), and Planet (3 m), have become available, and studies have shown that these sensors are better able to capture field-level variation of smallholder farms [18–21]. It is possible that these higher resolution sensors may be able to effectively map tillage practices in smallholder systems, yet to our knowledge, no studies exist that have used these higher resolution satellite datasets to map tillage practices in smallholder systems.

Previous studies that have used satellite data to map tillage practices have found that using multiple sensors in classification algorithms can improve accuracy. For example, Azzari et al. (2019) found that combining optical Landsat satellite data and radar Sentinel-1 data led to higher classification accuracies of tillage practices across the United States Midwest, although the contribution of Sentinel-1 data was small. In smallholder systems, it is possible that combining Planet, Sentinel-1, and Sentinel-2 satellite data may improve classification accuracies. Sentinel-1 has the advantage of being less sensitive to water vapor and cloud cover as radar data have the ability to penetrate cloud cover, providing data more regularly through time than optical sensors [22]. This is important in smallholder systems given that they are largely found throughout the tropics with periods of high rainfall and cloud cover [18]. Sentinel-2 imagery has multiple spectral bands that cover the visible near-infrared (VNIR), shortwave-infrared (SWIR), and red-edge wavelengths; red-edge spectral bands are particularly critical for mapping vegetation characteristics [23, 24] and have been found to increase the accuracy of mapping crop residue cover [25, 26]. Planet imagery has higher spatial resolution (3 m) than Sentinel-1 and Sentinel-2 imagery (10 m), which may reduce the effect of mixed pixels at field edges [21, 27].

Though previous studies have shown that classification accuracy of mapping agricultural characteristics, including crop type and yield, can be improved by using images throughout the growing season [19, 28–30], it is possible that only using images during the early part of the growing season may result in high classification accuracies for mapping tillage practices. This is because most spectral and phenological differences are likely to occur at the start of the growing season as fields are prepared and seedlings germinate. Producing accurate maps of tillage practices using only early season imagery could allow for within-season mapping of ZT areas; such information could be important for policy makers and decision-makers who could use such maps in real time [31].

This study examines the ability of Planet, Sentinel-1, and Sentinel-2 imagery to map field-level ZT and CT in smallholder farming systems. We focus our study in the state of Bihar in the Eastern Indo-Gangetic Plains in India, which is a region where ZT adoption has increased over the last decade [32], where field sizes are very small (< 0.3 ha on average), and where farm management practices are heterogeneous [33]. We aim to answer the following questions in this study:

  1. How effectively can single sensor and multiple sensor combinations of Planet, Sentinel-1, and Sentinel-2 map field-level tillage practices of smallholder farms? Which features are the most important for classification accuracy?

  2. Can we use only early season imagery to effectively map tillage practices, which can be used to provide within-season maps of ZT adoption?

  3. Does improved spatial, temporal, or spectral resolution result in higher classification accuracy?

This study is one of the first to examine how well tillage practices can be mapped in smallholder systems using newer high-resolution satellite imagery. While the analyses presented in this paper are specific to our study area in India, it is likely that the broad findings from our study can be used to inform the most effective ways to map tillage practices in other smallholder systems across the globe.

Materials

Study area and ground truth data

The study was conducted in Arrah district, Bihar, India (25.47°N, 84.52°E) in 2017–2018 (Fig 1). Bihar has fertile soil and a large amount of rainfall, but its agricultural productivity is one of the lowest among all Indian states [10]. Although the adoption of ZT technology in Bihar has increased through time, it is still limited because many farmers lack awareness of ZT and do not have access to the machinery required for ZT [10]. While it is difficult to estimate the amount of area under ZT due to a lack of available ground data, surveys suggest that ZT adoption is variable across villages and can range from 0% to 98% of village land area in Bihar [10]. We focused on a 30 km by 70 km region where there was variation in tilling practices, and we collected ground truth data from 20 villages distributed across the study area. This region is predominantly composed of smallholder fields (< 0.3 ha on average), with farms covering over 80% of land area and with over 80% of the region’s population taking part in farming [34]. There are two main cropping seasons in this region, the monsoon (kharif) season, which spans from June to October and is when most farmers plant rice, and the winter (rabi) season, which spans from November to April and is when most farmers plant wheat [19]. Our study only focused on wheat fields planted during the winter cropping season.

Fig 1. Schematic diagram of the study area.


Maps showing (A) the location of our study area in Bihar, (B) our study polygons plotted in Arrah district, Bihar and (C) a zoomed-in view of some of our polygons overlaid on Planet imagery.

Field management across this region is extremely heterogeneous, making it a complex system in which to map tillage practices using one universal algorithm. The sowing dates of wheat vary widely across the study region, ranging from mid-November to early January [35]. Wheat variety planted and input use, including fertilizer and irrigation, are also highly variable, with significant heterogeneity even across neighboring fields [19, 35]. Most farmers clear rice residues prior to sowing wheat, though some farmers leave partial residues in the field. Rice residues are typically removed in this region for several reasons: farmers harvest rice manually, which leaves behind little to no residue; farmers use rice residues as feed; and farmers use older-generation ZT machinery that is less effective at planting wheat seeds within standing rice residues. Given that most CT and ZT fields are cleared of rice residues prior to wheat planting, this makes it additionally challenging to map ZT in this region.

Ground truth data were collected during April to May 2018 by field staff from the Cereal Systems Initiative for South Asia, which is a program under the International Maize and Wheat Improvement Center (CIMMYT-CSISA). The survey collected information on tillage practices, including whether the field was under CT or ZT, and other field management practices (see S1 Table in S1 File for the survey). Example photos of ZT and CT fields are shown in Fig 2. In addition, we collected GPS locations at the four corners and at the center of each field, which were later used to manually digitize field boundaries. We did this by overlaying all GPS points on high-resolution imagery in Google Earth Pro (https://www.google.com/earth/versions/#earth-pro) and manually drawing polygons that connected the four corner GPS locations. We then manually adjusted these polygons to match visible field boundaries that we could see in the high-resolution imagery. We selected the image date within Google Earth Pro that was closest to our time of survey to ensure that visible field boundaries in the imagery were consistent with the field boundaries that we surveyed on the ground. Our survey data were collected from a total of 160 fields, with 65 fields representing ZT and 95 fields representing CT.

Fig 2. Photos of example zero tillage (A) and conventional tillage (B) fields.

Satellite data

We selected the time period for our analysis by considering the timing of cropping cycles in this region as well as the phenologies of ZT and CT fields (Fig 3). Considering the timing of cropping cycles, wheat was planted in our study area from November 11 to December 31, with tillage occurring from mid-October to early November. To ensure that we captured the full range of phenological change when the field was fallow prior to planting wheat, we used October 1st as the first date of our study period. Considering NDVI (Normalized Difference Vegetation Index) phenologies, we observed that NDVI values became similar in both ZT and CT fields after mid-February (Fig 3). Thus, we used March 1st, 2018 as the last date of our study period. We defined the sowing season as October 1 to December 31, since this time period spanned the full range of sowing dates in our dataset, and defined the full season as October 1 to March 1. From October to December, NDVI is higher in ZT fields compared to CT fields (Fig 3), likely because farmers maintain monsoon rice crop residue on ZT fields but not CT fields. This is because ZT machinery allows farmers to plant wheat seeds into existing rice stubble, whereas in CT fields rice residue is removed by harvesting or by being incorporated into the soil through tilling.

Fig 3. Phenology curves generated from averaged NDVI values of all ZT (red) and CT (blue) fields for the 2017–18 growing season using Sentinel-2 imagery.


The drop in NDVI values from October to December represents the end of the rice crop, and captures senescence, harvest, and the removal of residues. The increase in NDVI values from December until March represents wheat biomass growth after planting.

To analyze the effectiveness of higher-resolution, readily-available satellite imagery for detecting tillage practices in smallholder systems, we used images from three different satellite sensors: 1) Synthetic Aperture Radar (SAR) Sentinel-1 [36], 2) multi-spectral Sentinel-2 [37], and 3) multi-spectral PlanetScope [38]. We obtained Sentinel-1 and Sentinel-2 data through Google Earth Engine (GEE) [39] and Planet imagery through the Planet API [40].

We obtained 13 Sentinel-1 C-band Level-1 Ground Range Detected images during our study period (Table 1). They were acquired on a descending orbit in Interferometric Wide swath mode (IW). Prior to ingestion into GEE, the data were preprocessed using the Sentinel-1 Toolbox [41]. Since speckle filtering was not done prior to ingestion, we implemented speckle filtering using the Refined Lee speckle filter code developed by Guido Lemoine (https://code.earthengine.google.com/2ef38463ebaf5ae133a478f173fd0ab5) and converted backscatter values to decibels using a logarithmic transformation. The bands and indices we used from Sentinel-1 are shown in Table 2. The intensity cross-ratio (CR) VV/VH is included as previous studies have shown its utility for differentiating vegetation types [42]. We resampled all Sentinel-1 images to 3 m resolution to match fine-scale Planet data using bilinear interpolation in GEE.
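The decibel conversion and cross-ratio are simple arithmetic on the backscatter intensities. A minimal numpy sketch (the paper performed these steps in GEE; the VV and VH values below are hypothetical):

```python
import numpy as np

# Hypothetical speckle-filtered VV and VH backscatter intensities
# (linear power units) for a tiny 2 x 2 patch.
vv = np.array([[0.08, 0.12], [0.05, 0.10]])
vh = np.array([[0.02, 0.03], [0.01, 0.02]])

# Logarithmic transformation to decibels: sigma0_dB = 10 * log10(sigma0)
vv_db = 10.0 * np.log10(vv)
vh_db = 10.0 * np.log10(vh)

# Cross-ratio CR = log ratio of VV/VH, i.e. a difference in dB space
cr = vv_db - vh_db
print(cr[0, 0])  # 10 * log10(0.08 / 0.02) ≈ 6.02 dB
```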

Table 1. List of acquisition dates of satellite images used in our study.

| Season | Sentinel-1 | Sentinel-2 | Planet |
|---|---|---|---|
| Sowing season | 10/03/2017, 10/15/2017, 10/27/2017, 11/08/2017, 11/20/2017, 12/02/2017, 12/14/2017, 12/26/2017 | 10/08/2017, 10/23/2017, 10/28/2017, 11/12/2017, 11/22/2017, 12/02/2017, 12/12/2017 | 10/08/2017, 10/13/2017, 10/15/2017, 10/24/2017, 11/01/2017, 11/04/2017, 11/07/2017, 11/13/2017, 11/18/2017, 12/08/2017, 12/14/2017 |
| Peak season | 01/07/2018, 01/19/2018, 01/31/2018, 02/12/2018, 02/24/2018 | 01/31/2018, 02/05/2018, 02/15/2018, 02/20/2018 | 01/23/2018, 02/03/2018, 02/11/2018, 02/21/2018, 02/27/2018 |

Table 2. Band and index information for the three sensors used in this study.

| Sensor | Band / Index | Description (mean wavelength: μm) or formula | Reference |
|---|---|---|---|
| Sentinel-1 | VV | vertical transmit / vertical receive | |
| | VH | vertical transmit / horizontal receive | |
| | CR | log ratio of VV/VH | [42] |
| Sentinel-2 | B2 | Blue (0.490) | |
| | B3 | Green (0.560) | |
| | B4 | Red (0.665) | |
| | B5 | Red Edge 1 (0.705) | |
| | B6 | Red Edge 2 (0.740) | |
| | B7 | Red Edge 3 (0.783) | |
| | B8 | NIR (0.842) | |
| | B8A | Red Edge 4 (0.865) | |
| | B11 | SWIR 1 (1.610) | |
| | B12 | SWIR 2 (2.190) | |
| | NDTI | (SWIR1 - SWIR2) / (SWIR1 + SWIR2) | [48] |
| | CRC | (SWIR1 - Green) / (SWIR1 + Green) | [49] |
| | NDVI | (NIR - Red) / (NIR + Red) | [50] |
| | GCVI | (NIR / Green) - 1 | [51] |
| | OSAVI | (NIR - Red) / (NIR + Red + 0.16) | [52] |
| | NDI5 | (NIR - SWIR1) / (NIR + SWIR1) | [48] |
| | NDI7 | (NIR - SWIR2) / (NIR + SWIR2) | [48] |
| | STI | SWIR1 / SWIR2 | [53] |
| Planet | B1 | Blue (0.485) | |
| | B2 | Green (0.545) | |
| | B3 | Red (0.630) | |
| | B4 | NIR (0.820) | |
| | GCVI | (NIR / Green) - 1 | [51] |
| | OSAVI | (NIR - Red) / (NIR + Red + 0.16) | [52] |
| | NDVI | (NIR - Red) / (NIR + Red) | [50] |

We obtained 11 Sentinel-2 Level-1C Top-Of-Atmosphere (TOA) scenes from GEE for our defined study period (Table 1). We only selected images that had less than 10% cloud cover, and visually inspected all selected images to ensure that there was no cloud cover over our field polygons. We then applied surface reflectance (SR) correction to all tiles in GEE using the radiative transfer emulator Second Simulation of the Satellite Signal in the Solar Spectrum (6S) [43] and code from Sam Murphy [44] to derive the surface reflectance values for the visible (blue, green, and red at 10 m), NIR (10 m), red-edge (20 m), and SWIR (20 m) bands. The 6S algorithm generates interpolated look-up tables (LUTs) under different atmospheric conditions considering solar zenith, ozone, and surface altitude. These LUTs are then used to calculate atmospheric correction coefficients which convert TOA radiance to SR. The bands and indices that we used from Sentinel-2 are shown in Table 2. We omitted B1, B9 and B10 because these bands represent atmospheric features, including aerosols, water vapor, and cirrus, and are not measures of the surface reflectance of land features. Eight common spectral indices were included as they have been shown to help differentiate vegetation and/or tillage practices in the previous literature (Table 2). All Sentinel-2 images and bands (10 m and 20 m) were resampled to 3 m resolution to match fine-scale Planet data using bilinear interpolation in GEE. We used bilinear interpolation as it has been shown to better preserve distributions of band values compared to other common resampling approaches [45].
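The Table 2 indices are simple band arithmetic on surface reflectance values. A minimal Python sketch (the paper computed these in GEE; the function name and reflectance values here are ours for illustration):

```python
import numpy as np

def s2_indices(green, red, nir, swir1, swir2):
    """Compute the Table 2 spectral indices from Sentinel-2 band values
    (surface reflectance, 0-1). Works on scalars or numpy arrays."""
    return {
        "NDTI":  (swir1 - swir2) / (swir1 + swir2),
        "CRC":   (swir1 - green) / (swir1 + green),
        "NDVI":  (nir - red) / (nir + red),
        "GCVI":  (nir / green) - 1.0,
        "OSAVI": (nir - red) / (nir + red + 0.16),
        "NDI5":  (nir - swir1) / (nir + swir1),
        "NDI7":  (nir - swir2) / (nir + swir2),
        "STI":   swir1 / swir2,
    }

# Hypothetical reflectance values for a single vegetated pixel
idx = s2_indices(green=0.08, red=0.06, nir=0.40, swir1=0.20, swir2=0.12)
print(round(idx["NDVI"], 3))   # (0.40 - 0.06) / (0.40 + 0.06) ≈ 0.739
print(round(idx["GCVI"], 1))   # 0.40 / 0.08 - 1 = 4.0
```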

We obtained 16 low-cloud Level-3B surface reflectance Planet images (Table 1) that had been atmospherically corrected by Planet using the 6S radiative transfer model with ancillary data from MODIS [38]. We defined low-cloud images as those with less than 5% cloud cover, and we further visually inspected all filtered images to ensure that there was no cloud cover over our field polygons. All individual image tiles were mosaicked using color matching of the overlapping regions using the raster package [46] in R Project Software [47]. We overlaid all Planet imagery over Sentinel-2 imagery and used visual inspection to ensure the georeferencing of these two datasets matched as previous studies have found that early iterations of Planet data needed additional georeferencing [19]. We found that the images aligned well and did not need additional georeferencing. Previous studies have shown that there is still noise remaining in the Planet surface reflectance corrected data downloaded directly from Planet [19, 20]. Thus, we conducted an additional correction by histogram stretching Planet data to match Sentinel-2 imagery using methods from Jain et al. [19] (S1 Fig). The bands and indices that we used from Planet are shown in Table 2. We computed the same indices as those calculated from Sentinel-2 using the red, green, and NIR bands (Table 2).
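To illustrate the idea behind this correction, a simplified linear histogram stretch that matches the mean and standard deviation of a Planet band to the corresponding Sentinel-2 band might look like the following. This is a stand-in sketch, not the exact procedure of Jain et al. [19], and all data here are synthetic:

```python
import numpy as np

def linear_histogram_match(src, ref):
    """Linearly rescale src so its mean and standard deviation match ref.
    A simplified stand-in for the histogram-stretching step in the paper."""
    return (src - src.mean()) / src.std() * ref.std() + ref.mean()

rng = np.random.default_rng(0)
planet_band = rng.normal(0.30, 0.05, 10_000)     # hypothetical Planet red band
sentinel_band = rng.normal(0.25, 0.04, 10_000)   # hypothetical Sentinel-2 red band

matched = linear_histogram_match(planet_band, sentinel_band)
print(round(matched.mean(), 2), round(matched.std(), 2))  # ≈ 0.25 0.04
```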

Methods

Sampling strategy, feature selection, and model development

We used random forest (RF), an ensemble-based algorithm, to classify ZT versus CT fields. Previous studies have shown that random forest often leads to high classification accuracies with less computation time compared to other active learning models, such as support vector machines [54]. For training and validation, we used 70% of the field polygons as our training data and 30% of the field polygons as our validation data. These validation polygons were completely separate from those used for training and provided an independent source of data for validation. To ensure even representation in our training dataset regardless of field size, we randomly sampled twenty pixels from each field; for fields that were smaller than twenty pixels, we considered all available pixels within that field. In addition, we sampled an equal ratio of ZT to CT fields in both our training and validation datasets. To reduce the effect of multicollinearity on our analyses given the large number of features considered in our models, we removed highly correlated features (r > 0.9) using the caret package [55] in R Project Software. We set the number of trees for RF parameters as 500 and the number of features as √p, where p equals the number of features in the dataset. All RF classifier operations were run using the package randomForest [56] in R Project Software. The input datasets for all seven models for sowing season and full season analyses are shown in Table 3.
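A minimal sketch of the feature-selection and classifier setup. The paper used the caret and randomForest packages in R, so this scikit-learn version with toy data is only an illustrative equivalent; the greedy correlation filter below approximates (but is not identical to) caret::findCorrelation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def drop_correlated(X, threshold=0.9):
    """Greedily keep each feature only if its |r| with every already-kept
    feature is <= threshold (simplified stand-in for caret::findCorrelation)."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return X[:, keep], keep

# Toy data: 200 samples, 4 features; feature 1 nearly duplicates feature 0
rng = np.random.default_rng(42)
base = rng.normal(size=(200, 3))
X = np.column_stack([base[:, 0],
                     base[:, 0] * 2.0 + 1e-6 * rng.normal(size=200),
                     base[:, 1],
                     base[:, 2]])
y = (X[:, 0] + X[:, 2] > 0).astype(int)   # toy ZT (1) vs CT (0) labels

X_sel, kept = drop_correlated(X)
print(kept)  # → [0, 2, 3]: the near-duplicate feature 1 is removed

# 500 trees and sqrt(p) features per split, as in the paper's RF settings
rf = RandomForestClassifier(n_estimators=500, max_features="sqrt", random_state=0)
rf.fit(X_sel, y)
```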

Table 3. Feature components of different sensor and sensor combinations and full and sowing season models.

| Model | Sensor & Sensor Combinations | No. of Features | No. of Selected Features | Feature Components |
|---|---|---|---|---|
| Full Model | Sentinel-1 | 39 | 15 | (2 bands + 1 index) × 13 dates |
| | Sentinel-2 | 198 | 56 | (10 bands + 8 indices) × 11 dates |
| | Planet | 112 | 34 | (4 bands + 3 indices) × 16 dates |
| | Sentinel-1 + Sentinel-2 | 237 | 61 | (2 bands + 1 index) × 13 dates + (10 bands + 8 indices) × 11 dates |
| | Sentinel-1 + Planet | 151 | 45 | (2 bands + 1 index) × 13 dates + (4 bands + 3 indices) × 16 dates |
| | Sentinel-2 + Planet | 310 | 77 | (10 bands + 8 indices) × 11 dates + (4 bands + 3 indices) × 16 dates |
| | Sentinel-1 + Sentinel-2 + Planet | 349 | 88 | (2 bands + 1 index) × 13 dates + (10 bands + 8 indices) × 11 dates + (4 bands + 3 indices) × 16 dates |
| Sowing Model | Sentinel-1 | 24 | 11 | (2 bands + 1 index) × 8 dates |
| | Sentinel-2 | 126 | 38 | (10 bands + 8 indices) × 7 dates |
| | Planet | 77 | 25 | (4 bands + 3 indices) × 11 dates |
| | Sentinel-1 + Sentinel-2 | 150 | 41 | (2 bands + 1 index) × 8 dates + (10 bands + 8 indices) × 7 dates |
| | Sentinel-1 + Planet | 101 | 30 | (2 bands + 1 index) × 8 dates + (4 bands + 3 indices) × 11 dates |
| | Sentinel-2 + Planet | 203 | 49 | (10 bands + 8 indices) × 7 dates + (4 bands + 3 indices) × 11 dates |
| | Sentinel-1 + Sentinel-2 + Planet | 222 | 55 | (2 bands + 1 index) × 8 dates + (10 bands + 8 indices) × 7 dates + (4 bands + 3 indices) × 11 dates |

We evaluated our model using bootstrap analysis for 400 iterations, as previous work has shown this leads to results with a 95% confidence level and avoids a power loss of more than 1% [57]. Given that each bootstrap iteration produces a different set of selected features, we averaged results across all 400 models (the average number of selected features is shown in Table 3). To calculate variable importance, we averaged the mean decrease in accuracy for each variable across all 400 models, ranked variables from greatest to smallest value, and report the top five most important features.
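The accuracy confidence intervals can be sketched with a simple percentile bootstrap. This is a simplified illustration with synthetic validation results; the paper bootstrapped full model runs, not a fixed vector of predictions:

```python
import numpy as np

def bootstrap_ci(correct, n_boot=400, seed=0):
    """Bootstrap overall accuracy over n_boot resamples and return the
    mean accuracy and a 95% percentile confidence interval.
    `correct` is a 0/1 vector of per-sample classification hits."""
    rng = np.random.default_rng(seed)
    n = len(correct)
    accs = np.array([rng.choice(correct, size=n, replace=True).mean()
                     for _ in range(n_boot)])
    lo, hi = np.percentile(accs, [2.5, 97.5])
    return accs.mean(), lo, hi

# Hypothetical validation results: 87 of 100 pixels classified correctly
correct = np.array([1] * 87 + [0] * 13)
mean_acc, lo, hi = bootstrap_ci(correct)
print(round(mean_acc, 2))  # ≈ 0.87
```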

Impact of spatial, temporal, and spectral resolution

To better understand the effect of improved spatial, temporal, or spectral resolution on classification accuracies, we examined the individual contribution of each in our models. First, to assess the contribution of spatial resolution on classification accuracies, we resampled the spatial resolution of Planet (3 m) to 10 m to match the spatial resolution of Sentinel-2 imagery using bilinear interpolation in GEE. We reran our single sensor Planet model using the aggregated, coarser resolution (10 m) data, and compared model results with those from the model using the original Planet data (3 m). Second, to identify the impact of improved temporal resolution on classification accuracy, we reduced the number of images used in our Planet analysis to only those dates that were similar to those available with Sentinel-2 data (Table 1). We then reran our single sensor Planet model using these limited dates (7 dates), and compared the results of this model with those from the original Planet model that included all available image dates (11 dates). Finally, to assess the impact of increased spectral information on classification accuracies, we reduced the number of bands and indices used in the single sensor Sentinel-2 model to match those used in the single sensor Planet model (Table 2). We then reran our single sensor Sentinel-2 model using these limited spectral bands and indices (7 bands and indices), and compared the results of this model with those from the original Sentinel-2 model that included all available bands and indices (18 bands and indices). We focused only on the sowing period for running these comparison models.
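The spatial-resolution experiment amounts to coarsening the Planet imagery before classification. A simplified block-averaging sketch (the paper used bilinear interpolation in GEE; an integer factor is used here for illustration, since 3 m to 10 m is not an exact multiple):

```python
import numpy as np

def block_aggregate(arr, factor):
    """Aggregate a 2-D array to coarser resolution by averaging
    factor x factor blocks (a simple stand-in for resampling)."""
    h, w = arr.shape
    h2, w2 = h // factor, w // factor
    trimmed = arr[:h2 * factor, :w2 * factor]
    return trimmed.reshape(h2, factor, w2, factor).mean(axis=(1, 3))

fine = np.arange(36, dtype=float).reshape(6, 6)   # hypothetical 3 m band
coarse = block_aggregate(fine, 3)                  # ~3x coarser pixels
print(coarse.shape)  # (2, 2)
print(coarse[0, 0])  # mean of the top-left 3x3 block = 7.0
```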

Results

Identifying which sensor or sensor combination results in the highest accuracies and variable importance

Table 4 shows mean overall accuracy and 95% confidence intervals from each of our models for the full study period. Considering which sensor and sensor combinations led to the highest classification accuracy (research question 1), we found that for single sensor models, Planet led to the best performing model. The Planet model obtained accuracies that were 3–5% higher than the next best performing single sensor model that used Sentinel-2 data. Findings indicated that the model that used only Sentinel-1 data performed poorly, with accuracies at least 20% lower than models using Sentinel-2 or Planet. Combining Sentinel-2 data and Planet data led to higher accuracies than individual sensor models, though this two-sensor model had an increase in accuracy of only 1% compared to the Planet model. Adding Sentinel-1 data did little to improve classification accuracy, and in many cases reduced overall accuracy compared to individual sensor models that used Sentinel-2 or Planet imagery, similar to Azzari et al. [14]. The highest classification accuracy in both the sowing period and full period models was obtained with the three-sensor model. The accuracies of the three sensor models, however, were only 1% better than the single sensor model that used Planet data. It is important to note that when considering 95% confidence intervals, many of the differences between the best performing single sensor and multiple sensor models were not significant. Most importantly, the Planet single sensor model performed similarly to most of the two sensor models and the three sensor model. The Planet single sensor model, however, did outperform the single sensor Sentinel-1 and Sentinel-2 models, especially for the sowing season models.

Table 4. Classification results and 95% confidence intervals for single and multiple sensor models for the full study period (Oct—Mar).

| Sensor & Sensor Combinations | Overall Accuracy | Width of 95% Confidence Interval |
|---|---|---|
| Sentinel-1 | 62.28% | 1.69% |
| Sentinel-2 | 83.24% | 2.37% |
| Planet | 86.55% | 1.77% |
| Sentinel-1 + Sentinel-2 | 82.65% | 1.86% |
| Sentinel-1 + Planet | 86.39% | 1.81% |
| Sentinel-2 + Planet | 87.61% | 1.85% |
| Sentinel-1 + Sentinel-2 + Planet | 87.71% | 1.88% |

Table 5 shows the features that ranked in the top five most important of all features considered for each sensor and sensor combination for the full study period. For the models using PlanetScope data, band 1 (blue) from October 8th was always the most important feature, followed by band 1 from October 15th and GCVI from October 15th. Considering models built using Sentinel-2 data, important features ranged throughout the growing season, from early October to the end of January. Band 2 (blue) appeared to be the most common band ranked in the top five across all models that included Sentinel-2. Regarding the Sentinel-1 models, images similarly spanned the length of the growing season, from mid-November until late January. When incorporating multiple sensor data into classification models, Planet bands were largely selected as the top most important variables when the sensor was included in multi-sensor models. Bands from Sentinel-2 were the second most frequently selected important variables, though they were always less important than Planet variables when both sensors were included. Bands from Sentinel-1 never appeared in the top most important variables in multi-sensor models.

Table 5. Top five importance features for single and multiple sensor models for the full study period (Oct—Mar).

| Rank | S1 | S2 | PS | S1+PS | S2+PS | S1+S2 | PS+S1+S2 |
|---|---|---|---|---|---|---|---|
| 1 | 1120_VH | 0131_B2 | 1008_B1 | PS_1008_B1 | PS_1008_B1 | S2_0131_B2 | PS_1008_B1 |
| 2 | 0131_VH | 1008_B2 | 1015_B1 | PS_1008_GCVI | PS_1015_B1 | S2_1112_B5 | PS_1015_B1 |
| 3 | 1202_VH | 1023_GCVI | 1008_GCVI | PS_1015_B1 | PS_1008_GCVI | S2_1023_GCVI | PS_1008_GCVI |
| 4 | 0131_VV | 1112_B5 | 1104_B4 | PS_1015_GCVI | S2_0131_B1 | S2_1008_B2 | S2_0131_B2 |
| 5 | 1120_CR | 0131_CRC | 1113_B1 | PS_1113_B1 | S2_1008_B2 | S2_0131_CRC | PS_1015_GCVI |

The accuracy of using only early season imagery

Considering whether using only images from the sowing period could lead to high classification accuracies (research question 2), we found that models that used only image dates from the sowing period obtained accuracies that were very similar to the full season model, usually within a 1% difference that was smaller than the 95% confidence intervals (Table 6). This was especially true for the single or multiple sensor models that included Planet data. The biggest differences between the sowing versus full period models were seen for models that used Sentinel-2 data, with overall accuracies decreasing by approximately 3% for the sowing date model compared to the full model. These results suggest that using only images during the sowing season is as effective as using images throughout the growing season for mapping tillage practices in this region.

Table 6. Classification results and 95% confidence intervals for single and multiple sensor models for the sowing study period (Oct—Dec).

| Sensor & Sensor Combinations | Overall Accuracy | Width of 95% Confidence Interval |
|---|---|---|
| Sentinel-1 | 61.24% | 1.79% |
| Sentinel-2 | 80.80% | 2.42% |
| Planet | 85.78% | 1.76% |
| Sentinel-1 + Sentinel-2 | 79.95% | 1.99% |
| Sentinel-1 + Planet | 86.03% | 1.88% |
| Sentinel-2 + Planet | 86.93% | 1.93% |
| Sentinel-1 + Sentinel-2 + Planet | 86.84% | 1.90% |

The impact of improved spatial, temporal, and spectral resolution on classification accuracies

Finally, considering the impact of improved spatial, temporal, and spectral resolution (research question 3), we found that the model that used Planet satellite data aggregated to 10 m resolution led to a reduction in accuracy of 4.5% compared to the original Planet model using 3 m resolution data (Table 7). This result suggests that improved spatial resolution (3 m vs 10 m) moderately increases the accuracy of mapping field-level tillage practices. With regard to temporal resolution, we found that the model that used 7 Planet scenes had a reduction in accuracy of approximately 1% compared to the Planet model that used all 11 available scenes (Table 7). This result shows that increased temporal resolution from Planet does little to improve classification accuracy. Finally, with respect to spectral resolution, we found that the model that used Sentinel-2 data with only the bands and indices available with Planet led to a reduction in accuracy of 0.5% compared to the original Sentinel-2 model (Table 7). This result suggests that the increased spectral resolution of Sentinel-2 does not play a significant role in mapping field-level tillage practices. Considering 95% confidence intervals, only the difference due to a change in spatial resolution (10 m vs 3 m) was significant.

Table 7. Classification results and 95% confidence intervals for low and high spatial, temporal, and spectral resolution models.

| Changing Resolution | Low Resolution Model Overall Accuracy | Width of 95% Confidence Interval | High Resolution Model Overall Accuracy | Width of 95% Confidence Interval |
|---|---|---|---|---|
| Spatial | 81.32% (Planet images aggregated to 10 m) | 2.33% | 85.78% (Planet images at 3 m) | 1.76% |
| Temporal | 84.55% (7 Planet scenes) | 1.84% | 85.78% (11 Planet scenes) | 1.76% |
| Spectral | 80.17% (Sentinel-2 images with 7 bands and indices) | 2.33% | 80.80% (Sentinel-2 images with 18 bands and indices) | 2.42% |

Discussion

Our study examined which satellite sensors, sensor combinations, and time periods resulted in the highest classification accuracies for mapping tillage practices on smallholder farms in the Eastern Indo-Gangetic Plains in India. We found that models that included Planet data led to the highest classification accuracies, and models that included Sentinel-1 led to the lowest. Though previous studies have found that using multiple sensors can lead to more accurate classification of tillage practices [14, 58], our results showed that the best-performing two-sensor and three-sensor models improved accuracies by only approximately 1 percentage point compared to models that used Planet data alone, and this difference was not significant considering 95% confidence intervals. This suggests that for smallholder farms, Planet data alone may be able to effectively map tillage practices, at least during dry growing seasons with limited cloud cover such as the one in our study. Considering time periods, models built using data from the sowing period were as effective as models that used data from throughout the growing season. This suggests that tillage practices can be mapped with high accuracy shortly after sowing has ended, making it possible to produce real-time, within-season maps of zero-tillage practices at scale. Our results broadly show that tillage practices can be mapped with high accuracy (> 86%) using relatively new, high-resolution, readily available satellite imagery, even in heterogeneous, smallholder systems.

We believe the main reason Planet imagery performed better than imagery from the other sensors is its higher spatial resolution: our analyses of the individual effects of improved spatial, temporal, and spectral resolution found that reduced spatial resolution led to the greatest change in model accuracy (4.5 percentage points compared to ~1). Spatial resolution is likely important for model accuracy because, given the small size of fields within our study region (< 0.3 ha), Planet's 3 m resolution produces fewer mixed pixels than the coarser 10 m Sentinel-2 imagery (Fig 4; S2 Fig). Interestingly, even though Planet also has improved temporal coverage compared to Sentinel-2, this increased temporal availability did not substantially increase accuracy (~1 percentage point). Sentinel-2 data led to models with moderate accuracy, roughly 5 percentage points lower than Planet models. Overall, we found that Sentinel-1 led to low classification accuracies and did little to improve multi-sensor model accuracies. Although Sentinel-1 provides information complementary to optical data, such as surface moisture and roughness, optical data are much better able to discriminate between zero and conventional tillage. Models that included Sentinel-1 imagery therefore likely had reduced accuracy compared to optical-only models (Table 4) because less informative radar features were selected at some tree nodes. These results are similar to those of Azzari et al. [14], who mapped tillage practices across the United States Midwest.
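The mixed-pixel effect described above can be illustrated numerically. In this hypothetical sketch (the reflectance values and field widths are invented for illustration, and a 5-pixel block average stands in for resampling 3 m pixels into a 10 m footprint), aggregating a fine-resolution image across a field boundary produces coarse pixels whose values match neither field:

```python
import numpy as np

def block_mean(img, factor):
    """Aggregate a fine-resolution image to a coarser grid by averaging
    non-overlapping factor x factor blocks -- a crude stand-in for a
    coarser sensor footprint covering several small-field pixels."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Two adjacent fields at fine resolution: a residue-covered ZT field
# (red reflectance 0.12, assumed) beside a bare-soil CT field (0.25, assumed).
scene = np.hstack([np.full((10, 12), 0.12), np.full((10, 8), 0.25)])

coarse = block_mean(scene, 5)  # 2 x 4 coarse grid
# Coarse pixels fully inside a field keep its value, but the pixel
# straddling the boundary averages the two covers:
# (2 * 0.12 + 3 * 0.25) / 5 = 0.198, a "mixed" value matching neither field.
```

With fields under 0.3 ha, a large share of 10 m pixels straddle boundaries in this way, which is consistent with the accuracy loss observed when Planet imagery was aggregated to 10 m.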

Fig 4.

Fig 4

The boundaries of two field polygons overlaid on a (A) Planet image (3 m) and (B) Sentinel-2 image (10 m). Sentinel imagery was freely downloaded from the Copernicus Open Access Hub (https://scihub.copernicus.eu/).

We also found that models that relied on data from the sowing season had accuracies similar to models that used data from the full study period. The importance of early-season imagery was also evident in variable importance estimates for the full growing season (Table 5); particularly for models that used Planet, early-season imagery from October and early November provided the most important variables. This suggests that the factors most important for distinguishing between ZT and CT occur during the field-preparation and sowing periods. Mechanistically this makes sense, given that ZT fields in this region are often covered in crop residue while CT fields are bare: under ZT, farmers do not till the soil and can plant wheat seeds directly into the remaining rice residue from the previous season. This residue may lead to higher NDVI values in ZT fields than in CT fields (Fig 3) due to remaining green vegetated biomass from the prior rice harvest [59]. Furthermore, we found that the blue bands from Planet and Sentinel-2 were often the most important predictors, likely because data from blue wavelengths have been shown to effectively detect soil properties [60] and distinguish between soil and vegetation cover [61].
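NDVI, referenced above, is computed from the red and near-infrared bands; residual green biomass in ZT fields raises NIR reflectance relative to red. A small sketch (the reflectance values below are invented for illustration and are not taken from the study's data):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red).
    Ranges from -1 to 1; greener surfaces score higher."""
    return (nir - red) / (nir + red)

# Hypothetical early-season surface reflectances:
ndvi_zt = ndvi(nir=0.35, red=0.14)  # residue plus residual green biomass
ndvi_ct = ndvi(nir=0.28, red=0.20)  # freshly tilled bare soil
# The ZT field shows the higher NDVI, consistent with the pattern in Fig 3.
```

This early-season NDVI separation is one plausible reason sowing-period imagery alone was sufficient for accurate classification.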

When interpreting our findings, several considerations should be noted. First, we predicted only a binary variable of ZT versus CT, rather than a continuous variable representing tillage intensity. In reality, farmers who practice CT manage their fields heterogeneously, varying the number of times they till. Previous studies have found that it is possible to accurately classify the tillage intensity of large-scale farms [14], and future work should explore whether this is also possible in smallholder systems. Second, we conducted our study during the largely dry winter growing season, which has limited cloud cover compared to India's main growing season during the monsoon. The sensor or sensors that lead to the highest classification accuracies may differ in cloudier seasons, when optical imagery is more limited. Previous studies, for example, have shown that Sentinel-1 becomes more important for improving classification accuracies during periods of high cloud cover when optical imagery is unavailable [62]. This is largely because studies have shown that Sentinel-1 C-band data can appropriately detect vegetation phenologies across a wide range of land-cover types [63, 64], and our data suggest that ZT and CT fields have distinct vegetation phenologies (Fig 3), particularly early in the growing season. Third, our conclusions are based on the results of only one classification model, random forest, and future studies should assess whether other classification models can lead to improved accuracies. Finally, our study is limited in spatial and temporal scale: we applied our analysis to only one cropping system (rice-wheat), in one year (2017–18), and in one region (Arrah district, Bihar). Future work should examine how generalizable our findings are to other rice-wheat cropping systems in India and across multiple years; a recent study has shown that Sentinel-2 can be used to accurately map tillage practices in rice-wheat systems across Northern India over multiple years [65]. More broadly, future work should assess how generalizable our findings are to different smallholder farming systems with different cropping patterns in other parts of the world.

Conclusions

In this study, we assessed the ability of three readily-available, high-resolution sensors (Planet, Sentinel-2, and Sentinel-1) to detect zero tillage and conventional tillage across smallholder farming systems in the Eastern Indo-Gangetic Plains in India. We found that it is possible to use readily-available, high spatial resolution satellite data to map the tillage practices of smallholder farms. In particular, Planet satellite data resulted in models with high classification accuracy (> 86%), and including data from additional sensors did little to improve accuracies. Tillage practices can also be mapped effectively using only data from the sowing period, suggesting that real-time, within-season maps of tillage can be produced at scale. Our work highlights the important role of micro-satellite data in mapping agricultural characteristics of smallholder farms, which is promising given that the temporal resolution of such imagery is expected to increase over the coming years as additional satellites are launched.

Supporting information

S1 Fig

Histograms for Sentinel-2 (shown as blue), raw PlanetScope (shown as red), and histogram-matched PlanetScope (shown as grey) for images from October 8, 2017 for the (a) Blue, (b) Green, (c) Red, and (d) NIR bands.

(TIF)

S2 Fig. RGB images of Sentinel-2 and PlanetScope with some ZT/CT fields overlaid on imagery from October 8, 2017, and a VH-band Sentinel-1 image from November 8, 2017.

(TIF)

S1 File. Survey conducted with farmers in the study.

(DOCX)

Acknowledgments

We thank the CSISA-CIMMYT field team who collected the field polygon and survey data in Bihar.

Data Availability

All minimal data sets are included in the Zenodo data repository (https://doi.org/10.5281/zenodo.6703973).

Funding Statement

This study was supported by the National Aeronautics and Space Administration Land Cover and Land Use Change Program through a grant awarded to MJ (Grant Number: NNX17AH97G).

References

  • 1.Conservation Agriculture; García-Torres L., Benites J., Martínez-Vilela A., Holgado-Cabrera A., Eds.; Springer Netherlands: Dordrecht, 2003; ISBN 978-90-481-6211-6. [Google Scholar]
  • 2.Ndoli A.; Baudron F.; Sida T.S.; Schut A.G.T.; van Heerwaarden J.; Giller K.E. Conservation Agriculture with Trees Amplifies Negative Effects of Reduced Tillage on Maize Performance in East Africa. Field Crops Research 2018, 221, 238–244, doi: 10.1016/j.fcr.2018.03.003 [DOI] [Google Scholar]
  • 3.Keil A.; Mitra A.; McDonald A.; Malik R.K. Zero-Tillage Wheat Provides Stable Yield and Economic Benefits under Diverse Growing Season Climates in the Eastern Indo-Gangetic Plains. International Journal of Agricultural Sustainability 2020, 18, 567–593, doi: 10.1080/14735903.2020.1794490 [DOI] [Google Scholar]
  • 4.Mondal S.; Chakraborty D.; Bandyopadhyay K.; Aggarwal P.; Rana D.S. A Global Analysis of the Impact of Zero‐tillage on Soil Physical Condition, Organic Carbon Content, and Plant Root Response. Land Degrad Dev 2020, 31, 557–567, doi: 10.1002/ldr.3470 [DOI] [Google Scholar]
  • 5.Nyborg M.; Malhi S.S. Effect of Zero and Conventional Tillage on Barley Yield and Nitrate Nitrogen Content, Moisture and Temperature of Soil in North-Central Alberta. Soil and Tillage Research 1989, 15, 1–9, doi: 10.1016/0167-1987(89)90059-7 [DOI] [Google Scholar]
  • 6.Derpsch R.; Friedrich T.; Kassam A.; Hongwen L. Current Status of Adoption of No-till Farming in the World and Some of Its Main Benefits. Biol Eng 2010, 3, 25. [Google Scholar]
  • 7.Kubitza C.; Krishna V.V.; Schulthess U.; Jain M. Estimating Adoption and Impacts of Agricultural Management Practices in Developing Countries Using Satellite Data. A Scoping Review. Agron. Sustain. Dev. 2020, 40, 16, doi: 10.1007/s13593-020-0610-2 [DOI] [Google Scholar]
  • 8.Kassam A.; Friedrich T.; Derpsch R. Global Spread of Conservation Agriculture. International Journal of Environmental Studies 2019, 76, 29–51, doi: 10.1080/00207233.2018.1494927 [DOI] [Google Scholar]
  • 9.Aryal J.P.; Sapkota T.B.; Jat M.L.; Bishnoi D.K. On-Farm Economic and Environmental Impact of Zero-Tillage Wheat: A Case of North-West India. Ex. Agric. 2015, 51, 1–16, doi: 10.1017/S001447971400012X [DOI] [Google Scholar]
  • 10.Keil A.; Dsouza A.; McDonald A. Zero-Tillage Is a Proven Technology for Sustainable Wheat Intensification in the Eastern Indo-Gangetic Plains: What Determines Farmer Awareness and Adoption? Food Sec. 2017, 9, 723–743, doi: 10.1007/s12571-017-0707-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Erenstein O.; Laxmi V. Zero Tillage Impacts in India’s Rice–Wheat Systems: A Review. Soil and Tillage Research 2008, 100, 1–14, doi: 10.1016/j.still.2008.05.001 [DOI] [Google Scholar]
  • 12.Obade V. de P.; Gaya C. Mapping Tillage Practices Using Spatial Information Techniques. Environmental Management 2020, 66, 722–731, doi: 10.1007/s00267-020-01335-z [DOI] [PubMed] [Google Scholar]
  • 13.Zheng B.; Campbell J.B.; Serbin G.; Galbraith J.M. Remote Sensing of Crop Residue and Tillage Practices: Present Capabilities and Future Prospects. Soil and Tillage Research 2014, 138, 26–34, doi: 10.1016/j.still.2013.12.009 [DOI] [Google Scholar]
  • 14.Azzari G.; Grassini P.; Edreira J.I.R.; Conley S.; Mourtzinis S.; Lobell D.B. Satellite Mapping of Tillage Practices in the North Central US Region from 2005 to 2016. Remote Sensing of Environment 2019, 221, 417–429, doi: 10.1016/j.rse.2018.11.010 [DOI] [Google Scholar]
  • 15.Beeson P.C.; Daughtry C.S.T.; Wallander S.A. Estimates of Conservation Tillage Practices Using Landsat Archive. Remote Sensing 2020, 12, 2665, doi: 10.3390/rs12162665 [DOI] [Google Scholar]
  • 16.Bricklemyer R.S.; Lawrence R.L.; Miller P.R.; Battogtokh N. Predicting Tillage Practices and Agricultural Soil Disturbance in North Central Montana with Landsat Imagery. Agriculture, Ecosystems & Environment 2006, 114, 210–216, doi: 10.1016/j.agee.2005.10.005 [DOI] [Google Scholar]
  • 17.Sharma S.; Dhakal K.; Wagle P.; Kilic A. Retrospective Tillage Differentiation Using the Landsat‐5 TM Archive with Discriminant Analysis. Agrosyst. geosci. environ. 2020, 3, doi: 10.1002/agg2.20000 [DOI] [Google Scholar]
  • 18.Jin Z.; Azzari G.; You C.; Di Tommaso S.; Aston S.; Burke M.; Lobell D.B. Smallholder Maize Area and Yield Mapping at National Scales with Google Earth Engine. Remote Sensing of Environment 2019, 228, 115–128, doi: 10.1016/j.rse.2019.04.016 [DOI] [Google Scholar]
  • 19.Jain M.; Srivastava A.; Balwinder-Singh; Joon R.; McDonald A.; Royal K.; et al. Mapping Smallholder Wheat Yields and Sowing Dates Using Micro-Satellite Data. Remote Sensing 2016, 8, 860, doi: 10.3390/rs8100860 [DOI] [Google Scholar]
  • 20.Jain M.; Balwinder-Singh; Rao P.; Srivastava A.K.; Poonia S.; Blesh J.; et al. The Impact of Agricultural Interventions Can Be Doubled by Using Satellite Data. Nat Sustain 2019, 2, 931–934, doi: 10.1038/s41893-019-0396-x [DOI] [Google Scholar]
  • 21.Rao P.; Zhou W.; Bhattarai N.; Srivastava A.K.; Singh B.; Poonia S.; et al. Using Sentinel-1, Sentinel-2, and Planet Imagery to Map Crop Type of Smallholder Farms. Remote Sensing 2021, 13, 1870, doi: 10.3390/rs13101870 [DOI] [Google Scholar]
  • 22.Liu Y.; Li L.; Chen Q.; Shu M.; Zhang Z.; Liu X. Building Damage Assessment of Compact Polarimetric SAR Using Statistical Model Texture Parameter. In Proceedings of the 2017 SAR in Big Data Era: Models, Methods and Applications (BIGSARDATA); November 2017; pp. 1–4. [Google Scholar]
  • 23.Delegido J.; Verrelst J.; Alonso L.; Moreno J. Evaluation of Sentinel-2 Red-Edge Bands for Empirical Estimation of Green LAI and Chlorophyll Content. Sensors 2011, 11, 7063–7081, doi: 10.3390/s110707063 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Sun Y.; Qin Q.; Ren H.; Zhang T.; Chen S. Red-Edge Band Vegetation Indices for Leaf Area Index Estimation from Sentinel-2/MSI Imagery. IEEE Trans. Geosci. Remote Sensing 2020, 58, 826–840, doi: 10.1109/TGRS.2019.2940826 [DOI] [Google Scholar]
  • 25.Ding Y.; Zhang H.; Wang Z.; Xie Q.; Wang Y.; Liu L.; et al. A Comparison of Estimating Crop Residue Cover from Sentinel-2 Data Using Empirical Regressions and Machine Learning Methods. Remote Sensing 2020, 12, 1470, doi: 10.3390/rs12091470 [DOI] [Google Scholar]
  • 26.Najafi P.; Navid H.; Feizizadeh B.; Eskandari I.; Blaschke T. Fuzzy Object-Based Image Analysis Methods Using Sentinel-2A and Landsat-8 Data to Map and Characterize Soil Surface Residue. Remote Sensing 2019, 11, 2583, doi: 10.3390/rs11212583 [DOI] [Google Scholar]
  • 27.Li W.; Jiang J.; Guo T.; Zhou M.; Tang Y.; Wang Y.; et al. Generating Red-Edge Images at 3 m Spatial Resolution by Fusing Sentinel-2 and Planet Satellite Products. Remote Sensing 2019, 11, 1422, doi: 10.3390/rs11121422 [DOI] [Google Scholar]
  • 28.Sun C.; Bian Y.; Zhou T.; Pan J. Using of Multi-Source and Multi-Temporal Remote Sensing Data Improves Crop-Type Mapping in the Subtropical Agriculture Region. Sensors 2019, 19, 2401, doi: 10.3390/s19102401 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Van Niel T.G.; McVicar T.R. Determining Temporal Windows for Crop Discrimination with Remote Sensing: A Case Study in South-Eastern Australia. Computers and Electronics in Agriculture 2004, 45, 91–108, doi: 10.1016/j.compag.2004.06.003 [DOI] [Google Scholar]
  • 30.Wei S.; Zhang H.; Wang C.; Wang Y.; Xu L. Multi-Temporal SAR Data Large-Scale Crop Mapping Based on U-Net Model. Remote Sensing 2019, 11, 68, doi: 10.3390/rs11010068 [DOI] [Google Scholar]
  • 31.Marenya P.P.; Kassie M.; Jaleta M.; Rahut D.B.; Erenstein O. Predicting Minimum Tillage Adoption among Smallholder Farmers Using Micro-Level and Policy Variables. Agric Econ 2017, 5, 12, doi: 10.1186/s40100-017-0081-1 [DOI] [Google Scholar]
  • 32.Keil A. Scaling Zero-Tillage Wheat through Custom-Hiring Services in the Eastern Indo-Gangetic Plains. Available online: https://repository.cimmyt.org/bitstream/handle/10883/19286/59259.pdf?sequence=1 (accessed on 25 May 2021). [Google Scholar]
  • 33.Government of India. Bihar’s Agriculture Development: Opportunities & Challenges Report of the Special Task Force on Bihar. Available online: https://niti.gov.in/planningcommission.gov.in/docs/aboutus/taskforce/tsk_adoc.pdf (accessed on 25 May 2021).
  • 34.Salam A.; Anwer E.; Alam S. Agriculture and the Economy of Bihar: An Analysis. International Journal of Scientific and Research Publications 2013, 3, 19. [Google Scholar]
  • 35.Newport D.; Lobell D.B.; Balwinder-Singh; Srivastava A.K.; Rao P.; Umashaanker M.; et al. Factors Constraining Timely Sowing of Wheat as an Adaptation to Climate Change in Eastern India. Weather, Climate, and Society 2020, 12, 515–528, doi: 10.1175/WCAS-D-19-0122.1 [DOI] [Google Scholar]
  • 36.Torres R.; Snoeij P.; Geudtner D.; Bibby D.; Davidson M.; Attema E.; et al. GMES Sentinel-1 Mission. Remote Sensing of Environment 2012, 120, 9–24, doi: 10.1016/j.rse.2011.05.028 [DOI] [Google Scholar]
  • 37.Drusch M.; Del Bello U.; Carlier S.; Colin O.; Fernandez V.; Gascon F.; et al. Sentinel-2: ESA’s Optical High-Resolution Mission for GMES Operational Services. Remote Sensing of Environment 2012, 120, 25–36, doi: 10.1016/j.rse.2011.11.026 [DOI] [Google Scholar]
  • 38.Team Planet. Planet Imagery Product Specifications Available online: https://assets.planet.com/docs/Planet_Combined_Imagery_Product_Specs_letter_screen.pdf (accessed on 25 May 2021). [Google Scholar]
  • 39.Gorelick N.; Hancher M.; Dixon M.; Ilyushchenko S.; Thau D.; Moore R. Google Earth Engine: Planetary-Scale Geospatial Analysis for Everyone. Remote Sensing of Environment 2017, 202, 18–27, doi: 10.1016/j.rse.2017.06.031 [DOI] [Google Scholar]
  • 40.Planet Team. Planet Application Program Interface: In Space for Life on Earth; Planet Team: San Francisco, CA, USA, 2017. Available online: https://api.planet.com (accessed on 25 May 2021).
  • 41.Veci L.; Prats-Iraola P.; Scheiber R.; Collard F.; Fomferra N.; Engdahl M. The Sentinel-1 Toolbox. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS); IEEE, 14 July 2014; pp. 1–3. [Google Scholar]
  • 42.Vreugdenhil M.; Wagner W.; Bauer-Marschallinger B.; Pfeil I.; Teubner I.; Rüdiger C.; et al. Sensitivity of Sentinel-1 Backscatter to Vegetation Dynamics: An Austrian Case Study. Remote Sensing 2018, 10, 1396, doi: 10.3390/rs10091396 [DOI] [Google Scholar]
  • 43.Wilson R.T. Py6S: A Python Interface to the 6S Radiative Transfer Model. Computers & Geosciences 2013, 51, 166–171, doi: 10.1016/j.cageo.2012.08.002 [DOI] [Google Scholar]
  • 44.Murphy S. 6S Emulator. Available online: https://github.com/samsammurphy/6S_emulator, 2018.
  • 45.Khlopenkov K.V.; Trishchenko A.P. Implementation and Evaluation of Concurrent Gradient Search Method for Reprojection of MODIS Level 1B Imagery. IEEE Trans. Geosci. Remote Sensing 2008, 46, 2016–2027, doi: 10.1109/TGRS.2008.916633 [DOI] [Google Scholar]
  • 46.Hijmans, R.J. Geographic Data Analysis and Modeling [R Package Raster Version 3.4–10] Available online: https://CRAN.R-project.org/package=raster (accessed on 25 May 2021).
  • 47.The R Development Core Team R: A Language and Environment for Statistical Computing: Reference Index; R Foundation for Statistical Computing: Vienna, 2010; ISBN 978-3-900051-07-5. [Google Scholar]
  • 48.Peña-Barragán J.M.; Ngugi M.K.; Plant R.E.; Six J. Object-Based Crop Identification Using Multiple Vegetation Indices, Textural Features and Crop Phenology. Remote Sensing of Environment 2011, 115, 1301–1316, doi: 10.1016/j.rse.2011.01.009 [DOI] [Google Scholar]
  • 49.Sullivan D.G.; Truman C.C.; Schomberg H.H.; Endale D.M.; Strickland T.C. Evaluating Techniques for Determining Tillage Regime in the Southeastern Coastal Plain and Piedmont. Agron. J. 2006, 98, 1236–1246, doi: 10.2134/agronj2005.0294 [DOI] [Google Scholar]
  • 50.Fletcher R.S. Using Vegetation Indices as Input into Random Forest for Soybean and Weed Classification. AJPS 2016, 07, 2186–2198, doi: 10.4236/ajps.2016.715193 [DOI] [Google Scholar]
  • 51.Gitelson A.A.; Viña A.; Arkebauer T.J.; Rundquist D.C.; Keydan G.; Leavitt B. Remote Estimation of Leaf Area Index and Green Leaf Biomass in Maize Canopies: Remote Estimation of Leaf Area Index. Geophys. Res. Lett. 2003, 30, n/a-n/a, doi: 10.1029/2002GL016450 [DOI] [Google Scholar]
  • 52.Steven M.D. The Sensitivity of the OSAVI Vegetation Index to Observational Parameters. Remote Sensing of Environment 1998, 63, 49–60, doi: 10.1016/S0034-4257(97)00114-4 [DOI] [Google Scholar]
  • 53.Van Deventer A.P.; Ward A.D.; Gowda P.H.; Lyon J.G. Using Thematic Mapper Data to Identify Contrasting Soil Plains and Tillage Practices. Photogrammetric Engineering and Remote Sensing 1997, 63, 87–93. [Google Scholar]
  • 54.Mao W.; Lu D.; Hou L.; Liu X.; Yue W. Comparison of Machine-Learning Methods for Urban Land-Use Mapping in Hangzhou City, China. Remote Sensing 2020, 12, 2817, doi: 10.3390/rs12172817 [DOI] [Google Scholar]
  • 55.Kuhn M. Building Predictive Models in R Using the Caret Package. J. Stat. Soft. 2008, 28, doi: 10.18637/jss.v028.i05 [DOI] [Google Scholar]
  • 56.Ayyadevara V.K. Pro Machine Learning Algorithms; Apress: Berkeley, CA, 2018; ISBN 978-1-4842-3563-8. [Google Scholar]
  • 57.Davidson R.; MacKinnon J.G. Bootstrap Tests: How Many Bootstraps? Econometric Reviews 2000, 19, 55–68, doi: 10.1080/07474930008800459 [DOI] [Google Scholar]
  • 58.Beeson P.C.; Daughtry C.S.T.; Hunt E.R.; Akhmedov B.; Sadeghi A.M.; Karlen D.L.; et al. Multispectral Satellite Mapping of Crop Residue Cover and Tillage Intensity in Iowa. Journal of Soil and Water Conservation 2016, 71, 385–395, doi: 10.2489/jswc.71.5.385 [DOI] [Google Scholar]
  • 59.Daughtry C.S.T.; Hunt E.R.; Doraiswamy P.C.; McMurtrey J.E. Remote Sensing the Spatial Distribution of Crop Residues. Agron. J. 2005, 97, 864–871, doi: 10.2134/agronj2003.0291 [DOI] [Google Scholar]
  • 60.Lopez-Granados F.; Jurado-Exposito M.; Pena-Barragan J.M.; Garcia-Torres L. Using geostatistical and remote sensing approaches for mapping soil properties. European Journal of Agronomy 2005, 23, 3, 279–289. [Google Scholar]
  • 61.USGS. Mapping, Remote sensing, and Geospatial data. https://www.usgs.gov/faqs/what-are-best-landsat-spectral-bands-use-my-research. Accessed June 21, 2022.
  • 62.Lopes M.; Frison P.; Crowson M.; Warren‐Thomas E.; Hariyadi B.; Kartika W.D.; et al. Improving the Accuracy of Land Cover Classification in Cloud Persistent Areas Using Optical and Radar Satellite Image Time Series. Methods Ecol Evol 2020, 11, 532–541, doi: 10.1111/2041-210X.13359 [DOI] [Google Scholar]
  • 63.Song Y.; Wang J. Mapping Winter Wheat Planting Area and Monitoring Its Phenology Using Sentinel-1 Backscatter Time Series. Remote Sensing 2019, 11, 449, doi: 10.3390/rs11040449 [DOI] [Google Scholar]
  • 64.Supriatna S. Spatio-Temporal Analysis of Rice Field Phenology Using Sentinel-1 Image in Karawang Regency West Java, Indonesia. GEOMATE 2019, 17, doi: 10.21660/2019.62.8782 [DOI] [Google Scholar]
  • 65.Zhou W.; Rao P.; Jat M.L.; Singh B.; Singh R.; Schulthess U.; et al. Using Sentinel-2 to Track Field-Level Tillage Practices at Regional Scales in Smallholder Systems. Remote Sensing 2021, 13, 24, 5108. [Google Scholar]

Decision Letter 0

Jaishanker Raghunathan Nair

31 May 2022

PONE-D-22-11912

Using Sentinel-1, Sentinel-2, and Planet Satellite Data to Map Field-Level Tillage Practices in Smallholder Systems

PLOS ONE

Dear Dr. Jain,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite your attention to the queries raised by the reviewers and to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jul 15 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Jaishanker Raghunathan Nair, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

3. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

4. PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to ‘Update my Information’ (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager. Please see the following video for instructions on linking an ORCID iD to your Editorial Manager account: https://www.youtube.com/watch?v=_xcclfuvtxQ

5. We note that Figure 1 in your submission contain [map/satellite] images which may be copyrighted. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For these reasons, we cannot publish previously copyrighted maps or satellite images created using proprietary data, such as Google software (Google Maps, Street View, and Earth). For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright.

We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:

a. You may seek permission from the original copyright holder of Figure(s) [#] to publish the content specifically under the CC BY 4.0 license.  

We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text:

“I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.”

Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission.

In the figure caption of the copyrighted figure, please include the following text: “Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year].”

b. If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder’s requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.

The following resources for replacing copyrighted map figures may be helpful:

USGS National Map Viewer (public domain): http://viewer.nationalmap.gov/viewer/

The Gateway to Astronaut Photography of Earth (public domain): http://eol.jsc.nasa.gov/sseop/clickmap/

Maps at the CIA (public domain): https://www.cia.gov/library/publications/the-world-factbook/index.html and https://www.cia.gov/library/publications/cia-maps-publications/index.html

NASA Earth Observatory (public domain): http://earthobservatory.nasa.gov/

Landsat: http://landsat.visibleearth.nasa.gov/

USGS EROS (Earth Resources Observatory and Science (EROS) Center) (public domain): http://eros.usgs.gov/#

Natural Earth (public domain): http://www.naturalearthdata.com/

6. Please upload a new copy of Figure 4 as the detail is not clear. Please follow the link for more information: https://blogs.plos.org/plos/2019/06/looking-good-tips-for-creating-your-plos-figures-graphics/

Additional Editor Comments:

A fairly well presented work. Please ensure the technical queries of the reviewers are addressed before final acceptance.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Introduction:

There is relevance and clarity in what authors wish to investigate.

Materials and Methods:

The appropriateness of duration of Ground truth data, which were collected during April to May 2018 (after harvesting of the season being investigated), for winter crop may be justified (realising fully well that it is one of the most difficult information to get).

There is clarity and correctness in the method adopted.

Results: Line 304 (“... in many cases reduced overall accuracy compared to individual sensor models that...”) – Plausible reasons may be forwarded. Is it because field data is not adequate (enough)? If one were to have more field data, would such things happen (a scenario similar to the Hughes effect starting to become evident)?

Similar comment for Table 4 wherein Sentinel-2 accuracy is greater than Sentinel-1 + Sentinel-2 (The reason for this decrease in accuracy may be provided / speculated.).

Line 320: What causes blue band of PlanetScope to come out as the most valuable for discrimination (“...PlanetScope data, band 1 (blue) from October 8th was always the most important feature,...”) needs to be discussed. Could it be due to (relatively) poor atmospheric correction of blue band (lowest wavelength)?

Discussion and conclusion:

The results are logical, corrigible and supported by a comprehensive analysis.

Reviewer #2: This is a very nice paper assessing the usefulness of Planet, Sentinel-2 and Sentinel-1 satellite images for classifying zero tillage (ZT) vs conventional tillage (CT) field management practices in a region in the Indo-Gangetic Plains in India. The authors have done a good job in collecting a large number of satellite images and in situ data, which allows them to draw solid conclusions. My only major comment is that the authors should stress the limitations of the study even more than already done. In the end, the study just considers wheat fields in the winter season 2017-18 in this region. The amount of crop residue left from the monsoon season seems to be the main indicator by which ZT and CT can be distinguished. This might be different in other regions, seasons or crops.

Specific comments

The paper is overall very well written. Yet sometimes the same phrases are used in a repetitive manner: e.g. “We used …” three times in lines 239-243, or “results suggests …” also three times from lines 355 to 363. But there are many more examples. So please go through the text and try to reduce these repetitions.

Line 164: “Full range of” what?

Figure 3: Explain in the accompanying text already here why NDVI is higher for ZT than CT in October and November.

Line 272: Delete “conducted analyses that”

Line 402: “Who” instead of “which”

Line 237: It is problematic to state “We believe that other ML algorithms would produce other results”. Personally, I think you are right but you cannot know until you do it.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Markand Oza

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 Nov 28;17(11):e0277425. doi: 10.1371/journal.pone.0277425.r002

Author response to Decision Letter 0


6 Oct 2022

Editor

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

Thank you for this point. We have ensured that our manuscript’s style matches the PLOS ONE style template.

2. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

To address several of the reviewers’ points, we have added 3 references to our manuscript

60. Lopez-Granados, F.; Jurado-Exposito, M.; Pena-Barragan, J.M.; Garcia-Torres, L. Using geostatistical and remote sensing approaches for mapping soil properties. European Journal of Agronomy 2005, 23, 3, 279-289.

61. USGS. Mapping, Remote sensing, and Geospatial data. https://www.usgs.gov/faqs/what-are-best-landsat-spectral-bands-use-my-research. Accessed June 21, 2022.

65. Zhou, W.; Rao, P.; Jat, M.L.; Singh, B.; Singh, R.; Schulthess, U.; Poonia, S.; Bijarniya, D.; Singh, L.K.; Kumar, M.; Jain, M. Using Sentinel-2 to Track Field-Level Tillage Practices at Regional Scales in Smallholder Systems. Remote Sensing 2021, 13, 24, 5108.

3. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

Thanks for this important point. We have now included all minimal data sets in the Zenodo data repository (https://doi.org/10.5281/zenodo.6703973). We have added this to the data availability statement at the end of our manuscript.

4. PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to ‘Update my Information’ (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager. Please see the following video for instructions on linking an ORCID iD to your Editorial Manager account: https://www.youtube.com/watch?v=_xcclfuvtxQ

Thanks, we have now added the ORCID iD for the corresponding author, Meha Jain (0000-0002-6821-473X).

5. We note that Figure 1 in your submission contains [map/satellite] images which may be copyrighted. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For these reasons, we cannot publish previously copyrighted maps or satellite images created using proprietary data, such as Google software (Google Maps, Street View, and Earth). For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright.

We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:

a. You may seek permission from the original copyright holder of Figure(s) [#] to publish the content specifically under the CC BY 4.0 license.

We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text:

“I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.”

b. If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder’s requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.

The following resources for replacing copyrighted map figures may be helpful:

USGS National Map Viewer (public domain): http://viewer.nationalmap.gov/viewer/

The Gateway to Astronaut Photography of Earth (public domain): http://eol.jsc.nasa.gov/sseop/clickmap/

Maps at the CIA (public domain): https://www.cia.gov/library/publications/the-world-factbook/index.html and https://www.cia.gov/library/publications/cia-maps-publications/index.html

NASA Earth Observatory (public domain): http://earthobservatory.nasa.gov/

Landsat: http://landsat.visibleearth.nasa.gov/

USGS EROS (Earth Resources Observatory and Science (EROS) Center) (public domain): http://eros.usgs.gov/#

Natural Earth (public domain): http://www.naturalearthdata.com/

Thank you. We have removed the basemap that raised potential copyright concerns, so there should now be no issues. The image shown in panel C is high-resolution imagery used for our analysis, obtained from Planet data, and we created the image ourselves.

6. Please upload a new copy of Figure 4 as the detail is not clear. Please follow the link for more information: https://blogs.plos.org/plos/2019/06/looking-good-tips-for-creating-your-plos-figures-graphics/

Thanks, we have now uploaded a new Figure 4. Please let us know if this is not sufficient.

Additional Editor Comments:

A fairly well presented work. Please ensure the technical queries of the reviewers are addressed before final acceptance.

Thank you for the chance to address your and reviewers’ comments. We believe the manuscript has been greatly improved thanks to these suggestions.

Reviewer #1

Introduction:

There is relevance and clarity in what authors wish to investigate.

Thank you very much for your comments.

Materials and Methods:

The appropriateness of duration of Ground truth data, which were collected during April to May 2018 (after harvesting of the season being investigated), for winter crop may be justified (realising fully well that it is one of the most difficult information to get).

There is clarity and correctness in the method adopted.

Thank you for these comments.

Results:

Line 304 (“... in many cases reduced overall accuracy compared to individual sensor models that...”) – Plausible reasons may be forwarded. Is it because field data is not adequate (enough)? If one were to have more field data, would such things happen (a scenario similar to the Hughes effect starting to become evident)?

Thank you for this interesting and important point. We believe that the reduced accuracy from Sentinel-1 is because radar data are not able to detect differences between zero tillage and conventional tillage as well as optical data. These findings are similar to those found in other studies that have used Sentinel-1 data along with optical imagery to map tillage practices (e.g., Azzari et al. 2021). We have now stated that our results are similar to previous studies in the results section (after line 304 above), and we have discussed the reasons for this further in the discussion section.

“Adding Sentinel-1 data did little to improve classification accuracy, and in many cases reduced overall accuracy compared to individual sensor models that used Sentinel-2 or Planet imagery, similar to Azzari et al. [4].” (page 9, lines 320-322).

“Although Sentinel-1 provides complementary information, such as surface moisture and roughness, to optical data, optical data are much better able to discriminate differences between zero and conventional tillage. Therefore, models that include Sentinel-1 imagery probably lead to reduced accuracy compared to optical-only models (Table 4) because less helpful radar data is selected at some tree nodes in these models.” (page 12, lines 427-432).

Similar comment for Table 4 wherein Sentinel-2 accuracy is greater than Sentinel-1 + Sentinel-2 (The reason for this decrease in accuracy may be provided / speculated.).

Thanks again for this helpful comment, and we have now cited Table 4 in our explanation for why models that include Sentinel-2 have reduced accuracy.

“Although Sentinel-1 provides complementary information, such as surface moisture and roughness, to optical data, optical data are much better able to discriminate differences between zero and conventional tillage. Therefore, models that include Sentinel-1 imagery probably lead to reduced accuracy compared to optical-only models (Table 4) because less helpful radar data is selected at some tree nodes in these models.” (page 12, lines 427-432).

Line 320: What causes blue band of PlanetScope to come out as the most valuable for discrimination (“...PlanetScope data, band 1 (blue) from October 8th was always the most important feature,...”) needs to be discussed. Could it be due to (relatively) poor atmospheric correction of blue band (lowest wavelength)?

Thanks for your interesting point. We found that the blue bands from PlanetScope and Sentinel-2 often come out to be the most important variables for detecting zero tillage versus conventional tillage. We believe this is because the blue band has been shown to distinguish between soil and vegetation cover, and has also been shown to effectively map soil properties, such as soil organic carbon, that may differ between conventional and zero tillage fields. We have added these explanations in the discussion section.

“This suggests that the factors that are most important for distinguishing between ZT and CT likely occur during the field-preparation and sowing periods. Mechanistically this makes sense given that ZT fields are often covered in crop residue in this region, while CT fields are bare. This is because under ZT farmers do not till the soil and can plant wheat seeds within the remaining rice residue from the previous season. This residue may lead to higher NDVI values in ZT fields compared to CT fields (Fig 3) due to remaining green vegetated biomass from the prior rice harvest [59]. Furthermore, we found that the blue bands from Planet and Sentinel-2 were often the most important predictors, likely because data from the blue wavelength have been shown to effectively detect soil properties [60] and distinguish between soil and vegetation cover [61].” (page 12, lines 443-453).

Discussion and conclusion:

The results are logical, corrigible and supported by a comprehensive analysis.

Thank you for these comments.

Reviewer #2

This is a very nice paper assessing the usefulness of Planet, Sentinel-2 and Sentinel-1 satellite images for classifying zero tillage (ZT) vs conventional tillage (CT) field management practices in a region in the Indo-Gangetic Plains in India. The authors have done a good job in collecting a large number of satellite images and in situ data, which allows them to draw solid conclusions.

Thank you very much for these comments.

My only major comment is that the authors should stress the limitations of the study even more than already done. In the end, the study just considers wheat fields in the winter season 2017-18 in this region. The amount of crop residue left from the monsoon season seems to be the main indicator by which it is possible to distinguish ZT and CT. This might be different in other regions, seasons or crops.

Thanks for this important suggestion. We agree that our study is limited in terms of crop type, study region, and time period, and it would be worthwhile to expand the discussion of the limitations of our study and its ability to be generalized to other regions. We have expanded our discussion of limitations in the discussion section.

“Finally, our study is limited in spatial and temporal scale; we only applied our analysis to one cropping system (rice-wheat), in one year (2017-18), and in one region (Arrah district, Bihar). Future work should examine how generalizable our findings are to other rice-wheat cropping systems in India and across multiple years. A recent study has shown that Sentinel-2 can be used to accurately map tillage practices in rice-wheat systems across Northern India over multiple years [65]. More broadly, future work should assess how generalizable our findings are to different smallholder farming systems with different cropping patterns in other parts of the world.” (page 13, lines 481-489).

Specific comments

The paper is overall very well written. Yet sometimes the same phrases are used in a repetitive manner: e.g. “We used …” three times in lines 239-243, or “results suggests …” also three times from lines 355 to 363. But there are many more examples. So please go through the text and try to reduce these repetitions.

Thanks for catching this. We have edited the text as suggested, and also read through the manuscript to remove other instances of repetition.

Line 164: “Full range of” what?

Thanks. We have changed this to ‘full range of phenological change…’ (page 4, line 165).

Figure 3: Explain in the accompanying text already here why NDVI is higher for ZT than CT in October and November.

We have added the following text to address this suggestion.

“From October to December, NDVI is higher in ZT fields compared to CT fields (Fig. 3), likely because farmers maintain monsoon rice crop residue on ZT fields but not CT fields. This is because ZT machinery allows farmers to plant wheat seeds into existing rice stubble, whereas in CT fields rice residue is removed by harvesting or by being incorporated into the soil through tilling.” (page 5, lines 171-176).
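For reference (not part of the quoted manuscript text), NDVI follows the standard normalized-difference definition computed from red and near-infrared surface reflectance:

```latex
\mathrm{NDVI} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{Red}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{Red}}}
```

Values range from −1 to 1; green biomass raises near-infrared reflectance relative to red, which is consistent with partly green rice residue on ZT fields registering higher NDVI than the bare soil of CT fields.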

Line 272: Delete “conducted analyses that”

Done.

Line 402: “Who” instead of “which”

Done.

Line 237: It is problematic to state “We believe that other ML algorithms would produce other results”. Personally, I think you are right but you cannot know until you do it.

This is a good suggestion, and we have removed this statement. Instead, we have listed our use of only random forest as a limitation.

“Third, our conclusions are only based on the results of one classification model, random forest, and future studies should assess whether other classification models can lead to improved accuracies.” (page 13, lines 479-481).

Decision Letter 1

Jaishanker Raghunathan Nair

27 Oct 2022

Using Sentinel-1, Sentinel-2, and Planet Satellite Data to Map Field-Level Tillage Practices in Smallholder Systems

PONE-D-22-11912R1

Dear Dr. Jain,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Jaishanker Raghunathan Nair, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Jaishanker Raghunathan Nair

14 Nov 2022

PONE-D-22-11912R1

Using Sentinel-1, Sentinel-2, and Planet Satellite Data to Map Field-Level Tillage Practices in Smallholder Systems

Dear Dr. Jain:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Jaishanker Raghunathan Nair

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Fig

    Histograms for Sentinel-2 (shown as blue), raw PlanetScope (shown as red), and histogram-matched PlanetScope (shown as grey) for images from October 8, 2017 for the (a) Blue, (b) Green, (c) Red, and (d) NIR bands.

    (TIF)

    S2 Fig. RGB images of Sentinel-2 and PlanetScope with some ZT/CT fields overlaid on imagery from October 8, 2017, and a VH-band Sentinel-1 image from November 8, 2017.

    (TIF)

    S1 File. Survey conducted with farmers in the study.

    (DOCX)

    Data Availability Statement

    We have now included all minimal data sets in the Zenodo data repository (https://doi.org/10.5281/zenodo.6703973).

