Journal of the Royal Society of New Zealand. 2022 Sep 19;53(1):69–81. doi: 10.1080/03036758.2022.2118321

Showcasing the TAIAO project: providing resources for machine learning from images of New Zealand's natural environment

Nick Lim, Albert Bifet, Daniel Bull, Eibe Frank, Yunzhe Jia, Jacob Montiel, Bernhard Pfahringer

ABSTRACT

Proper management of the earth's natural resources is imperative to combat further degradation of the natural environment. However, the environmental datasets necessary for informed resource planning and conservation can be costly to collect and annotate. Consequently, there is a lack of publicly available datasets, particularly annotated image datasets relevant for environmental conservation, that can be used for the evaluation of machine learning algorithms to determine their applicability in real-world scenarios. To address this, the Time-evolving Data Science and Artificial Intelligence for Advanced Open Environmental Science (TAIAO) project in New Zealand aims to provide a collection of datasets and accompanying example notebooks for their analysis. This paper showcases three New Zealand-based annotated image datasets that form part of the collection. The first dataset contains annotated images of various predator species, mainly small invasive mammals, taken using low-light camera traps predominantly at night. The second provides aerial photography of the Waikato region in New Zealand, in which stands of Kahikatea (a native New Zealand tree) have been marked up using manual segmentation. The third is a dataset containing orthorectified high-resolution aerial photography, paired with satellite imagery taken by Sentinel-2. Additionally, the TAIAO web platform also contains a collated list of other datasets provided and licensed by our data partners that may be of interest to other researchers.

KEYWORDS: Research resource, environmental science, image dataset, aerial photography, camera traps, remote sensing, machine learning

Introduction

Environmental data science is strategically essential to New Zealand because it supports research on climate change impacts, adaptation to concomitant changes in the environment, and conservation of the biosphere (Gibert et al. 2018). Effective data science can play an essential role in the New Zealand government's goals of improving the quality of freshwater and reaching zero carbon by 2050.

While the technical means for data collection in environmental data science are becoming ever more accessible, the sheer volume of the resulting data can be challenging to clean, annotate, and verify. This is compounded by the fact that environmental datasets, like other specialised datasets, tend to be niche and require expert domain knowledge to label accurately.

From a machine learning point of view, datasets that accurately reflect the variability in a natural environment are useful for evaluating the applicability of machine learning algorithms in real-world settings. Images in many popular image datasets tend to be taken under ideal conditions and are not reflective of real-world variability (Tsipras et al. 2020). One of the objectives of the TAIAO project on environmental data science in New Zealand, which funds the work presented in this paper, is to provide datasets that are suitable for the application and evaluation of machine learning techniques, with a special emphasis on data collected on the New Zealand environment.

We first present a brief overview of the TAIAO project before discussing the three datasets that form the primary contribution of this paper, including results of machine learning techniques applied to them. The first dataset involves species identification from camera trap imagery taken in New Zealand's native forest. The second dataset consists of aerial photography of the primarily pastoral Waikato region in New Zealand that has been manually annotated to segment stands of Kahikatea, a tree that is native to New Zealand. The third dataset links high-resolution aerial photography of New Zealand land cover to low-resolution satellite imagery and can be used to evaluate the accuracy of machine learning-based super-resolution techniques.

Time-evolving data science/Artificial intelligence for advanced open environmental science (TAIAO)

TAIAO: Time-Evolving Data Science/Artificial Intelligence for Advanced Open Environmental Science (2020) is a New Zealand government-supported multi-year programme aimed at improving the data science capabilities of environmental researchers in New Zealand. In addition to upskilling New Zealand researchers, the programme aims to help advance the state of the art in environmental data science by enabling the development of new machine learning algorithms, with a special emphasis on methods that are suitable for processing data collected on the New Zealand environment. In line with this aim, an important objective of TAIAO is to build an open-source framework for machine learning on environmental data and to provide an openly available repository of datasets to improve reproducibility in environmental data science.

The TAIAO programme is a multi-institute and multi-domain programme. It includes data scientists, data engineers, environmental scientists, and machine learning researchers from undergraduate to post-graduate levels. Moreover, collaboration extends beyond technical aspects: regional councils, iwi,1 and co-governance entities are involved in applying the methods developed by the project to support governance and management decisions.

Reproducible notebooks

Figure 1 provides a conceptual view of the TAIAO software platform and how the corresponding components of the platform interact. It consists of a catalogue of datasets, a library of transformations that can potentially be applied to a dataset (denoted in Figure 1 as Transf.), and a catalogue of so-called ‘notebooks’ containing programme code for processing the data along with documentation of the code and the purpose of the notebook.

Figure 1. Conceptual view of the TAIAO platform and the interaction of the corresponding components.

The TAIAO software platform uses Jupyter notebooks to ensure reproducibility and sufficient documentation because the Jupyter software is open source and lightweight. Each notebook is associated with a task and includes a description of how the data can be accessed. The notebooks also document the application for which the dataset was gathered, as well as the questions the data provider and researcher seek to answer with the data. We envision that as more notebooks are developed for the platform, the project will improve the accessibility of data science research to environmental scientists by providing examples of the application of state-of-the-art techniques.
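As a purely illustrative example, a typical notebook cell might combine data access, a transformation from the library, and inline documentation along the following lines (the file path and column names below are hypothetical, not part of the actual TAIAO catalogue):

```python
import pandas as pd

# Load a dataset from the catalogue (hypothetical path and schema).
detections = pd.read_csv("datasets/karioi_predator_camera/metadata.csv")

# Apply a transformation from the library: keep night-time detections only.
night = detections[(detections["hour"] >= 20) | (detections["hour"] <= 5)]

# Findings are documented inline, next to the code that produced them.
print(f"{len(night)} of {len(detections)} detections occurred at night")
```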

Indigenous data science

TAIAO and the New Zealand government are committed to ‘Vision Mātauranga’ (Kaiser and Saunders 2021), which aims to unlock the potential of traditional indigenous knowledge and recognise the value of the tradition and knowledge passed down through the people living in the land. As part of this commitment, TAIAO recognises the community as partners in science and innovation, and guardians of natural resources and indigenous knowledge.

TAIAO is also committed to the principles of 'Te Mana Raraunga' (2018), which recognise indigenous data rights and sovereignty, and it maintains regular dialogue with the owners of the data about how their data is used. It works closely with local co-governance entities and iwi to understand the interests and issues of the community. Additionally, TAIAO holds regular meetings to review the environmental findings and the capability developed.

Featured datasets

This paper features three annotated datasets provided by the TAIAO project. The first dataset contains annotated images of various predator species present in New Zealand, mainly small invasive mammal species, taken using night-vision cameras. The second dataset contains aerial photography of the Waikato region, annotated with segmentations of stands of kahikatea, a native New Zealand tree. The third is a dataset containing orthorectified high-resolution aerial photography, paired with satellite imagery taken by the Sentinel-2 satellites. Users may access the datasets through the following DOI identifiers:

  • Karioi predator camera trap: 10.5281/zenodo.5276593 (Lim 2021)

  • Kahikatea dataset with human-annotated true explanations for positive examples: 10.5281/zenodo.5059769 (Jia 2021)

  • Paired Waikato region aerial photography and Sentinel-2 images: 10.5281/zenodo.5296368 (Bull 2021a)

Predator camera dataset

The Karioi Predator Camera dataset is derived from videos originally provided by a New Zealand regional council. The raw data consist of 2101 thirty-second clips recorded from 2018 to 2020 by motion-triggered cameras deployed in the native forest on Mount Karioi in New Zealand. The videos were captured at between thirty and sixty frames per second at a resolution of 1920×1080, predominantly at night. Because the predator species considered tend to hide in the foliage, the resulting images pose a particularly challenging machine learning problem.

A large proportion of the videos (approximately 85%) are false positives and do not contain predator species of interest. The main challenge in building an image classification dataset from these videos is the large number of frames (approximately 2.5 million) that need to be processed to find images of the predator species. To reduce the amount of manual work required, the background subtraction function provided by the Matlab software, namely vision.ForegroundDetector with 5-frame subtraction, was applied to help detect the movement of the predator species in the video. Subsequently, the blob detector function vision.BlobAnalysis, with minimumBlobArea set to 49, was used to further filter out frames solely exhibiting movement of leaves and plants. The combination of these two processes reduced the number of frames requiring manual processing to a manageable 90,000. The frames of the video clips were then manually annotated with the predator species and bounding boxes, using the background-subtracted videos as a guide. Finally, the bounding boxes of the predator species were used to crop and resize the images to 224×224 pixels. For the negative class, containing images that do not exhibit predators, random 224×224 crops were taken from the corresponding video frames. Figure 2 summarises the process used to generate the image dataset from the raw video files.
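For readers who prefer an open-source pipeline, the sketch below reproduces the gist of this filtering step with OpenCV. The paper itself used Matlab's vision.ForegroundDetector and vision.BlobAnalysis, so the parameter mapping here (e.g. history=5 standing in for the 5-frame subtraction) is an approximation for illustration, not the authors' exact method:

```python
import cv2

MIN_BLOB_AREA = 49  # matches the minimumBlobArea used in the paper

def frames_with_motion(video_path):
    """Yield indices of frames whose foreground mask contains a large enough blob."""
    capture = cv2.VideoCapture(video_path)
    # GMM-based background subtraction, comparable to vision.ForegroundDetector;
    # history=5 approximates the paper's 5-frame subtraction.
    subtractor = cv2.createBackgroundSubtractorMOG2(history=5, detectShadows=False)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Connected-component statistics stand in for vision.BlobAnalysis.
        n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
        # stats[0] is the background component; inspect the remaining blobs.
        if any(stats[i, cv2.CC_STAT_AREA] >= MIN_BLOB_AREA for i in range(1, n)):
            yield index
        index += 1
    capture.release()

candidate_frames = list(frames_with_motion("clip_0001.mp4"))  # hypothetical file name
```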

Figure 2. Process used to generate the image dataset from the raw video files.

For our publicly available dataset, 1200 images of each of three predator species (cats, possums, and rats) were randomly selected, and all 123 images available for stoats are included. Additionally, 2400 images that do not contain a predator are provided. The images are organised into folders, one per class. Table 1 and Figure 3 summarise the breakdown of the classes and present representative images, respectively.

Table 1. Breakdown of the classes in the dataset.

Class     Training  Evaluation
Cats      1000      200
Possums   1000      200
Rats      1000      200
Stoats    100       23
Empty     2000      400

Figure 3. Sample images in the Karioi Predator Camera dataset with the corresponding classes. A, cat. B, possum. C, rat. D, stoat. E, empty.

The Karioi Predator Camera dataset differs significantly from other publicly available camera trap datasets. Datasets such as Snapshot Safari (Pardo et al. 2021) and WCS Camera Trap (O'Brien 2016) focus on large mammals in an open field, and images in datasets that focus on small mammals such as the LILA Wellington Camera Trap data (Anton et al. 2018) are usually taken during daylight hours, away from the foliage.

To provide benchmark results for this challenging data, we present results for several image classification networks that were pretrained on ImageNet and fine-tuned on the training data of the Karioi Predator Camera dataset. The following data augmentation was applied during training: random flips in both horizontal and vertical orientations, brightness and contrast jitter with a strength of 0.5, randomised rotation, and random crops of 60% to 80% of each image. We trained for 25 epochs in each run and picked the model with the lowest classification error on a separate internal validation set held out from the training data. The neural networks (the convolutional neural networks EfficientNet-B0, EfficientNet-B4, MobileNetV3, ResNet-18, ResNet-50, and the vision transformer network VIT-S) were trained using cross-entropy loss and the ADAM optimiser, with a batch size of 64 and a learning rate of 0.001. The backbone of the vision transformer network VIT-S used pretrained weights provided by Facebook AI Research (Caron et al. 2021) and a linear classifier as the classification head. We also include baseline results for a linear support vector machine (SVM). The code used to fine-tune the models, as well as the Matlab code used to process the videos, is available at https://github.com/nlim-uow/predatorcam.
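The following PyTorch sketch condenses the fine-tuning setup described above. It is an illustration under stated assumptions (for example, the exact rotation range is not given in the text), not a substitute for the released code:

```python
import torch
import torchvision
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.6, 0.8)),  # crops of 60-80% of each image
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.ColorJitter(brightness=0.5, contrast=0.5),
    transforms.RandomRotation(degrees=180),  # "randomised rotation"; exact range assumed
    transforms.ToTensor(),
])

# Folder-per-class layout, as described for the published dataset.
train_set = torchvision.datasets.ImageFolder("karioi/train", transform=train_transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

# ImageNet-pretrained backbone with a fresh 5-class head (cat/possum/rat/stoat/empty).
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 5)

optimiser = torch.optim.Adam(model.parameters(), lr=0.001)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for epoch in range(25):
    for images, labels in loader:
        optimiser.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimiser.step()
```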

Table 2 summarises the classification performance of the various image classification networks on the evaluation set. The table shows the means and standard deviations obtained from five runs. Note that while we did not use methods to handle class imbalance during fine-tuning, we report the macro-F1 score to give equal weight to the minority class (stoats). Users may obtain better performance on the minority classes through methods such as random over-sampling during training.

Table 2. The average image classification performance of various image classification networks across five runs.

Model                Top-1 Accuracy     Macro F1
EfficientNet-B0      0.9501 (0.0038)    0.9465 (0.0027)
EfficientNet-B4      0.9371 (0.0033)    0.9285 (0.0050)
MobileNetV3          0.9609 (0.0018)*   0.9625 (0.0025)*
ResNet-18            0.9423 (0.0016)    0.9465 (0.0034)
ResNet-50            0.9499 (0.0028)    0.9505 (0.0024)
VIT-S                0.9609 (0.0052)*   0.9594 (0.0119)
SVM (linear kernel)  0.4213 (0.1212)    0.4145 (0.1017)

Note: Standard deviations are listed in parentheses; asterisks mark the best result in each column.
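As a sketch of the random over-sampling suggested above, PyTorch's WeightedRandomSampler can be used to draw minority-class images (such as stoats) as often as majority-class ones; `train_set` refers to the ImageFolder from the previous snippet:

```python
from collections import Counter

import torch

# Inverse-frequency weights: rare classes are sampled proportionally more often.
class_counts = Counter(train_set.targets)
weights = [1.0 / class_counts[label] for label in train_set.targets]
sampler = torch.utils.data.WeightedRandomSampler(
    weights, num_samples=len(weights), replacement=True)
balanced_loader = torch.utils.data.DataLoader(train_set, batch_size=64, sampler=sampler)
```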

Aerial photography of kahikatea trees

The kahikatea is a protected native tree in New Zealand, and there is strong interest from regional councils in identifying and cataloguing such trees from aerial photography. The aerial photographs in the second dataset featured in this paper, the Kahikatea dataset, were taken at two different times, namely in June 2016 and March 2017, using an undercarriage camera at heights between 200 m and 400 m above sea level. One of the challenges in creating this dataset was the difficulty of identifying and distinguishing this tree species in the approximately 80,000 aerial photographs obtained from one of New Zealand's regional councils. To address this challenge, the New Zealand Land Cover Database 4.1 provided by Toitū Te Whenua Land Information New Zealand (2015) was used to identify the known approximate locations of kahikatea trees, and these locations were cross-referenced with the GPS tags in the aerial photographs. After cross-referencing, images were manually separated based on whether or not they contained kahikatea. Finally, to enable training of image segmentation models, the kahikatea trees in the positive examples were manually segmented.
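A hedged geopandas sketch of this cross-referencing step is shown below; the LCDB file name, column name, and cover classes are assumptions for illustration, not the authors' exact workflow:

```python
import geopandas as gpd
from shapely.geometry import Point

# Photo locations extracted from EXIF GPS tags (assumed lon/lat in WGS84).
photos = gpd.GeoDataFrame(
    {"file": ["img_0001.jpg", "img_0002.jpg"]},
    geometry=[Point(175.32, -37.79), Point(175.41, -37.85)],
    crs="EPSG:4326",
)

# LCDB v4.1 land-cover polygons (file name and column name assumed).
lcdb = gpd.read_file("lcdb-v41.gpkg").to_crs(photos.crs)

# Keep photos whose location falls in a cover class likely to contain kahikatea.
candidate_classes = ["Indigenous Forest", "Broadleaved Indigenous Hardwoods"]
forest = lcdb[lcdb["Name_2012"].isin(candidate_classes)]
candidates = gpd.sjoin(photos, forest, predicate="within")
```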

The final dataset contains 232 positive examples (images containing kahikatea) and 287 negative examples. Figure 4 shows some representative images from the dataset. There appear to be relatively few publicly available datasets of segmented aerial photography of this kind. The INRIA (Maggiori et al. 2017) dataset is one example, but it focuses on classifying man-made buildings versus non-buildings and is thus not as fine-grained as the Kahikatea dataset. The RESISC45 (Cheng et al. 2017) dataset contains more classes (45 in total) but does not have segmentation-level annotations.

Figure 4. Sample images from the Kahikatea dataset. A, Negative class. B, Positive class. C, Segmentation of positive class.

The Kahikatea dataset was used to produce the results in Jia et al. (2021), which investigates the relationship between the explainability of deep learning models and their test accuracy. The paper shows that there is a positive correlation between explanation quality and test accuracy and a negative correlation between explanation quality and test loss. The code for reproducing the results can be found at https://github.com/Alvence/AccuracyAndExplanationQuality. Additional details, including how explanation quality is defined, can be found in the cited paper.

Paired high-resolution aerial photography and Sentinel-2 satellite imagery

The third dataset featured in this paper contains high-resolution orthorectified aerial photography provided by the Waikato Regional Aerial Photography Service (WRAPS) hosted by Toitū Te Whenua Land Information New Zealand (2017), paired with low-resolution satellite images taken by the Sentinel-2 satellites (Wulder et al. 2019). These images were collected as part of a research project on satellite image super-resolution. Each aerial photograph was down-sampled to a spatial resolution of 2.5 m per pixel, while the satellite images have a spatial resolution of 10 m per pixel. The satellite images were taken from eight fly-bys of the Sentinel-2 satellites between 16 January and 2 March 2019, so that the data is temporally consistent with the aerial photographs, which were taken in February 2019.

Experiments with variants of the data showed that super-resolution algorithms do not perform well when the reference satellite images contain high cloud coverage, producing spurious results. To address this, the IdePix plug-in provided by the European Space Agency (2020) was used to identify satellite images with high cloud coverage and to generate a cloud mask for each image. Regions were then filtered such that each region covered in the data contains at least six relatively cloud-free satellite images. Subsequently, the aerial photographs were cropped to 512×512 pixels, and the satellite images were cropped to 128×128 pixels. These images were stored as NumPy .npy files and organised by their respective visible channel (red, green, and blue). Figure 5 shows some representative images from the dataset.
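A minimal sketch of how one paired scene might be loaded and cloud-masked is given below; the file names and array shapes are assumptions based on the description above, so consult the dataset documentation for the actual layout:

```python
import numpy as np

# Hypothetical file names; one scene from the red-channel training split.
aerial = np.load("red/train/scene_0000_hr.npy")    # (512, 512) aerial patch, 2.5 m/px
sentinel = np.load("red/train/scene_0000_lr.npy")  # (6, 128, 128) satellite revisits, 10 m/px
cloud = np.load("red/train/scene_0000_mask.npy")   # (6, 128, 128) binary cloud masks

# Mask out cloudy pixels before feeding the revisit stack to a multi-image model.
clear = np.where(cloud.astype(bool), np.nan, sentinel)

# Fraction of cloudy pixels per revisit, e.g. for choosing the clearest inputs.
per_revisit_cover = cloud.reshape(cloud.shape[0], -1).mean(axis=1)
print("cloud cover per revisit:", per_revisit_cover)
```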

Figure 5. Paired images from the WRAPS (aerial photography) and Sentinel-2 (satellite) data. Images shown here are from the red-channel dataset. Note that there are six Sentinel-2 images paired with each of the WRAPS images in the actual data, not just two as shown here. A, High-resolution aerial photography from the WRAPS dataset. B, Sentinel-2 satellite image of the same region. C, Sentinel-2 satellite image taken by a different fly-by.

The final dataset comprises 16,266 training scenes and 71 evaluation scenes across the three visible bands. The dataset also includes binary cloud masks generated by IdePix, representing the presence of clouds in the satellite images.

Considering related work, the ProbaV Super-Resolution Challenge dataset (Märtens et al. 2019) is another paired satellite image dataset intended for super-resolution, but there are some major differences between it and the dataset presented here. First, the ProbaV dataset uses images from the same satellite taken at two different resolutions (namely 100 m and 300 m), with the 100 m resolution image as the ground truth. The dataset presented here instead uses more costly aerial photography as the ground truth. This setting is more challenging and has more wide-reaching remote-sensing applications: ultimately, the goal is to produce imagery of comparable quality to high-resolution aerial photography without the need for costly flights.

The target resolution and target magnification rate (2.5 m and 4× respectively) are also substantially higher than for the ProbaV dataset (100 m and 3×), making this dataset more challenging for super-resolution, as there are more fine details to be resolved. Finally, it contains substantially more scenes than the ProbaV dataset, which comprises 1460 scenes.

In our experiments with super-resolution, the dataset was used to train DeepSum (Molini et al. 2019), a convolutional neural network for super-resolution, as well as ESRGAN, a generative-adversarial approach to super-resolution. Figure 6 illustrates some results: row 1 contains up-sampled satellite images obtained using bicubic upsampling, a common traditional method for image upsampling; row 2 is the output of ESRGAN; row 3 is the output of DeepSum; row 4 is produced by a fusion method combining the outputs of DeepSum and ESRGAN; and row 5 contains the high-resolution ground truth from the aerial photography. Table 3 summarises the peak signal-to-noise ratio (PSNR), Structural Similarity Index (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS) between the aerial photography and the super-resolution output on this dataset. Considering the results in the table and the example imagery provided, the hybrid approach appears to provide the best trade-off between classic metrics and perceptual quality. Additional details, an ablation study, and further results are available in Bull (2021b) and Bull et al. (2021). The code used to generate the results, as well as the code used to generate the image set, is available at https://github.com/danbull1/superresolution.
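For reference, the metrics in Table 3 can be computed for a single image pair along the following lines, using scikit-image for PSNR/SSIM and the lpips package for LPIPS; the exact evaluation protocol (e.g. scaling and channel handling) may differ from the authors', and the file names are hypothetical:

```python
import numpy as np
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def to_lpips_tensor(img):
    # lpips expects NCHW tensors scaled to [-1, 1]; `img` is HxW float in [0, 1].
    t = torch.from_numpy(img).float()[None, None] * 2 - 1
    return t.repeat(1, 3, 1, 1)  # replicate the single band across RGB channels

ground_truth = np.load("scene_0000_hr.npy")  # high-resolution aerial patch
prediction = np.load("scene_0000_sr.npy")    # super-resolved output, same shape

psnr = peak_signal_noise_ratio(ground_truth, prediction, data_range=1.0)
ssim = structural_similarity(ground_truth, prediction, data_range=1.0)
lpips_model = lpips.LPIPS(net="alex")
lpips_score = lpips_model(to_lpips_tensor(ground_truth),
                          to_lpips_tensor(prediction)).item()
print(f"PSNR={psnr:.1f}  SSIM={ssim:.2f}  LPIPS={lpips_score:.2f}")
```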

Figure 6. Result of super-resolution of the satellite images. Row 1 contains up-sampled satellite images obtained using bicubic upsampling. Row 2 is the output of super-resolution using ESRGAN. Row 3 is the output of super-resolution using DeepSum. Row 4 is generated using a fusion method combining the outputs of DeepSum and ESRGAN. Row 5 is the high-resolution ground truth from the aerial photography.

Table 3. The effect of various super-resolution methods on the accuracy and perceptual metrics.

Method              PSNR   SSIM   LPIPS
Bicubic Upsampling  19.1   0.30   0.34
DeepSum             20.1*  0.31*  0.35
ESRGAN              19.5   0.22   0.21
DeepSum+ESRGAN      19.7   0.25   0.20*

Note: Asterisks mark the best result for each metric (higher is better for PSNR and SSIM; lower is better for LPIPS).

Additional collated datasets

One of the challenges for environmental researchers is finding available datasets pertinent to their research questions. To address this, we have begun to curate a repository of New Zealand-specific environmental data provided by our data partners. At the time of writing, the TAIAO environmental dataset repository contains links to, or mirrors of, over 30 datasets of various types, including images, videos, textual data, and multivariate tabular data.

A partial list of the datasets available through the platform, including the original owners of the datasets, is given below; note that these datasets were not prepared by us, and ownership belongs to our data partners. In addition, some of these datasets may require credentialed access.

  • Aggregated wave and atmospheric forecast derived from GFS guidance & Hindcast and forecast wave and atmosphere model–MetOcean (MetOcean 2022)

  • Automatic weather station & Nowcast precipitation, temperature, and barometric pressure for Ardmore, Ashburton, Auckland, Birchwood, Flat Hills, and Haast–MetService (MetService 2022)

  • Lightning feed & archive of lightning records for Blitzen, GPATS Oceania, and TOA–MetOcean (MetOcean 2022)

  • Time series of rivers and rain gauges in the Coromandel Peninsula from 2010 to 2020 at 5 min resolution–Waikato Regional Council (Waikato Regional Council 2022)

  • NASA Global topography (elevation, slope, and aspect) for the New Zealand region (NASA JPL NASADEM 2021)

  • Himawari-8 500 m and 2 km half-disc archive (Bessho et al. 2016)

  • Full resolution archive of Himawari-8 satellite from 2020 (Bessho et al. 2016)

  • Multispectral satellite data from Landsat 8 OLI sensor (Wulder et al. 2019)

  • LILA Wellington Camera Trap (Anton et al. 2018)

  • Moana New Zealand Hydrodynamics Re-analysis v1.9–MetOcean (MetOcean 2022)

  • Archive of MetService's Doppler radar network–MetService & MetOcean (MetOcean 2022)

  • Regional council water quality and discharge–Hawke's Bay Regional Council (Hawke's Bay Regional Council 2022)

  • Tropical cyclone trajectory archive–MetOcean (MetOcean 2022)

  • PM10 and PM100 air quality particulates for Horizons and Hawke's Bay–Hawke's Bay Regional Council (Hawke's Bay Regional Council 2022)

  • Historical precipitation forecast of the Coromandel region from 2015–2018–Waikato Regional Council (National Centers for Environmental Prediction, National Weather Service, NOAA, US Department of Commerce 2015)

Conclusion

In this paper, we have detailed three image datasets provided by the TAIAO project on environmental data science in New Zealand. The first dataset contains images of small invasive mammals taken in challenging conditions. The second dataset contains aerial photography of the Waikato region in New Zealand in which stands of kahikatea (a native New Zealand tree) have been marked up using manual segmentation. The third consists of paired aerial photography and satellite imagery that is spatially and temporally consistent. We hope that these datasets help in the evaluation of machine learning algorithms to determine their applicability in real-world scenarios.

As the TAIAO project progresses, we plan to extend the platform to provide greater coverage of relevant available data. We also plan to increase the number of notebooks and include more complex multi-dataset examples showing the application of cross-domain machine learning.

Supplementary Material

Supplemental material

Note

1. These are the largest social units in Aotearoa Māori society.

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

  1. Anton V, Hartley S, Geldenhuis A, Wittmer HU. 2018. Monitoring the mammalian fauna of urban areas using remote cameras and citizen science. Journal of Urban Ecology. 4:Article ID juy002.
  2. Bessho K, Date K, Hayashi M, Ikeda A, Imai T, Inoue H, Kumagai Y, Miyakawa T, Murata H, Ohno T, et al. 2016. An introduction to Himawari-8/9 – Japan's new-generation geostationary meteorological satellites. Journal of the Meteorological Society of Japan. Ser. II. 94:151–183.
  3. Bull D. 2021a. Paired Waikato region aerial photography and Sentinel-2 images. doi:10.5281/zenodo.5296368.
  4. Bull D. 2021b. Super-resolution of satellite images [master's thesis]. University of Waikato.
  5. Bull D, Lim N, Frank E. 2021. Perceptual improvements for super-resolution of satellite imagery. In: 36th International Conference on Image and Vision Computing New Zealand (IVCNZ); Dec 9; Online. doi:10.1109/IVCNZ54163.2021.9653355.
  6. Caron M, Touvron H, Misra I, Jégou H, Mairal J, Bojanowski P, Joulin A. 2021. Emerging properties in self-supervised vision transformers. In: Proceedings of the International Conference on Computer Vision (ICCV); Oct 11; Montreal, QC.
  7. Cheng G, Han J, Lu X. 2017. Remote sensing image scene classification: benchmark and state of the art. Proceedings of the IEEE. 105:1865–1883.
  8. European Space Agency. 2020. SNAP – ESA's SentiNel Application Platform. [accessed 2020-10-02]. https://step.esa.int/main/download/snap-download/.
  9. Gibert K, Horsburgh JS, Athanasiadis IN, Holmes G. 2018. Environmental data science. Environmental Modelling & Software. 106:4–12. Special issue on environmental data science: applications to air quality and water cycle.
  10. Hawke's Bay Regional Council. 2022. Hawke's Bay Regional Council, Te Kaunihera ā Rohe o Matau-a-Māui. https://www.hbrc.govt.nz/.
  11. Jia Y. 2021. Kahikatea dataset with human-annotated true explanations for positive examples. doi:10.5281/zenodo.5059769.
  12. Jia Y, Frank E, Pfahringer B, Bifet A, Lim N. 2021. Studying and exploiting the relationship between model accuracy and explanation quality. In: European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases; Sep 13; Bilbao, Spain. doi:10.1007/978-3-030-86520-7_43.
  13. Kaiser LH, Saunders WSA. 2021. Vision Mātauranga research directions: opportunities for iwi and hapū management plans. Kōtuitui: New Zealand Journal of Social Sciences Online. 16(2):1–13.
  14. Lim N. 2021. Karioi predator camera trap. doi:10.5281/zenodo.5276593.
  15. Maggiori E, Tarabalka Y, Charpiat G, Alliez P. 2017. Can semantic labeling methods generalize to any city? The INRIA aerial image labeling benchmark. In: IEEE International Geoscience and Remote Sensing Symposium (IGARSS). IEEE.
  16. Märtens M, Izzo D, Krzic A, Cox D. 2019. Super-resolution of Proba-V images using convolutional neural networks. Astrodynamics. 3:387–402.
  17. MetOcean. 2022. MetOcean Solutions. https://metocean.co.nz/metocean-api.
  18. MetService. 2022. MetService, Te Ratonga Tirorangi. https://metservice.com.
  19. Molini AB, Valsesia D, Fracastoro G, Magli E. 2019. DeepSUM: deep neural network for super-resolution of unregistered multitemporal images. IEEE Transactions on Geoscience and Remote Sensing. 58:3644–3656.
  20. NASA JPL NASADEM. 2021. Merged DEM global 1 arc second V001. 2020, distributed by NASA EOSDIS Land Processes DAAC. doi:10.5067/MEaSUREs/NASADEM/NASADEM_HGT.001.
  21. National Centers for Environmental Prediction, National Weather Service, NOAA, US Department of Commerce. 2015. NCEP GFS 0.25 degree global forecast grids historical archive.
  22. O'Brien T. 2016. An annotated bibliography of camera trap literature: 1991–2013. Wildlife Conservation Society Working Paper.
  23. Pardo LE, Bombaci SP, Huebner S, Somers MJ, Fritz H, Downs C, Guthmann A, Hetem RS, Keith M, Mgqatsa N, et al. 2021. Snapshot Safari: a large-scale collaborative to monitor Africa's remarkable biodiversity. South African Journal of Science. 117(1/2):1.
  24. Te Mana Raraunga. 2018. Principles of Māori data sovereignty. https://www.temanararaunga.maori.nz/nga-rauemi.
  25. TAIAO. 2020. TAIAO, Time-Evolving Data Science/Artificial Intelligence for Advanced Open Environmental Science. https://taiao.ai.
  26. Toitū Te Whenua Land Information New Zealand. 2015. LCDB v4.1 (deprecated) – Land Cover Database version 4.1, mainland New Zealand.
  27. Toitū Te Whenua Land Information New Zealand. 2017. WRAPS aerial photography. [accessed 2020-10-02]. https://data.linz.govt.nz/layer/104600-waikato-03m-rural-aerial-photos-2016-2019/.
  28. Tsipras D, Santurkar S, Engstrom L, Ilyas A, Madry A. 2020. From ImageNet to image classification: contextualizing progress on benchmarks. In: Thirty-Eighth International Conference on Machine Learning. doi:10.48550/arXiv.2005.11295.
  29. Waikato Regional Council. 2022. Waikato Regional Council, Te Kaunihera ā Rohe o Waikato. https://www.waikatoregion.govt.nz/services/regional-services/river-levels-and-rainfall/rainfall-latest-reading/.
  30. Wulder MA, Loveland TR, Roy DP, Crawford CJ, Masek JG, Woodcock CE, Allen RG, Anderson MC, Belward AS, Cohen WB, et al. 2019. Current status of Landsat program, science, and applications. Remote Sensing of Environment. 225:127–147.
