Abstract
1. Methods for measuring animal movement are critical for understanding numerous ecological and evolutionary processes. However, few methods are available for small organisms, and even fewer methods offer consistent individual-level resolution while remaining affordable, scalable and operable in the field.
2. We describe a low-cost animal movement tracking method with a user-friendly graphical interface, called GRAPHITE. Our automated software can quantify motions of insects by offline video analysis of inexpensive and lightweight human-readable tags attached to individual insects. The integrated graphical editor provides a full-featured environment for users to review the generated tracking data and make individual- or group-level edits.
3. GRAPHITE is a novel video analysis and graphical editing software (MATLAB v.9.0.0+) that identifies tags in image frames with a minimal false negative rate, links sequences of corresponding tags into “tracks” for each individual insect, infers the tag identifier, and provides a user-friendly graphical environment for editing tracking data. Users can either batch process raw video data using the full analysis pipeline or execute GRAPHITE modules independently for a tailored analysis.
4. We demonstrate the efficacy of the developed software with a specific application to the movement of honey bees at the entrance of hives. However, this system can be easily modified to track individually marked insects of 3 mm and larger. A notable advantage of this method is its ability to provide easy access to individual-level tracking data using human-readable tags.
Keywords: monitoring, tracking, tagging, video analysis, computer vision, bees
1. Introduction
Measurement of animal movement is key to understanding ecological and evolutionary processes, such as dispersal, population and metapopulation dynamics, disease transmission, and gene flow, among others (Turchin 1998). Many of these studies are highly relevant to conservation efforts (Fahrig 2007). Despite this central importance, we have a limited understanding of animal movement in most applications. For example, there are very few diseases for which we have a well-characterized empirical understanding of spatial transmission dynamics driven by host movement. This is true even for diseases important to public health (e.g. Riley 2007; Wang et al. 2013).
Existing methods for measuring animal movement fall into several categories: 1) direct human observation, with marked (Ricketts 2001) or unmarked (Gómez 2003) individuals; 2) trace-based methods with visible trail markers such as powdered dyes (Adler and Irwin 2006); 3) active and passive electronic tags, including radio tracking (Aebischer et al. 1993), harmonic radar (Osborne et al. 1999), GPS tags (Recio et al. 2011), and RFID (Kissling et al. 2014); 4) biomarkers, including stable isotopes (Rubenstein and Hobson 2004); and 5) image-based methods, including camera traps, video tracking (Dell et al. 2014), and fingerprinting methods (Kühl and Burghardt 2013; Pérez-Escudero et al. 2014). These methods balance trade-offs between screening time, cost, accuracy, reliability, tracking area, continuity between tracking stations, ability to distinguish individuals, number of simultaneously tracked animals, a priori information, and behavior-altering impediments.
However, few methods for measuring movement are available for small organisms such as insects, because these methods require small components that are prone to detection failures (false negatives). Within the existing repertoire, even fewer methods offering consistent individual-level resolution are affordable, scalable, and operable in the field (e.g. Campbell et al. 2008; Kimura et al. 2011; Mersch et al. 2013; Crall et al. 2015; Tu et al. 2016). Our method combines automated image capture and a graphical interface to quantify motion dynamics of insects at discrete locations by video analysis of inexpensive (≪ $0.01 per tag) and lightweight tags attached to individual insects. We have deployed consumer-grade digital cameras for video capture (e.g. Steen 2016) with simple weatherproofed enclosures, keeping the cost of the entire system low. The key developmental component is a video analysis and graphical editing software that identifies potential tags in video frames, assembles these discrete tags into “tracks” of the same insect moving through scenes, infers the tag identifier by digit recognition, and provides a user-friendly graphical environment for editing tracking data. The goal is to reduce the time a researcher must spend screening video data while also minimizing the false negative tag detection rate.
This system is designed for use in field settings, in contrast to other image-based methods that are typically used in laboratory studies with predefined tracking areas (e.g. Noldus et al. 2001). Compared with typical camera traps (Rowcliffe and Carbone 2008), this system distinguishes individuals via human-readable tags with unique numeric identifiers. Moreover, our new image-based method overcomes key limitations of existing RFID technology. Most notably, it enables easier access to location-based data by monitoring more colonies at a fraction of the cost of comparable RFID systems.
We show proof-of-concept of this method by tracking honey bees (Apis mellifera) at the entrance of beehives. However, this method can be generally deployed to track uniquely marked insects (~ 3 mm and larger) in situ. In particular, our method is most readily applied to a range of central place foragers with small nest, colony, or roost entrances relative to animal body size, allowing consistent tag detection within the camera’s visual field. It would also be straightforward to deploy our method in studies with bait stations and/or feeders, such as artificial flowers and pollinator feeding stations for honey bees (Gould 1975), and social stingless bees (Hubbell and Johnson 1978), among others. Although free-ranging animal movements could also be tracked, this would be more challenging than the present study. Limitations to this method are two-fold. First, organisms need to be tagged. This necessitates prior capture as well as knowledge about which individuals are expected to be seen at a camera location. Second, the system is not expected to be as effective with solitary animals. Given the low cost, convenience, individual-level resolution, scalability, extensibility, and user-friendly graphical analysis and editing software, our system has the potential to contribute to a spectrum of insect movement studies.
2. Experimental Setup
2.1. Tagging
Tags were designed to be durable in the outdoor environment, easily visible, lightweight, and low-cost. Each tag consisted of a unique three-digit number that was inkjet-printed (7.5 pt font) on white card stock (Neenah Exact Index, Item# 40508) with UV-resistant ink (PrintPayLess Black UV-Resistant Dye Ink). An inverted color scheme can be used for white insects. Tags were punched from the card stock and trimmed to a final size of 2.5 × 6 mm. The tags were then sprayed with a UV resistant coating (Krylon UV-Resistant Clear Acrylic Coating Spray, Item# 1305) and a waterproof coating (Scotchgard Outdoor Water Shield, Item# 5019–6).
We recorded bee movement for a total of ninety colonies at six apiaries managed by the University of Georgia. To ensure that bees were correctly tagged with their respective colony and queen, brood frames were moved to an enclosed environment one day prior to tagging. We tagged newly emerged worker bees, which have the advantages of not being able to fly and having reduced stinging ability. A unique tag was secured to the thorax of newly emerged bees using a waterproof glue (Titebond III, Item# 1411). The glue was allowed to become tacky and then applied to the bees using a wooden toothpick. The tag was then affixed and held briefly to set. All tags were oriented with the rightmost number towards the head of the bee (see supplemental). Ethical considerations must be given to the tagging of sensitive or threatened species and to the impact of tagging on the tracked animals.
2.2. Camera and Lighting
A camera housing was temporarily mounted to the entrance of each colony for monitoring tagged bees that exited and entered the colony (Fig. 1). Each camera housing is 10 × 14 × 15.5 cm with a lower landing that extends 8.5 cm from the front face. Bees can pass through a 100 × 8 mm opening at the front of the camera housing. The camera compartment is separated from the passage by a 3 mm thick OPTIX acrylic sheet. The camera-facing side of the acrylic sheet above the entrance was painted black except for an 18 mm viewport strip for video recording. The entrance-facing side of the acrylic sheet was treated with a lubricant (3-IN-ONE Dry Lubricant, Item# 3IO-DL-00) to inhibit bees from walking in an orientation that obscured the tags. Lighting was provided by a 1.5 W battery-powered LED (LouisaStore Portable Pocket LED Card Light, Item# BOOPIU26TO) located within the camera compartment. Modifications to the camera housing can be made to accommodate alternative experiments and organisms, provided the camera retains a clear en face view of the tags.
Videos were recorded on Canon PowerShot SD1100 IS model cameras (30 fps; 640 × 480 px; automatic white balance; macro mode). Video duration ranged from 45 minutes to 1 hour depending on the battery. The camera was mounted in the camera compartment on a wooden shelf 106 mm above the acrylic sheet. Frame-by-frame tracking was restricted to the viewport area; however, integration of data from multiple camera housings allowed low-resolution tracking of tagged insects across sites.
3. Modules and Editor
We have developed an analysis pipeline and graphical editor, called the GRAPHical Insect Tracking Environment (GRAPHITE), for end-to-end processing of video data. GRAPHITE is a modular set of functions with a user-friendly graphical interface written in MATLAB R2016a (http://www.mathworks.com/). The software consists of a video preprocessor, tag detector, digit reader, and track assembler as well as a processing interface and graphical editor (Fig. 2). Each module logs and accesses information within a central annotation MATLAB file. The user can choose to initiate the entire set of analysis routines as a single pipeline or access each module independently for a tailored analysis of a particular video (see supplemental). Batch processing is performed in parallel based on the number of cores available to MATLAB.
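As an illustration of batch use, a set of videos can be processed in parallel with MATLAB's parfor; the wrapper function name below is hypothetical and simply stands in for the pipeline entry point.

```matlab
srcDir = 'data';                              % folder of raw video files (assumed layout)
videos = dir(fullfile(srcDir, '*.avi'));
parfor i = 1:numel(videos)                    % distributed across available MATLAB workers
    vpath = fullfile(srcDir, videos(i).name);
    run_graphite_pipeline(vpath);             % hypothetical wrapper for the five modules
end
```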
Figure 2:

The analysis pipeline consists of five main modules: video preprocessing, tag detection, digit recognition, track assembly, and a graphical editor. (A) The video preprocessor accepts a raw video as input and generates a background image and, optionally, a cropped video file as output. The cropped video file includes only the active regions detected within the raw video, which reduces the search space for subsequent tag analysis. (B) Each frame of the cropped video is searched for tag regions, and the resulting individual tag images are extracted, saved, and logged in an annotation file. (C) Tag images are preprocessed and provided to the Tesseract OCR engine for digit recognition. The orientation with the highest average confidence is chosen as the correct orientation, and digit recognition results are appended to the annotation file. (D) Tag data from different frames are linked into tracks based on their spatial locations and sizes. (E) The graphical editor facilitates individual and global changes to tag data stored in the annotation file. The graphical editor also allows users to export the annotation data as either CSV or XLS files, in addition to a video file.
3.1. Video Preprocessing Module
The first pipeline module is a video preprocessor that prepares each video for tracking (Fig. 2A). The module allows users to crop videos in the temporal and spatial dimensions. The user can specify trimming times to remove frames from the beginning and/or end of each video. The spatial dimensions can also be cropped to remove areas that fall outside the viewport described in Section 2 and retain only the active region (Fig. 3). Enabling active region cropping reduces the search space in subsequent modules for faster processing.
Figure 3:

Active region cropped from a raw video file. (A) A map of the active areas is determined by the pixel-wise variance over all frames. (B) The active region map is converted to grayscale where higher intensity values represent a greater deviation from the background, indicating motion events. (C) Thresholding is used to separate active (white) from static (black) regions in the grayscale activity map. (D) The binarized image is cleaned by morphological dilation and filling holes in the active regions. (E) The bounding box is determined for the largest active region (outlined in red), and the associated coordinates are used to crop the raw video and background image.
The active region is determined by the pixel-wise variance across the duration of the trimmed video sequence, as pixels with more motion events have larger variance. The resulting variance matrix provides a map of activity that is segmented via Otsu thresholding into active and static regions. Otsu thresholding is a histogram-based method that generates a binary image by finding the optimal intensity threshold that separates bright foreground (active) regions from dark background (static) regions. The binary map is cleaned by a series of morphological operations to define the active region within the viewport. The bounding box coordinates of the active region are used to crop the video.
The video preprocessor also generates the static background image for the tag detection module. The grayscale background image is calculated as the mean pixel-wise intensity over all frames of the video sequence. This method leverages the a priori knowledge of a fixed field-of-view to produce background images regardless of moving object densities and motion speeds.
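A minimal sketch of these two computations is shown below; variable names and the cleanup parameters are ours, all frames are held in memory for brevity, and a streaming implementation would instead accumulate running sums.

```matlab
v = VideoReader('trimmed_video.avi');
frames = [];
while hasFrame(v)
    frames = cat(3, frames, im2double(rgb2gray(readFrame(v))));  % H x W x N grayscale stack
end
activity   = var(frames, 0, 3);              % pixel-wise variance: motion yields high values
background = mean(frames, 3);                % pixel-wise mean: the static background image
bw = imbinarize(mat2gray(activity));         % Otsu's method separates active from static pixels
bw = imfill(imdilate(bw, strel('disk', 5)), 'holes');   % morphological cleanup (illustrative radius)
stats = regionprops(bw, 'Area', 'BoundingBox');
[~, idx] = max([stats.Area]);                % keep the largest active region
activeBox = stats(idx).BoundingBox;          % coordinates used to crop the video and background
```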
3.2. Tag Detection Module
Each frame in the preprocessed video sequence is searched for tags (Fig. 2B). Moving objects fall into two categories: tagged and untagged objects. To differentiate these categories, we use color filtering to discard prominent non-tag colors (i.e. the color of the insect) with a user-determined RGB triplet. The frame and triplet are first converted to Hue-Saturation-Value colorspace, and the values of all pixels with a hue within ±15° of the specified color are set to zero. Finally, the filtered image is converted to grayscale for subsequent processing.
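A minimal sketch of this hue filter, assuming frame is a full-color video frame and refRGB is the user-specified RGB triplet:

```matlab
hsvFrame = rgb2hsv(im2double(frame));
refHSV   = rgb2hsv(reshape(im2double(refRGB), 1, 1, 3));
dHue = abs(hsvFrame(:,:,1) - refHSV(1));
dHue = min(dHue, 1 - dHue);                 % hue is circular, so wrap the distance
mask = dHue <= 15/360;                      % within +/- 15 degrees of the reference hue
V = hsvFrame(:,:,3);
V(mask) = 0;                                % zero the Value channel of matching pixels
hsvFrame(:,:,3) = V;
grayFrame = rgb2gray(hsv2rgb(hsvFrame));    % grayscale frame for subsequent processing
```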
Next, the background image generated during video preprocessing is subtracted from the color-filtered frame to isolate pixels with motion events. The background-subtracted frame is then passed to a Maximally Stable Extremal Region (MSER) feature detector to identify contiguous areas of stable pixel intensities. The detected MSERs correspond to appropriately colored moving objects ranging in size from 300 to 3000 pixels (Fig. 4A–E).
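The motion isolation and MSER step could look roughly as follows (Computer Vision Toolbox); we take an absolute difference here for simplicity, whereas the module subtracts the background image from the color-filtered frame:

```matlab
diffFrame = imabsdiff(grayFrame, background);           % highlight pixels with motion events
[regions, cc] = detectMSERFeatures(diffFrame, ...
    'RegionAreaRange', [300 3000]);                     % keep regions of tag-like size
mserStats = regionprops(cc, 'Area', 'BoundingBox');     % per-region measurements for filtering
```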
Figure 4:

Tag detection from a single video frame. (A) A full-color video frame and (B) a grayscale background image are passed to the tag detection module. (C) The yellow bee abdomens are removed by color filtering. The background image (B) is subtracted from the color-filtered image (C) to produce (D) an image highlighting moving objects. (E) The MSER feature detector identifies five contiguous areas with stable intensity (labeled in orange, yellow, green, cyan, and blue). (F) MSERs are filtered by solidity, aspect ratio, and eccentricity to remove non-tag regions and retain tag regions (orange). (G) Remaining tag regions are fitted with minimum-area bounding rectangles (MBRs). (H) The MBR coordinates are used to extract and rotate the tag region.
To this point, all steps focus on positive tag detection across video frames. We proceed with filtering out non-tag regions, or false positives. As the physical dimensions of the tags are known (Section 2.1), we use shape measurements to screen out non-tag regions. MSER-detected regions are conservatively filtered by solidity, aspect ratio, and eccentricity. Solidity is the ratio of the region area to its convex hull area; aspect ratio is the ratio of the minor-axis length to the major-axis length of the region’s fitted ellipse; and eccentricity is the ratio of the distance between the foci of the fitted ellipse to its major-axis length. When the MSER feature detector finds overlapping and duplicate regions, we retain only the smallest overlapping region by area (Fig. 4F).
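A sketch of this shape-based screen using regionprops on the MSER connected components; the threshold values are illustrative placeholders rather than the values used in GRAPHITE:

```matlab
props  = regionprops(cc, 'Solidity', 'Eccentricity', 'MajorAxisLength', 'MinorAxisLength');
aspect = [props.MinorAxisLength] ./ [props.MajorAxisLength];
keep   = [props.Solidity] > 0.8 & ...        % nearly convex regions
         aspect > 0.25 & aspect < 0.75 & ... % elongated, tag-like proportions (illustrative)
         [props.Eccentricity] < 0.98;        % exclude nearly degenerate, line-like ellipses
candidateIdx = find(keep);                   % indices of regions kept for further screening
```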
Although filtering by physical attributes removes most non-tag regions, additional steps are required to further reduce the number of false positives. In our solution, each potential tag region is fitted with a minimum-area bounding rectangle (MBR). The MBR coordinates are used to rotate and crop the region from the full-color video frame. Each cropped region is resized to 60 × 30 px and represented by a Histogram of Oriented Gradients (HOG) feature vector. A HOG feature vector is a series of one-dimensional histograms describing the edge orientations within each 4 × 4 px image patch. Because HOG features encapsulate the shape components found within the image, they can be used to classify cropped regions as “tag” or “non-tag”. The classification is performed by a two-class support vector machine (SVM) trained on HOG features from 4093 false tag images and 880 positive tag images. Regions classified as “tag” are then passed to the digit recognition module.
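A sketch of the HOG representation and SVM classification (Computer Vision and Statistics and Machine Learning Toolboxes); trainFeatures and trainLabels stand in for HOG features and labels computed from the labeled patch set described above:

```matlab
% Training (performed once, offline):
svmModel = fitcsvm(trainFeatures, trainLabels);            % two-class SVM on HOG features

% Screening a new candidate region:
patch = imresize(croppedRegion, [60 30]);                  % normalize candidate size
feat  = extractHOGFeatures(patch, 'CellSize', [4 4]);      % edge-orientation descriptor
isTag = predict(svmModel, feat) == 1;                      % assuming label 1 denotes "tag"
```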
3.3. Digit Recognition Module
Digit recognition from natural images has been an area of intensive research (Goodfellow et al. 2013; Zhu et al. 2016). This module uses the Tesseract optical character recognition (OCR) engine to identify digits in tag images (Fig. 2C) (Smith 2007). Each potential tag image is preprocessed to enhance the contrast of the digit characters. Tag image preprocessing begins with channel-wise wavelet denoising and a rolling-ball background subtraction (Sternberg 1983). Wavelet denoising uses a discrete stationary wavelet transform to remove noise in the image frequency domain without excessive edge blurring. The uneven white and black borders of tag images are removed by subtracting an estimated background from the denoised image. The rolling-ball background is generated by a morphological open operation on each color channel with a 5 px radius spherical structuring element. Each channel is then intensity-normalized and sharpened before conversion to grayscale.
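A minimal sketch of this cleanup, with wdenoise2 (Wavelet Toolbox, R2018a+) standing in for the stationary-wavelet denoising and a flat 5 px disk approximating the spherical structuring element:

```matlab
clean = zeros(size(tagImg));
for c = 1:3
    ch = im2double(tagImg(:,:,c));
    ch = wdenoise2(ch);                      % stand-in for channel-wise wavelet denoising
    bg = imopen(ch, strel('disk', 5));       % rolling-ball-style background estimate
    ch = mat2gray(ch - bg);                  % subtract background and normalize intensity
    clean(:,:,c) = imsharpen(ch);            % sharpen before grayscale conversion
end
grayTag = rgb2gray(clean);
```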
Because any remaining marks within border regions can cause incorrect digit recognition, we limit analysis to the digit-containing region. The outer product of the column sums and row sums of the tag image produces a map of the digit region. This map is binarized by Otsu thresholding, and the bounding box coordinates of the foreground digit region are recorded. If no digit region is found, the tag is marked as a false positive and excluded from subsequent analysis.
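One way to realize this step, assuming dark digits on a lighter tag so that the image is complemented before forming the outer product of the row and column sums:

```matlab
inv = imcomplement(grayTag);                 % digits become bright foreground
profileMap = sum(inv, 2) * sum(inv, 1);      % outer product of row sums and column sums
bw = imbinarize(mat2gray(profileMap));       % Otsu threshold on the normalized map
stats = regionprops(bw, 'Area', 'BoundingBox');
if isempty(stats)
    isFalsePositive = true;                  % no digit region found; discard this tag
else
    [~, idx] = max([stats.Area]);
    digitBox = stats(idx).BoundingBox;       % crop coordinates for the digit region
end
```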
Preprocessed tag images are next passed to the Tesseract OCR engine for digit prediction. The Tesseract OCR engine is trained with over 100 examples of each preprocessed digit. As tag images can be in two possible orientations (right-side up and upside down), digit predictions are made for both orientations. For each orientation, the three most confident digit predictions are retained, and the orientation with the higher average confidence is taken as the correct orientation.
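A sketch of the two-orientation reading using MATLAB's Tesseract-backed ocr function; GRAPHITE uses a Tesseract model trained on its own digit examples, so this stand-in only illustrates the orientation logic:

```matlab
digitCrop  = imcrop(grayTag, digitBox);
candidates = {digitCrop, imrotate(digitCrop, 180)};      % right-side up and upside down
bestConf   = -Inf;
for k = 1:2
    res  = ocr(candidates{k}, 'CharacterSet', '0123456789', 'TextLayout', 'Word');
    conf = mean(res.CharacterConfidences, 'omitnan');    % average per-character confidence
    if conf > bestConf
        bestConf  = conf;
        tagDigits = strtrim(res.Text);                   % predicted three-digit identifier
    end
end
```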
3.4. Track Assembly Module
Each video frame is analyzed independently up to this step. Linking frame-wise tag data into tracks is necessary to interpret bee activity (Fig. 2D). For this purpose, the motion path of each tagged bee is assembled by linking tag data across frames based on centroid (x-y) location and tag size (area). The x-y-area feature vectors for sequential tag images are compared with a nearest neighbor algorithm. Euclidean distances between feature vectors are used to match a tag in one frame to a single tag in an adjacent frame. The track assembly algorithm tolerates gaps of up to 0.5 seconds between matches to account for momentary occlusions. Matched tags are linked together into tracks with unique track identification numbers to represent tagged bee motion paths (Fig. 5).
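A simplified sketch of the frame-to-frame linking on [x, y, area] features; the distance threshold is illustrative, and the full algorithm additionally enforces one-to-one matches and the 0.5 s gap tolerance described above:

```matlab
prevFeat = [ [prevTags.x]', [prevTags.y]', [prevTags.area]' ];   % features from frame t-1
currFeat = [ [currTags.x]', [currTags.y]', [currTags.area]' ];   % features from frame t
[idx, dist] = knnsearch(prevFeat, currFeat);       % nearest previous tag for each current tag
maxDist = 50;                                      % illustrative matching threshold
for j = 1:size(currFeat, 1)
    if dist(j) <= maxDist
        currTags(j).trackID = prevTags(idx(j)).trackID;   % continue the existing track
    else
        currTags(j).trackID = nextTrackID;                % start a new track
        nextTrackID = nextTrackID + 1;
    end
end
```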
Figure 5:

The background image is overlaid with tracks of three tagged bees detected in a 50 minute video. Each color (yellow, green, and blue) represents the path of one tagged bee. Squares indicate the points of first observation, and circles denote the points of last observation. Gaps in the tracks indicate an occlusion of at least one second.
3.5. Graphical Editor
The video analysis pipeline was designed to favor Type I errors (false positives) so that no tagged bees are missed, while still greatly reducing manual screening time. Therefore, a full-featured graphical editor is provided to allow users to remove false positives and correct any errors in the automatically generated tag data (Fig. 2E). The editor is designed to provide users with easy access to critical tag data, including tag digits, track identifiers, and false positive status (see supplemental).
The editor presents users with two tabular windows (Fig. 6). The first window allows users to select one or more tracks. Once tracks are selected, the second window displays all tags included in those tracks. Selecting a tag will display tag-related video frames with a green bounding box around the tag of interest. All other detected tags within frames are bounded in yellow. Edits can be made for individual tags, tracks, or groups of tracks for efficient bulk edits.
Figure 6:

The graphical user interface for editing tag data automatically generated by the developed video analysis method.
After edits are made, users can export the annotations as either an Excel or CSV file. For an intuitive overview of the data, users can export a summary video that contains annotated video segments of each tag track (see supplemental). Each track is represented by an MBR in a unique color and with tag digits displayed.
4. Evaluation
GRAPHITE has a human-in-the-loop design: the video analysis pipeline preemptively minimizes the false negative detection rate, and the user then screens the automatically generated candidate tracks. Monitoring was performed for ninety colonies at six apiaries, resulting in 1339 video files with a cumulative duration of ~ 12000 hours. The false negative rate was determined by manually reviewing a random sample of 600 one-minute video segments: 362 segments were randomly sampled from the 181 videos containing detected tags, and 238 segments were randomly sampled from the remaining 1157 videos. On average, a one-minute segment was reviewed for ~ 20% of the videos in which no tag was detected. This review yielded a false negative rate of 0%.
GRAPHITE detected 1160145 potential tag regions in the 181 videos with detections. Potential tag regions were manually reviewed with the graphical editor, and 6766 tags were confirmed (representing 450 tracks from 229 bees), resulting in a false positive rate of 99.4%. Although the false positive rate was high, as expected, the pipeline reduced the manual screening time by > 1000× from ~ 12000 hours to ~ 11 hours without missing any tagged bees. In addition, false positives were mostly grouped into a small number of tracks that were quickly reviewed and removed in bulk.
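For reference, this rate follows directly from the counts above: (1160145 − 6766) / 1160145 ≈ 0.994, i.e. only about 0.6% of automatically detected regions were true tags.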
5. Conclusions and Future Directions
GRAPHITE offers a low-cost, end-to-end animal movement tracking environment with a user-friendly graphical interface. We demonstrate the efficacy of the developed software with a specific application to tracking tagged bees. The accessible and minimal hardware requirements, along with the highly automated and flexible processing modules, allow for many different experimental setups with various model organisms. This flexibility provides capabilities beyond those of video tracking software that has no means of identifying individuals traversing different tracking stations (Kimura et al. 2011; Tu et al. 2016).
A major advantage of this method is its ability to track individual insect movements in a low-cost field setting, as opposed to average movement rates that routine techniques such as powdered dye provide. Individual variation in movement can have large consequences for the ecology and evolution of species (Bolnick et al. 2003, 2011). For example, in infectious disease studies, certain individuals may be more likely to move and thereby have greater contact rates than other individuals. Highly mobile and connected individuals could thereby have major impacts on disease transmission, and in some cases act as superspreaders (Lloyd-Smith et al. 2005).
In future work, the GRAPHITE digit reading module can be upgraded to other learning engines that allow corrections made via the editor to be fed back into the model for improved accuracy of digit recognition. The SVM classifier used to remove non-tag regions during tag detection could also benefit from the same feedback mechanism.
Supplementary Material
Figure 1:

Camera housings were attached to apiaries as shown on the left. A diagram of the camera housing components (i.e. lower landing, viewport, and camera shelf) is shown on the right. The red arrows point to the lower landing.
Acknowledgements
Research reported in this publication was supported by the National Institute of General Medical Sciences of the National Institutes of Health under award number R01GM109501 and by the National Science Foundation (DGE-1444932, to BJR and TD). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or National Science Foundation.
Footnotes
Data Accessibility
GRAPHITE is available at https://github.com/brossetti/graphite.
References
- Adler LS and Irwin RE (2006). Comparison of pollen transfer dynamics by multiple floral visitors: experiments with pollen and fluorescent dye. Annals of Botany, 97(1):141–150.
- Aebischer NJ, Robertson PA, and Kenward RE (1993). Compositional analysis of habitat use from animal radio-tracking data. Ecology, 74(5):1313–1325.
- Bolnick DI, Amarasekare P, Araújo MS, Bürger R, Levine JM, Novak M, Rudolf VH, Schreiber SJ, Urban MC, and Vasseur DA (2011). Why intraspecific trait variation matters in community ecology. Trends in Ecology & Evolution, 26(4):183–192.
- Bolnick DI, Svanbäck R, Fordyce JA, Yang LH, Davis JM, Hulsey CD, and Forister ML (2003). The ecology of individuals: incidence and implications of individual specialization. The American Naturalist, 161(1):1–28.
- Campbell J, Mummert L, and Sukthankar R (2008). Video monitoring of honey bee colonies at the hive entrance. Visual Observation & Analysis of Animal & Insect Behavior, ICPR, 8:1–4.
- Crall JD, Gravish N, Mountcastle AM, and Combes SA (2015). BEEtag: a low-cost, image-based tracking system for the study of animal behavior and locomotion. PLoS ONE, 10(9):e0136487.
- Dell AI, Bender JA, Branson K, Couzin ID, de Polavieja GG, Noldus LP, Pérez-Escudero A, Perona P, Straw AD, Wikelski M, et al. (2014). Automated image-based tracking and its application in ecology. Trends in Ecology & Evolution, 29(7):417–428.
- Fahrig L (2007). Non-optimal animal movement in human-altered landscapes. Functional Ecology, 21(6):1003–1015.
- Gómez JM (2003). Spatial patterns in long-distance dispersal of Quercus ilex acorns by jays in a heterogeneous landscape. Ecography, 26(5):573–584.
- Goodfellow IJ, Bulatov Y, Ibarz J, Arnoud S, and Shet V (2013). Multi-digit number recognition from Street View imagery using deep convolutional neural networks. arXiv preprint arXiv:1312.6082.
- Gould JL (1975). Communication of distance information by honey bees. Journal of Comparative Physiology, 104(2):161–173.
- Hubbell SP and Johnson LK (1978). Comparative foraging behavior of six stingless bee species exploiting a standardized resource. Ecology, 59(6):1123–1136.
- Kimura T, Ohashi M, Okada R, and Ikeno H (2011). A new approach for the simultaneous tracking of multiple honeybees for analysis of hive behavior. Apidologie, 42(5):607.
- Kissling WD, Pattemore DE, and Hagen M (2014). Challenges and prospects in the telemetry of insects. Biological Reviews, 89(3):511–530.
- Kühl HS and Burghardt T (2013). Animal biometrics: quantifying and detecting phenotypic appearance. Trends in Ecology & Evolution, 28(7):432–441.
- Lloyd-Smith JO, Schreiber SJ, Kopp PE, and Getz WM (2005). Superspreading and the effect of individual variation on disease emergence. Nature, 438(7066):355–359.
- Mersch DP, Crespi A, and Keller L (2013). Tracking individuals shows spatial fidelity is a key regulator of ant social organization. Science, 340(6136):1090–1093.
- Noldus LP, Spink AJ, and Tegelenbosch RA (2001). EthoVision: a versatile video tracking system for automation of behavioral experiments. Behavior Research Methods, Instruments, & Computers, 33(3):398–414.
- Osborne J, Clark S, Morris R, Williams I, Riley J, Smith A, Reynolds D, and Edwards A (1999). A landscape-scale study of bumble bee foraging range and constancy, using harmonic radar. Journal of Applied Ecology, 36(4):519–533.
- Pérez-Escudero A, Vicente-Page J, Hinz RC, Arganda S, and De Polavieja GG (2014). idTracker: tracking individuals in a group by automatic identification of unmarked animals. Nature Methods, 11(7):743–748.
- Recio MR, Mathieu R, Denys P, Sirguey P, and Seddon PJ (2011). Lightweight GPS-tags, one giant leap for wildlife tracking? An assessment approach. PLoS ONE, 6(12):e28225.
- Ricketts TH (2001). The matrix matters: effective isolation in fragmented landscapes. The American Naturalist, 158(1):87–99.
- Riley S (2007). Large-scale spatial-transmission models of infectious disease. Science, 316(5829):1298–1301.
- Rowcliffe JM and Carbone C (2008). Surveys using camera traps: are we looking to a brighter future? Animal Conservation, 11(3):185–186.
- Rubenstein DR and Hobson KA (2004). From birds to butterflies: animal movement patterns and stable isotopes. Trends in Ecology & Evolution, 19(5):256–263.
- Smith R (2007). An overview of the Tesseract OCR engine. In Proceedings of the International Conference on Document Analysis and Recognition (ICDAR), pages 629–633. IEEE.
- Steen R (2016). Diel activity, frequency and visit duration of pollinators in focal plants: in situ automatic camera monitoring and data processing. Methods in Ecology and Evolution.
- Sternberg SR (1983). Biomedical image processing. Computer, 16(1):22–34.
- Tu GJ, Hansen MK, Kryger P, and Ahrendt P (2016). Automatic behaviour analysis system for honeybees using computer vision. Computers and Electronics in Agriculture, 122:10–18.
- Turchin P (1998). Quantitative Analysis of Movement: Measuring and Modeling Population Redistribution in Animals and Plants, volume 1. Sinauer Associates, Sunderland.
- Wang L, Wang Z, Zhang Y, and Li X (2013). How human location-specific contact patterns impact spatial transmission between populations? Scientific Reports, 3.
- Zhu Y, Yao C, and Bai X (2016). Scene text detection and recognition: recent advances and future trends. Frontiers of Computer Science, 10(1):19–36.