eLife. 2021 Feb 26;10:e64000. doi: 10.7554/eLife.64000

Table 2. Results of the human validation for a subset of videos.

Validation was performed by going through all problematic situations (e.g. individuals lost) and correcting mistakes manually, producing a fully corrected dataset for the given videos. This dataset may still contain missing frames for some individuals if they could not be detected in certain frames (as indicated by 'Of that interpolated'). This was usually a very low percentage of all frames, except for Video 9, where individuals tended to rest on top of each other for extended periods of time and were thus not tracked. This baseline dataset was compared to all other results obtained using the automatic visual identification by TRex (N=5) and idtracker.ai (N=3) to estimate correctness. We were not able to track Videos 9 and 10 with idtracker.ai, which is why correctness values for them are not available.
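The comparison described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the actual evaluation code from the paper: the array layout, the NaN convention for undetected frames, and the distance tolerance `tol` are all assumptions made for the example.

```python
import numpy as np

def percent_correct(baseline, tracked, tol=1.0):
    """Percentage of comparable frames where a tracked position
    matches the manually corrected baseline.

    baseline, tracked: arrays of shape (frames, individuals, 2),
    with NaN marking frames where an individual was not detected
    (an assumed convention for this sketch).
    tol: hypothetical distance threshold below which a position
    counts as correct.
    """
    dist = np.linalg.norm(tracked - baseline, axis=-1)
    valid = ~np.isnan(dist)            # skip frames missing in either dataset
    correct = dist[valid] <= tol
    return 100.0 * correct.mean()

def summarize(runs, baseline):
    """Mean and standard deviation of % correct over repeated
    tracking runs (e.g. N=5 for TRex in the table)."""
    scores = [percent_correct(baseline, r) for r in runs]
    return np.mean(scores), np.std(scores)
```

Averaging over independent runs in this way yields the 'mean ± std' values reported in the table below.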

Table 2—source data 1. A table of positions for each individual of each manually approved and corrected trial.
| Video | # ind. | Reviewed (%) | Of that interpolated (%) | % correct, TRex | % correct, idtracker.ai |
|-------|--------|--------------|--------------------------|-----------------|-------------------------|
| 7     | 100    | 100.0        | 0.23                     | 99.07 ± 0.013   | 98.95 ± 0.146           |
| 8     | 59     | 100.0        | 0.15                     | 99.68 ± 0.533   | 99.94 ± 0.0             |
| 9     | 15     | 22.2         | 8.44                     | 95.12 ± 6.077   | N/A                     |
| 10    | 10     | 100.0        | 1.21                     | 99.7 ± 0.088    | N/A                     |
| 13    | 10     | 100.0        | 0.27                     | 99.98 ± 0.0     | 99.96 ± 0.0             |
| 12    | 10     | 100.0        | 0.59                     | 99.94 ± 0.006   | 99.63 ± 0.0             |
| 11    | 10     | 100.0        | 0.5                      | 99.89 ± 0.009   | 99.34 ± 0.002           |