
Table 1.

Performance comparison with existing methods. Shown is the accuracy of various annotation methods on both datasets.

Model                           Accuracy (%)
Home-cage
  Human                         71.6
  CleverSys commercial system7  61.0
  Jhuang et al.7                78.3
  Le and Murari23               73.5
  Jiang et al.24                81.5
  DeepAction                    79.5
CRIM13
  Human                         69.7
  Burgos-Artizzu et al.6        62.6
  Eyjolfsdottir et al.25        37.2
  Zhang et al.26                61.9
  Meng et al.27                 68.6
  DeepAction                    73.9

“Human” denotes the agreement between the two human annotator groups (see “Methods: Inter-observer reliability” section). DeepAction accuracy on the home-cage and CRIM13 datasets is the mean accuracy over 12-fold and 2-fold cross-validation, respectively, providing a comparable reference to Jhuang et al.7 and Burgos-Artizzu et al.6 (see “Methods: Comparison with existing methods” section).
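For readers who want the averaging made explicit, the sketch below shows how a mean cross-validation accuracy of this kind is typically computed. It is illustrative only: the function name and the per-fold values are hypothetical placeholders, not the paper's actual fold results.

```python
# Illustrative sketch (not from the paper): averaging per-fold accuracy,
# as in k-fold cross-validation. The fold accuracies below are placeholders.

def mean_cv_accuracy(fold_accuracies):
    """Return the mean accuracy (%) across cross-validation folds."""
    return sum(fold_accuracies) / len(fold_accuracies)

# Hypothetical per-fold accuracies: 12 folds (home-cage), 2 folds (CRIM13).
home_cage_folds = [79.1, 80.2, 78.8, 79.9, 79.3, 80.0,
                   78.6, 79.7, 79.5, 80.1, 79.0, 79.8]
crim13_folds = [74.1, 73.7]

print(f"Home-cage mean accuracy: {mean_cv_accuracy(home_cage_folds):.1f}%")
print(f"CRIM13 mean accuracy:    {mean_cv_accuracy(crim13_folds):.1f}%")
```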