Mol Pain. 2020 Sep 21;16:1744806920958596. doi: 10.1177/1744806920958596

Figure 1.

Model components. Labels (a) and associated images (b) were used as input to train the DeepLabCut tracker. (a) Video frames were manually annotated for training. Each mouse was labeled with 12 points (x, y pixel coordinates): mouth, nose, right front paw, left front paw, three points on each hind paw (outer, inner, and base), abdomen, and tail base; the inner enclosure walls were also marked (at their ends and at the cross point). (b) The input to the model was consecutive video frames. Each enclosure contained four arenas; in this image, the mouse in arena 4 is still recovering from the anesthetic, while the other arenas show active mice. (c) The output of the keypoint tracker is 12 points per mouse, shown here with “skeleton” connections and “body/head” circles to orient the reader. (d) Relative location measures (66 paired distances and 15 angles; only one of each is shown in the figure) were calculated from the body parts for every video frame. The putative frame of interest is highlighted, and the measures for the preceding and following frames are shown (24 consecutive frames). The statistical inputs were calculated over windows of three sizes (6, 11, and 21 frames), and these per-frame features were used as the input to the classifier model.
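
As a concrete illustration of the feature construction described in panel (d), the sketch below computes the 66 pairwise keypoint distances per frame and summarizes them over the three window sizes. This is a minimal reconstruction, not the authors' released code: the array layout, the use of NumPy, and the choice of mean and standard deviation as the per-window statistics are assumptions.

import numpy as np
from itertools import combinations

def pairwise_distances(keypoints):
    """keypoints: (T, 12, 2) array of (x, y) coordinates per frame.
    Returns a (T, 66) array: one distance per keypoint pair per frame."""
    pairs = list(combinations(range(keypoints.shape[1]), 2))  # C(12, 2) = 66 pairs
    return np.stack(
        [np.linalg.norm(keypoints[:, i] - keypoints[:, j], axis=1) for i, j in pairs],
        axis=1,
    )

def windowed_stats(features, window):
    """Mean and std of each feature over a centered sliding window,
    edge-padded so the output keeps one row per frame."""
    T = features.shape[0]
    half = window // 2
    padded = np.pad(features, ((half, window - 1 - half), (0, 0)), mode="edge")
    windows = np.stack([padded[t:t + window] for t in range(T)], axis=0)  # (T, window, F)
    return np.concatenate([windows.mean(axis=1), windows.std(axis=1)], axis=1)

# Example: 24 consecutive frames of 12 tracked points, as in the figure.
frames = np.random.default_rng(0).uniform(0, 640, size=(24, 12, 2))
dists = pairwise_distances(frames)  # (24, 66)
feats = np.concatenate(
    [windowed_stats(dists, w) for w in (6, 11, 21)], axis=1
)  # (24, 396): per-frame classifier input

The same windowing step would apply unchanged to the 15 angle features; they are omitted here only to keep the sketch short.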