This video is an example of a deep neural network, trained with DeepLabCut, being used to estimate the position of a mouse’s head in an environment in real time and to update a virtual scene presented on the monitors based on this estimated position. The first few seconds of the video display the online tracking of specific features (nose, head, and base of tail) while an animal (shown as a red dot) moves around a three-port box (as in
Soares et al., 2016). Subsequently, the inset shows the original video of the animal’s movements, on which the simulation is based. The remainder of the video shows how a green-field landscape (source:
http://scmapdb.com/wad:skybox-skies) outside the box would be rendered on three simulated displays within the box (one placed on each of the three oblique walls). These three displays simulate windows onto the world beyond the box. The position of the animal was updated by DeepLabCut at 40 frames/s, and the simulation was rendered at the same rate.
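The closed loop described above (estimate the head position for each camera frame, then render the simulated displays from that position at the same 40 frames/s) can be sketched as follows. This is a minimal illustration, not the authors' code: `estimate_head_position` and `render_scene` are hypothetical stand-ins for the DeepLabCut pose estimator and the scene renderer, and the frame-budget pacing is one assumed way to keep tracking and rendering locked to 40 frames/s.

```python
import time

FPS = 40
FRAME_BUDGET = 1.0 / FPS  # 25 ms per frame at 40 frames/s

def estimate_head_position(frame):
    # Hypothetical stand-in for the DeepLabCut pose estimate:
    # returns the (x, y) coordinates of the tracked head feature.
    return frame["head"]

def render_scene(position):
    # Hypothetical renderer: computes the view of the landscape on the
    # three simulated window displays from the animal's position.
    return {"viewpoint": position}

def run_closed_loop(frames):
    """Track the head in each frame and render the scene at a fixed rate."""
    rendered = []
    for frame in frames:
        start = time.perf_counter()
        pos = estimate_head_position(frame)
        rendered.append(render_scene(pos))
        # Sleep off any remaining per-frame budget so that tracking and
        # rendering both proceed at the same 40 frames/s rate.
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)
    return rendered
```

In this sketch the per-frame inference plus rendering must fit within the 25 ms budget; otherwise the effective update rate drops below the camera's frame rate.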