Author manuscript; available in PMC: 2022 Apr 8.
Published in final edited form as: IEEE Trans Affect Comput. 2018 Sep 3;12(1):215–226. doi: 10.1109/taffc.2018.2868196

Fig. 1:

A screenshot of the recorded video from the tablet's front-facing camera, with an example of automatic facial landmarking, is shown in the first row. In this screenshot, the child (1) is sitting on the caregiver's (2) lap while the practitioner (3) stands behind them. All six automatically detected landmarks (outlined in black) are used for face pre-processing; the lowest nose landmark and the two outer eye landmarks are used to track head movement. Screenshots of frames from the movie stimuli are shown in the remaining rows: Bubbles, Bunny, and Puppets, respectively.