Abstract
In this article, we review eye-tracking studies with dogs (Canis familiaris) with a threefold goal: we highlight the achievements in the field of canine perception and cognition using eye tracking, discuss the challenges that arise in the application of a technology that was developed in human psychophysics, and propose new avenues in dog eye-tracking research. For the first goal, we present studies that investigated dogs’ perception of humans, mainly faces, but also hands, gaze, emotions, communicative signals, goal-directed movements, and social interactions, as well as the perception of animations representing possible and impossible physical processes and animacy cues. We then discuss the present challenges of eye tracking with dogs, such as doubtful picture-object equivalence, extensive training, small sample sizes, difficult calibration, and artificial stimuli and settings. We suggest possible improvements and solutions for these problems in order to achieve better stimulus and data quality. Finally, we propose the use of dynamic stimuli, pupillometry, arrival time analyses, mobile eye tracking, and combinations with behavioral and neuroimaging methods to further advance canine research and open up new scientific fields in this highly dynamic branch of comparative cognition.
Keywords: eye tracking, dog, gaze, face perception, pupillometry
Introduction
In the rapidly growing field of canine (mainly dogs and wolves) cognition (Aria et al., 2021), researchers have investigated the visual perception of dogs for two main reasons: to understand how dogs perceive objects in their environment and how they interpret dynamic changes and events related to these objects. Concerning the first goal, researchers have developed a strong focus on the dogs’ perception of humans (Huber, 2016), mainly because dogs show impressive abilities for interacting and communicating with us (e.g., Bensky et al., 2013; Kaminski & Marshall-Pescini, 2014; Miklósi, 2015). From an evolutionary point of view, such understanding of humans is especially interesting because the decoding of social signals from heterospecifics, or across the species boundary, is challenging. The relative contribution of the two major sources of information—the phylogenetic (i.e., during domestication) and the ontogenetic (i.e., during a pet dog’s life in the human environment)—is one of the main questions in canine science. Indisputable, however, is the fact that the success of dogs within human societies, including their adoption of the numerous roles humans give to them, likely depends on an interaction of nature and nurture.
Studying how dogs perceive humans and use this information to solve their everyday problems is important for understanding why they fit so well into the human environment (Huber, 2016). Head movements are performed to direct attention to the objects of interest in a scene and therefore are behavioral proxies of ongoing cognitive processing (Henderson, 2003). The assessment of the dogs’ head orientations can be used to investigate a dog’s perception of human pointing gestures (Kaminski & Nitzschner, 2013; ManyDogs et al., 2021), gaze following (Wallis et al., 2015), referential communication (Merola et al., 2012), and perspective taking (Catala et al., 2017; Maginnity & Grace, 2014). However, the dogs’ perception of the human face alone requires the collection of data on a much finer scale. Although operant procedures have been used to investigate whether dogs can discriminate between the owner’s face and the face of another familiar (Huber et al., 2013) or unfamiliar person (Mongillo et al., 2010), between happy and angry faces (Albuquerque et al., 2016; Müller et al., 2015; Nagasawa et al., 2011), or between human and dog faces (Racca et al., 2010), it is often not clear which (facial) features dogs use to accomplish these tasks.
Therefore, in the early 2010s, dog researchers began to make use of a nonintrusive technology that had been developed in human psychology and psychophysics: eye tracking. Compared with traditional behavior coding, eye tracking allows for measuring overt visual attention at a much finer spatial and temporal resolution (for recent evidence comparing the performance of behavior coding from videos and eye tracking in dog research, see Pelgrim et al., 2022). Indeed, whereas head movements can be scored from videos recorded with normal cameras, the inferences that can be drawn about the dogs’ focus of attention are limited to a macro scale (such as left–right), which might be suitable for preferential looking paradigms but not for more detailed questions. With eye tracking, in contrast, researchers can estimate the location of the dogs’ central focus of attention in a scene, with an accuracy spanning from less than 1° of visual angle (for stationary eye trackers; Park et al., 2022) to 5.4° of visual angle (for mobile eye trackers outdoors; Pelgrim & Buchsbaum, 2022).
The temporal resolution is usually also greater than that of conventional video cameras, given that eye trackers can have a sampling frequency of up to 1000 Hz. For these reasons, eye tracking allows researchers to investigate novel dependent variables, such as gaze arrival times into an area of interest (AOI), that would simply be inaccessible without this methodology.
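To give a concrete sense of what these accuracy figures mean in practice, the angular error can be converted into a distance on the stimulus plane with basic trigonometry. The following sketch is for illustration only; the viewing distances are hypothetical and not taken from any of the reviewed studies.

```python
import math

def onscreen_error_cm(accuracy_deg: float, viewing_distance_cm: float) -> float:
    """Convert an angular accuracy (degrees of visual angle) into the
    corresponding linear error on the stimulus plane: e = d * tan(theta)."""
    return viewing_distance_cm * math.tan(math.radians(accuracy_deg))

# Hypothetical viewing distances, for illustration only.
print(round(onscreen_error_cm(1.0, 60), 2))   # ~1.05 cm at 60 cm (stationary)
print(round(onscreen_error_cm(5.4, 100), 2))  # ~9.45 cm at 100 cm (mobile)
```

At typical screen distances, then, a 1° error corresponds to roughly a centimeter on screen, whereas 5.4° in a mobile outdoor setting can amount to almost a decimeter, which constrains how small the AOIs of an analysis can reasonably be.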
Eye Tracking: Background and Terminology
Eye tracking is a method widely used in human psychology to investigate various perceptual-cognitive phenomena by tracking gaze coordinates and pupil size. Fixations, saccades, and blinks are events determined by an event-parsing algorithm, and they depend on information derived from the basic measurements (gaze coordinates and pupil size). The method was developed more than a century ago to study the relation between (a) eye movements and our spatial judgments and (b) various geometrical optical illusions (Delabarre, 1898). Later, the method was firmly established as an accurate way of investigating how humans look at pictures (Buswell, 1935) and then applied in many different research fields such as perception, attention, memory, reading, psychopathology, ophthalmology, neuroscience, human−computer interaction, marketing, consumer behavior, and optometry (see Duchowski, 2007; Holmqvist & Andersson, 2017, for overviews).
Eye movements can be monitored in different ways, using (a) surface electrodes, (b) infrared corneal reflections, (c) video-based pupil monitoring, (d) infrared Purkinje image tracking, and (e) search coils attached like contact lenses to the surface of the eyes (Holmqvist, Örbom, Hooge, et al., 2022). The most commonly used method is video-based pupil−corneal reflection (P−CR) eye tracking, which estimates the gaze direction as a function of the relative positions of two landmarks in the eye—the center of the pupil (P) in the camera image and the center of the reflection on the cornea (CR) from infrared illuminators—by subtracting the CR coordinate from the P coordinate in the pixel coordinate system of the video image (Holmqvist, Örbom, Hooge, et al., 2022). This subtraction serves to account for small head movements that can still arise when the participant’s head is restrained (we discuss restraint-free methods in the section The Use of Mobile Eye Trackers).
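To make the P−CR principle concrete, the following minimal sketch shows the subtraction step in isolation. The coordinate values are hypothetical, and real systems map the resulting vector to gaze coordinates through a calibrated (often polynomial) function rather than using it directly.

```python
from typing import Tuple

def pcr_vector(pupil_px: Tuple[float, float],
               cr_px: Tuple[float, float]) -> Tuple[float, float]:
    """Pupil-minus-corneal-reflection vector in camera pixel coordinates.
    A small head translation shifts P and CR together and thus cancels out."""
    return (pupil_px[0] - cr_px[0], pupil_px[1] - cr_px[1])

# Same eye rotation, but the head shifted by (5, 3) px between frames:
print(pcr_vector((312.0, 240.5), (305.5, 238.0)))  # (6.5, 2.5)
print(pcr_vector((317.0, 243.5), (310.5, 241.0)))  # (6.5, 2.5) -- unchanged
```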
Buswell (1935) concluded from his observations that an important relationship exists between eye movements and visual attention. Indeed, eye movements are an overt behavioral manifestation of the allocation of attention in a scene, and therefore they serve as a window into the operation of the attentional system. Moreover, they provide an unobtrusive, sensitive, real-time behavioral index of ongoing visual and cognitive processing (Henderson, 2003). The reason for this relationship is that in humans, but also in other species with developed visual systems, high-quality visual information is acquired only from a limited spatial region on the retina (in primates the fovea)—the visual quality falls off rapidly from the center of gaze into a low-resolution visual periphery—and only during periods of relative gaze stability (fixations).
If we perceive natural scenes with many objects of interest, we need to move the center of gaze, resulting in gaze shifts, to acquire information about these objects. This process of directing fixation through a scene in real time is called gaze control, a central element in the service of ongoing perceptual, cognitive, and behavioral activity. Visual perception is an active process, and gaze is controlled by moving not only the eyes but also the head and, in some insects, the whole body to make high-quality visual information available when needed (Land, 1999). However, when we scan a picture or scene, our eyes do not wander smoothly but jump from point to point (Yarbus, 1967). Indeed, the majority of vertebrates show a pattern of stable fixations and rapid eye movements; the latter are called saccades. Humans move their eyes about three times each second to reorient the fovea through the scene. The fixations, periods of nearly stationary viewing, last for about 300 ms, or longer if our attention is caught. Park et al. (2020) found that dog saccades follow the systematic relationships between saccade metrics previously shown in humans (e.g., an increase of peak velocity and duration with increasing amplitude); however, dogs’ saccades seem to be slower, and their fixations longer, than those of humans.
For humans, fixations are defined as periods in which the gaze coordinates remain relatively stable, which can span from some tens of milliseconds to several seconds (Holmqvist et al., 2011). In the dog literature, there is no clear consensus on how to quantify a fixation (Table S1, the Operationalization of Fixation column, discussed in the Calibration and Data Quality section), because of the species-specific eye movements and the scarcity of research on how they compare with human eye movements (with the notable exception of Park et al., 2020, 2022). Because most information is acquired during fixations, the most important eye-tracking measures are related to fixations. Often, visual scenes are complex and contain many parts and stimuli, and subjects change the focus of their attention while processing the scene. AOIs are analysis tools employed when the researcher’s interest is in what parts of a scene or stimulus attract gaze most effectively, and in what order (Buswell, 1935). AOI measures such as the absolute or relative time spent (sum of fixations) in an AOI, the number of transitions between AOIs, or the number of revisits may be used for such questions. Such looking patterns reveal the work of two mutually affecting processes: The visual-cognitive system directs the gaze toward important and informative objects and, vice versa, gaze direction affects several cognitive processes (Henderson, 2003). In general, eye fixation patterns may be regarded as an objective method for inferring the ongoing cognitive and emotional processing of visual information (Loftus, 1972), such as visual-spatial attention and semantic information processing (Henderson, 2003; Kano & Tomonaga, 2009).
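As an illustration of how such AOI measures can be derived from parsed fixations, here is a minimal sketch in Python. The fixation format, AOI names, and rectangle representation are our own simplifying assumptions, not the conventions of any particular eye-tracking software.

```python
from typing import Dict, List, Tuple

Fixation = Tuple[float, float, float]      # (x, y, duration_ms)
Rect = Tuple[float, float, float, float]   # (x_min, y_min, x_max, y_max)

def aoi_of(fix: Fixation, aois: Dict[str, Rect]) -> str:
    """Name of the AOI containing the fixation, or 'outside'."""
    x, y, _ = fix
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "outside"

def aoi_measures(fixations: List[Fixation], aois: Dict[str, Rect]):
    """Dwell time (sum of fixation durations) per AOI, number of
    transitions between AOIs, and revisits (re-entries) per AOI."""
    dwell: Dict[str, float] = {}
    visits: Dict[str, int] = {}
    transitions = 0
    previous = None
    for fix in fixations:
        name = aoi_of(fix, aois)
        dwell[name] = dwell.get(name, 0.0) + fix[2]
        if name != previous:
            visits[name] = visits.get(name, 0) + 1
            if previous is not None:
                transitions += 1
        previous = name
    revisits = {n: max(v - 1, 0) for n, v in visits.items()}
    return dwell, transitions, revisits

# Hypothetical face-stimulus AOIs and fixations, for illustration only.
aois = {"eyes": (100, 80, 300, 140), "mouth": (140, 220, 260, 280)}
fixations = [(150, 100, 320), (200, 250, 280), (180, 110, 450)]
print(aoi_measures(fixations, aois))
# ({'eyes': 770.0, 'mouth': 280.0}, 2, {'eyes': 1, 'mouth': 0})
```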
Applying the Eye-Tracking Technology in the Dog Lab
By and large, the advantages of eye tracking also hold when the method is applied to investigate visual processes in dogs. However, as we discuss in this article, these benefits do not come without costs. There is not a one-to-one correspondence between the visual systems of humans and dogs in terms of the anatomy and physiology of the eye, nor can dogs be tested in the same way as humans. For instance, dogs cannot be verbally instructed what to do and therefore need training (to obtain high-quality data); screen-based stimuli are also less natural for them than for humans, who are raised in an environment full of pictures and televisions. These drawbacks need to be taken into account when using the eye-tracking method in the dog laboratory. Still, since its introduction, 24 studies have been published (see Table 1 and Table S1 at https://osf.io/xbqkt), altogether proving the usability of the method and producing a decent number of results about dogs’ perception and cognition that otherwise would not have been achieved.
Table 1. Eye-Tracking Studies With Dogs Published Until October 2022

| Reference | Exp. No. | N | No. Excluded Dogs | No. Targets | Targets | Restriction | Stimuli | No. Trials | No. Sessions | Tracking | Pretraining Head | Pretraining Calibration |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| *Stationary ET* | | | | | | | | | | | | |
| Correia-Caeiro et al. (2021) | 1 | 92 | 8 | 3 | Real-life treats | No (remote mode) | Projected, dynamic | 20 | 1 | Monocular | No | No |
| Correia-Caeiro et al. (2020) | 1 | 27 | 1 | 3 | Real-life treats | No (remote mode) | Projected, dynamic | 20 | 1 | Not reported | No | No |
| Törnqvist et al. (2020) | 1 | 24 | 0 | 5 | Real-life treats | Chin rest | Screen, static | 72 | 2 | Binocular | Yes | No |
| Törnqvist et al. (2015) | 1 | 40 | 6 | 5 | Real-life treats | Chin rest | Screen, static | 60 | 2 | Binocular | Yes | No |
| Somppi et al. (2017) | 1 | 43 | 3 | 5 | Not reported | Chin rest | Screen, static | 2 | 2 | Binocular | Yes | No |
| Somppi et al. (2016) | 1 | 31 | 2 | 5 | Not reported | Chin rest | Screen, static | 60 | 2 | Binocular | Yes | No |
| Somppi et al. (2014) | 1 | 31 | 2 | 5 | Real-life treats | Chin rest | Screen, static | 6 | 2 | Binocular | Yes | No |
| Somppi et al. (2012) | 1 | 6 | 0 | 5 | Real-life treats | Chin rest | Screen, static | 3 (× 6 pictures) | 4–8 | Binocular | Yes | No |
| Barber et al. (2016) | 1 | 25 | 2 | 3 | Projected, static | Chin rest | Projected, static | 16 | 5 | Monocular | Yes | No^a |
| Barber et al. (2017) | 1 | 12 | 0 | 3 | Projected, static | Chin rest | Projected, static | 16 | 5 | Eyes not tracked | Yes | No^a |
| Gergely et al. (2019) | 1 | 27 | Not reported | 5 | Screen, dynamic | No | Screen, static | 1 | 1 | Binocular | No | No |
| Karl et al. (2020) | 1 | 15 | 0 | 3 | Screen, static | Chin rest | Screen, dynamic | 4 | 1 | Monocular | Yes | Yes |
| Karl et al. (2020) | 2 | 15 | 0 | 3 | Screen, static | Chin rest | Screen, dynamic | 4 | 1 | Monocular | Yes | Yes |
| Kis et al. (2017) | 1 | 31 | 27 | 5 | Screen, dynamic | No, but body restrained by owner | Screen, static | 6 | 1 | Binocular | No | No |
| Kis et al. (2017) | 2 | 46 | 79 | 5 | Screen, dynamic | | Screen, static | 2 | 1 | Binocular | No | No |
| Ogura et al. (2020) | 1 | 3 | 4 | 5 | Real-life treats | Chin rest | Screen, static | 6 | 6 | Monocular | Yes | No |
| Park et al. (2020) | 1 | 9 | 24 | 3 | Screen, static | Chin rest | Projected, static | Not reported | 2 | Monocular | Yes | Yes |
| Park et al. (2022) | 1 | 14 | 18 | Not reported | Screen, static | Chin rest | Screen, static | 24 | Not reported | Monocular | Yes | Yes |
| Téglás et al. (2012) | 1 | 14 | 32 | 5 | Screen, dynamic | No | Screen, dynamic | 12 | 2 | Not reported | No | No |
| Téglás et al. (2012) | 2 | 13 | 32 | 5 | Screen, dynamic | No | Screen, dynamic | 6 | 1 | Not reported | No | No |
| Völter et al. (2020) | 1 | 11 | 3 | 3–5 | Screen, dynamic | Chin rest | Screen, dynamic | 1 | 1 | Monocular | Yes | Yes |
| Völter et al. (2020) | 2 | 9 | 3 | 3–5 | Screen, dynamic | Chin rest | Screen, dynamic | 1 | 1 | Monocular | Yes | Yes |
| Völter & Huber (2021a) | 1 | 14 | 0 | 5 | Screen, dynamic | Chin rest | Screen, dynamic | 2 | 2 | Monocular | Yes | Yes |
| Völter & Huber (2021b) | 1 | 15 | 0 | 5 | Screen, dynamic | Chin rest | Screen, dynamic | 2 | 2 | Monocular | Yes | Yes |
| Völter & Huber (2021b) | 2 | 14 | 0 | 5 | Screen, dynamic | Chin rest | Screen, dynamic | 2 | 2 | Monocular | Yes | Yes |
| Völter & Huber (2022) | 1 | 14 | 0 | 5 | Screen, dynamic | Chin rest | Screen, dynamic | 24 | 2 | Monocular | Yes | Yes |
| Völter & Huber (2022) | 2 | 17 | 0 | 5 | Screen, dynamic | Chin rest | Screen, dynamic | 12 | 4 | Monocular | Yes | Yes |
| *Mobile ET* | | | | | | | | | | | | |
| Pelgrim et al. (2022) | 1 | 5 | 3 | 5 | Real-life treats | Owner during calibration | Real life | 10 | 1 | Monocular | No | No |
| Rossi et al. (2014) | 1 | 5 | 1 | 9 | Real-life treats | No | Real life | 16 | 2 | Monocular | No | No |
| Williams et al. (2011) | 1 | 1 | Not applicable | 5 | Real-life treats | No | Real life | 19 | Not applicable | Monocular | No | Yes |

Note: Exp. No. = whether the information refers to the first or second experiment reported in the publication; ET = eye tracker; No. Targets = number of calibration targets; Targets = type of calibration targets; Restriction = head movement restriction; Pretraining Head = pretraining to maintain a stable head position; Pretraining Calibration = pretraining to perform a calibration.

^a Although no calibration pretraining was conducted, to increase the dogs’ attention to the screen, chin-rest-trained dogs were confronted with a two-choice conditional discrimination task between geometric figures.
In these studies, researchers examined dogs’ fixation patterns when presented with static images such as faces (e.g., Barber et al., 2016), changes in looking behavior as the result of expectancy violation (Völter & Huber, 2021a, 2021b) or cross-modal matching (Gergely et al., 2019), and most recently anticipatory looking (Völter et al., 2020) and changes in pupil size (Karl, Boch, Zamansky, et al., 2020; Karl et al., 2021; Somppi et al., 2017; Völter & Huber, 2021a). Eye tracking in dogs has also been used to diagnose ocular motor abnormalities such as nystagmus (e.g., Dell’Osso et al., 1998), but for the purpose of this article we concentrate on perceptuo-cognitive tasks.
A major issue for the evaluation of the usage of eye tracking in dogs is the difference between the visual systems of humans and dogs (Miller & Murphy, 1995). As just described, the technology has been developed for measuring human eye movements on the basis of knowledge of the anatomy and physiology of the human visual system. The canine visual system and its performance, however, deviate from the human counterpart to varying degrees, with varying relevance for the eye-tracking measurements (Byosiere et al., 2018). For instance, it has been argued that dogs, as running predators, could be expected to be tuned to motion detection rather than visual acuity (McGreevy et al., 2004). In contrast, however, dogs might have been selected for increased visual performance during domestication, especially in the service of cooperation with humans (Barber et al., 2020).
The most recent review of the functional performance of the visual systems of dogs and humans by Barber and colleagues (2020) ended with the conclusion that the apparent limitations in visual perception of dogs compared with humans, such as lower visual acuity, reduced color perception, and increased sensitivity to bright light (because of adaptation to function in dim light), come with compensations that allow them to outperform humans in other related ways. These include better discrimination of certain colors (blue hues), increased motion sensitivity, and superior night vision. The earlier statement of Somppi and colleagues (2012) that the deficiencies of the visual system of the dog do not impose crucial limitations on the use of the eye movement tracking method seems therefore justified. Moreover, several studies have provided convergent evidence that dogs can detect details even in small photographs presented on computer screens (Aust et al., 2008; Müller et al., 2015; Pitteri et al., 2014; Range et al., 2008).
These favorable arguments with regard to the application of eye tracking in canine science should not be taken as a green light for any research but rather as a proof of principle. Caution is necessary when recruiting breeds that have been selected for services with low or no engagement in visual tasks, like scent hounds and sniffer dogs (Barber et al., 2020). In general, it is important to be aware that dogs show varying facial morphologies—for instance, between brachycephalic and dolichocephalic breeds—that may complicate the interpretation of even the most basic physiologic measures assessing perception (Miller & Murphy, 1995). Not only are the size of the eye and the total number of retinal ganglion cells highly variable, but so is the distribution of the retinal ganglion cells. This variation seems to covary with skull measurements—in particular, nose length. Whereas in dogs with long noses the retinal ganglion cells are concentrated in a horizontal visual streak across the retina with a less pronounced area centralis (similar to the fovea in primates), in dogs with short noses they are more centralized in a strong area centralis (McGreevy et al., 2004). Indeed, it seems that brachycephalic dogs, with flat faces and more forward-facing eyes, outperform dolichocephalic dogs in using the human pointing gesture (Gácsi et al., 2009), even though the causal role of the organization of the visual system remains unclear. Several researchers have therefore warned against overlooking breed differences, drawing careless conclusions, or making bold assumptions from small sample sizes (Byosiere et al., 2018; Gácsi et al., 2009; McGreevy et al., 2013). Rather, these differences need to be taken into account when devising experimental designs that aim at investigating perceptuo-cognitive abilities of dogs in the laboratory.
In this article, we first review the eye-tracking studies with dogs published so far, classified according to the research questions and the respective answers; then we discuss the challenges and limitations of eye tracking in dogs (some of the challenges and limitations are applicable to eye tracking with humans as well, and others are more specific to eye tracking with dogs); and finally we provide an outlook into the future with a focus on the possible improvements and extensions.
Eye-Tracking Studies With Dogs
The Dog’s Perception of Faces
About one third of all eye-tracking studies with dogs published so far have investigated how dogs perceive faces. Faces are an important visual category for many taxa because they differ in subtle ways and possess many idiosyncratic features, thus providing a rich source of information (Leopold & Rhodes, 2010). The human face allows dogs to interact and communicate with their caregiver, for instance, by obtaining information signaled through communicative gestures (Téglás et al., 2012) as well as attentive states (Gácsi et al., 2004; Schwab & Huber, 2006) and emotional states (Müller et al., 2015).
In the first published study on stimulus perception measured with the aid of an eye tracker, dogs exhibited a looking preference for faces over children’s toys and alphabetic characters (Somppi et al., 2012). Within the category of faces, the dogs preferred to look at the faces of conspecifics (dogs) over heterospecifics (humans) in terms of both the number of fixations and the total duration of fixations. Dogs looked less frequently at the images of children’s toys, and the letters received the lowest number of fixations. This finding suggests not only that dogs are able to discriminate images of different categories but also that their preference for faces over meaningless, inanimate objects and letters provides evidence that they somehow recognized the content of the images.
The same group of researchers compared dogs’ viewing patterns of dog and human faces in different conditions (Somppi et al., 2014): presented upright or inverted and representing familiar or unfamiliar individuals. The results revealed striking similarities in the way humans and dogs view faces: As in the previous study, dogs preferred conspecific faces over heterospecific faces, showed more interest in the eye area than in other areas for upright but not inverted faces, showed deficits in face processing when the image was turned upside down, and fixated more on familiar than on unfamiliar faces. The inversion effect could be explained by a reliance on global configural rather than elemental or part-based processing. Somppi and colleagues (2014) also found that dogs targeted nearly half of the relative fixation duration at the region around the eyes, indicating an eye primacy effect. This result, however, could be only partially confirmed in another eye-tracking study that investigated possible differences between pet and laboratory dogs (Barber et al., 2016). Whereas lab dogs fixated first on the eye region and later the mouth region, pet dogs allocated their first fixation equally to the eyes and mouth regions.
Another interesting result of the Barber et al. (2016) study was the interaction of facial expression and face region preference. Across dogs, the number of fixations was higher for the forehead if a positive expression (happy or neutral face) was displayed but higher for the mouth and eye region if a negative expression (angry or sad face) was displayed. This finding deviates from what is known from the human literature; in humans, the mouth receives the most attention in positive emotions and the eyes in negative emotions (Schyns et al., 2007; Smith et al., 2005). Of interest, Somppi et al. (2016) found that the eyes were more interesting for dogs than the mouth region, measured in terms of the targets of the first fixations and looking durations, regardless of the viewed expression. This was true for both human and dog faces. Dogs scanned the facial features of conspecifics and humans in a similar manner. Only when examining the looking patterns at faces of conspecifics separately did the authors find identifiable emotion effects; dogs looked longer at the faces of threatening dogs than at those of pleasant or neutral dogs. Such heightened attention toward the threatening signals of conspecifics could be explained by higher arousal at the sight of aggressive dogs than at the sight of angry humans.
How dogs perceive human faces is likely influenced by the emotional state of the dog subject when viewing them. When dogs were confronted with pictures of unfamiliar male human faces after receiving oxytocin (nasal spray), they showed enhanced gaze toward the eyes of smiling (happy) faces—and reduced gaze toward the eyes of angry faces—compared with the placebo treatment (saline; Somppi et al., 2017). However, this finding could not be replicated. Other researchers found quite the opposite effect of oxytocin treatment, namely, that dogs’ preferential gaze toward the eye region when processing happy human facial expressions disappears (Kis et al., 2017).
Whatever the exact effects, the two studies suggest at least that the allocation of attention toward human emotional faces is somehow related to the dog’s own emotional state. This conclusion was supported by changes in pupil size, which can be considered an indicator of emotional arousal (Somppi et al., 2017). In the nontreatment (placebo) control condition, the pupil size was larger when looking at angry faces, a finding confirmed in a later study with faces of the dog’s caregiver and of familiar and unfamiliar people (Karl, Boch, Zamansky, et al., 2020). But oxytocin reversed this effect (Somppi et al., 2017). A study that examined the cardiac responses of domestic dogs upon seeing faces of humans with different emotional expressions (angry, happy, sad, neutral) provided further evidence for the influence of human facial expression on the emotional state of the dog observers (Barber et al., 2017).
Support for the claim that dogs react to the emotions shown by human facial expressions comes from another finding of the study by Barber and colleagues (2016): The eye-tracking measurements revealed a strong left gaze bias; that is, dogs preferentially looked at the half of the face appearing in their left visual field (the right half of the presented face), regardless of which expression (positive or negative) was shown. A preference for the left visual field is associated with the engagement of the opposite, that is, the right brain hemisphere. This finding corresponds not only to findings of previous studies with dogs (Guo et al., 2009; Racca et al., 2012; Siniscalchi et al., 2010) but also to findings from a variety of species that show a lateralization toward emotive stimuli regardless of valence (see review in Salva et al., 2012). And the result is in line with the human eye-tracking literature (Butler et al., 2005; Guo et al., 2009).
In conclusion, taking all eye-tracking studies about the dog’s perception of human faces together, we are confronted with considerable ambiguity. Some studies found modest to strong effects of human facial expressions on the dogs’ looking patterns (Barber et al., 2016; Somppi et al., 2016, 2017), albeit without a consistent allocation of attention toward the various face regions, whereas two other studies found no such effects. In the latter, the dogs’ face-viewing gaze allocation varied between the various face regions but not between faces with different expressions (Correia-Caeiro et al., 2020; Kis et al., 2017). Still, these studies deviated from the others in one important aspect: Dogs were not trained to maintain attention to stimuli, and they were not or only weakly restrained; no chin/head rest was used, and thus the dog’s head could move freely. Although this procedure likely produces more spontaneous looking behavior in the dog observers, it is not known whether it reduces data quality.
The Dog’s Perception of Gaze and Human Communicative Signals
To test whether dogs, like human infants (Senju & Csibra, 2008), would interpret the ostensive-communicative cues of humans as the expression of communicative intent, and therefore follow with their gaze the subsequent referential signal more readily, dogs were presented with a video sequence showing a female actor behind a table on which two pots were placed, one on each side (Téglás et al., 2012). The actor addressed the dogs in either an ostensive manner (looking straight at the dog and saying “Hi dog!” in a high-pitched voice) or a nonostensive manner (remaining with her head facing down) before emitting a referential signal by turning her head toward one of the two containers and looking at it for 5 seconds. Using the eye tracker, the researchers found that the dogs looked longer at the cued pot in the ostensive condition than in the nonostensive condition. The striking similarity of the effect to the one found in a study of 6.5-month-old human infants (Senju & Csibra, 2008) suggests that the dogs’ following of the human’s head turn with their gaze was triggered by the expression of communicative intent.
Following human-given signals is also relevant to the question of why dogs communicate and interact so well with us humans. Perhaps the best-studied phenomenon in this respect is the dog’s following of the human pointing gesture. Research on it was catalyzed by the finding that dogs perform more accurately than other species (e.g., Hare et al., 2002; Kaminski & Nitzschner, 2013; Miklósi et al., 1998; Soproni et al., 2001), even great apes (Bräuer et al., 2006), and, unlike wolves, do so from a very young age (Bray et al., 2021). But the question of why dogs do so remains unresolved to this day. Finding an answer to this question is difficult if one must rely on choices and overt behavior of dogs. Eye tracking may provide a solution, because analyzing the looking patterns of dogs when observing the human informant allows a more thorough investigation of what drives the dogs’ decisions after the cueing. In a study using this method, dogs could use the momentary distal pointing of the human informant to find hidden food, but apparently without reference to the target object (Delay, 2016). Dogs rarely looked at the indicated food container but spent most of the time looking at the experimenter’s head area and the pointing arm both during and after the signal (but see Rossi et al., 2014, for some evidence of gazing at the correct cup). The conclusion was, therefore, that dogs perceive the human pointing signal as direction related rather than object related (Tauzin et al., 2015).
The Dog’s Perception of Bodies
Recent studies have investigated how dogs look at full-body videos of humans and conspecifics expressing emotions (Correia-Caeiro et al., 2021) and at full-body photographs of humans (with and without communicative hand signals), conspecifics, cats, and wild animals (Ogura et al., 2020; Törnqvist et al., 2020). In the study by Correia-Caeiro et al. (2021), when presented with actors expressing emotions (happiness, fear, positive anticipation, frustration, and neutral), dogs looked longer at stimuli with human than with dog actors. They looked longer at human bodies than human heads and slightly longer (29% vs. 26%) at dog bodies than dog heads. To compare these findings across studies, however, it is important to note that, contrary to previous dog eye-tracking studies (Somppi et al., 2014; Törnqvist et al., 2020), the proportion of viewing time to each AOI was not standardized by the size of the AOI. The authors maintain that standardizing the proportion of viewing time by the AOI size would disrupt the ecological validity of the stimuli and overestimate the looking time to smaller areas by altering the natural head/body proportion. However, this choice is also open to the critique that larger proportions of viewing time are expected to fall in larger AOIs (in this case, the body), based on random scanning of the videos alone.
Because the dogs’ heads were unrestrained (eye tracker remote mode), it was possible to code dogs’ facial expressions and head movements. Dogs tended to turn their head left more often when seeing humans than conspecifics and brought their ears closer together when seeing conspecifics rather than humans. Using the eye tracker remote mode and untrained dogs, the authors were also able to test the largest sample of dogs in an eye-tracking study so far (92 dogs). However, collecting data with untrained dogs using the remote mode also led to substantial data loss: During data acquisition, the dogs received a treat from an experimenter after every trial. Each trial lasted between approximately five and eight seconds (the duration of one video), plus the time it took each dog to fixate for 1 second on the drift point before the presentation of the video. The high frequency of reward in this procedure might have drawn the dogs’ attention away from the screen (in anticipation of the reward) for most of the trial duration. Indeed, Table S9 of Correia-Caeiro et al. (2021) shows that although, on average, humans participating in the same study watched the stimuli for 83% (SD ± 17%) of the time, the dogs rarely looked at the stimuli for longer than 30% of the trial time and on average looked at the stimuli for 22% of the time (SD ± 28%).
In the study by Ogura et al. (2020), three dogs were presented with color photographs of humans, dogs, and cats. In the case of pictures of humans, dogs made more fixations to the limbs AOIs (which included feet, part of the lower legs, and arms) and were also more likely than expected by chance to fixate first on the limbs AOIs. In contrast, they made more fixations to the head AOI and were more likely than expected by chance to fixate first on the head AOI when the depicted animals were dogs. When presented with pictures of cats, dogs made more fixations to the head and body AOIs and were more likely to fixate first on the body AOI. Contrary to the number of fixations and first fixation analyses, the looking duration did not differ depending on the presented species.
The AOI that was fixated first could have been dependent on the dogs’ scanning of the screen prior to stimulus presentation and during interstimulus intervals. Indeed, the presentation of the stimuli was not contingent on the dogs’ gaze being in a certain position on the screen. Likewise, differences in body postures and in the sizes of the same AOI between species were not considered.
Similarly to Correia-Caeiro et al. (2021), Ogura et al. (2020) did not standardize the response variables by the (differing) sizes of the AOIs. The authors argue that dogs’ gaze allocation to the AOIs in this study was not random, because dogs made more fixations in smaller AOIs (e.g., more fixations to the dog head than to the dog limbs).
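A transparent way to address this analytic disagreement is to report dwell proportions both raw and normalized by AOI area. The sketch below (with made-up numbers) divides each AOI’s share of looking time by its share of stimulus area, so that values above 1 indicate more looking than uniform random scanning would predict; this is one possible operationalization, not the procedure of any of the cited studies.

```python
from typing import Dict

def area_normalized_dwell(dwell_ms: Dict[str, float],
                          aoi_area_px: Dict[str, float],
                          stimulus_area_px: float) -> Dict[str, float]:
    """Dwell proportion divided by area proportion: values > 1 mean the AOI
    attracted more gaze than expected under uniform random scanning."""
    total_dwell = sum(dwell_ms.values())
    return {
        name: (dwell_ms[name] / total_dwell)
              / (aoi_area_px[name] / stimulus_area_px)
        for name in dwell_ms
    }

# Hypothetical numbers: the head AOI is small but heavily looked at.
print(area_normalized_dwell({"head": 2600, "body": 2900},
                            {"head": 40_000, "body": 160_000},
                            stimulus_area_px=1_000_000))
# {'head': ~11.8, 'body': ~3.3}
```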
Törnqvist et al. (2020) investigated how pet and kennel dogs look at pictures of dogs, humans, and wild animals. They found that both types of dogs showed spontaneous object/background and head/rest of the body differentiations, irrespective of the depicted (mammalian) species. Specifically, both pet and kennel dogs looked longer at heads than bodies. Moreover, heads and bodies elicited longer looking times than background. Both types of dogs gazed longer at the heads of wild animals than at the heads of dogs or humans but longer at the bodies of humans and dogs than at the bodies of wild animals.
The two dog types did not always converge in their viewing patterns. Whereas pet dogs looked longer at bodies in pictures showing two animals than in pictures showing only one animal, kennel dogs did not differentiate with their looking times between these two types of images. This is noteworthy because the extent to which breed can influence dogs’ viewing patterns is understudied (but see Abdai & Miklósi, 2022, for an exception). That is unfortunate, because the breeds are usually not matched between the two samples, and the differences found between kennel and companion dogs (see also Törnqvist et al., 2015) might therefore reflect not (only) differences in the social environment but also breed differences.
The Dog’s Perception of Social Interactions
Törnqvist et al. (2015) compared how pet and kennel dogs scan pictures of social interactions. Overall, pets looked longer at scenes containing two individuals than did kennel dogs, irrespective of whether the images depicted pairs of humans or dogs, facing toward each other or away from each other. However, the two groups of dogs did not differ in their gazing times to scrambled images or in the proportions of gazing time allocated to the actors relative to the entire scene. Moreover, the proportion of gazing time to interacting actors was larger than the proportion of gaze time allocated to the two actors when these faced away from each other, irrespective of the actor species. Both pets and kennel dogs looked longer at humans than conspecifics, but only when the actors faced toward each other. Both groups of dogs exhibited more saccades between actors when the actors were humans facing each other than when they were humans facing away from each other.
In studies using pictures of faces (Somppi et al., 2012, 2014), dogs looked longer at faces of conspecifics than at human faces, whereas in the experiment of Törnqvist et al. (2015) they looked longer at interacting (whole body) humans than at interacting dogs. The authors speculate that the presentation of whole-body figures might have driven this difference.
The Dog’s Perception of Animacy Cues
Most eye-tracking studies to date used static stimuli. However, dogs are very attuned to motion, which is also reflected in the organization of their visual system, as mentioned before. Motion detection and perception are important, especially for a social carnivore, for multiple reasons, including the detection and tracking of social partners, competitors, threats, and prey. The application of eye tracking might allow for identifying motion-related cues that capture dogs’ attention. In the psychophysics literature, such motion-related cues have been labelled animacy cues (Scholl & Tremoulet, 2000; Tremoulet & Feldman, 2000). They include self-propulsion, direction and speed changes, and nonlinear trajectories.
We recently applied eye tracking to study dogs’ sensitivity to animacy cues, here specifically self-propulsion, speed changes, and stimulus appearance (the presence of fur; Völter & Huber, 2022). The prediction was that videos showing object movements with such animacy cues would elicit a pupil dilation response as part of the dogs’ orientation toward this stimulus (similar to the human psychosensory response; Mathôt, 2018). In a first experiment, the dogs watched three different videos repeatedly, with the playback direction either forward (i.e., normal) or reversed. The videos depicted a ball rolling down a ramp and dog toys that were dropped on the floor. When the videos were played in reverse, the objects appeared to move upward in a self-propelled way. The dogs’ pupil size was indeed more variable when they watched the videos that were presented with a reversed playback direction.
In the second experiment, the dogs watched videos depicting an animation of a ball rolling back and forth between two walls. The ball either changed speed while moving (sometimes stopping and starting to move again without any external cause that would explain this speed change) or moved at constant speed. Moreover, the ball either had a smooth surface or was covered by fur. Dogs reacted in a similar way as in the first experiment: Their pupil size was more variable when the ball moved with varying speed, and there was some indication that the presence of fur had a similar effect. These findings suggest that cues such as self-propulsion, speed changes, and fur can lead to an orienting response in dogs, which might facilitate their detection of animate beings. This study also complemented previous looking time studies that provided evidence for dogs’ sensitivity to another animacy cue, dependent, chasing-like motion patterns (Abdai et al., 2017, 2021).
The Dog’s Expectations About the Physical Environment
Eye tracking, particularly pupillometry, can also be used to study expectations about the physical environment, for instance, expectations concerning how objects move when unsupported, what happens when two objects collide, or when an object moving past another should be visible and when it should not. Showing events that violate certain physical regularities (e.g., concerning support, solidity, contact causality, and occlusion events) can provide evidence for such expectations if appropriate controls are administered. This “expectancy violation paradigm” has been applied extensively with human infants and, in the past few decades, increasingly with dogs (Bräuer & Call, 2011; Müller et al., 2011; Pattison et al., 2010, 2013).
In research on human infants, pupillometry has been highlighted as a superior method in the context of the expectancy violation paradigm because changes in pupil size are time sensitive and can be linked to specific events (Jackson & Sirois, 2009). We recently applied this method within a series of eye-tracking studies with dogs (Völter & Huber, 2021a, 2021b, 2022). Dogs were presented with animations showing occlusion, support, and launching events that were either consistent or inconsistent with the corresponding physical regularity. In the launching event, one billiard ball rolled toward another one. In the control condition, the balls collided, and the launching ball stopped moving while the other ball was set into motion. In the test condition, the two balls moved in exactly the same way (same kinematic properties and same timing), but a gap remained between the two balls and there was no collision. If the dogs had an expectation about contact causality (i.e., that contact is necessary for the transfer of momentum), we predicted that their pupils would dilate more in the test condition than in the control condition. Indeed, this prediction was confirmed by the dogs’ pupil size response. Additionally, the dogs looked significantly longer at the launching ball in the test condition than in the control condition after it had stopped moving. In the occlusion event, the dogs saw a ball rolling past a narrow pole, and either it reappeared on the other side (as it should; control condition) or it did not (test condition). In line with the prediction, the dogs again had larger pupils following the implausible disappearance of the ball.
Finally, we also presented the dogs with support events: The dogs watched a ball rolling along a surface toward a gap in the surface. The ball either fell down into the gap (control condition) or hovered over the gap by continuing to roll as if no gap were present (test condition). In this case, the dogs’ pupils dilated more when presented with the falling-down event than when they saw the hovering event. We concluded that the dogs were more surprised to see the ball suddenly changing direction (which, when considered in isolation, can be seen as an animacy cue; see previous section) than when it started hovering. However, this finding might also be an artifact of the screen-based nature of the stimulus. It remains to be seen whether this finding holds with real-world demonstrations (but there is some indication from other studies that dogs indeed have no clear expectations concerning support events or a gravity bias; Osthaus et al., 2003; Tecwyn & Buchsbaum, 2019).
This series of studies confirms that eye tracking and pupillometry can be a useful tool to study not only how dogs perceive the social environment but also what expectations they have concerning the physical world.
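Analytically, such pupillometric comparisons typically rest on baseline correction: the mean pupil size in a pre-event window is subtracted from each sample so that conditions are compared on event-locked dilation rather than tonic pupil size (cf. Mathôt, 2018). The sketch below illustrates this general logic with hypothetical traces; it is not the analysis pipeline of the studies just described.

```python
import statistics
from typing import List

def baseline_corrected(trace: List[float], n_baseline: int) -> List[float]:
    """Subtract the mean of the pre-event baseline window from each sample."""
    baseline = statistics.mean(trace[:n_baseline])
    return [sample - baseline for sample in trace]

def mean_dilation(traces: List[List[float]], n_baseline: int) -> float:
    """Average post-event, baseline-corrected dilation across trials."""
    per_trial = [
        statistics.mean(baseline_corrected(t, n_baseline)[n_baseline:])
        for t in traces
    ]
    return statistics.mean(per_trial)

# Hypothetical pupil traces (arbitrary units), 3 baseline samples per trial.
test = [[5.0, 5.1, 5.0, 5.4, 5.6, 5.7], [4.8, 4.8, 4.9, 5.2, 5.3, 5.4]]
ctrl = [[5.0, 5.0, 5.1, 5.1, 5.2, 5.1], [4.9, 5.0, 4.9, 5.0, 5.0, 5.1]]
print(mean_dilation(test, 3), mean_dilation(ctrl, 3))  # test > control
```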
Challenges and Limitations
The Challenges of Dog Training and Small Sample Sizes
The sample sizes of all experiments (N = 32) of all published eye-tracking studies (N = 24) conducted with dogs so far range from 1 to 92 (M = 20.8, SD = 17.9) dogs (see Table 1). Limited sample sizes might reduce power, force researchers to adopt designs that might not be directly comparable to previous behavioral studies, and prevent investigations of breed differences. As in any other field, the appropriate sample size for each study should be dictated by the effect of interest, the experimental design, and the desired statistical power, as well as by practical concerns regarding the duration of the training, the number of available dogs, and the presence of suitable resources and facilities. A limiting factor of sample sizes in dog eye-tracking studies is the training administered in some labs prior to data collection to improve data quality (see the Calibration and Data Quality section).
The experiments involving pretrained dogs had an average sample size of 17 dogs, and those using dogs that were not trained to keep their head immobile had, on average, 36 subjects (medians are shown in Figure 1). Figure 2 shows the trend of sample sizes over the years separately for studies that included a pretraining and those that did not.
Calibration and Data Quality
The calibration is the most important step in eye tracking with respect to data accuracy (i.e., the deviation between actual and measured gaze location). It serves to map the dogs’ eye position (measured in camera coordinates) onto the screen or real-world coordinates. Put differently, the calibration allows for determining where (usually on the screen) the subject is looking.
During the calibration, a fixation target is shown at different locations on the screen, and the subject is supposed to follow the target with their gaze. The quality of the calibration depends on various factors; some of the most important are the number of calibration targets, the size of the calibration targets, and the subjects’ ability and willingness to look at the center of the calibration target (for more details, see, e.g., Holmqvist, Örbom, Hooge, et al., 2022). Especially with nonverbal participants, either some kind of pretraining might be necessary—for dogs, this has been described by Karl, Boch, Virányi, et al. (2020)—or animated calibration targets might be used that increase the likelihood that the participants look at them (a method also commonly used with preverbal human infants and nonhuman primates). Given its importance, it is surprising how little information is provided about the calibration in comparative eye-tracking studies (Hopper et al., 2021). Often the only information provided is the number of targets that were used and whether the calibration targets were static or animated. In canine eye tracking, either three (5 of the 19 canine eye-tracking studies that reported the number of calibration targets) or, more commonly, five (14 of 19 studies) calibration targets have been used.
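Conceptually, the calibration amounts to fitting a mapping from eye-position features (e.g., the P−CR vector) to known target positions. The following sketch fits a simple affine map by least squares from five hypothetical calibration points; commercial systems use their own, typically more complex (e.g., polynomial) models, so this is only meant to illustrate why the number and placement of targets matter.

```python
import numpy as np

def fit_affine_calibration(eye_xy: np.ndarray, screen_xy: np.ndarray) -> np.ndarray:
    """Least-squares fit of an affine map from eye-position vectors to
    screen coordinates. eye_xy: (n, 2) features; screen_xy: (n, 2) targets."""
    design = np.column_stack([eye_xy, np.ones(len(eye_xy))])  # add intercept
    coeffs, *_ = np.linalg.lstsq(design, screen_xy, rcond=None)
    return coeffs  # shape (3, 2)

def gaze_to_screen(eye_xy: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
    """Apply a fitted calibration to new eye-position samples."""
    design = np.column_stack([eye_xy, np.ones(len(eye_xy))])
    return design @ coeffs

# Hypothetical five-target calibration (screen pixels), illustration only.
targets = np.array([[960, 540], [200, 150], [1720, 150], [200, 930], [1720, 930]])
eye = np.array([[0.0, 0.0], [-8.1, -4.2], [8.0, -4.0], [-7.9, 4.1], [8.2, 4.3]])
coeffs = fit_affine_calibration(eye, targets)
print(gaze_to_screen(np.array([[0.1, 0.05]]), coeffs))  # near screen center
```

An affine map is already determined by three points; additional targets overdetermine the fit and thereby absorb measurement noise, which is one reason five-target calibrations are more common than three-target ones.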
The calibration can be validated by repeating the calibration procedure. In this validation, the deviation between the first and second calibration can be used to quantify the accuracy. However, validation results are only rarely reported in canine cognition studies (only 7 of 21 stationary eye-tracking articles reported quantitative validation results, though occasionally previous publications from the same lab are cited that describe training criteria, including validation thresholds, in more detail). Sometimes the accuracy of the eye-tracking device (provided by the manufacturer) is reported instead (which merely shows which accuracies can be obtained with human participants), but not the results of the actual validation with dogs. Park et al. (2020) found the accuracy in chin-rest-trained dogs (0.88° deviation) to be lower than in humans (0.51°; using an Eyelink 1000 system).
Data quality in eye tracking refers not only to accuracy but also to precision and data loss. Precision is defined as the extent to which repeated measurements of the same gaze position lead to the same measured values (i.e., the reproducibility of the measurements). Data loss refers to missing data because the eye tracker cannot reliably identify the center of the pupil and/or the corneal reflection. Data loss seems to be an important issue in canine eye-tracking studies, as some studies report data loss of more than 50% (reviewed in Park et al., 2022). Unfortunately, not all studies explicitly report data loss (i.e., the proportion of missing samples overall and per subject).
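All three quantities can be computed with a few lines of code once validation samples are available. The sketch below follows common operationalizations (accuracy as the mean angular offset from a validation target, precision as the root mean square of sample-to-sample distances, data loss as the proportion of missing samples); the input data are hypothetical.

```python
import math
from typing import List, Optional, Tuple

Sample = Optional[Tuple[float, float]]  # gaze in degrees; None = lost sample

def data_quality(samples: List[Sample], target_deg: Tuple[float, float]):
    """Accuracy (mean offset from target), RMS sample-to-sample precision,
    and data loss (proportion of missing samples) of one validation recording."""
    valid = [s for s in samples if s is not None]
    loss = 1 - len(valid) / len(samples)
    accuracy = sum(math.dist(s, target_deg) for s in valid) / len(valid)
    steps = [math.dist(a, b) for a, b in zip(valid, valid[1:])]
    precision = math.sqrt(sum(d * d for d in steps) / len(steps))
    return accuracy, precision, loss

# Hypothetical recording while the dog fixates a target at (0, 0) degrees.
rec = [(0.8, 0.3), (0.9, 0.2), None, (0.7, 0.4), (0.85, 0.25), None]
print(data_quality(rec, (0.0, 0.0)))  # accuracy ~0.87 deg, loss ~0.33
```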
Additionally, researchers have applied various criteria for data exclusion. Across the reviewed studies, the criteria used to discard trials or participants because of excessive off-screen gaze, movement (e.g., the dog leaving the predefined viewing position), or technical problems varied considerably. For example, Correia-Caeiro et al. (2021) repeated data collection only with dogs that did not look at the stimuli in more than half of the trials. Törnqvist et al. (2015) excluded dogs with missing gaze data in more than 30% of the trials. Gergely et al. (2019) analyzed only trials containing at least 80 ms of on-screen gaze (out of a 17 s trial). Somppi et al. (2016) excluded trials in which the dog’s gaze was not detected for more than 50% of the time a stimulus was presented. Völter et al. (2020) excluded subjects with missing gaze data for more than 30% of the stimulus duration. Other adopted criteria focused on the AOIs. For example, Téglás et al. (2012) did not analyze trials in which the dogs had looked for less than 200 ms into the target AOIs.
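Whatever threshold a lab chooses, applying and reporting it programmatically makes the criterion transparent and reproducible. A minimal sketch of such a trial filter is shown below; the threshold value and per-trial proportions are hypothetical.

```python
from typing import List

def usable_trials(onscreen_proportion: List[float],
                  min_proportion: float) -> List[int]:
    """Indices of trials whose on-screen gaze proportion reaches the chosen
    inclusion threshold; the threshold itself varies widely across studies."""
    return [i for i, p in enumerate(onscreen_proportion) if p >= min_proportion]

# Hypothetical per-trial on-screen gaze proportions for one dog.
proportions = [0.82, 0.41, 0.95, 0.12, 0.77]
print(usable_trials(proportions, min_proportion=0.50))  # [0, 2, 4]
```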
Head movements can affect data quality: They can lead to inaccurate and imprecise results and to data loss (Hessels et al., 2015; Holmqvist, Örbom, & Zemblys, 2022; Wass et al., 2014). Greater tracking accuracy was achieved with infants whose head movements were more restricted (when strapped to a baby seat compared with being placed in a high chair or in the parent’s lap; Hessels et al., 2015). To cope with this issue, the majority (16 of the 21 reviewed studies) of canine eye-tracking studies used a chin rest to stabilize the dogs’ head during the recordings. Karl, Boch, Virányi, et al. (2020) described that the training that takes place prior to the experimental sessions, including the calibration and validation training, can take between 8 and 30 sessions. For some studies, pet dogs have been trained directly by their owners (Ogura et al., 2020; Somppi et al., 2012, 2014; Törnqvist et al., 2015, 2020), in other cases by experimenters or professional dog trainers (Karl, Boch, Zamansky, et al., 2020; Park et al., 2020, 2022; Völter et al., 2020; Völter & Huber, 2021a, 2021b, 2022), and only in some cases have they been explicitly trained for a calibration procedure (Barber et al., 2016; Binderlehner, 2017; Delay, 2016; Karl, Boch, Zamansky, et al., 2020; Ogura et al., 2020; Park et al., 2020, 2022; Völter & Huber, 2021a, 2021b, 2022; Völter et al., 2020).
No study so far has tested whether the trainer’s background (professional, experimenter, or the dog’s owner) or the training for calibration has an influence on the dogs’ learning speed and the subsequent data quality. Based on the findings of the studies just mentioned, such an influence, if present, cannot be estimated, because relevant parameters, such as the dogs’ validation accuracy (in degrees of visual angle), are often not reported.
Although the influence of the trainer’s background and of the calibration pretraining on data quality remains unknown, stabilizing the dog’s head is likely to result in reduced data loss and greater accuracy (Hessels et al., 2015; Holmqvist, Örbom, Hooge, et al., 2022; Wass et al., 2014). Indeed, studies that did not use a chin rest used remote eye tracking (e.g., using the remote mode in Eyelink systems; for more details, see The Use of Mobile Eye Trackers section), restrained the dog’s movement in another way (e.g., the handler holding the dog by placing both hands on its chest), and/or had high attrition rates or data loss (e.g., Kis et al., 2017; Téglás et al., 2012). On the one hand, high attrition rates might reflect a selection bias for calm and/or obedient dogs. On the other hand, using specifically trained dogs may bias the sample in different ways, for instance, by selecting highly trainable dogs with similar types of pre-experimental experiences, such as participation in dog sports. Remote eye tracking without a chin rest also seems to increase data loss in the sense of higher proportions of off-screen looks (e.g., Correia-Caeiro et al., 2021; Kis et al., 2017). For example, in the study by Kis et al. (2017), dogs looked at the screen on average 19.7% (2759.54 ms) of the total (2 × 7000 ms) time (range = 80–10,508 ms) when pictures of faces were presented. In the study by Correia-Caeiro et al. (2021), dogs rarely looked at the stimuli for longer than 30% of the trial time and on average looked at the stimuli for 22% of the time (SD ± 28%). Still, aside from potential selection biases and data loss, how exactly the data quality compares between studies that used a chin rest and those that did not remains unclear from the published dog eye-tracking literature and will require further investigation.
Dog eye-tracking data seem to be noisier than human data (Park et al., 2022). Apart from head movements, numerous other factors might affect data quality in canine eye tracking. Some of these factors are likely shared with other study populations (e.g., human infants), such as the color of the iris (Hessels et al., 2015) or the skill of the operator (Hessels & Hooge, 2019). Others are more specific to dogs, related to the morphological properties of their heads and eyes, such as the color and length of the fur around the eyes, droopy eyelids, the size of the eye clefts, the shape of the iris (in our experience, dogs with an irregularly shaped iris are difficult to track), the visibility of the third eyelid, and the head shape (for an extensive discussion, see Park et al., 2022). The resulting noisy data can bias the dependent variables that are used to analyze the experiments (Park et al., 2022; Wass et al., 2014). Apart from minimizing head movements, data quality might be improved by ensuring that nothing obstructs the eye tracker camera’s view of the eye (e.g., by cutting facial hair covering the view), by using bright stimuli and a well-lit recording environment to avoid overly large pupils (which would not be entirely visible to the eye tracker), and by recruiting dogs while considering the aforementioned morphological factors. In our experience, the most important criterion for high-quality data (evidenced by the validation accuracy) is a rather dark and regularly shaped iris that can be reliably detected by the eye-tracking software. High room temperatures pose another challenge, as dogs control their body temperature by panting (Park et al., 2022). The head movements caused by panting (which can also be a sign of stress) preclude the collection of high-quality data even when the dogs put their heads on a chin rest.
Additionally, the event detection algorithm used to parse the raw data into fixations and saccades affects the results and might bias their interpretation. Dogs’ fixations are on average longer and their saccades slower than those of humans (Park et al., 2020, 2022). Park and colleagues (2020, 2022) argued that using the default parsing thresholds optimized for eye tracking in humans might lead to biased results, for example, because of higher proportions of artefactual fixations. They recommended using noise-adaptive event classification algorithms and post hoc filtering in future research. Although at the moment we cannot estimate how long a dog fixation should be for the content of the stimuli to be processed (and potentially retained), we do have initial evidence, from studies that directly compared the two species, that dogs’ fixations and blinks are longer and their saccades slower than those of human adults (Park et al., 2020, 2022). Whereas other studies (e.g., Correia-Caeiro et al., 2021; Gergely et al., 2019) found that dogs’ average fixation duration was shorter than humans’, these differing findings are difficult to reconcile because of differences in the algorithms used and the varying data quality across studies. Contrary to the recent findings by Park and colleagues (2020, 2022), common practice in dog eye-tracking research has been to classify dog fixations using the same (default) thresholds proposed for humans (Barber et al., 2016; Karl, Boch, Zamansky, et al., 2020), to lower the thresholds proposed for humans (Gergely et al., 2019; Somppi et al., 2012, 2014, 2016, 2017; Törnqvist et al., 2015, 2020), or to analyze the raw samples directly (Kis et al., 2017; Völter et al., 2020). The latter approach makes the fewest assumptions but also includes samples that are part of saccades, potentially assuming visual intake when this is in fact unlikely.
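To illustrate why the choice of parsing thresholds matters, here is a minimal velocity-threshold (I-VT) classifier. The 30°/s threshold is a common human default; the argument of Park and colleagues is precisely that such defaults, applied to dogs’ slower saccades and noisier data, may misclassify events, so the threshold here is an explicit, tunable assumption rather than a recommendation.

```python
import math
from typing import List, Tuple

def ivt_classify(gaze_deg: List[Tuple[float, float]],
                 sampling_hz: float,
                 velocity_threshold_deg_s: float = 30.0) -> List[str]:
    """Label each sample 'saccade' if its angular velocity relative to the
    previous sample exceeds the threshold, otherwise 'fixation'."""
    dt = 1.0 / sampling_hz
    labels = ["fixation"]  # first sample has no preceding velocity
    for (x0, y0), (x1, y1) in zip(gaze_deg, gaze_deg[1:]):
        velocity = math.hypot(x1 - x0, y1 - y0) / dt
        labels.append("saccade" if velocity > velocity_threshold_deg_s
                      else "fixation")
    return labels

# Hypothetical 250 Hz trace: stable gaze, a fast shift, stable gaze again.
trace = [(0.0, 0.0), (0.02, 0.01), (1.5, 0.8), (3.0, 1.6), (3.02, 1.61)]
print(ivt_classify(trace, sampling_hz=250))
# ['fixation', 'fixation', 'saccade', 'saccade', 'fixation']
```

A post hoc filter that discards fixations below a minimum duration (as Park et al., 2022, did with 50 ms) can then be applied to the labeled events.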
Given the high variability in how a fixation was operationalized across studies, and given that fixation durations are considerably influenced by data quality (see Holmqvist et al., 2011), it should not come as a surprise that average fixation durations vary across studies, even in response to similar stimuli (such as human faces). Beyond the differing operationalizations, these results hint at the possibility that context variables, such as the interest value (Somppi et al., 2012) or the biological relevance of the stimuli (Somppi et al., 2016; Törnqvist et al., 2020), fatigue (as suggested for humans; Schleicher et al., 2008), and possibly the length of the trial and the dogs' training and life history (e.g., lab vs. pet dog), might influence their fixation durations (Barber et al., 2016; Törnqvist et al., 2015). Supporting an effect of trial number (and hence possibly fatigue or visual habituation) on dogs' fixation durations, Somppi et al. (2012) found that dogs produced fewer but longer fixations with increasing trial number, whereas novel stimuli, presented after three to five repetitions of familiar stimuli, resulted in shorter fixation durations. Dogs tested with treats close to the screen area might be more distractible and show shorter on-screen fixation durations (Barber et al., 2016; Correia-Caeiro et al., 2020, 2021).
Perhaps more surprising, Park et al. (2022), who used the same setup, comparable stimuli, and partly the same data-parsing algorithm as Barber et al. (2016), report a much longer average fixation duration (1159 ms vs. 827 ms in Barber et al., 2016). However, unlike Barber et al. (2016), who used unthresholded data, Park et al. (2022) considered fixations shorter than 50 ms artefactual and therefore filtered them out of the analysis (based on the human literature; Hessels et al., 2018), a procedure that might explain the longer average fixation duration in their study.
The current canine eye-tracking literature also suffers from a lack of documentation standards. When reviewing it, we found that information important for evaluating the findings (e.g., the size of the AoIs or quantitative validation results) was often not reported, and when it was reported, it appeared in differing formats (AoI sizes, for example, were reported in screen pixels, degrees of visual angle, or centimeters). Even worse is the information concerning data exclusion criteria and attrition rates, both in terms of whether it was provided at all and with respect to its format. Future research would benefit from adhering more closely to documentation recommendations in the field (Holmqvist, Örbom, Hooge, et al., 2022). This would help improve the reproducibility of the studies and facilitate the evaluation of the results. We therefore concur with Park et al. (2022) that improving data quality and documentation standards is crucial for the field of canine eye tracking moving forward.
Artificial Dog Behavior (Motionless Viewing)
As described earlier, the majority of eye-tracking studies conducted with dogs so far used a chin rest to limit dogs' head motion and consequently improve data quality (see Figure 3). Under natural conditions, in which the head is free to move, eye movements also compensate for head movement to keep the retinal image stable (Kowler, 2011; Land, 1999). When testing dogs with a stabilized head, by contrast, eye movements are assumed to indicate mainly the dogs' focus of attention; that is, they are assumed to be functionally equivalent to the head orientations that unrestrained dogs show toward relevant stimuli, although direct comparisons of dogs' gazing behavior with and without head movement restriction are missing.
Moreover, future research should investigate whether inhibiting movement, as a result of the chin rest training, engages dogs' motor system to an extent that alters their looking behavior. For example, when observing others' actions, humans' predictive gaze shifts are drastically impaired if the participants' hands are tied or occupied by a second task (Ambrosini et al., 2012; Cannon & Woodward, 2008). Such interference might be effector-specific and hence might be apparent especially for actions that dogs would perform with their mouth, because dogs cannot open their mouth without lifting their head from the chin rest.
Although no study has directly addressed the influence of inhibiting dogs' movement on their looking patterns, there is at least some evidence that dogs' neural response to the perception of real-life objects is influenced by the effector used to interact with the object, at least when that effector is the mouth (Prichard et al., 2021a). Future studies that present dogs with similar stimuli with and without head restriction are needed to test the effect of head restriction on their looking patterns.
Fourteen of the studies conducted so far (that report this information) used monocular tracking (see Table 1 for the exact references), although very little is known about how independently of each other dogs' eyes can move and blink (see the discussion in Williams et al., 2011). Especially when presented with two-dimensional images on a screen, dogs are confronted with impoverished stimuli that lack depth information and stimulate mainly the visual modality while leaving the other senses unengaged.
Moreover, dogs are not allowed to interact with these stimuli at all. Future developments in the field could include giving dogs the possibility of controlling the stimulus presentation durations, as in self-paced or gaze-contingent tasks (e.g., a stimulus could be shown until the subject looks away for a certain amount of time; see the sketch below). Another possibility could be embedding the viewing of the stimuli in a task, such as a two-way object choice task. This approach has already been undertaken by researchers studying how dogs perceive video-projected directional cues from conspecifics and humans (Bálint et al., 2015; Binderlehner, 2017; Péter et al., 2013; Pongrácz et al., 2018), although the combination of a task requiring a behavioral response with eye tracking has rarely been attempted (for rare exceptions, see Binderlehner, 2017; Pelgrim & Buchsbaum, 2022; Rossi et al., 2014). Mobile eye tracking with dogs has great potential in this respect (see the section The Use of Mobile Eye Trackers).
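As a concrete illustration of such a gaze-contingent design, the sketch below keeps a stimulus on screen until the dog has looked away for a fixed amount of time. The functions for reading gaze samples and drawing the stimulus are hypothetical placeholders for whatever the eye tracker's and presentation software's actual interfaces provide, and the 2-s criterion is arbitrary.

```python
import time

LOOK_AWAY_LIMIT_S = 2.0  # illustrative look-away criterion that ends the trial

def self_paced_trial(get_gaze, on_screen, show_stimulus, hide_stimulus):
    """Gaze-contingent, self-paced trial: the stimulus stays visible until
    the subject has looked away continuously for LOOK_AWAY_LIMIT_S seconds.
    get_gaze() -> (x, y) or None, on_screen(x, y) -> bool, show_stimulus(),
    and hide_stimulus() are assumed to be supplied by the testing software."""
    show_stimulus()
    away_since = None
    while True:
        gaze = get_gaze()
        if gaze is not None and on_screen(*gaze):
            away_since = None                      # subject is looking: reset the timer
        elif away_since is None:
            away_since = time.monotonic()          # start timing the look-away
        elif time.monotonic() - away_since >= LOOK_AWAY_LIMIT_S:
            break                                  # subject has disengaged: end the trial
        time.sleep(0.01)                           # poll at roughly 100 Hz
    hide_stimulus()
```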
Artificial Tasks, No Instruction What to Do
In all experiments that used a stationary eye tracker, dogs passively viewed the stimuli: They received no instructions and had no task to perform involving the stimuli. One downside of not being able to communicate instructions verbally is that dogs pretrained to lay their head on a chin rest might consider not moving to be their task. Especially for dogs that have not participated in many eye-tracking experiments, we have to consider the possibility that the recorded gaze data simply reflect involuntary eye movements, that is, reflexive orienting toward moving stimuli. At the same time, however, the results of the experiments conducted so far have partly disconfirmed this hypothesis. The upside of testing dogs is that they can be trained for stabilized-head eye tracking while likely remaining unaware that their gaze is being recorded. Hence, dog data are unlikely to be subject to the social biases arising from the recognition of being observed that can affect human data (Risko & Kingstone, 2011).
Only a few studies have directly compared dogs' gaze behavior with that of humans using eye-tracking technology to infer whether similar cognitive processes take place in the two species (Correia-Caeiro et al., 2020, 2021; Park et al., 2020, 2022; Törnqvist et al., 2015). However, this approach can be biased if the humans receive instructions to perform a task (e.g., categorization) while viewing the stimuli or to focus their attention on certain aspects of the stimuli (e.g., the attitude of the depicted agents). Scan paths in humans are known to depend on the task instructions (Henderson, 2017); therefore, a comparison with humans can be valid only when neither species is given any instructions or preliminary information about the content of the stimuli, as, for example, in Gergely et al. (2019) and Park et al. (2020).
Artificial Stimuli
Although eye tracking is possible with real-life stimuli such as live demonstrations of human pointing (Delay, 2016), most studies with dogs have so far used artificial substitutes for real stimuli, such as pictures or videos displayed on computer screens (e.g., Correia-Caeiro et al., 2020; Karl et al., 2021; Müller et al., 2015; Range et al., 2008; Somppi et al., 2012, 2014, 2016; Téglás et al., 2012) or projected onto screens using video projectors (Barber et al., 2016, 2017; Correia-Caeiro et al., 2021). There are, of course, big advantages of artificial stimuli over natural ones, which is the reason for their frequent use: Experimenters can better control the timing of the presentation and, with modern image-editing software, its content, and they can present the identical stimulus repeatedly to the same or to different subject animals (D’Eath, 1998).
In this review, however, we concentrate on the possible disadvantages. There are at least two problems with artificial stimuli: One is technical in nature, and the other is more conceptual. The technical problem is that pictures and videos are displayed by devices such as televisions, video monitors, and video projectors that are designed with human vision in mind, but dogs—and other nonhuman animals—differ from humans in aspects of visual processing such as color vision, critical flicker-fusion threshold, depth perception, and visual acuity. Therefore, they may perceive the stimuli differently than we do (D’Eath, 1998). This may be less problematic if the experimenter’s aim is to examine the ability to learn, discriminate, generalize, or categorize the stimuli on the basis of a specific predefined rule or hypothesis and the subjects perform as expected. However, if they do not respond to the depicted stimuli as they would to the real counterparts, the challenge becomes to identify the reasons for this discrepancy.
The bigger, conceptual problem arises when researchers aim at understanding how animals perceive real stimuli (e.g., human faces) and use artificial stimuli as substitutes. Pictures are always abstractions of their three-dimensional referents (Bovet & Vauclair, 2000; Fagot, 2000) and must therefore appear quite different from real objects to most animals. Of course, nonhuman animals may recognize the content of the real object in a picture or video without perceiving the two in the exact same manner. Picture–object recognition may therefore fall along a scale from partial picture–object correspondence up to full picture–object equivalence. An animal’s place on this scale in a given experiment depends on various factors, including picture quality, the functional properties of its visual system, and its prior experience with pictures or videos. But only if researchers address the question of whether their subjects perceive the correspondence between the image and its depicted referent can they correctly interpret the results of their experiments (Spetch, 2010).
At a low level, picture–object correspondence requires discriminating one or more visual features of the picture and recognizing them in the real object (or vice versa). Such a mechanism, mediated by simple invariant two-dimensional characteristics without recognition of the real three-dimensional object, is qualitatively different from perceiving pictures as representations of the real world, which is based on an ability to recognize the correspondence between objects and their pictures on a level beyond mere feature discrimination (see Fagot, 2000). On a higher level, the subject confuses the image and its referent. At the highest representational level, reflecting true picture–object equivalence, the animal comprehends that the picture is not only an entity in itself but also a representation of the depicted object (Fagot et al., 2010). For instance, Java monkeys were able to identify novel views of a familiar conspecific presented on slides and to match different body parts of the same familiar group members: After first being familiarized with slides of their conspecifics, they were able to identify mother–offspring pairs and to match views of offspring to their mothers (Dasser, 1987). Using a similar but extended logic, called the complementary information procedure, Aust and Huber (2006) trained pigeons to discriminate between pictures of incomplete human figures and then tested them with pictures of the previously missing body parts. The pigeons sorted these complementary pictures into the correct categories even when the test parts did not come from the same individuals as those shown during training, which provided an additional control for transfer by means of recognized item-specific properties. This result provided some evidence that the pigeons did not simply process some basic invariant features of the stimuli but had actually gained representational insight, a kind of symbol–referent relationship (Beilin, 1999).
Given the widespread use of two-dimensional stimuli in dog cognition research, surprisingly little effort has been directed at investigating dogs’ ability to recognize the content of two-dimensional representations of three-dimensional objects. Much more information is available about dogs’ perception of real-life events (reviewed in Byosiere et al., 2018). Some studies investigated whether dogs can use video-projected human pointing gestures to locate hidden food (Bálint et al., 2015; Péter et al., 2013; Pongrácz et al., 2018). Other studies directly compared dogs’ behavioral responses to (Binderlehner, 2017; Eatherington et al., 2021; Huber et al., 2013; Kaminski et al., 2009; Pongrácz et al., 2003), or their neural processing of (Prichard et al., 2021b), two-dimensional stimuli and their real-life counterparts.
For example, the ability to transfer knowledge acquired in one domain to another has been tested in our Clever Dog Lab in Vienna by borrowing an idea established with chimpanzees: Infant chimpanzees were able, after limited experience, to match what they observed on a television screen to events occurring elsewhere and thereby determine the location of a hidden goal object in a familiar outdoor field (Menzel et al., 1978). We tested whether dogs would find a hidden object after first watching a video in the eye-tracker apparatus and then being given the opportunity to locate it in the real room (Binderlehner, 2017). With the aid of the eye tracker, we could confirm that the dogs paid attention to the relevant parts of the video most of the time. In the test, although the dogs did not provide unambiguous evidence of a transfer from the video to the real situation by going directly and without hesitation to the hiding place, they searched there much longer than dogs from a control group that had not seen the video.
Taken together, the results of studies on dogs’ ability to generalize across two- and three-dimensional stimuli suggest that researchers should be cautious when assuming an equivalence between these two types of stimuli, especially when presenting dogs with static, novel pictures of inanimate objects. But there is no reason to be too pessimistic: Studies using cross-modal matching suggest that dogs have expectations about the vocalization a depicted dog or human should produce, on the basis of its species or emotional facial expression (Albuquerque et al., 2016; Gergely et al., 2019; Mongillo et al., 2021). Moreover, on the basis of two-dimensional information alone, dogs seem capable of recognizing different (dog and human) individuals (Racca et al., 2010), including discriminating their owner from a familiar or an unknown person (Adachi et al., 2007; Huber et al., 2013; Karl, Boch, Zamansky, et al., 2020), and they prefer conspecific heads over human ones (Somppi et al., 2014). Finally, a study that projected a dog as an informant in a two-choice task (Bálint et al., 2015) found that dogs reacted to the conspecific’s head directional cues by significantly avoiding the indicated bowl, contrary to dogs’ typical tendency to follow human-given directional cues in the same task (Miklósi et al., 1998; Soproni et al., 2001). Hence, the tentative conclusion that can be drawn from these studies is that dogs recognize the content of two-dimensional representations of at least dogs and humans. Still, more research on picture–object correspondence in the context of eye-tracking studies would be desirable.
In the next section, we turn to potential future directions of the field, describing methodological alternatives and additions that might overcome some of the limitations outlined above.
The Future of Eye Tracking in Dogs
The Use of Dynamic Stimuli, Applying Pupillometry and Arrival Time Analyses
Whereas the first canine eye-tracking studies mainly combined static stimuli (except for Téglás et al., 2012) with AoI looking time analyses, in recent years researchers have started to use more complex, dynamic stimuli (e.g., recorded videos and animations) to address new questions. This shift has also entailed different types of data analyses, including pupillometry and arrival time analyses.
Dynamic stimuli bring about a number of challenges: If the object of interest is moving, dynamic AoIs might be necessary to quantify looking times (see the sketch below). Additionally, when preparing stimuli, videos with a sufficient frame rate might be beneficial to increase the likelihood that they appear realistic to the dogs (e.g., Völter & Huber, 2021a, used videos with a frame rate of 100 Hz). Dogs’ visual perception appears to have a higher temporal resolution than humans’, as evidenced by their higher flicker-fusion rates (Coile et al., 1989). The liquid-crystal displays (LCDs) that are commonly used today, unlike cathode ray tube (CRT) displays, do not flicker between frames; therefore, low monitor refresh rates should not result in a flickering effect (Byosiere et al., 2018). Nevertheless, low refresh and video frame rates might affect how realistic the stimuli appear to the dogs. Given the availability of (gaming) monitors with refresh rates of 100 Hz or higher and of high-speed cameras, these potential confounds can easily be avoided.
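To illustrate the first challenge, a dynamic AoI is simply an AoI whose position is updated along with the video frames; the sketch below performs the corresponding hit test. The frame timestamps and bounding boxes are purely illustrative.

```python
def in_dynamic_aoi(sample_time, gaze_x, gaze_y, aoi_track):
    """Dynamic-AoI hit test sketch. aoi_track maps a video frame's
    timestamp (ms) to the bounding box (left, top, right, bottom)
    of the moving object in that frame (pixel coordinates here)."""
    # look up the AoI position of the frame shown at this gaze sample
    frame_time = max(t for t in aoi_track if t <= sample_time)
    left, top, right, bottom = aoi_track[frame_time]
    return left <= gaze_x <= right and top <= gaze_y <= bottom

# illustrative track: the AoI drifts rightward from frame to frame
aoi_track = {0: (100, 200, 300, 400), 40: (110, 200, 310, 400), 80: (120, 200, 320, 400)}
print(in_dynamic_aoi(55, 250, 320, aoi_track))  # True: gaze falls inside the AoI of the 40-ms frame
```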
Eye trackers provide information about both gaze location and pupil size. With respect to the latter, the pupil dilation response is a widely used measure of mental load, arousal, and the orienting response in humans (Mathôt, 2018). The neural pathways involved are similar in nonhuman primates (Wang & Munoz, 2015), and evidence shows that nonprimates such as cats display an arousal-related pupil dilation response (Hess & Polt, 1960). Pupil dilation analyses require luminance-controlled stimuli. Various other factors can influence the pupil size as well, such as the angle between the eye tracker and the eye (the so-called pupil foreshortening error; Hayes & Petrov, 2016). Moreover, pupil size data are typically preprocessed, including a baseline correction (Mathôt et al., 2018).
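For example, a subtractive baseline correction in the spirit of Mathôt et al. (2018) can be sketched as follows; the window bounds are illustrative, and blinks are assumed to be coded as NaN.

```python
import numpy as np

def baseline_correct(t, pupil, baseline_window=(-500.0, 0.0)):
    """Subtractive baseline correction sketch: the median pupil size within
    a pre-stimulus window is subtracted from the whole trial trace.
    t: time (ms) relative to stimulus onset; pupil: pupil-size samples."""
    t, pupil = np.asarray(t, float), np.asarray(pupil, float)
    in_window = (t >= baseline_window[0]) & (t < baseline_window[1])
    baseline = np.nanmedian(pupil[in_window])  # median is robust; NaNs mark blinks
    return pupil - baseline                    # trace now expresses change from baseline
```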
As just reviewed, the first pupillometry studies with dogs emerged over the past few years. Whereas the first two studies (Karl, Boch, Zamansky, et al., 2020; Somppi et al., 2017) used only a summary statistic of the pupil size (the mean or the maximum) without conducting any baseline correction or time-course analysis, three recent studies focused specifically on the baseline-corrected pupil size as the primary response variable (Völter & Huber, 2021a, 2021b, 2022). An advantage of time-course analysis is that the gaze coordinates can be accounted for in the analysis (van Rij et al., 2019), which might help to correct for the pupil foreshortening error, at least to some extent. Although these studies provided promising initial results, future research, also from other labs, will reveal the sensitivity and reproducibility of pupillometry in different areas of canine cognition research.
The Use of Mobile Eye Trackers
A disadvantage of many stationary eye-tracking systems is that the head ideally should be stabilized in some way. In canine eye tracking, this can be achieved by training the dogs to put their heads on a chin rest (Karl, Boch, Virányi, et al., 2020). Some scholars have voiced concerns over the external validity of stationary eye tracking with dogs because of the chin rest training and the reduction of head movements on the chin rest; eye-tracking options that do not require the dogs to keep their heads motionless are therefore desirable. One possibility in this regard is remote eye tracking. For example, the remote mode in Eyelink systems (which requires adding a target sticker to the participant’s forehead) allows accounting for head movements to some extent. This method has also been applied with dogs (Correia-Caeiro et al., 2020, 2021). The remote mode can be expected to yield worse tracking accuracy than eye tracking with head stabilization—especially if the participants are in nonoptimal poses (Niehorster et al., 2018)—and by default has a smaller trackable range (Eyelink 1000 Plus manual, version 1.0.12; copyright SR Research Ltd., Mississauga, Ontario, Canada). However, the extent to which accuracy is affected remains unclear in the dog studies because quantitative validation results have not been reported. Data loss is another issue to be considered: In these studies, dogs looked at the stimuli only approximately 20% to 30% of the time (compared with about 80% to 85% for human participants). Additionally, the application of the target sticker might be irritating for (untrained) dogs and might affect their performance (in the canine eye-tracking studies, the target sticker was mounted on a paper loop and then attached to the dogs’ forehead with sticky tape to bring it into the correct position above the eye).
Head-mounted mobile eye tracking offers even more flexibility because, in principle, it does not impose any limitations on the dogs’ mobility. Williams et al. (2011) presented the first head-mounted eye tracker for behavioral research with dogs. In a proof-of-principle study, they reported calibration data from one Alaskan Malamute, obtaining accuracies of about 3° of visual angle. Rossi et al. (2014) used another custom-made mobile eye tracker. They presented five dogs with an object-choice task in which the dogs were supposed to follow either a static distal pointing gesture or a momentary head gaze cue to one of two cups in order to locate a hidden reward. The dogs looked significantly more at the pointing hand than at the other hand in the pointing comprehension task, and there was some evidence that they looked more at the head of the communicator in the head gaze cue task. No information was provided on the accuracy of the eye-tracking system, which had to be recalibrated whenever the dogs moved to approach one of the cups (i.e., after each trial), reducing the number of trials that could be performed within a session.
An eye-tracking headgear specifically designed for dogs is now commercially available (Positive Science, Inc., Rochester, NY, USA), and the first behavioral studies using it have recently been published (Pelgrim & Buchsbaum, 2022; Pelgrim et al., 2022). The Positive Science eye tracker is mounted on off-the-shelf dog goggles (RexSpecs, Jackson, WY, USA), which facilitates habituating the dogs to tolerate the headgear because training goggles and training materials are available (see Figure 3). The average accuracy of the Positive Science eye-tracking headgear in behavioral studies with dogs has been reported to be around 3.5° of visual angle in an indoor environment (Pelgrim et al., 2022) and 5.4° in an outdoor setting (Pelgrim & Buchsbaum, 2022). Pelgrim et al. (2022) tested five dogs in a forced-choice treat-finding task. The eye-tracking data were in accordance with traditional video scoring in that the dogs preferred looking at the baited location over the empty location, though the eye-tracking data provided greater spatial and temporal resolution. Pelgrim and Buchsbaum (2022) documented the viewing patterns of four dogs on a walk in an urban/campus environment: The dogs looked proportionally more at persons and plants, when these were in their field of view, than at other object categories (no other dogs were encountered).
Although mobile eye tracking has the potential to provide exciting new insights into how dogs process visual information without some of the constraints imposed by stationary eye tracking, it does not come without costs and challenges. Apart from the reduced accuracy compared with stationary eye trackers, processing the data is time- and labor-intensive: Even if the gaze position can be reliably and accurately determined, the fixations still need to be categorized (by a human observer) and assigned to the objects that were fixated. This is because participants can move freely while wearing the mobile eye tracker, so the fixation targets depend not only on the eye movements but also on the body movements. These challenges also highlight that mobile eye tracking should not replace stationary eye tracking but is a complementary method particularly suited for addressing certain questions, such as how dogs interact with and acquire information about the natural environment and how they allocate their visual attention when interacting with humans (e.g., caregivers, trainers) and conspecifics.
Combining Eye Tracking with Behavioral Tests and Neuroimaging
In addition to preparing highly controllable but naturalistic stimuli and using mobile eye-tracking equipment that allows the subjects to behave naturally, a further improvement for the future is the combination of different methods to obtain convergent data about the same underlying perceptuo-cognitive processes. This kind of cross-validation could be achieved by combining eye tracking with behavioral tests such as preference tests. Another possibility would be to combine eye tracking with neuroimaging, or behavioral tests with neuroimaging, the latter aiming at solving the problem of reverse inference, that is, reasoning from a given brain activation pattern back to a cognitive process (Poldrack, 2011).
A first attempt at using a multimethod approach with dogs—combining eye tracking, behavioral preference tests, and functional magnetic resonance imaging (fMRI)—explored the engagement of an attachment-like system in dogs (Karl, Boch, Zamansky, et al., 2020). The results of several earlier behavioral studies had suggested that the dog–human relationship resembles the human mother–child bond, but the underlying mechanisms remained unclear. We presented morph videos of the caregiver, a familiar person, and a stranger showing either happy or angry facial expressions. Regardless of emotion, viewing the caregiver’s face activated brain regions associated with emotion and attachment processing. In contrast, the face of a stranger elicited activation mainly in brain regions related to visual and motor processing, and the face of a familiar person elicited relatively weak activations overall. Importantly, the eye-tracking data supported the superior role of the caregiver’s face and were in line with the findings from the fMRI test. These findings indicated that cutting across different levels, from brain to behavior, can provide novel and converging insights into the engagement of the putative attachment system in dogs, and they confirmed the advantages of multimethod approaches.
Eye Tracking in Wolves
An exciting future avenue for comparative eye-tracking studies would be direct comparisons among canids, particularly between dogs and wolves. Realistically, this would require a population of well-trained wolves (or other canids) such as the hand-raised individuals at the Wolf Science Center of the Vetmeduni Vienna (https://www.wolfscience.at/en/). These wolves are in daily training for various tests on cognition and cooperation, including tests on the touch screen, and are therefore used to images and videos shown on a computer screen. However, to our knowledge, wolves have not yet been tested with eye tracking, and there may be unforeseen obstacles related to the training or to the morphology of the eye (e.g., the recognition of the pupils by the eye tracker). A direct comparison between dogs and wolves on the basis of eye-tracking measures would help to elucidate how domestication shaped dogs’ visual attention, especially toward humans (Gácsi et al., 2005; Range & Marshall-Pescini, 2022; Range et al., 2015, 2019).
Conclusion
Because of the recency of the field, shared methodological and reporting practices are still missing, and many questions remain unanswered. For example, although the use of some form of head stabilization seems to reduce off-screen looking times and increase tracking accuracy, direct evidence for this relation in dog eye-tracking research is still missing. Sharing stimuli and replicating experimental paradigms across different laboratories with different setups will help address this issue. Additionally, providing more information on the achieved tracking accuracy (the validation results) will help to compare results across studies. Moreover, more research is needed to characterize dogs’ species-typical eye movements and to identify the best algorithms for classifying them into saccades and fixations (smooth pursuit has so far not been shown in dogs), given that fixation-related measures in dogs are more influenced by the choice of algorithm than are human data (Park et al., 2022).
Information that should be routinely reported in publications includes (a) a description of the calibration procedure, including the targets, the training (if performed), and how often calibration was performed during a session; (b) the dogs’ validation accuracy; (c) the algorithm and parameters used to filter the data and parse it into events (saccades, fixations); and (d) the dogs’ on-screen looking times, to ensure that data loss can be compared across studies (a machine-readable version of such a record is sketched below). In the spirit of open science principles, researchers might also consider sharing the raw and processed data on public repositories and preregistering, prior to data collection, their experimental hypotheses and predictions, as well as the size of the AoIs they plan to use and the exclusion criteria for subjects or trials (at least for confirmatory analyses).
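How such a record might look in machine-readable form is sketched below; the field names and values are our own illustration, not an established reporting schema.

```python
# Illustrative, machine-readable summary of reporting items (a)-(d);
# all field names and values are hypothetical.
session_report = {
    "calibration": {                                   # (a)
        "procedure": "3-point calibration with a moving target",
        "pretraining": True,
        "calibrations_per_session": 2,
    },
    "validation_accuracy_deg": 1.8,                    # (b) per-dog accuracy
    "event_detection": {                               # (c)
        "algorithm": "velocity threshold (I-VT)",
        "velocity_threshold_deg_per_s": 30,
        "min_fixation_duration_ms": 50,
    },
    "on_screen_looking_time_pct": 74.2,                # (d) enables data-loss comparison
}
```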
On the basis of the studies published so far, it is likely that the field will continue characterizing dogs’ perception of and response to referential communication and emotional expressions, although more attention could be devoted to further examining dogs’ possible recognition of the content of two-dimensional representations and their typical eye-movement parameters. Further, despite small sample sizes, individual differences in dogs’ eye metrics (e.g., fixation durations) and scan paths have started to be reported (Pelgrim & Buchsbaum, 2022) and could constitute a promising avenue for dog research, consistent with the proposal made by Arden et al. (2016). Adopting a comparative approach, dog researchers might in the future address questions similar to those already asked in the human infant and nonhuman primate literature. For example, although violation-of-expectation tasks have yielded positive results (e.g., Völter & Huber, 2021a, 2021b), novel avenues for research might include testing whether dogs show anticipatory looks in tasks that have been pivotal in the developmental and comparative study of social cognition (e.g., Gredebäck et al., 2009; Lewis & Krupenye, 2022).
In conclusion, the 10 years of investigating dogs’ perception and cognition with the aid of an eye tracker have shown which challenges and hurdles still lie ahead of us before we can fully exploit the possibilities of the method. Although researchers can take advantage of nearly a century of usage and continuous development of this technology in human psychophysics laboratories, and indeed much has been learned from it, solvable and unsolvable problems remain. The morphology and physiology of the canine visual system and the impossibility of (verbally) instructing the dog what to do are the main continuing difficulties, whereas improvements in stimulus, data, and documentation quality; in the ecological validity of the tasks; and in the naturalness of the dogs’ behavior, as well as better training and familiarization, are within reach. Much has already been achieved, and despite the current knowledge gaps and the somewhat artificial nature of the settings and stimuli, canine eye-tracking studies have allowed us to address new questions, provided new answers to old questions, and advanced our understanding of dog perception and cognition. Therefore, even stationary eye tracking with two-dimensional stimuli seems to be a promising technique for investigating these domains. Both the rapid progress in the field of information technology and the increasing understanding of dogs’ visual perception and cognitive processing encourage us to continue along the path already taken.
Acknowledgments
Funding for this work was provided by the Austrian Science Fund (FWF; W1262-B29) and by the Vienna Science and Technology Fund (WWTF), the City of Vienna, and ithuba Capital AG through project CS18-012.
Footnotes
Author Note: Ludwig Huber, Messerli Research Institute, University of Veterinary Medicine Vienna, Veterinaerplatz 1, 1210 Vienna, Austria.
References
- Abdai J, Ferdinandy B, Lengyel A, Miklósi Á. Animacy perception in dogs (Canis familiaris) and humans (Homo sapiens): Comparison may be perturbed by inherent differences in looking patterns. Journal of Comparative Psychology. 2021;135(1):82–88. doi: 10.1037/com0000250. [DOI] [PubMed] [Google Scholar]
- Abdai J, Ferdinandy B, Terencio CB, Pogány Á, Miklósi Á. Perception of animacy in dogs and humans. Biology Letters. 2017;13(6):20170156. doi: 10.1098/rsbl.2017.0156. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Abdai J, Miklósi Á. Selection for specific behavioural traits does not influence preference of chasing motion and visual strategy in dogs. Scientific Reports. 2022;12(1):2370. doi: 10.1038/s41598-022-06382-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Adachi I, Kuwahata H, Fujita K. Dogs recall their owner’s face upon hearing the owner’s voice. Animal Cognition. 2007;10(1):17–21. doi: 10.1007/s10071-006-0025-8. [DOI] [PubMed] [Google Scholar]
- Albuquerque N, Guo K, Wilkinson A, Savalli C, Otta E, Mills D. Dogs recognize dog and human emotions. Biology Letters. 2016;12(1):20150883. doi: 10.1098/rsbl.2015.0883. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ambrosini E, Sinigaglia C, Costantini M. Tie my hands, tie my eyes. Journal of Experimental Psychology: Human Perception and Performance. 2012;38:263–266. doi: 10.1037/a0026570. [DOI] [PubMed] [Google Scholar]
- Arden R, Bensky MK, Adams MJ. A review of cognitive abilities in dogs, 1911 through 2016: More individual differences, please! Current Directions in Psychological Science. 2016;25(5):307–312. doi: 10.1177/0963721416667718. [DOI] [Google Scholar]
- Aria M, Alterisio A, Scandurra A, Pinelli C, D’Aniello B. The scholar’s best friend: Research trends in dog cognitive and behavioral studies. Animal Cognition. 2021;24(3):541–553. doi: 10.1007/s10071-020-01448-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Aust U, Huber L. Picture-object recognition in pigeons: Evidence of representational insight in a visual categorization task using a complementary information procedure. Journal of Experimental Psychology: Animal Behavior Processes. 2006;32:190–195. doi: 10.1037/0097-7403.32.2.190. [DOI] [PubMed] [Google Scholar]
- Aust U, Range F, Steurer M, Huber L. Inferential reasoning by exclusion in pigeons, dogs, and humans. Animal Cognition. 2008;11(4):587–597. doi: 10.1007/s10071-008-0149-0. [DOI] [PubMed] [Google Scholar]
- Bálint A, Faragó T, Meike Z, Lenkei R, Miklósi Á, Pongrácz P. Do not choose as I do! Dogs avoid the food that is indicated by another dog’s gaze in a two-object choice task. Applied Animal Behaviour Science. 2015;170:44–53. doi: 10.1016/j.applanim.2015.06.005. [DOI] [Google Scholar]
- Barber ALA, Mills DS, Montealegre-Z F, Ratcliffe VF, Guo K, Wilkinson A. Functional performance of the visual system in dogs and humans: A comparative perspective. Comparative Cognition & Behavior Reviews. 2020;15:1–44. doi: 10.3819/ccbr.2020.150002. [DOI] [Google Scholar]
- Barber ALA, Müller E, Randi D, Müller CM, Huber L. Heart rate changes in pet and lab dogs as response to human facial expressions. ARC Journal of Animal and Veterinary Sciences. 2017;3(2):46–55. doi: 10.20431/2455-2518.0302005. [DOI] [Google Scholar]
- Barber ALA, Randi D, Müller CA, Huber L. The processing of human emotional faces by pet and lab dogs: evidence for lateralization and experience effects. PLOS ONE. 2016;11(4):e0152393. doi: 10.1371/journal.pone.0152393. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Beilin H. Understanding the photographic image. Journal of Applied Developmental Psychology. 1999;20(1):1–30. doi: 10.1016/S0193-3973(99)80001-X. [DOI] [Google Scholar]
- Bensky MK, Gosling SD, Sinn DL. The world from a dog’s point of view: A review and synthesis of dog cognition research. Advances in the Study of Behavior. 2013;45:209–406. doi: 10.1016/B978-0-12-407186-5.00005-7. [DOI] [Google Scholar]
- Binderlehner I. Referential understanding of videos in dogs—A cognitive task using the eye-tracking technology. University of Veterinary Medicine Vienna; 2017. Unpublished bachelor’s thesis. [Google Scholar]
- Bovet D, Vauclair J. Picture recognition in animals and humans. Behavioural Brain Research. 2000;109(2):143–165. doi: 10.1016/s0166-4328(00)00146-7. [DOI] [PubMed] [Google Scholar]
- Brauer J, Call J. The magic cup: Great apes and domestic dogs (Canis familiaris) individuate objects according to their properties. Journal of Comparative Psychology. 2011;125(3):353–361. doi: 10.1037/a0023009. [DOI] [PubMed] [Google Scholar]
- Bray EE, Gnanadesikan GE, Horschler DJ, Levy KM, Kennedy BS, Famula TR, MacLean EL. Early-emerging and highly heritable sensitivity to human communication in dogs. Current Biology. 2021;31(14):3132–3136. doi: 10.1016/j.cub.2021.04.055. [DOI] [PubMed] [Google Scholar]
- Buswell GT. How people look at pictures. University of Chicago Press; 1935. [Google Scholar]
- Butler S, Gilchrist ID, Burt DM, Perrett DI, Jones E, Harvey M. Are the perceptual biases found in chimeric face processing reflected in eye-movement patterns? Neuropsychologia. 2005;43(1):52–59. doi: 10.1016/j.neuropsychologia.2004.06.005. [DOI] [PubMed] [Google Scholar]
- Byosiere SE, Chouinard PA, Howell TJ, Bennett PC. What do dogs (Canis familiaris) see? A review of vision in dogs and implications for cognition research. Psychonomic Bulletin & Review. 2018;25(5):1798–1813. doi: 10.3758/s13423-017-1404-7. [DOI] [PubMed] [Google Scholar]
- Cannon EN, Woodward AL. Action anticipation and interference: A test of prospective gaze. Cogsci. 2008;2008:981–985. [PMC free article] [PubMed] [Google Scholar]
- Catala A, Mang B, Wallis L, Huber L. Dogs demonstrate perspective taking based on geometrical gaze following in a Guesser–Knower task. Animal Cognition. 2017;20(4):581–589. doi: 10.1007/s10071-017-1082-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Coile DC, Pollitz CH, Smith JC. Behavioral determination of critical flicker fusion in dogs. Physiology and Behavior. 1989;45(6):1087–1092. doi: 10.1016/0031-9384(89)90092-9. [DOI] [PubMed] [Google Scholar]
- Correia-Caeiro C, Guo K, Mills DS. Perception of dynamic facial expressions of emotion between dogs and humans. Animal Cognition. 2020;23(3):465–476. doi: 10.1007/s10071-020-01348-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Correia-Caeiro C, Guo K, Mills D. Bodily emotional expressions are a primary source of information for dogs, but not for humans. Animal Cognition. 2021;24(2):267–279. doi: 10.1007/s10071-021-01471-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dasser V. Slides of group members as representations of real animals (Macaca fascicularis) Ethology. 1987;76:65–73. doi: 10.1111/j.1439-0310.1987.tb00672.x. [DOI] [Google Scholar]
- D’Eath RB. Can video images imitate real stimuli in animal behaviour experiments? Biological Reviews. 1998;73:267–292. doi: 10.1017/S0006323198005179. [DOI] [Google Scholar]
- Delabarre EB. A method of recording eye-movements. The American Journal of Psychology. 1898;9(4):572–574. doi: 10.2307/1412191. [DOI] [Google Scholar]
- Delay C. Tracking the gaze pattern of pet dogs during gestural communication. University of Zurich; 2016. Unpublished master’s thesis. [Google Scholar]
- Dell’Osso L, Williams R, Jacobs J, Erchul D. The congenital and see-saw nystagmus in the prototypical achiasma of canines: Comparison to the human achiasmatic prototype. Vision Research. 1998;38(11):1629–1642. doi: 10.1016/S0042-6989(97)00337-4. [DOI] [PubMed] [Google Scholar]
- Duchowski AT. Eye tracking methodology: Theory and practice. Springer; 2007. [DOI] [Google Scholar]
- Eatherington CJ, Mongillo P, Lõoke M, Marinelli L. Dogs fail to recognize a human pointing gesture in two-dimensional depictions of motion cues. Behavioural Processes. 2021;189:104425. doi: 10.1016/j.beproc.2021.104425. [DOI] [PubMed] [Google Scholar]
- Fagot J, editor. Picture perception in animals. Psychology Press; 2000. [DOI] [Google Scholar]
- Fagot J, Thompson RK, Parron C. How to read a picture: Lessons from nonhuman primates. Proceedings of the National Acadamy of Sciences USA. 2010;107(2):519–520. doi: 10.1073/pnas.0913577107. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gácsi M, Gyori B, Miklósi A, Virányi Z, Kubinyi E, Topál J, Csányi V. Species-specific differences and similarities in the behavior of hand-raised dog and wolf pups in social situations with humans. Developmental Psychobiology. 2005;47(2):111–122. doi: 10.1002/dev.20082. [DOI] [PubMed] [Google Scholar]
- Gácsi M, McGreevy P, Kara E, Miklósi Á. Effects of selection for cooperation and attention in dogs. Behavioral and Brain Functions. 2009;5(1):31. doi: 10.1186/1744-9081-5-31. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gácsi M, Miklósi A, Varga O, Topál J, Csányi V. Are readers of our face readers of our minds? Dogs (Canis familiaris) show situation-dependent recognition of human’s attention. Animal Cognition. 2004;7(3):144–153. doi: 10.1007/s10071-003-0205-8. [DOI] [PubMed] [Google Scholar]
- Gergely A, Petró E, Oláh K, Topál J. Auditory–visual matching of conspecifics and non-conspecifics by dogs and human infants. Animals. 2019b;9(1):17. doi: 10.3390/ani9010017. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gredebäck G, Johnson S, Hofsten C. Eye tracking in infancy research. Developmental Neuropsychology. 2009;35(1):1–19. doi: 10.1080/87565640903325758. [DOI] [PubMed] [Google Scholar]
- Guo K, Meints K, Hall C, Hall S, Mills D. Left gaze bias in humans, rhesus monkeys and domestic dogs. Animal Cognition. 2009;12(3):409–418. doi: 10.1007/s10071-008-0199-3. [DOI] [PubMed] [Google Scholar]
- Hare B, Brown M, Williamson C, Tomasello M. The domestication of social cognition in dogs. Science. 2002;298(5598):1634–1636. doi: 10.1126/science.1072702. [DOI] [PubMed] [Google Scholar]
- Hayes TR, Petrov AA. Mapping and correcting the influence of gaze position on pupil size measurements. Behavior Research Methods. 2016;48(2):510–527. doi: 10.3758/s13428-015-0588-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Henderson JM. Human gaze control during real-world scene perception. Trends in Cognitive Sciences. 2003;7(11):498–504. doi: 10.1016/j.tics.2003.09.006. [DOI] [PubMed] [Google Scholar]
- Henderson JM. Gaze control as prediction. Trends in Cognitive Sciences. 2017;21:15–23. doi: 10.1016/j.tics.2016.11.003. [DOI] [PubMed] [Google Scholar]
- Hess EH, Polt JM. Pupil size as related to interest value of visual stimuli. Science. 1960;132(3423):349–350. doi: 10.1126/science.132.3423.349. [DOI] [PubMed] [Google Scholar]
- Hessels RS, Andersson R, Hooge IT, Nyström M, Kemner C. Consequences of eye color, positioning, and head movement for eye-tracking data quality in infant research. Infancy. 2015;20(6):601–633. doi: 10.1111/infa.12093. [DOI] [Google Scholar]
- Hessels RS, Hooge ITC. Eye tracking in developmental cognitive neuroscience—The good, the bad and the ugly. Developmental Cognitive Neuroscience. 2019;40:100710. doi: 10.1016/j.dcn.2019.100710. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hessels RS, Niehorster DC, Nystrom M, Andersson R, Hooge ITC. Is the eye-movement field confused about fixations and saccades? A survey among 124 researchers. Royal Society Open Science. 2018;5(8):180502. doi: 10.1098/rsos.180502. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Holmqvist K, Andersson R. Eye tracking: A comprehensive guide to methods, paradigms and measures. Lund Eye-Tracking Research Institute; 2017. [Google Scholar]
- Holmqvist K, Nyström M, Andersson R, Dewhurst R, Jarodzka H, Weijer J. Eye tracking: A comprehensive guide to methods and measures. Oxford University Press; 2011. [Google Scholar]
- Holmqvist K, Örbom SL, Hooge ITC, Niehorster DC, Alexander RG, Andersson R, Benjamins JS, Blignaut P, Brouwer AM, Chuang LL, Dalrymple KA, et al. Eye tracking: Empirical foundations for a minimal reporting guideline. Behavior Research Methods. 2022;55:364–416. doi: 10.3758/s13428-021-01762-8. [DOI] [PMC free article] [PubMed] [Google Scholar] [Retracted]
- Holmqvist K, Örbom SL, Zemblys R. Small head movements increase and colour noise in data from five video-based P-CR eye trackers. Behavior Research Methods. 2022;54(2):845–863. doi: 10.3758/s13428-021-01648-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hopper LM, Gulli RA, Howard LH, Kano F, Krupenye C, Ryan AM, Paukner A. The application of noninvasive, restraint-free eye-tracking methods for use with nonhuman primates. Behavior Research Methods. 2021;53(3):1003–1030. doi: 10.3758/s13428-020-01465-6. [DOI] [PubMed] [Google Scholar]
- Huber L. How dogs perceive and understand us. Current Directions in Psychological Science. 2016;25(5):339–344. doi: 10.1177/0963721416656329. [DOI] [Google Scholar]
- Huber L, Racca A, Scaf B, Virányi Z, Range F. Discrimination of familiar human faces in dogs (Canis familiaris) Learning and Motivation. 2013;44(4):258–269. doi: 10.1016/j.lmot.2013.04.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Jackson I, Sirois S. Infant cognition: Going full factorial with pupil dilation. Developmental Science. 2009;12(4):670–679. doi: 10.1111/j.1467-7687.2008.00805.x. [DOI] [PubMed] [Google Scholar]
- Kaminski J, Marshall-Pescini S, editors. The social dog: Behaviour and cognition. Academic Press; 2014. [DOI] [Google Scholar]
- Kaminski J, Nitzschner M. Do dogs get the point? A review of dog–human communication ability. Learning and Motivation. 2013;44(4):294–302. doi: 10.1016/j.lmot.2013.05.001. [DOI] [Google Scholar]
- Kaminski J, Tempelmann S, Call J, Tomasello M. Domestic dogs comprehend human communication with iconic signs. Developmental Science. 2009;12(6):831–837. doi: 10.1111/j.1467-7687.2009.00815.x. https://doi.org/DESC815 . [DOI] [PubMed] [Google Scholar]
- Kano F, Tomonaga M. How chimpanzees look at pictures: A comparative eye-tracking study. Proceedings of the Royal Society B: Biological Sciences. 2009;276(1664):1949–1955. doi: 10.1098/rspb.2008.1811. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Karl S, Boch M, Virányi Z, Lamm C, Huber L. Training pet dogs for eye-tracking and awake fMRI. Behavior Research Methods. 2020;52(2):838–856. doi: 10.3758/s13428-019-01281-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Karl S, Boch M, Zamansky A, van der Linden D, Wagner IC, Völter CJ, Lamm C, Huber L. Exploring the dog–human relationship by combining fMRI, eye-tracking and behavioural measures. Scientific Reports. 2020;10(1):22273. doi: 10.1038/s41598-020-79247-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Karl S, Sladky R, Lamm C, Huber L. Neural responses of pet dogs witnessing their caregiver’s positive interactions with a conspecific: An fMRI study. Cerebral Cortex Communications. 2021;2(3):tgab047. doi: 10.1093/texcom/tgab047. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kis A, Hernádi A, Miklósi B, Kanizsár O, Topál J. The way dogs (Canis familiaris) look at human emotional faces is modulated by oxytocin. An eye-tracking study. Frontiers in Behavioral Neuroscience. 2017;11:210. doi: 10.3389/fnbeh.2017.00210. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kowler E. Eye movements: The past 25 years. Vision Research. 2011;51(13):1457–1483. doi: 10.1016/j.visres.2010.12.014. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Land MF. Motion and vision: Why animals move their eyes. Journal of Comparative Physiology A. 1999;185(4):341–352. doi: 10.1007/s003590050393. [DOI] [PubMed] [Google Scholar]
- Leopold DA, Rhodes G. A comparative view of face perception. Journal of Comparative Psychology. 2010;124(3):233–251. doi: 10.1037/a0019460. https://doi.org/2010-16166-001 . [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lewis LS, Krupenye C. Eye-tracking as a window into primate social cognition. American Journal of Primatology. 2022;84(10):e23393. doi: 10.1002/ajp.23393. [DOI] [PubMed] [Google Scholar]
- Loftus GR. Eye fixations and recognition memory for pictures. Cognitive Psychology. 1972;3:525–551. doi: 10.1016/0010-0285(72)90021-7. [DOI] [Google Scholar]
- Maginnity ME, Grace RC. Visual perspective taking by dogs (Canis familiaris) in a Guesser-Knower task: Evidence for a canine theory of mind? Animal Cognition. 2014;17(6):1375–1392. doi: 10.1007/s10071-014-0773-9. [DOI] [PubMed] [Google Scholar]
- ManyDogs Project. Espinosa J, Bray E, Buchsbaum D, Byosiere S-E, Byrne M, Freeman MS, Gnanadesikan GE, Guran C-NA, Horschler DJ, Huber L, et al. ManyDogs 1: A multilab replication study of dogs’ pointing comprehension. PsyArXiv. 2021 doi: 10.31234/osf.io/f86jq. [DOI] [Google Scholar]
- Mathôt S. Pupillometry: Psychology, physiology, and function. Journal of Cognition. 2018;1(1):16. doi: 10.5334/joc.18. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Mathôt S, Fabius J, Heusden E, Stigchel S. Safe and sensible preprocessing and baseline correction of pupil-size data. Behavior Research Methods. 2018;50(1):94–106. doi: 10.3758/s13428-017-1007-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- McGreevy PD, Georgevsky D, Carrasco J, Valenzuela M, Duffy DL, Serpell JA. Dog behavior co-varies with height, bodyweight and skull shape. PLoS ONE. 2013;8(12):80529. doi: 10.1371/journal.pone.0080529. [DOI] [PMC free article] [PubMed] [Google Scholar]
- McGreevy P, Grassi TD, Harman AM. A strong correlation exists between the distribution of retinal ganglion cells and nose length in the dog. Brain, Behavior and Evolution. 2004;63(1):13–22. doi: 10.1159/000073756. [DOI] [PubMed] [Google Scholar]
- Menzel EW, Premack D, Woodruff G. Map reading in chimpanzee. Folia Primatologica. 1978;29:241–249. doi: 10.1159/000155845. [DOI] [PubMed] [Google Scholar]
- Merola I, Prato-Previde E, Marshall-Pescini S. Social referencing in dog-owner dyads? Animal Cognition. 2012;15(2):175–185. doi: 10.1007/s10071-011-0443-0. [DOI] [PubMed] [Google Scholar]
- Miklósi Á. Dog behaviour, evolution and cognition. 2nd Oxford University Press; 2015. [Google Scholar]
- Miklósi A, Polgárdi R, Topál J, Csányi V. Use of experimenter-given cues in dogs. Animal Cognition. 1998;1:113–121. doi: 10.1007/s100710050016. [DOI] [PubMed] [Google Scholar]
- Miller PE, Murphy CJ. Vision in dogs. Journal of the American Veterinary Medical Association. 1995;207:1623–1634. [PubMed] [Google Scholar]
- Mongillo P, Bono G, Regolin L, Marinelli L. Selective attention to humans in companion dogs, Canis familiaris) Animal Behaviour. 2010;80(6):1057–1063. doi: 10.1016/j.anbehav.2010.09.014. [DOI] [Google Scholar]
- Mongillo P, Eatherington C, Lõoke M, Marinelli L. I know a dog when I see one: Dogs (Canis familiaris) recognize dogs from videos. Animal Cognition. 2021;24(5):969–979. doi: 10.1007/s10071-021-01470-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Müller CA, Mayer C, Dörrenberg S, Huber L, Range F. Female but not male dogs respond to a size constancy violation. Biology Letters. 2011;7(5):689–691. doi: 10.1098/rsbl.2011.0287. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Müller CA, Schmitt K, Barber ALA, Huber L. Dogs can discriminate emotional expressions of human faces. Current Biology. 2015;25(5):601–605. doi: 10.1016/j.cub.2014.12.055. [DOI] [PubMed] [Google Scholar]
- Nagasawa M, Murai K, Mogi K, Kikusui T. Dogs can discriminate human smiling faces from blank expressions. Animal Cognition. 2011;14(4):525–533. doi: 10.1007/s10071-011-0386-5. [DOI] [PubMed] [Google Scholar]
- Niehorster DC, Cornelissen TH, Holmqvist K, Hooge IT, Hessels RS. What to expect from your remote eye-tracker when participants are unrestrained. Behavior Research Methods. 2018;50(1):213–227. doi: 10.3758/s13428-017-0863-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ogura T, Maki M, Nagata S, Nakamura S. Dogs (Canis familiaris) gaze at our hands: A preliminary eye-tracker experiment on selective attention in dogs. Animals. 2020;10(5):755. doi: 10.3390/ani10050755. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Osthaus B, Slater AM, Lea SE. Can dogs defy gravity? A comparison with the human infant and a non-human primate. Developmental Science. 2003;6(5):489–497. doi: 10.1111/1467-7687.00306. [DOI] [Google Scholar]
- Park SY, Bacelar CE, Holmqvist K. Dog eye movements are slower than human eye movements. Journal of Eye Movement Research. 2020;12(8) doi: 10.16910/jemr.12.8.4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Park SY, Holmqvist K, Niehorster DC, Huber L, Virányi Z. How to improve data quality in dog eye tracking. Behavior Research Methods. 2022 doi: 10.3758/s13428-022-01788-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pattison KF, Laude JR, Zentall TR. The case of the magic bones: Dogs’ memory of the physical properties of objects. Learning and Motivation. 2013;44(4):252–257. doi: 10.1016/j.lmot.2013.04.003. [DOI] [Google Scholar]
- Pattison KF, Miller HC, Rayburn-Reeves R, Zentall T. The case of the disappearing bone: Dogs’ understanding of the physical properties of objects. Behav Processes. 2010;85(3):278–282. doi: 10.1016/j.beproc.2010.06.016. [DOI] [PubMed] [Google Scholar]
- Pelgrim MH, Buchsbaum D. Categorizing Dogs’ Real World Visual Statistics. Proceedings of the Annual Meeting of the Cognitive Science Society. 2022;44:448. https://escholarship.org/uc/item/769022r9 . [Google Scholar]
- Pelgrim MH, Espinosa J, Buchsbaum D. Head-mounted mobile eye-tracking in the domestic dog: A new method. Behavior Research Methods. 2022 doi: 10.3758/s13428-022-01907-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Péter A, Miklósi Á, Pongrácz P. Domestic dogs’ (Canis familiaris) understanding of projected video images of a human demonstrator in an object-choice task. Ethology. 2013;119(10):898–906. doi: 10.1111/eth.12131. [DOI] [Google Scholar]
- Pitteri E, Mongillo P, Carnier P, Marinelli L, Huber L. Part-based and configural processing of owner’s face in dogs. PLoS ONE. 2014;9(9) doi: 10.1371/journal.pone.0108176. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Poldrack RA. Inferring mental states from neuroimaging data: From reverse inference to large-scale decoding. Neuron. 2011;72(5):692–697. doi: 10.1016/j.neuron.2011.11.001.
- Pongrácz P, Miklósi Á, Dóka A, Csányi V. Successful application of video-projected human images for signalling to dogs. Ethology. 2003;109(10):809–821. doi: 10.1046/j.0179-1613.2003.00923.x.
- Pongrácz P, Péter A, Miklósi Á. Familiarity with images affects how dogs (Canis familiaris) process life-size video projections of humans. Quarterly Journal of Experimental Psychology. 2018;71(6):1457–1468. doi: 10.1080/17470218.2017.1333623.
- Prichard A, Chhibber R, Athanassiades K, Chiu V, Spivak M, Berns GS. The mouth matters most: A functional magnetic resonance imaging study of how dogs perceive inanimate objects. Journal of Comparative Neurology. 2021a;529(11):2987–2994. doi: 10.1002/cne.25142.
- Prichard A, Chhibber R, Athanassiades K, Chiu V, Spivak M, Berns GS. 2D or not 2D? An fMRI study of how dogs visually process objects. Animal Cognition. 2021b;24(5):1143–1151. doi: 10.1007/s10071-021-01506-3.
- Racca A, Amadei E, Ligout S, Guo K, Meints K, Mills D. Discrimination of human and dog faces and inversion responses in domestic dogs (Canis familiaris). Animal Cognition. 2010;13(3):525–533. doi: 10.1007/s10071-009-0303-3.
- Racca A, Guo K, Meints K, Mills DS. Reading faces: Differential lateral gaze bias in processing canine and human facial expressions in dogs and 4-year-old children. PLoS ONE. 2012;7(4):e36076. doi: 10.1371/journal.pone.0036076.
- Range F, Aust U, Steurer M, Huber L. Visual categorization of natural stimuli by domestic dogs. Animal Cognition. 2008;11(2):339–347. doi: 10.1007/s10071-007-0123-2.
- Range F, Marshall-Pescini S. Comparing wolves and dogs: Current status and implications for human ‘self-domestication’. Trends in Cognitive Sciences. 2022;26(4):337–349. doi: 10.1016/j.tics.2022.01.003.
- Range F, Marshall-Pescini S, Kratz C, Virányi Z. Wolves lead and dogs follow, but they both cooperate with humans. Scientific Reports. 2019;9(1):3796. doi: 10.1038/s41598-019-40468-y.
- Range F, Ritter C, Virányi Z. Testing the myth: Tolerant dogs and aggressive wolves. Proceedings of the Royal Society B: Biological Sciences. 2015;282:20150220. doi: 10.1098/rspb.2015.0220.
- Risko EF, Kingstone A. Eyes wide shut: Implied social presence, eye tracking and attention. Attention, Perception, & Psychophysics. 2011;73:291–296. doi: 10.3758/s13414-010-0042-1.
- Rossi A, Smedema D, Parada FJ, Allen C. Visual attention in dogs and the evolution of non-verbal communication. In: Horowitz A, editor. Domestic dog cognition and behavior. Springer; 2014. p. 133–154. doi: 10.1007/978-3-642-53994-7_6.
- Salva OR, Regolin L, Mascalzoni E, Vallortigara G. Cerebral and behavioural asymmetries in animal social recognition. Comparative Cognition & Behavior Reviews. 2012;7:110–138. doi: 10.3819/ccbr.2012.70006.
- Schleicher R, Galley N, Briest S, Galley L. Blinks and saccades as indicators of fatigue in sleepiness warnings: Looking tired? Ergonomics. 2008;51(7):982–1010. doi: 10.1080/00140130701817062.
- Scholl BJ, Tremoulet PD. Perceptual causality and animacy. Trends in Cognitive Sciences. 2000;4(8):299–309. doi: 10.1016/s1364-6613(00)01506-0.
- Schwab C, Huber L. Obey or not obey? Dogs (Canis familiaris) behave differently in response to attentional states of their owners. Journal of Comparative Psychology. 2006;120(3):169–175. doi: 10.1037/0735-7036.120.3.169.
- Schyns PG, Petro LS, Smith ML. Dynamics of visual information integration in the brain for categorizing facial expressions. Current Biology. 2007;17(18):1580–1585. doi: 10.1016/j.cub.2007.08.048.
- Senju A, Csibra G. Gaze following in human infants depends on communicative signals. Current Biology. 2008;18:668–671. doi: 10.1016/j.cub.2008.03.059.
- Siniscalchi M, Sasso R, Pepe AM, Vallortigara G, Quaranta A. Dogs turn left to emotional stimuli. Behavioural Brain Research. 2010;208(2):516–521. doi: 10.1016/j.bbr.2009.12.042.
- Smith ML, Cottrell GW, Gosselin F, Schyns PG. Transmitting and decoding facial expressions. Psychological Science. 2005;16(3):184–189. doi: 10.1111/j.0956-7976.2005.00801.x.
- Somppi S, Törnqvist H, Hänninen L, Krause C, Vainio O. Dogs do look at images: Eye tracking in canine cognition research. Animal Cognition. 2012;15(2):163–174. doi: 10.1007/s10071-011-0442-1.
- Somppi S, Törnqvist H, Hänninen L, Krause CM, Vainio O. How dogs scan familiar and inverted faces: An eye movement study. Animal Cognition. 2014;17(3):793–803. doi: 10.1007/s10071-013-0713-0.
- Somppi S, Törnqvist H, Kujala MV, Hänninen L, Krause CM, Vainio O. Dogs evaluate threatening facial expressions by their biological validity—Evidence from gazing patterns. PLoS ONE. 2016;11(1):e0143047. doi: 10.1371/journal.pone.0143047.
- Somppi S, Törnqvist H, Topál J, Koskela A, Hänninen L, Krause CM, Vainio O. Nasal oxytocin treatment biases dogs’ visual attention and emotional response toward positive human facial expressions. Frontiers in Psychology. 2017;8:1854. doi: 10.3389/fpsyg.2017.01854.
- Soproni K, Miklósi Á, Topál J, Csányi V. Comprehension of human communicative signs in pet dogs (Canis familiaris). Journal of Comparative Psychology. 2001;115(2):122–126. doi: 10.1037/0735-7036.115.2.122.
- Spetch ML. Understanding how pictures are seen is important for comparative cognition. Comparative Cognition & Behavior Reviews. 2010;5:163–166. doi: 10.3819/ccbr.2010.50013.
- Tauzin T, Csik A, Kis A, Topál J. What or where? The meaning of referential human pointing for dogs (Canis familiaris). Journal of Comparative Psychology. 2015;129(4):334–338. doi: 10.1037/a0039462.
- Tecwyn EC, Buchsbaum D. What factors really influence domestic dogs’ (Canis familiaris) search for an item dropped down a diagonal tube? The tubes task revisited. Journal of Comparative Psychology. 2019;133(1):4–19. doi: 10.1037/com0000145.
- Téglás E, Gergely A, Kupán K, Miklósi Á, Topál J. Dogs’ gaze following is tuned to human communicative signals. Current Biology. 2012;22(3):209–212. doi: 10.1016/j.cub.2011.12.018.
- Törnqvist H, Somppi S, Koskela A, Krause CM, Vainio O, Kujala MV. Comparison of dogs and humans in visual scanning of social interaction. Royal Society Open Science. 2015;2(9):150341. doi: 10.1098/rsos.150341.
- Törnqvist H, Somppi S, Kujala MV, Vainio O. Observing animals and humans: Dogs target their gaze to the biological information in natural scenes. PeerJ. 2020;8:e10341. doi: 10.7717/peerj.10341.
- Tremoulet PD, Feldman J. Perception of animacy from the motion of a single object. Perception. 2000;29(8):943–951. doi: 10.1068/p3101.
- van Rij J, Hendriks P, van Rijn H, Baayen RH, Wood SN. Analyzing the time course of pupillometric data. Trends in Hearing. 2019;23:1–22. doi: 10.1177/2331216519832483.
- Völter CJ, Huber L. Dogs’ looking times and pupil dilation response reveal expectations about contact causality. Biology Letters. 2021a;17(12):20210465. doi: 10.1098/rsbl.2021.0465.
- Völter CJ, Huber L. Expectancy violations about physical properties of animated objects in dogs [Preprint]. PsyArXiv. 2021b. doi: 10.31234/osf.io/3pr9z.
- Völter CJ, Huber L. Pupil size changes reveal dogs’ sensitivity to motion cues. iScience. 2022;25(9):104801. doi: 10.1016/j.isci.2022.104801.
- Völter CJ, Karl S, Huber L. Dogs accurately track a moving object on a screen and anticipate its destination. Scientific Reports. 2020;10(1):1–10. doi: 10.1038/s41598-020-72506-5.
- Wallis LJ, Range F, Müller CA, Serisier S, Huber L, Virányi Z. Training for eye contact modulates gaze following in dogs. Animal Behaviour. 2015;106:27–35. doi: 10.1016/j.anbehav.2015.04.020.
- Wang CA, Munoz DP. A circuit for pupil orienting responses: Implications for cognitive modulation of pupil size. Current Opinion in Neurobiology. 2015;33:134–140. doi: 10.1016/j.conb.2015.03.018.
- Wass SV, Forssman L, Leppänen J. Robustness and precision: How data quality may influence key dependent variables in infant eye-tracker analyses. Infancy. 2014;19(5):427–460. doi: 10.1111/infa.12055.
- Williams FJ, Mills DS, Guo K. Development of a head-mounted, eye-tracking system for dogs. Journal of Neuroscience Methods. 2011;194(2):259–265. doi: 10.1016/j.jneumeth.2010.10.022.
- Yarbus AL. Eye movements and vision. Plenum Press; 1967.