BMC Veterinary Research. 2025 May 24;21:372. doi: 10.1186/s12917-025-04839-0

Continuous automated analysis of facial dynamics of brachycephalic and normocephalic dogs in different contexts

George Martvel 1,#, Petra Eretová 2,#, Lucie Přibylová 2, Helena Chaloupková 2, Péter Pongrácz 3, Ilan Shimshoni 1, Noam Chen Cittone 1, Yuval Michaeli 1, Dan Grinstein 1, Anna Zamansky 1,
PMCID: PMC12102829  PMID: 40413474

Abstract

This study develops a novel automated method for measuring the continuous dynamics of dog facial behavior based on video-based tracking of 46 facial landmarks grounded in the Dog Facial Action Coding System. The method is applied to compare the facial behavior of brachycephalic (Boston Terrier, n = 7) and normocephalic (Jack Russell Terrier, n = 7) dogs in four contexts eliciting different inner states: positive (play and called by name) and negative (separation and stranger). Having objectively quantified facial dynamics in brachycephalic and normocephalic dogs, we found that brachycephalic dogs exhibited consistently lower facial dynamics than normocephalic dogs across all four tested contexts and facial regions (eyes, mouth, and ears). They further demonstrated relatively higher dynamics in the positive play and negative stranger conditions than in the other two conditions. In contrast, normocephalic dogs showed elevated dynamics exclusively in the positive play condition, with significantly reduced dynamics in the negative stranger condition. These findings highlight distinct patterns of facial expressivity between the two morphological groups, suggesting decreased facial expression in brachycephalic dogs and demonstrating our method’s value in providing novel insights into canine communication.

Keywords: Dog emotion, Brachycephaly, Facial expressions, Facial dynamics, Facial landmarks, Artificial intelligence, Machine learning, Automated behavior analysis

Introduction

Emotions are complex states that involve physiological, cognitive, and behavioral components [1]. Today, it is widely recognized that facial expressions provide significant visual cues for identifying emotional states. In humans, facial expressions are a primary form of nonverbal communication that helps regulate social interactions [2]. The link between facial expressions and emotions has been extensively established in human psychology [3, 4].

Not only humans but also non-human animals, especially mammalian species, produce facial expressions, which are believed to convey information about their emotional states [5, 6]. Therefore, there is increasing interest in facial expressions as potential indicators of emotions and welfare. For example, assessing pain can be difficult in species that otherwise give no or only very weak behavioral signs of painful states. However, based on validated facial assessments of pain-related expressions, veterinary science can now rely on objective methods in several species, among them horses [7] and cats [8].

Domestic dogs are increasingly popular in science, serving as valuable clinical models for numerous human disorders [9] and as subjects of research on domestication [10], attachment [11, 12], human-animal interactions [13, 14], and more. However, the scientific understanding of canine visual signaling is still somewhat limited. While animal facial expressions were once considered purely involuntary, it has recently been shown that dogs actively use facial expressions, particularly when interacting with an attentive human audience [15], highlighting the role of the human partner in canine communication. Some studies have focused on dog-human relationships, addressing both dogs’ perception of human emotions [16] and dog emotions as perceived by humans [17, 18]. The remarkable diversity of dog facial phenotypes resulting from selective breeding [19] raises fascinating questions about the impact of dog facial features on dogs’ ability to produce visual cues in their communication with humans and on humans’ ability to understand them [20].

Sexton et al. [21] investigated dogs’ facial complexity and behaviors and found that dogs with less complex facial coloring (fewer markings) were objectively scored as more behaviorally expressive than those with colored markings on their faces. A recent study by Eretová et al. [22] investigated the effect of brachycephaly on humans’ ability to interpret visual signals in dogs exposed to different situations. Participants more often mistakenly attributed positive emotions in negative situations to brachycephalic dogs (Boston Terrier) than to normocephalic dogs (Jack Russell Terrier). The study provided further evidence that brachycephalic dogs may have a reduced capacity to signal emotions through facial expressions, as previously suggested [23]. This is because parts of the facial musculature, particularly the m. zygomaticus (which pulls back the corner of the mouth), are malpositioned in brachycephalic dogs, while other muscles (such as the m. nasolabialis) are folded and malfunctioning [24]. The decreased facial activity appears to negatively affect human understanding of canine facial expressions and thus can potentially harm the animal’s welfare. Taken together, the above-mentioned studies on the interpretation of canine signals indicate that changes in facial coloration or skull shape may distort humans’ ability to perceive dogs’ facial expressions.

The most commonly used method for objectively assessing changes in dog facial expressions is an adaptation of the Facial Action Coding System (FACS), widely used in human emotion studies [25]. Recently, it has been extended to many species, including dogs [26]. Indeed, DogFACS has been applied in several studies [21, 26–29] to measure facial changes in dogs objectively. However, this method has some serious limitations. First, it depends on laborious manual annotation, which requires extensive human training and certification and may still be prone to at least some level of human error or bias [30]. First steps toward automating the detection of facial movements (action units) in dogs were taken by Boneh-Shitrit et al. [31], but much more data is needed for substantial progress in this direction. Another limitation of the DogFACS coding method is its discrete nature. The method captures discrete facial action units and descriptors based on the underlying facial musculature, but these may not necessarily capture dynamics that are not directly induced by muscle movement (e.g., muscle tightening).

Approaches based on facial landmarks may offer an alternative for facial analysis that can capture a continuous stream of information from animals’ facial visual cues. An important advantage of landmark-based approaches is their ease of application to video data, producing time series of multiple landmark coordinates. Such uses of landmark time-series data are just beginning to be explored in the domain of animal behavior, primarily for body landmarks (see, e.g., Labuguen et al. [32] and Wiltshire et al. [33]), while facial analysis remains largely underexplored.

Martvel et al. recently addressed this gap for cat facial analysis by introducing a dataset and the Ensemble Landmark Detector (ELD) model for 48 anatomy-based cat facial landmarks [34]. The detector performed well in tasks requiring subtle facial analysis, such as breed, cephalic type, and pain recognition [35, 36], as well as classification of social interactions [37]. Martvel, Abele, et al. [38] extended the ELD model to dogs using a landmark scheme grounded in DogFACS and creating the Dog Facial Landmarks in the Wild (DogFLW) dataset. Figure 1 shows an example of a landmark scheme containing 46 facial landmarks from the DogFLW.

Fig. 1. Forty-six dog facial landmarks based on DogFACS, from Martvel, Abele, et al. [38]

This paper applies the ELD model to develop a novel approach for quantifying continuous facial behavior changes in dogs. We do so by introducing a rigorously defined mathematical metric for analyzing the dynamics of facial visual cues, taking into account multiple scales and camera distances, and separating relative facial movement from the absolute movement of the dog in the video. This approach utilizes a continuous stream of information obtained from automated tracking of dog facial landmarks and converts the relative movement into facial dynamics. Notably, the method can be used with any automated landmark tracking approach and generalized to other species. Another benefit of the approach is that it supports a separate analysis of dynamics associated with various facial parts (e.g., eyes, muzzle, ears, etc.) and is resistant to noise (occlusions, camera movement, etc.).

To demonstrate the usefulness of the proposed novel measurement tool, we apply it to the dataset previously collected by Eretová et al. [22], in which video footage was collected of brachycephalic dogs (Boston Terriers) and normocephalic dogs (Jack Russell Terriers) in four contexts: play, separation from the owner (separation), being called by name (called), and being threatened by a stranger (stranger). These contexts were designed to elicit visual cues that could be related to different emotional states. We use the proposed metrics to study whether there are differences in facial dynamics between the Boston Terrier and Jack Russell Terrier individuals across and within contexts.

Methods

Dataset

We filtered all videos from the original study [22] to balance the number of dogs of the brachycephalic and normocephalic types. The final dataset comprised 230 videos recorded at 50 fps with a mean length of 8.55±2.56 seconds, featuring 14 dogs: 7 brachycephalic (Boston Terrier; 4 females, 3 males; mean age: 2.71±1.88 years; 97 videos, 14±6 videos per dog) and 7 normocephalic (Jack Russell Terrier; 5 females, 2 males; mean age: 2.64±0.64 years; 133 videos, 19±3 videos per dog). For a full breakdown of all dogs used in the study, please see Table 8 in the Supplementary Materials. We discarded videos of the stranger condition for two normocephalic and one brachycephalic dog and videos of the separation condition for one brachycephalic dog, due to either the dog’s hyperreaction or the absence of automatically detected landmarks. Below, we briefly reproduce the experimental protocol from Eretová et al. for completeness. Further details can be found in the original study.

Conditions

Two Experimenters conducted the experiment in a laboratory environment at the Department of Ethology, ELTE, Budapest, Hungary. The recorded situations followed 5 minutes of free exploration, during which owners were present but instructed not to engage with the dogs. The experiment consisted of eight phases (four situations and four interludes between the situations), each lasting 1 minute. Before the first situation started, the dog was put on a leash attached to a handle in front of the chair on which the owner was sitting. Each condition had previously been validated to non-invasively induce positive (Conditions 1 and 2) or negative (Conditions 3 and 4) affective states in dogs for a brief period of time. We did not verify the dogs’ true affective states by other means (for detailed methodology, see Eretová et al. [22]).

Condition 1 — Called.

Experimenter 1 called the dog by its name while operating a tripod camera. Experimenter 2 filmed the dog’s reactions with a hand-held camera from an upper-front angle. The owner was sitting in a chair behind the dog, not engaging with it in any way. Interlude 1 followed, in which the dog was released from the leash and played with a tennis ball with Experimenter 1. For the validity of the situation, see [39, 40].

Condition 2 — Play.

The dog, kept on a loose leash, was teased with the tennis ball by Experimenter 1 without receiving the ball. The reactions were recorded by a tripod camera (operated by Experimenter 1) and a hand-held camera (operated by Experimenter 2) from different angles. The owner was sitting in a chair behind the dog, not engaging with it in any way. Interlude 2 followed, in which the owner and Experimenter 2 left the room, leaving Experimenter 1 operating the handheld camera. The tripod camera was left unattended, filming the dog continuously. For the validity of the situation, see [41, 42].

Condition 3 — Separation.

The owner left the room, leaving the dog leashed to a floor handle with Experimenter 1 present. Experimenter 1 did not interact with the dog and maintained a minimum distance of 2 meters while operating the handheld camera. The dog’s behavior was filmed by the unattended tripod camera. Interlude 3 followed, in which the owner returned to the room, standing behind the dog, not engaging with it in any manner. Experimenter 2 returned to the room after that and closed the door behind herself. For the validity of the situation, see [43, 44].

Condition 4 — Stranger.

Experimenter 2 approached the dog slowly in a threatening manner (hunched posture, arms clasped behind her back, sliding her feet slowly on the floor, and maintaining direct eye contact with the dog). The approach continued until Experimenter 2 was 1.5 meters away from the dog. The dog’s reactions were filmed from a front-side view by Experimenter 1 using the hand-held camera, keeping Experimenter 2 out of the camera’s field of view. The unattended tripod camera filmed the dog continuously. At a distance of 1.5 meters, Experimenter 2 broke the threatening posture and direct eye contact, called the dog by its name, greeted it, and petted it. Interlude 4 followed, in which the dog was allowed to play freely with the tennis ball with the owner and both Experimenters to release any residual tension. When the dog was sufficiently relaxed, the owner left, and the experiment ended. For the validity of the situation, see [45–47].

For the purposes of the study, the raw videos were edited — the soundtracks were removed, and the timestamps of each filmed situation were cut into separate videos, which were coded as per situation and individual dog identity. A team of experts in canine behavior affiliated with the Department of Ethology and Companion Animal Science at CZU Prague and the Department of Ethology at ELTE reviewed the resulting videos for clarity of signals (such as body posture, facial expressions, ear and tail movement, eye contact, pacing or standing still, etc.) and approved them for use in the original study [22]. For the purpose of this study, videos were only cut to isolate individual contexts (conditions) while maintaining each dog’s unedited expressions.

Figure 2 shows dog faces cut from the original videos in four conditions.

Fig. 2. Boston Terrier and Jack Russell Terrier dogs in different conditions. Facial images are cropped from original video frames

Landmark detection

We processed all videos from the dataset with the Ensemble Landmark Detector (ELD) [34], trained on the DogFLW dataset [38]. The output time series contained the frame’s timestamp, the coordinates of 46 facial landmarks, and the detection model’s confidence, representing the quality of the detected landmarks (from 0 to 1). We downsampled the obtained time series to 25 fps by discarding every second frame to avoid near-duplicate data points (a dog’s movement within 0.02 seconds is negligible in most cases).
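The following is a minimal sketch of this downsampling step, assuming the ELD output is stored per video as a CSV with one row per frame (timestamp, 46 landmark coordinate pairs, and confidence); the column layout and file name are illustrative, not the exact format of the released time series.

```python
import pandas as pd

def downsample_to_25fps(csv_path: str) -> pd.DataFrame:
    """Keep every second frame of a 50 fps landmark time series, halving it to 25 fps.
    Assumes one row per frame: timestamp, x_0, y_0, ..., x_45, y_45, confidence."""
    ts = pd.read_csv(csv_path)
    return ts.iloc[::2].reset_index(drop=True)

# Example with a hypothetical file name:
# landmarks_25fps = downsample_to_25fps("dog01_play.csv")
```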

Dynamic metrics

For each processed frame, we obtained 46 facial landmarks. Since the dogs in the videos are moving and we are interested in subtle facial movements, we divided the movement into two components: general and relative. General movement is the movement of the dog as a whole or the movement of the camera; during this movement, most of the landmarks move along approximately the same trajectory. Relative movement is the movement of facial parts independent of the general movement; during this movement, specific landmarks follow a distinct trajectory, different in direction from the general one. To capture this relative movement, we defined stable landmarks as landmarks less prone to relative movements, such as eye corners, ear bases, and nose landmarks, as shown in Fig. 3. In other words, we selected landmarks that generally do not have a relative component in their movement, so that this general movement can later be subtracted from the movement of the other landmarks.

Fig. 3. Facial landmarks grouped into facial regions, and the center (centroid) of the selected stable landmarks

For each frame, we calculated the position of the center (centroid) of the stable landmarks by the formula:

$$\bar{r}_c(x, y, t) = \frac{1}{n} \sum_{i=1}^{n} \bar{r}_i(x, y, t),$$

where $\bar{r}_c(x, y, t)$ is the centroid coordinate vector and $\bar{r}_i(x, y, t)$ is the coordinate vector of the $i$-th landmark from the set of $n = 8$ stable landmarks. We take the centroid coordinates to represent the general movement of the dog’s head.

Using the centroid coordinates $\bar{r}_c$, we translated the coordinates of all $k = 46$ landmarks into the centroid’s coordinate system:

$$\bar{r}_{i,\mathrm{rel}} = \bar{r}_i - \bar{r}_c, \quad i \in \{1, \dots, k\},$$

where $\bar{r}_{i,\mathrm{rel}}$ are the relative coordinates of the $i$-th landmark in the centroid’s coordinate system, and $\bar{r}_i$ are the absolute coordinates of the $i$-th landmark in the frame’s coordinate system. Further, we refer to the obtained relative coordinates simply as coordinates unless stated otherwise.

The obtained relative coordinates represent separate facial movements (blinks, ear movements, yawns, etc.), but this representation is limited in its accuracy. Since the detected landmarks have only two dimensions (x and y in the frame’s plane), some movements, such as head rotation, cannot be described in two dimensions without distortion and changes in the proportions of the landmark coordinates. Nevertheless, we mitigated part of the general movement “leaking” into the relative component by selecting stable landmarks and transitioning into their centroid’s coordinate system.
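Below is a minimal sketch of the centroid subtraction described above. The array layout (frames × 46 landmarks × 2 coordinates) and the indices of the eight stable landmarks are assumptions for illustration; the actual indices follow the DogFLW scheme.

```python
import numpy as np

# Hypothetical indices of the n = 8 stable landmarks (eye corners, ear bases, nose).
STABLE_IDX = [0, 1, 2, 3, 4, 5, 6, 7]

def to_relative(coords: np.ndarray, stable_idx=STABLE_IDX) -> np.ndarray:
    """Subtract the per-frame centroid of the stable landmarks (the general head
    movement) from every landmark, keeping only the relative facial movement.
    `coords` has shape (T, 46, 2): frames x landmarks x (x, y)."""
    centroid = coords[:, stable_idx, :].mean(axis=1, keepdims=True)  # (T, 1, 2)
    return coords - centroid                                         # (T, 46, 2)
```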

Having coordinates for the whole video, we computed the dynamics of the landmark movement. Since, for different movements, different landmarks can have different trajectories and velocities, we averaged the amount of movement per landmark. For that, we calculated the absolute difference in coordinates between two consecutive frames:

$$d_i(t_j) = |x_i(t_j) - x_i(t_{j-1})| + |y_i(t_j) - y_i(t_{j-1})|,$$

where $d_i(t_j)$ is the dynamic of the $i$-th landmark between the $j$-th and $(j-1)$-th frames, and $x_i(t_j)$ and $y_i(t_j)$ are the coordinates of the $i$-th landmark in the $j$-th frame. Averaging over all $k = 46$ landmarks, we obtained the overall dynamic per landmark $d(t_j)$:

$$d(t_j) = \frac{1}{k} \sum_{i=1}^{k} d_i(t_j).$$

Intuitively, the dynamic metric introduced above reflects the total relative movement of all facial parts between two frames, averaged per landmark. In this sense, the higher the metric is in a frame, the more facial movement is detected on the dog’s face, captured by the movement of the facial landmarks.
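Under the same assumed array layout as above, the per-frame dynamic can be sketched as follows.

```python
import numpy as np

def frame_dynamics(rel: np.ndarray) -> np.ndarray:
    """Compute d(t_j): the L1 displacement of each landmark between consecutive
    frames, averaged over all k = 46 landmarks.
    `rel` holds relative coordinates with shape (T, 46, 2); result has shape (T - 1,)."""
    per_landmark = np.abs(np.diff(rel, axis=0)).sum(axis=2)  # d_i(t_j), shape (T - 1, 46)
    return per_landmark.mean(axis=1)                         # d(t_j)
```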

To distinguish between movement types, we additionally divided all landmarks into morphological regions: left and right ears, nose, and left and right eyes (from the dog’s perspective). Computing landmark dynamics in a specific region allowed us to track potential lateralization and to focus on different facial parts. The dynamic in a region M containing m landmarks is defined as follows:

$$d_M(t_j) = \frac{1}{m} \sum_{i=1}^{m} d_i(t_j),$$

where the $i$-th landmark belongs to the region M, as shown in Fig. 3.
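A sketch of the regional variant is given below; the mapping from regions to landmark indices is hypothetical, as the actual grouping follows Fig. 3 and the DogFLW scheme.

```python
import numpy as np

# Hypothetical landmark indices per facial region (the real grouping follows Fig. 3).
REGIONS = {
    "left_eye": [8, 9, 10, 11],
    "right_eye": [12, 13, 14, 15],
    "nose_mouth": [16, 17, 18, 19, 20],
    "left_ear": [21, 22, 23],
    "right_ear": [24, 25, 26],
}

def region_dynamics(rel: np.ndarray, region: str) -> np.ndarray:
    """Compute d_M(t_j): the per-frame L1 displacement averaged over the m landmarks
    belonging to region M."""
    per_landmark = np.abs(np.diff(rel, axis=0)).sum(axis=2)   # d_i(t_j), shape (T - 1, 46)
    return per_landmark[:, REGIONS[region]].mean(axis=1)      # d_M(t_j)
```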

The videos in the dataset contain frames where landmarks are not detected or are detected with low model confidence. Usually, in such frames, the dog’s head is heavily rotated and its face is partially or entirely out of view. The landmark dynamic is not defined in those cases, since the landmarks are absent or poorly detected. During preprocessing, we noticed that such time intervals are usually “surrounded” by landmarks detected with low confidence. This happens under extreme head rotation, where the model can still detect the dog’s face but cannot detect landmarks with sufficient certainty. As a result, the dynamic metric may behave erratically and contain misleading values. To mitigate this, we introduced masking padding around the missing landmarks: we computed the number of consecutive frames where the model’s confidence was below a certain threshold (the value of 0.6 was chosen empirically) and masked a sequence of frames of length equal to one quarter of this number (with a minimum of 1 frame) before and after the missing landmarks. In this way, we discarded frames that could bias the results due to the limitations of the landmark-based approach. Figure 4 shows the masked d(t) metric graph for a random video from the dataset, together with the model’s confidence level (scaled).
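A minimal sketch of this confidence-based masking is shown below, assuming the padding length is one quarter of each low-confidence run (with a minimum of one frame), as described above.

```python
import numpy as np

def low_confidence_mask(conf: np.ndarray, thr: float = 0.6) -> np.ndarray:
    """Return a boolean mask of frames to discard: every run of frames whose detector
    confidence is below `thr`, padded on each side by a quarter of the run length
    (at least one frame)."""
    below = conf < thr
    discard = below.copy()
    t = 0
    while t < len(below):
        if below[t]:
            start = t
            while t < len(below) and below[t]:
                t += 1
            pad = max(1, (t - start) // 4)          # quarter of the run length, min 1
            discard[max(0, start - pad):t + pad] = True
        else:
            t += 1
    return discard
```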

Fig. 4. Facial dynamics metric d(t) in a random video from the dataset (blue), masked by the landmark detector’s confidence level (scaled, red)

After aggregating the dynamics for a video v with l frames, we computed the average dynamic value per video across all frames. To account for differences in face scale across frames, we normalized the dynamic metric by dividing $d(t_j)$ by the distance between the outer eye corners (inter-ocular distance, IOD) in each frame. The total dynamic D is defined as:

$$D(v) = \frac{1}{l} \sum_{j=1}^{l} \frac{d(t_j)}{\mathrm{IOD}_j}.$$
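The per-video aggregation can then be sketched as follows, assuming the per-frame dynamic, the per-frame inter-ocular distance, and the discard mask have already been aligned to the same frames.

```python
import numpy as np

def total_dynamic(d: np.ndarray, iod: np.ndarray, discard: np.ndarray) -> float:
    """Compute D(v): the mean of d(t_j) / IOD_j over the frames that survive the
    confidence-based masking. All three arrays are assumed to be frame-aligned."""
    keep = ~discard
    return float(np.mean(d[keep] / iod[keep]))
```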

Statistical analysis

Due to the hierarchical nature of the dataset (videos are grouped by dog, with each dog contributing several trials), we used a mixed-effects ANOVA model with the dog as a random factor to compare the dynamic metrics D (treating the face holistically) and DM for each facial region M across cephalic types (normocephalic vs. brachycephalic) and conditions (called, play, separation, stranger). The statistical analyses were performed using the statsmodels Python library [48].
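For illustration only, a model of this kind could be fit with statsmodels roughly as follows; the data-frame columns and the model formula are assumptions, not the authors’ exact code.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_mixed_model(df: pd.DataFrame):
    """Fit a linear mixed-effects model of the per-video dynamic D with a random
    intercept per dog. `df` is assumed to have one row per video with columns
    'D', 'cephalic_type' (BT/JRT), 'condition', and 'dog_id'."""
    model = smf.mixedlm("D ~ C(cephalic_type) * C(condition)",
                        data=df, groups=df["dog_id"])
    return model.fit()

# Hypothetical usage:
# results = fit_mixed_model(pd.read_csv("video_dynamics.csv"))
# print(results.summary())
```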

Results

Novel landmark-based approach for measuring continuous facial dynamics

Our landmark-based approach supports the quantitative measuring of subtle behavioral facial changes. It has the following novel features:

  • The metric uses a continuous stream of visual information (as opposed to discrete events captured by the DogFACS system).

  • It supports both a holistic analysis (whole face) and a separate analysis based on different facial parts.

  • It is practical in the sense that it can handle noisy data (frames with insufficiently detected landmarks are automatically discarded).

Figure 5 demonstrates the pipeline suggested in this study.

Fig. 5. Pipeline overview. The approach takes a video as input, which is passed to the ELD model, followed by pre-processing steps. The dynamics metrics are then calculated from the resulting time series, yielding values that quantify the degree of facial movement in the video

Facial dynamics comparison across cephalic types and conditions

We denoted the set of videos featuring normocephalic dogs by JRT and the set of videos featuring brachycephalic dogs by BT. For condition c (called, play, separation, or stranger), we denoted the subset of JRT/BT in condition c by JRT(c)/BT(c), respectively. The average dynamic value per video is denoted as D, and such dynamic for a separate facial region is denoted as DRegion.

Comparing across breeds (brachycephalic vs. normocephalic)

Table 1 shows that overall dynamics and dynamic metrics for all facial regions were significantly higher for JRT than BT.

Table 1.

Mean dynamics values D for different facial regions aggregated over Boston Terriers (BT) and Jack Russell Terriers (JRT) in all experiments. The intercept is JRT

D Dlefteye Drighteye Dnose&mouth Dleftear Drightear
JRT 3.55 2.29 2.38 3.16 4.96 4.9
BT 1.79 1.77 1.67 1.65 1.99 2.02
p <0.001 0.002 0.003 <0.001 <0.001 <0.001

Across-context comparisons within the cephalic type

On average, the overall dynamics D for BT did not differ significantly across conditions (df = 3, F = 2.6, p = 0.058). All regional dynamics, except for the nose and mouth region, also did not differ significantly overall (left eye: df = 3, F = 1.99, p = 0.121; right eye: df = 3, F = 2.37, p = 0.077; nose & mouth: df = 3, F = 4.13, p = 0.009; left ear: df = 3, F = 1.19, p = 0.32; right ear: df = 3, F = 2.27, p = 0.087).

However, we observed significant differences when comparing dynamics between conditions, as shown in Table 2. For BT, the stranger and play conditions showed higher dynamics than called and separation. The called condition had significantly lower overall dynamics and lower dynamics in all facial regions, except the ears, than the stranger condition. In the separation condition, we observed significantly lower overall dynamics and lower dynamics in the muzzle area, as well as a right-side asymmetry, with the right eye and right ear dynamics in separation being significantly lower than in the stranger condition. The play condition did not differ significantly in dynamics from the stranger condition.

Table 2.

Mean dynamics values D for different facial regions in the Boston Terrier (BT) video subset in different conditions. The intercept is the stranger condition. Significantly different dynamic values are highlighted in bold

DBT DlefteyeBT DrighteyeBT Dnose&mouthBT DleftearBT DrightearBT
Stranger 2.29 1.97 2.13 2.1 2.54 2.57
(p<0.001) (p=0.001) (p<0.001) (p=0.001) (p=0.001) (p<0.001)
Called 1.6 1.42 1.61 1.43 1.88 1.9
(p=0.028) (p=0.029) (p=0.035) (p=0.016) (p=0.112) (p=0.085)
Play 1.94 1.79 1.86 1.9 2.01 2.07
(p=0.254) (p=0.493) (p=0.282) (p=0.494) (p=0.178) (p=0.178)
Separation 1.59 1.59 1.59 1.38 1.89 1.7
(p=0.017) (p=0.114) (p=0.023) (p=0.005) (p=0.099) (p=0.013)

In contrast to BT, dynamics in JRT differed significantly across conditions in general, except for the left ear dynamics, where the difference was marginally significant (overall: df = 3, F = 4.58, p = 0.006; left eye: df = 3, F = 5.34, p = 0.002; right eye: df = 3, F = 4.67, p = 0.005; nose & mouth: df = 3, F = 5.7, p = 0.001; left ear: df = 3, F = 2.56, p = 0.062; right ear: df = 3, F = 2.93, p = 0.04).

When comparing conditions in JRT, we observed higher average dynamic metrics in play than in the called, stranger, and separation conditions: the dynamics in the play condition were significantly higher than in the stranger condition for all regions except the left eye. The called condition, on the other hand, differed significantly from the stranger condition only in the left eye dynamic. The separation condition did not differ significantly in dynamics from the stranger condition. Table 3 shows the mean dynamic metric values for the different conditions in JRT.

Table 3.

Mean dynamics values D for different facial regions in the Jack Russell Terrier (JRT) video subset in different conditions. The intercept is the stranger condition. Significantly different dynamic values are highlighted in bold

DJRT DlefteyeJRT DrighteyeJRT Dnose&mouthJRT DleftearJRT DrightearJRT
Stranger 3.45 2.38 2.45 3.12 4.62 4.76
(p<0.001) (p<0.001) (p<0.001) (p<0.001) (p<0.001) (p<0.001)
Called 3.06 1.99 2.12 2.58 4.68 4.34
(p=0.257) (p=0.049) (p=0.098) (p=0.113) (p=0.928) (p=0.420)
Play 4.24 2.77 2.85 3.93 5.74 5.78
(p=0.025) (p=0.056) (p=0.046) (p=0.026) (p=0.044) (p=0.049)
Separation 3.27 2.26 2.39 2.85 4.38 4.72
(p=0.597) (p=0.569) (p=0.755) (p=0.441) (p=0.648) (p=0.941)

Comparing contexts between cephalic types

Play.

JRT had significantly higher dynamics in all facial regions in the play condition (see Table 4).

Table 4.

Mean dynamics values D for different facial regions aggregated over Boston Terriers (BT) and Jack Russell Terriers (JRT) in a play condition. The intercept is JRT

D(play) Dlefteye(play) Drighteye(play) Dnose&mouth(play) Dleftear(play) Drightear(play)
JRT 4.24 2.77 2.85 3.93 5.74 5.78
BT 1.94 1.79 1.86 1.90 2.01 2.07
p <0.001 0.002 0.001 <0.001 <0.001 <0.001

Separation.

In the separation condition, JRT had significantly higher dynamics in all facial regions except the left eye (see Table 5).

Table 5.

Mean dynamics values D for different facial regions aggregated over Boston Terriers (BT) and Jack Russell Terriers (JRT) in a separation condition. The intercept is JRT

D(sep) Dlefteye(sep) Drighteye(sep) Dnose&mouth(sep) Dleftear(sep) Drightear(sep)
JRT 3.27 2.26 2.39 2.85 4.38 4.72
BT 1.59 1.59 1.59 1.39 1.89 1.70
p 0.002 0.072 0.042 0.003 0.007 <0.001

Stranger.

In the stranger condition, only ear dynamics were significantly lower in BT than in JRT (see Table 6).

Table 6.

Mean dynamics values D for different facial regions aggregated over Boston Terriers (BT) and Jack Russell Terriers (JRT) in a stranger condition. The intercept is JRT

D(str) Dlefteye(str) Drighteye(str) Dnose&mouth(str) Dleftear(str) Drightear(str)
JRT 3.45 2.38 2.45 3.12 4.62 4.76
BT 2.29 1.97 2.13 2.10 2.54 2.57
p 0.115 0.345 0.416 0.195 0.031 0.029

Called.

JRT had significantly higher dynamics in all facial regions in the called condition (see Table 7).

Table 7.

Mean dynamics values D for different facial regions aggregated over Boston Terriers (BT) and Jack Russell Terriers (JRT) in a called condition. The intercept is JRT

D(call) Dlefteye(call) Drighteye(call) Dnose&mouth(call) Dleftear(call) Drightear(call)
JRT 3.06 1.99 2.12 2.58 4.68 4.34
BT 1.60 1.42 1.61 1.43 1.88 1.90
p <0.001 0.015 0.019 0.006 <0.001 <0.001

Discussion

The use of AI methods for facial analysis in animals is attracting great interest, as highlighted in the comprehensive review by Broomé et al. [49]. Studies focusing on dogs (e.g., Boneh-Shitrit et al. [31], Franzoni et al. [50], and Hernández-Luquin et al. [51]) predominantly employ deep learning approaches to tasks such as classification of particular emotional states. While these models achieve good accuracy and offer extensive practical applications, they function as “black-box” systems, i.e., the reasoning behind their decisions is not comprehensible to humans. This lack of explainability limits their utility as tools for scientific discovery in behavioral research, which traditionally relies on rigorous measurement of behavior and the use of statistical methods on these measurements for hypothesis testing.

Facial behavior measurements in dogs have primarily been conducted using the DogFACS annotation system [21, 27]. While being as objective as possible, this system has significant drawbacks, including the labor-intensive process of manual coding and the requirement for specialized certification of coders. Furthermore, it relies on discrete categories, such as action units and descriptors, a common practice in behavioral measurement. However, this type of categorical abstraction may overlook subtle, continuous visual signals in facial behavior, limiting its ability to capture the full complexity of dynamic facial expressions.

Landmark-based approaches present an appealing alternative for measuring facial behavior, offering the “best of both worlds”. By developing landmark schemes based on DogFACS (or other AnimalFACS systems), we can maintain a connection to the underlying facial musculature while enabling continuous tracking of behavioral visual signals. In cats, these approaches have already proven useful for recognizing pain [35], classifying social interactions [37], and analyzing morphological differences between cephalic types [36].

This study is the first to utilize a computer vision model for facial landmark detection in dogs, based on the scheme developed by Martvel, Abele, et al. [38], to create a fully automated method for measuring continuous facial dynamics in dogs. To achieve this, we have proposed a dynamic metric that quantifies the relative movement of facial landmarks throughout a video, also enabling a separate analysis for different facial regions, such as the ears or eyes. In this metric, we have addressed the normalization of landmark positions to account for variations in camera angle and distance. Most importantly, we explicitly addressed the isolation of the relative movement of facial landmarks from the general motion of the dog within the frame, leading to better capture of the subtle visual cues of facial behavior. An additional technical advantage of our method over automated deep learning approaches is its robustness under noisy conditions. By leveraging the model’s confidence threshold, we can automatically exclude noisy frames where facial landmarks are not clearly identified, eliminating the need for labor-intensive manual or automated preprocessing.

A limitation of the proposed method is that it operates on two-dimensional facial landmarks, which introduces errors associated with three-dimensional movement, since distortions and perspective changes cannot be fully accounted for. One possible solution is using 3D animal face approximation models constructed from the detected (x, y) coordinates [52]. With such an approximation, it becomes possible to operate with three-dimensional coordinates, but at a cost in placement accuracy. We plan to explore this method in our future studies.

Another important note about the landmark-based method is that it captures only part of a dog’s activity, primarily working with dog faces oriented toward the camera. When the dog’s face is heavily rotated, the computer vision model cannot detect landmarks with high confidence. This limitation applies, to some degree, to other automated and manual coding methods as well, but it is more acute for landmark-based approaches. However, for this study, it is reasonable to assume that the dogs were oriented toward the experimenters most of the time and displayed no more facial dynamics when oriented away from the camera than when facing it.

Using the novel measurement method described above, we quantified the facial dynamics of brachycephalic and normocephalic dogs across four contexts, leading to some new insights into the differences. Brachycephaly is particularly intriguing in this context due to its distinctive physical traits, including a large forehead and eyes, a shortened rostrum, a small-appearing mouth, and a reduced chin [53]. These features not only alter the dog’s appearance but may also impact its communication.

Our results indicate that the facial dynamics of BT dogs, as objectively quantified by our method, are significantly lower than those of JRT dogs across all contexts and facial regions. This could be related to the communication limitations attributed to paedomorphic traits of brachycephalic breeds [23]. It also aligns with the finding that humans had more difficulty perceiving negative signals in BT than in JRT and even attributed a positive emotional charge to negative situations [22], although our analysis showed that BT facial dynamics were highest in the stranger condition. The misinterpretation by humans may be influenced by the “baby schema” effect described by Lorenz [54], wherein features such as a round head and large eyes serve as optical signals that elicit caretaking behaviors and feelings of cuteness [55]. Unlike subjective human assessments, our method provides a more objective evaluation of these signals.

The difference in facial dynamics across all contexts between the Boston Terrier and the Jack Russell Terrier can be related to the findings of Paul et al. [56], who compared their craniofacial ratio (CFR). Indeed, brachycephalic breeds possess relatively larger (taller) foreheads and a larger eye area than their normocephalic counterparts. This, along with significant differences in facial muscle distribution between the two cephalic types of dogs [24], would explain the differences between the two breeds in all conditions, as revealed in our study.

An additional interesting finding is that the play (positive) condition has significantly higher facial dynamics across both breeds than the separation (negative) condition. Freezing and passiveness have been described in the literature as separation-related behaviors [57]. Interestingly, the Boston Terriers also had the highest dynamics in the stranger (negative) condition, while significantly lower dynamics characterized both negative conditions for JRT.

The tendency of human viewers to focus on the eyes and nasal/mouth region when looking for affective cues has been documented [58]. Similarly, participants in the study of Eretová et al. [22] self-reported the eyes and ears of dogs, regardless of skull shape, as the most informative features of the face, followed closely by the snout. Our findings show that the ears have the highest dynamics across all contexts for both breeds, which may explain the viewers’ attention to them. On the other hand, the measured eye dynamics are lower than those of the ears in both breeds. It should be noted in this context that in brachycephalic dogs the eyes typically appear large, as they protrude from the orbit more than in normocephalic dogs [59], so humans may consider them a vital communication cue and look at them intently despite their objectively relatively low dynamics.

The obtained results demonstrated a difference in dynamics between paired parts of the face (eyes and ears) in some cases. We further tested the results with mixed-effects regression tests for the different conditions; the asymmetry between left and right facial dynamics across conditions or between the two breeds was not significant (see Table 9 in the Supplementary Materials). Regardless, the presented material was not suitable for a proper inspection of left-right asymmetry in facial expressions, because the Experimenters’ positions and the camera angles used in this study varied and were not designed for symmetry analysis. In future studies, we plan to place further emphasis on asymmetry, addressing it fully in the experimental design and data collection.

A limitation of the present study is that only two breeds represent brachycephalic and normocephalic morphological types, respectively. Some dogs’ reactions could be partly influenced by the different temperaments of the two breeds studied; therefore, it is necessary to test other representatives of dogs that differ in nose length. Additionally, the snout length and overall head shape somewhat varied among the brachycephalic dogs. Future research should incorporate a greater diversity of breeds to better represent the spectrum of morphological variations. Additionally, expanding the sample size to include a larger number of individual dogs will enhance the generalizability and robustness of the findings.

This study represents an initial step in systematically and objectively quantifying canine visual cues across different morphological types and situational contexts. Future research should involve a broader range of canine phenotypes to fully understand how facial dynamics interact with the effectiveness of canine visual cues. Further expanding, validating, and calibrating the computational approach demonstrated here, which has proven effective in capturing subtle facial dynamics, could provide deeper insights into this complex topic.

Appendix

1 Sex and age information for dogs that participated in the study

Table 8.

Sex and age of the Jack Russell Terriers (JRT) and Boston Terriers (BT) that participated in the study

Code Sex Age (years)
JRT01 Female 2.5
JRT02 Male 4
JRT03 Female 2.5
JRT04 Female 2
JRT05 Male 3
JRT06 Female 2.5
JRT07 Female 2
BT01 Female 1.25
BT02 Male 1.25
BT03 Male 5
BT04 Female 3
BT05 Male 1.5
BT06 Female 1
BT07 Female 6

2 Dynamics asymmetry

Table 9.

Regression coefficient β and p-value for left and right eye and ear dynamics aggregated over Boston Terriers (BT) and Jack Russell Terriers (JRT) in all conditions. The left dynamic is the dependent variable, and the right dynamic is the independent variable

Condition Eyes (BT) Eyes (JRT) Ears (BT) Ears (JRT)
Called 0.80 (p<0.001) 0.93 (p=0.002) 0.91 (p<0.001) 0.63 (p=0.034)
Play 0.77 (p<0.001) 0.82 (p<0.001) 0.88 (p<0.001) 0.75 (p<0.001)
Separation 0.85 (p<0.001) 0.99 (p<0.001) 0.73 (p<0.001) 0.50 (p=0.008)
Stranger 0.88 (p<0.001) 0.96 (p<0.001) 1.04 (p<0.001) 0.71 (p<0.001)

Authors' contributions

PE, LP, HC, and PP acquired the data. GM, IS, YE, DG, and AZ conceived the experiment(s). GM conducted the experiment(s). GM, PE, LP, HC, PP, IS, YE, DG, and AZ analyzed and/or interpreted the results. All authors reviewed the manuscript.

Funding

The research was partially supported by the Data Science Research Center at the University of Haifa. PE, LP, and HC were supported by the Grant Agency of the Czech University of Life Sciences Prague (grant no. SV22-18-21370).

Data availability

Videos from the original study are available from the corresponding author on reasonable request. The landmark time series dataset generated during the current study is available at https://github.com/martvelge/dog_dynamics.

Code availability

The code generated during the current study is available at https://github.com/martvelge/dog_dynamics.

Declarations

Ethics approval and consent to participate

The data collection of the dogs’ behavior was assessed as noninvasive experimentation and approved by the Animal Welfare Committee of Eötvös Loránd University (Certificate number PEI/001/1056-4/2015).

All dog owners who participated in video recording with their dogs signed a written informed consent form.

Consent for publication

No personal data was published in the current study.

Competing interests

The authors declare no competing interests.

Footnotes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

George Martvel and Petra Eretová contributed equally to this work.

References

  • 1.Mendl M, Burman OH, Paul ES. An integrative and functional framework for the study of animal emotion and mood. Proc R Soc B Biol Sci. 2010;277(1696):2895–904. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Ekman P, Friesen WV. Measuring facial movement. Environ Psychol Nonverbal Behav. 1976;1:56–75. [Google Scholar]
  • 3.Ekman P, Keltner D. Universal facial expressions of emotion. Calif Mental Health Res Dig. 1970;8(4):151–8. [Google Scholar]
  • 4.Russell JA, Bachorowski JA, Fernández-Dols JM. Facial and vocal expressions of emotion. Annu Rev Psychol. 2003;54(1):329–49. [DOI] [PubMed] [Google Scholar]
  • 5.Descovich KA, Wathan J, Leach MC, Buchanan-Smith HM, Flecknell P, Farningham D, et al. Facial expression: An under-utilized tool for the assessment of welfare in mammals. ALTEX-Altern Anim Experimentation. 2017;34(3):409–29. [DOI] [PubMed] [Google Scholar]
  • 6.Mota-Rojas D, Marcet-Rius M, Ogi A, Hernández-Ávalos I, Mariti C, Martínez-Burnes J, et al. Current advances in assessment of dog’s emotions, facial expressions, and their use for clinical recognition of pain. Animals. 2021;11(11):3334. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Andersen PH, Broomé S, Rashid M, Lundblad J, Ask K, Li Z, et al. Towards machine recognition of facial expressions of pain in horses. Animals. 2021;11(6):1643. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Evangelista MC, Watanabe R, Leung VS, Monteiro BP, O’Toole E, Pang DS, et al. Facial expressions of pain in cats: the development and validation of a Feline Grimace Scale. Sci Rep. 2019;9(1):19128. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Hytönen MK, Lohi H. Canine models of human rare disorders. Rare Dis. 2016;4(1):e1006037. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Range F, Marshall-Pescini S. Comparing wolves and dogs: current status and implications for human ‘self-domestication’. Trends Cogn Sci. 2022;26(4):337–49. [DOI] [PubMed] [Google Scholar]
  • 11.Palmer R, Custance D. A counterbalanced version of Ainsworth’s Strange Situation Procedure reveals secure-base effects in dog-human relationships. Appl Anim Behav Sci. 2008;109(2–4):306–19. [Google Scholar]
  • 12.Payne E, Bennett PC, McGreevy PD. Current perspectives on attachment and bonding in the dog-human dyad. Psychol Res Behav Manag. 2015;8:71-9. 10.2147/PRBM.S74972. PMID: 25750549; PMCID: PMC4348122. [DOI] [PMC free article] [PubMed]
  • 13.Karl S, Boch M, Zamansky A, van der Linden D, Wagner IC, Völter CJ, et al. Exploring the dog-human relationship by combining fMRI, eye-tracking and behavioural measures. Sci Rep. 2020;10(1):22273. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Stevens JR. Canine cognition and the human bond. Springer; 2023.
  • 15.Kaminski J, Hynds J, Morris P, Waller BM. Human attention affects facial expressions in domestic dogs. Sci Rep. 2017;7(1):12914. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Müller CA, Schmitt K, Barber AL, Huber L. Dogs can discriminate emotional expressions of human faces. Curr Biol. 2015;25(5):601–5. [DOI] [PubMed] [Google Scholar]
  • 17.Kujala MV. Canine emotions as seen through human social cognition. Anim Sentience. 2017;2(14):1. [Google Scholar]
  • 18.Burza LB, Bloom T, Trindade PHE, Friedman H, Otta E. Reading emotions in Dogs’ eyes and Dogs’ faces. Behav Process. 2022;202:104752. [DOI] [PubMed] [Google Scholar]
  • 19.Burrows AM, Kaminski J, Waller BM, Omstead KM, Rogers-Vizena C, Mendelson B. Dog faces exhibit anatomical differences in comparison to other domestic animals. Anat Rec (Hoboken). 2021;304(1):231–41. 10.1002/ar.24507. Epub 2020 Sep 24. PMID: 32969196. [DOI] [PubMed]
  • 20.Siniscalchi M, d’Ingeo S, Minunno M, Quaranta A. Communication in dogs. Animals. 2018;8(8):131. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Sexton CL, Buckley C, Lieberfarb J, Subiaul F, Hecht EE, Bradley BJ. What is written on a dog’s face? Evaluating the impact of facial phenotypes on communication between humans and canines. Animals. 2023;13(14):2385. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Eretová P, Liu Q, Přibylová L, Chaloupková H, Bakos V, Lenkei R, et al. Can my human read my flat face? The curious case of understanding the contextual cues of extremely brachycephalic dogs. Appl Anim Behav Sci. 2024;270:106134. [Google Scholar]
  • 23.Goodwin D, Bradshaw JW, Wickens SM. Paedomorphosis affects agonistic visual signals of domestic dogs. Anim Behav. 1997;53(2):297–304. [Google Scholar]
  • 24.Schatz KZ, Engelke E, Pfarrer C. Comparative morphometric study of the mimic facial muscles of brachycephalic and dolichocephalic dogs. Anat Histol Embryol. 2021;50(6):863–75. [DOI] [PubMed] [Google Scholar]
  • 25.Ekman P, Rosenberg EL. What the face reveals: Basic and applied studies of spontaneous expression using the Facial Action Coding System (FACS). USA: Oxford University Press; 1997. [Google Scholar]
  • 26.Caeiro C, Guo K, Mills D. Dogs and humans respond to emotionally competent stimuli by producing different facial actions. Sci Rep. 2017;7(1):15525. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Bremhorst A, Sutter NA, Würbel H, Mills DS, Riemer S. Differences in facial expressions during positive anticipation and frustration in dogs awaiting a reward. Sci Rep. 2019;9(1):19312. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Pedretti G, Canori C, Marshall-Pescini S, Palme R, Pelosi A, Valsecchi P. Audience effect on domestic dogs’ behavioural displays and facial expressions. Sci Rep. 2022;12(1):9747. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Bremhorst A, Mills D, Würbel H, Riemer S. Evaluating the accuracy of facial expressions as emotion indicators across contexts in dogs. Anim Cogn. 2022;25(1):121–36. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Hamm J, Kohler CG, Gur RC, Verma R. Automated facial action coding system for dynamic analysis of facial expressions in neuropsychiatric disorders. J Neurosci Methods. 2011;200(2):237–56. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Boneh-Shitrit T, Feighelstein M, Bremhorst A, Amir S, Distelfeld T, Dassa Y, et al. Explainable automated recognition of emotional states from canine facial expressions: the case of positive anticipation and frustration. Sci Rep. 2022;12(1):22611. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Labuguen R, Bardeloza DK, Negrete SB, Matsumoto J, Inoue K, Shibata T. Primate markerless pose estimation and movement analysis using DeepLabCut. In: 2019 Joint 8th International Conference on Informatics, Electronics & Vision (ICIEV) and 2019 3rd international conference on imaging, vision & pattern recognition (icIVPR). IEEE; 2019. pp. 297–300.
  • 33.Wiltshire C, Lewis-Cheetham J, Komedová V, Matsuzawa T, Graham KE, Hobaiter C. DeepWild: Application of the pose estimation tool DeepLabCut for behaviour tracking in wild chimpanzees and bonobos. J Anim Ecol. 2023;92(8):1560–74. [DOI] [PubMed] [Google Scholar]
  • 34.Martvel G, Shimshoni I, Zamansky A. Automated detection of cat facial landmarks. Int J Comput Vis. 2024;132:3103–18. 10.1007/s11263-024-02006-w.
  • 35.Martvel G, Lazebnik T, Feighelstein M, Henze L, Meller S, Shimshoni I, et al. Automated video-based pain recognition in cats using facial landmarks. Sci Rep. 2024;14(1):28006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Martvel G, Lazebnik T, Feighelstein M, Meller S, Shimshoni I, Finka L, et al. Automated landmark-based cat facial analysis and its applications. Front Vet Sci. 2024:11:1442634. [DOI] [PMC free article] [PubMed]
  • 37.Martvel G, Scott L, Florkiewicz B, Zamansky A, Shimshoni I, Lazebnik T. Computational investigation of the social function of domestic cat facial signals. Sci Rep. 2024;14(1):27533. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Martvel G, Abele G, Bremhorst A, Canori C, Farhat N, Pedretti G, et al. DogFLW: Dog Facial Landmarks in the Wild Dataset. 2024. arXiv preprint arXiv:2405.11501.
  • 39.Mills DS. What’s in a word? A review of the attributes of a command affecting the performance of pet dogs. Anthrozoös. 2005;18(3):208–21. [Google Scholar]
  • 40.Colbert-White EN, Anderson DC, Maus MQ. Positive intonation increases the perceived value of smaller rewards in a quantity discrimination task with dogs (Canis familiaris). J Comp Psychol. 2025;139(1):18–25. 10.1037/com0000392. Epub 2024 Sep 9. PMID: 39250238. [DOI] [PubMed]
  • 41.Hasin D, Pampori ZA, Aarif O, Bulbul K, Sheikh AA, Bhat IA. Happy hormones and their significance in animals and man. Int J Vet Sci Anim Husb. 2018;3(5):100–3. [Google Scholar]
  • 42.Aldridge GL, Rose SE. Young children’s interpretation of dogs’ emotions and their intentions to approach happy, angry, and frightened dogs. Anthrozoös. 2019;32(3):361–74. [Google Scholar]
  • 43.Appleby D, Pluijmakers J. Separation anxiety in dogs: the function of homeostasis in its development and treatment. Vet Clin: Small Anim Pract. 2003;33(2):321–44. [DOI] [PubMed] [Google Scholar]
  • 44.Amat M, Le Brech S, Camps T, Manteca X. Separation-Related Problems in Dogs: A Critical Review. Adv Small Anim Care. 2020;1:1–8. [Google Scholar]
  • 45.Bálint A, Faragó T, Miklósi Á, Pongrácz P. Threat-level-dependent manipulation of signaled body size: dog growls’ indexical cues depend on the different levels of potential danger. Anim Cogn. 2016;19:1115–31. [DOI] [PubMed] [Google Scholar]
  • 46.Pongrácz P, Dobos P, Zsilák B, Faragó T, Ferdinandy B. ‘Beware, I am large and dangerous’-human listeners can be deceived by dynamic manipulation of the indexical content of agonistic dog growls. Behav Ecol Sociobiol. 2024;78(3):37. [Google Scholar]
  • 47.Vas J, Topál J, Gácsi M, Miklósi A, Csányi V. A friend or an enemy? Dogs’ reaction to an unfamiliar person showing behavioural cues of threat and friendliness at different times. Appl Anim Behav Sci. 2005;94(1–2):99–115. [Google Scholar]
  • 48.Seabold S, Perktold J. Statsmodels: econometric and statistical modeling with python. SciPy. 2010;7(1):92–6.
  • 49.Broomé S, Feighelstein M, Zamansky A, Carreira Lencioni G, Haubro Andersen P, Pessanha F, et al. Going deeper than tracking: A survey of computer-vision based recognition of animal pain and emotions. Int J Comput Vis. 2023;131(2):572–90. [Google Scholar]
  • 50.Franzoni V, Milani A, Biondi G, Micheli F. A preliminary work on dog emotion recognition. In: IEEE/WIC/ACM International Conference on Web Intelligence-Companion Volume, 2019. p. 91–96. ISBN 978-1-4503-6988-6. 10.1145/3358695.3361750.
  • 51.Hernández-Luquin F, Escalante HJ, Villaseñor-Pineda L, Reyes-Meza V, Villaseñor-Pineda L, Pérez-Espinosa H, et al. Dog emotion recognition from images in the wild: Debiw dataset and first results. In: Proceedings of the ninth international conference on animal-computer interaction. 2022. p. 1–13. 10.1145/3565995.3566041.
  • 52.Sun Y, Murata N. CAFM: A 3D morphable model for animals. In: Proceedings of the IEEE/CVF winter conference on applications of computer vision workshops. 2020. p. 20–24. 10.1109/WACVW50321.2020.9096941.
  • 53.Alley TR. Head shape and the perception of cuteness. Dev Psychol. 1981;17(5):650. [Google Scholar]
  • 54.Lorenz K. Die angeborenen formen möglicher erfahrung. Z Tierpsychol. 1943;5(2):235–409. [Google Scholar]
  • 55.Thorn P, Howell TJ, Brown C, Bennett PC. The canine cuteness effect: owner-perceived cuteness as a predictor of human-dog relationship quality. Anthrozoös. 2015;28(4):569–85. [Google Scholar]
  • 56.Paul ES, Coombe E, McGreevy PD, Packer RM, Neville V. Are Brachycephalic Dogs Really Cute? Evidence from Online Descriptions Anthrozoös. 2023;36(4):533–53. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57.Palestrini C, Minero M, Cannas S, Rossi E, Frank D. Video analysis of dogs with separation-related behaviors. Appl Anim Behav Sci. 2010;124(1–2):61–7. [Google Scholar]
  • 58.Correia-Caeiro C, Guo K, Mills DS. Perception of dynamic facial expressions of emotion between dogs and humans. Anim Cogn. 2020;23(3):465–76. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Rujirekasuwan N, Sattasathuchana P, Theerapan W, Thengchaisri N. Comparative analysis of ocular biometry, ocular protrusion, and palpebral fissure dimensions in brachycephalic and nonbrachycephalic dog breeds. Vet Radiol Ultrasound. 2024;65(4):437–46. 10.1111/vru.13351. Epub 2024 Apr 29. PMID: 38682866. [DOI] [PubMed]
