Abstract
Eye contact is a crucial aspect of social interaction, conveying social cues based on the direction of one's gaze. Perceiving eye contact affects behavior and social processing. The widespread use of remote video conferencing technologies impacts these social cues, because most technologies do not support natural eye contact. We consider the question of how best to achieve the perception of eye contact when a person is captured by a camera and then rendered on a two-dimensional display. To test this, 17 participants were asked to rate whether four actors, photographed while looking at different vertical locations, were making eye contact (yes–no analysis) or were looking up or down (up–down analysis). We quantitatively assessed the gaze direction required to optimize the perception of eye contact with the camera lens. Contrary to conventional wisdom, which suggests that looking directly into the camera leads to the perception of eye contact, results from both the yes–no and the up–down analyses showed that it is preferable to look approximately 2° below the camera lens. These results provide a surprising answer to the question of where to look to convey an impression of eye contact in screen-mediated interactions.
Keywords: gaze, gaze awareness, eye contact, video conferencing
Introduction
Gaze is an essential social and emotional cue for person-to-person interaction, playing a crucial role in supporting social behavior. It has been extensively studied in vision science (e.g., Emery, 2000; Ewbank, Jennings, & Calder, 2009; Grossmann, 2017; Hietanen, 2018; Hessels, 2020; Mareschal, Calder, & Clifford, 2013). The direction of one's gaze holds significant sway in social exchanges, serving to steer interactions, enhance communication, and convey varying degrees of intimacy and authority (Kleinke, 1986). Research suggests that individuals rely on cues from others' eye movements to gauge the level of social affinity shared with them. Beyond indicating an individual's focus of attention, gaze is crucial for establishing joint attention (Bayliss, Bartlett, Naughtin, & Kritikos, 2011; Stephenson, Edwards, & Bayliss, 2021), exerting social influence (e.g., persuasion) (Kleinke, 1986), deducing others' mental states (Baron-Cohen, Campbell, Karmiloff-Smith, Grant, & Walker, 1995; Calder, 2003), recognizing facial identity (McKelvie, 1976), and perceiving emotional states (Peterson & Eckstein, 2012). Gaze is especially relevant in conversational settings, because gaze cues can enhance verbal communication, including regulating turn-taking in conversation (Argyle, Cook, & Cramer, 1994) and facilitating instructions (Andrist, Gleicher, & Mutlu, 2017). Additionally, humans have a natural expectation that gaze is directed toward them in a social setting (Mareschal et al., 2013). However, although prior work established the importance of eye contact and quantified sensitivities to variations in gaze, the question of where specifically to direct one's gaze relative to a camera to optimize eye contact remains unanswered. Today, people frequently interact through teleconferencing systems using webcams.
This trend has been further accelerated by the COVID-19 pandemic, which necessitated safe distancing and remote working measures and led to an increase in video-based interaction in both personal and professional settings. This relatively new approach to communication opens new ways in which we experience person-to-person interaction and changes our ability to perceive social cues and communication aspects such as eye contact. Several studies have addressed gaze perception and eye contact in face-to-face and video-mediated conversation (e.g., Bohannon, Herbert, Pelz, & Rantanen, 2013); however, there is no consensus regarding the exact definition of eye contact (Jongerius, Hessels, Romijn, Smets, & Hillen, 2020), and the field uses a wide variety of methods to measure eye contact. Gale and Monk (2000) differentiate three categories of gaze awareness: full gaze awareness, the knowledge of what object in the environment someone is looking at; partial gaze awareness, the knowledge of the general direction someone is looking in; and mutual gaze awareness, the knowledge of whether someone is looking at you. The extent to which these different levels of gaze awareness can be achieved in video communication settings depends on the configuration of image size and camera placement. Previous studies have established that humans can easily differentiate whether gaze is diverted or directed toward their face. Gibson and Pick (1963) measured the subjective perception of eye contact when the gazer fixated on points horizontally displaced from the subject's face. They observed that people are extremely sensitive to eye contact, with a precision comparable to visual acuity. Further studies tested whether the same could be observed when the fixations were closely spaced (Cline, 1967; Knight, Langmeyer, & Lundgren, 1973).
Krüger and Hückstedt (1969) conducted an experiment where the gazer fixated on seven points around the eyes of the perceiver (forehead, bridge and tip of nose, left and right eye, left and right edges of the face) and found that perceivers were able to identify the different fixations, albeit not with high accuracy. These paradigms were also explored in the realm of video conferencing platforms. Multiple studies examined perceived eye contact when gazers looked above or below the camera lens (Horstmann & Linke, 2022; Schmitz & Einhäuser, 2023; Stokes, 1969; White, Hegarty, & Beasley, 1970). Collectively, these studies observed that the threshold for losing the perception of eye contact was 4.5 degrees of visual angle (°) above and 5.5° below the camera, and that sensitivity to eye contact is roughly symmetric across fixation directions (Anstis, Mayhew, & Morley, 1969; Knight et al., 1973; Stokes, 1969; White et al., 1970). However, past studies neither used measures of confidence about the gaze (exactly where the gazer is looking during image/video acquisition) nor controlled for the gazer's head movements, which have been shown to influence eye contact perception (Anstis et al., 1969; Cline, 1967; Gibson & Pick, 1963).
Here we examine the question of mutual gaze awareness, or eye contact, for webcams and provide a precise measurement of where a user should direct their gaze relative to the camera to achieve perceived eye contact. Specifically, we took images of actors looking at fixation points above, at, and below a camera placed at typical monitor distance (20”–24”), and then asked participants to indicate which recorded image(s) corresponded to the actors making eye contact. We sampled along the vertical axis because this is more relevant to common video conferencing scenarios, in which the camera is horizontally centered but vertically displaced from the center of the screen.
We find that the images that led to the largest perception of eye contact were not those in which the actors were directly looking at the camera, as one might expect, but approximately 2° below the center of the lens of the camera.
Methods
Participants
Seventeen participants (mean age, 22.3 ± 5.2 years; 7 women) with normal or corrected-to-normal vision were recruited to take part in the study. Experiment protocols were approved by the UCR Institutional Review Board, and all participants gave written informed consent prior to the experiment. Participants were compensated $10 for their participation in the study.
Stimuli and apparatus
Stimuli consisted of a total of 88 pictures of gazers’ faces collected using a Logitech C920x HD Pro Webcam. We refer to gazers as actors here and in the remainder of the paper. These pictures were collected from four different actors (three men and one woman; mean age, 25.0 ± 2.3 years) using an eye-tracking system (TRACKPixx3, VPixx Technologies, Saint-Bruno-de-Montarville, Quebec, Canada) that allowed us to verify that the actors were correctly gazing at the intended locations on the screen while the pictures were taken.
The actors were instructed to fixate their gaze on a series of fixation dots on a custom-made cardboard structure, as shown in Figure 1. The camera was aligned with the actor's eye level, and pictures were collected while the actor fixated on each of 11 fixation locations (−7°, −5°, −3°, −2°, −1°, 0°, 1°, 2°, 3°, 5°, and 7°) vertically displaced from central fixation (0° [camera lens]).
Figure 1.
Experimental set up used to collect pictures from actors.
Before picture collection, a 9-point calibration and validation procedure was performed. Cutoff accuracy to accept the calibration was 0.5°. Calibration and eye monitoring were performed on a Display++ (Cambridge Research System, Cambridge, UK) monitor. The monitor was then replaced by a custom-made cardboard structure (Figure 1) with an aperture at the center within which the webcam was placed. Crosshair stickers were placed at the 11 fixation locations listed above.
Actors were then asked to fixate on each of the fixation locations, and a 3-second countdown informed them of the exact moment the picture would be taken. If the actor's eyes moved more than 0.5° from the intended fixation spot at the moment of picture collection, the image was discarded and reacquired until fixation was within 0.5°.
This procedure was repeated for two viewing distances (20” and 24”; see Footnotes), chosen as best representing the distances suggested in the general guidelines from the U.S. Occupational Safety and Health Administration, as well as typical distances at which desktops and laptops are placed during video calls. The fixation points were adjusted accordingly to maintain consistent angular spacing.
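Adjusting the fixation points for a new viewing distance while keeping the angular spacing constant is basic trigonometry. The sketch below (Python, illustrative only; the study does not report its marker-placement calculation at this level of detail) converts the study's angular offsets into linear positions on the fixation board for both distances.

```python
import math

def offset_cm(angle_deg: float, distance_cm: float) -> float:
    """Linear offset (cm) on the fixation board for a gaze angle
    measured from the camera lens at a given viewing distance."""
    return distance_cm * math.tan(math.radians(angle_deg))

# Fixation angles used in the study (degrees from the camera lens).
angles = [-7, -5, -3, -2, -1, 0, 1, 2, 3, 5, 7]

# At 20" (50.8 cm) and 24" (60.96 cm), the same angles map to
# different physical spacings on the board.
spacings = {d: [offset_cm(a, d * 2.54) for a in angles] for d in (20, 24)}
```

For example, a 2° offset corresponds to roughly 1.8 cm at 20” but about 2.1 cm at 24”, which is why the markers had to be repositioned between distances.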
External lighting was adjusted to ensure equal luminance and exposure across actors. The images were then manually cropped and resized so that the midpoint between the two eyes appeared at the center of the screen for all images during the main experiment. During the main experiment, images were randomized and presented on a Samsung S24e310 LCD monitor with a resolution of 1,920 × 1,080. Picture size was 19.5° vertical × 13° horizontal. Visual stimuli were generated using Matlab (MathWorks, Natick, MA) and the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997). Participants sat in a dark room, 23” (60 cm) from the screen. This distance was chosen so that the faces in the displayed images appeared at the same size as the actors' actual faces.
Procedure
A total of 176 trials were presented. Participants viewed each of the 88 pictures twice and were asked to answer four questions per picture (Figure 2). A chin and head rest were used to maintain an accurate viewing distance of 57 cm. The exact phrasing of the questions was as follows: “Do you perceive the actor looking straight at you?” and “Do you perceive the actor looking up or down?” Each question was followed by a confidence judgment, as described below. Participants were asked to examine each picture until they had made a judgment, at which point they pressed a key to respond and proceed to the next picture. They were encouraged to rely on their spontaneous impression and to avoid overthinking their estimate; as such, each picture was viewed for no more than a few seconds. Participants responded using the arrow keys on the keyboard: the up arrow (yes to the first question; up for the second) or the down arrow (no to the first question; down for the second). Participants also rated their response confidence on a 5-point scale (1 [least confident] to 5 [very confident]) for each question.
Figure 2.
Example of a trial during the experiment. Participants were presented with one picture per trial and asked four questions as shown in the image. All trials were self-paced, and participants answered all questions for each picture they viewed.
Data analysis
Data were collapsed across the two viewing distances and analyzed separately for the yes vs. no and the up vs. down questions. In the up vs. down case, a logistic function was fitted to the proportion of up responses across the various eye offsets. The point corresponding to 50% up responses was taken as the offset at which the actor was perceived as making eye contact.
In the case of the yes vs. no analysis, a Gaussian function was fit to individual distributions of yes responses across fixation offsets. The peak of the function was estimated and used as the individual subjective gaze offset that led to the perception of eye contact.
For the up vs. down case, offsets corresponding to 50% up responses were compared against zero with a one-sample t-test. For the yes vs. no analysis, individual peaks of the Gaussian fits were compared against zero with a one-sample t-test.
Results
We present a summary of results in Figure 3, separating the yes–no and the up–down study questions.
Figure 3.
Results summary. (a) Average distribution of yes responses across gaze offsets, with 95% confidence intervals and a Gaussian fit showing a peak at 1.76° below the 0° fixation (negative numbers indicate fixations below the center of the camera [0°]; positive numbers indicate fixations above it). (b) Visual representation of the point of fixation (−1.76°) on an actor's face that produces an impression of eye contact. The circular heatmap on the image of the actor corresponds to the Gaussian fit of perceived eye contact. The red center of the heatmap (the center of the multivariate normal probability density function) is the optimal perceived point of eye contact. Moving away from this point, toward the outer blue ring, the perception of eye contact diminishes. (c) Average distribution of up responses across gaze offsets, fitted with a psychometric function, alongside pictures corresponding to the maximum positive (7°) and negative (−7°) offsets and the offset closest to the one giving rise to a perception of eye contact (−2°).
Yes–no
The shift in perceived gaze from zero (central fixation) was estimated by fitting a Gaussian distribution to the proportion of yes responses (i.e., the actor is looking at me) for the various gaze offset levels (Figure 3a). The offset corresponding to the peak of the distribution was estimated and tested against zero via a one-sample t-test. Results showed a significant shift from zero, t(16) = 4.45, p < 0.001.
Up–down
The shift in perceived gaze from zero (central fixation) was estimated by fitting a psychometric function to the proportion of responses ‘up’ for the various gaze offset levels (Figure 3c). The point of subjective equality (where participants responded 50% up and 50% down for a specific gaze offset) was estimated and tested against zero via a one-sample t-test. Results showed a significant shift from zero, t(16) = 6.19, p < 0.0001.
Taken together, both analyses point to a consistent pattern: participants perceived the actor as making eye contact when the actor fixated a point approximately 2° below the central fixation point (the camera). We also observed a relatively high proportion of yes responses between the 0° and −3° fixation points (at and below the camera). These results are represented by the heatmap in Figure 3b: the red center of the heatmap is the peak of the Gaussian distribution and thus the peak of perceived eye contact. As the actor's gaze moves away from this point, the perceived sensation of eye contact decreases. Overall, participants tended to perceive the actor's slightly downward gaze as eye contact.
Discussion
In this study, we examined the perception of eye contact under video conferencing conditions. Specifically, we asked participants to rate the perception of eye contact for a series of actors photographed while fixating straight at the camera, or up to 7° above or below it. Results show that people tend to perceive eye contact not when the actors are looking straight at the camera, but when they are looking slightly below the center of the camera lens (Figure 4).
Figure 4.
(Left) Implementation of the study results. Directing our gaze at a camera lens is perceived as looking slightly upwards. (Right) Looking two degrees below the camera lens yields a truer perception of eye contact.
Our results suggest that a gaze directed slightly below the eyes, toward the middle of the nose, is perceived as making eye contact, much as it is in face-to-face interactions. Why might this be? Our social interactions rely on our ability to recognize a variety of facial cues that provide information required to understand our conversation partner's identity and emotion, not just our ability to perceive eye contact. These cues are best perceived when gazing halfway down the nose (Peterson & Eckstein, 2012). When observers foveate just below the eyes, they can simultaneously perceive cues from another's eyes, nose, and mouth, maximizing the number of facial cues available. Thus, from an evolutionary perspective, it is likely that we have learned to categorize gazes slightly below eye level as eye contact.
Prior literature reported a broad range of tolerance for eye offset when the actor looked into the camera or at points above or below it (approximately ±5°) (Stokes, 1969; White et al., 1970). Our study challenges this viewpoint by reporting perception of eye contact over a narrower window of fixations (approximately 3°). This difference could be explained by methodological differences from the previous literature. Specifically, prior studies did not use measures to ensure the actor was fixating on the intended point before collecting the image or video. Our study used eye tracking to bypass this issue. Additionally, our study used head and chin rests to avoid head movements, tilts, or misalignments that could impact perceived eye contact. A third difference is that, for both in-person and video conference settings, previous studies did not consistently ensure that the eye levels of the actor and perceiver were aligned. This factor could have contributed to the greater tolerance in eye contact perception. Our study circumvented this by aligning the actor's eyes with the center of the camera during image acquisition and by aligning the perceiver's eyes with the eyes of the actor's image during stimulus presentation.
Prior studies attributed this high level of subjective perception of eye contact when the actor looks below the camera to the eye's anatomy (van der Heijden, 1996). When the actor looks above the camera, the position of the iris within the sclera changes noticeably because the upper eyelids move while the lower eyelids remain fixed. However, when the actor looks below the camera, the position of the iris in relation to the sclera does not change noticeably. Previous results are consistent with the snap to contact theory (Chen, 2002), which states that, unless perceivers are certain that the actors are not looking at them, they will bias their perception toward eye contact. That is, owing to the less noticeable shift in the appearance of the sclera when a person looks downward, perceivers will tend to perceive eye contact. A complementary explanation for this larger range of tolerance invokes the cone of equivalent gaze, as seen for horizontal eye movements (Gamer & Hecht, 2007). The cone of gaze represents the range of gaze directions made by a gazer that are classified by the perceiver as engaging in mutual gaze or making eye contact.
Limitations and future work
Although our experiment quantified the gaze offset, in degrees of visual angle, for best perceived eye contact, or mutual gaze, and verified this through the use of an eye tracker, some limitations remain. Our study did not test this effect across combinations of multiple variables, such as the size of monitors, seating distances from the monitor, or the location of the webcam relative to the actor's eyes, nor the impact of these variables on the perception of eye contact. In particular, changing the viewing distance (and consequently the projected size of faces in degrees of visual angle) would likely shift the 2° offset reported here. Additionally, our study focused on front-facing faces; an important extension would be to examine how different head orientations along the horizontal and vertical axes influence perceived gaze direction. Indeed, head orientation and facial features have been known to affect the perception of eye contact since at least the nineteenth century (Wollaston, 1833). A body of research since then has shown how the same pair of eyes can be perceived as looking in different directions, depending on contextual information from facial features or head orientation (Kluttz, Mayes, West, & Kerby, 2009; Langton, 2000). To partially address this issue and assess whether head tilt may have influenced our results, we reviewed our set of actors' images to manually estimate eye misalignment (see Supplementary Figure S1) and applied a model to estimate head pitch (up–down head rotation; see Supplementary Figure S3). To analyze pitch, we fit a 3D morphable model (Guo et al., 2020) to each picture and obtained the three-dimensional facial landmarks. Regressing these landmarks provided the yaw, pitch, and roll for head pose estimation.
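As a sketch of how a rotation can be recovered from landmarks, the following Python example aligns a neutral 3D landmark set to an observed one with the Kabsch algorithm and extracts a pitch angle. This is not the pipeline of Guo et al. (2020); the coordinate frame (x right, y up, z toward the camera) and the Euler decomposition R = Ry(yaw) Rx(pitch) Rz(roll) are assumptions made for illustration.

```python
import numpy as np

def head_rotation(neutral: np.ndarray, observed: np.ndarray) -> np.ndarray:
    """Best-fit rotation (Kabsch algorithm) taking the neutral 3D
    landmarks onto the observed ones; both arrays are (N, 3)."""
    A = neutral - neutral.mean(axis=0)   # mean-center both sets
    B = observed - observed.mean(axis=0)
    U, _, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(U @ Vt))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    return (U @ D @ Vt).T                # observed rows ~ R @ neutral rows

def pitch_deg(R: np.ndarray) -> float:
    """Pitch (nod up/down) in degrees, assuming x right, y up, z toward
    the camera, and the decomposition R = Ry(yaw) @ Rx(pitch) @ Rz(roll)."""
    return float(np.degrees(np.arcsin(-R[1, 2])))
```

Euler-angle conventions differ across face-alignment libraries, so the sign and axis assignments here are illustrative; what matters for the analysis in this section is only the relative ordering of images by pitch.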
Additionally, we compared pictures taken at 20” and 24” (see Supplementary Figure S4). In the case of eye misalignment, when present, it ranged between 1 and 3 arcseconds. In the case of head pitch, the model indicated an approximate 10.5° range of pitch across the images.
We conducted the same analyses as in Figure 3 separately for images with and without observable misalignment. The results for eye alignment, presented in the Supplementary Material S2, revealed no significant difference in eye contact perception bias between these two groups. In contrast, the results for head pitch (S3), in which we used a median split to divide the set of images into low-pitch and high-pitch images, showed a slight difference for the yes condition. In particular, high-pitch pictures seemed to shift perceived eye contact toward larger offsets. However, the absolute accuracy of the model remains uncertain, because calibration against absolute pitch values was not possible. Although pitch may have some effect, this influence appears to be noisy and was only observable in the proportion of yes responses, with no corresponding effect on the proportion of up responses. Further, the model has a typical error of around 9° and was not trained on images of people in chin rests, which both occlude and distort landmarks the model relies on; this may introduce additional error and possibly a systematic bias in the pitch estimates.
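The median-split grouping used above can be sketched in a few lines of Python; the pitch values below are hypothetical, for illustration only.

```python
import numpy as np

def median_split(values: np.ndarray):
    """Boolean masks for a median split; ties go to the low group."""
    m = np.median(values)
    low = values <= m
    return low, ~low

# Hypothetical per-image pitch estimates (degrees), for illustration.
pitch = np.array([2.1, 5.3, -1.0, 8.7, 3.3, 6.0, 0.4, 9.9])
low, high = median_split(pitch)
# The Figure 3 analyses would then be re-run separately on the
# pictures in each group and the fitted peaks compared.
```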
These analyses partially address evidence from the Wollaston effect that the same eye alignment can lead to different perceptions of eye contact depending on contextual elements such as head orientation (which our head and chin rest setup minimized) or facial features. To fully understand how these aspects interact with fixation offset, however, a more complex experimental design would be necessary. Emerging technologies, such as artificial avatar heads, which allow precise control over parameters including three-dimensional head rotation and eye offset, offer an ideal framework for investigating these effects.
We further addressed potential differences across pictures beyond our controlled variables by conducting our analyses on two subgroups of pictures, divided by viewing distance (20” vs. 24”). Results, shown in the Supplementary Material S4, again indicate no significant difference between the two viewing distances.
Additionally, our experiment focused on still images. These images do not capture the entire possible set of gazes during actual conversation in a video-mediated setting, nor do they capture which set of gazes would be perceived as eye contact in this scenario. Further, we do not quantify how long participants want to engage in mutual gaze throughout a conversation. A potential future study could engage two actors in a short, scripted conversation and use an eye tracker to verify their gaze. Then, after the conversation, researchers could ask them for how long they felt the other actor was making eye contact. This would help us to better understand how people perceive gaze cues throughout video interaction. Finally, in our experimental setup, participants were asked to examine the image until they had made their judgment of eye contact (or lack thereof). We did not instruct them to fixate on a specific portion of the picture (e.g., the actor's eyes), nor did we monitor eye movements during this judgment. Future studies might incorporate an oculomotor control component to determine whether fixation location influences the perception of eye contact.
Conclusions
Our result that perceived eye contact through a camera occurs when a user looks approximately 2° below the camera, at least for 20” to 24” distances from the camera, has a variety of useful applications. First, we can facilitate photographs with better perceived eye contact by instructing actors to gaze at the optimal point. This includes selfies, which are typically shot at arm's length, similar to the distance between the camera and the actor in our study. Second, we can improve gaze cues in video conferencing systems. Leveraging the fact that the point of best perceived eye contact is slightly below the webcam, future gaze correction systems could redirect participants’ gazes correspondingly in the vertical axis, allowing for the optimal perception of eye contact.
Supplementary Material
Acknowledgments
The authors thank Mohammad Dastgheib for permission to utilize his images in the figures of this article.
Data are available at https://osf.io/85zvu/?view_only=8d0e74dd80c049a080d437aacc96f45a.
Commercial relationships: none.
Corresponding authors: Steven M. Seitz; Aaron R. Seitz.
Emails: seitz@cs.washington.edu; a.seitz@northeastern.edu.
Address: Department of Psychology, 360 Huntington Ave., Northeastern University, Boston, MA 02115, USA.
Footnotes
The general guideline is to have the monitor 20 to 40 inches away from the eyes.
References
- Andrist, S., Gleicher, M., & Mutlu, B. (2017). Looking coordinated: Bidirectional gaze mechanisms for collaborative interaction with virtual characters. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 2571–2582). New York, NY: ACM.
- Anstis, S. M., Mayhew, J. W., & Morley, T. (1969). The perception of where a face or television “portrait” is looking. American Journal of Psychology, 82(4), 474.
- Argyle, M., Cook, M., & Cramer, D. (1994). Gaze and mutual gaze. British Journal of Psychiatry, 165(6), 848–850.
- Baron-Cohen, S., Campbell, R., Karmiloff-Smith, A., Grant, J., & Walker, J. (1995). Are children with autism blind to the mentalistic significance of the eyes? British Journal of Developmental Psychology, 13(4), 379–398.
- Bayliss, A. P., Bartlett, J., Naughtin, C. K., & Kritikos, A. (2011). A direct link between gaze perception and social attention. Journal of Experimental Psychology: Human Perception and Performance, 37(3), 634–644.
- Bohannon, L. S., Herbert, A. M., Pelz, J. B., & Rantanen, E. M. (2013). Eye contact and video-mediated communication: A review. Displays, 34(2), 177–185.
- Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10(4), 433–436.
- Calder, A. J. (2003). Disgust discussed. Annals of Neurology, 53(4), 427–428.
- Chen, M. (2002). Leveraging the asymmetric sensitivity of eye contact for videoconference. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 49–56). New York, NY: ACM.
- Cline, M. G. (1967). The perception of where a person is looking. American Journal of Psychology, 80(1), 41–50.
- Emery, N. J. (2000). The eyes have it: The neuroethology, function and evolution of social gaze. Neuroscience and Biobehavioral Reviews, 24(6), 581–604.
- Ewbank, M. P., Jennings, C., & Calder, A. J. (2009). Why are you angry with me? Facial expressions of threat influence perception of gaze direction. Journal of Vision, 9(12), 16.1–7.
- Gale, C., & Monk, A. F. (2000). Where am I looking? The accuracy of video-mediated gaze awareness. Perception & Psychophysics, 62(3), 586–595.
- Gamer, M., & Hecht, H. (2007). Are you looking at me? Measuring the cone of gaze. Journal of Experimental Psychology: Human Perception and Performance, 33(3), 705–715.
- Gibson, J. J., & Pick, A. D. (1963). Perception of another person's looking behavior. American Journal of Psychology, 76(3), 386.
- Grossmann, T. (2017). The eyes as windows into other minds. Perspectives on Psychological Science, 12(1), 107–121.
- Guo, J., Zhu, X., Yang, Y., Yang, F., Lei, Z., & Li, S. Z. (2020). Towards fast, accurate and stable 3D dense face alignment. arXiv:2009.09960.
- van der Heijden, A. H. C. (1996). Two stages in visual information processing and visual perception? Visual Cognition, 3(4), 325–362.
- Hessels, R. S. (2020). How does gaze to faces support face-to-face interaction? A review and perspective. Psychonomic Bulletin & Review, 27(5), 856–881.
- Hietanen, J. K. (2018). Affective eye contact: An integrative review. Frontiers in Psychology, 9, 1587.
- Horstmann, G., & Linke, L. (2022). Perception of direct gaze in a video-conference setting: The effects of position and size. Cognitive Research: Principles and Implications, 7(1), 67.
- Jongerius, C., Hessels, R. S., Romijn, J. A., Smets, E. M. A., & Hillen, M. A. (2020). The measurement of eye contact in human interactions: A scoping review. Journal of Nonverbal Behavior, 44, 363–389.
- Kleinke, C. L. (1986). Gaze and eye contact: A research review. Psychological Bulletin, 100(1), 78–100.
- Kluttz, N. L., Mayes, B. R., West, R. W., & Kerby, D. S. (2009). The effect of head turn on the perception of gaze. Vision Research, 49(15), 1979–1993.
- Knight, D. J., Langmeyer, D., & Lundgren, D. C. (1973). Eye-contact, distance, and affiliation: The role of observer bias. Sociometry, 36(3), 390.
- Krüger, K., & Hückstedt, B. (1969). Evaluation of the direction of gazing. Zeitschrift für experimentelle und angewandte Psychologie, 16(3), 452–472.
- Langton, S. R. H. (2000). The mutual influence of gaze and head orientation in the analysis of social attention direction. Quarterly Journal of Experimental Psychology A, 53(3), 825–845.
- Mareschal, I., Calder, A. J., & Clifford, C. W. G. (2013). Humans have an expectation that gaze is directed toward them. Current Biology, 23(8), 717–721.
- McKelvie, S. J. (1976). The role of eyes and mouth in the memory of a face. American Journal of Psychology, 89(2), 311.
- Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10(4), 437–442.
- Peterson, M. F., & Eckstein, M. P. (2012). Looking just below the eyes is optimal across face recognition tasks. Proceedings of the National Academy of Sciences of the United States of America, 109(48), E3314–E3323.
- Schmitz, I., & Einhäuser, W. (2023). Gaze estimation in videoconferencing settings. Computers in Human Behavior, 139, 107517.
- Stephenson, L. J., Edwards, S. G., & Bayliss, A. P. (2021). From gaze perception to social cognition: The shared-attention system. Perspectives on Psychological Science, 16(3), 553–576.
- Stokes, R. (1969). Human factors and appearance design considerations of the mod II PICTUREPHONE station set. IEEE Transactions on Communications, 17(2), 318–323.
- White, J. H., Hegarty, J. R., & Beasley, N. A. (1970). Eye contact and observer bias: A research note. British Journal of Psychology, 61(2), 271–273.
- Wollaston, W. H. (1833). On the apparent direction of eyes in a portrait. Abstracts of the Papers Communicated to the Royal Society of London, 2, 214–215.