Abstract
Purpose
The objective of this study was to assess how teleconferencing variables influence faculty impressions of mock residency applicants.
Methods
In October 2020, we conducted an online experiment studying five teleconferencing variables: background, lighting, eye contact, internet connectivity, and audio quality. We created interview videos of three mock residency applicants and systematically modified variables in control and intervention conditions. Faculty viewed the videos and rated their immediate impression on a 1–10 scale. The effect of each variable was measured as the mean difference between the intervention and control impression ratings. One-way analysis of variance (ANOVA) was performed to assess whether ratings varied across applicants. Paired-samples Wilcoxon signed-rank tests were conducted to assess the significance of the effect of each variable.
Results
Of 711 faculty members who were emailed a link to the experiment, 97 participated (13.6%). The mean ratings for control videos were 8.1, 7.2, and 7.6 (P < .01). Videos with backlighting, off-center eye contact, choppy internet connectivity, or muffled audio quality had lower ratings when compared with control videos (P < .01). There was no rating difference between home and conference room backgrounds (P = .77). Many faculty participants reported that their immediate impressions were very much or extremely influenced by audio quality (60%), eye contact (57%), and internet connectivity (49%).
Conclusions
Teleconferencing variables may serve as a source of assessment bias during residency interviews. Mock residency applicants received significantly lower ratings when they had off-center eye contact, muffled audio, or choppy internet connectivity, compared to optimal teleconferencing conditions.
Supplementary Information
The online version contains supplementary material available at 10.1007/s44186-022-00053-w.
Keywords: Videoconferencing, Residency interviews, Medical education, Personnel selection, Technology
Introduction
The COVID-19 pandemic prompted the widespread adoption of virtual interviews by many residency programs during the 2020–2021 and 2021–2022 residency recruitment cycles [1, 2]. For the 2022–2023 application cycle, the Association of American Medical Colleges (AAMC) is again recommending that residency programs interview all applicants virtually [3]. Virtual video interviews have introduced new dimensions to applicant self-presentation, now mediated by variables absent from in-person encounters, such as lighting and audiovisual quality. Consequently, teleconferencing variables have been hypothesized to affect the impression applicants give to interviewers [4] during an essential component of the resident selection process [5–7].
The resident selection process is biased by a range of applicant attributes [8–10]. Applicant appearance and grooming influence perceptions of social skills and likeability [11–13], while a person’s appearance, level of eye contact, and voice have been shown to influence perceptions of extraversion and intelligence [14–16]. However, with virtual video interviews, the applicant’s camera, microphone, and internet connection can distort the applicant’s appearance, voice, and self-presentation, potentially introducing additional sources of bias.
As such, many organizations, including the AAMC, have published recommendations to guide applicants and programs on optimizing their virtual interview experiences [17–19]. Applicants are encouraged to adjust teleconferencing variables such as lighting, background environment, and audiovisual quality. While the importance of these variables is widely recognized, and audiovisual quality has been shown to affect perceptions of hirability in human resources research [20], the significance of these variables has not been demonstrated within the field of medical education and training. As virtual interviews will continue to be utilized during residency recruitment, it is crucial to assess how teleconferencing variables influence interviewer impressions.
In this study, we sought to measure the effect of teleconferencing variables on faculty members’ immediate impressions of mock residency applicants and to assess faculty perceptions of how much these teleconferencing variables influenced their impressions of applicants.
Methods
We conducted an online experiment and survey in October 2020 at a large academic institution. During the experiment, faculty viewed pre-recorded videos of mock residency applicants and rated their immediate impression of the applicants. Teleconferencing variables were systematically altered between control and intervention conditions to assess the influence of each variable on ratings.
Videos of mock applicants
To create applicant videos, we recruited three medical students to act as mock residency applicants. The three actors identified as Asian, Latina, and Black American. To control for gender, professional appearance, and interview content, all actors were female, wore business attire, and recited pre-written scripts describing a research activity. Scripts were reviewed by all authors and followed a two-line structure. The first line was a simple observation or fact related to the research topic. The second line described a research activity related to that observation. An example script was, “Certain bedside procedures are performed emergently and have a higher rate of complications. I conducted a literature review to identify validated assessment tools that could be implemented for common bedside procedures.” Scripts varied in topic and field of study and were revised to be emotionally neutral and approximately equivalent in time and effort needed to conduct the research activity. Video interviews were recorded on the Zoom teleconferencing platform (Zoom Video Communications, San Jose, CA) with a Microsoft Surface Pro 4 laptop and high-quality plug-in microphone.
We studied five teleconferencing variables: background environment, lighting, eye contact, internet connectivity, and audio quality. We selected these variables as they were primarily technology-related rather than interviewee-related and were featured across virtual interview guides. Each variable had a control and an intervention condition (Table 1). The control background was a conference room, and the intervention was the actor’s home background. Home backgrounds were clean and contained household items including a plant, a landscape painting, or cabinets. We excluded the use of virtual backgrounds as several guidelines recommended neutral, bland, or professional backgrounds but did not explicitly recommend virtual backgrounds. The “blur background” tool was not an option at the time of the study. Furthermore, we did not study messy environments as the focus was on comparing two highly plausible environments rather than the quality or cleanliness of the background. The control lighting condition was front lighting, and the intervention was backlighting simulated by installing a flood light behind the actor. The control condition for eye contact was direct eye contact with the camera, so actors appeared to be looking at the interviewer. The intervention condition was off-center eye contact with actors looking at the right lower section of the screen. The control internet connectivity was uninterrupted and smooth, while the intervention was choppy internet with audiovisual lag created by connecting the laptop to a weak Wi-Fi signal. The control audio condition was clear, and the intervention condition was muffled audio simulated by directing the microphone away from the actor. The actor’s own voice was used and was not artificially modified for the study.
Table 1.
Simulation of teleconferencing variable control and intervention conditions
| Variable | Control condition | Intervention condition | Simulation method |
|---|---|---|---|
| Background | Conference room | Home | Hospital conference room versus actor’s home |
| Lighting | Front lighting | Backlighting | Light in front of versus behind the actor |
| Eye contact | Direct eye contact | Off-center eye contact | Actor looked into the camera versus the right lower quadrant of the screen |
| Internet connectivity | Smooth, uninterrupted | Choppy with audiovisual lag | Wi-Fi hotspot close versus far from the laptop |
| Audio quality | Clear | Muffled | Microphone directed toward versus away from the actor |
Each actor recorded one control video and five intervention videos. The control video had all teleconferencing variables in the control condition. Each of the five intervention videos had one variable in the intervention condition and the remaining variables in the control condition. This allowed us to isolate the effect of each variable during data analysis. Each video was edited to 15 s. Although videos were far shorter than the length of a typical residency interview, we chose this video length to elicit immediate impressions of the applicants and to minimize study participant fatigue.
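For illustration, this single-variable design can be summarized as a condition matrix. The R sketch below (R being the language used for our analyses) is a hypothetical reconstruction; the row and column names are illustrative, not taken from study materials.

```r
# Hypothetical sketch of the per-actor video plan described above: one
# all-control video plus five intervention videos, each flipping exactly
# one variable into its intervention condition (TRUE = intervention).
variables <- c("background", "lighting", "eye_contact",
               "internet_connectivity", "audio_quality")
plan <- rbind(control = rep(FALSE, 5), diag(5) == 1)
dimnames(plan) <- list(c("control", paste0("intervention_", variables)),
                       variables)
plan  # 6 videos per actor; 3 actors yield 18 videos in total
```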
Experimental design
Faculty members in 14 clinical departments were recruited to participate via departmental email lists. The recruitment email included a study description and a weblink to the study hosted on the Qualtrics online survey platform (Qualtrics Inc., Provo, UT). Participants were informed that the researchers sought to assess how applicants were rated in the virtual setting, but were not informed of the variables being studied or that variables were modified in the videos.
In the first part of the study, faculty viewed the videos and rated their immediate impression of the applicant on a 1–10 scale, where 10 was the highest rating possible. We chose immediate impression as the outcome measure because it seemed the most appropriate global measure of the subjective rating formed when viewing a brief video of an applicant. Faculty rated each video immediately after watching it. Faculty first viewed and rated the three control videos in random order to establish baseline ratings of the applicants under control conditions. Then, they viewed and rated the 15 intervention videos in random order.
In the second part of the study, faculty completed a survey in which they provided demographic data and used a 5-point Likert scale to report the degree to which they perceived each variable to have influenced their ratings of the applicants in the study. The experiment and survey instrument are viewable in Online Resource 1. Videos were omitted to preserve student privacy.
Study measures
The primary outcome was the change in ratings from baseline attributable to each variable. For each applicant, faculty ratings were averaged to generate a mean rating. Mean baseline ratings were compared across applicants and found to be significantly different. Thus, to control for this observed baseline difference across applicants, we calculated the rating difference between intervention and control videos, with each applicant serving as their own control. The mean rating difference for a variable was the mean point difference between each faculty member’s ratings of an applicant’s control video and the intervention video for that variable. A negative value indicated that the applicant was rated lower by faculty when the variable was in the intervention condition than in the control condition.
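A minimal sketch of this computation in R, assuming a hypothetical long-format data frame with one row per rater, applicant, and video; the column names and simulated ratings below are illustrative assumptions, not the study data or the authors’ code.

```r
library(dplyr)
library(tidyr)

# Illustrative stand-in for the study data: one row per (rater, applicant,
# video); column names are assumptions and ratings are simulated.
set.seed(42)
videos <- c("control", "background", "lighting", "eye_contact",
            "internet", "audio")
ratings <- expand.grid(rater = 1:97, applicant = 1:3, video = videos)
ratings$rating <- pmin(10, pmax(1, round(rnorm(nrow(ratings), 7.5, 1))))

# Each applicant serves as their own control: subtract a rater's control
# rating of an applicant from that rater's intervention ratings.
rating_diff <- ratings %>%
  pivot_wider(id_cols = c(rater, applicant),
              names_from = video, values_from = rating) %>%
  mutate(across(all_of(videos[-1]), ~ .x - control, .names = "diff_{.col}"))

# Mean rating difference per variable, averaged over raters and applicants;
# negative values indicate lower ratings in the intervention condition.
summarise(rating_diff, across(starts_with("diff_"), mean))
```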
The secondary outcome was the degree to which faculty participants perceived each teleconferencing variable to influence their ratings. We calculated the proportion of faculty who reported being very much or extremely influenced by each variable on the survey. We then conducted a correlational analysis between faculty rating differences and the perceived degree of influence of each variable to assess whether faculty had insight into how their ratings may have been affected by the variables studied.
Statistical analysis
We reported study participant demographics with the mean and standard deviation (SD) for parametric data and the median and interquartile range (IQR) for non-parametric data. We performed one-way analysis of variance (ANOVA) to compare ratings of applicants at baseline and for each video type. We conducted paired-samples Wilcoxon signed-rank tests to determine whether the applicants were rated worse when each variable was in the intervention condition. We reported mean rating differences with the 95% confidence interval (CI) for each teleconferencing variable. We used descriptive statistics to report the proportion of faculty who perceived each teleconferencing variable to have very much or extremely influenced their ratings. We reported Pearson’s correlation coefficient for the association between faculty rating differences and perceived influence of each variable, with a negative correlation coefficient indicating a greater (more negative) rating difference with increased perceived influence of the variable.
For all statistical tests, a P value of < 0.05 was considered significant. All statistical analyses were performed in R using RStudio (v.1.3.959; RStudio, Boston, MA).
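The following base R sketch illustrates the tests named above on simulated stand-in data. The aov, wilcox.test, t.test, and cor.test calls are standard base R; using t.test to obtain the 95% CI of the mean difference is our assumption, as the manuscript does not specify the CI method.

```r
# Hedged sketch of the analyses described above, on simulated data that
# stand in for the study ratings; none of this is the authors' code.
set.seed(1)
n <- 97  # raters
ctrl  <- replicate(3, pmin(10, pmax(1, rnorm(n, 7.6, 1))))  # control ratings
audio <- ctrl - 1.6 + rnorm(n * 3, sd = 0.5)                # muffled-audio videos

# One-way ANOVA: do control ratings differ across the three applicants?
baseline <- data.frame(rating = as.vector(ctrl),
                       applicant = factor(rep(1:3, each = n)))
summary(aov(rating ~ applicant, data = baseline))

# Paired-samples Wilcoxon signed-rank test: intervention vs. control ratings
wilcox.test(as.vector(audio), as.vector(ctrl), paired = TRUE)

# Mean rating difference with a 95% CI (one plausible approach)
t.test(as.vector(audio) - as.vector(ctrl))

# Pearson correlation between each rater's mean rating difference and that
# rater's perceived influence of the variable (5-point Likert, simulated)
diff_by_rater <- rowMeans(audio - ctrl)
influence <- sample(1:5, n, replace = TRUE)
cor.test(diff_by_rater, influence, method = "pearson")
```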
Results
Of 711 faculty members who were emailed the link to participate, 97 (13.6%) completed the study. Of the 97 participants, 44 (45%) identified as female and 52 (54%) as White. The mean participant age was 46 years (SD: 11), and the median number of years after completion of clinical training was 10 (IQR: 4–17). The median number of years of experience conducting residency interviews was 5 (IQR: 1–12). Additional participant demographics are described in Table 2.
Table 2.
Demographic characteristics of faculty study participants (N = 97)
| Characteristic | Faculty respondents (N = 97) |
|---|---|
| Sex, no. (%) | |
| Male | 50 (52) |
| Female | 44 (45) |
| Prefer not to answer | 3 (3) |
| Age in years, mean (SD) | 46 (11) |
| Race, no. (%) | |
| White American | 52 (54) |
| Asian American | 20 (21) |
| Black or African American | 2 (2) |
| American Indian and Alaska Native | 0 (0) |
| Other | 13 (13) |
| Prefer not to answer | 10 (10) |
| Ethnicity, no. (%) | |
| Hispanic or Latinx | 4 (4) |
| Not Hispanic or Latinx | 89 (92) |
| Prefer not to answer | 4 (4) |
| Specialty, no. (%) | |
| Medicine | 28 (29) |
| Surgery | 17 (18) |
| Anesthesiology | 10 (10) |
| Radiology | 8 (8) |
| Pediatrics | 5 (5) |
| Ophthalmology | 5 (5) |
| Obstetrics/gynecology | 4 (4) |
| Neurology | 3 (3) |
| Urology | 3 (3) |
| Otolaryngology | 3 (3) |
| Emergency medicine | 2 (2) |
| Orthopedic surgery | 2 (2) |
| Family medicine | 1 (1) |
| Psychiatry | 1 (1) |
| Other | 5 (5) |
| Years as attending, median (IQR) | 10 (4–17) |
| Years involved in residency interviews, median (IQR) | 5 (1–12) |
IQR interquartile range, SD standard deviation
Mean ratings across applicants
The mean baseline control ratings for the three applicants were 8.1, 7.2, and 7.6 (P < 0.01). Mean ratings were significantly different across applicants for lighting (P = 0.04), eye contact (P = 0.03), internet connectivity (P < 0.01), and audio quality (P < 0.01). Mean ratings were similar for background environment (P = 0.20) (Table 3).
Table 3.
Comparison of impression ratings across mock residency applicants for control and intervention videos
| Video | Applicant 1 | Applicant 2 | Applicant 3 | P value^a |
|---|---|---|---|---|
| Control video | 8.1 | 7.2 | 7.6 | < .01 |
| Intervention videos | | | | |
| Home environment | 7.4 | 7.7 | 7.6 | .20 |
| Backlighting | 7.5 | 7.0 | 7.2 | .04 |
| Off-center eye contact | 7.2 | 6.8 | 6.5 | .03 |
| Choppy internet | 7.1 | 6.6 | 6.1 | < .01 |
| Muffled audio | 6.5 | 6.2 | 5.4 | < .01 |

^a P values were determined through one-way analysis of variance (ANOVA) comparing mean ratings across applicants for the same video type
Mean rating differences by teleconferencing variable
Applicants with backlighting, off-center eye contact, choppy internet connectivity, or muffled audio quality received significantly lower ratings than under control conditions (P < 0.01) (Fig. 1). Videos with muffled audio compared with clear audio resulted in a difference of −1.6 points (95% CI: −1.8 to −1.3) on the 10-point rating scale. Choppy internet connectivity and off-center eye contact resulted in modest differences of −0.9 points (95% CI: −1.1 to −0.7) and −0.8 points (95% CI: −1.0 to −0.6), respectively. Backlighting was associated with a small difference of −0.4 points (95% CI: −0.6 to −0.2). There was no difference in ratings of applicants with a home background versus the conference room background (P = 0.77).
Fig. 1.
Mean difference in faculty ratings by teleconferencing variable. Faculty (N = 97) rated their immediate impressions of applicants on a 1–10 scale, where 10 points was the highest rating possible. This figure depicts the mean rating difference between the intervention and control videos for each teleconferencing variable. Negative values indicate lower ratings when the teleconferencing variable was in the intervention condition versus the control condition. Error bars depict the 95% confidence interval (CI) for the mean rating difference attributable to each intervention variable
Faculty perceptions of the influence of teleconferencing variables
Of 97 faculty participants, 59 (60%) reported that their perceptions of applicants were very much or extremely influenced by audio quality, 55 (57%) by eye contact, and 47 (49%) by internet connectivity. For background environment and lighting, 20 (21%) of the faculty reported being very much or extremely influenced by these variables while rating applicants (Fig. 2). Faculty rating differences were significantly associated with the perceived degree of influence of audio quality (r = −0.49), eye contact (r = −0.42), background environment (r = −0.37), lighting (r = −0.33), and internet connectivity (r = −0.33) (P < 0.01).
Fig. 2.
Faculty perceptions of the influence of teleconferencing factors on their ratings of mock residency applicants. On a survey administered after the experiment, faculty participants indicated the degree to which they perceived their immediate impressions of the mock applicants were influenced by each of the teleconferencing variables in the study
Discussion
In this study, faculty members rated mock residency applicants substantially lower in videos with off-center eye contact, muffled audio, or choppy internet connectivity. Backlighting was associated with marginally lower ratings, while a home background environment was not associated with a rating difference. Faculty members’ perceived influence of the teleconferencing variables on their ratings was consistent with how they collectively rated the applicants.
Applicant ratings during virtual interviews may reflect teleconferencing conditions to some degree, in addition to traditional measures of applicant quality such as merit, character, and interpersonal communication skills. Prior studies have demonstrated that people infer personal attributes through nonverbal cues [15, 21, 22], with vocal pitch and amplitude affecting perceptions of speaker confidence [23, 24], and poor eye contact and halting speech resulting in impressions of lower intelligence [16]. However, during virtual interviews, faulty equipment or unstable internet can distort these cues and influence interviewer impressions of applicants. In this study, off-center eye contact and suboptimal audiovisual quality resulted in significantly lower ratings despite standardized interview content, suggesting that immediate impressions of applicants may now also be shaped by teleconferencing variables distinct from applicant merit.
Our results highlight the potential for applicants with suboptimal teleconferencing conditions to be rated, and subsequently ranked, lower by faculty interviewers during resident selection. The effect of teleconferencing conditions on interviewer perceptions has been previously demonstrated by Fiechter et al., who found that mock job applicants with poor audiovisual quality during virtual interviews were considered less hirable for a legal secretary position [20]. The current study extends this phenomenon to residency applicants and cautions that they are not immune to this effect and may be penalized for non-optimized teleconferencing conditions. In addition, by demonstrating that some teleconferencing variables have quantifiable and substantial effects on interviewer impressions, our data lend validity to existing recommendations for applicants to simulate direct eye contact, minimize ambient noise, and use wired Ethernet connections to improve internet connectivity [18, 25, 26]. For applicants with limited resources, these data also provide guidance as to which teleconferencing variables within their control should be prioritized when optimizing conditions for virtual interviews [27].
The transition to virtual interviews provides an extraordinary opportunity to decrease the financial burden of residency applications and reduce social and financial inequities, yet it may also amplify interviewer implicit biases. When surveyed in this study, faculty participants recognized the influence of teleconferencing variables on their ratings of the mock applicants, a perception consistent with the observed rating differences. In a prior study, Fiechter et al. found that even when observers were instructed to ignore audiovisual quality when rating applicants, they still rated applicants with poor audiovisual quality as less hirable [20]. Combined with our results, these findings suggest that strategies aimed solely at informing faculty of their implicit biases and instructing them to ignore technical issues during virtual interviews may be necessary, but insufficient, to mitigate negative perceptions. While faculty members appropriately recognized their rating biases and the perceived influence of these teleconferencing variables, it is unclear whether faculty will account for these biases in future interview settings. Importantly, applicants from low-income households may face disproportionate disadvantage from this additional source of bias if they are unable to optimize their teleconferencing conditions due to a lack of material resources, including access to high-quality equipment, high-speed internet, and physical space.
Medical schools and residency programs can proactively address issues related to teleconferencing during residency interviews by providing trainees with private spaces with reliable internet connections, opportunities for trial interviews, and access to rental of high-quality equipment, especially for resource-limited applicants [8, 28]. Programs may also consider measuring the frequency and severity of teleconferencing problems during virtual residency interviews and evaluating their influence on resident selection outcomes. Another strategy is to ask interviewers to systematically report the teleconferencing issues they experienced and to calibrate applicant ratings accordingly; more influential variables, such as audio quality and internet connectivity, would be weighted more heavily based on the magnitudes reported in this study. However, the reception and efficacy of these interventions would require studies of implementation, acceptability, and validity.
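As a purely hypothetical illustration of such calibration, the R sketch below adds back the mean rating penalties observed in this study when an interviewer reports a given issue; the function and weights are illustrative only, not a validated adjustment procedure.

```r
# Hypothetical calibration sketch: add back the mean rating penalty
# observed in this study for each teleconferencing issue an interviewer
# reports. Illustrative only; not a validated procedure.
penalty <- c(audio = 1.6, internet = 0.9, eye_contact = 0.8,
             lighting = 0.4, background = 0.0)  # magnitudes from this study

calibrate <- function(raw_rating, reported_issues) {
  # cap the adjusted rating at the scale maximum of 10
  min(10, raw_rating + sum(penalty[reported_issues]))
}

calibrate(6.5, c("audio", "internet"))  # 6.5 + 1.6 + 0.9 = 9.0
```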
Lastly, despite selecting actors who all identified as non-White and controlling for gender, video duration, and teleconferencing conditions, mock applicants received significantly different ratings under control conditions, suggesting that some intrinsic applicant features affected interviewer impressions. Unmeasured variables such as vocal inflection, timbre, and appearance [15, 23, 24] may have influenced how the applicants were perceived, and facial expression or grooming may have contributed to baseline rating discrepancies. Additionally, although we carefully selected each actor’s home environment to be tidy and neutral in appearance, the environments were not identical. Although script content was revised to be approximately equivalent, there could have been differences in how raters interpreted each script. Furthermore, although all applicants identified as non-White, they identified with three distinct racial and ethnic backgrounds. Thus, despite controlling for potential confounders, faculty impressions appear to have been influenced by variables beyond gender, professional appearance, and interview content.
A limitation of this study was that it included only female actors, and teleconferencing variables may influence ratings differently when applied to male applicants [26, 29]. In addition, we used a non-targeted recruitment strategy, resulting in a convenience sample of faculty participants and a low response rate. However, the sample included faculty from a wide range of specialties, and a majority reported involvement in the resident selection process. Additionally, the use of video recordings in the study design did not allow for two-way or real-time communication between the mock applicant and the faculty participant, which limited the participant’s ability to assess the applicant on different domains of interest. The brevity of the videos also limited our ability to assess whether the negative effects of teleconferencing variables can be overcome or exacerbated by extended exposure. However, studies in psychology have demonstrated that impressions are influenced more by the extremes than the averages of an experience [30–32], suggesting that infrequent but unpleasant teleconferencing issues may nevertheless have a negative influence on overall impressions.
Conclusions
Teleconferencing issues may significantly influence how residency applicants are perceived and rated during online interviews. In this study, audio quality, internet connectivity, and eye contact had the largest effects on interviewers’ immediate impressions in an experimental setting, and when surveyed, faculty appropriately recognized that teleconferencing variables had interfered with their ratings of applicants. These results provide objective data on the magnitude of the effect of teleconferencing issues as a basis for interventions to offset interviewer bias.
Teleconferencing issues are inevitable, yet teleconferencing variables are typically not the criteria with which programs seek to select their residents. As virtual interviews will continue for the 2022–2023 residency application cycle and likely beyond, further research efforts should identify strategies to mitigate bias generated by teleconferencing and evaluate on a national scale whether and how teleconferencing conditions affect applicant ratings in practice.
Supplementary Information
Below is the link to the electronic supplementary material.
Acknowledgements
The authors wish to thank Georgina Dominique, Mildred Galvez, and Ming-Yeah Hu for acting as mock residency applicants in this study. The authors further wish to thank Dr. Clarence Braddock, M.D., M.P.H., the University of California Los Angeles Vice Dean for Medical Education, for his support of this study.
Author contributions
All authors contributed to the study conception and design. Material preparation, data collection, and analysis were performed in collaboration by IAH, YD, AJC, JW, JPW, AT, and FC. The first draft of the manuscript was written by IAH, YD, and FC and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Funding
The authors did not receive support from any organization for the submitted work.
Data availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Declarations
Conflict of interest
On behalf of all authors, the corresponding author states that the authors have no relevant financial or non-financial interests to disclose.
Ethical approval
This study was performed in line with the principles of the Declaration of Helsinki. The study was reviewed and granted an exemption by the University of California Los Angeles Institutional Review Board (#20-001623).
Consent to participate
Informed consent was obtained from all individual participants included in the study.
Footnotes
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Final report and recommendations for medical education institutions of LCME-accredited, U.S. osteopathic, and non-U.S. medical school applicants. The Coalition for Physician Accountability. 2020. https://physicianaccountability.org/wp-content/uploads/2020/05/Workgroup-C-2020.05.06-Final-Recommendations_Final.pdf. Accessed 10 Dec 2020.
2. Recommendations on 2021–22 residency season interviewing for medical education institutions considering applicants from LCME-accredited, U.S. osteopathic, and non-U.S. medical schools. The Coalition for Physician Accountability. 2021. https://physicianaccountability.org/wp-content/uploads/2021/08/Virtual-Rec_COVID-Only_Final.pdf. Accessed 13 Sep 2021.
3. AAMC interview guidance for the 2022–2023 residency cycle. Association of American Medical Colleges. 2022. https://www.aamc.org/what-we-do/mission-areas/medical-education/aamc-interview-guidance-2022-2023-residency-cycle. Accessed 17 May 2022.
4. McColl R, Michelotti M. Sorry, could you repeat the question? Exploring video-interview recruitment practice in HRM. Hum Resour Manag J. 2019;29(4):637–56. 10.1111/1748-8583.12249.
5. Katzung KG, Ankel F, Clark M, et al. What do program directors look for in an applicant? J Emerg Med. 2019;56(5):e95–101. 10.1016/j.jemermed.2019.01.010.
6. Makdisi G, Takeuchi T, Rodriguez J, Rucinski J, Wise L. How we select our residents—a survey of selection criteria in general surgery residents. J Surg Educ. 2011;68(1):67–72. 10.1016/j.jsurg.2010.10.003.
7. Stephenson-Famy A, Houmard BS, Oberoi S, Manyak A, Chiang S, Kim S. Use of the interview in resident candidate selection: a review of the literature. J Grad Med Educ. 2015;7(4):539. 10.4300/JGME-D-14-00236.1.
8. Nwora C, Allred DB, Verduzco-Gutierrez M. Mitigating bias in virtual interviews for applicants who are underrepresented in medicine. J Natl Med Assoc. 2021;113(1):74–6. 10.1016/j.jnma.2020.07.011.
9. Kiraly L, Dewey E, Brasel K. Hawks and doves: adjusting for bias in residency interview scoring. J Surg Educ. 2020;77(6):e132–7. 10.1016/j.jsurg.2020.08.013.
10. Edmond MB, Deschenes JL, Eckler M, Wenzel RP. Racial bias in using USMLE step 1 scores to grant internal medicine residency interviews. Acad Med. 2001;76(12):1253–6. 10.1097/00001888-200112000-00021.
11. Maxfield CM, Thorpe MP, Desser TS, et al. Bias in radiology resident selection: do we discriminate against the obese and unattractive? Acad Med. 2019;94(11):1774–80. 10.1097/ACM.0000000000002813.
12. Kassam A-F, Cortez AR, Winer LK, et al. Swipe right for surgical residency: exploring the unconscious bias in resident selection. Surgery. 2020;168(4):724–9. 10.1016/j.surg.2020.05.029.
13. Corcimaru A, Morrell MC, Morrell DS. Do looks matter? The role of the electronic residency application service photograph in dermatology residency selection. Dermatol Online J. 2018. 10.5070/D3244039354.
14. DeGroot T, Gooty J. Can nonverbal cues be used to make meaningful personality attributions in employment interviews? J Bus Psychol. 2009;24(2):179–92. 10.1007/s10869-009-9098-0.
15. Naumann LP, Vazire S, Rentfrow P, Gosling S. Personality judgments based on physical appearance. Pers Soc Psychol B. 2009;35(12):1661–71. 10.1177/0146167209346309.
16. Borkenau P, Liebler A. Observable attributes as manifestations and cues of personality and intelligence. J Pers. 1995;63(1):1–25. 10.1111/j.1467-6494.1995.tb00799.x.
17. Virtual interviews: tips for medical school applicants. Association of American Medical Colleges. 2020. https://www.aamc.org/media/44841/download. Accessed 1 Dec 2020.
18. Williams K, Kling JM, Labonte HR, Blair JE. Videoconference interviewing: tips for success. J Grad Med Educ. 2015;7(3):331. 10.4300/JGME-D-14-00507.1.
19. Virtual interviewing. American Association of Colleges of Osteopathic Medicine. 2020. https://www.aacom.org/match/virtual-interviewing. Accessed 6 Dec 2020.
20. Fiechter JL, Fealing C, Gerrard R, Kornell N. Audiovisual quality impacts assessments of job candidates in video interviews: evidence for an AV quality bias. Cogn Res Princ Implic. 2018;3(1):47. 10.1186/s41235-018-0139-y.
21. Borkenau P, Brecke S, Möttig C, Paelecke M. Extraversion is accurately perceived after a 50-ms exposure to a face. J Res Pers. 2009;43(4):703–6. 10.1016/j.jrp.2009.03.007.
22. Hall JA, Horgan TG, Murphy NA. Nonverbal communication. Annu Rev Psychol. 2019;70(1):271–94. 10.1146/annurev-psych-010418-103145.
23. Guyer JJ, Fabrigar LR, Vaughan-Johnston TI. Speech rate, intonation, and pitch: investigating the bias and cue effects of vocal confidence on persuasion. Pers Soc Psychol B. 2019;45(3):389–405. 10.1177/0146167218787805.
24. Van Zant AB, Berger J. How the voice persuades. J Pers Soc Psychol. 2020;118(4):661–82. 10.1037/pspi0000193.
25. Jones RE, Abdelfattah KR. Virtual interviews in the era of COVID-19: a primer for applicants. J Surg Educ. 2020;77(4):733–4. 10.1016/j.jsurg.2020.03.020.
26. McKinley S, Fong Z, Udelsman B, Rickert C. Successful virtual interviews: perspectives from recent surgical fellowship applicants and advice for both applicants and programs. Ann Surg. 2021. 10.1097/SLA.0000000000004172.
27. Tassinari S, Perez LC, La Riva A, Sayegh AS, Ullrich P, Joshi C. Virtual residency interviews: what variables can applicants control? Cureus. 2021;13(5):e14938. 10.7759/cureus.14938.
28. Lee TC, McKinley SK, Dream SY, Grubbs EG, Dissanaike S, Fong ZV. Pearls and pitfalls of the virtual interview: perspectives from both sides of the camera. J Surg Res. 2021;262:240–3. 10.1016/j.jss.2020.12.052.
29. Hewett L, Lewis M, Collins H, Gordon L. Gender bias in diagnostic radiology resident selection, does it exist? Acad Radiol. 2016;23(1):101–7. 10.1016/j.acra.2015.10.018.
30. Kahneman D. Evaluation by moments: past and future. In: Kahneman D, Tversky A, editors. Choices, values, and frames. Cambridge University Press; 2000. p. 693–708. 10.1017/CBO9780511803475.039.
31. Fredrickson BL, Kahneman D. Duration neglect in retrospective evaluations of affective episodes. J Pers Soc Psychol. 1993;65(1):45–55. 10.1037/0022-3514.65.1.45.
32. Redelmeier DA, Kahneman D. Patients’ memories of painful medical treatments: real-time and retrospective evaluations of two minimally invasive procedures. Pain. 1996;66(1):3–8. 10.1016/0304-3959(96)02994-6.