Published in final edited form as: Am J Med Genet C Semin Med Genet. 2023 Feb 7;193(3):e32035. doi: 10.1002/ajmg.c.32035

Privacy, bias and the clinical use of facial recognition technology: A survey of genetics professionals

Elias Aboujaoude 1,*, Janice Light 2, Julia EH Brown 3, W John Boscardin 4, Benedikt Hallgrímsson 5, Ophir D Klein 2,6,7,*

Abstract

Purpose:

Facial recognition technology (FRT) has been adopted as a precision medicine tool. The medical genetics field highlights both the clinical potential and privacy risks of this technology, putting the discipline at the forefront of a new digital privacy debate. Investigating how geneticists perceive the privacy concerns surrounding FRT can help shape the evolution and regulation of the field and provide lessons for medicine and research more broadly.

Methods:

A total of 562 genetics clinicians and researchers were invited to complete a survey; 105 responded, and 80% of respondents completed it. The survey consisted of 48 questions covering demographics, relationship to new technologies, views on privacy, views on FRT, and views on regulation.

Results:

Genetics professionals generally placed a high value on privacy, although specific views differed, were context-specific, and covaried with demographic factors. Most respondents (88%) agreed that privacy is a basic human right, but only 37% placed greater weight on it than other values such as freedom of speech. Most respondents (80%) supported FRT use in genetics, but not necessarily for broader clinical use. A sizeable percentage (39%) were unaware of FRT’s lower accuracy rates in marginalized communities and of the mental health effects of privacy violations (62%), but most (76% and 75%, respectively) expressed concern when informed. Overall, women and those who self-identified as politically progressive were more concerned about the lower accuracy rates in marginalized groups (88% vs 64% and 83% vs 63%, respectively). Younger geneticists were more wary than older geneticists about using FRT in genetics (28% compared to 56% “strongly” supported such use). There was an overall preference for more regulation, but respondents had low confidence in governments’ or technology companies’ ability to accomplish this.

Conclusion:

Privacy views are nuanced and context dependent. Support for privacy was high but not absolute, and clear deficits existed in awareness of crucial FRT-related discrimination potential and mental health impacts. Education and professional guidelines may help to evolve views and practices within the field.

Introduction

The application of machine learning to large repositories of biometric information promises to improve diagnostic power and bring about individualized, cost-effective, and timely medical interventions. However, as medicine increasingly relies on “Big Data” and data sharing for diagnostic accuracy, digital privacy has become increasingly challenging to protect, with myriad psychological, legal, and ethical ramifications. Clinical genetics, already transformed by the use of DNA banks, is now adopting facial recognition technology (FRT) tools for screening and diagnostic purposes (Hsieh et al., 2022). The National Institutes of Health’s Facebase repository, for example, hosts more than 7,000 facial images, representing hundreds of rare syndromes (Hallgrímsson et al., 2020).

Broadly speaking, FRT is the use of quantitative analysis of facial morphology, from either 2D or 3D images, to identify individual faces. Clinical genetics applies these techniques somewhat differently: the goal is not to identify an individual, but rather to use quantitative analysis of facial features to obtain information of diagnostic value. Machine-learning analyses of two-dimensional images of patients with known molecular conditions have shown significant potential to aid diagnosis (Gurovich et al., 2019; Porras et al., 2021), even for syndromes not included in the training set (Hsieh et al., 2022). Such analyses rely largely on automatically placed landmarks, but texture information or pixel-based approaches of the kind routinely used in FRT are also feasible for analyzing syndromic variation in two-dimensional images. Three-dimensional photogrammetry is becoming more accessible and easier to apply (Staller et al., 2022). Analyses of three-dimensional facial images have shown high accuracy for syndrome diagnosis (Bannister, Wilms, Aponte, Katz, Klein, Bernier, Spritz, Hallgrímsson, et al., 2022; Hallgrímsson et al., 2020; Matthews et al., 2022), particularly for distinguishing syndromic from non-syndromic individuals (Bannister, Wilms, Aponte, Katz, Klein, Bernier, Spritz, Hallgrímsson, et al., 2022; Hallgrímsson et al., 2020). Three-dimensional analyses capture facial shape variation more completely, which improves the utility of such images for clinical diagnosis and yields data that can be more readily connected to imaging results from animal models of disease (Naqvi et al., 2022). Although not intended to identify individual patients, these techniques rely on many of the same high-dimensional data ordination and feature detection methods that form the core of FRT.

More generally, FRT has been described as “an ethical emergency requiring urgent global attention” (Almeida et al., 2022). Its use has been fraught with controversy around privacy concerns, which are difficult to regulate, and far-reaching social justice issues (Raji et al., 2020). Among the challenges is that clinical and research databases often contain “multi-omic” information connecting multiple types of data, increasing the possibility of stakeholder disagreement about what is considered relevant to health and, therefore, warrants protection on health privacy grounds (Dupras & Bunnik, 2021).

Privacy laws also vary geographically and across contexts. The U.S. Health Insurance Portability and Accountability Act (HIPAA) limits privacy protection to data in healthcare settings but does not cover academia and industry (McGraw & Mandl, 2021). State laws, such as the California Consumer Privacy Act, may offer protection, but informed consent often takes the form of opt-out options (California Consumer Privacy Act of 2018, n.d.) that can lack transparency (Deverka et al., 2019; Kulynych & Greely, 2017). At the national level, there are no U.S. laws to protect against privacy violations from FRT use by law enforcement (Almeida et al., 2022).

In the European Union, several mechanisms are in place to protect privacy, including the European Convention on Human Rights and the General Data Protection Regulation (GDPR) (Council of Europe, 1950; Council of the European Union, 2016). However, protections specific to FRT are not clear.

The cost of privacy violations is high, for both individuals and society. Privacy has been shown to mediate several important psychological functions (Aboujaoude, 2019), which could be compromised by the use of FRT in clinical and research settings. Further, in a study of FRT involving about 8.5 million people, algorithms misidentified Black and Asian individuals more frequently than white individuals, older adults more than middle-aged adults, and females more than males (Grother et al., 2019). Women and communities of color bear the heaviest burden of FRT-related bias (Buolamwini & Gebru, 2018), in part because marginalized communities are less likely to be able to address the consequences of misdiagnosis or misidentification, to choose how their data are used, or to seek legal or other help to redress mistakes and social biases (Marwick & Boyd, 2018).

The privacy, fallibility, psychology, and oversight of FRT are highly relevant issues in medicine that must be weighed alongside its clinical potential, and medical genetics is at the forefront of this debate. Genetically related individuals may risk being diagnosed or identified as “at-risk” without consent, for example, and familial relations may be revealed without approval. Similarly, cold forensic cases can be reopened. The risk of these scenarios may increase when genetic testing occurs outside of clinical contexts and without medical guidance, for example via consumer-level DNA testing kits that are now popular holiday gifts (de Groot et al., 2021). Given the limits of informed consent, transparency, and regulation, and the widening disparities associated with digital health databases (McGraw & Mandl, 2021), it is critical to explore how genetics professionals perceive privacy concerns around FRT.

This study set out to investigate how genetics professionals view digital privacy across individual, societal, and governance domains. Our aims were: 1) To assess views about privacy and emerging technologies, including FRT, amongst genetics professionals; 2) to compare views across domains and demographic variables; and 3) to scope expectations for future regulation and privacy protections concerning FRT.

Methods

This study was reviewed and certified as exempt under 45 CFR 46.104(d)(2) by the University of California, San Francisco Institutional Review Board. We obtained implied consent from all respondents as required by the IRB and complied with all relevant ethical regulations. The data were de-identified and analyzed in aggregate.

We identified 562 individuals using publicly available online directories of genetics professionals in the U.S., including online institutional faculty directories as well as directories from the American College of Medical Genetics and Genomics and the American Society of Human Genetics, and contacted them via email. The inclusion criterion was professional status in genetics as a clinician, researcher, or clinician-researcher. Individuals were recruited between August 17, 2021, and November 1, 2021. Surveys were conducted via Qualtrics, a web-based survey platform. Each potential participant received a personalized survey link via a maximum of three emails: an invitation email and one or two reminder emails sent as needed at 10 and 20 days thereafter. Of the 105 respondents, 11 did not reach the end of the survey and 10 were excluded as they self-identified as administrators, resulting in a final sample size of 84 respondents (19% response rate, 80% completion rate).

The survey consisted of 48 questions covering five general areas: demographic characteristics (age group, gender, highest education level, professional base, professional role, political views); relationships to new technologies (early adoption, presence and level of sharing on social media, use of privacy features); views on privacy (the right to privacy, the possibility of privacy in the digital age, attitudes toward landmark decisions on online privacy, privacy vs. other values, effects of violations on mental health); views on facial recognition (use in clinical settings, use in non-clinical settings, use in law enforcement, comparison with other biometric data, comparison with DNA databanks, FRT-related race and gender biases, and COVID-19 effects); and views on regulation.

No personally identifying information was collected in the questionnaire, and the survey was set to anonymize responses (no IP address, location data, or contact information were recorded). Survey completers were sent a $30 electronic gift card after survey completion.

Questions that addressed similar topics were combined, and each response was assigned a point value to create a composite scaled score that reflects respondents’ overall views on the topic. As such, three composite scales were created for the topics of privacy, the use of facial imaging data, and regulation. The specific survey questions included under each composite scale are shown in Table 1. Responses to every question under a particular composite were separated into three categories and assigned a point value of 1, 2, or 3 (1 reflects a negative viewpoint; 2, neutral; 3, positive). The point values were then summed to create a numeric value, or composite scaled score. Scores on the privacy composite scale ranged from 3 to 9, where 9 reflects the highest value placed on privacy. Scores on the facial recognition composite scale ranged from 3 to 9, where 9 reflects the strongest expression of concern about using facial recognition. Scores on the regulation composite scale ranged from 5 to 15, where 15 reflects the most favorable view of regulation.
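As an illustration, the composite scoring could be implemented as in the following minimal sketch. This is not the authors’ code; the question keys, answer wordings, and point mappings are simplified stand-ins for the full mappings listed in Table 1, using the privacy composite as the example.

```python
# Illustrative sketch of composite scoring: each answer is mapped to
# 1 (negative), 2 (neutral), or 3 (positive), and the points are summed.
# Question keys and answer wordings are simplified stand-ins for Table 1.

PRIVACY_POINTS = {
    "Q16": {"strongly agree": 3, "moderately agree": 3,
            "neither agree nor disagree": 2,
            "moderately disagree": 1, "strongly disagree": 1},
    "Q18": {"strongly agree": 3, "moderately agree": 3,
            "neither agree nor disagree": 2,
            "moderately disagree": 1, "strongly disagree": 1},
    "Q23": {"right to privacy": 3, "all three equally important": 2,
            "right to free speech": 1, "public's right to know": 1},
}

def privacy_composite(responses: dict) -> int:
    """Sum the per-question points for one respondent (range 3-9)."""
    return sum(PRIVACY_POINTS[q][responses[q]] for q in PRIVACY_POINTS)

# A respondent who strongly agrees with Q16 and Q18 and prioritizes
# privacy in Q23 receives the maximum privacy composite score of 9.
print(privacy_composite({"Q16": "strongly agree",
                         "Q18": "strongly agree",
                         "Q23": "right to privacy"}))  # -> 9
```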

Table 1.

Composite scales on privacy, use of facial imaging data and regulation

Privacy
Question Answer Point Score
Q16: Please choose the option that best describes your opinion about the following statement: “Privacy is a basic human right”. I strongly agree with it
I moderately agree with it
3
I neither agree nor disagree with it 2
I moderately disagree with it
I strongly disagree with it
1
Q18: The “right to be forgotten” became an established right in the European Union in 2014. It refers to a person’s right to have links to their personal information removed from Google and other Internet search engines. How do you feel about such a law? I strongly agree with it
I moderately agree with it
3
I neither agree nor disagree with it 2
I moderately disagree with it
I strongly disagree with it
1
Q23: Cases of privacy violations have often pitted an individual’s right to privacy against the right to free speech of the person sharing the private information and the public’s right to know. Which do you think should carry more weight? The person’s right to privacy 3
The right to free speech of the person sharing the private information
The public’s right to know
1
All three are equally important 2
Use of Facial Imaging Data
Question Answer Point Score
Q24: Biometric data can be used to digitally identify a person. Examples of biometric data include fingerprints, DNA, retinal scans, iris scans, voice recognition, and facial recognition. How do the privacy risks involved in using facial recognition tools compare with those of other examples of biometric data? The privacy risks with facial recognition are less than with other biometric data 1
The privacy risks with facial recognition are about the same as with other biometric data 2
The privacy risks with facial recognition are more than with other biometric data 3
It is impossible to tell at this stage “N/A”
Q25: What best describes your view on this statement: “Compared to genetic profiling, facial recognition raises more privacy concerns because facial images may be more easily obtained than a DNA sample.” I strongly agree with it
I moderately agree with it
3
I neither agree nor disagree with it 2
I moderately disagree with it
I strongly disagree with it
1
Q49: Behavioral biometric data are increasingly used to distinguish humans from robots (“bots”) and may become a way to digitally identify a person. Examples of behavioral biometric data include typing cadence, mouse movements, finger movements on trackpads, and how users engage with apps and websites. How do you think the privacy risks of facial recognition compare with those of behavioral biometric data? The privacy risks with facial recognition are less than with behavioral biometric data. 1
The privacy risks with facial recognition are about the same as with behavioral biometric data. 2
The privacy risks with facial recognition are more than with behavioral biometric data. 3
Regulation
Question Answer Point Score
Q18: The “right to be forgotten” became an established right in the European Union in 2014. It refers to a person’s right to have links to their personal information removed from Google and other Internet search engines. How do you feel about such a law? I strongly agree with it
I moderately agree with it
3
I neither agree nor disagree with it 2
I moderately disagree with it
I strongly disagree with it
1
Q19: Section 230 of the US Communications Decency Act of 1996 protects online platforms from liability based on content posted on them by users. How do you feel about this law? I strongly agree with it
I moderately agree with it
1
I neither agree nor disagree with it 2
I moderately disagree with it
I strongly disagree with it
3
Q22: DNA profiling has been used to diagnose diseases and solve crimes but also for political surveillance. What best reflects your view? I am not concerned about DNA databases being misused because I live in a democracy. 1
Some misuse of DNA databases is unavoidable, but on balance a lot more benefit than harm has come from DNA databases.
I support using DNA databases with strict regulation on who can access them, how long information can be stored, and what the information can be used for.
2
I do not support using DNA databases for any purposes as the potential harms outweigh the benefits. 3
Q38: What do you think should be done about the fact that some facial recognition tools have shown lower accuracy rates in minorities and women? This should cause us to significantly slow down the development of clinical facial recognition tools until it is resolved. 3
This is a problem that we should address while we continue to develop clinical facial recognition tools. 2
This problem will resolve itself as the technology naturally gets better over time. 1
Q48: Amazon, IBM, Google and Microsoft have all called for, or implemented, a moratorium on facial recognition technology. How confident are you that technology companies can regulate themselves when it comes to the use of facial recognition tools? Very confident 1
Moderately confident
Slightly confident
2
Not confident at all 3

For a more nuanced view of the data, we further grouped survey questions into themes and collapsed answers into dichotomous profiles for each theme. Eleven themes were created from the survey questions, giving us insight into how respondents felt about 1) support for FRT in genetics; 2) support for clinical use of FRT more broadly; 3) concern about non-clinical uses of FRT; 4) concern about mental health effects of FRT; 5) concern about discrimination from FRT; 6) concern about abuse of FRT by commercial entities; 7) concern about the use of FRT in surveillance; 8) concern about the use of FRT by law enforcement for minor crimes; 9) concern about the use of FRT by law enforcement for major crimes; 10) awareness of discrimination by FRT; and 11) awareness of the impact of privacy violations on mental health. To develop respondent profiles, those who held “moderately” or “strongly” positive views were sorted into one profile, and those who held “neutral”, “moderately” negative, or “strongly” negative views were sorted into the other. The specific survey questions included under each theme are shown in Table 2.
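A minimal sketch of this dichotomization step is shown below. This is not the authors’ code, and the answer wordings are illustrative; responses in the “moderately”/“strongly” positive (or concerned) range are collapsed into one profile and all remaining responses into the other.

```python
# Illustrative dichotomization: collapse 4- or 5-level answers into two
# profiles, as described above. Answer wordings are simplified stand-ins.

POSITIVE_ANSWERS = {
    "strongly support", "moderately support",   # support items
    "very concerned", "moderately concerned",   # concern items
    "very informed", "moderately informed",     # awareness items
}

def profile(answer: str) -> str:
    """Assign an answer to the positive profile or the neutral/negative one."""
    return "moderate/strong" if answer.lower() in POSITIVE_ANSWERS else "neutral/negative"

print(profile("Moderately support"))                # -> moderate/strong
print(profile("Neither support it nor oppose it"))  # -> neutral/negative
```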

Table 2.

Dichotomous Themed Questions

Support for FRT use in genetics specifically
Q30: Differences in facial features are associated with certain diseases and syndromes. What do you think about using computer-aided facial recognition to diagnose diseases based on facial shape and features? I strongly support it
I moderately support it
I neither support it nor oppose it
I moderately oppose it
I strongly oppose it
Support for clinical use of FRT more broadly
Q28: How do you feel about using facial recognition to identify conscious patients in medical settings, such as to check them into clinics? (High to low directionality, 5 point agree-disagree) I strongly support it
I moderately support it
I neither support it nor oppose it
I moderately oppose it
I strongly oppose it
Q29: How do you feel about using facial recognition to identify unconscious patients in medical settings I strongly support it
I moderately support it
I neither support it nor oppose it
I moderately oppose it
I strongly oppose it
Concern about non-clinical uses of FRT
Q26: How concerned are you about using facial recognition to unlock personal devices, such as an iPhone? Very concerned
Moderately concerned
Slightly concerned
Not concerned at all
Q27: How concerned are you about using facial recognition to open personal locks, such as to one’s home? Very concerned
Moderately concerned
Slightly concerned
Not concerned at all
Concern about mental health effects of FRT
Q21: How concerned are you about the negative effects on mental health of technology related privacy violations? Very concerned
Moderately concerned
Slightly concerned
Not concerned at all
Concern about discrimination from FRT
Q37: Knowing that some facial recognition tools have shown lower accuracy rates in minorities, women and older adults, how concerned are you about this? Very concerned
Moderately concerned
Slightly concerned
Not concerned at all
Concern about abuse of FRT
Q32: How concerned are you that medical insurance companies may use facial recognition tools to identify individuals with certain diseases and deny coverage or increase its cost based on this information? Very concerned
Moderately concerned
Slightly concerned
Not concerned at all
Q33: How concerned are you that commercial entities may use facial recognition tools to identify individuals with certain diseases and market specific products to them directly based on this? Very concerned
Moderately concerned
Slightly concerned
Not concerned at all
Concern about surveillance with FRT
Q39: Facial recognition has been used to scan crowds in concerts to generate tour promotion metrics. How concerned are you about this? Very concerned
Moderately concerned
Slightly concerned
Not concerned at all
Q40: Facial recognition has been used in shopping centers to measure visit frequency and time spent in stores. How concerned are you about this? Very concerned
Moderately concerned
Slightly concerned
Not concerned at all
Concern about FRT use by law enforcement for minor crimes
Q41: Facial recognition has been used by law enforcement to solve shoplifting cases. How concerned are you about this? Very concerned
Moderately concerned
Slightly concerned
Not concerned at all
Concern about FRT use by law enforcement for major crimes
Q42: Facial recognition has been used by law enforcement to solve murder cases. How concerned are you about this? Very concerned
Moderately concerned
Slightly concerned
Not concerned at all
Awareness of discrimination by FRT
Q34: Are you aware that some facial recognition tools have shown lower accuracy rates in minorities? Yes
No
Q35: Are you aware that some facial recognition tools have shown lower accuracy rates in women? Yes
No
Q36: Are you aware that some facial recognition tools have shown lower accuracy in older adults? Yes
No
Awareness of the impact of privacy violations on mental health
Q20: How informed do you consider yourself to be about the impact on mental health of privacy violations? Very informed
Moderately informed
Slightly informed
Not informed at all

In addition, we compared how respondents’ views varied across demographic features, namely age (18–50 vs. over 50 years old), gender (male vs. female), professional role (researchers vs. clinicians and clinician-researchers), and political leaning (independents and conservatives vs. progressives). Variation in responses between these groups is presented in Table 3.

Table 3.

Demographic Variation in Responses

Question Level Overall Male Female P-Value Clinician/Clinician-Researcher Researcher P-Value 18–50 >50 P-Value Progressive Independent/Conservative P-Value
Q2 What age group do you belong to? 18–30 1% 0% 2% 0.065 2% 0% 0.93 2% 0% <0.001 2% 0% 0.66
31–40 21% 11% 30% 20% 23% 40% 0% 21% 12%
41–50 31% 26% 35% 28% 34% 58% 0% 33% 25%
51–60 22% 26% 19% 26% 17% 0% 47% 21% 29%
61–70 16% 24% 9% 15% 17% 0% 34% 12% 25%
>70 9% 13% 5% 9% 9% 0% 18% 10% 8%
Q3 What gender do you identify with? Male 47% 47% 47% 1.00 33% 62% 0.008 43% 67% 0.081
Female 53% 53% 53% 67% 38% 57% 33%
Non-binary 0% 0% 0% 0% 0% 0% 0%
Q4 Please describe the highest education level you reached. Completed high school but did not graduate 0% 0% 0% 0% 0% 0% 0% 0% 0%
Graduated from high school 0% 0% 0% 0% 0% 0% 0% 0% 0%
Completed some college but did not graduate 0% 0% 0% 0% 0% 0% 0% 0% 0%
Graduated from college 0% 0% 0% 0% 0% 0% 0% 0% 0%
Completed an advanced degree (e.g., Master’s, MD, PhD) 100% 100% 100% 100% 100% 100% 100% 100% 100%
Q5 Where is your professional base? Africa 0% 0% 0% 0.22 0% 0% 0.18 0% 0% 0% 0%
Asia 2% 5% 0% 0% 6% 2% 2% 1.00 2% 4% 1.00
Central America 0% 100% 100% 0% 0% 0% 0% 0% 0%
Europe 0% 0% 0% 0% 0% 0% 0% 0% 0%
Middle East 0% 0% 0% 0% 0% 0% 0% 0% 0%
North America 98% 95% 100% 100% 94% 98% 98% 98% 96%
Oceania 0% 0% 0% 0% 0% 0% 0% 0% 0%
South America 0% 0% 0% 0% 0% 0% 0% 0% 0%
Caribbean 0% 0% 0% 0% 0% 0% 0% 0% 0%
Other 0% 0% 0% 0% 0% 0% 0% 0% 0%
Q6 What best describes your political views? Conservative 4% 5% 3% 0.16 7% 0% 0.060 6% 3% 0.19 0% 12% <0.001
Progressive 67% 57% 78% 57% 81% 75% 59% 100% 0%
Independent 29% 38% 43% 36% 19% 19% 38% 0% 88%
Q7 What best describes what you do professionally? I am predominantly a researcher 43% 44% 43% 0.68 47% 39% 0.71 51% 25% 0.092
I am predominantly a clinician 40% 36% 43% 40% 41% 31% 54%
I spend about the same amount of time doing research and treating patients 17% 21% 14% 14% 20% 18% 21%
Q8 In general, do you consider yourself an “early adopter” of new technologies? Yes 49% 59% 39% 0.061 52% 44% 0.72 49% 49% 0.085 43% 67% 0.16
No 10% 3% 16% 10% 8% 16% 2% 10% 8%
It depends on the product 42% 38% 45% 38% 47% 35% 49% 47% 25%
Q9 Are you active on social media (other than professional sites such as LinkedIn)? Yes 51% 44% 59% 0.19 33% 75% <0.001 60% 41% 0.13 57% 46% 0.46
No 49% 56% 41% 67% 25% 40% 59% 43% 54%
Q10 Do you share your accurate first and last name on your social media account? Yes 86% 82% 88% 0.67 81% 89% 0.65 81% 94% 0.38 93% 82% 0.56
No 14% 18% 12% 19% 11% 19% 6% 7% 18%
Q11 Do you share your date of birth or birthday on your social media account? Yes 28% 24% 31% 0.73 31% 26% 0.74 35% 18% 0.31 29% 18% 0.69
No 72% 76% 69% 69% 74% 65% 82% 71% 82%
Q12 Do you share your political views on your social media account? Yes 44% 35% 50% 0.37 56% 37% 0.34 46% 41% 1.00 46% 45% 1.00
No 56% 65% 50% 44% 63% 54% 59% 54% 55%
Q13 Do you share your religious beliefs on your social media account, if applicable? Yes 21% 29% 15% 0.44 19% 22% 1.00 23% 18% 1.00 18% 36% 0.24
No 79% 71% 85% 81% 78% 77% 82% 82% 64%
Q14 How aware are you of the privacy features offered by your social media account (e.g., limit viewing of your posts to approved contacts, limit targeted advertisements, limit access to location, etc.)? Very aware 14% 6% 19% 0.45 6% 19% 0.30 8% 24% 0.18 14% 18% 0.82
Moderately aware 63% 65% 62% 81% 52% 73% 47% 71% 64%
Slightly aware 21% 29% 15% 12% 26% 15% 29% 11% 18%
Not aware at all 2% 0% 4% 0% 4% 4% 0% 4% 0%
Q15 How much do you use the privacy features on your social media accounts? I always use them 24% 24% 24% 0.94 31% 19% 0.55 20% 29% 0.010 26% 27% 1.00
I use them most of the time 40% 35% 44% 44% 38% 60% 12% 41% 36%
I use them some of the time 17% 18% 16% 6% 23% 8% 29% 19% 18%
I rarely use them 19% 24% 16% 19% 19% 12% 29% 15% 18%
Q16 Please choose the option that best describes your opinion about the following statement: “Privacy is a basic human right”. I strongly agree with this statement 51% 49% 51% 0.61 50% 51% 0.47 51% 50% 0.86 52% 46% 0.62
I moderately agree with this statement 37% 41% 35% 42% 31% 40% 35% 38% 42%
I neither agree nor disagree with this statement 11% 8% 14% 8% 14% 9% 12% 10% 8%
I moderately disagree with this statement 1% 3% 0% 0% 3% 0% 2% 0% 4%
I strongly disagree with this statement 0% 0% 0% 0% 0% 0% 0% 0% 0%
Q17 Please choose the option that best describes your opinion about the following statement: “New technologies have made it very difficult to protect privacy.” I strongly agree with this statement 38% 41% 36% 0.52 31% 47% 0.16 28% 49% 0.083 51% 25% 0.008
I moderately agree with this statement 45% 44% 45% 48% 42% 58% 32% 43% 42%
I neither agree nor disagree with this statement 13% 10% 16% 19% 6% 12% 15% 6% 25%
I moderately disagree with this statement 2% 5% 0% 2% 3% 2% 2% 0% 8%
I strongly disagree with this statement 1% 0% 2% 0% 3% 0% 2% 0% 0%
Q18 The “right to be forgotten” became an established right in the European Union in 2014. It refers to a person’s right to have links to their personal information removed from Google and other Internet search engines. How do you feel about such a law? I strongly agree with this statement 39% 33% 45% 0.61 35% 44% 0.088 40% 39% 0.50 55% 17% 0.009
I moderately agree with this statement 37% 41% 32% 40% 33% 42% 32% 24% 54%
I neither agree nor disagree with this statement 14% 13% 16% 21% 6% 12% 17% 10% 21%
I moderately disagree with this statement 8% 10% 7% 4% 14% 5% 12% 8% 8%
I strongly disagree with this statement 1% 3% 0% 0% 3% 2% 0% 2% 0%
Q19 Section 230 of the US Communications Decency Act of 1996 protects online platforms from liability based on content posted on them by users. How do you feel about this law? I strongly agree with this statement 8% 13% 5% 0.33 8% 8% 0.029 12% 5% 0.13 4% 21% 0.12
I moderately agree with this statement 12% 15% 9% 4% 22% 16% 7% 16% 4%
I neither agree nor disagree with this statement 31% 23% 39% 42% 17% 37% 24% 24% 33%
I moderately disagree with this statement 36% 38% 32% 31% 42% 26% 46% 41% 33%
I strongly disagree with this statement 13% 10% 16% 15% 11% 9% 17% 14% 8%
Q20 How informed do you consider yourself to be about the impact on mental health of privacy violations? Very informed 5% 8% 2% 0.44 4% 6% 0.70 5% 5% 0.45 2% 12% 0.29
Moderately informed 33% 38% 30% 38% 28% 26% 41% 37% 38%
Slightly informed 44% 36% 50% 44% 44% 49% 39% 47% 33%
Not informed at all 18% 18% 18% 15% 22% 21% 15% 14% 17%
Q21 How concerned are you about the negative effects on mental health of technology related privacy violations? Very concerned 20% 31% 11% 0.004 25% 14% 0.57 23% 17% 0.45 16% 29% 0.098
Moderately concerned 55% 36% 73% 54% 56% 51% 59% 59% 42%
Slightly concerned 23% 31% 14% 19% 28% 26% 20% 24% 21%
Not concerned at all 2% 3% 2% 2% 3% 0% 5% 0% 8%
Q22 DNA profiling has been used to diagnose diseases and solve crimes but also for political surveillance. What best reflects your view? I am not concerned about DNA databases being misused because I live in a democracy. 1% 3% 0% 0.15 2% 0% 0.50 2% 0% 0.95 0% 4% 0.32
Some misuse of DNA databases is unavoidable, but on balance a lot more benefit than harm has come from DNA databases. 25% 28% 20% 21% 31% 23% 27% 24% 33%
I support using DNA databases with strict regulation on who can access them, how long information can be stored, and what the information can be used for. 71% 64% 80% 73% 69% 72% 71% 71% 62%
I do not support using DNA databases for any purposes as the potential harms outweigh the benefits. 2% 5% 0% 4% 0% 2% 2% 4% 0%
Q23 Cases of privacy violations have often pitted an individual’s right to privacy against the right to free speech of the person sharing the private information and the public’s right to know. Which do you think should carry more weight? The person’s right to privacy 37% 41% 32% 0.37 35% 39% 0.73 35% 39% 0.47 35% 42% 0.69
The right to free speech of the person sharing the private information 5% 3% 7% 6% 3% 5% 5% 2% 4%
The public’s right to know 4% 0% 7% 2% 6% 7% 0% 6% 0%
All three are equally important 55% 56% 55% 56% 53% 53% 56% 57% 54%
Q24 Biometric data can be used to digitally identify a person. Examples of biometric data include fingerprints, DNA, retinal scans, iris scans, voice recognition, and facial recognition. How do the privacy risks involved in using facial recognition tools compare with those of other examples of biometric data? The privacy risks with facial recognition are less than with other biometric data 5% 5% 5% 1.00 8% 0% 0.012 5% 5% 0.38 2% 8% 0.20
The privacy risks with facial recognition are about the same as with other biometric data 42% 41% 41% 52% 28% 33% 51% 41% 58%
The privacy risks with facial recognition are more than with other biometric data 35% 36% 34% 23% 50% 40% 29% 45% 25%
It is impossible to tell at this stage 19% 18% 20% 17% 22% 23% 15% 12% 8%
Q25 What best describes your view on this statement: I strongly agree with it 27% 28% 27% 0.96 19% 39% 0.18 26% 29% 0.58 35% 25% 0.13
“Compared to genetic profiling, facial recognition raises more privacy concerns because facial images may be more easily obtained than a DNA sample.” I moderately agree with it 43% 41% 43% 46% 39% 49% 37% 45% 33%
I neither agree nor disagree with it 21% 21% 23% 27% 14% 16% 27% 10% 33%
I moderately disagree with it 8% 10% 7% 8% 8% 9% 7% 10% 8%
I strongly disagree with it 0% 0% 0% 0.85 0% 0% 0% 0% 0% 0%
Q26 How concerned are you about using facial recognition to unlock personal devices, such as an iPhone? Very concerned 10% 8% 11% 0.33 6% 14% 0.66 9% 10% 0.17 8% 12% 0.79
Moderately concerned 25% 18% 32% 25% 25% 35% 15% 27% 17%
Slightly concerned 38% 46% 30% 42% 33% 30% 46% 37% 42%
Not concerned at all 27% 28% 27% 27% 28% 26% 29% 29% 29%
Q27 How concerned are you about using facial recognition to open personal locks, such as to one’s home? Very concerned 15% 13% 18% 0.69 10% 22% 0.53 19% 12% 0.20 18% 12% 0.36
Moderately concerned 36% 31% 39% 40% 31% 44% 27% 33% 33%
Slightly concerned 29% 33% 25% 29% 28% 21% 37% 33% 21%
Not concerned at all 20% 23% 18% 21% 19% 16% 24% 16% 33%
Q28 How do you feel about using facial recognition to identify conscious patients in medical settings, such as to check them into clinics? I strongly support it 2% 5% 0% 0.30 0% 6% 0.012 0% 5% 0.043 2% 4% 0.33
I moderately support it 23% 21% 25% 15% 33% 28% 17% 27% 17%
I neither support it nor oppose it 43% 51% 36% 56% 25% 33% 54% 43% 50%
I moderately oppose it 25% 18% 30% 25% 25% 35% 15% 18% 29%
I strongly oppose it 7% 5% 9% 4% 11% 5% 10% 10% 0%
Q29 How do you feel about using facial recognition to identify unconscious patients in medical settings? I strongly support it 17% 21% 14% 0.85 12% 22% 0.13 14% 20% 0.57 14% 25% 0.75
I moderately support it 43% 46% 41% 46% 39% 44% 41% 45% 46%
I neither support it nor oppose it 21% 18% 25% 29% 11% 19% 24% 22% 21%
I moderately oppose it 13% 10% 14% 8% 19% 19% 7% 12% 4%
I strongly oppose it 6% 5% 7% 4% 8% 5% 7% 6% 4%
Q30 Differences in facial features are associated with certain diseases and syndromes. What do you think about using computer-aided facial recognition to diagnose diseases based on facial shape and features? I strongly support it 42% 44% 39% 0.43 44% 39% 0.17 28% 56% 0.026 33% 58% 0.059
I moderately support it 38% 44% 34% 33% 44% 51% 24% 41% 38%
I neither support it nor oppose it 13% 8% 18% 19% 6% 16% 10% 18% 0%
I moderately oppose it 5% 5% 5% 4% 6% 2% 7% 4% 4%
I strongly oppose it 2% 0% 5% 0% 6% 2% 2% 4% 0%
Q31 Facebase is a National Institutes of Health-supported library of more than 5,000 three-dimensional facial images of patients with rare genetic disorders that has been used to develop diagnostic algorithms. How familiar are you with Facebase? Very familiar 5% 5% 5% 0.92 0% 11% 0.001 2% 7% 0.34 8% 0% 0.57
Moderately familiar 25% 26% 25% 27% 22% 19% 32% 22% 29%
Slightly familiar 25% 21% 27% 38% 8% 28% 22% 20% 25%
Not familiar at all 45% 49% 43% 35% 58% 51% 39% 49% 46%
Q32 How concerned are you that medical insurance companies may use facial recognition tools to identify individuals with certain diseases and deny coverage or increase its cost based on this information? Very concerned 27% 28% 27% 0.24 27% 28% 0.65 33% 22% 0.024 27% 29% 0.35
Moderately concerned 25% 26% 25% 21% 31% 19% 32% 29% 17%
Slightly concerned 35% 25% 41% 40% 28% 44% 24% 35% 29%
Not concerned at all 13% 21% 7% 12% 14% 5% 22% 10% 25%
Q33 How concerned are you that commercial entities may use facial recognition tools to identify individuals with certain diseases and market specific products to them directly based on this? Very concerned 32% 33% 32% 0.66 33% 31% 0.61 35% 29% 0.73 31% 29% 0.29
Moderately concerned 29% 23% 34% 27% 31% 26% 32% 31% 29%
Slightly concerned 30% 31% 27% 33% 25% 33% 27% 33% 21%
Not concerned at all 10% 13% 7% 6% 14% 7% 12% 6% 21%
Q34 Are you aware that some facial recognition tools have shown lower accuracy rates in minorities? Yes 86% 87% 84% 0.76 92% 78% 0.11 84% 88% 0.76 88% 92% 1.00
No 14% 13% 16% 8% 22% 16% 12% 12% 8%
Q35 Are you aware that some facial recognition tools have shown lower accuracy rates in women? Yes 40% 36% 45% 0.50 46% 33% 0.27 37% 44% 0.66 41% 50% 0.62
No 60% 64% 55% 54% 67% 63% 56% 59% 50%
Q36 Are you aware that some facial recognition tools have shown lower accuracy in older adults? Yes 51% 51% 52% 1.00 58% 42% 0.19 44% 59% 0.20 49% 67% 0.21
No 49% 49% 48% 42% 58% 56% 41% 51% 33%
Q37 Knowing that some facial recognition tools have shown lower accuracy rates in minorities, women and older adults, how concerned are you about this? Very concerned 40% 31% 49% 0.055 30% 53% 0.21 45% 34% 0.38 50% 17% 0.028
Moderately concerned 36% 33% 40% 43% 28% 38% 34% 33% 46%
Slightly concerned 14% 21% 9% 17% 11% 12% 17% 10% 21%
Not concerned at all 10% 15% 2% 11% 8% 5% 15% 6% 17%
Q38 What do you think should be done about the fact that some facial recognition tools have shown lower accuracy rates in minorities and women? This should cause us to significantly slow down the development of clinical facial recognition tools until it is resolved. 24% 18% 30% 0.11 25% 22% 0.89 35% 12% 0.049 29% 8% 0.13
This is a problem that we should address while we continue to develop clinical facial recognition tools. 64% 64% 66% 65% 64% 56% 73% 61% 75%
This problem will resolve itself as the technology naturally gets better over time. 12% 18% 5% 10% 14% 9% 15% 10% 17%
Q39 Facial recognition has been used to scan crowds in concerts to generate tour promotion metrics. How concerned are you about this? Very concerned 31% 23% 39% 0.44 23% 42% 0.17 28% 34% 0.15 35% 17% 0.33
Moderately concerned 30% 31% 30% 29% 31% 35% 24% 31% 29%
Slightly concerned 31% 36% 25% 40% 19% 35% 27% 27% 42%
Not concerned at all 8% 10% 7% 8% 8% 2% 15% 8% 12%
Q40 Facial recognition has been used in shopping centers to measure visit frequency and time spent in stores. How concerned are you about this? Very concerned 36% 23% 48% 0.071 29% 44% 0.55 37% 34% 0.052 41% 21% 0.089
Moderately concerned 27% 28% 27% 29% 25% 37% 17% 24% 29%
Slightly concerned 29% 38% 18% 31% 25% 23% 34% 31% 29%
Not concerned at all 8% 10% 7% 10% 6% 2% 15% 4% 21%
Q41 Facial recognition has been used by law enforcement to solve shoplifting cases. How concerned are you about this? Very concerned 17% 10% 23% 0.055 8% 28% 0.14 23% 10% 0.13 20% 4% 0.12
Moderately concerned 25% 21% 30% 27% 22% 28% 22% 29% 25%
Slightly concerned 37% 51% 23% 40% 33% 26% 49% 35% 33%
Not concerned at all 21% 18% 25% 25% 17% 23% 20% 16% 38%
Q42 Facial recognition has been used by law enforcement to solve murder cases. How concerned are you about this? Very concerned 12% 3% 20% 0.043 4% 22% 0.065 19% 5% 0.032 16% 4% 0.053
Moderately concerned 23% 21% 25% 21% 25% 26% 20% 27% 8%
Slightly concerned 35% 44% 25% 40% 28% 21% 49% 33% 38%
Not concerned at all 31% 33% 30% 35% 25% 35% 27% 24% 50%
Q43 Facial recognition has been used to identify protesters at anti-government protests. How concerned are you about this? Very concerned 49% 44% 55% 0.69 40% 61% 0.29 47% 51% 0.97 51% 38% 0.016
Moderately concerned 23% 23% 23% 27% 17% 23% 22% 31% 17%
Slightly concerned 21% 26% 16% 25% 17% 23% 20% 18% 29%
Not concerned at all 7% 8% 7% 8% 6% 7% 7% 0% 17%
Q44 There are reports that facial recognition can be fooled by 3-D photos. How concerned are you about this? Very concerned 29% 18% 39% 0.22 31% 25% 0.38 33% 24% 0.027 31% 17% 0.14
Moderately concerned 33% 38% 30% 27% 42% 44% 22% 39% 25%
Slightly concerned 32% 36% 27% 28% 25% 21% 44% 27% 50%
Not concerned at all 6% 8% 5% 4% 8% 2% 10% 4% 8%
Q45 The widespread use of facial masks since the COVID-19 pandemic may limit the testing and use of facial recognition technology. What statement best represents your view on this? That’s a good thing. Facial recognition has been developing too fast and before we can fully understand its potential disadvantages. 41% 28% 53% 0.015 38% 44% 0.88 51% 30% 0.086 46% 21% 0.026
That’s too bad, because this will slow down the progress of a promising technology. 18% 15% 21% 19% 17% 19% 18% 10% 33%
The COVID-19 pandemic will ultimately have no effect on facial recognition technology. 41% 56% 26% 43% 39% 30% 52% 44% 46%
Q46 Please choose the option that best describes your opinion about the following statement: “It has become almost impossible to protect privacy anyway, so resisting facial recognition would be pointless.” I strongly agree with this statement 6% 8% 5% 0.40 6% 6% 0.031 0% 12% 0.19 0% 17% 0.027
I moderately agree with this statement 18% 26% 11% 21% 14% 16% 20% 18% 25%
I neither agree nor disagree with this statement 25% 18% 30% 35% 11% 26% 24% 27% 17%
I moderately disagree with this statement 39% 38% 41% 31% 50% 44% 34% 37% 38%
I strongly disagree with this statement 12% 10% 14% 6% 19% 14% 10% 18% 4%
Q47 In June 2020, US lawmakers introduced a bill that would make it illegal for any federal agency or official to “acquire, possess, access, or use” biometric technology, including facial recognition tools. How confident are you that governments can adequately regulate the use of facial technology tools? Very confident 1% 0% 2% 0.14 0% 3% 0.70 0% 2% 0.064 0% 0%
Moderately confident 18% 26% 9% 17% 19% 16% 20% 12% 21% 0.53
Slightly confident 35% 28% 41% 33% 36% 47% 22% 35% 38%
Not confident at all 46% 46% 48% 50% 42% 37% 56% 53% 42%
Q48 Amazon, IBM, Google and Microsoft have all called for, or implemented, a moratorium on facial recognition technology. How confident are you that technology companies can regulate themselves when it comes to the use of facial recognition tools? Very confident 1% 3% 0% 0.86 0% 3% 0.11 0% 2% 0.62 0% 4% 0.078
Moderately confident 8% 8% 9% 10% 6% 12% 5% 2% 12%
Slightly confident 30% 28% 32% 38% 19% 30% 29% 31% 33%
Not confident at all 61% 62% 59% 52% 72% 58% 63% 67% 50%
Q49 Behavioral biometric data are increasingly used to distinguish humans from robots (“bots”) and may become a way to digitally identify a person. Examples of behavioral biometric data include typing cadence, mouse movements, finger movements on trackpads, and how users engage with apps and websites. How do you think the privacy risks of facial recognition compare with those of behavioral biometric data? The privacy risks with facial recognition are less than with behavioral biometric data. 6% 5% 7% 0.73 6% 6% 0.94 7% 5% 0.60 6% 4% 1.00
The privacy risks with facial recognition are about the same as with behavioral biometric data. 49% 54% 45% 50% 47% 53% 44% 53% 54%
The privacy risks with facial recognition are more than with behavioral biometric data. 45% 41% 48% 44% 47% 40% 51% 41% 42%

We described the results on the composite scales using means and standard deviations, and the results on the categorical and dichotomous measures using percentages. Comparisons of composite-scale means between groups used two-sample t-tests or Mann-Whitney tests, as appropriate. Comparisons of percentages on categorical or dichotomous items used chi-squared or Fisher exact tests, as appropriate. Statistical significance was set at p < 0.05. All analyses were conducted using Stata version 17.0.
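For readers outside Stata, the group comparisons described above can be reproduced with standard routines. The sketch below is not the authors’ analysis code (the study used Stata 17.0); it uses Python’s scipy with toy data, and all variable names and values are illustrative.

```python
# Illustrative group comparisons mirroring the analysis described above.
# Not the authors' code (the study used Stata 17.0); data are toy values.

import numpy as np
from scipy import stats

# Composite privacy scores for two hypothetical demographic groups.
group_a = np.array([8, 7, 9, 6, 8, 7, 9, 8])
group_b = np.array([7, 6, 8, 7, 6, 5, 7, 6])

# Two-sample t-test, or Mann-Whitney when normality is questionable.
t_stat, t_p = stats.ttest_ind(group_a, group_b)
u_stat, u_p = stats.mannwhitneyu(group_a, group_b)

# 2x2 table of a dichotomous item (e.g., concerned vs. not) by group:
# chi-squared test, or Fisher's exact test when expected cell counts are small.
table = np.array([[22, 15],
                  [9, 20]])
chi2, chi_p, dof, expected = stats.chi2_contingency(table)
odds_ratio, fisher_p = stats.fisher_exact(table)

print(f"t-test p={t_p:.3f}, Mann-Whitney p={u_p:.3f}")
print(f"chi-squared p={chi_p:.3f}, Fisher exact p={fisher_p:.3f}")
```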

Results

Sample Characteristics

Most of the respondents were 41–50 or 51–60 years old (31% and 22%, respectively), with 47% identifying as male and 53% as female. Men were mostly over 50 years of age (64%), whereas women were mostly under 50 (66%). Nearly all respondents (98%) were based in North America, and all (100%) had completed an advanced degree. Respondents primarily had progressive political views (67%). Of the 84 respondents, 43% were predominantly researchers, 40% were predominantly clinicians, and 17% spent about the same amount of time doing research and treating patients (clinician-researchers). More respondents in the independent/conservative group than in the progressive group were clinicians/clinician-researchers (as opposed to researchers) (75% vs. 49%, p-value=0.045).

Privacy

Overall, respondents scored a mean of 7.8 (SD 1.1) on the composite scale for privacy (scaled 3–9), indicating that they highly value privacy. Most respondents (88%) moderately or strongly agreed that privacy is a basic human right. When weighing the right to privacy against the right to free speech of the person sharing the private information and the public’s right to know, 37% believed the right to privacy carries more weight, and a greater proportion, 55%, thought all three were equally important. Also, 76% moderately or strongly agreed with the “right to be forgotten”, which gives individuals the option to have links to their personal information removed from online search engines.

Facial Recognition Technology

Respondents scored a mean of 7.3 (SD 1.2) on the composite scale for concern about facial recognition (scaled 3–9), which indicates a high level of concern. Compared to other biometric data (e.g., fingerprints, DNA, retinal scans, iris scans, and voice recognition), the largest proportion of respondents (42%) believed privacy risks with FRT were about the same, whereas 35% found the risks to be more pronounced. Compared to behavioral biometric data (e.g., typing cadence, mouse movements, finger movements on trackpads, and how users engage with apps and websites), respondents thought the privacy risks with FRT were about the same or more pronounced (49% vs. 45%, respectively). Compared to genetic profiling, the largest proportion of respondents (43%) moderately agreed that FRT raises more privacy concerns.

Most respondents (80%) moderately or strongly supported the use of FRT in genetics specifically, but most (79%) were either neutral, or moderately or strongly opposed to the clinical use of FRT more broadly.

Overall, respondents were slightly or not at all concerned about the non-clinical uses of FRT (65%), and they tended to be slightly or not at all concerned about the use of FRT by law enforcement to solve minor (58%) or major (65%) crimes. However, they were moderately or very concerned about the use of FRT for surveillance in public areas (52%). Also, 56% of respondents reported being slightly or not at all concerned about the abuse of FRT by medical insurance or commercial entities. A majority (61%) responded yes to being aware of FRT’s lower accuracy rates in all three marginalized groups queried (minorities, women, and older adults), and 76% were moderately or very concerned about discrimination by FRT.

Detailed responses to the dichotomous themed questions are presented in Table 4. Themes were cross-analyzed against each other to create respondent profiles, with a focus on the themes of concern about the mental health effects of FRT, concern about discrimination from FRT, concern about abuse of FRT by commercial entities, and concern about the use of FRT in surveillance.

Table 4.

Themes and dichotomous respondent profiles

Theme Level Overall Concern about mental health effects of FRT Concern about discrimination from FRT Concern about abuse of FRT by commercial entities Concern about the use of FRT in surveillance
“Slightly” or “not concerned at all” “Moderately” or “very” concerned p-value “Slightly” or “not concerned at all” “Moderately” or “very” concerned p-value “Slightly” or “not concerned at all” “Moderately” or “very” concerned p-value “Slightly” or “not concerned at all” “Moderately” or “very” concerned p-value
Support for FRT in genetics “Neither support it nor oppose it”, “moderately”, or “strongly” oppose it 20% 19% 21% 1.00* 10% 24% 0.22* 13% 30% 0.063** 12% 27% 0.11*
“Moderately” or “strongly” support 80% 81% 79% 90% 76% 87% 70% 88% 73%
Support for clinical use of FRT more broadly “Neither support it nor oppose it”, “moderately”, or “strongly” oppose it 79% 67% 83% 0.14* 75% 79% 0.76* 77% 81% 0.79* 70% 86% 0.11*
“Moderately” or “strongly” support 21% 33% 17% 25% 21% 23% 19% 30% 14%
Concern about non-clinical uses of FRT “Slightly” or “not concerned at all” 65% 81% 60% 0.11* 90% 59% 0.013*** 81% 46% 0.001*** 75% 57% 0.11*
“Moderately” or “very” concerned 35% 19% 40% 10% 41% 19% 54% 25% 43%
Concern about mental health effects of FRT “Slightly” or “not concerned at all” 25% 55% 16% 0.001*** 34% 14% 0.042*** 42% 9% <0.001***
“Moderately” or “very” concerned 75% 45% 84% 66% 86% 57% 91%
Concern about discrimination from FRT “Slightly” or “not concerned at all” 24% 52% 15% 0.001*** 34% 11% 0.020*** 32% 16% 0.12*
“Moderately” or “very” concerned 76% 48% 85% 66% 89% 68% 84%
Concern about abuse of FRT by commercial entities “Slightly” or “not concerned at all” 56% 76% 49% 0.042*** 80% 49% 0.020*** 70% 43% 0.016***
“Moderately” or “very” concerned 44% 24% 51% 20% 51% 30% 57%
Concern about the use of FRT in surveillance “Slightly” or “not concerned at all” 48% 81% 37% <0.001*** 65% 43% 0.12* 60% 32% 0.016***
“Moderately” or “very” concerned 52% 19% 63% 35% 57% 40% 68%
Concern about the use of FRT by law enforcement for minor crimes “Slightly” or “not concerned at all” 58% 81% 51% 0.021*** 80% 52% 0.037*** 70% 43% 0.015*** 80% 39% <0.001***
“Moderately” or “very” concerned 42% 19% 49% 20% 48% 30% 57% 20% 61%
Concern about the use of FRT by law enforcement for major crimes “Slightly” or “not concerned at all” 65% 76% 62% 0.29* 85% 60% 0.057** 77% 51% 0.021*** 82% 50% 0.003***
“Moderately” or “very” concerned 35% 24% 38% 15% 40% 23% 49% 18% 50%
Awareness of discrimination by FRT Not Aware 39% 33% 41% 0.61* 20% 44% 0.066** 36% 43% 0.65* 35% 43% 0.51*
Aware 61% 67% 59% 80% 56% 64% 57% 65% 57%
Awareness of the impact of privacy violations on mental health “Slightly” or “not informed at all” 62% 86% 54% 0.010*** 65% 62% 1.00* 72% 49% 0.041*** 65% 59% 0.66*
“Moderately” or “very” informed 38% 14% 46% 35% 38% 28% 51% 35% 41%
* indicates a non-significant p-value
** indicates a p-value < 0.10
*** indicates a p-value < 0.05

Regulation

Overall, respondents scored a mean of 11.7 (SD 1.5) on the composite scale for regulation (scaled 5–15), indicating that they tended to view regulation favorably. Most respondents (67%) were neutral toward or moderately disagreed with Section 230 of the US Communications Decency Act, which protects online platforms from liability based on user-posted content. Further, the majority (71%) supported the use of DNA databases only under “strict regulation” governing access, storage duration, and permitted uses. Respondents were slightly or not at all confident that governments can adequately regulate FRT use (81%) and were even less confident in technology companies regulating themselves (91%). The majority (64%) believed that the problem of FRT showing lower accuracy rates in marginalized groups should be addressed while clinical FRT tools continue to be developed, as opposed to significantly slowing FRT development until the problem is resolved (24%) or assuming that it will resolve itself (12%).

Mental health

Most respondents (62%) were only slightly informed or not at all informed about the impact of privacy violations on mental health, yet 75% were moderately or very concerned about the negative psychological effects once informed about them by the survey.

Breakdown by Demographics

Some differences were noted between genders with respect to demographic features, mental health privacy concerns, views on FRT use by law enforcement, and the effects of the COVID-19 pandemic on FRT. In general, women were more concerned than men about the use of FRT for surveillance (64% vs. 41% moderately or very concerned, p-value=0.049) and about FRT-related discrimination (88% vs. 64% moderately or very concerned, p-value=0.017). With regard to the mental health impact of privacy violations, the largest proportion of women were “moderately” concerned (73%), whereas men were more evenly distributed (very [31%], moderately [36%], and slightly concerned [31%], p-value=0.004). The largest proportion of men (44%) felt “slightly” concerned about using FRT to solve murder cases, whereas women were more evenly split between feeling “not” concerned (30%), “slightly” concerned (25%), and “moderately” concerned (25%), p-value=0.043. Finally, whereas men primarily felt that the COVID-19 pandemic would ultimately have no effect on FRT (56%), women primarily saw the legacy of the pandemic as good because FRT had been “developing too fast and before we can fully understand its potential disadvantages” (53%), p-value=0.015.

Dividing respondents into “younger” (≤50) and “older” (>50) revealed differences in the use of privacy features on social media, trust in FRT, and views on use by law enforcement. More younger respondents used privacy features on social media (“always” use them 20%, “mostly” use them 60%), whereas older respondents were more evenly distributed (“always” 29%, “mostly” 12%, “sometimes” 29%, “rarely” use them 29%), p-value=0.010. Most younger respondents “moderately” supported using FRT to diagnose diseases based on facial morphology (51%), whereas most older respondents “strongly” supported it (56%), p-value=0.026. However, older respondents were more concerned than younger respondents about the possibility of abuse by insurance companies (32% “moderately” and 24% “slightly” concerned vs 19% “moderately” and 44% “slightly” concerned, respectively, p-value=0.024). Older respondents also predominantly (73%) believed that the lower accuracy rates of FRT in minorities and women should be addressed while we continue to develop clinical FRT tools, whereas only 56% of younger respondents held that view, with an additional 35% believing that the development of clinical FRT tools should be “significantly slowed down” until these issues are resolved (p-value=0.049). Younger respondents were also more cautious about the use of FRT by law enforcement to solve murder cases: most older respondents were only “slightly” (49%) or “not at all” (27%) concerned, compared with a more divided response among younger respondents (“very” 19%, “moderately” 26%, “slightly” 21%, and “not at all” concerned 35%, p-value=0.032). Further, younger respondents were more concerned about FRT being fooled by 3-D photos, compared to older respondents (77% moderately or very concerned vs 46%, p-value=0.027).

Some differences also emerged based on respondents’ self-identified political views (progressive vs. independent/conservative). Independents/conservatives were more likely than progressives to moderately or strongly support the use of FRT in genetics (96% vs. 73%, p-value=0.027). Progressives were more concerned than independents/conservatives about FRT’s lower accuracy rates in minorities, women, and older adults (83% vs. 63% “moderately” or “very” concerned, p-value=0.028). Regarding use by law enforcement, more independents/conservatives were only “slightly” or “not at all” concerned about using FRT to solve major crimes (88% vs. 57%, p-value=0.016). Progressives were more concerned than independents/conservatives about using FRT to identify protesters (82% vs. 55% “moderately” or “very” concerned, p-value=0.016).

While both political groups agreed with the statement “new technologies have made it very difficult to protect privacy”, progressives felt more strongly about it (94% vs. 67% moderately or strongly agreed, p-value=0.008). Similarly, independents/conservatives primarily either moderately agreed (25%) or moderately disagreed (38%) with the statement “it has become almost impossible to protect privacy anyway, so resisting facial recognition would be pointless”, whereas progressives tended to be neutral (27%) or to moderately disagree (37%, p-value=0.027). Although both groups agreed with the “right to be forgotten” law, progressives felt more strongly about it (79% vs. 71% moderately or strongly agreed, p-value=0.009). Whereas progressives were roughly evenly split between thinking that the pandemic would ultimately have no effect on FRT (44%) and seeing it as a good thing because “facial recognition has been developing too fast and before we can fully understand its potential disadvantages” (46%), independents/conservatives primarily felt that the pandemic would ultimately have no effect on FRT (46%, p-value=0.026).

Researchers were more active on social media than clinicians/clinician-researchers (75% vs. 33%, p-value<0.001). Clinicians/clinician-researchers were less supportive than researchers of using FRT in non-genetics clinical settings (88% vs. 67% either neutral or moderately or strongly opposed, p-value=0.031). Clinicians/clinician-researchers were also more likely to be aware of Facebase (27% moderately familiar, 38% slightly familiar), whereas researchers were more likely to be unfamiliar with it (58%, p-value<0.001). The largest proportion of clinicians/clinician-researchers (42%) were neutral (neither agreed nor disagreed) toward Section 230 of the US Communications Decency Act of 1996, while researchers primarily “moderately disagreed” (42%) with it (p-value=0.029). While just over half (52%) of clinicians/clinician-researchers felt that the privacy risks of FRT are comparable to those of other biometric data, a similar proportion (50%) of researchers felt the risks to be greater (p-value=0.012). More researchers than clinicians/clinician-researchers moderately or strongly disagreed with the statement “it has become almost impossible to protect privacy anyway, so resisting facial recognition would be pointless” (69% vs. 37%, p-value=0.031).

Discussion

This study explored how genetics professionals view the privacy risks associated with increased FRT use. Overall, respondents valued privacy even as novel technologies emerge. However, views were context-specific and varied across demographic characteristics.

As healthcare technology has advanced, the notion of privacy has shifted from individuals’ “right” to protection, to the degree of “control” they might have over the use of their data, to specific applications of personal data use and trade-offs between individual and societal interests (Clayton et al., 2019). This is reflected in how respondents in our study supported privacy as a basic human right (88% moderately or strongly agreed) and the “right to be forgotten” (76% “moderately” or “strongly” agreed vs. 9% “moderately” or “strongly” disagreed), while the majority (55%) also believed that the right to privacy carried equal weight to the right to free speech of the person sharing the private information and to the public’s right to know. This position argues against an absolute prioritizing of privacy over other values.

Although there was an overall preference for regulation (composite scaled score 11.7, SD 1.5), respondents had little confidence in the government’s ability to regulate FRT and even less in technology companies’ ability to regulate themselves (81% and 91% “slightly” or “not at all” confident, respectively). This mirrors the general population’s views of emergent technologies as carrying intractable privacy risks (Draper & Turow, 2019) and points to a certain level of resignation toward privacy protection. The slightly higher faith in governments, however, reflects the higher public support for FRT use by governments than by private companies (Kostka et al., 2021), presumably due to built-in conflicts of interest in the private sector (Mittelstadt, 2019), notwithstanding views of governments as too beholden to special interests, inefficient, or slow (Bragias et al., 2021). It should be highlighted that views on the use of FRT by autocratic governments have not, to our knowledge, been surveyed.

Women tended to be more concerned than men about the use of FRT for non-clinical surveillance (64% vs 41%) and about its lower accuracy rates in marginalized groups (88% vs 64% concerned). Additionally, more women (53%) agreed that the use of face masks during the COVID-19 pandemic was a good thing because FRT is “developing too fast and before we can understand its potential disadvantages,” compared to men, who primarily saw the pandemic as inconsequential (56%). This echoes previous findings that women are more likely to be concerned about privacy around new technologies (Lin & Wang, 2020), both on social media (Tifferet, 2019) and in professional environments (Stark et al., 2020). Risk aversion to AI development has been explained as resting on whether one’s demographic group has been at risk of error from new technologies, with those who have been socially marginalized more likely to be risk averse (Devlin, 2020). Female respondents’ relative reticence toward FRT may reflect their experiences of, or greater likelihood of facing, social marginalization (Buolamwini & Gebru, 2018).

Our findings suggest heightened awareness among researchers about the need to act to safeguard privacy. Researchers disagreed more strongly than clinicians/clinician-researchers with the statement that resisting facial recognition would be pointless (50% “moderately” and 19% “strongly” disagreed vs 31% “moderately” and 6% “strongly” disagreed). Accordingly, researchers also disagreed more with the Section 230 legislation (42% “moderately” and 11% “strongly” disagreed vs 31% “moderately” and 15% “strongly” disagreed). Further, half of the researchers (50%) suggested that privacy risks with FRT were greater than with other biometric data, whereas 52% of clinicians/clinician-researchers saw no difference. Arguably, these concerns are explained by researchers’ greater familiarity with the challenges of protecting large, interconnected databases or by their higher likelihood of being active on social media (75% active compared to 33% of clinicians).

Overall, there was high support for FRT use in genetics clinics (80% “moderately” or “strongly” supported), although uses in other clinical settings, such as to identify conscious or unconscious patients, were not supported (79% “moderately” or “strongly” opposed). Still, many respondents (65%) reported low levels of concern about FRT use in non-clinical applications, although the level of concern varied with the specific application queried. For example, respondents were less concerned about using FRT to unlock a personal device (38% slightly and 27% not at all concerned) than about using it to open personal locks (29% slightly and 20% not at all concerned). These apparent inconsistencies suggest that, in privacy debates, context matters: privacy concerns about FRT are not the same across medical contexts, or indeed non-medical ones, and neither are expectations for protection, which also vary depending on the social roles and vested interests of the people involved (Nissenbaum, 2009). Historically, for genomic databases, privacy protection for research participants relied more on the moral responsibility of individual scientists than on any formal training in ethics (Brenner, 2013). This responsibility is also shaped by both lived experience and professional knowledge of what privacy protections are possible. These context- and role-dependent variations were evident in our results.

While a substantial proportion of respondents were unaware of FRT’s lower accuracy rates in marginalized groups (39%) and of the mental health risks of privacy violations (62%), respondents expressed concern about these issues once informed about them by the survey (76% and 75%, respectively). This highlights how improving knowledge of these issues raises the level of concern and argues for the need for—and benefits of—broader education on these topics.

Traditional resources available to clinicians, including textbooks and atlases of dysmorphology, lack diverse patient representation and varied phenotypic images, which limits their diagnostic utility in individuals of different ancestral backgrounds (Koretzky et al., 2016; Muenke et al., 2016). Machine learning approaches to facial analysis have been seen as a means to improve diagnostic capabilities in understudied populations (Kruszka et al., 2019). For example, Williams syndrome and 22q11.2 deletion syndrome are typically diagnosed in the pediatric age group, and a facial analysis neural network classifier built to help diagnose these conditions in older individuals outperformed clinical geneticists across five different age groups, with overall accuracy gains of 15.5% and 22.7%, respectively (Duong et al., 2022). Another facial analysis application, for the autosomal dominant disorder Rubinstein-Taybi syndrome (RSTS), showed poor discrimination efficacy in an African group while demonstrating excellent efficacy in a European one (Tekendo-Ngongang et al., 2020). Although machine learning applications hold promise for addressing inequities in diagnosis and treatment, research has shown that they can also perpetuate society’s prejudices, an observation that has been blamed on algorithms being trained on non-representative databases (Najibi, 2020) and one that many respondents in our survey (39%) seemed unaware of.

The finding that most genetics professionals in our study reported not being concerned about FRT use to solve minor or major crimes (58% and 65% not concerned, respectively) is consistent with the overall trust in FRT among U.S. adults, with the exception of younger individuals and people of color (Smith, 2019). In our study, younger geneticists tended to “moderately” support FRT use to diagnose diseases based on facial morphology (51%), whereas 56% of older respondents “strongly” supported it. Younger respondents’ relative distrust may stem from greater familiarity with online environments and the risks attached to newer technologies (Gerber et al., 2018). At the same time, older respondents were more likely to believe that FRT’s lower accuracy rates in minorities and women should be addressed while clinical FRT tools continue to be developed (73% vs 56%). This may reflect more experience with the history of social justice issues. People of color, who face more bias, express more concern for privacy with both genomic data and FRT (Raji et al., 2020; Sanderson et al., 2017), possibly due to fear that data may be used to facilitate further discrimination; however, we did not capture race data in this survey.

Political views also appear to influence perception of the privacy risks associated with FRT. Respondents who identified as politically progressive were more concerned than independents/conservatives about the lower accuracy rates of FRT in marginalized groups (83% vs 63% “moderately” or “very” concerned, respectively). A similar pattern emerged for FRT use by law enforcement: independents/conservatives were primarily only “slightly” or “not at all” concerned about its use to solve major crimes (88% vs 57% of progressives), and they were also less concerned about its use to identify protesters (29% “slightly concerned” and 17% “not concerned at all,” compared with 18% and 0% of progressives, respectively). Evidence elsewhere suggests that progressives are more concerned about the consequences of AI technologies for societal inequities (Bao et al., 2022), while conservatives tend to be less trusting of AI on individual-freedom grounds (Castelo & Ward, 2021). This is echoed in our finding that researchers, who tended to identify as more progressive (81%) than clinicians (57%), were also more concerned about broader uses of FRT, such as solving major crimes (53% vs 75% expressed low levels of concern, respectively).

Limitations and future research

There are several limitations to the study. The low response rate raises the possibility of nonresponse bias (Phillips et al., 2016). We followed recommended strategies for boosting response, such as an email invitation to participate, short survey duration, streamlined design, reminder emails, and a monetary incentive for survey completers. In the future, email pre-notification and the parallel use of different survey modalities should also be considered (Phillips et al., 2016). Low response rates have become an increasingly common problem in survey research and have been partly blamed on “survey fatigue” due to the proliferation of market research and spam messages (Zhang et al., 2017). Also, web surveys regularly achieve lower response rates than older methodologies (Medway & Fulton, 2012; Sammut et al., 2021). Genetics professionals are arguably even less likely to consider unsolicited email requests given the documented burnout in their ranks (Bernhardt et al., 2009). Moreover, the sensitivity of a survey’s topic has been shown to affect response rate (Cunningham et al., 2015), and a survey on attitudes toward privacy might have been deemed too intrusive. Further, only half of the respondents (51%) used social media. Since social media use has been associated with ease of filling out online surveys (Braun et al., 2020), this may also have contributed to the low response rate.

A factor that may have influenced the results of this study is the framing of the questions around the term “facial recognition.” People’s prior knowledge of facial recognition, or of its potential use by companies or governments, is likely to result in strong views, as our findings about the impact of political views suggest. It is important to note that syndrome diagnosis from facial imaging is, strictly speaking, not a form of facial recognition: the analyses quantify facial shape features predictive of disease but are not intended to recognize individuals. Our choice of this term was deliberate, however, as it is technically feasible to layer facial recognition technology onto image analyses intended for disease diagnosis. Nevertheless, this choice may bias the results towards people viewing syndrome diagnosis from facial imaging with more caution than they would have if we had used an alternative term like “facial analysis.”

While our sample was likely representative of the population it targeted, it was limited by not collecting all relevant demographic data, including racial identity. Race is likely a key determinant of views on discrimination, data abuse, and regulation (Raji et al., 2020; Sanderson et al., 2017), and future studies should explore its relationship to people’s views. Future efforts should also explore views on terms such as anonymity versus confidentiality and oversight versus security, which researchers have often conflated (Clayton et al., 2018), and on the various components of privacy as identified in the mental health literature (i.e., anonymity, reserve, selective intimacy, isolation, and solitude) (Aboujaoude, 2019).

Conclusion

Precision medicine mines large amounts of data to provide individualized care. Genomic data is a cornerstone of this approach, but its use may have come at a cost to patient privacy (Stiles & Appelbaum, 2019). The growing use of FRT further raises the stakes and the chances of psychological and societal harm from privacy violations. Genetics professionals occupy a unique niche where both the risks and opportunities of FRT are on full display. Their level of awareness about FRT use and associated risks showed an overall valuing of privacy, but views were context-specific and varied across demographic groupings. However, the results also showed a willingness to learn and to alter views as information is communicated, for example around discrimination and mental health effects. This suggests that professional evidence-based guidelines, if developed and disseminated, would find an audience of geneticists eager to receive and implement them.

Acknowledgements

This publication was supported by NIH-NIDCR U01DE024440 to O.D.K. and B.H., and also by the National Center for Advancing Translational Sciences, National Institutes of Health, through UCSF-CTSI Grant Number UL1 TR001872. We thank Dr. Katrina Dipple for helpful discussions.

Footnotes

Declaration of Interests

Dr. Hallgrímsson is a co-founder of the company DeepSurfaceAI. Dr. Aboujaoude, Janice Light, Dr. Brown, Dr. Boscardin and Dr. Klein declare no competing interests.

Data Availability

The datasets generated during and/or analysed during the current study are available from the corresponding authors on reasonable request.

References

1. Aboujaoude E. (2019). Protecting Privacy to Protect Mental Health: The New Ethical Imperative. Journal of Medical Ethics, 45(9), 604–607. 10.1136/medethics-2018-105313
2. Almeida D, Shmarko K, & Lomas E. (2022). The ethics of facial recognition technologies, surveillance, and accountability in an age of artificial intelligence: A comparative analysis of US, EU, and UK regulatory frameworks. AI and Ethics, 2(3), 377–387. 10.1007/s43681-021-00077-w
3. Bannister JJ, Wilms M, Aponte JD, Katz DC, Klein OD, Bernier FPJ, Spritz RA, Hallgrimsson B, & Forkert ND (2022). A Deep Invertible 3-D Facial Shape Model for Interpretable Genetic Syndrome Diagnosis. IEEE Journal of Biomedical and Health Informatics, 26(7), 3229–3239. 10.1109/JBHI.2022.3164848
4. Bannister JJ, Wilms M, Aponte JD, Katz DC, Klein OD, Bernier FPJ, Spritz RA, Hallgrímsson B, & Forkert ND (2022). Detecting 3D syndromic faces as outliers using unsupervised normalizing flow models. Artificial Intelligence in Medicine, 134, 102425. 10.1016/j.artmed.2022.102425
5. Bao L, Krause NM, Calice MN, Scheufele DA, Wirz CD, Brossard D, Newman TP, & Xenos MA (2022). Whose AI? How different publics think about AI and its social impacts. Computers in Human Behavior, 130, 107182. 10.1016/j.chb.2022.107182
6. Bernhardt BA, Rushton CH, Carrese J, Pyeritz RE, Kolodner K, & Geller G. (2009). Distress and burnout among genetic service providers. Genetics in Medicine, 11(7), 527–535. 10.1097/GIM.0b013e3181a6a1c2
7. Bragias A, Hine K, & Fleet R. (2021). ‘Only in our best interest, right?’ Public perceptions of police use of facial recognition technology. Police Practice and Research, 22, 1–18. 10.1080/15614263.2021.1942873
8. Braun V, Clarke V, Boulton E, Davey L, & McEvoy C. (2020). The online survey as a qualitative research tool. International Journal of Social Research Methodology, 24, 1–14. 10.1080/13645579.2020.1805550
9. Brenner SE (2013). Be prepared for the big genome leak. Nature, 498(7453), Article 7453. 10.1038/498139a
10. Buolamwini J, & Gebru T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 77–91. https://proceedings.mlr.press/v81/buolamwini18a.html
11. California Consumer Privacy Act of 2018. (n.d.). Retrieved November 29, 2022, from https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?division=3.&part=4.&lawCode=CIV&title=1.81.5
12. Castelo N, & Ward AF (2021). Conservatism predicts aversion to consequential Artificial Intelligence. PLOS ONE, 16(12), e0261467. 10.1371/journal.pone.0261467
13. Clayton EW, Evans BJ, Hazel JW, & Rothstein MA (2019). The law of genetic privacy: Applications, implications, and limitations. Journal of Law and the Biosciences, 6(1), 1–36. 10.1093/jlb/lsz007
14. Clayton EW, Halverson CM, Sathe NA, & Malin BA (2018). A systematic literature review of individuals’ perspectives on privacy and genetic information in the United States. PloS One, 13(10), e0204417. 10.1371/journal.pone.0204417
15. Council of Europe. (1950). Convention for the Protection of Human Rights and Fundamental Freedoms. Council of Europe Treaty Series, 005. https://www.echr.coe.int/documents/convention_eng.pdf
16. Council of the European Union. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance). Official Journal of the European Union. http://op.europa.eu/en/publication-detail/-/publication/3e485e15-11bd-11e6-ba9a-01aa75ed71a1
17. Cunningham CT, Quan H, Hemmelgarn B, Noseworthy T, Beck CA, Dixon E, Samuel S, Ghali WA, Sykes LL, & Jetté N. (2015). Exploring physician specialist response rates to web-based surveys. BMC Medical Research Methodology, 15(1), 32. 10.1186/s12874-015-0016-z
18. de Groot NF, van Beers BC, & Meynen G. (2021). Commercial DNA tests and police investigations: A broad bioethical perspective. Journal of Medical Ethics, 47(12), 788–795. 10.1136/medethics-2021-107568
19. Deverka PA, Gilmore D, Richmond J, Smith Z, Mangrum R, Koenig BA, Cook-Deegan R, Villanueva AG, Majumder MA, & McGuire AL (2019). Hopeful and Concerned: Public Input on Building a Trustworthy Medical Information Commons. The Journal of Law, Medicine & Ethics: A Journal of the American Society of Law, Medicine & Ethics, 47(1), 70–87. 10.1177/1073110519840486
20. Devlin H. (2020, February 16). AI systems claiming to “read” emotions pose discrimination risks. The Guardian. https://www.theguardian.com/technology/2020/feb/16/ai-systems-claiming-to-read-emotions-pose-discrimination-risks
21. Draper NA, & Turow J. (2019). The corporate cultivation of digital resignation. New Media & Society, 21(8), 1824–1839. 10.1177/1461444819833331
22. Duong D, Hu P, Tekendo-Ngongang C, Hanchard SEL, Liu S, Solomon BD, & Waikel RL (2022). Neural Networks for Classification and Image Generation of Aging in Genetic Syndromes. Frontiers in Genetics, 13. https://www.frontiersin.org/articles/10.3389/fgene.2022.864092
23. Dupras C, & Bunnik EM (2021). Toward a Framework for Assessing Privacy Risks in Multi-Omic Research and Databases. The American Journal of Bioethics, 21(12), 46–64. 10.1080/15265161.2020.1863516
24. Gerber N, Gerber P, & Volkamer M. (2018). Explaining the privacy paradox: A systematic review of literature investigating privacy attitude and behavior. Computers & Security, 77, 226–261. 10.1016/j.cose.2018.04.002
25. Grother PJ, Ngan ML, & Hanaoka KK (2019). Face Recognition Vendor Test Part 3: Demographic Effects. NIST. https://www.nist.gov/publications/face-recognition-vendor-test-part-3-demographic-effects
26. Gurovich Y, Hanani Y, Bar O, Nadav G, Fleischer N, Gelbman D, Basel-Salmon L, Krawitz PM, Kamphausen SB, Zenker M, Bird LM, & Gripp KW (2019). Identifying facial phenotypes of genetic disorders using deep learning. Nature Medicine, 25(1), Article 1. 10.1038/s41591-018-0279-0
27. Hallgrímsson B, Aponte JD, Katz DC, Bannister JJ, Riccardi SL, Mahasuwan N, McInnes BL, Ferrara TM, Lipman DM, Neves AB, Spitzmacher JAJ, Larson JR, Bellus GA, Pham AM, Aboujaoude E, Benke TA, Chatfield KC, Davis SM, Elias ER, … Klein OD (2020). Automated syndrome diagnosis by three-dimensional facial imaging. Genetics in Medicine, 22(10), Article 10. 10.1038/s41436-020-0845-y
28. Hsieh T-C, Bar-Haim A, Moosa S, Ehmke N, Gripp KW, Pantel JT, Danyel M, Mensah MA, Horn D, Rosnev S, Fleischer N, Bonini G, Hustinx A, Schmid A, Knaus A, Javanmardi B, Klinkhammer H, Lesmann H, Sivalingam S, … Krawitz PM (2022). GestaltMatcher facilitates rare disease matching using facial phenotype descriptors. Nature Genetics, 54(3), 349–357. 10.1038/s41588-021-01010-x
29. Koretzky M, Bonham VL, Berkman BE, Kruszka P, Adeyemo A, Muenke M, & Hull SC (2016). Towards a more representative morphology: Clinical and ethical considerations for including diverse populations in diagnostic genetic atlases. Genetics in Medicine, 18(11), 1069–1074. 10.1038/gim.2016.7
30. Kostka G, Steinacker L, & Meckel M. (2021). Between security and convenience: Facial recognition technology in the eyes of citizens in China, Germany, the United Kingdom, and the United States. Public Understanding of Science, 30(6), 671–690. 10.1177/09636625211001555
31. Kruszka P, Tekendo-Ngongang C, & Muenke M. (2019). Diversity and dysmorphology. Current Opinion in Pediatrics, 31(6), 702–707. 10.1097/MOP.0000000000000816
32. Kulynych J, & Greely HT (2017). Clinical genomics, big data, and electronic medical records: Reconciling patient rights with research when privacy and science collide. Journal of Law and the Biosciences, 4(1), 94–132. 10.1093/jlb/lsw061
33. Lin X, & Wang X. (2020). Examining gender differences in people’s information-sharing decisions on social networking sites. International Journal of Information Management, 50, 45–56. 10.1016/j.ijinfomgt.2019.05.004
34. Marwick AE, & Boyd D. (2018). Privacy at the Margins | Understanding Privacy at the Margins—Introduction. International Journal of Communication, 12(0), Article 0.
35. Matthews H, Vanneste M, Katsura K, Aponte D, Patton M, Hammond P, Baynam G, Spritz R, Klein OD, Hallgrimsson B, Peeters H, & Claes P. (2022). Refining nosology by modelling variation among facial phenotypes: The RASopathies. Journal of Medical Genetics, jmedgenet-2021–108366. 10.1136/jmedgenet-2021-108366
36. McGraw D, & Mandl KD (2021). Privacy protections to encourage use of health-relevant digital data in a learning health system. Npj Digital Medicine, 4(1), Article 1. 10.1038/s41746-020-00362-8
37. Medway RL, & Fulton J. (2012). When More Gets You Less: A Meta-Analysis of the Effect of Concurrent Web Options on Mail Survey Response Rates. Public Opinion Quarterly, 76(4), 733–746. 10.1093/poq/nfs047
38. Mittelstadt B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), Article 11. 10.1038/s42256-019-0114-4
39. Muenke M, Adeyemo A, & Kruszka P. (2016). An electronic atlas of human malformation syndromes in diverse populations. Genetics in Medicine: Official Journal of the American College of Medical Genetics, 18(11), 1085–1087. 10.1038/gim.2016.3
40. Najibi A. (2020, October 24). Racial Discrimination in Face Recognition Technology. Science in the News. https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/
41. Naqvi S, Kim S, Hoskens H, Matthews HS, Spritz RA, Klein OD, Hallgrímsson B, Swigut T, Claes P, Pritchard JK, & Wysocka J. (2022). Precise modulation of transcription factor levels reveals drivers of dosage sensitivity (p. 2022.06.13.495964). bioRxiv. 10.1101/2022.06.13.495964
42. Nissenbaum H. (2009). Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford University Press.
43. Phillips AW, Reddy S, & Durning SJ (2016). Improving response rates and evaluating nonresponse bias in surveys: AMEE Guide No. 102. Medical Teacher, 38(3), 217–228. 10.3109/0142159X.2015.1105945
44. Porras AR, Rosenbaum K, Tor-Diez C, Summar M, & Linguraru MG (2021). Development and evaluation of a machine learning-based point-of-care screening tool for genetic syndromes in children: A multinational retrospective study. The Lancet Digital Health, 3(10), e635–e643. 10.1016/S2589-7500(21)00137-0
45. Raji ID, Gebru T, Mitchell M, Buolamwini J, Lee J, & Denton E. (2020). Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing (arXiv:2001.00964). arXiv. 10.48550/arXiv.2001.00964
46. Sammut R, Griscti O, & Norman IJ (2021). Strategies to improve response rates to web surveys: A literature review. International Journal of Nursing Studies, 123, 104058. 10.1016/j.ijnurstu.2021.104058
47. Sanderson SC, Brothers KB, Mercaldo ND, Clayton EW, Antommaria AHM, Aufox SA, Brilliant MH, Campos D, Carrell DS, Connolly J, Conway P, Fullerton SM, Garrison NA, Horowitz CR, Jarvik GP, Kaufman D, Kitchner TE, Li R, Ludman EJ, … Holm IA (2017). Public Attitudes toward Consent and Data Sharing in Biobank Research: A Large Multi-site Experimental Survey in the US. American Journal of Human Genetics, 100(3), 414–427. 10.1016/j.ajhg.2017.01.021
48. Smith A. (2019, September 5). More Than Half of U.S. Adults Trust Law Enforcement to Use Facial Recognition Responsibly. Pew Research Center: Internet, Science & Tech. https://www.pewresearch.org/internet/2019/09/05/more-than-half-of-u-s-adults-trust-law-enforcement-to-use-facial-recognition-responsibly/
49. Staller S, Anigbo J, Stewart K, Dutra V, & Turkkahraman H. (2022). Precision and accuracy assessment of single and multicamera three-dimensional photogrammetry compared with direct anthropometry. The Angle Orthodontist, 92(5), 635–641. 10.2319/101321-770.1
50. Stark L, Stanhaus A, & Anthony DL (2020). “I Don’t Want Someone to Watch Me While I’m Working”: Gendered Views of Facial Recognition Technology in Workplace Surveillance. Journal of the Association for Information Science and Technology, 71(9), 1074–1088. 10.1002/asi.24342
51. Stiles D, & Appelbaum PS (2019). Cases in Precision Medicine: Concerns About Privacy and Discrimination After Genomic Sequencing. Annals of Internal Medicine, 170(10), 717–721. 10.7326/M18-2666
52. Tekendo-Ngongang C, Owosela B, Fleischer N, Addissie YA, Malonga B, Badoe E, Gupta N, Moresco A, Huckstadt V, Ashaat EA, Hussen DF, Luk H-M, Lo IFM, Hon-Yin Chung B, Fung JLF, Moretti-Ferreira D, Batista LC, Lotz-Esquivel S, Saborio-Rocafort M, … Kruszka P. (2020). Rubinstein-Taybi syndrome in diverse populations. American Journal of Medical Genetics. Part A, 182(12), 2939–2950. 10.1002/ajmg.a.61888
53. Tifferet S. (2019). Gender differences in privacy tendencies on social network sites: A meta-analysis. Computers in Human Behavior, 93, 1–12. 10.1016/j.chb.2018.11.046
54. Zhang X, Kuchinke L, Woud ML, Velten J, & Margraf J. (2017). Survey method matters: Online/offline questionnaires and face-to-face or telephone interviews differ. Computers in Human Behavior, 71, 172–180. 10.1016/j.chb.2017.02.006
