Author manuscript; available in PMC: 2019 Jul 16.
Published in final edited form as: AMA J Ethics. 2019 Feb 1;21(2):E180–E187. doi: 10.1001/amajethics.2019.180

What Are Important Ethical Implications of Using Facial Recognition Technology in Health Care?

Nicole Martinez-Martin 1
PMCID: PMC6634990; NIHMSID: NIHMS1033894; PMID: 30794128

Abstract

Applications of facial recognition technology (FRT) in health care settings have been developed to identify and monitor patients as well as to diagnose genetic, medical, and behavioral conditions. The use of FRT in health care raises ethical questions about informed consent, the quality of data input and analysis, effective communication about incidental findings, and potential effects on patient-clinician relationships. Privacy and data protection also present significant challenges for health applications of FRT.

Promises and Challenges of Facial Recognition Technology

Facial recognition technology (FRT) utilizes software to map a person’s facial characteristics and then store the data as a face template.1 Algorithms or machine learning techniques are applied to a database to compare facial images or to find patterns in facial features for verification or authentication purposes.2 FRT is attractive for a variety of health care applications, such as diagnosing genetic disorders, monitoring patients, and providing health indicator information (related to behavior, aging, longevity, or pain experience, for example).3-5
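To make the template-matching step concrete, the following is a minimal sketch, in Python, of the verification and identification operations described above. The fixed-length embedding vectors (the "face templates") and the similarity threshold are illustrative assumptions, not features of any particular product; in practice the templates would come from a trained face-embedding model, and only the comparison logic is shown here.

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Similarity between two face templates, in [-1, 1]."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
        """Verification: does a new image's template match one enrolled template?"""
        return cosine_similarity(probe, enrolled) >= threshold

    def identify(probe: np.ndarray, database: dict, threshold: float = 0.6):
        """Identification: return the best-matching identity in a template
        database, or None if no match clears the threshold."""
        scores = {pid: cosine_similarity(probe, tpl) for pid, tpl in database.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] >= threshold else None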

FRT is likely to become a useful tool for diagnosing many medical and genetic conditions.6,7 Machine learning techniques, in which a computer program is trained on a large data set to recognize patterns and generates its own algorithms on the basis of that learning,8 have already been used to assist in diagnosing a patient with a rare genetic disorder that had not been identified after years of clinical effort.9 Machine learning can also detect subtler correlations between facial morphology and genetic disorders than clinicians can.4 FRT could therefore eventually assist in earlier detection and treatment of genetic disorders,10,11 and computer applications (commonly known as apps) such as Face2Gene have already been developed to assist clinicians in making such diagnoses.12
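As a rough illustration of that training process, the sketch below fits a classifier to labeled facial-feature vectors and reports per-diagnosis performance. The synthetic features, labels, and model choice are placeholders, not a description of Face2Gene or any published system.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    # Placeholder data: one row of facial-morphology measurements per
    # patient, labeled with a confirmed genetic diagnosis (0, 1, or 2).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 64))
    y = rng.integers(0, 3, size=500)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Per-class metrics matter clinically: aggregate accuracy can hide
    # poor performance on the rarer disorders.
    print(classification_report(y_test, model.predict(X_test)))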

FRT has other potential health care applications. FRT is being developed to predict health characteristics, such as longevity and aging.13 FRT is also being applied to predict behavior, pain, and emotions by identifying facial expressions associated with depression or pain, for example.14,15 Another major area for FRT applications in health care is patient identification and monitoring, such as monitoring elderly patients for safety or attempts to leave a health care facility16 or monitoring medication adherence through the use of sensors and facial recognition to confirm when patients take their medications.17

As with any new health technology, careful attention should be paid to the accuracy and validity of FRT used in health care applications as well as to informed consent and reporting incidental findings to patients. FRT in health care also raises ethical questions about privacy and data protection, potential bias in the data or analysis, and potential negative implications for the therapeutic alliance in patient-clinician relationships.

Ethical Dimensions of FRT in Health Care

Informed consent

FRT tools that assist with identification, monitoring, and diagnosis are expected to play a prominent role in the future of health care.6,18 Some applications have already been implemented.13,19 As FRT is increasingly utilized in health care settings, informed consent will need to be obtained not only for collecting and storing patients’ images but also for the specific purposes for which those images might be analyzed by FRT systems.20 In particular, patients might not be aware that their images could be used to generate additional clinically relevant information. While FRT systems in health care can de-identify data, some experts are skeptical that such data can be truly anonymized21; from clinical and ethical perspectives, informing patients about this kind of risk is critical.

Some machine learning systems need continuous data input to train and improve their algorithms22 (a pattern sketched below) in a process that could be analogized to quality improvement research, for which informed consent is not regarded as necessary.23 For example, to improve its algorithms, FRT for genetic diagnosis would need to receive new data sets of images of patients already known to have specific genetic disorders.2 To maintain trust and transparency with patients, organizations should consider involving relevant community stakeholders in implementing FRT and in decisions about establishing and improving practices of informing patients about the organization’s use of FRT. As FRT becomes capable of detecting a wider range of health conditions, such as behavioral24 or developmental disorders,25 health care organizations and software developers will need to decide which types of analyses should be included in an FRT system and the conditions under which patients might need to be informed of incidental findings.
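The continuous-data-input pattern can be sketched as incremental retraining: each new batch of images from patients with confirmed diagnoses updates the model. The streaming batches, feature dimensions, and model below are illustrative assumptions only.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    model = SGDClassifier(random_state=0)
    classes = np.array([0, 1, 2])  # the diagnoses the system knows about

    rng = np.random.default_rng(0)
    for batch in range(5):
        # Each batch stands in for newly contributed images of patients
        # already known to have specific genetic disorders.
        X_new = rng.normal(size=(32, 64))
        y_new = rng.choice(classes, size=32)
        model.partial_fit(X_new, y_new, classes=classes)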

Bias

As with any clinical innovation, FRT tools should be expected to demonstrate accuracy for specific uses and to demonstrate that overall benefits outweigh risks.26 Detecting and evaluating bias in data and results should also receive close ethical scrutiny.27 In machine learning, the quality of the results reflects the quality of the data input to the system,28 an issue sometimes referred to as “garbage in, garbage out.” For example, when the images used to train the software are not drawn from a sufficiently racially diverse pool, the system may produce racially biased results.29 If this happens, FRT diagnostics might not work as well for some racial or ethnic groups as for others. One example that gained notoriety was an FRT system purportedly able to identify gay men from a set of photos; the system may simply have been detecting grooming and dress habits stereotypically associated with gay men.30 Its developers did not intend it for clinical use but rather to illustrate how bias can influence FRT findings.30
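One straightforward way to surface this kind of bias is a subgroup audit: compute the system's accuracy separately for each demographic group and look for gaps. The sketch below assumes a test set with ground-truth labels, model predictions, and group membership; the data shown are purely illustrative.

    from collections import defaultdict

    def accuracy_by_group(y_true, y_pred, groups):
        """Per-subgroup accuracy, so disparities are visible rather than
        averaged away in a single overall accuracy number."""
        correct, total = defaultdict(int), defaultdict(int)
        for truth, pred, group in zip(y_true, y_pred, groups):
            total[group] += 1
            correct[group] += int(truth == pred)
        return {g: correct[g] / total[g] for g in total}

    # Illustrative call; a large gap between groups would flag the
    # training data for further investigation.
    print(accuracy_by_group(
        y_true=[1, 0, 1, 1, 0, 1],
        y_pred=[1, 0, 0, 1, 1, 1],
        groups=["A", "A", "B", "B", "A", "B"],
    ))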

Beyond such audits, potential solutions for addressing bias in FRT systems include efforts to create AI systems that can explain the rationale behind the results they generate.31 Clinicians can also be trained to recognize and respond to the limitations and biases of FRT systems.32 In addition, organizations such as the National Human Genome Research Institute have sought to diversify the range of people whose images are included in their image databases.33

Patient privacy

FRT raises novel challenges regarding privacy. FRT systems can store data as a complete facial image or as a facial template.34 Facial templates are considered biometric data and thus personally identifiable information.35 The idea that a photo can reveal private health information is relatively new, and privacy regulations and practices are still catching up. A few states, such as Illinois, have regulations that limit uses for which consumer biometric data can be collected.36 The Health Insurance Portability and Accountability Act (HIPAA) governs handling of patients’ health records and personal health information and includes privacy protections for personally identifiable information. More specifically, it protects the privacy of biometric data, including “full-face photographs and any comparable images,” which are “directly related to an individual.”37 Thus, facial images used for FRT health applications would be protected by HIPAA.38 Entities covered by HIPAA, including health care organizations, clinicians, and third-party business associates, would need to comply with HIPAA regulations regarding the use and disclosure of protected health information.38 However, clinicians should advise patients that there may be limited protections for storing and sharing data when using a consumer FRT tool.

Some statutes that protect health information might not apply to FRT. The Genetic Information Nondiscrimination Act (GINA) of 2008, for example, does not apply to FRT for genetic diagnosis, as FRT does not fit GINA’s definition of genetic testing or genetic information.39 The Americans with Disabilities Act of 1990, which protects people with disabilities from discrimination in public life (eg, schools or employment),40 would also likely not apply to FRT used for diagnostic purposes if the conditions diagnosed are currently unexpressed. Employers might also be interested in using FRT tools to predict mood, behavior, or longevity, particularly in wellness programs intended to lower their health care costs.

Broader influence of FRT

The broader impact of FRT in health care settings will require careful thought and study. One potential issue is liability. For example, if FRT diagnostic software develops to the point that it is used not just to augment but to replace a physician’s judgment, ethical and legal questions may arise regarding which entity appropriately bears liability.41 Similarly, if FRT is used to monitor compliance, track patients’ whereabouts, or assist in other kinds of surveillance, patients’ trust in physicians could be eroded, undermining the therapeutic alliance. It is therefore important to weigh the relative benefits and burdens of specific FRT uses in health care and to conduct research into how patients perceive its use. On the one hand, the use of FRT to monitor the safety of dementia patients could be perceived as having benefits that outweigh the burdens of surveillance. On the other, FRT medication adherence monitoring might not be sufficiently effective in improving adherence to outweigh the risk of undermining trust in the patient-physician relationship.42

As considered here, the numerous applications of FRT in health care settings underscore the ethical, clinical, and legal importance of informed consent, the quality of data input and analysis, effective communication about incidental findings, and FRT’s potential influence on patient-clinician relationships. Robust privacy and data protections will be key to realizing the benefits of FRT while maintaining patients’ trust.

Biography

Nicole Martinez-Martin, JD, PhD, is a postdoctoral fellow at the Stanford Center for Biomedical Ethics in Stanford, California. She earned a JD from Harvard Law School and a PhD in comparative human development from the University of Chicago. Her research focuses on neuroethics and the ethics of digital health technology and machine learning, with particular attention to mental health issues and special populations.

Footnotes

Conflict of Interest Disclosure

The author(s) had no conflicts of interest to disclose.

References
