Author manuscript; available in PMC: 2024 Nov 1.
Published in final edited form as: Ethics Hum Res. 2023 Nov-Dec;45(6):46–50. doi: 10.1002/eahr.500190

Should Chatbots Be Used to Obtain Informed Consent for Research?

Mark A Rothstein 1
PMCID: PMC11050738  NIHMSID: NIHMS1977126  PMID: 37988278

Abstract

Chatbots are computer programs that use artificial intelligence and natural language processing to simulate human conversation (written or spoken), allowing humans to interact with digital devices as if they were communicating with a real person. Chatbots are being developed and tested for obtaining informed consent for research. An initial study indicated that they saved time and were successful in knowledge transfer, but informed consent serves other purposes, such as building trust and respecting the autonomy and dignity of potential research participants. Additional research and possibly regulation are necessary before chatbots are routinely used in health research.

Keywords: Artificial intelligence, Autonomy, Chatbots, Informed consent, IRBs


Chatbots are computer programs that use artificial intelligence (AI) and natural language processing to simulate human conversation (written or spoken), allowing humans to interact with digital devices as if they were communicating with a real person.1 Chatbot technology is omnipresent, including in website chat features, smart speakers, messaging applications, and virtual assistants such as Apple’s Siri and Amazon’s Alexa. Chatbots also have been used in healthcare to obtain patient information in health screening2 and to provide clinical information to patients.3 Although using chatbots in health research would appear to be a logical extension of the technology, this new use raises several issues, including the appropriateness of chatbot-based consent for certain populations and types of research, its potential regulation, and the consequences of dehumanizing the informed consent process.

Initial study of chatbot-based consent for research

One of the first studies on the feasibility and outcomes of using a chatbot for informed consent, by Smith et al. (2023),4 was a retrospective analysis of consent for genomic research comparing a traditional human-based consent process with chatbot-based consent. The chatbot was a modification of a previously developed platform, Gia® (Genetic Information Assistant), which provided potential participants with information about the study, educational material, and a quiz to assess their understanding. Several implications of the study by Smith et al. are important to consider before the technology is more widely adopted.

Some of the results of the study are especially promising. The informed consent process took 44 minutes with the chatbot and 76 minutes with in-person consent, thereby reducing the burden on both potential participants and research staff. Also, the total time from referral to consent completion was 5 days with the chatbot compared with 16 days for in-person consent, due to in-person scheduling limitations. For large research projects with many potential participants, the time lag for in-person consent could be even longer, thereby delaying the research. The results of a 10-question quiz (96% pass rate) indicated that there was successful knowledge transfer using the chatbot, and 86% of the chatbot users reported having a positive experience. Because online consent has already been used successfully for certain types of research, such as biobanking and studies utilizing mobile applications,5 the novel issue is whether a chatbot is feasible and appropriate for more complex informed consent.

Undoubtedly, chatbot technology has advantages over traditional, in-person consent. The process is much less resource intensive because study personnel are needed only to answer difficult questions and, in this study, to obtain assent from children ages 7-17. The lower cost and more expedited process also facilitate the enrollment of larger cohorts of research participants. Furthermore, the asynchronous and flexible nature of chatbot consent, such as the ability to complete the consent process in stages, could increase the diversity of participants by enabling the enrollment of individuals whose work, education, or family responsibilities preclude in-person consent during working hours. These benefits already have been noted with online consent.

Notwithstanding these positive results and implications, this initial study of chatbots has at least two important limitations. First, the study involved only 37 families using traditional informed consent and 35 families using chatbot-enabled informed consent. The small sample size precluded a more detailed analysis of age, education, ethnicity, or other factors in evaluating objective and subjective measures of chatbot acceptability and comparability to in-person informed consent. Also, research participants in this study were overwhelmingly white.

Second, the families were not randomized, but self-selected their type of consent process. Presumably, those who chose chatbot consent were comfortable using digital technologies and more likely than participants who opted for in-person consent to report a positive experience with the chatbot. It is unknown whether a chatbot-based consent process would be as well received by more diverse populations of potential research participants, such as individuals who were not native English speakers, were older or uncomfortable with online technology, were ill, or had cognitive impairments.

Implementation and regulation

This study represents a preliminary proof of concept, and several fundamental questions in the implementation and regulatory domains remain to be addressed. These include for what types of research chatbots are appropriate; what role institutional review boards (IRBs) should play in overseeing the use of chatbots; and whether chatbots should be regulated and, if so, how and by whom.

The most logical application of chatbot-based informed consent would be for large, information-based research. In general, these are relatively low-risk studies, and the benefits of in-person informed consent would not usually outweigh its greater demands on the resources and time of both potential participants and research personnel. The electronic consent used by the National Institutes of Health (NIH) All of Us research program would appear to be a case in point.6

On the other hand, sensitive or interventional research, especially research involving more than minimal risk, would appear to be least appropriate for chatbot-based informed consent. Similarly, research with vulnerable populations, such as individuals who are cognitively impaired or who have mental health conditions, also could present challenges because informed consent involving vulnerable individuals requires discretion, flexibility, empathy, patience, and other uniquely human qualities.7 Nevertheless, there are reports that AI has greatly aided physicians in empathetically communicating with patients.8

If using chatbots to obtain informed consent is permitted for some protocols, but not others, IRBs would have the responsibility for determining approved uses. In general, IRBs and possibly the Office for Human Research Protections (OHRP) should decide whether there ought to be a categorical distinction between chatbot-based consent and simpler electronic consent. In specific cases, IRBs would need to determine whether a particular chatbot proposed for a study is technically capable of providing accurate, unbiased information necessary for prospective participants to make knowing and informed decisions.

These new responsibilities would be consistent with traditional duties of IRBs, such as determining when an exemption,9 expedited review,10 or waiver of informed consent11 is appropriate under the Common Rule. However, there are two important differences with chatbots: the possible lack of technical expertise among IRB members and administrators, and the current absence of any applicable regulations or guidance.

IRBs typically are composed of members with a wide range of expertise, but occasionally research submissions involve technical matters beyond an IRB’s collective knowledge base. In such instances, the OHRP authorizes IRBs to engage outside experts, and it encourages IRBs to establish procedures to determine when an outside expert is needed, the method of selection, and the role of the expert in the review.12

According to one study, 55.4% of IRBs reported that they used outside experts to obtain additional scientific expertise.13 Although experts can understand and explain technical matters, some IRBs reported that outside experts may have limited knowledge of research regulations, may not provide actionable feedback, and may provide information beyond the scope of their assignment.14 Furthermore, the use of experts is likely to increase the cost of review for the IRB and delay the review process.

Expansion of chatbot informed consent also suggests two areas for possible regulation. First, guidelines could be issued by the OHRP15 to clarify and regularize the role of IRBs in evaluating protocols utilizing chatbots. Second, the NIH,16 perhaps through the Office of the Director or another trans-institute mechanism, could issue directives establishing technical standards for chatbot computer programs.

Dehumanizing informed consent

Even if the various issues described above can be satisfactorily resolved, chatbots raise a more general concern: they may embody a narrow view of informed consent as limited to conveying information so that individuals can decide whether to participate in research. Symbolically, asking for consent may be just as important as giving consent. Informed consent should provide transparency and acknowledge the autonomy and significant role of research participants. As I have previously written: “Asking for consent is a demonstration of respect for the autonomy and dignity of the individual, a tangible exercise in trust building, and an expression of the moral equivalence of the researcher and research subject in an otherwise asymmetrical relationship.”17

To the extent that chatbot consent uses a familiar online process and could appear to some individuals like the “click through” consent used for routine software downloads, informed consent and the role of research participants might seem to be devalued by researchers.18 Also, simplified chatbot-based informed consent could give the impression that the personal, financial, and administrative interests of researchers, their institutions and funders, and pharmaceutical and other commercial entities are valued more than the interests of potential research participants. On the other hand, a shorter, more efficient consent process could be viewed as respecting participants’ time and their ability to understand their role in the research.

In practice, current methods of informed consent for research often fall short of the ideal of providing an informative, meaningful, interactive process.19 For example, in both clinical and research settings, studies show a lack of patient and participant understanding at the time of consent as well as a later lack of recall.20 However, knowledge transfer to enable autonomous decision making, although important, is only one aspect of informed consent,21 and informed consent is only one element of the ethical conduct of research. Thus, from a broader perspective, new technologies that could potentially devalue informed consent, in reality or perception, should not be developed and introduced without great care, reflection, and rigorous assessment of the implications.

Bioethics scholars and students are familiar with the first sentence of the first principle of the Nuremberg Code: “The voluntary consent of the human subject is absolutely essential.”22 The less known and less frequently quoted last part of this first principle is especially appropriate in the context of new chatbot-based consent. “The duty and responsibility for ascertaining the quality of the consent rests upon each individual who initiates, directs, or engages in the experiment. It is a personal duty and responsibility which may not be delegated to another with impunity.”23

Chatbots and the future of healthcare

Advances in many forms and uses of AI have the potential to alter fundamental aspects of society in ways that humanity might not appreciate at the present time.24 In healthcare, chatbots and robots increasingly could convey information25 and even perform an extensive range of clinical and research functions.26 According to a recent survey, however, 60% of Americans would be uncomfortable with their health care providers relying on AI in delivering health care services,27 and it is reasonable to assume that many more would be uncomfortable with AI-enabled programs and medical devices supplying such services without the direct involvement of providers.

Over time, technological innovations and evolving attitudes may change the relationship between humans and their AI devices and computer programs. Perhaps it is possible to maintain the human connection while also utilizing chatbots. If so, chatbots or comparable mechanisms could support rather than possibly undermine human values in health services, including obtaining informed consent for complex health research.

Acknowledgments

Dr. Eric Vilain, senior author of the Smith et al. article discussed above, graciously supplied a prepublication copy of the article for review. Megan Doerr and John Wilbanks, developers of eConsent software and procedures, provided valuable comments on an earlier draft. Kelly Carty Zimmerer, J.D. 2024, Louis D. Brandeis School of Law, University of Louisville, contributed excellent research assistance.

Disclosures

No external funding supported publication of this article, which was written before the author assumed his current position at the University of California, Irvine. The author has no conflicts of interest to declare.
