Abstract
Chatbots are computer programs that use artificial intelligence and natural language processing to simulate human conversation (written or spoken), allowing humans to interact with digital devices as if they were communicating with a real person. Chatbots are being developed and tested for obtaining informed consent for research. An initial study indicated that they saved time and were successful in knowledge transfer, but informed consent serves other purposes, such as building trust and respecting the autonomy and dignity of potential research participants. Additional research and possible regulation are necessary before chatbots are routinely used in health research.
Keywords: Artificial intelligence, Autonomy, Chatbots, Informed consent, IRBs
Chatbots are computer programs that use artificial intelligence (AI) and natural language processing to simulate human conversation (written or spoken), allowing humans to interact with digital devices as if they were communicating with a real person.1 Chatbot technology is omnipresent, including in website chat features, smart speakers, messaging applications, and virtual assistants such as Apple’s Siri and Amazon’s Alexa. Chatbots also have been used in healthcare to obtain patient information in health screening2 and to provide clinical information to patients.3 Although the use of chatbots in health research would appear to be a logical extension of the technology, this new use raises several issues, including its appropriateness for certain populations and types of research, its potential regulation, and the consequences of dehumanizing the informed consent process.
Initial study of chatbot-based consent for research
One of the first studies on the feasibility and outcomes of using a chatbot for informed consent, by Smith et al. (2023),4 was a retrospective analysis of consent for genomic research comparing a traditional human-based consent process with chatbot-based consent. The chatbot was a modification of a previously developed platform, Gia® (Genetic Information Assistant), which provided potential participants with information about the study, educational material, and a quiz to assess their understanding. Several implications of the study by Smith et al. are important to consider before the technology is more widely adopted.
Some of the results of the study are especially promising. The informed consent process took 44 minutes with the chatbot and 76 minutes with in-person consent, thereby reducing the burden on both potential participants and research staff. Also, the total time from referral to consent completion was 5 days with the chatbot compared with 16 days for in-person consent, due to in-person scheduling limitations. For large research projects with many potential participants, the time lag for in-person consent could be even longer, thereby delaying the research. The results of a 10-question quiz (96% pass rate) indicated successful knowledge transfer using the chatbot, and 86% of chatbot users reported a positive experience. Because online consent already has been used successfully for certain types of research, such as biobanking and research utilizing mobile applications,5 the novel issue is whether a chatbot is feasible and appropriate for more complex informed consent.
Undoubtedly, chatbot technology has advantages over traditional, in-person consent. The process is much less resource intensive because study personnel are needed only to answer difficult questions and, in this study, to obtain assent from children ages 7-17. The lower cost and more expedited process also facilitate the enrollment of larger cohorts of research participants. Furthermore, the asynchronous and flexible nature of chatbot consent, such as the ability to complete the consent process in stages, could increase the diversity of participants by enabling the enrollment of individuals whose work, education, or family responsibilities preclude in-person consent during working hours. These benefits already have been noted with online consent.
Notwithstanding these positive results and implications, this initial study of chatbots has at least two important limitations. First, the study involved only 37 families using traditional informed consent and 35 families using chatbot-enabled informed consent. The small sample size precluded a more detailed analysis of age, education, ethnicity, or other factors in evaluating objective and subjective measures of chatbot acceptability and comparability to in-person informed consent. Also, research participants in this study were overwhelmingly white.
Second, the families were not randomized, but self-selected their type of consent process. Presumably, those who chose chatbot consent were comfortable using digital technologies and more likely than participants who opted for in-person consent to report a positive experience with the chatbot. It is unknown whether a chatbot-based consent process would be as well received by more diverse populations of potential research participants, such as individuals who were not native English speakers, were older or uncomfortable with online technology, were ill, or had cognitive impairments.
Implementation and regulation
This study represents a preliminary proof of concept, and several fundamental questions in the implementation and regulatory domains remain to be addressed. These include for what types of research chatbots are appropriate; what role institutional review boards (IRBs) should play in overseeing the use of chatbots; and whether chatbots should be regulated and, if so, how and by whom.
The most logical application of chatbot-based informed consent would be large, information-based research. In general, these are relatively low-risk studies, and the benefits of in-person informed consent would not usually outweigh the resource and time burdens it imposes on both potential participants and research personnel. The electronic consent used by the National Institutes of Health (NIH) All of Us research program would appear to be a case in point.6
On the other hand, sensitive or interventional research, especially research involving more than minimal risk, would appear to be least appropriate for chatbot-based informed consent. Similarly, research with vulnerable populations, such as individuals who are cognitively impaired or who have mental health conditions, also could present challenges because informed consent involving vulnerable individuals requires discretion, flexibility, empathy, patience, and other uniquely human qualities.7 Nevertheless, there are reports that AI has greatly aided physicians in empathetically communicating with patients.8
If using chatbots to obtain informed consent is permitted for some protocols, but not others, IRBs would have the responsibility for determining approved uses. In general, IRBs and possibly the Office for Human Research Protections (OHRP) should decide whether there ought to be a categorical distinction between chatbot-based consent and simpler electronic consent. In specific cases, IRBs would need to determine whether a particular chatbot proposed for a study is technically capable of providing accurate, unbiased information necessary for prospective participants to make knowing and informed decisions.
These new responsibilities would be consistent with traditional duties of IRBs, such as determining when an exemption,9 expedited review,10 or waiver of informed consent11 is appropriate under the Common Rule. However, chatbots present two important differences: the possible lack of technical expertise among IRB members and administrators, and the current absence of applicable regulations or guidance.
IRBs typically are composed of members with a wide range of expertise, but occasionally research submissions involve technical matters beyond an IRB’s collective knowledge base. In such instances, the OHRP authorizes IRBs to engage outside experts, and it encourages IRBs to establish procedures to determine when an outside expert is needed, the method of selection, and the role of the expert in the review.12
According to one study, 55.4% of IRBs reported that they used outside experts to obtain additional scientific expertise.13 Although experts can understand and explain technical matters, some IRBs reported that outside experts may have limited knowledge of research regulations, may not provide actionable feedback, and may provide information beyond the scope of their assignment.14 Furthermore, the use of experts is likely to increase the cost of review for the IRB and delay the review process.
Expansion of chatbot informed consent also suggests two areas for possible regulation. First, guidelines could be issued by the OHRP15 to clarify and regularize the role of IRBs in evaluating protocols utilizing chatbots. Second, the NIH,16 perhaps through the Office of the Director or another trans-institute mechanism, could issue directives establishing technical standards for chatbot computer programs.
Dehumanizing informed consent
Even if the various issues described above can be satisfactorily resolved, chatbots raise a more general concern about whether they represent a narrow view of the role of informed consent as being limited to conveying information so that individuals can decide whether to participate in research. Symbolically, asking for consent may be just as important as giving consent. Informed consent should provide transparency and acknowledge the autonomy and significant role of research participants. As I have previously written: “Asking for consent is a demonstration of respect for the autonomy and dignity of the individual, a tangible exercise in trust building, and an expression of the moral equivalence of the researcher and research subject in an otherwise asymmetrical relationship.”17
To the extent that chatbot consent uses a familiar online process and could appear to some individuals like the “click through” consent used for routine software downloads, informed consent and the role of research participants might seem to be devalued by researchers.18 Also, simplified chatbot-based informed consent could give the impression that the personal, financial, and administrative interests of researchers, their institutions and funders, and pharmaceutical and other commercial entities are valued more than the interests of potential research participants. On the other hand, a shorter, more efficient consent process could be viewed as respecting the time of participants and their ability to understand their role in the research.
In practice, current methods of informed consent for research often fall short of the ideal of providing an informative, meaningful, interactive process.19 For example, in both clinical and research settings, studies show a lack of patient and participant understanding at the time of consent as well as a later lack of recall.20 However, knowledge transfer to enable autonomous decision making, although important, is only one aspect of informed consent,21 and informed consent is only one element of the ethical conduct of research. Thus, from a broader perspective, new technologies that could potentially devalue informed consent, in reality or perception, should not be developed and introduced without great care, reflection, and rigorous assessment of the implications.
Bioethics scholars and students are familiar with the first sentence of the first principle of the Nuremberg Code: “The voluntary consent of the human subject is absolutely essential.”22 The less known and less frequently quoted last part of this first principle is especially appropriate in the context of new chatbot-based consent. “The duty and responsibility for ascertaining the quality of the consent rests upon each individual who initiates, directs, or engages in the experiment. It is a personal duty and responsibility which may not be delegated to another with impunity.”23
Chatbots and the future of healthcare
Advances in many forms and uses of AI have the potential to alter fundamental aspects of society in ways that humanity might not appreciate at the present time.24 In healthcare, chatbots and robots increasingly could convey information25 and even perform an extensive range of clinical and research functions.26 According to a recent survey, however, 60% of Americans would be uncomfortable with their health care providers relying on AI in delivering health care services,27 and it is reasonable to assume that many more would be uncomfortable with AI-enabled programs and medical devices supplying such services without the direct involvement of providers.
Over time, technological innovations and evolving attitudes may change the relationship between humans and their AI devices and computer programs. Perhaps it is possible to maintain the human connection while also utilizing chatbots. If so, chatbots or comparable mechanisms could support rather than possibly undermine human values in health services, including obtaining informed consent for complex health research.
Acknowledgments
Dr. Eric Vilain, senior author of the Smith et al. article discussed above, graciously supplied a prepublication copy of the article for review. Megan Doerr and John Wilbanks, developers of eConsent software and procedures, provided valuable comments on an earlier draft. Kelly Carty Zimmerer, J.D. 2024, Louis D. Brandeis School of Law, University of Louisville, contributed excellent research assistance.
Disclosures
No external funding supported publication of this article, which was written before the author assumed his current position at the University of California, Irvine. The author has no conflicts of interest to declare.
References
- 1. IBM, “The Value of Chatbots,” https://www.ibm.com/topics/chatbots. Accessed March 22, 2023; Oracle, “What Is a Chatbot?” https://www.oracle.com/chatbots/what-is-a-chatbot. Accessed March 22, 2023.
- 2. Areia M, et al., “Cost-Effectiveness of Artificial Intelligence for Screening Colonoscopy: A Modelling Study,” Lancet Digital Health 4, no. 6 (2022): e436–e444; Sato A, Haneda E, et al., “Preliminary Screening for Hereditary Breast and Ovarian Cancer Using a Chatbot Augmented Intelligence Genetic Counselor: Development and Feasibility Study,” JMIR Formative Research 5 (2021): e25184; Wang WT, “Initial Experience with a COVID-19 Screening Chatbot before Radiology Appointments,” Journal of Digital Imaging 35, no. 5 (2022): 1303–07.
- 3. Roca S, et al., “Microservice Chatbot Architecture for Chronic Patient Support,” Journal of Biomedical Informatics 102 (2020): 103305; Shah J, et al., “Development and Usability Testing of a Chatbot to Promote Mental Health Services Use Among Individuals with Eating Disorders Following Screening,” International Journal of Eating Disorders 55, no. 9 (2022): 1229–44.
- 4. Smith ED, et al., “Development and Implementation of Novel Chatbot-Based Genomic Research Consent,” bioRxiv preprint, January 24, 2023, doi: 10.1101/2023.01.23.525221.
- 5. Doerr M, et al., “Assessment of the All of Us Research Program’s Informed Consent Process,” AJOB Empirical Bioethics 12, no. 2 (2021): 72–83; Wilbanks J, “Electronic Informed Consent in Mobile Applications Research,” Journal of Law, Medicine & Ethics 48, no. 1 (Supp. 2020): 147–53; Wilbanks J, “Design Issues in E-Consent,” Journal of Law, Medicine & Ethics 46, no. 1 (2018): 110–18.
- 6. Doerr M, et al., “Implementing a Universal Informed Consent Process for the All of Us Research Program,” Pacific Symposium on Biocomputing 24 (2019): 427–38.
- 7. Biros M, “Capacity, Vulnerability, and Informed Consent for Research,” Journal of Law, Medicine & Ethics 46, no. 1 (2021): 72–75; Shivayogi P, “Vulnerable Populations and Methods for Their Safeguarding,” Perspectives in Clinical Research 4, no. 1 (2013): 53–57.
- 8. Kolata G, “A.I.’s Helping Hand,” New York Times, June 13, 2023, at D1.
- 9. 45 C.F.R. 46.104.
- 10. 45 C.F.R. 46.110.
- 11. 45 C.F.R. 46.116(e).
- 12. Office for Human Research Protections, “Guidance on IRB Continuing Review of Research,” 16–17, https://www.hhs.gov/ohrp/sites/default/files/ohrp/policy/continuingreview2010.pdf. Accessed March 29, 2023.
- 13. Serpico K, et al., “Institutional Review Board Use of Outside Experts: A National Survey,” AJOB Empirical Bioethics 13, no. 4 (2022): 188–204.
- 14. Ibid.
- 15. Office for Human Research Protections, Regulations, Policy & Guidance, https://www.hhs.gov/ohrp/regulations-and-policy/index.html. Accessed March 23, 2023.
- 16. National Institutes of Health, Office of Innovation and Information Technology (OIIT), https://ors.od.nih.gov/OD/Pages/Office-of-Innovation-and-Information-Technology.aspx. Accessed March 26, 2023.
- 17. Rothstein MA, “Ethical Issues in Big Data Health Research,” Journal of Law, Medicine & Ethics 43, no. 2 (2015): 425–29, at 427.
- 18. Doerr M, et al., “Formative Evaluation of Participant Experience with Mobile eConsent in the App-Mediated Parkinson mPower Study: A Mixed Methods Study,” JMIR mHealth and uHealth 5, no. 2 (2017): e14, doi: 10.2196/mhealth.6521.
- 19. Grant SC, “Informed Consent – We Can and Should Do Better,” JAMA Network Open 4, no. 4 (2021): e2110848, doi: 10.1001/jamanetworkopen.2021.10848; Resnik DB, “Informed Consent, Understanding, and Trust,” American Journal of Bioethics 21, no. 5 (2021): 61–63.
- 20. Fortun P, et al., “Recall of Informed Consent by Healthy Volunteers in Clinical Trials,” QJM: An International Journal of Medicine 101, no. 8 (2008): 625–29.
- 21. Beauchamp TL and Childress JF, Principles of Biomedical Ethics (New York: Oxford University Press, 8th ed., 2019), at 122.
- 22. Nuremberg Code, § 1, United States Holocaust Memorial Museum, https://www.ushmm.org/information/exhibitions/online-exhibitions/special-focus/doctors-trial/nuremberg-code. Accessed March 25, 2023.
- 23. Ibid.
- 24. Roose K, “A.I. Poses ‘Risk of Extinction,’ Tech Leaders Warn,” New York Times, May 31, 2023, at A1; Metz C, “If Some Dangers Posed by A.I. Are Already Here, Then What Lies Ahead?” New York Times, May 8, 2023, at B5; Harari YN, Harris T, and Raskin A, “If We Don’t Master A.I., It Will Master Us,” New York Times, March 27, 2023, at A18; Metz C and Schmidt G, “Tech Leaders Urge a Pause in A.I., Citing ‘Profound Risks to Society,’” New York Times, March 30, 2023, at B5.
- 25. Ayers JW, et al., “Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum,” JAMA Internal Medicine, published online April 23, 2023, doi: 10.1001/jamainternmed.2023.1838.
- 26. Chen M and Decary M, “Agents and Robots for Collaborating and Supporting Physicians in Healthcare Scenarios,” Journal of Biomedical Informatics 108 (2020): 1–11, doi: 10.1016/j.jbi.2020.103483; Intel, “Robotics in Healthcare: The Future of Robots in Medicine,” https://www.intel.com/content/www/us/en/healthcare-it/robotics-inhealthcare.html. Accessed March 29, 2023.
- 27. Tyson A, et al., “60% of Americans Would Be Uncomfortable with Provider Relying on AI in Their Own Health Care,” Pew Research Center, February 22, 2023, pewresearch.org/science/2023/02/22/60-of-americans-would-be-uncomfortable-with-provider-relying-on-ai-in-their-own-health-care.
