Abstract
Objective
At the time of informed consent (IC) for coronary angiography (CAG), patients’ knowledge of the process is inadequate. Time constraints and a lack of personalization of consent are the primary causes of inadequate information. This procedure can be enhanced by obtaining IC using a chatbot powered by artificial intelligence (AI).
Methods
In the study, patients scheduled to undergo CAG for the first time were randomly divided into two groups: IC was obtained from one group using the conventional method and from the other using an AI-supported chatbot, ChatGPT-3. Both groups were then evaluated with two distinct questionnaires measuring their satisfaction and their capacity to understand the risks of CAG.
Results
While satisfaction questionnaire scores were similar between the two groups (p = 0.581), correct understanding on the CAG risk questionnaire was significantly higher in the AI group (p < 0.001).
Conclusions
AI can be trained to support clinicians in obtaining IC before CAG. In this way, the workload of healthcare professionals can be reduced while the quality of IC is improved.
Keywords: Artificial intelligence, informed consent, coronary angiography
Introduction
Obtaining informed consent (IC) from patients is a critical step in the healthcare process, as it ensures that patients are fully informed about their treatment options and the potential risks and benefits of each option.1
However, numerous studies have demonstrated that doctors frequently fail to obtain a standard IC before coronary angiography (CAG).2–4 This limited patient understanding has been attributed primarily to the medical staff's lack of awareness of patients’ knowledge,5,6 an environment unfavorable to standardized information, and insufficient time for an appropriate presentation of the information.3 Most of these challenges may be overcome if IC is obtained with the assistance of artificial intelligence (AI). In addition, the quality of IC can be increased, and clinicians can save time for other medical procedures.
In this study, we compare the conventional method of obtaining IC with the use of an AI-supported chatbot for obtaining IC from patients scheduled to undergo CAG. To our knowledge, it is the first study to investigate obtaining IC with AI in the medical field, and we believe it is significant because it encourages new developments in this area. This comparison allows us to evaluate the potential benefits and limitations of using AI in the IC process and to assess the accuracy and comprehensiveness of the information provided to patients.
The purpose of this study was to investigate whether AI would be sufficient to include in the IC process, as well as the applicability of fundamental ethical principles that should be considered when obtaining IC, such as patient autonomy and equal participation of the patient in the decision-making process.
Methods
This randomized, controlled, monocentric pilot trial was conducted at a large tertiary care hospital in Turkey. Between December 2022 and January 2023, we recruited 139 consecutive patients undergoing first-time elective CAG. The patients were randomly assigned to either the study group (AI) or the control group (conventional): IC was obtained from 70 patients using an AI chatbot and from the other 69 patients using conventional methods.
For the study, after review by cardiologists and a biomedical ethicist, the publicly available ChatGPT-3 was chosen because it best imitates a human interlocutor and has good Turkish language support. The ChatGPT language model, created by OpenAI, was first made available on November 30, 2022. Via an application programming interface, the user can converse with the model directly.7
The patients whose IC was obtained with the AI chatbot received information by asking the chatbot questions about CAG. The chatbot answered the patients' questions in a conversational, user-friendly format, allowing them to learn about the procedure and its potential risks and benefits in a way that was easy to understand. Patients for whom CAG had been decided were taken to a separate room with a computer running the chatbot program. Before the chatbot interview, the patient's doctor provided no information regarding the CAG procedure. A computer technician who was unfamiliar with the CAG procedure assisted the patients in using the application.
Patients included in the study were provided with training on how to interact with the chatbot for IC. Both groups were informed that there were no time or question constraints during the interaction, and they were encouraged to ask as many questions as they desired. Additionally, participants were informed that at the end of the conversation, they would be asked to complete a satisfaction survey to assess their understanding and overall experience with the IC process.
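The conversational turn-taking described above can be sketched as a thin wrapper around a chat-style language-model API. This is an illustration only: the system prompt, the message format, and the `send_to_model` stub below are our assumptions for the sketch, not the setup used in the study, which relied on the public ChatGPT interface.

```python
# Hypothetical sketch of one turn of a chatbot-led consent dialogue.
# The system prompt, message format, and send_to_model() stub are
# illustrative assumptions, not the configuration used in the study.

def build_consent_messages(history, patient_question):
    """Assemble the message list for one turn of the consent conversation."""
    system = {
        "role": "system",
        "content": (
            "You are assisting with informed consent for coronary "
            "angiography. Answer the patient's questions accurately and in "
            "plain language, and do not steer the patient's decision."
        ),
    }
    return [system] + history + [{"role": "user", "content": patient_question}]

def send_to_model(messages):
    """Placeholder: a real deployment would POST `messages` to a
    chat-completion endpoint and return the assistant's reply text."""
    raise NotImplementedError("connect to a chat-completion API here")

# One turn: the patient asks about risks, with no prior conversation history.
turn = build_consent_messages([], "What are the risks of coronary angiography?")
print(turn[0]["role"], "->", turn[-1]["content"])
```

In such a design, the neutral system prompt matters: it encodes the ethical requirement, checked by the expert team in this study, that the chatbot inform without manipulating the patient's decision.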
In the conventional method, patients’ own physicians provided the patients with information about CAG. The doctor used a standard consent form to explain the procedure and its potential risks and benefits and answered any questions that the patients had.
After the IC process was completed, the accuracy of the information received by the patients was tested using a questionnaire. The questionnaire consisted of a series of multiple-choice questions about CAG and was designed to assess the patients' knowledge and understanding of the procedure. The questionnaire had not previously been validated in the literature; however, before the main study, it underwent a pilot testing phase with a small group of patients to ensure its reliability. The pilot testing assessed the clarity and understandability of the questions, allowing us to make adjustments based on patient feedback. The data were collected by trained research assistants who were not involved in patient care.
A team of five cardiologists and a bioethicist reviewed the questions asked by patients to the chatbot and the answers it provided:
Are the answers for CAG correct?
Are these explanations adequate?
Is there any manipulation in the chatbot's responses that will guide and influence the patient's decision?
The team evaluated the adequacy of the answers in light of these three questions, as well as whether the ethical principles of IC were violated.
The data collected from the questionnaire were then compared between the two groups of patients (those who received information via the AI chatbot and those who received information via the conventional method). This allowed the researchers to evaluate the effectiveness of the AI chatbot in providing accurate and comprehensive information about CAG to patients.
The inclusion criteria for the study included the following:
Age: Patients must be 18 years of age or older.
Diagnosis: Patients must have been diagnosed with a condition that requires CAG.
Procedure: Patients must be scheduled to undergo CAG at the study hospital.
Language: Patients must be able to understand and speak Turkish or English, as the IC materials and the chatbot were provided in Turkish and English.
The exclusion criteria for the study included the following:
Cognitive impairment: Patients with cognitive impairments that affect their ability to understand and provide IC
Mental illness: Patients with mental illnesses that affect their ability to understand and provide IC
Physical impairment: Patients with physical impairments that affect their ability to use the chatbot
The study was approved by the local ethics committee (date: 24.10.2022, No: ESH/GOEK 2022/5). All study participants provided both written and verbal IC, obtained by the principal investigators prior to inclusion in the study. During the IC process, patients were apprised of their rights, ensuring complete autonomy in their decision to participate. They were reassured that their involvement was entirely voluntary and that they were under no obligation to participate, and they were explicitly informed that they could withdraw from the study at any time without consequence. All participants were assured that their personal information and identity would be kept strictly confidential throughout the research process. This consent statement was approved by the ethics board.
Statistical analysis
Categorical data were presented as numbers and percentages, and continuous data as means ± standard deviations. Normality was assessed using skewness and kurtosis. Categorical data were compared with the chi-square test, and continuous data with the t-test. The data were analyzed using SPSS 20.0 (IBM SPSS Ver. 20.0, IBM Corp, Armonk, NY, USA). A p-value of <0.05 was considered significant.
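As a minimal sketch of the categorical comparisons described above, the Pearson chi-square test can be hand-coded for a 2×2 table. The study itself used SPSS 20.0; this Python re-implementation is illustrative, with the counts taken from the gender row of Table 1.

```python
# Sketch of a Pearson chi-square test on a 2x2 contingency table, the kind
# of comparison applied to the categorical variables in Table 1.
# (The study used SPSS 20.0; this re-implementation is for illustration.)

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    # Expected cell counts under the independence hypothesis
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Gender counts from Table 1: study group 38 M / 32 F, control 41 M / 28 F
chi2 = chi_square_2x2(38, 32, 41, 28)

# For df = 1 the 0.05 critical value is 3.841; a statistic below it means
# no significant gender difference, consistent with the reported p = 0.541.
print(round(chi2, 3), chi2 < 3.841)
```

The same pattern, with a t-test in place of the chi-square, applies to the continuous variables (age, height, weight, body mass index).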
Results
Table 1 displays the patients' demographic data. There were no significant differences between the two groups in terms of age, educational level, or gender. There was also no significant difference between the two groups in participant satisfaction, which was scaled from 1 (very poor/extremely unsatisfactory) to 5 (very good/extremely satisfied) (p = 0.581) (Table 2). The second questionnaire assessed the patients' comprehension of CAG risks, awarding 1 point for each accurate response and 0 points for each incorrect one; the total score was the sum of correct answers over the six questions. The study group answered the third question (angiography has a risk of death) significantly more accurately (p < 0.001), and its final score was higher (p < 0.001) (Table 3).
Table 1.
Demographic and clinical data
| Variables | Study group (n = 70) | Control group (n = 69) | p |
|---|---|---|---|
| Age, years | 56.1 ± 7.4 | 58.9 ± 7.7 | 0.882 |
| Gender, n (%): Male / Female | 38 (54.3%) / 32 (45.7%) | 41 (59.4%) / 28 (40.6%) | 0.541 |
| Height, cm | 168.1 ± 9.6 | 165.5 ± 7.4 | 0.004 |
| Weight, kg | 75.1 ± 10.5 | 87.4 ± 19.7 | <0.001 |
| Body mass index, kg/m² | 26.6 ± 3.6 | 31.9 ± 7.0 | <0.001 |
| Education, n (%): Primary school / Secondary school / High school / University | 33 (47.2%) / 4 (5.7%) / 19 (27.1%) / 14 (20.0%) | 36 (52.2%) / 5 (7.2%) / 10 (14.5%) / 18 (26.1%) | 0.317 |
| Profession, n (%): Employee / Officer / Unemployed / Other | 28 (40.0%) / 14 (20.0%) / 14 (20.0%) / 14 (20.0%) | 28 (40.6%) / 9 (13.0%) / 24 (34.8%) / 8 (11.6%) | 0.148 |
| Marital status, n (%): Married / Single / Divorced | 65 (92.8%) / 0 (0%) / 5 (7.2%) | 64 (92.7%) / 5 (7.3%) / 0 (0%) | 0.007 |
| Diabetes mellitus, n (%): Yes / No | 17 (24.3%) / 53 (75.7%) | 41 (59.4%) / 28 (40.6%) | <0.001 |
| Hypertension, n (%): Yes / No | 32 (45.7%) / 38 (54.3%) | 37 (53.6%) / 32 (46.4%) | 0.351 |
| Coronary artery disease history, n (%): No | 70 (100%) | 69 (100%) | – |
| Chronic heart failure, n (%): Yes / No | 0 (0%) / 70 (100%) | 5 (7.2%) / 64 (92.8%) | 0.022 |
| Cerebrovascular disease, n (%): Yes / No | 4 (5.7%) / 66 (94.3%) | 0 (0%) / 69 (100%) | 0.044 |
Table 2.
Satisfaction questionnaire: patients' satisfaction was rated on a scale of 1 (very poor), 2 (a little), 3 (moderate), 4 (good), and 5 (very good).

| Questions | Study group (n = 70) | Control group (n = 69) | p |
|---|---|---|---|
| 1. Do you understand what a coronary angiography procedure is? | 3.9 ± 0.9 | 4.3 ± 0.7 | 0.002 |
| 2. Do you understand why coronary angiography will be performed on you? | 4.1 ± 0.7 | 4.3 ± 0.7 | 0.129 |
| 3. Do you understand what the risks of coronary angiography are? | 4.2 ± 0.7 | 3.9 ± 0.9 | 0.031 |
| 4. Did you find the information given to you sufficient? | 4.2 ± 0.7 | 3.9 ± 0.8 | 0.045 |
| 5. Were you able to ask the questions you were curious about easily? | 4.3 ± 0.6 | 4.1 ± 0.7 | 0.055 |
| 6. Were you generally satisfied with the answers to the questions you asked? | 4.3 ± 0.7 | 4.2 ± 0.9 | 0.177 |
| Score (mean ± SD) | 25.0 ± 3.5 | 24.6 ± 3.8 | 0.581 |
The overall score shown is the sum of each patient's responses to all questions.
Table 3.
Patients’ awareness of coronary angiography procedure and risk questionnaire: each correct response receives one point, while each incorrect response receives zero points.
| Questions | Study group (n = 70) | Control group (n = 69) | p |
|---|---|---|---|
| 1. Angiography is performed from the inguinal or arm vessel: True / False | 70 (100%) / 0 (0%) | 69 (100%) / 0 (0%) | 0.056 |
| 2. The angiography procedure is performed under general anesthesia: True / False | 38 (54.3%) / 32 (45.7%) | 48 (69.6%) / 21 (30.4%) | 0.064 |
| 3. Angiography has a risk of death: True / False | 70 (100%) / 0 (0%) | 48 (69.6%) / 21 (30.4%) | <0.001 |
| 4. Angiography has a risk of kidney failure: True / False | 60 (85.7%) / 10 (14.3%) | 50 (72.5%) / 19 (27.5%) | 0.055 |
| 5. There is a risk of vessel rupture during the angiography procedure: True / False | 70 (100%) / 0 (0%) | 61 (88.4%) / 8 (11.6%) | 0.003 |
| 6. There is a risk of developing allergies due to the substances given during the angiography procedure: True / False | 65 (92.9%) / 5 (7.1%) | 55 (79.7%) / 14 (20.3%) | 0.024 |
| Score (mean ± SD) | 5.3 ± 0.8 | 4.8 ± 0.8 | <0.001 |
The test score was the sum of each patient's scores.
The expert team that reviewed the patients' exchanges with the chatbot judged, with reference to the three questions posed in the Methods section, that the chatbot's answers were correct and adequate and contained no manipulation that would steer the patients' decisions.
Discussion
In this study, we compared the effectiveness of the conventional method of obtaining IC with the use of an AI-supported chatbot for obtaining IC from patients who underwent CAG. Our results showed that IC obtained with the AI chatbot was at least as successful as the conventional method in terms of the accuracy and comprehensiveness of the information provided to patients.
While the satisfaction questionnaire showed the chatbot to be as successful as the conventional method, patients in the chatbot group gave more accurate answers on the risk-comprehension questionnaire than those informed by the traditional method. One possible explanation for these findings is that the AI chatbot was able to provide personalized, tailored information to the patients based on their individual characteristics and preferences. This may have made it easier for the patients to understand the information and to make informed decisions about their healthcare.
Researchers have sought new methods to overcome the limitations of the conventional approach in obtaining IC. It was discovered that employing videos or animations to provide IC improved the consent's quality.4,8 It has been demonstrated that new procedures for IC for CAG are needed because patients’ understanding of CAG is incomplete.9,10 The results of the present study suggested that technology may be a novel way to enhance the standard of IC.
In traditional consent, time constraints and language barriers are substantial obstacles.8 These obstacles limit the patient's correct comprehension of the consent. In one study of patients given standard IC before CAG, 68% were uninformed of the procedure's principal dangers.11 Similarly, in our study, patients given traditional consent showed deficient understanding of the common hazards of CAG, whereas patients in the study group understood the risks accurately at a higher rate. The chatbot we used is not a dedicated IC application; however, we believe that integrating voice chat, illustrations, and procedure-specific videos into such programs would further improve the effectiveness of accurately informing the patient.
Digital consent, which can be considered the pioneer of assisted IC, has long been used in the fields of Internet subscriptions, banking, and finance.12,13 The fact that digital consent is also used in obtaining IC for clinical studies, and is more successful in this field than the traditional method,14 signaled that it could soon replace conventional IC. In this regard, our findings demonstrating AI's applicability to IC may hasten development in this area.
Conventional IC raises many concerns: patients often do not understand the consent, they do not read the consent documents, written consent is not individualized, and doctors may take a paternalistic approach.12,15 There is also the possibility of discrimination by physicians; for example, the IC given to a disabled or elderly person may differ from that given to a younger patient or one considered more beneficial to society.16 Inadequate information resulting from discrimination, a paternalistic approach, or non-personalized medical messaging may reduce the quality of the consent the doctor obtains from the patient and violate the ethical principles of contemporary IC.17
When we analyzed the chatbot's conversations with the patients, we found that it did not approach patients paternalistically and that it personalized its answers according to each person's educational level and knowledge, which made it more successful. Another possible explanation for the success of the AI chatbot in obtaining IC is that it reduced the risk of bias and discrimination in the healthcare system. AI systems can be designed to be objective and unbiased by training them on diverse and representative datasets, ensuring fair and appropriate recommendations for all patients.18 This may have helped to ensure that the patients who received information via the AI chatbot were not disadvantaged in any way compared to the patients who received information via the conventional method.
One key argument in favor of using AI for obtaining IC is that it can help improve the quality and efficiency of healthcare. By gathering and analyzing large amounts of data, AI systems can help healthcare providers make more accurate diagnoses and recommend the most appropriate treatments for each individual patient.19 This can help reduce the risk of errors or omissions and ensure that patients receive the best possible care. Although AI tools carry an inherent risk of false detections, they make fewer mistakes than humans, reducing the number of medical errors.20 AI is also becoming more important because the medical literature is growing to a level that human doctors cannot keep up with; today, powerful, data-driven tools have broadly compensated for this problem.21 This success in healthcare, with fewer errors and faster work, supports our claim that AI can also be used for IC. We believe that the patients in this study were at least as satisfied as those informed by the conventional method, and that they received more accurate information about the risks of CAG because they had the opportunity to ask questions freely and read the answers without feeling rushed.
Additionally, using AI for obtaining IC can help reduce the workload and burden on healthcare providers, allowing them to focus on providing direct care to patients. By automating the process of gathering and presenting information to patients, AI systems can help healthcare providers save time and reduce the risk of burnout and fatigue. This can help improve the overall quality of care and the patient experience.
The integration of AI-powered chatbots into the process of obtaining IC introduces a new dimension that necessitates consideration of the “human factor.” While AI systems can be valuable assistants in providing information to patients, biases in the data used to train these models can influence the information conveyed during the consent process. Moreover, AI's ability to sometimes present incorrect information as accurate and opacities in AI algorithms (blackbox) can raise concerns. 22 Additionally, AI has the potential to disrupt the doctor–patient relationship, as IC is not only an information-sharing process but also a process of establishing trust between the patient and the healthcare provider. 23 To address these issues, we propose using AI not as a replacement for obtaining IC but as a supportive agent to reduce the workload of healthcare professionals and provide patients with higher-quality IC.
Overall, there are compelling arguments in support of the notion that AI can be used to obtain IC in the healthcare field. By improving the quality and efficiency of care, providing personalized and tailored information to patients, and reducing the risk of bias and discrimination, AI has the potential to enhance the IC process and to help ensure that patients are able to make informed decisions about their healthcare.
However, it is important to note that the study had some limitations:
Sample size: The sample may have been too small to accurately represent the population of patients undergoing CAG, which could affect the reliability of the findings and limit the generalizability of the conclusions.
Single center: The study included patients from a single hospital in a small geographic area, so the results may not apply to other hospitals or regions.
Self-reported data: The study relied on self-reported data from the patients, which could introduce bias, as patients may not accurately recall or report their experiences and opinions.
Questionnaire validation: The questionnaire utilized in this study lacked prior validation.
In addition, in the conventional arm, doctors not otherwise involved in the study gave IC to their patients under their routine conditions and time constraints, whereas the patients in the study group were informed in a more ideal environment, free to ask questions without time pressure. This may have contributed to the study group's success. However, we do not consider this a limitation; on the contrary, we think it supports the study's main message: the deficiencies in IC caused by time constraints and an unsuitable environment can be remedied with AI support.
Conclusion
This study found that AI-powered chatbots can be a promising and effective way to obtain IC prior to CAG. The AI chatbot provided accurate and comprehensive information to patients, resulting in improved patient understanding, particularly of the procedure's risks. This suggests that AI has the potential to improve the quality of IC in healthcare settings and empower patients to make well-informed treatment decisions. The successful integration of AI in obtaining IC has significant implications for patient engagement and healthcare delivery in the future.
Acknowledgements
The authors express gratitude to all of the clinic's medical professionals and nursing personnel.
Footnotes
Contributorship: FA, methodology development, data analysis, manuscript finalization, supervision, and article idea. OTY, AHA, CHB, and BM conceived the paper. They also improved the language, devised the approach, and supervised the project.
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Ethical approval: This study was approved by Eskisehir City Hospital Ethics Committee with the decision numbered ESH/GOEK on 16.12.2022.
Funding: The authors received no financial support for the research, authorship, and/or publication of this article.
Guarantor: There is no guarantor for this study.
ORCID iD: Fatih Aydin https://orcid.org/0000-0002-1017-1917
References
- 1.Beauchamp T, Childress J. Principles of biomedical ethics. 7th ed. Oxford, UK: Oxford University Press, 2013, pp.11–18. [Google Scholar]
- 2.Steffenino G, Viada E, Marengo B, et al. Effectiveness of video-based patient information before percutaneous cardiac interventions. J Cardiovasc Med 2007; 8: 348–353. [DOI] [PubMed] [Google Scholar]
- 3.Nicholson Thomas E, Edwards L, McArdle P. Knowledge is power. A quality improvement project to increase patient understanding of their hospital stay. BMJ Qual Improv Rep 2017; 6: u207103.w3042. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Lattuca B, Barber-Chamoux N, Alos B, et al. Impact of video on the understanding and satisfaction of patients receiving informed consent before elective inpatient coronary angiography: a randomized trial. Am Heart J 2018 Jun; 200: 67–74. Epub 2018 Mar 12. PMID: 29898851. [DOI] [PubMed] [Google Scholar]
- 5.Horwitz LI, Moriarty JP, Chen C, et al. Quality of discharge practices and patient understanding at an academic medical center. JAMA Intern Med 2013; 173: 1715–1722. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6.Calkins DR, Davis RB, Reiley P, et al. Patient-physician communication at hospital discharge and patients’ understanding of the postdischarge treatment plan. Arch Intern Med 1997; 157: 1026–1030. [PubMed] [Google Scholar]
- 7.OpenAI. ChatGPT: Optimizing language models for dialogue. https://openai.com/blog/chatgpt/ (2022, accessed 30 January 2023).
- 8.Wald DS, Casey-Gillman O, Comer K, et al. Animation-supported consent for urgent angiography and angioplasty: a service improvement initiative. Heart 2020 Nov; 106: 1747–1751. Epub 2020 Mar 10. Erratum in: Heart. 2021 May;107(10):e3. PMID: 32156717; PMCID: PMC7656148. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Eran A, Erdmann E, Er F. Informed consent prior to coronary angiography in a real world scenario: what do patients remember? PLoS One 2010 Dec 20; 5: e15164. PMID: 21188151; PMCID: PMC3004853. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Chandrasekharan DP, Taggart DP. Informed consent for interventions in stable coronary artery disease: problems, etiologies, and solutions. Eur J Cardiothorac Surg 2011 Jun; 39: 912–917. Epub 2010 Oct 8. PMID: 20934881. [DOI] [PubMed] [Google Scholar]
- 11.Laurent M, Benoît PO, Boulmier D, et al. Patient information and coronary angiography: experience of the rennes group. Arch Mal Coeur Vaiss 2001; 94: 957–961. [PubMed] [Google Scholar]
- 12.Jones ML, Kaufman E, Edenberg E. AI And the ethics of automating consent. IEEE Secur Priv 2018; 16: 64–72. [Google Scholar]
- 13.Pascalev M. Privacy exchanges: restoring consent in privacy self-management. Ethics Inf Technol 2017; 19: 39–48. [Google Scholar]
- 14.Abujarad F, Peduzzi P, Mun S., et al. Comparing a multimedia digital informed consent tool with traditional paper-based methods: randomized controlled trial. JMIR Form Res 2021 Oct 19; 5: e20458. PMID: 34665142; PMCID: PMC8564662. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.‘What Did the Doctor Say?' Improving health literacy to protect patient safety. The Joint Commission. https://www.jointcommission.org/assets/1/18/improving_health_literacy.pdf (2007, accessed 15 January 2023).
- 16.Chen B, McNamara DM. Disability discrimination, medical rationing and COVID-19. Asian Bioeth Rev 2020 Sep 3; 12: 511–518. PMID: 32901207; PMCID: PMC7471485. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.Lens JW. Medical paternalism, stillbirth, & blindsided mothers. Iowa L. Rev 2020; 106: 665. [Google Scholar]
- 18.Lee D, Yoon SN. Application of artificial intelligence-based technologies in the healthcare industry: opportunities and challenges. Int J Environ Res Public Health 2021 Jan 1; 18: 71. PMID: 33401373; PMCID: PMC7795119. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Väänänen A, Haataja K, Vehviläinen-Julkunen Ket al. et al. Proposal of a novel artificial intelligence distribution service platform for healthcare. F1000Res 2021 Mar 26; 10: 45. PMID: 34804493; PMCID: PMC8577055. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Meskó B, Hetényi G, Győrffy Z. Will artificial intelligence solve the human resource crisis in healthcare? BMC Health Serv Res 2018 Jul 13; 18: 45. PMID: 30001717; PMCID: PMC6044098. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Astromskė K, Peičius E, Astromskis P. Ethical and legal challenges of informed consent applying in medical diagnostic consultations. AI Soc 2021; 36: 509–520. [Google Scholar]
- 22.Durán JM, Jongsma KR. Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. J Med Ethics 2021; 47: 329–335. [DOI] [PubMed] [Google Scholar]
- 23.Shahvisi A. No understanding, No consent: the case against alternative medicine. Bioethics 2016 Feb; 30: 69–76. PMID: 26806449. [DOI] [PubMed] [Google Scholar]
