Abstract
Objective
To evaluate and understand pregnant patients’ perspectives on the implementation of artificial intelligence (AI) in clinical care with a focus on opportunities to improve healthcare technologies and healthcare delivery.
Materials and Methods
We developed an anonymous survey and enrolled patients presenting to the labor and delivery unit at a tertiary care center between December 2019 and June 2020. We investigated the role and interplay of patient demographic factors, healthcare literacy, understanding of AI, comfort levels with various AI scenarios, and preferences for AI use in clinical care.
Results
Of the 349 parturients, 57.6% were aged 25–34 years, 90.1% reported college or graduate education, and 69.2% believed the benefits of AI use in clinical care outweighed the risks. Cluster analysis revealed 2 distinct groups: patients more comfortable with clinical AI use (Pro-AI) and those who preferred physician presence (AI-cautious). Pro-AI patients had a higher degree of education, were more knowledgeable about AI use in their daily lives, and saw AI use as a significant advancement in medicine. AI-cautious patients reported a lack of human qualities and low trust in the technology as detriments to AI use.
Discussion
Patient trust and the preservation of the human physician-patient relationship are critical in moving forward with AI implementation in healthcare. Pregnant individuals are cautiously optimistic about AI use in their care.
Conclusion
Our findings provide insights into the status of AI use in perinatal care and provide a platform for driving patient-centered innovations.
Keywords: survey, artificial intelligence, patient perspective, obstetrics, pregnancy
BACKGROUND AND SIGNIFICANCE
The use of artificial intelligence (AI) technologies in clinical care is gaining rapid momentum. AI can improve patient care in many ways: by personalizing disease prevention, diagnosis, and treatment and by aiding patient engagement and adherence.1 The first AI device to autonomously diagnose diabetic retinopathy received FDA clearance in April 2018, and, since then, more than 340 AI algorithms and devices have been FDA approved.2,3 Deep learning algorithms can assist providers in analyzing images, pathological samples, and other data. As most medical specialties utilize big data, including electronic health records and imaging, the potential of AI technologies to transform clinical care is becoming widely recognized.4 AI technologies can significantly enhance the practice of anesthesiology, where key challenges include making rapid decisions in real time using large amounts of physiologic and clinical data.
There is a high need and great potential for AI technologies in medicine, yet the current utilization of most tools in routine care is limited. The widespread implementation of electronic health records in inpatient and outpatient settings makes it much more practical to implement AI in routine care; indeed, this is likely to be a key focus area in medical informatics over the next 10 years. Still, realizing AI’s full potential and implementing those tools in clinical practice will depend on the engagement of all stakeholders. Multiple companies are investing in developing solutions in all areas of healthcare AI.5 So far, healthcare professionals have been cautiously optimistic about accepting AI as part of their toolset, although this varies by use case.6,7 There are limited data about the patient perspective on using AI tools in clinical practice, yet patient engagement is one of the most important determinants of healthcare quality.8 Lack of patient support and nonadherence can jeopardize patient safety and increase healthcare costs, and may ultimately cause the implementation of AI technologies to fail.9 While patients are not the only decision-makers about the approaches involved in their care, their permission to utilize these technologies has rarely been sought. Patient support will enhance the use of tools promoted by clinicians and healthcare systems. Therefore, understanding patients’ opinions about and trust in these technologies is key to introducing AI technologies that improve healthcare delivery.
To date, most patient survey studies of AI have focused on specific applications, mainly in older patients with complex medical needs. Few evaluations of patients’ perceptions of the role of AI across their entire clinical care have been performed; importantly, no studies that we are aware of have investigated the attitudes of obstetric patients. Pregnant individuals are a well-informed group of patients with high expectations for their medical care; they are also one of the most risk-averse groups of patients.10 As a result, pregnant individuals have commonly been excluded from research and healthcare innovation, even when the intervention could be highly beneficial.11,12
OBJECTIVE
In this study, we sought to investigate pregnant patients’ understanding of the use of AI in their clinical care. By administering a short questionnaire, we analyzed the extent of patients’ knowledge about AI and their thoughts on approaching the physician-patient relationship in the context of medical innovation. This research aims to develop insight into gaps in public knowledge about AI, differences in understanding of this new technology, and opportunities to improve healthcare technologies and healthcare delivery.
MATERIALS AND METHODS
Study participants and data collection
This study and survey instrument were approved by the Institutional Review Board of the Brigham and Women’s Hospital (No. 2019P003030, November 2019). The survey was administered in our tertiary institution’s Labor and Delivery Suite to term patients waiting for labor induction or cesarean delivery, or in early labor, between December 2019 and June 2020. Patients were identified as potential candidates for the study by their clinical team. Inclusion criteria comprised all patients presenting to the Labor and Delivery Suite for delivery who were aged 18 years or older and able to consent. Exclusion criteria included patients in preterm labor, in active labor nearing delivery, who were postpartum, who presented with significant fetal or maternal health issues, or who declined participation (patient or patient’s clinical team). Every patient was provided with a study information sheet that listed the study objectives and ensured their anonymity. Oral consent was obtained from each participant, who was then provided with a 3-page paper survey, which was subsequently collected by the clinical or research team. There were no incentives or remuneration for completing the survey. Answering all questions was optional. The data were transferred manually into a REDCap database and electronically compiled using the REDCap software.
COVID-19 precautions
The final 3 months of study recruitment occurred during the COVID-19 pandemic. The study was put on hold for the first 6 weeks of the pandemic. Subsequently, when universal SARS-CoV-2 testing was implemented for all admitted patients, the study was allowed to resume by the Institutional Review Board. In approaching patients during the pandemic, appropriate precautions were taken to maximize patient and provider safety, and only patients who tested negative for SARS-CoV-2 were approached.
Survey instrument
A literature search was performed, and no prior surveys investigating patients’ perspectives on AI use in anesthesiology were found. Due to the lack of previous investigations, a 21-item survey was developed (Supplementary Table S1) in collaboration with 30 expert anesthesiologists from 4 academic hospitals, an obstetrician, and 4 patients. The survey comprised 8 multiple-choice questions, 5 hypothetical digital health scenarios, 1 open-ended prompt, and 7 demographic questions. The first part assessed the patient’s knowledge of the roles of the clinical team and their understanding of AI. As the definition and understanding of AI are somewhat abstract, in the second part we developed 5 hypothetical scenarios relevant to the patient’s condition. We developed those scenarios based on emerging technologies in the aviation industry, an automated anesthesia robot,13,14 a chatbot for pregnant patients,15 mobile applications for pain management,16 and smart drug administration pumps.17 The respondents were asked to rate their comfort with each AI technology on a 10-point Likert-type scale. The third part assessed the patient’s perception of the advantages and disadvantages of AI technologies. The last part of the survey contained demographic and overall health questions.
Statistical analysis
The k-means clustering method was used to identify clusters of patients who reported similar comfort levels (on a scale of 0–10, ranging from least to most comfortable) regarding the 5 hypothetical scenarios involving the application of AI to their care. The optimal number of clusters was determined by comparing 24 cluster validity indices across partitions of 2 to 5 clusters. Most indices identified 2 as the optimal number of clusters. Thus, each patient was categorized as belonging to 1 of 2 clusters (“AI-cautious” and “Pro-AI”), and all survey responses were compared between the 2 groups. Survey responses on a 0–10 scale were compared using 2-sample t-tests, ordinal categorical responses were compared using Cochran-Armitage trend tests, and nominal categorical responses were compared using chi-square or Fisher’s exact tests. All statistical hypothesis tests were 2-sided. The k-means clustering analysis was performed using the R package NbClust implemented in R software version 3.6.1 (R Foundation for Statistical Computing, Vienna, Austria).18 Hypothesis testing was performed using SAS software version 9.4 (SAS Institute, Cary, NC).
A sample size of at least 70 times the number of clustered variables is recommended for performing k-means cluster analysis.19 We planned to cluster patients based on 5 variables, so we aimed to enroll approximately 350 patients in the study.
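As a rough illustration of the clustering step described above, the sketch below runs Lloyd’s k-means algorithm in pure Python on simulated 5-dimensional comfort scores. The group means, spreads, and counts here are hypothetical assumptions for illustration only; the published analysis used the R package NbClust, and no study data are reproduced.

```python
import random

random.seed(7)

# Simulated (NOT study) data: each row holds one patient's comfort
# ratings, 0-10, for the 5 hypothetical AI scenarios. Two hypothetical
# groups are drawn around different assumed mean comfort levels.
def simulate_patients(n_per_group=175):
    def patient(mu):
        return [min(10.0, max(0.0, random.gauss(mu, 1.5))) for _ in range(5)]
    return ([patient(7.5) for _ in range(n_per_group)] +
            [patient(3.5) for _ in range(n_per_group)])

def kmeans(points, k=2, max_iters=100):
    """Lloyd's algorithm: alternate nearest-centroid assignment and
    centroid recomputation until the assignment stops changing."""
    centroids = [list(p) for p in random.sample(points, k)]
    assignment = None
    for _ in range(max_iters):
        new_assignment = [
            min(range(k),
                key=lambda c: sum((x - y) ** 2
                                  for x, y in zip(pt, centroids[c])))
            for pt in points
        ]
        if new_assignment == assignment:
            break
        assignment = new_assignment
        for c in range(k):
            members = [pt for pt, a in zip(points, assignment) if a == c]
            if members:  # keep the old centroid if a cluster empties out
                centroids[c] = [sum(dim) / len(members)
                                for dim in zip(*members)]
    return assignment, centroids

patients = simulate_patients()       # 350 simulated patients, per the
labels, centroids = kmeans(patients) # 70-per-variable rule of thumb
sizes = [labels.count(c) for c in range(2)]
print("cluster sizes:", sizes)
print("mean comfort per cluster:",
      [round(sum(c) / len(c), 1) for c in centroids])
```

In the study itself, the number of clusters was not fixed in advance but chosen by comparing 24 validity indices via NbClust, with most indices favoring k = 2.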
RESULTS
Patient population
A total of 349 individuals completed the survey between December 2019 and June 2020. Of the patients who met the inclusion criteria, 79.9% agreed to participate. Of those, 3.2% left at least 1 question unanswered. Table 1 summarizes the surveyed patient characteristics in comparison with all patients receiving care on the Labor and Delivery Unit of Brigham and Women’s Hospital between January 1, 2019 and December 31, 2019.
Table 1.
Study patient characteristics (N = 349)
| Demographics | Total per subcategory | Brigham and Women’s Hospital Deliveries (January 1, 2019 to December 31, 2019) |
|---|---|---|
| Age, n (%) | ||
| (One selection allowed) | ||
| <24 | 9 (2.6) | 438 (7.0) |
| 25–34 | 201 (57.6) | 3625 (57.9) |
| 35–44 | 128 (36.7) | 2147 (34.3) |
| 45+ | 2 (0.6) | 46 (0.7) |
| Unknown | 9 (2.6) | 0 (0.0) |
| Race/ethnicity, n (%) | ||
| (Multiple selections allowed) | ||
| White | 214 (61.3) | 3765 (60.2) |
| Hispanic | 32 (9.2) | 1049 (16.8) |
| African American | 41 (11.7) | 792 (12.7) |
| Asian | 43 (12.3) | 665 (10.6) |
| Indian American | 1 (0.3) | 180 (2.9) |
| Middle Eastern | 6 (1.7) | N/A |
| Pacific Islander | 1 (0.3) | 2 (0.0) |
| Unknown/Other | 11 (3.2) | 92 (1.5) |
Most survey respondents were between the ages of 25 and 34 years (57.6%) or 35 and 44 years (36.7%). The race and ethnicity composition of the surveyed patients included white (61.3%), Hispanic (9.2%), African American (11.7%), and Asian (12.3%). The age and race/ethnicity distributions of the survey population were similar to those of the total obstetric population for the year, except for a smaller proportion of Hispanic patients among survey participants. In addition, the level of education of the surveyed patients was high: 35.0% had graduated college, 55.9% held a graduate degree, and only 4.3% had a high school education or less.
Physician involvement in the use of clinical AI technologies
In responding to the 5 hypothetical healthcare scenarios, patients rated their level of agreement (from 0 to 10) with the following options: AI should not be used in their care at all, AI should be used in the presence of a physician, or AI should be implemented directly without a physician. Across all scenarios and all patients, the mean response was 5.5 (median, 5.6; 25%–75% interquartile range, 4.4–6.8) on the 10-point Likert-type scale. Using these values, cluster analysis was performed, and the surveyed population split into 2 clusters: those who responded in favor of AI use (Pro-AI) and those who favored more physician oversight (AI-cautious) (Figure 1).
Figure 1.
Cluster analysis for patient responses to the 5 hypothetical scenarios. The scenarios are as follows. How comfortable are you with the use of AI for: Airplane autopilot: autopilot during airplane travel? All clinical decisions: What if similar “Autopilot” technologies were applied to healthcare? Triaging: A chatbot helps you decide when you should be evaluated based on symptoms and contractions. Pain management: A phone app helps you select your mode of pain control during labor. Medication delivery: A smart medication pump predicts your contractions and delivers medications. The circles represent means, the error bars represent 1 SD above and below the means, and the diamonds represent medians.
Characteristics of Pro-AI and AI-cautious patients
The Pro-AI and AI-cautious clusters contained 177 and 172 patients, respectively. There were no differences between the 2 clusters with regard to age, percentage of White patients, anxiety and depression diagnoses, or level of anxiety over the previous week (Table 2). Patients in the Pro-AI cluster had a higher level of education than the AI-cautious group, P = .02. Additionally, the AI-cautious group included a higher proportion of patients who self-reported well-managed chronic health conditions than the Pro-AI group, P = .05.
Table 2.
Patient characteristics by AI-cluster
| Variables | Pro-AI | AI-cautious | P value |
|---|---|---|---|
| Age, n (%) | | | .95 |
| <24 | 5 (2.8) | 4 (2.3) | |
| 25–34 | 102 (57.6) | 99 (57.6) | |
| 35–44 | 68 (38.4) | 60 (34.9) | |
| 45+ | 0 (0) | 2 (1.2) | |
| Missing | 2 (1.1) | 7 (4.1) | |
| Race/ethnicity, n (%) | | | .20 |
| White | 109 (61.6) | 105 (61.0) | |
| Hispanic | 17 (9.6) | 15 (8.7) | |
| African American | 16 (9.0) | 25 (14.5) | |
| Asian | 28 (15.8) | 15 (8.7) | |
| Indian American | 0 (0.0) | 1 (0.6) | |
| Middle Eastern | 3 (1.7) | 3 (1.7) | |
| Pacific Islander | 0 (0.0) | 1 (0.6) | |
| Missing | 4 (2.3) | 7 (4.1) | |
| Level of education, n (%) | | | .02* |
| High school | 7 (4.0) | 18 (10.5) | |
| College | 59 (33.3) | 63 (36.6) | |
| Graduate degree | 107 (60.5) | 88 (51.2) | |
| Missing | 4 (2.3) | 5 (2.9) | |
| Overall health, n (%) | | | .05 |
| Good health | 166 (93.8) | 150 (87.2) | |
| Chronic conditions | 8 (4.5) | 17 (9.9) | |
| Missing | 3 (1.7) | 5 (2.9) | |
| Anxiety, n (%) | | | .47 |
| Diagnosed with anxiety | 43 (24.3) | 36 (20.9) | |
| None | 131 (74.0) | 132 (76.7) | |
| Missing | 3 (1.7) | 4 (2.3) | |
| Depression, n (%) | | | .68 |
| Diagnosed with depression | 23 (13.0) | 20 (11.6) | |
| None | 150 (84.7) | 149 (86.6) | |
| Missing | 4 (2.3) | 3 (1.7) | |
| Felt anxious in the past 7 days, n (%) | | | .14 |
| Never | 38 (21.5) | 40 (23.3) | |
| Rarely | 47 (26.6) | 54 (31.4) | |
| Sometimes | 62 (35.0) | 55 (32.0) | |
| Often | 25 (14.1) | 17 (9.9) | |
| All the time | 4 (2.3) | 2 (1.2) | |
| Missing | 1 (0.6) | 4 (2.3) |
P values for ordinal categorical responses are based on Cochran-Armitage trend tests; P values for nominal categorical responses are based on chi-square or Fisher’s exact tests.
* Significant at P < .05.
There was no difference between the 2 clusters in understanding of the anesthesiologist’s role in their care (Supplementary Table S2). However, the Pro-AI cluster reported a more robust knowledge of how AI works, P = .04, and a heightened awareness of how AI affects their daily life activities, P < .001 (Figure 2A and B). When asked whether they would prefer their physician to inform them of AI use, the AI-cautious cluster had a significantly higher proportion of respondents stating that they wished their physician would not use AI at all, P = .002 (Figure 2C).
Figure 2.
Patient responses by AI cluster.
Perspectives on the use of AI technologies
To further understand patients’ perceptions, the survey asked patients what they believed were the most significant risks and benefits of AI technologies. Overall, patients believed that the greatest strength was advancing technology within medicine, while the most important concern was that the algorithms would not be as effective as physicians. In comparing the Pro-AI and AI-cautious groups (Figure 2D), the AI-cautious patients foresaw more risks in the implementation of AI, including the belief that the algorithms may not be as good as physicians, a lack of trust in the technology, and concern about loss of privacy of health information. The Pro-AI patients saw benefits from the introduction of AI, including advances in technology and medicine as well as improved patient safety (Figure 2E). Overall, 69.2% of patients believed the benefits of AI would outweigh the risks, with the Pro-AI group representing most of these patients, P < .001 (Figure 2F).
Survey respondents were also given the opportunity to respond to the open-ended prompt: How do you think AI would best apply to your care? The word cloud of their responses is shown in Figure 3. The most frequently stated words were “pain,” “management,” “monitoring,” “care,” and “dosage.”
Figure 3.
Word cloud representing the rate of word responses to the question, “How do you think AI would best apply to your care?”.
DISCUSSION
In this study, we report the perceptions of AI technologies among pregnant patients presenting for delivery at a tertiary care institution. Most of the surveyed patients supported AI implementation in their care and believed the benefits of AI use outweighed the risks. Additionally, cluster analysis revealed 2 distinct groups, Pro-AI and AI-cautious patients, who differed in their levels of comfort, education, and general knowledge regarding AI implementation. These results provide a starting point for assessing the perspectives of obstetric patients on the use of AI technologies in clinical care and identify opportunities for targeted patient engagement and improvement of healthcare delivery.
More broadly, our work on the adoption of AI technologies can be considered in the light of the adoption of novel health information technologies. Multiple theories have addressed patient engagement in the clinical adoption of innovations such as electronic medical records and new devices.20–22 Pregnant patients are understudied because, traditionally, novel technologies and therapies are not offered to them even when they could provide significant benefit. For example, delayed research on and approval of the COVID-19 vaccine in pregnant patients resulted in a 3.5-fold increase in maternal morbidity and 13-fold higher mortality.11,12 The importance of patient education and engagement was emphasized in AMIA’s recent publication of its AI principles.23 This position paper highlights the need for research on, and particular attention to, vulnerable populations such as pregnant patients. This work, importantly, ascertains the perspective of an understudied population and provides insights into opportunities to improve care delivery.
Demographically, the patients enrolled in this study represent a young, well-educated, and overall healthy population. Compared with the entire Brigham and Women’s Hospital Labor and Delivery patient census in 2019, the surveyed population is similar and likely generalizable, except for a lower number of Hispanic patients, possibly due to language barriers.
Compared to most studies addressing patient perspectives, our patient population is younger and healthier.24–26 One study of patients’ views of wearable AI devices reported an average participant age of 56 years; 55% of participants had undergraduate or graduate degrees, and many had chronic health issues such as cancer, neurological disorders, and diabetes.27 Nelson et al28 also reported an average age of 53 years in their study highlighting patient perspectives of AI use in skin cancer screening. Similar studies of patient satisfaction with anesthesia-related outcomes report an older patient population.29 Our younger study population could contribute to higher rates of comfort with the potential applications of AI compared to other studies.
To further characterize patients’ attitudes towards AI technologies, we developed 5 hypothetical scenarios based on emerging technologies for clinical care that are likely to be relevant and become available to pregnant patients. While most patients were comfortable with everyday AI use, such as in airline travel, presenting scenarios progressively closer to clinical care allowed us to differentiate the responses into the Pro-AI and AI-cautious clusters. The largest difference between the clusters was seen in the 2 scenarios in which the AI technologies appeared to take over functions typically performed by physicians: advising on when to seek further care and assisting with the selection of pain control. In the last scenario, predicting contractions and offering automated pain management, AI can provide an obvious benefit that would not otherwise be available, which is likely the reason most patients would accept such a device in the presence of a physician. Using vignettes and scenarios in surveys can replicate real-world behavior remarkably well, and, therefore, our approach is likely to reflect patients’ true decision-making processes.30
Clustering patients into Pro-AI and AI-cautious groups based on their comfort with AI technologies allowed further investigation of the characteristics of both groups. There were no significant differences in age, percentage of White patients, or health characteristics between the groups, suggesting that these factors likely play a minor role in defining patients’ opinions. Due to the small number of patients from non-White races, the role of race and ethnicity requires future investigation. AI-cautious patients showed a trend toward a higher rate of self-reported chronic disease, consistent with other studies.25 The most significant demographic difference between Pro-AI and AI-cautious patients was their level of education: there were significantly more Pro-AI patients with graduate education than AI-cautious patients, demonstrating that higher levels of education correlated with an increased understanding of AI and willingness to adopt its use. Similarly, in radiology, patient trust in AI increased with higher levels of education, suggesting that interventions to increase AI awareness should be directed in particular to patients with lower levels of education.31
Pro-AI patients were more knowledgeable about the general applications of AI and more aware of the ways they actively and passively use AI in their daily lives, while AI-cautious patients were less familiar with these topics. This is similar to surveys of the general public, in which a low level of knowledge about AI was associated with less favorable views of its implementation in clinical care.24 This further supports a connection between a better understanding of these technologies and openness to their adoption in patient care. Many patients are probably unaware of the extent to which they already encounter AI in other parts of their lives, for example, when using Google Maps or Amazon. As everyday use of AI technologies continues to grow, it is likely that more people will accept the clinical use of these innovations.
Patients in the AI-cautious group reported greater concern about the risk of protected health information security breaches and a lack of trust in technology compared to the Pro-AI group. Similar to our findings, one recent study identified patient confidentiality as a significant factor in the success of a phone-based technology to improve treatment adherence.32 Other reports highlight trust and open discussion as vital for the implementation of AI and other technologies.33 The ethical concerns about using healthcare information to design and implement AI tools are long-standing.34 These concerns can best be addressed by developing and implementing measures to protect patient privacy and by increasing the transparency of healthcare data processes.35
AI-cautious patients also reported feeling that “the machines may not be as good as human doctors.” Moreover, these patients stated they prefer the human nature of physician interaction. Similarly in dermatology, patients were receptive to AI for skin cancer screening only if it preserved respect and integrity in the physician-patient relationship.28 The concern about the interference of AI in the physician-patient relationship can be ameliorated by greater transparency in decision-making processes.34
A key factor contributing to the acceptance of AI in the healthcare setting is the perceived benefit for patient health. Most healthcare providers look favorably on technologies that foster team dialog about patient needs.36 In our survey, most AI-cautious patients expressed significant concerns about health AI, which were associated with a lack of perceived benefit. Indeed, a pre-post implementation study of diabetes decision support systems demonstrated a substantial decrease in support amongst providers when the technology failed to meet performance expectations.36 Similarly, the main concern about adopting health AI centers on providing better care for patients.34 Therefore, primary efforts in developing AI applications should demonstrate the added value these tools will bring. Our finding that most patients would like new AI technologies to help with their pain, one of the most feared experiences during childbirth, further reinforces the overall positive patient attitude towards AI in clinical care.
Limitations
This study has several limitations. The participants were from a single tertiary care center in a large city. We focused our efforts on enrolling all eligible patients; however, a few declined participation, likely due to the overall fatigue associated with labor rather than a general disinterest in AI. Additionally, the study population had a high level of education, limiting the generalizability of these findings to other populations. Some of the surveyed patients were also referred from other healthcare facilities to a tertiary care hospital, which would generally increase their exposure to the healthcare system, new technologies, and research, and their understanding of their care. As such, the results from this study may be subject to sampling bias and may not reflect the views of all pregnant patients. In addition, as the patients were surveyed in the healthcare facility while awaiting delivery, their views at that moment may not necessarily reflect their overall opinions. Although we utilized 5 realistic scenarios to gain a better understanding of patient perspectives, responses to hypothetical scenarios may not fully capture the decisions patients would make in actual care. As a small percentage of the patients were recruited during the COVID-19 pandemic, it is possible that they had higher exposure to remote technologies and telemedicine, although we did not find a significant difference based on the time of recruitment. Further work assessing patient perspectives across a variety of healthcare institutions is required to develop a comprehensive understanding of patient opinions of AI use in the peripartum period.
CONCLUSION
The results of this study support directing efforts towards patient education, both general education as well as increased health and technology literacy. Patient trust, the preservation of the human physician-patient relationship, and physician autonomy are critical principles to consider as we move forward with AI implementation both during childbirth and in healthcare overall. These issues may be especially salient when AI is used to make suggestions to patients directly.
FUNDING
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
AUTHOR CONTRIBUTIONS
WA, KJG, NMC, and VPK participated in the study design and survey development. WA, VPK, and NMC participated in data collection. KGF, WA, VPK, and DWB participated in data analysis or interpretation of data for the work; all authors participated in the drafting of the manuscript or critical revision; all authors approved of the final manuscript and agree to be accountable for the integrity of the work.
SUPPLEMENTARY MATERIAL
Supplementary material is available at Journal of the American Medical Informatics Association online.
CONFLICT OF INTEREST STATEMENT
KJG reports funding from NIH/NHLBI grants K08 HL146963, K08 HL146963-02S1, and a PJP Grant from the Preeclampsia Foundation. KJG has served as a consultant to Illumina Inc., Aetion, Roche, and BillionToOne outside the scope of the submitted work. VPK reports funding from the Foundation for Anesthesia Education and Research (FAER) training grant, Partners Innovation, Brigham Research Institute, Anesthesia Patient Safety Foundation (APSF), and Connors Center IGNITE Award. VPK reports consulting fees from Avania CRO unrelated to the current work. DWB reports grants and personal fees from EarlySense, personal fees from CDI Negev, equity from Valera Health, equity from CLEW, equity from MDClone, personal fees and equity from AESOP Technology, personal fees and equity from FeelBetter, and grants from IBM Watson Health, outside the submitted work.
Contributor Information
William Armero, Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts, USA; David Geffen School of Medicine at UCLA, Los Angeles, California, USA.
Kathryn J Gray, Division of Maternal-Fetal Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts, USA.
Kara G Fields, Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts, USA.
Naida M Cole, Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts, USA; Department of Anesthesia and Critical Care, The University of Chicago, Chicago, Illinois, USA.
David W Bates, Division of General Internal Medicine and Primary Care, Brigham and Women’s Hospital, Boston, Massachusetts, USA; Department of Health Care Policy and Management, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, USA.
Vesela P Kovacheva, Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts, USA.
Data Availability
The survey instrument in this article is available in the online Supplementary Material. Individual patient data will be shared on reasonable request to the corresponding author.
REFERENCES
- 1. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J 2019; 6 (2): 94–8.
- 2. Benjamens S, Dhunnoo P, Mesko B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. NPJ Digit Med 2020; 3: 118.
- 3. U.S. Food and Drug Administration. FDA-approved A.I.-based algorithms. https://medicalfuturist.com/fda-approved-ai-based-algorithms/. Accessed February 8, 2022.
- 4. Esteva A, Robicquet A, Ramsundar B, et al. A guide to deep learning in healthcare. Nat Med 2019; 25 (1): 24–9.
- 5. Savage N. The race to the top among the world’s leaders in artificial intelligence. Nature 2020; 588 (7837): S102–4.
- 6. Jungmann F, Jorg T, Hahn F, et al. Attitudes toward artificial intelligence among radiologists, IT specialists, and industry. Acad Radiol 2021; 28 (6): 834–40.
- 7. Martinho A, Kroesen M, Chorus C. A healthy debate: exploring the views of medical doctors on the ethics of artificial intelligence. Artif Intell Med 2021; 121: 102190.
- 8. Doyle C, Lennox L, Bell D. A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open 2013; 3 (1): e001570.
- 9. Iuga AO, McGuire MJ. Adherence and health care costs. Risk Manag Healthc Policy 2014; 7: 35–44.
- 10. van der Zande ISE, van der Graaf R, Oudijk MA, van Delden JJM. A qualitative study on acceptable levels of risk for pregnant women in clinical research. BMC Med Ethics 2017; 18 (1): 35.
- 11. Bianchi DW, Kaeser L, Cernich AN. Involving pregnant individuals in clinical research on COVID-19 vaccines. JAMA 2021; 325 (11): 1041–2.
- 12. Lokken EM, Huebner EM, Taylor GG, et al. Disease severity, pregnancy outcomes, and maternal deaths among pregnant patients with severe acute respiratory syndrome coronavirus 2 infection in Washington State. Am J Obstet Gynecol 2021; 225 (1): 77.e1–14.
- 13. Zaouter C, Hemmerling TM, Mion S, Leroux L, Remy A, Ouattara A. Feasibility of automated propofol sedation for transcatheter aortic valve implantation: a pilot study. Anesth Analg 2017; 125 (5): 1505–12.
- 14. Pambianco DJ, Whitten CJ, Moerman A, Struys MM, Martin JF. An assessment of computer-assisted personalized sedation: a sedation delivery system to administer propofol for gastrointestinal endoscopy. Gastrointest Endosc 2008; 68 (3): 542–7.
- 15. Sagstad MH, Morken N-H, Lund A, Dingsør LJ, Nilsen ABV, Sorbye LM. Quantitative user data from a chatbot developed for women with gestational diabetes mellitus: observational study. JMIR Form Res 2022; 6 (4): e28091.
- 16. Zhao P, Yoo I, Lancey R, Varghese E. Mobile applications for pain management: an app analysis for clinical usage. BMC Med Inform Decis Mak 2019; 19 (1): 106.
- 17. Zhu T, Li K, Kuang L, Herrero P, Georgiou P. An insulin bolus advisor for type 1 diabetes using deep reinforcement learning. Sensors (Basel) 2020; 20 (18): 5058.
- 18. Charrad M, Ghazzali N, Boiteau V, Niknafs A. NbClust: an R package for determining the relevant number of clusters in a data set. J Stat Soft 2014; 61 (6): 1–36.
- 19. Dolnicar S, Grün B, Leisch F, Schmidt K. Required sample sizes for data-driven market segmentation analyses in tourism. J Travel Res 2014; 53 (3): 296–306.
- 20. Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q 1989; 13 (3): 319–40.
- 21. Davis FD, Bagozzi RP, Warshaw PR. User acceptance of computer technology: a comparison of two theoretical models. Manage Sci 1989; 35 (8): 982–1003.
- 22. Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. MIS Q 2003; 27 (3): 425–78.
- 23. Solomonides AE, Koski E, Atabaki SM, et al. Defining AMIA’s artificial intelligence principles. J Am Med Inform Assoc 2022; 29 (4): 585–91.
- 24. Giansanti D, Monoscalco L. A smartphone-based survey in mHealth to investigate the introduction of the artificial intelligence into cardiology. Mhealth 2021; 7: 8.
- 25. Lennartz S, Dratsch T, Zopfs D, et al. Use and control of artificial intelligence in patients across the medical workflow: single-center questionnaire study of patient perspectives. J Med Internet Res 2021; 23 (2): e24221.
- 26. Haan M, Ongena YP, Hommes S, Kwee TC, Yakar D. A qualitative study to understand patient perspective on the use of artificial intelligence in radiology. J Am Coll Radiol 2019; 16 (10): 1416–9.
- 27. Tran VT, Riveros C, Ravaud P. Patients’ views of wearable devices and AI in healthcare: findings from the ComPaRe e-cohort. NPJ Digit Med 2019; 2: 53.
- 28. Nelson CA, Perez-Chada LM, Creadore A, et al. Patient perspectives on the use of artificial intelligence for skin cancer screening: a qualitative study. JAMA Dermatol 2020; 156 (5): 501–12.
- 29. Teunkens A, Vanhaecht K, Vermeulen K, et al. Measuring satisfaction and anesthesia related outcomes in a surgical day care centre: a three-year single-centre observational study. J Clin Anesth 2017; 43: 15–23.
- 30. Hainmueller J, Hangartner D, Yamamoto T. Validating vignette and conjoint survey experiments against real-world behavior. Proc Natl Acad Sci U S A 2015; 112 (8): 2395–400.
- 31. Ongena YP, Haan M, Yakar D, Kwee TC. Patients’ views on the implementation of artificial intelligence in radiology: development and validation of a standardized questionnaire. Eur Radiol 2020; 30 (2): 1033–40.
- 32. Baranoski AS, Meuser E, Hardy H, et al. Patient and provider perspectives on cellular phone-based technology to improve HIV treatment adherence. AIDS Care 2014; 26 (1): 26–32.
- 33. Hamilton JG, Genoff Garzon M, Westerman JS, et al. “A Tool, Not a Crutch”: patient perspectives about IBM Watson for Oncology trained by Memorial Sloan Kettering. J Oncol Pract 2019; 15 (4): e277–88.
- 34. Lai MC, Brian M, Mamzer MF. Perceptions of artificial intelligence in healthcare: findings from a qualitative survey study among actors in France. J Transl Med 2020; 18 (1): 14.
- 35. Seroussi B, Hollis KF, Soualmia LF. Transparency of health informatics processes as the condition of healthcare professionals’ and patients’ trust and adoption: the rise of ethical requirements. Yearb Med Inform 2020; 29 (1): 7–10.
- 36. Romero-Brufau S, Wyatt KD, Boyum P, Mickelson M, Moore M, Cognetta-Rieke C. A lesson in implementation: a pre-post study of providers’ experience with artificial intelligence-based clinical decision support. Int J Med Inform 2020; 137: 104072.