Abstract
Artificial intelligence (AI) is gaining prominence in the domain of healthcare. Acceptance is an indispensable prerequisite for the widespread implementation of AI. The aim of this integrative review is to explore barriers and facilitators influencing healthcare professionals’ acceptance of AI in the hospital setting. Forty-two articles met the inclusion criteria for this review. Elements pertinent to the study, such as the type of AI, factors influencing acceptance and the participants’ profession, were extracted from the included studies, and the studies were appraised for their quality. The data extraction and results were presented according to the Unified Theory of Acceptance and Use of Technology (UTAUT) model. The included studies revealed a variety of facilitating and hindering factors for AI acceptance in the hospital setting. Clinical decision support systems (CDSS) were the form of AI included in most studies (n = 21). Heterogeneous results were reported with regard to perceptions of the effects of AI on error occurrence, alert sensitivity and time resources. In contrast, fear of a loss of (professional) autonomy and difficulties in integrating AI into clinical workflows were unanimously reported to be hindering factors, whereas training for the use of AI facilitated acceptance. Heterogeneous results may be explained by differences in the application and functioning of the different AI systems as well as by inter-professional and interdisciplinary disparities. In conclusion, to facilitate the acceptance of AI among healthcare professionals, it is advisable to involve end-users in the early stages of AI development, to offer needs-adjusted training for the use of AI in healthcare, and to provide adequate infrastructure.
Subject terms: Medical ethics, Health policy, Public health
Introduction
Artificial intelligence (AI) is associated with the mechanization of intelligent human behaviour1, especially the display of intelligent, human-like thinking and reasoning2. AI is a domain of computer science concerned with the development of technology that is able to extract underlying information from a data set and transform it into operative knowledge. This transformation is based on algorithms that can be either predetermined or adaptive3. The term AI was coined in 1956 by John McCarthy but is often connected to the now so-called Turing test, a hypothetical setup to test whether or not a machine is able to exhibit intelligent behaviour. Many methods, e.g. knowledge graphs or machine learning techniques, have been applied to approximate such behaviour, often without reaching applicability due to computational limits4. However, computational limits have seemingly been overcome in many applications5. With the increased proliferation of novel AI solutions, issues of reliability, correctness, understanding and trustworthiness have come to the forefront. These issues, together with the expansion into applications not yet covered by AI solutions, mean that the potential of AI technologies has not yet been fully realized, and the continuing growth in the development of AI technologies does not cease to promise new perspectives2. Many fields that introduced this new form of intelligence into their domains have witnessed a growth in productivity and efficacy1. However, the advantages and disadvantages of AI have to be weighed against one another prior to widespread introduction1.
The characterization of AI as systems that exhibit behaviours or decisions commonly attributed to human intelligence and cognition is widely accepted2. The components typically necessary for such decisions include recognition of a complex situation, the ability to abstract, and the application of factual knowledge6. Not all components are always present, and not every such system is “learning”5. What distinguishes AI systems from classical systems is that they evaluate complex situations individually rather than relying on simple, a priori known parameterizations with few input variables5.
AI developers are trying to apply their technologies in many fields such as engineering, gaming and education1. Lately, the development of AI technologies has expanded to medical practice, and their implementation in complex healthcare work environments has begun1,7–11. Choudhury et al.12 have defined AI in healthcare as ‘an adaptive technology leveraging advanced statistical algorithm(s) to analyse structured and unstructured medical data, often retrospectively, with the final goal of predicting a future outcome, identifying hidden patterns, and extracting actionable information with clinical and situational relevance’ (p. 107)12. While AI systems can be applied to the supporting functions (e.g. administrative, legal, financial tasks) around healthcare with similar risks and rewards as in other industries, application to the primary functions of healthcare puts higher demands on suppliers owing to regulation and possible impact. While statistical fluctuations that might be acceptable elsewhere are not tolerable in the healthcare setting, approaches using knowledge graphs or rule-based techniques, even in combination with machine learning, can lead to intelligent systems that are robust enough to withstand the scrutiny of governing bodies and medical guidelines. Furthermore, systems that do not act fully autonomously but instead support a licensed professional who oversees the actual application can be designed to satisfy legal and regulatory hurdles1.
Until now, AI has been established in the healthcare sector with the purpose of proposing efficient and practice-oriented solutions for patients and healthcare providers. In this field, AI is being developed to support healthcare professionals such as physicians and nurses in decision-making, diagnosis, prognosis and treatment, and to relieve them of physically demanding tasks1,11,13. However, these applications have not yet been extended to larger settings. Ethical issues, lack of standardization and unclear legal liability are among the challenges facing the widespread adoption of AI in healthcare today14.
Newly introduced change and its implementation are met with mixed attitudes and feelings by healthcare professionals1,13. Accepting change is not a simple process, and humans are known to resist change in favour of the comfortable status quo. However, in order to improve efficiency and workflow in the long run, acceptance is a key element in adopting and implementing newly introduced changes such as AI in daily practice5,15.
In the context of technology, acceptance is defined as the willingness, intention and internal motivation to use a technology as a result of positive attitudes towards the technology or system16. Acceptance of AI systems plays a similar role as with the introduction of any other new tool. However, the less predictable handling of complex situations and the desired human-like behaviour quickly lead to more resistance15. On the side of the developers of these systems, acceptability rather than acceptance is studied. This is usually associated with terms like comprehensibility or transparency, which are supposed to lead directly to acceptability17. This applies at the technical and legal level and in decisions about deployment at the management level. The level of acceptance by the user eludes such approaches and should instead be evaluated directly. Only through this step can acceptance be traced back to acceptability17.
This integrative review aims to unravel the variety of reported causes for the limited acceptance as well as facilitating factors for the acceptance of AI usage in the hospital setting to date. The assessment and analysis of reasons for distrust and limited usage are of utmost importance to face the increasing demands and challenges of the healthcare system as well as for the development of adequate, needs-driven AI systems while acknowledging their associated limitations. This includes the identification of factors influencing the acceptance of AI as well as a discussion of the mechanisms associated with the acceptance of AI in light of current literature. This review’s findings aim to serve as a basis for further practical recommendations to improve healthcare workers’ acceptance of AI in the hospital setting and thereby harness the full potential of AI.
Results
As shown in Fig. 1, the database search generated n = 21,114 references. After deleting duplicates, sorting the articles according to the inclusion and exclusion criteria and applying forward citation tracking, a total of n = 42 articles were included in this review.
Most studies were carried out in Europe (n = 13) followed by Asia (n = 12) and North America (n = 10). Further studies were conducted in Africa (n = 4) and Australia (n = 2). One international study was carried out in 25 different countries. There were qualitative studies (n = 18), quantitative studies (n = 16) and studies with a mixed-method approach (n = 8). All study participants were healthcare professionals working in a hospital setting. Instruments for data collection included interviews and surveys. The sample size of included studies varied between 12 and 562 and the age of participants ranged between 18 and 71 years (age reported in 21 studies). The average score for critical appraisal measured by means of the MMAT was 4.45 (Table 3).
Table 3. Characteristics of the studies included in the review.

| Author(s) and year | Nature/form of AI | Participant age | Participants’ profession | Sample size | Study design and method | MMAT score | Barriers | Facilitators |
|---|---|---|---|---|---|---|---|---|
| Blanco et al. (2018)26 | CDSS | Not applicable (n.a.) | Nurses, physicians, pharmacists, radiology technicians and environmental services workers | 34 (interviews); 13 (survey) | Qualitative semi-structured interviews and surveys | 5 | Sensitive systems induce alert fatigue | |
| Catho et al. (2020)37 | CDSS | n.a. | Physicians | 29 | Qualitative semi-structured interviews | 5 | Reduction in time spent with patients | |
| Chow et al. (2015)44 | CDSS | n.a. | Physicians | 11 (focus group discussions); 265 (survey) | Mixed-methods focus groups and survey | 4 | | Junior physicians were more likely to follow the system’s recommendations than senior physicians |
| Tscholl et al. (2018)43 | Monitoring system | 35–44 years old | Physicians | 128 (interviews); 38 (online survey) | Mixed-methods interviews and survey | 5 | Lack of precision in the representation of the information | Visibility of information at a glance enables clinicians to interpret the patient’s situation more quickly |
| Liberati et al. (2017)25 | CDSS | n.a. | Physicians, nurses, managers, IT staff | 30 | Qualitative semi-structured interviews and surveys | 5 | Lack of understanding of functionalities | |
| Elahi et al. (2020)46 | Prognostic model | n.a. | Physicians | 25 (questionnaires); 11 (interviews) | Mixed-methods survey and semi-structured interviews | 5 | Infeasibility of the system if dependent on a strong internet connection | Objective assessment of patient risk and support for difficult triage decisions, particularly in resource-limited settings |
| English et al. (2017)28 | CDSS | 25–61 years old | Pharmacists | 25 | Quantitative survey | 4 | | Facilitating conditions influence clinical pharmacists’ use of the system |
| Fan et al. (2020)15 | Medical diagnosis support system | Average age 40 years | Healthcare professionals in the medical imaging department | 191 | Quantitative survey | 4 | | |
| Grau et al. (2019)27 | CDSS | n.a. | Physicians | 21 | Qualitative semi-structured interviews | 5 | Sensitive systems induce alert fatigue | |
| Hand et al. (2018)82 | CDSS | n.a. | Physicians, nurses and allied health professionals | 39 | Quantitative survey | 4 | | 17/37 (45.9%) felt it would help improve clinician satisfaction; 31/35 (88.6%) indicated that they were willing to always or often use the CDSS for fertility discussions |
| Hsiao et al. (2013)83 | Pain management decision support system (PM-DSS) | n.a. | Nurses | 101 | Quantitative survey | 3 | | Perceived ease of use and perceived usefulness account for 64% of the total explained variance in nurse anaesthetists’ acceptance of PM-DSS |
| Jauk et al. (2021)32 | CDSS | 26–42 years old | Physicians and nurses | 47 (questionnaires); 15 (expert group) | Mixed-methods interviews and survey | 4 | 14.9% of participants did not believe that the application can be used to detect delirium at an early stage | |
| Kanagasundaram et al. (2016)29 | CDSS | n.a. | Physicians | 24 | Qualitative interviews | 5 | Alert fatigue; system was cited to be an insult to knowledge; workflow interruption | |
| Khong et al. (2015)34 | CDSS | Junior nurses: average age 29.8 years; senior nurses: average age 45.5 years | Nurses | 14 | Qualitative semi-structured interviews | 5 | Worry that too much trust in the system might lead to over-reliance and limit the development of clinical skills; participants doubted the system’s accuracy | |
| Kitzmiller et al. (2019)41 | Predictive analytics | n.a. | Physicians and nurses | 22 | Qualitative semi-structured interviews | 5 | Distal and inconvenient location was perceived to negatively affect routine engagement with the system | |
| Horsfall et al. (2021)22 | AI in surgery | 31–61 years old or older | Physicians and nurses | 100 (quantitative survey); 33 (qualitative) | Mixed-methods survey | 5 | | 85% of participants strongly or somewhat agreed to the use of AI to enhance real-time alerts of hazards or complications |
| Liang et al. (2019)35 | Robots | 30–36 years old | Nurses | 23 | Qualitative semi-structured interviews | 3 | Fear of a loss of job | Perceived to be ideal for performing repetitive actions, routine tasks and assisting with precision treatment; robots could also be a useful tool in multi-language communication with children and family caregivers from foreign countries, improving their understanding of the healthcare situation |
| Lin et al. (2021)84 | AI in precision medicine | 21–40 years old | Physicians and nurses | 245 nurses and 40 physicians | Quantitative survey | 4 | | The most dominant determinant of acceptance was the perceived usefulness of the system |
| McBride et al. (2019)39 | Robots | 18 to over 50 years old | Physicians, nurses and support staff | 164 | Quantitative survey | 4 | Most participants had concerns about care and handling (p = 0.056); nursing (52.6%) and medical staff (59.6%) were concerned that robotic-assisted surgery will add significant cost and financial pressure on the facility | Most of the nursing, medical and support staff agreed that theoretical and practical training, educational guides and staff support would facilitate the introduction of new technology in the workplace |
| Norton et al. (2015)52 | CDSS | <39 to more than 50 years old | Physicians and nurses | 32 | Quantitative survey | 4 | | Nonsurgeons reported that the tool would make their job easier more so than surgeons; good educational training tool for residents |
| Oh et al. (2016)23 | CDSS | n.a. | Physicians and pharmacists | 98 | Mixed-methods survey | 4 | Self-reported lower likelihood to change certain behaviours | |
| O’Leary et al. (2014)31 | Clinical pathway support system | n.a. | Physicians, nurses and physiotherapists | 19 | Mixed-methods interviews and surveys | 4 | | Over half of the participants felt that clinical pathway support systems could help reduce errors |
| Omar et al. (2017)38 | CDSS | n.a. | Physicians | n.a. | Qualitative semi-structured interviews | 1 | Some junior nurses preferred to seek advice from senior nurses rather than AI | |
| Esmaeilzadeh et al. (2015)85 | CDSS | n.a. | Physicians | 335 | Quantitative survey | 4 | Significant relationship between perceived threat to professional autonomy and intention to use CDSS (β = −0.392, p < 0.001) | |
| Petitgand et al. (2020)21 | CDSS | n.a. | Physicians | 20 | Qualitative semi-structured interviews | 5 | Systems may favour errors | |
| Sandhu et al. (2020)45 | Machine learning | n.a. | Physicians and nurses | 15 | Qualitative semi-structured interviews | 5 | Unfamiliarity with the system resulted in confusion and misunderstanding | Most useful for residents still developing clinical skills or in low-resource community settings |
| Schulte et al. (2020)50 | Automatic speech recognition | Mean age of 41.8 ± 9.8 years | Physicians | 185 | Quantitative survey | 4 | | Voice recognizer without headset |
| Stifter et al. (2018)51 | CDSS | 21–71 years old | Nurses | 60 | Quantitative survey | 4 | | Higher acceptability among participants with less than one year of experience than those with one or more years of experience |
| Walter et al. (2020)53 | Automated pain recognition | Mean age of 40.31 ± 11.5 years | Physicians and nurses | 102 | Quantitative survey | 5 | | Pain detection accuracy of >80% |
| Yurdaisik and Aksoy (2021)30 | AI | n.a. | Physicians, technicians and medical students | 204 | Quantitative survey | 4 | Only 5.3% of participants stated that they will assume the legal responsibility of imaging results | Among the participants, 51.9% think that AI applications will save time for radiologists |
| Zheng et al. (2021)33 | AI in ophthalmology | Less than 25 to older than 45 years old | Physicians and technicians | 562 | Quantitative survey | 4 | 56.4% said that in current ophthalmic AI practice, medical responsibilities are unclear | |
| Aljarboa et al. (2019)18 | CDSS | 25–51 years old | Physicians | 12 | Qualitative semi-structured interviews | 5 | | Alerts direct attention to important issues |
| Jones et al. (2022)54 | CDSS | 29–62 years old | Physicians and nurses | 33 | Qualitative interviews | 5 | Sensitive systems induce alert fatigue | |
| Panicker and Sabu (2020)36 | Computer-assisted medical diagnosis system | 27–58 years old | Physicians and technicians | 18 | Qualitative interviews | 5 | Participants doubted the system’s accuracy | |
| So et al. (2021)42 | AI | 25 years old to 55 or older | Physicians, nurses, pharmacists, physiotherapists and technicians | 96 | Quantitative survey | 5 | | Working experience significantly favoured use of AI |
| Strohm et al. (2020)86 | AI in radiology | n.a. | Physicians | 25 | Qualitative semi-structured interviews | 5 | Unresolved question of legal responsibility for damage occurring due to, e.g., false negatives and false positives resulting from an AI-generated diagnosis | |
| Pumplun et al. (2021)49 | Machine learning | n.a. | Physicians, professionals in administrative roles | 22 | Qualitative interviews | 5 | Lack of transparency; limited resources; uncertainties in governmental regulations, strict requirements for the protection of sensitive patient data, and existing medical ethics | |
| Prakash and Das (2021)19 | CDSS | 82% younger than 40 years old | Physicians | n.a. | Mixed-methods interviews and surveys | 5 | Lack of understanding of functionalities | |
| Zhai et al. (2021)87 | AI | 18 to more than 50 years old | Physicians and medical students | 307 | Mixed-methods survey | 5 | | |
| Aljarboa and Miah (2021)24 | CDSS | 25–51 years old | Physicians | 54 | Qualitative interviews | 5 | | Importance of privacy and security factors, as confidentiality and privacy of patient data are essential for use |
| Nydert et al. (2017)20 | CDSS | n.a. | Physicians | 17 | Qualitative interviews | 5 | Risk of overreliance on the system; double-checking of recommended dosages is needed | Greatest benefit within emergency care |
| Alumran et al. (2020)47 | Electronic triage and acuity scale (E-CTAS) | n.a. | Nurses | 71 | Quantitative survey | 5 | | The years of nurses’ experience influenced their usage of the E-CTAS; there was a positive correlation between years of experience and likelihood of becoming an E-CTAS user |
In the following paragraphs, our findings are presented with reference to the UTAUT model. Table 1 summarizes the results in relation to the four main UTAUT aspects.
Table 1.

| The four main UTAUT aspects | Results pertaining to each of the aspects |
|---|---|
| Performance expectancy | Alerts and medical errors; time and workload; accuracy of AI technologies |
| Effort expectancy | Transparency and adaptability of the system; the system’s characteristics; training to use the system |
| Social influence | Influencing effects on decision-making; communication in the workplace |
| Facilitating conditions | Legal liability; organizational culture; organizational infrastructure |
Performance expectancy
Heterogeneous findings were reported with respect to healthcare professionals’ confidence that using AI systems will benefit their performance. In the included studies, results reflecting on performance expectancy were reported with regard to alerts and medical errors, and the accuracy of AI technologies.
In three studies addressing the adoption of clinical decision support systems (CDSS), participants indicated that in acute hospital settings, CDSS reduced the rate of medical errors through warnings and recommendations18–20. On the other hand, in one study about the barriers to adopting CDSS, participants reported that CDSS induced errors in emergency care settings21. AI in neurosurgery was the topic of a study in which 85% of 100 surgeons, anaesthetists and nurses considered alerts to be useful in the early detection of complications22. Similar results were reported in a study that evidenced that 90% of participating pharmacists and physicians (36/40) considered that an automated electronic alert improved the care of patients with acute kidney injury23. These findings were also supported in further studies about healthcare professionals’ perception of CDSSs, in which participants described alerts as effective in drawing attention to key aspects18,24. Nevertheless, in one study about barriers to the uptake of CDSSs, respondents found the number of alerts to be excessive25. In addition, in three studies, participating physicians and nurses mentioned fatigue resulting from frequent alerts26–28. Moreover, Kanagasundaram et al.29 reported that some physicians dismissed alerts29.
Healthcare professionals’ estimations of the accuracy of AI-based technologies were inconsistent. Results of one study showed that 22.5% of staff from a radiology department (N = 118) deemed AI-based diagnostic tools to be superior to radiologists in the near future30. However, only 12.2% (N = 204) claimed that they would “always use AI when making medical decisions in the near future”30. A study by O’Leary et al., which investigated doctors’, nurses’ and physiotherapists’ appraisal of the diagnostic abilities of AI support systems for rare or unusual diseases, found that 82% of respondents (N = 19) considered the tool to be useful31. Jauk et al.32 concluded that 14.9% of participating doctors and nurses (7/47) did not believe that a machine learning system could detect early-stage delirium32. Similarly, 49.3% of physicians (277/562) in a study assessing the use of AI in ophthalmology indicated that the quality of the system was difficult to guarantee33. In three studies that assessed healthcare professionals’ attitudes towards CDSS, findings implied that participants doubted the accuracy of CDSS and diagnostic systems, as they considered the quality of the resulting information insufficient for decision-making21,28,34. In another study on the same topic, physicians reported that CDSSs are useful but that their functions are limited27. Similarly, technical issues that might affect an AI system and render its results inconsistent were found to negatively affect the performance expectancy of physicians, nurses and operating room personnel and resulted in frustration10,19,22,25,34. In addition, in a study investigating nurses’ attitudes towards the potential use of robots in a paediatric unit, nurses reported that they were sceptical of the system’s abilities35. Similarly, nurses stated in a study about adopting a CDSS that technical issues might affect the system and render its results inconsistent26.
Nevertheless, in qualitative studies on the topics of implementing AI in radiology and integrating a machine learning system into clinical workflow, physicians and nurses perceived AI to be accurate and based on sufficient scientific evidence in terms of diagnostics, objectivity and quality of information8,32,35,36.
Effort expectancy
Heterogeneous findings were also reported with respect to how easy the users believe it is to use a system. In the included studies, results reflecting on effort expectancy were reported in regard to time and workload, transparency and adaptability of the system, the system’s characteristics and training to use the system.
Efficiency with respect to time and workload was a recurrent theme in several included articles10,18,20,26,35–38. In a study by McBride et al.39 on robots in surgery, physicians were concerned about increased operative time in robotic-assisted surgeries, whereas nursing and support theatre staff did not share these concerns39. However, in a study about the acceptance of a machine learning predictive system, 89.4% of nurses and doctors (42/47) did not report an increase in workload when using the algorithm in their clinical routine32. In a qualitative study about physicians’ adoption of CDSSs, participants reported CDSSs to be time-consuming37. Moreover, in a study about the attitude of radiologists towards AI, 51.9% of respondents (N = 204) expected AI-based diagnostic tools to save time for radiologists30. Besides time investments, McBride et al.39 reported that 52.6% of nursing staff (40/76) and 59.6% of medical staff (28/47) were concerned that robotic-assisted surgery would increase financial pressure39.
In a study about the adoption of AI, physicians stated that a lack of transparency and adaptability of a CDSS or machine learning system aiding diagnostics would negatively affect its adoption39. Moreover, participants of a study about the acceptance of a predictive machine learning system argued that the protocols underlying such systems should be comprehensive and evidence-based32. A tendency to reject a system was evident when participants reported unfamiliarity with it, as stated in a study about the experience with a CDSS implemented in paediatrics29.
A system’s characteristics also seem to affect the expected effort to use it, which in turn influences its acceptance40. In a study about the perception of a CDSS, a system perceived as intuitive, easily understood and simple was highly regarded by participants41. However, when a system was complex and required added tasks, as reported in a study about integrating machine learning into the workflow, it was deemed undesirable36. In one study addressing the overall perception of AI by healthcare professionals, at least 70% of respondents (67/96) agreed on each item referring to the ease of use of AI-based systems42. However, Jauk et al.32 reported that 38.3% of users (18/47) of a machine learning algorithm were not able to integrate the system into their clinical routine32. In a study by Tscholl et al.43, 82% of anaesthesiologists (31/36) agreed or strongly agreed with the statement that the technology was “intuitive and easy to learn”43. When participants believed the AI-based system was aligned with their tasks, reported values consistently and required minimal time and effort, they welcomed it43.
Other studies about CDSS systems reported that participants considered the systems to be inadequate, limited and inoperative in clinical practice19,38,44. A standardized CDSS system with clear guidelines seemed appealing to participants who approved of structured systems and commented positively on their ease of use27,34. Conversely, in a study about AI in radiology, participants reported that the system lacked standardization and automation and was therefore deemed unreliable10.
The importance of training for the successful implementation of AI systems was stressed in several studies. In one study referring to a continuous predictive monitoring system41 and two addressing machine learning systems10,45, participants reported a lack of experience with the systems, which resulted in feeling overwhelmed38,43,46. Alumran et al.47 observed that about half (53.49%) of the nurses (N = 71) who did not use an AI system had also not participated in prior training47. Half of those who received one training course used the system, 83% of those who took two courses used it, and 100% of those who took three courses did so47.
Social influence
Several studies reported on how much the opinions of others affected participants’ belief that they should use AI systems. Results reflecting on social influence were reported in regard to influencing effects on decision-making and communication in the workplace.
In two studies about the acceptance and adoption of CDSS, physicians reported that their decision to use the system was independent of the opinion of supervisors and colleagues18,24. However, they reported that patients’ satisfaction with an AI system positively influenced their acceptance18,24.
One facilitating factor for the adoption of CDSSs was believed to be communication between (potential) users of the systems25. Some studies pointed out the positive effects of CDSSs and computerized diagnostic systems on the improvement of interdisciplinary practice and communication25,36. Nevertheless, in one study, physicians suggested that CDSSs may reduce time spent with patients37. In view of the use of robotics in paediatrics, nurses emphasized that working with robots would have a negative effect on patients due to a reduction in human touch and connection35.
Facilitating conditions
Healthcare professionals’ views on organizational support to use the system were discussed in several included studies. The main discussion themes on this topic were legal liability, the organizational culture of accepting or rejecting AI systems and organizational infrastructures.
Concerns about legal liability and accountability were raised in several studies. Medical practitioners in a study about a diagnostic CDSS did not have a clear understanding of who would be accountable in case of a system error, which resulted in confusion and fear of the system48. Only 5.3% of respondents (N = 204) in a study about the attitude of radiologists towards AI stated that they would assume legal responsibility for imaging results provided by AI30. In two of the reviewed publications, participants addressed the topic of data protection. They mentioned the importance of maintaining data privacy as a positive aspect in the acceptance of AI systems, especially in CDSSs24,25.
In a study about the implementation of AI in radiology, the effect of organizational culture on the acceptance of the system versus resistance to change was discussed. Several participants mentioned structuring the adoption of the system by selecting champions and expert groups10,32. However, in another study reporting on a wound-related CDSS, some nurses preferred to base their behaviour on their own decision-making process and feared that their organization was forcing them to do otherwise34.
The importance of adequate infrastructure for implementing AI systems, as well as of space and monetary resources, was stressed18,49. Because AI systems often require a stable high-speed internet connection, they become inoperable when good internet conditions are unavailable, which some participants described as problematic41,42. Additionally, in a study by Catho et al.37 on the adoption of CDSSs, several participating physicians highlighted the importance of providing technical support to users in order to increase acceptance of the system37.
Gender
Only three studies investigated whether there was an effect of gender on acceptance. None of them found significant effects50–52.
Age
With respect to age, three studies investigated whether there was an effect of age on the use of AI. Two studies did not observe an effect50,52. Walter et al.53 found that 55.8% of younger participants claimed that they would use automated pain recognition. In the older age group, only 40.4% of respondents reported that they would use the system (N = 102)53.
Experience
Stifter et al.51 reported that participants with less than one year of experience reported higher levels of perceived ease of use, perceived usefulness and acceptability of a CDSS than those with one or more years of experience, although this difference was statistically non-significant51. In contrast, So et al.42 reported a statistically significant positive correlation between working experience and use of AI42. Similarly, Alumran et al.47 observed that an increase in working experience correlated with the use of an electronic triage system47.
Voluntariness of use
Participants of the included studies talked about the fear of AI replacing healthcare professionals as well as a loss of autonomy related to the use of AI. These two aspects could have an effect on the voluntariness to use AI systems.
Participants raised the concern that AI may replace healthcare professionals in their duties at some point. Among respondents, 54.9% reported that physician candidates should opt for “specialty areas where AI cannot dominate”39. Similarly, 6.3% of respondents expected AI to completely replace radiologists in the future39. In a study by Zheng et al.33, 24% of respondents (135/562) denied the claim that AI would completely replace physicians in ophthalmology54. Nevertheless, 77% of physicians and 57.9% of other professional technicians believed that AI would at least partially replace physicians in ophthalmology54. These findings were also replicated in two qualitative studies that explored the acceptance and adoption of CDSS in which physicians vocalized their fear of being replaced by the systems, and of their work becoming outdated18,47.
In a study about confidence in AI, physicians revealed a fear of a loss of autonomy in stressful situations2,47. Nurses who participated in a study about the potential use of robots in paediatric units expressed the concern that robots may limit the development of clinical skills29.
In a study assessing the acceptance of a CDSS in neurosurgery, senior physicians and nurses suggested that junior colleagues should refer to them for guidance and final decisions and not to an AI-based system28. They feared that blindly following the recommendations of AI-based systems may negatively impact decision-making processes28. Similarly, in a study about CDSS in electronic prescribing, junior nurses claimed that they preferred to seek advice from senior nurses instead of an AI-based system, especially in situations in which the system was deemed complex35. In addition, in two studies about the acceptance of two different CDSS systems, junior physicians were more open to the use of AI systems than their seniors25,48.
Discussion
The present review included 42 studies and sought to integrate findings about the influencing factors on the acceptance of AI by healthcare professionals in the hospital setting. All findings and evidence were structured with reference to the UTAUT model40. Based on the included studies (N = 42), acceptance was primarily studied for CDSSs (N = 21).
An important factor that could affect the acceptance of AI in healthcare is safety. Different AI systems could carry different risks of error occurrence, which affect the acceptance of the system among healthcare professionals. Although AI-based prediction systems have been shown to result in lower error rates than traditional systems55,56, it may be argued that systems taking over simple tasks are deemed more reliable and trustworthy and are therefore more widely accepted than AI-based systems operating on complex tasks such as surgical robots. Furthermore, Choudhury et al.3, who studied the acceptability of an AI-based blood utilization calculator, argued that AI-based systems are often based on data from a norm-typical patient population; if the system is applied to unanticipated patient populations (e.g. patients with sickle cell disease), the AI-based recommendation may become inadequate. Such a sample selection bias may not only endanger patient safety but is also likely to increase scepticism about performance expectancy, resulting in decreased acceptance among healthcare professionals3,57. Moreover, the safety of a system might be affected by technical complications that influence the quality of the system’s output and therefore limit healthcare professionals’ trust in the system58,59. Besides technical complications, insufficient data and information may compromise the accuracy and validity of AI output60. Consequently, ensuring high-quality input data as well as ensuring that the system is applied to the anticipated patient population is of utmost importance for the acceptance of AI-based systems60.
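To make the sample-selection-bias argument concrete, the following toy Python sketch illustrates it; this is our illustration, not the blood utilization calculator studied by Choudhury et al.3, and all thresholds and population parameters are invented for demonstration. A simple recommendation rule calibrated on a norm-typical population misfires when the patient distribution shifts towards the decision boundary:

```python
# Toy illustration (assumed numbers) of sample selection bias:
# a cutoff calibrated on a "norm-typical" population misfires when the
# underlying distribution shifts, e.g. towards lower baseline haemoglobin.
import random

random.seed(0)

def needs_transfusion(hb: float) -> bool:
    """Simplified clinical 'ground truth': transfuse below 7.0 g/dL."""
    return hb < 7.0

def ai_recommends(hb: float) -> bool:
    """Rule 'calibrated' on the typical population: flag below 7.5 g/dL."""
    return hb < 7.5

def disagreement_rate(mean_hb: float, n: int = 100_000) -> float:
    """Share of simulated patients where rule and ground truth disagree."""
    errors = 0
    for _ in range(n):
        hb = random.gauss(mean_hb, 1.5)  # simulated haemoglobin value
        if ai_recommends(hb) != needs_transfusion(hb):
            errors += 1
    return errors / n

print(f"typical population (mean Hb 12 g/dL): {disagreement_rate(12.0):.1%}")
print(f"shifted population (mean Hb 8 g/dL):  {disagreement_rate(8.0):.1%}")
```

In this toy setup, the disagreement between the rule and the ground truth rises from well below 1% to roughly 12% once the population mean moves close to the decision boundary, mirroring the concern that recommendations degrade silently for unanticipated patient groups.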
Additionally, another safety-related aspect that was reported to affect effort expectancy, and therefore acceptance, is the degree of alert sensitivity of an AI system61. Alarm fatigue, which refers to “characteristics that increase a clinician’s response time and/or decrease the response rate to a clinical alarm as a result of too many alarms”62, can result from the AI system and affect the safety of patient care. Although their function is to hint at potential medical complications, overly sensitive alarms may paradoxically endanger patient safety by inducing desensitization and alert dismissal in critical situations29,62. Therefore, alarm sensitivity is a factor that might affect healthcare professionals’ acceptance of an AI system and should be taken into consideration when designing AI-based systems in order to enhance their acceptance and usage63.
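The base-rate arithmetic behind alarm fatigue can be shown in a short sketch; all figures are hypothetical, and the function and parameters are ours rather than taken from any reviewed system. Because adverse events are rare, raising sensitivity inflates the number of false alarms far faster than the number of true ones:

```python
# Illustrative only: how alert sensitivity interacts with event prevalence.
# All numbers are assumptions chosen for demonstration, not values from
# the reviewed studies.

def alert_load(n_patients: int, prevalence: float,
               sensitivity: float, specificity: float) -> dict:
    """Expected daily alerts and precision for a binary alerting system."""
    events = n_patients * prevalence
    non_events = n_patients - events
    true_alerts = sensitivity * events
    false_alerts = (1.0 - specificity) * non_events
    total = true_alerts + false_alerts
    return {
        "alerts_per_day": total,
        "precision": true_alerts / total if total else 0.0,
    }

# A hypothetical ward of 200 monitored patients, 2% of whom deteriorate per day.
for sens, spec in [(0.80, 0.95), (0.95, 0.90), (0.99, 0.80)]:
    stats = alert_load(n_patients=200, prevalence=0.02,
                       sensitivity=sens, specificity=spec)
    print(f"sensitivity={sens:.2f} specificity={spec:.2f} -> "
          f"{stats['alerts_per_day']:.0f} alerts/day, "
          f"precision={stats['precision']:.0%}")
```

In this example, pushing sensitivity from 0.80 to 0.99 roughly triples the daily alert volume while the share of true alarms falls from about one in four to less than one in ten, which is exactly the desensitization pressure described above.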
Furthermore, differences in AI acceptance between occupational groups could influence the acceptance of an AI system in a healthcare setting. In this review, we observed a tendency of respondents to perceive AI-based systems more negatively if their own professional group, rather than another professional group, was to use the AI system39. We could not find further information in the literature to back up this theory. It would therefore be interesting to follow up on whether the use of an AI system by one’s own professional group does indeed affect its members’ perception of the system.
Human factors such as personality and experience were found to affect the perception of an AI system. Depending on the healthcare professional, their needs and the work environment, the acceptance of an AI system might differ3. The same AI system might be perceived as helpful by one professional, and therefore be accepted, while another might find that the system holds up their work and therefore deem it unacceptable3. Moreover, as found in our review and supported by the literature, more experienced healthcare professionals tend to trust their knowledge and experience more than an AI system. Consequently, they might override the system’s recommendations and make their own decisions based on their personal judgement3. This might be related to their fear of losing autonomy in a situation where the AI system recommends something that is not in line with their critical thinking process.
In addition, time and staff resources could potentially affect the acceptance of AI systems in healthcare, and these factors were perceived differently by different disciplines. With regard to robotic-assisted surgery, medical staff anticipated an increase in operating time and a prolonged diagnostic process39. In contrast, 89.4% of users of a machine-learning algorithm did not report an increase in workload in their clinical routines32. Moreover, physicians are often under time constraints during their visits to patients and are overloaded with documentation work. Therefore, they might accept an AI system such as a CDSS if they witness that it reduces their workload and assists them3. In order to facilitate the acceptance, and thus the implementation, of AI systems in clinical settings, it is of utmost importance to integrate these systems into clinical routines and workflows, thereby helping to reduce the workload.
Interestingly, AI-based systems for the support of the diagnostic process seem to be more established in radiology than in other medical disciplines30. This indicates differences in the levels of AI acceptance between medical specialties. In implementation studies on AI in radiology, transformative changes with regard to improvements in diagnostic accuracy and the value of image analysis were reported64,65. This raises the question of whether healthcare professionals in the area of radiology are more technically inclined and specialize on the basis of this enhanced interest, or whether AI innovations in radiology are more easily and better integrated into existing routines and are therefore more widely established and accepted, as reported by Recht and Bryan (2017)64 and Mayo and Leung (2018)65. Furthermore, insufficient knowledge of the limits and potentials of AI technologies may impact healthcare professionals’ acceptance negatively8,11. However, as cited many times in the literature, a prior introduction to the technology as well as proper training and education on the correct usage of AI might encourage users to accept this technology within their field3,7,8,13,66,67. Moreover, transparency in AI data processing is of utmost importance when AI is introduced to healthcare. If users are able to acknowledge the benefit of the technology and comprehend what AI-based recommendations are based upon, their acceptance of it increases13,15,68,69. On the other hand, when users perceive the use of the AI technology as a threat, their level of acceptance decreases68. Based on a study reporting the effects of training on the acceptance of an AI-based system, the number of training courses correlated positively with the percentage of participating nurses using the system47. In medical education, the necessity of providing training in AI beyond clinical and biomedical skills is emphasized70,71. Nonetheless, training requires time, and several studies have reported that healthcare professionals lack the time outside their official duty hours to learn how to use new AI-based technologies7,8,15,68. Thus, it is an organizational duty not only to offer training for potential users of AI systems but also to provide staff with the time resources to take part in this training in order to foster AI acceptance. Furthermore, it should be discussed whether training in AI should be integrated early into the educational curriculum72,73. Kolachalama and Garg (2018) emphasize the need to integrate expertise from data science and to focus on topics of literacy and practical guidelines in such training71. Nevertheless, intrinsic motivation to participate in training may also contribute to the seemingly positive effects of training on the use behaviour observed in the study by Alumran et al. (2020)47.
It is important to note that we were not able to replicate the effect of gender on technology acceptance proposed by the UTAUT model. In contrast to the UTAUT model, we argue that in this case there is probably no effect of gender on AI acceptance. With regard to age, however, contradictory results were reported both in our review and in the literature. For example, two studies from the literature showed that age impacts trust in AI and that the younger generation leans more towards trusting AI systems than their older counterparts74,75. On the contrary, a study by Choudhury and Asan (2022)76 revealed that age did not play a significant role in trusting or intending to use AI.
Nevertheless, training and the provision of adequate infrastructure with respect to technical support and internet access were unanimously found to be facilitating factors for the acceptance and implementation of AI-based systems in the hospital context and should therefore be considered by the management levels of hospitals1,13. Furthermore, especially with reference to alert systems, aspects such as the alert sensitivity of an AI system and the potential consequences of elevated sensitivity levels, such as alert fatigue and alert dismissal, should be kept in mind when determining the safety of a system61,63. In order to design a user-friendly AI-based system and enhance its acceptance, it is of utmost importance to involve healthcare professionals early on in the design stages of the system77. We recommend the implementation of user-centred design78 during the development of an AI system in healthcare, which would allow the involvement of healthcare professionals in the different stages of the development and evaluation of a system. By incorporating the abilities, characteristics and boundaries of healthcare professionals, the development would result in a secure, uncomplicated and effective AI system. Such a system would be more likely to achieve high acceptance because healthcare professionals participated in its creation, and its integration into clinical routines and workflows would be uncomplicated. Moreover, we also propose longer-term and intensive research to understand how AI, as a complex intervention, affects work processes and how people react to and behave with it. A better understanding of AI-assisted work and decision-making processes could thus be continuously incorporated, and the further development of AI systems would profit from it. Finally, in order to facilitate usability and intuitive handling of AI in clinical routine, we recommend implementing training on the theoretical basics, ethical considerations and limitations of AI, as well as practical usage skills, as early as undergraduate education.
Reasons for the limited acceptance among healthcare professionals are manifold: personal fears related to a loss of professional autonomy, a lack of integration into clinical workflows and routines, overly sensitive settings for alarm systems, and a loss of patient contact are reported. Technical reservations, such as unintuitive user interfaces, and technical limitations, such as the unavailability of strong internet connections, also impede comprehensive usage and acceptance of AI. Hesitation to accept AI in the healthcare setting has to be acknowledged by those in charge of implementing AI technologies in hospital settings. Once the causes of hesitation are known and personal fears and concerns are recognized, appropriate interventions, such as training and improving the reliability and ease of use of AI systems, may help overcome the indecisiveness to accept AI and allow users to be keen, satisfied and enthusiastic about the technologies.
Methods
An integrative review of the acceptance of AI among healthcare professionals in the hospital setting was performed. The review protocol was registered in the PROSPERO database (CRD42021251518). Integrative reviews allow us to reflect on and assess the strength of scientific evidence, identify particular clinical issues, recognize gaps in the current literature, and evaluate the need for further research. An integrative review is based on prior extensive research on a specified topic by means of a literature search79. This type of review is of a complex nature, which makes it prone to the risk of bias, and specific methods are required to reduce bias. Therefore, this review is based on the methodological framework proposed by Whittemore and Knafl80. Initially, the topic of interest and the significance of the review are identified. Then, the literature is explored systematically according to a set of identified eligibility criteria. After that, relevant inputs from the included studies are extracted and their quality is appraised. Finally, the outcomes of the included studies are presented, and their relevance and recommendations for future research are derived.
The results of the reviewed articles are presented based on the unified theory of acceptance and use of technology (UTAUT). This theory explains a user’s intention to use information technology systems. It is based on various information technology acceptance models, one of them being the technology acceptance model (TAM)40. The UTAUT consists of four main aspects: performance expectancy, effort expectancy, social influences, and facilitating conditions, next to four regulating factors: gender, age, experience and voluntariness of use, which affect the four main aspects40 (Table 2).
Table 2.

| The four main aspects of the UTAUT | Description |
|---|---|
| Performance expectancy | Characterizes the user’s confidence that using the technology will benefit his or her work performance |
| Effort expectancy | Represents the user’s beliefs about how easy it is to use the system |
| Social influence | Describes how much the user feels that significant others believe they should use the technology |
| Facilitating conditions | Represents the degree to which the user believes that organizational and technical support exists to use the system |

| Regulating factors | Description |
|---|---|
| Gender | |
| Age | |
| Experience | The user’s familiarity with the system; thought to affect effort expectancy, social influence and facilitating conditions |
| Voluntariness of use | Clarifies whether use of the system is mandatory or voluntary; proposed to impact social influence |
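To illustrate how extracted findings can be structured against these constructs, the following sketch is a hypothetical simplification of the coding step; the class and field names are ours, and the two entries are abbreviated examples taken from Table 3, not a complete extraction:

```python
# Hypothetical sketch of a UTAUT coding scheme: each extracted finding is
# tagged with one UTAUT aspect and a direction (barrier vs. facilitator).
from dataclasses import dataclass
from enum import Enum

class Aspect(Enum):
    PERFORMANCE_EXPECTANCY = "performance expectancy"
    EFFORT_EXPECTANCY = "effort expectancy"
    SOCIAL_INFLUENCE = "social influence"
    FACILITATING_CONDITIONS = "facilitating conditions"

@dataclass
class Finding:
    study: str      # e.g. "Blanco et al. (2018)"
    ai_type: str    # e.g. "CDSS"
    aspect: Aspect  # UTAUT aspect the finding maps to
    barrier: bool   # True = hindering, False = facilitating
    note: str       # short summary of the extracted finding

findings = [
    Finding("Blanco et al. (2018)", "CDSS", Aspect.PERFORMANCE_EXPECTANCY,
            True, "Sensitive systems induce alert fatigue"),
    Finding("Aljarboa et al. (2019)", "CDSS", Aspect.PERFORMANCE_EXPECTANCY,
            False, "Alerts direct attention to important issues"),
]

# Group barriers by UTAUT aspect, mirroring the structure of Table 1.
for aspect in Aspect:
    barriers = [f.note for f in findings if f.aspect is aspect and f.barrier]
    print(aspect.value, "->", barriers or "none coded in this sketch")
```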
Data collection
Data were sought from records in various databases and grey literature sources. We systematically searched the databases MEDLINE via PubMed, Cochrane Library via Wiley Interscience, Embase and ScienceDirect via Elsevier, Institution of Electrical and Electronics Engineers (IEEE) Xplore via IEEE, Web of Science via Clarivate Analytics, as well as the Cumulative Index to Nursing and Allied Health Literature (CINAHL) via EBSCO for qualitative, quantitative and mixed methods studies. Furthermore, grey literature was searched by means of the dissertation databases Bielefeld Academic Search Engine via BASE, ProQuest, Technische Informationsbibliothek (TIB) as well as the DART Europe E-Theses Portal.
Studies aligned with the aim of this review and its research questions were searched for. Keywords were joined using Boolean operators, medical subject headings, and truncation. In close collaboration with a librarian from the local medical university library, the following search string was generated: (Artificial Intelligence OR Machine Learning OR Deep Learning OR Neural Network OR Technol* System OR Smart System OR Intelligent System OR Assistive System OR Decision Support System OR Human–Computer Interaction OR Human Machine Interaction OR Cognitive System OR Decision Engineering OR Natural Language Understanding) AND (Approval OR Intention to Use OR Acceptance OR Adoption OR Acceptability) AND (Nurse OR Doctors OR Physician OR MD OR Clinician OR Healthcare professional OR Healthcare OR Healthcare Worker) AND (Hospital OR Acute Care OR Inpatient care OR Standard Care OR Intensive Care OR Intermediate Care OR Ward). In a subsequent phase, Google Scholar forward citation tracking was applied to the articles included in the database search.
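The logic of this search string can be sketched programmatically; this is a minimal illustration and not the authors’ actual tooling, the term lists are abbreviated, and database-specific syntax (MeSH terms, truncation, field tags) is omitted:

```python
# Minimal sketch of assembling the four concept blocks of the search
# string with Boolean operators. Term lists are abbreviated examples.

ai_terms = ["Artificial Intelligence", "Machine Learning", "Deep Learning",
            "Neural Network", "Decision Support System"]
acceptance_terms = ["Approval", "Intention to Use", "Acceptance",
                    "Adoption", "Acceptability"]
profession_terms = ["Nurse", "Physician", "Clinician",
                    "Healthcare professional"]
setting_terms = ["Hospital", "Acute Care", "Inpatient Care",
                 "Intensive Care", "Ward"]

def or_block(terms):
    """Join synonyms of one concept with OR and parenthesize the block."""
    return "(" + " OR ".join(terms) + ")"

# Concept blocks are intersected with AND: a record must match at least
# one term from every block to be retrieved.
query = " AND ".join(or_block(t) for t in
                     [ai_terms, acceptance_terms,
                      profession_terms, setting_terms])
print(query)
```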
Inclusion criteria
Quantitative, qualitative and mixed methods original studies published from 2010 up to and including June 2022, in which participants are healthcare professionals and whose clinical fields of work are directly affected by AI (e.g., physicians, nurses, pharmacists, imaging technicians, physiotherapists) were assessed and explored. Studies written in English or German and investigating factors of AI acceptance were considered for review. Other inclusion criteria included studies taking place in hospital settings and studies that describe the development of AI systems with the involvement of healthcare professionals.
Exclusion criteria
Studies, in which participants were care recipients and family members as well as studies taking place in ambulatory settings, hospices, nursing homes or rehabilitation centres were excluded.
Screening and extraction process
All studies that resulted from the search were exported to the RAYYAN software, which was used for the screening process48. Duplicates were deleted. The remaining research articles were screened separately by two independent reviewers based on title and abstract (M.M. and S.L.). Conflicts between the reviewers were resolved through discussion. The eligibility of relevant studies was appraised based on independent full-text reading by the same two authors. If assessed differently, conflicts were discussed. An extraction table was created by the two reviewers to gather and extract data from the included studies (Table 3).
Quality appraisal
The quality of all included articles was critically assessed by two authors (M.M. and S.L.) by means of the Mixed Methods Appraisal Tool (MMAT)81. The MMAT assesses study quality on the basis of five quality criteria, including the appropriateness of the research question, of the data collection methods and of the measurement instruments. Ultimately, each study attains a score from zero to five; the higher the score, the greater the quality of the appraised study81.
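The scoring logic amounts to a simple tally, as the following sketch shows; the criterion labels here are placeholders rather than the exact MMAT item wording, which differs by study design:

```python
# Illustrative sketch of MMAT-style scoring (assumed simplification):
# each of five design-specific quality criteria is rated yes/no, and the
# score is the count of criteria met, from 0 to 5. Labels are placeholders.

criteria_met = {
    "clear research question": True,
    "appropriate data collection": True,
    "adequate measurement/instruments": True,
    "complete outcome data / acceptable response": False,
    "appropriate analysis": True,
}

mmat_score = sum(criteria_met.values())  # each True counts as 1
print(f"MMAT score: {mmat_score}/5")     # -> MMAT score: 4/5
```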
Quality appraisal of studies included in integrative reviews improves rigour and diminishes the risk for bias80.
Future directions
Most studies assessed the age of participants. Unfortunately, only four studies assessed the correlation between participant age and levels of acceptance, of which only two observed an effect of age on AI acceptance. In view of the UTAUT model, which assumes an effect of age on technology acceptance, it would be of interest to see whether the UTAUT still represents current findings in technology acceptance. Since its publication, the development and use of technology in the wider population have increased substantially. It cannot be ruled out that the availability and integration of technology in the broader population may alter the influence of factors such as age defined in the UTAUT. As a consequence, it would be of interest to re-evaluate the UTAUT model.
Limitations
We found mixed findings with respect to different AI systems. Most studies addressed CDSSs. It can be argued that by including different types of AI-based systems in the study, interfering variables due to differential proceedings in the handling and function of the systems may have distorted the reported results. It would be of interest to investigate differential hindering and facilitating factors for the acceptance of AI for different kinds of AI-based systems.
In this integrative review, various perspectives of healthcare professionals in hospital settings regarding the acceptance of AI were revealed, and many facilitating as well as limiting factors were discussed in association with the characteristics of the UTAUT model. After reviewing 42 studies and discussing them in relation to the wider literature, we conclude, as outlined above, that hesitation to accept AI has to be acknowledged by those in charge of implementing AI technologies in hospital settings, and that recognizing the underlying fears and concerns is the prerequisite for interventions, such as training and improvements in reliability and ease of use, that help users become keen, satisfied and enthusiastic about these technologies.
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Author contributions
S.L., M.M. and A.S. made substantial contributions regarding study conceptualization and design. S.L. and M.M. conducted the search, the data analysis and interpretation of the data, and S.L. and M.M. wrote the manuscript. A.S. was involved in the analysis and interpretation of the data. A.S. critically reviewed the manuscript for important intellectual content, supervised the study and supported S.L. and M.M. as senior investigators. S.S., A.L., H.S. and C.B. reviewed and made substantial contributions to the manuscript. All authors contributed to the conceptualization and design of this study, including the preparation of study material, and reviewed and revised the manuscript. All authors agree to be accountable for all aspects of this work and approve the final manuscript as submitted.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Data availability
The data that support the findings of this study are available from the corresponding authors upon reasonable request.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
These authors contributed equally: Sophie Isabelle Lambert, Murielle Madi.
Change history
7/11/2023
A Correction to this paper has been published: 10.1038/s41746-023-00874-z
Contributor Information
Sophie Isabelle Lambert, Email: solambert@ukaachen.de.
Murielle Madi, Email: mmadi@ukaachen.de.
Supplementary information
The online version contains supplementary material available at 10.1038/s41746-023-00852-5.
References
1. Maskara R, Bhootra V, Thakkar D, Nishkalank N. A study on the perception of medical professionals towards artificial intelligence. Int. J. Multidiscip. Res. Dev. 2017;4:34–39.
2. Oh S, et al. Physician confidence in artificial intelligence: an online mobile survey. J. Med. Internet Res. 2019;21:e12422. doi: 10.2196/12422.
3. Choudhury A, Asan O, Medow JE. Clinicians’ perceptions of an artificial intelligence–based blood utilization calculator: qualitative exploratory study. JMIR Hum. Factors. 2022;9:1–9. doi: 10.2196/38411.
4. Pallay, C. Vom Turing-Test zum General Problem Solver. Die Pionierjahre der künstlichen Intelligenz. In Philosophisches Handbuch Künstliche Intelligenz (ed. Mainzer, K.) 1–20 (Springer Fachmedien Wiesbaden, 2020). doi: 10.1007/978-3-658-23715-8_3-1.
5. Liyanage H, et al. Artificial intelligence in primary health care: perceptions, issues, and challenges. Yearb. Med. Inform. 2019;28:41–46. doi: 10.1055/s-0039-1677901.
6. Dimiduk DM, Holm EA, Niezgoda SR. Perspectives on the impact of machine learning, deep learning, and artificial intelligence on materials, processes, and structures engineering. Integr. Mater. Manuf. Innov. 2018;7:157–172. doi: 10.1007/s40192-018-0117-8.
7. Aapro M, et al. Digital health for optimal supportive care in oncology: benefits, limits, and future perspectives. Support. Care Cancer. 2020;28:4589–4612. doi: 10.1007/s00520-020-05539-1.
8. Lugtenberg M, Weenink JW, Van Der Weijden T, Westert GP, Kool RB. Implementation of multiple-domain covering computerized decision support systems in primary care: a focus group study on perceived barriers. BMC Med. Inform. Decis. Mak. 2015;15:1–11. doi: 10.1186/s12911-015-0205-z.
9. Radionova N, et al. The views of physicians and nurses on the potentials of an electronic assessment system for recognizing the needs of patients in palliative care. BMC Palliat. Care. 2020;19:1–9. doi: 10.1186/s12904-020-00554-9.
10. Strohm L, Hehakaya C, Ranschaert ER, Boon WPC, Moors EHM. Implementation of artificial intelligence (AI) applications in radiology: hindering and facilitating factors. Eur. Radiol. 2020;30:5525–5532. doi: 10.1007/s00330-020-06946-y.
11. Waymel Q, Badr S, Demondion X, Cotten A, Jacques T. Impact of the rise of artificial intelligence in radiology: what do radiologists think? Diagn. Interv. Imaging. 2019;100:327–336. doi: 10.1016/j.diii.2019.03.015.
12. Choudhury, A., Saremi, M. L. & Urena, E. Perception, trust, and accountability affecting acceptance of artificial intelligence: from research to clinician viewpoint. In Diverse Perspectives and State-of-the-Art Approaches to the Utilization of Data-Driven Clinical Decision Support Systems 105–124 (IGI Global, 2023).
13. Abdullah R, Fakieh B. Health care employees’ perceptions of the use of artificial intelligence applications: survey study. J. Med. Internet Res. 2020;22:1–8. doi: 10.2196/17620.
14. Jiang L, et al. Opportunities and challenges of artificial intelligence in the medical field: current application, emerging problems, and problem-solving strategies. J. Int. Med. Res. 2021;49:1–11. doi: 10.1177/03000605211000157.
15. Fan W, Liu J, Zhu S, Pardalos PM. Investigating the impacting factors for the healthcare professionals to adopt artificial intelligence-based medical diagnosis support system (AIMDSS). Ann. Oper. Res. 2020;294:567–592. doi: 10.1007/s10479-018-2818-y.
16. Chismar, W. G. & Wiley-Patton, S. Does the extended technology acceptance model apply to physicians. In Proc. 36th Annual Hawaii International Conference on System Sciences, HICSS 2003 (ed. Sprague, R. H. Jr) (IEEE Computer Society, 2003).
17. Schmidt P, Biessmann F, Teubner T. Transparency and trust in artificial intelligence systems. J. Decis. Syst. 2020;29:260–278. doi: 10.1080/12460125.2020.1819094.
18. Aljarboa, S., Shah, M. & Kerr, D. Perceptions of the adoption of clinical decision support systems in the Saudi healthcare sector. In Proc. 24th Asia-Pacific Decision Science Institute International Conference (eds Blake, J., Miah, S. J., Houghton, L. & Kerr, D.) 40–53 (Asia Pacific Decision Sciences Institute, 2019).
19. Prakash AV, Das S. Medical practitioner’s adoption of intelligent clinical diagnostic decision support systems: a mixed-methods study. Inf. Manag. 2021;58:103524. doi: 10.1016/j.im.2021.103524.
20. Nydert P, Vég A, Bastholm-Rahmner P, Lindemalm S. Pediatricians’ understanding and experiences of an electronic clinical-decision-support-system. Online J. Public Health Inform. 2017;9:e200. doi: 10.5210/ojphi.v9i3.8149.
21. Petitgand C, Motulsky A, Denis JL, Régis C. Investigating the barriers to physician adoption of an artificial intelligence-based decision support system in emergency care: an interpretative qualitative study. Stud. Health Technol. Inform. 2018;270:1001–1005. doi: 10.3233/SHTI200312.
22. Horsfall HL, et al. Attitudes of the surgical team toward artificial intelligence in neurosurgery: international 2-stage cross-sectional survey. World Neurosurg. 2021;146:e724–e730. doi: 10.1016/j.wneu.2020.10.171.
23. Oh J, Bia JR, Ubaid-Ullah M, Testani JM, Wilson FP. Provider acceptance of an automated electronic alert for acute kidney injury. Clin. Kidney J. 2016;9:567–571. doi: 10.1093/ckj/sfw054.
24. Aljarboa, S. & Miah, S. J. Acceptance of clinical decision support systems in Saudi healthcare organisations. Inf. Dev. 2021. doi: 10.1177/02666669211025076.
25. Liberati EG, et al. What hinders the uptake of computerized decision support systems in hospitals? A qualitative study and framework for implementation. Implement. Sci. 2017;12:1–13. doi: 10.1186/s13012-017-0644-2.
26. Blanco N, et al. Health care worker perceptions toward computerized clinical decision support tools for Clostridium difficile infection reduction: a qualitative study at 2 hospitals. Am. J. Infect. Control. 2018;46:1160–1166. doi: 10.1016/j.ajic.2018.04.204.
27. Grau LE, Weiss J, O’Leary TK, Camenga D, Bernstein SL. Electronic decision support for treatment of hospitalized smokers: a qualitative analysis of physicians’ knowledge, attitudes, and practices. Drug Alcohol Depend. 2019;194:296–301. doi: 10.1016/j.drugalcdep.2018.10.006.
28. English D, Ankem K, English K. Acceptance of clinical decision support surveillance technology in the clinical pharmacy. Inform. Health Soc. Care. 2017;42:135–152. doi: 10.3109/17538157.2015.1113415.
29. Kanagasundaram NS, et al. Computerized clinical decision support for the early recognition and management of acute kidney injury: a qualitative evaluation of end-user experience. Clin. Kidney J. 2016;9:57–62. doi: 10.1093/ckj/sfv130.
30. Yurdaisik I, Aksoy SH. Evaluation of knowledge and attitudes of radiology department workers about artificial intelligence. Ann. Clin. Anal. Med. 2021;12:186–190.
31. O’Leary, P., Carroll, N. & Richardson, I. The practitioner’s perspective on clinical pathway support systems. In IEEE International Conference on Healthcare Informatics 194–201 (IEEE, 2014).
32. Jauk S, et al. Technology acceptance of a machine learning algorithm predicting delirium in a clinical setting: a mixed-methods study. J. Med. Syst. 2021;45:48. doi: 10.1007/s10916-021-01727-6.
33. Zheng B, et al. Attitudes of medical workers in China toward artificial intelligence in ophthalmology: a comparative survey. BMC Health Serv. Res. 2021;21:1067. doi: 10.1186/s12913-021-07044-5.
34. Khong PCB, Hoi SY, Holroyd E, Wang W. Nurses’ clinical decision making on adopting a wound clinical decision support system. Comput. Inform. Nurs. 2015;33:295–305. doi: 10.1097/CIN.0000000000000164.
35. Liang H-F, Wu K-M, Weng C-H, Hsieh H-W. Nurses’ views on the potential use of robots in the pediatric unit. J. Pediatr. Nurs. 2019;47:e58–e64. doi: 10.1016/j.pedn.2019.04.027.
36. Panicker RO, Sabu MK. Factors influencing the adoption of computerized medical diagnosing system for tuberculosis. Int. J. Inf. Technol. 2020;12:503–512.
37. Catho G, et al. Factors determining the adherence to antimicrobial guidelines and the adoption of computerised decision support systems by physicians: a qualitative study in three European hospitals. Int. J. Med. Inform. 2020;141:104233. doi: 10.1016/j.ijmedinf.2020.104233.
38. Omar A, Ellenius J, Lindemalm S. Evaluation of electronic prescribing decision support system at a tertiary care pediatric hospital: the user acceptance perspective. Stud. Health Technol. Inform. 2017;234:256–261.
39. McBride KE, Steffens D, Duncan K, Bannon PG, Solomon MJ. Knowledge and attitudes of theatre staff prior to the implementation of robotic-assisted surgery in the public sector. PLoS ONE. 2019;14:e0213840. doi: 10.1371/journal.pone.0213840.
40. Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. MIS Q. 2003;27:425–478. doi: 10.2307/30036540.
41. Kitzmiller RR, et al. Diffusing an innovation: clinician perceptions of continuous predictive analytics monitoring in intensive care. Appl. Clin. Inform. 2019;10:295–306. doi: 10.1055/s-0039-1688478.
42. So S, Ismail MR, Jaafar S. Exploring acceptance of artificial intelligence amongst healthcare personnel: a case in a private medical centre. Int. J. Adv. Eng. Manag. 2021;3:56–65.
43. Tscholl DW, Weiss M, Handschin L, Spahn DR, Nöthiger CB. User perceptions of avatar-based patient monitoring: a mixed qualitative and quantitative study. BMC Anesthesiol. 2018;18:188. doi: 10.1186/s12871-018-0650-1.
44. Chow A, Lye DCB, Arah OA. Psychosocial determinants of physicians’ acceptance of recommendations by antibiotic computerised decision support systems: a mixed methods study. Int. J. Antimicrob. Agents. 2015;45:295–304. doi: 10.1016/j.ijantimicag.2014.10.009.
45. Sandhu S, et al. Integrating a machine learning system into clinical workflows: qualitative study. J. Med. Internet Res. 2020;22:e22421. doi: 10.2196/22421.
46. Elahi C, et al. An attitude survey and assessment of the feasibility, acceptability, and usability of a traumatic brain injury decision support tool in Uganda. World Neurosurg. 2020;139:495–504. doi: 10.1016/j.wneu.2020.04.193.
47. Alumran A, et al. Utilization of an electronic triage system by emergency department nurses. J. Multidiscip. Healthc. 2020;13:339–344. doi: 10.2147/JMDH.S250962.
48. Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan-a web and mobile app for systematic reviews. Syst. Rev. 2016;5:210. doi: 10.1186/s13643-016-0384-4.
49. Pumplun L, Fecho M, Wahl N, Peters F, Buxmann P. Adoption of machine learning systems for medical diagnostics in clinics: qualitative interview study. J. Med. Internet Res. 2021;23:e29301. doi: 10.2196/29301.
50. Schulte A, et al. Automatic speech recognition in the operating room–An essential contemporary tool or a redundant gadget? A survey evaluation among physicians in form of a qualitative study. Ann. Med. Surg. 2020;59:81–85. doi: 10.1016/j.amsu.2020.09.015.
51. Stifter J, et al. Acceptability of clinical decision support interface prototypes for a nursing electronic health record to facilitate supportive care outcomes. Int. J. Nurs. Knowl. 2018;29:242–252. doi: 10.1111/2047-3095.12178.
52. Norton WE, et al. Acceptability of the decision support for safer surgery tool. Am. J. Surg. 2015;209:977–984. doi: 10.1016/j.amjsurg.2014.06.037.
53. Walter S, et al. “What about automated pain recognition for routine clinical use?” A survey of physicians and nursing staff on expectations, requirements, and acceptance. Front. Med. 2020;7:566278. doi: 10.3389/fmed.2020.566278.
54. Jones EK, Banks A, Melton GB, Porta CM, Tignanelli CJ. Barriers to and facilitators for acceptance of comprehensive clinical decision support system–driven care maps for patients with thoracic trauma: interview study among health care providers and nurses. JMIR Hum. Factors. 2022;9:e29019. doi: 10.2196/29019.
55. Weng SF, Reps J, Kai J, Garibaldi JM, Qureshi N. Can machine-learning improve cardiovascular risk prediction using routine clinical data? PLoS ONE. 2017;12:1–14. doi: 10.1371/journal.pone.0174944.
56. Liu T, Fan W, Wu C. A hybrid machine learning approach to cerebral stroke prediction based on imbalanced medical dataset. Artif. Intell. Med. 2019;101:101723. doi: 10.1016/j.artmed.2019.101723.
57. Challen R, et al. Artificial intelligence, bias and clinical safety. BMJ Qual. Saf. 2019;28:231–237. doi: 10.1136/bmjqs-2018-008370.
58. Bedaf S, Marti P, Amirabdollahian F, de Witte L. A multi-perspective evaluation of a service robot for seniors: the voice of different stakeholders. Disabil. Rehabil. Assist. Technol. 2018;13:592–599. doi: 10.1080/17483107.2017.1358300.
59. Hebesberger D, Koertner T, Gisinger C, Pripfl J. A long-term autonomous robot at a care hospital: a mixed methods study on social acceptance and experiences of staff and older adults. Int. J. Soc. Robot. 2017;9:417–429. doi: 10.1007/s12369-016-0391-6.
60. Varshney, K. R. Engineering safety in machine learning. In 2016 Information Theory and Applications Workshop (ITA) (IEEE, 2017).
61. Ko Y, et al. Practitioners’ views on computerized drug-drug interaction alerts in the VA system. J. Am. Med. Inform. Assoc. 2007;14:56–64. doi: 10.1197/jamia.M2224.
62. Ruskin KJ, Hueske-Kraus D. Alarm fatigue: impacts on patient safety. Curr. Opin. Anaesthesiol. 2015;28:685–690. doi: 10.1097/ACO.0000000000000260.
63. Poncette A-S, et al. Improvements in patient monitoring in the intensive care unit: survey study. J. Med. Internet Res. 2020;22:e19091. doi: 10.2196/19091.
64. Recht M, Bryan RN. Artificial intelligence: threat or boon to radiologists? J. Am. Coll. Radiol. 2017;14:1476–1480. doi: 10.1016/j.jacr.2017.07.007.
65. Mayo RC, Leung J. Artificial intelligence and deep learning—radiology’s next frontier? Clin. Imaging. 2018;49:87–88. doi: 10.1016/j.clinimag.2017.11.007.
66. Sarwar S, et al. Physician perspectives on integration of artificial intelligence into diagnostic pathology. npj Digit. Med. 2019;2:1–7. doi: 10.1038/s41746-019-0106-0.
67. Rogove HJ, McArthur D, Demaerschalk BM, Vespa PM. Barriers to telemedicine: survey of current users in acute care units. Telemed. e-Health. 2012;18:48–53. doi: 10.1089/tmj.2011.0071.
68. Safi S, Thiessen T, Schmailzl KJG. Acceptance and resistance of new digital technologies in medicine: qualitative study. J. Med. Internet Res. 2018;7:e11072. doi: 10.2196/11072.
69. Bitterman DS, Aerts HJWL, Mak RH. Approaching autonomy in medical artificial intelligence. Lancet Digit. Health. 2020;2:e447–e449. doi: 10.1016/S2589-7500(20)30187-4.
70. Wartman SA, Combs CD. Medical education must move from the information age to the age of artificial intelligence. Acad. Med. 2018;93:1107–1109. doi: 10.1097/ACM.0000000000002044.
71. Kolachalama VB, Garg PS. Machine learning and medical education. npj Digit. Med. 2018;1:2–4. doi: 10.1038/s41746-018-0061-1.
72. Paranjape K, Schinkel M, Panday RN, Car J, Nanayakkara P. Introducing artificial intelligence training in medical education. JMIR Med. Educ. 2019;5:e16048. doi: 10.2196/16048.
73. Grunhut J, Marques O, Wyatt ATM. Needs, challenges, and applications of artificial intelligence in medical education curriculum. JMIR Med. Educ. 2022;8:1–5. doi: 10.2196/35587.
74. Hoff KA, Bashir M. Trust in automation: integrating empirical evidence on factors that influence trust. Hum. Factors. 2014;57:407–434. doi: 10.1177/0018720814547570.
75. Oksanen A, Savela N, Latikka R, Koivula A. Trust toward robots and artificial intelligence: an experimental approach to human–technology interactions online. Front. Psychol. 2020;11:568256. doi: 10.3389/fpsyg.2020.568256.
76. Choudhury, A. & Asan, O. Impact of cognitive workload and situation awareness on clinicians’ willingness to use an artificial intelligence system in clinical practice. IISE Trans. Healthc. Syst. Eng. 2022:1–12. doi: 10.1080/24725579.2022.2127035.
77. Kolltveit BCH, et al. Telemedicine in diabetes foot care delivery: health care professionals’ experience. BMC Health Serv. Res. 2016;16:1–8. doi: 10.1186/s12913-016-1377-7.
78. Abras, C., Maloney-Krichmar, D. & Preece, J. User-centered design. In Encyclopedia of Human–Computer Interaction Vol. 37 (ed. Bainbridge, W.) 445–456 (SAGE Publications, 2004).
79. Russell CL. An overview of the integrative research review. Prog. Transplant. 2005;15:8–13. doi: 10.1177/152692480501500102.
80. Whittemore R, Knafl K. The integrative review: updated methodology. J. Adv. Nurs. 2005;52:546–553. doi: 10.1111/j.1365-2648.2005.03621.x.
81. Hong QN, et al. The Mixed Methods Appraisal Tool (MMAT) version 2018 for information professionals and researchers. Educ. Inf. 2018;34:285–291.
82. Hand M, et al. A clinical decision support system to assist pediatric oncofertility: a short report. J. Adolesc. Young-Adult Oncol. 2018;7:509–513. doi: 10.1089/jayao.2018.0006.
83. Hsiao J-L, Wu W-C, Chen R-F. Factors of accepting pain management decision support systems by nurse anesthetists. BMC Med. Inform. Decis. Mak. 2013;13:1–13. doi: 10.1186/1472-6947-13-16.
84. Lin H-C, et al. From precision education to precision medicine: factors affecting medical staff’s intention to learn to use AI applications in hospitals. Technol. Soc. 2021;24:123–137.
85. Esmaeilzadeh P, Sambasivan M, Kumar N, Nezakati H. Adoption of clinical decision support systems in a developing country: antecedents and outcomes of physician’s threat to perceived professional autonomy. Int. J. Med. Inform. 2015;84:548–560. doi: 10.1016/j.ijmedinf.2015.03.007.
86. Strohm L, et al. Factors influencing the adoption of computerized medical diagnosing system for tuberculosis. JMIR Hum. Factors. 2021;9:1–12.
87. Zhai H, et al. Radiation oncologists’ perceptions of adopting an artificial intelligence-assisted contouring technology: model development and questionnaire study. J. Med. Internet Res. 2021;23:1–16. doi: 10.2196/27122.