Abstract
Objective
The aim of this study was to draw upon the collective knowledge of experts in the fields of health and technology to develop a questionnaire that measured healthcare professionals’ perceptions of Artificial Intelligence (AI).
Methods
The panel comprised carefully selected participants from the fields of health and information technology (IT) who demonstrated an interest and/or involvement in AI. Recruitment was conducted via an email that invited each panel member to participate and included study and consent information. Data were collected over three rounds in the form of an online survey, an online group meeting and email communication. A 75% median threshold was used to define consensus.
Results
Between January and March 2019, five healthcare professionals and three IT experts participated in three rounds of the study to reach consensus on the structure and content of the questionnaire. In Round 1, panel members identified issues about general understanding of AI and achieved consensus on nine draft questionnaire items. In Round 2, the panel achieved consensus on demographic questions, and comprehensive group discussion resulted in the development of two further questionnaire items for inclusion. In a final e-Delphi round, a draft of the final questionnaire was distributed via email to the panel members for comment. No further amendments were put forward and 100% consensus was achieved.
Conclusion
A modified e-Delphi method was used to develop and validate a questionnaire to explore healthcare professionals’ perceptions of AI. The e-Delphi method was successful in achieving consensus from an interdisciplinary panel of experts from health and IT. Further research is recommended to test the reliability of this questionnaire.
Keywords: Artificial intelligence, Digital health, Technology
Introduction
As our Australian healthcare system becomes digitally enabled, emerging technologies such as Artificial Intelligence (AI) are expected to profoundly change the way that healthcare is delivered.1–3 Machine learning and deep learning techniques are being tested in a wide range of areas within health, for example medical image analysis,4 disease epidemic surveillance,5 pathology classification6 and treatment support in community healthcare settings.7 The capabilities of AI are significant and compelling, and there is a growing expectation that it will enable a sustainable healthcare system and empower healthcare professionals to contribute to the improvement of patient outcomes, safety and care.8,9 Public perception of AI has changed in the last 10 years, with concern arising around its ethical implications and the lack of expected progress, but also more optimistic attitudes about AI’s promise for healthcare and a focus on its inclusion in education.10 Research into the merits of AI technology in healthcare is increasing,11 however workforce readiness and preparation are not well understood.3,12 This study aims to highlight the importance of research that places the healthcare professional at the centre of AI technology implementation, by developing a questionnaire that measures healthcare workforce perception of AI.
Workforce perception of AI
Workforce perception is a powerful indicator of organisational readiness and requires consideration in this new age of technological advancement.12 Technology adoption theories developed during the third industrial revolution in the 1980s, for example the popular technology acceptance model13 or diffusion of innovations theory,14 explored perception as a predictor of use and acceptance, which was useful for designers to improve technology characteristics and function. These theories found that the adoption of computers in healthcare was influenced by the prior experience, knowledge and skill set of the user.15 If the technology aligned with professional values, was trusted, easy to use and improved job performance, the healthcare professional would willingly accept it into their practice.16–18 The more difficult and complicated the technology, the less the user engaged, demonstrating the power of design, usability and usefulness.19–21
Socio-technical theory adds another dimension to this field of research by acknowledging the essential interdependencies that exist between individuals, organisations and technology.22 Developed in the 1940s by psychologists at the Tavistock Institute of Human Relations, this model explores the “de-humanising effect” of scientific breakthroughs, reveals powerful human factors that impact the quality, safety and value of new interventions, and demands a holistic review of technology implementation.22–25 Socio-technical studies have found that stakeholders within healthcare systems often hold different perceptions depending on their expectations and objectives for the technology.26–28 A healthcare professional’s focus on improving patient outcomes and clinical decision-making may differ from the organisation’s vision of maximising financial performance or workload productivity.28,29 The difficulty lies in finding an appropriate way to understand all stakeholder perceptions when implementing new technology like AI into healthcare.30
Research about perceptions of AI is emerging as organisations seek to understand workforce readiness; however, at this stage these studies largely focus on medical professionals.31 Jha et al.31 developed a survey instrument to measure American physicians’ perceptions about the impact of health information systems on primary care delivery, and found that physicians were sceptical about its ability to perform better than humans. A qualitative study measuring psychiatrists’ perceptions of AI role replacement32 found that their views on the value and impact of AI were divergent, but that they believed it would never replace the relational aspects of psychiatric care. Laï et al.33 used a qualitative approach to study physicians’ perceptions of AI, and found that they share concerns about the management of data, the development of knowledge, the upheaval of the doctor-patient relationship, and the disruption of the diagnosis and decision-making landscape. The Robot Use Self-efficacy in Healthcare (RUSH) study in Finland34 focused on the healthcare workforce more broadly and developed a theoretical questionnaire to measure perceived self-efficacy in task-specific robot use. It found that healthcare professionals were confident in their use of the technology and on average were very interested in its application. A survey of students was conducted35 to understand whether their perceptions of AI influenced their career intentions for radiology. Students perceived that AI would play an important role in their careers, and were less likely to pursue radiology due to the perception that AI would one day replace them.35 The introduction of AI into healthcare delivery requires an understanding of how the workforce perceives AI, so that formal processes and training can be implemented to support its management, use and application in healthcare.36–40 A questionnaire that measures healthcare professionals’ perception of AI more broadly does not yet exist.
Thus, the aim of this study was to draw upon the collective knowledge of experts in the fields of health and technology, to develop a questionnaire that measured healthcare professionals’ perceptions of AI.
The e-Delphi method
An e-Delphi method was chosen for this study to develop and validate a questionnaire that measures healthcare professionals’ perception of AI. Originally created in the 1950s by the RAND Corporation,41 the aim of the Delphi method is to gain consensus on group opinion.42 A panel of carefully selected participants who demonstrate an interest or involvement in the field related to the research42,43 are invited to participate in several rounds of feedback or discussion to provide an impartial reflection of current knowledge or perception.41,44
The main advantage of this method is that diverse individual opinions are distilled into a single representative group opinion.45 The Delphi method has been used in healthcare to establish research priorities,46 develop competencies and frameworks,47,48 and guide key components of intervention.49 Traditionally, the Delphi method uses a paper-based questionnaire to collect data from participants; however, digital approaches, called e-Delphi methods, are now being used.50,51 The use of electronic and internet-based questionnaires allows for a faster response time, points of anonymity and a reduction of resource costs. This study used the e-Delphi method to develop and validate a questionnaire that explores healthcare professionals’ perceptions of AI.
Materials and methods
Design
The study followed the traditional structure of the Delphi method, consisting of a series of structured rounds to facilitate discussion among experts and to reach consensus about the questionnaire.41 Instead of using post-mail to correspond, the e-Delphi method used an online survey platform, virtual meeting rooms and e-mail to facilitate discussion, collect data and provide structure to the research.52
Delphi method consensus remains disputed in the literature.19,22,31 Consensus for agreement in this study was based upon Diamond et al.’s53 systematic review, which proposed 75% as the median threshold to define consensus. At the outset of the study it was decided that group agreement greater than 75% on each question would be an acceptable level of consensus.
Participants
The panel comprised carefully selected participants from the fields of health and information technology (IT) who demonstrated an interest and/or involvement in AI.42,43,54 Recruitment was conducted via an email that invited each panel member to participate in the modified e-Delphi study and included study and consent information. A follow-up phone call provided the opportunity for further questions and clarification of the project. The participants were known to the researcher but remained anonymous to other panel members initially to encourage the expression of unbiased opinion, particularly in the first round.
Criteria for the selection of expert panel members were: a) either a healthcare professional registered with the Australian Health Practitioner Regulation Agency (AHPRA), with more than 10 years’ clinical experience in the acute or primary health sector, drawn from a broad range of disciplines and with an interest in health technology, or a professional from the IT sector with more than 10 years’ experience in technology development or project management related to healthcare; b) access to an email account; and c) willingness to participate. This diverse panel could provide an impartial reflection of current knowledge and perception in both the health and technology spheres.41
Data collection
Data were collected over three rounds of the modified e-Delphi study between January and March 2019, in the form of an online survey, an online group meeting and email communication. The study was conducted according to the National Statement on Ethical Conduct in Human Research (2007)55 and approved by the Southern Cross University Human Ethics Committee (HREC Register Number ECN-18-086).
Round 1: Identifying issues, structure and content
Round 1 was delivered as an electronic survey sent to each participant via the Qualtrics platform. The survey consisted of 18 open-ended questions designed to identify the key issues associated with developing the questionnaire and to establish its suggested structure and content (see Table 1). Preliminary questions for the questionnaire were informed by the validated Finnish-language Robot Use Self-efficacy in Healthcare (RUSH) questionnaire,34 for example: “I have been adequately trained to use AI technology that is specific to my role.” The panel could provide comments and suggest additional questions and topics for the questionnaire. Panel members had two weeks to respond, with individually completed surveys returned anonymously to the researcher.
Table 1.
Structure of Round 1 e-Delphi survey.
Topic | No. of questions |
---|---|
Key issues associated with a questionnaire that explores healthcare professionals’ perceptions of AI | 1 |
Review and comment of suggested demographic items | 5 |
Review and comment of suggested perception items | 10 |
Comments and suggestions of further items for discussion in Round 2 | 2 |
Round 2: Consensus on draft questionnaire
At the completion of Round 1, small group meetings were held to enable more robust discussion of the Round 1 feedback. A draft questionnaire was presented and discussed. Each panel member was given an electronic version of the draft questionnaire via email prior to the meeting, and this was also presented on screen during the meeting. Three one-hour Zoom meetings were held, with 2–3 attendees in each according to their availability. Panel discussions were documented electronically and experts were asked to indicate agreement or disagreement verbally. For questions where there was disagreement, open discussion was facilitated and 75% group consensus was required to determine the outcome.
Round 3: Final feedback and consensus on questionnaire
The questionnaire was further revised following Round 2 and formatted to reflect the consensus agreement. This was sent via email to all panel members for individual review. As Round 2 was not anonymous, Round 3 provided the opportunity for independent review of the final questionnaire draft and any further comment. It had been agreed in the small group meetings that if response exhaustion was reached (i.e. agreement on the questionnaire content without further changes) with over 75% group consensus in Round 3, then Round 4 would not be necessary.
Results
Between January and March 2019, five healthcare professionals and three IT experts participated in three rounds of the e-Delphi study to reach consensus on the structure and content of the questionnaire that was designed to explore healthcare professionals’ perceptions of AI.
Panel characteristics
The panel included experts from a range of professions, including four healthcare professionals from optometry, medicine, nursing and allied health, and four IT professionals from health, science, engineering and finance backgrounds. The majority of panel members were knowledgeable about AI, but not all had substantial hands-on experience with AI applications. They had a diverse range of industry experience: four working as clinicians, three as technologists, two as business owners, two as academics, one as a program manager and three as executive-level professionals, or a combination of these. Panel members were located in three Australian states (Victoria, Queensland, New South Wales). All eight panel members participated in all three rounds. The response rate to the online survey, group discussion and email discussion was 100% for each round, and response exhaustion was reached within these three rounds.
Round 1: Identifying issues, structure and content
Panel responses identified issues about general understanding of AI and established demographic variables that would be of interest to future studies (see Table 2). The panel suggested that AI was not well understood by healthcare professionals, that AI education was not yet established in healthcare, and that a universal, user-friendly definition of AI was needed to precede the questionnaire. Because the cohorts being surveyed would be diverse, the panel suggested that this definition should also be supported by clinical examples of AI. The panel achieved consensus on demographic questions related to age, but did not achieve consensus on the range of discipline, gender and job description options, suggesting these needed to be inclusive of the diversity found within healthcare settings (see Table 3). These items were taken to Round 2 for further discussion. Nine draft items measuring perception of AI achieved more than 75% consensus from panel members, covering a range of topics including perceptions of use of AI, financial and ethical impact, training, as well as broad perceptions of AI’s impact on healthcare (see Table 3). The panel suggested that AI’s impact on role be given consideration in the questionnaire, and this was taken to Round 2 for discussion in the small group meetings.
Table 2.
Issues raised by panel in Round 1.
Understanding of AI | • “If participants have little or no understanding of AI how valid are their responses to the perception of AI in their workplaces?”• “Perhaps you need to measure their understanding of AI at the beginning of the survey and then provide some more specific definitions and examples to gain more accurate information around use of AI in their workplaces.” • “The information about AI should be participant friendly - particularly for those participants with a limited understanding of AI.” • “Without being aware of the full range of available AI, I wonder if more specific examples would assist participants to identify the use of AI in their work. This also makes me think that at the beginning of the survey some inclusion of examples of things that are not AI but might be considered to be AI by participants might be useful.” |
Demographics | Use: • “what will the differences between users and non-users be? Does technology use impact perception and will it be an important variable to include?” Discipline: • “Not all disciplines within Australia have been represented as options” • “I am interested in the options provided - assuming that these are the professions that you are targeting I am wondering why pharmacy and paramedicine are not represented?” Gender: • “The issue of gender is becoming an increasingly contested and sensitive area. I recommend careful consideration for inclusion of this question as well as the options provided.” Job Description: • “How would Healthcare assistants respond to the previous question? Perhaps provide some more Allied Health specific options such as Discipline Team Leader, Allied Health Manager, Clinical Educator. How would an Informatician answer the previous question? I am interested in the way you have scaffolded the responses to this question - I imagine most respondents would be clinicians - would it be possible to have this as the first option followed by all other clinical options and then management options?” |
Further Items | • “Have you considered questions about AI taking over part of role? This could be really interesting” |
Table 3.
Consensus items generated from Round 1 and Round 2.
Round 1 |
---|
Demographic item: Age |
Item 1. The use of AI in my specialty could improve the delivery of direct patient care |
Item 2. The use of AI in my specialty could improve clinical decision making |
Item 3. The use of AI could improve population health outcomes |
Item 4. The introduction of AI will reduce financial costs associated with my role |
Item 5. I have been adequately trained to use AI that is specific to my role. |
Item 6. AI may take over part of my role as a healthcare professional |
Item 7. There is an ethical framework in place for the use of AI technology in my workplace |
Item 8. Should AI technology make an error; full responsibility lies with the healthcare professional |
Item 9. The introduction of AI will change my role as a healthcare professional in the future |
Round 2. |
Demographic item: gender options, AI use, Discipline options, job description options |
Item 1. AI will change my role as a healthcare professional in the future |
Item 2. Overall healthcare professionals are prepared for the introduction of AI technology |
Round 2: Online group meeting
The small group meetings for Round 2 lasted one hour each and were held in three sessions with 2–3 participants in each. In Round 2 the panel achieved consensus on the remaining demographic questions, which included use of AI, for example: “Based on your knowledge of the technology that you are currently using within your role, have you been using AI technology?” (see Table 3). Consensus was achieved (100%) for gender, AI use, health discipline and job description options, which were sufficiently comprehensive to include all healthcare professionals practising in Australia. Two further items were developed by the panel to measure perception of AI, and consensus was achieved on all 11 items (see Table 3). These included questions regarding perceptions of impact on role, for example: “I believe that AI will change my role as a healthcare professional in the future”, and questions regarding perceptions of professional preparedness, for example: “I believe that I have been adequately trained to use AI that is specific to my role”.
Round 3: Email communication
In a final e-Delphi round, a draft of the final questionnaire was distributed via email to the panel members for comment. No further amendments were put forward and 100% consensus was achieved.
Discussion
The aim of this study was to develop and validate a questionnaire that will measure healthcare professionals’ perceptions of AI. Perception is a powerful indicator of workforce readiness, and future research that utilises a user-centred approach to technology will be needed to underpin formal AI processes and training, maximise engagement, and support its management, use and application.56 There is growing acknowledgement that, given the potential impact of AI, expert voices in industry, research and policy should pay more attention to the perceptions and understandings of those that are currently underrepresented, in this case healthcare professionals.57 Healthcare workforce perceptions will be a key factor in determining successful implementation and will impact future societal applications of AI in healthcare.58 To our knowledge, this study is the first of its kind to focus on healthcare professionals’ perceptions of AI. Through the use of an e-Delphi method, an interdisciplinary panel of experts obtained consensus on an 11-item questionnaire and raised important issues that require consideration in the future.
Panel members identified early that the questionnaire should be developed on the presumption that healthcare professionals have not yet received education about AI. Healthcare workforce education is at the forefront of global discussion, and the need to improve digital competencies and understanding has been emphasised frequently at the international policy level.59–63 A recent study by Monash University explored public attitudes towards AI technologies in Australia and found that survey participants changed their initial opinions and preconceptions about AI when provided with education.64 Research into undergraduate, postgraduate and specialised medical professionals’ knowledge of AI has begun, in an effort to build a framework for education in the future.35,65–67 These studies acknowledge that understanding of AI is limited amongst medical students and that deciding on the content of education is challenging, but training will be necessary to realise the full capacity of this technology. Education in the healthcare setting needs to be interdisciplinary, and the absence of literature exploring AI education in other health disciplines suggests that further research is needed.
The impact of technology use on perceptions of AI was also raised during the Delphi study. From the panel discussions, it was apparent that this relationship was human-centred in nature, and ‘use’ was defined simply as current engagement with AI technology. This differs from the technology acceptance model, which posits that perceptions of ease and usefulness of technology impact intention to use, which relates to product design.13 It is not well understood how much AI technology currently exists in the healthcare setting, although estimates are that it is currently used in diagnosis and treatment recommendations, patient engagement and adherence, and administrative activities.68 The relationship between technology perception and use is thought to be bi-directional, with increased engagement influencing healthcare professionals’ perceptions of technology capabilities, professional competence and trust.69–71 Conversely, pre-existing perceptions can negatively impact technology use if healthcare professionals do not understand how it will enhance performance or improve care delivery.72 Insights into healthcare professionals’ current use of AI will inform future studies that explore workforce perceptions.
The use of the Delphi method for this study demonstrates the value of an interdisciplinary approach, which should also be adopted in the design and implementation of AI technology in healthcare. Many believe that a closer relationship is essential between the innovators and developers from the fields of data science and technology; and the key stakeholders within healthcare, who understand the priorities, risks and context of care delivery.73-75 Interdisciplinary collaboration will ensure that the perceptions of two very different industries are represented and a balanced approach to AI technology can be implemented.76
Limitations
The selection of a small number of panel experts could be considered a limitation of this study; however, it is equally important in the Delphi method to ensure that there is not an over-representation of panel members. The lack of anonymity in the Round 2 small group meetings may have limited contribution or compelled panel members to conform to the discussion. The face-to-face structure of the round may have compromised validity when panel members were faced with strong opinions and subsequently changed their view. To manage this, panel members were explicitly asked to consider opposing points of view, thereby eliciting further discussion and relieving socio-psychological pressure. The final Round 3 minimised bias by allowing individuals to provide further anonymous comment. The transparent nature of the study design, informed by Diamond et al.’s53 quality indicators, is thought to have led to the full attendance of participants, facilitating high quality feedback and contributing to the rigour of the study.
Conclusion
An e-Delphi method was used to develop and validate a questionnaire to explore healthcare professionals’ perceptions of AI. The questionnaire aims to understand how the workforce perceives AI so that formal processes and training can be implemented to support its management, use and application in healthcare. The e-Delphi method was successful in achieving consensus from an interdisciplinary panel of experts from health and IT. Further research is recommended to test the reliability of this questionnaire.
Acknowledgements
We would like to thank our expert panel for their participation in this research.
Footnotes
Contributorship: LS researched literature and all authors conceived the study. LS was involved in protocol development, gaining ethical approval, expert recruitment and data analysis. LS wrote the first draft of the manuscript. All authors reviewed and edited the manuscript and approved the final version of the manuscript.
Declaration of conflicting interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Ethical approval: The human research ethics committee of Southern Cross University approved this study (ECN-18-086).
Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.
Guarantor: LS takes responsibility for the article, including for the accuracy and appropriateness of the reference list.
Peer Review: Douglas Archibald, University of Ottawa Faculty of Medicine have reviewed this manuscript.
ORCID iD: Lucy Shinners https://orcid.org/0000-0002-7160-5838
References
- 1. Bughin J, Hazan E, Ramaswamy S, et al. Artificial intelligence: the next digital frontier. USA: McKinsey Global Institute, 2017.
- 2. Hamet P, Tremblay J. Artificial intelligence in medicine. Metabolism 2017; 69S: S36–S40.
- 3. Goldsack J, Zanetti C. Digital era of medicine. Digit Biomark 2020; 4: 136–142.
- 4. Dutta S, Long WJ, Brown DFM, et al. Automated detection using natural language processing of radiologists recommendations for additional imaging of incidental findings. Ann Emerg Med 2013; 62: 162–169.
- 5. Fernandez-Granero MA, Sanchez-Morillo D, Leon JA. An artificial intelligence approach to early predict symptom-based exacerbations of COPD. Biotechnol Biotechnol Equip 2018; 32: 778–784.
- 6. Garcia-Chimeno Y, Garcia-Zapirain B. HClass: automatic classification tool for health pathologies using artificial intelligence techniques. Biomed Mater Eng 2015; 26: S1821–S1828.
- 7. D’Alfonso S, Santesteban-Echarri O, Rice S, et al. Artificial intelligence-assisted online social therapy for youth mental health. Front Psychol 2017; 8: 796.
- 8. Humans versus artificial intelligence. Nurse Pract 2015; 40: 13.
- 9. Powles J, Hodson H. Google DeepMind and healthcare in an age of algorithms. Health Technol 2017; 7: 351–367.
- 10. Fast E, Horvitz E. Long-term trends in the public perception of artificial intelligence. In: Proceedings of the AAAI Conference on Artificial Intelligence, 2017.
- 11. Russell S, Norvig P. Artificial intelligence: a modern approach. 3rd ed. England: Pearson, 2016.
- 12. Alami H, Lehoux P, Auclair Y, et al. Artificial intelligence and health technology assessment: anticipating a new level of complexity. J Med Internet Res 2020; 22: e17707.
- 13. Davis FD. A technology acceptance model for empirically testing new end-user information systems: theory and results. Massachusetts Institute of Technology, 1985.
- 14. Rogers EM. Diffusion of innovations. New York/London: Free Press/Collier Macmillan, 1983.
- 15. Martínez-Torres MR, Toral Marín SL, García FB, et al. A technological acceptance of e-learning tools used in practical and laboratory teaching, according to the European higher education area. Behav Inform Technol 2008; 27: 495–505.
- 16. Stevenson JE, Nilsson GC, Petersson GI, et al. Nurses’ experience of using electronic patient records in everyday practice in acute/inpatient ward settings: a literature review. Health Inform J 2010; 16: 63–72.
- 17. Shinners L, Aggar C, Grace S, et al. Exploring healthcare professionals’ understanding and experiences of artificial intelligence technology use in the delivery of healthcare: an integrative review. Health Inform J 2020; 26: 1225–1236.
- 18. Nadarzynski T, Miles O, Cowie A, et al. Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: a mixed-methods study. Digit Health 2019; 5: 2055207619871808.
- 19. Baaren E, van de Wijngaert L, Huizer E. Understanding technology adoption through individual and context characteristics: the case of HDTV. J Broadcast Electron Media 2011; 55: 72–89.
- 20. Ghazizadeh M, Lee JD, Boyle LN. Extending the technology acceptance model to assess automation. Cogn Tech Work 2012; 14: 39–49.
- 21. Dünnebeil S, Sunyaev A, Blohm I, et al. Determinants of physicians’ technology acceptance for e-health in ambulatory care. Int J Med Inform 2012; 81: 746–760.
- 22. Whetton S, Georgiou A. Conceptual challenges for advancing the socio-technical underpinnings of health informatics. Open Med Inform J 2010; 4: 221–224.
- 23. Berg M. Patient care information systems and health care work: a sociotechnical approach. Int J Med Inform 1999; 55: 87–101.
- 24. Singh H, Sittig DF. A sociotechnical framework for safety-related electronic health record research reporting: the SAFER reporting framework. Ann Intern Med 2020; 172: S92–S100.
- 25. Danholt P. The sociotechnical configuration of the problem of patient safety. Stud Health Technol Inform 2010; 157: 31–37.
- 26. Blease C, Kaptchuk T, Bernstein M, et al. Artificial intelligence and the future of primary care: exploratory qualitative study of UK general practitioners’ views. J Med Internet Res 2019; 21: e12802.
- 27. Blease C, Bernstein MH, Gaab J, et al. Computerization and the future of primary care: a survey of general practitioners in the UK. PLoS One 2018; 13: 1–12.
- 28. Fan W, Liu J, Zhu S, et al. Investigating the impacting factors for the healthcare professionals to adopt artificial intelligence-based medical diagnosis support system (AIMDSS). Ann Oper Res 2020; 294: 567–592.
- 29. Moores TT. An integrated model of IT acceptance in healthcare. Decis Support Syst 2012; 53: 507–516.
- 30. Alami H, Lehoux P, Denis J-L, et al. Organizational readiness for artificial intelligence in health care: insights for decision-making and practice. J Health Organ Manag 2020; 35: 106–114.
- 31. Jha S, Topol EJ. Adapting to artificial intelligence: radiologists and pathologists as information specialists. JAMA 2016; 316: 2353–2354.
- 32. Blease C, Locher C, Leon-Carlyle M, et al. Artificial intelligence and the future of psychiatry: qualitative findings from a global physician survey. Digit Health 2020; 6: 2055207620968355.
- 33.Laï MC, Brian M, Mamzer MF. Perceptions of artificial intelligence in healthcare: findings from a qualitative survey study among actors in France. J Transl Med 2020; 18: 14. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34.Turja T, Rantanen T, Oksanen A. Robot use self-efficacy in healthcare work (RUSH): development and validation of a new measure. AI Soc 2019; 34: 137–143. [Google Scholar]
- 35.Sit C, Srinivasan R, Amlani A, et al. Attitudes and perceptions of UK medical students towards artificial intelligence and radiology: a multicentre survey. Insights Imaging 2020; 11: 14. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36.Australian Government. Australia's National Digital Health Strategy. Safe, seamless and secure: evolving health and care to meet the needs of modern Australia. Australian Digital Health Agency, Australia, 2015–2017.
- 37.Morren L. Technology paradoxes in the practice of healthcare. Enschede, The Netherlands: University of Twente, 2019. [Google Scholar]
- 38.Meißner A, Schnepp W. Staff experiences within the implementation of computer-based nursing records in residential aged care facilities: a systematic review and synthesis of qualitative research. BMC Med Inform Decis Mak 2014; 14: 54. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39.Strudwick G, Booth R, Mistry K. Can social cognitive theories help us understand nurses' use of electronic health records? Comput Inform Nurs 2016; 34: 169–174. [DOI] [PubMed] [Google Scholar]
- 40.Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q 1989; 13: 319–340. [Google Scholar]
- 41.Keeney S, Hasson F, McKenna HP. A critical review of the Delphi technique as a research methodology for nursing. Int J Nurs Stud 2001; 38: 195–200. [DOI] [PubMed] [Google Scholar]
- 42.Goodman CM. The Delphi technique: a critique. J Adv Nurs 1987; 12: 729–734. [DOI] [PubMed] [Google Scholar]
- 43.McKenna HP. The Delphi technique: a worthwhile research approach for nursing? J Adv Nurs 1994; 19: 1221–1225. [DOI] [PubMed] [Google Scholar]
- 44.Hasson F, Keeney S. Enhancing rigour in the Delphi technique research. Technol Forecast Soc Change 2011; 78: 1695–1704. [Google Scholar]
- 45.Von der Gracht HA. Consensus measurement in Delphi studies: review and implications for future quality assurance. Technol Forecast Soc Change 2012; 79: 1525–1536. [Google Scholar]
- 46.Dajani JS, Sincoff MZ, Talley WK. Stability and agreement criteria for the termination of Delphi studies. Technol Forecast Soc Chang 1979; 13: 83–90. [Google Scholar]
- 47.Dalkey N. An experimental study of group opinion: the Delphi method. Futures 1969; 1: 408–426. [Google Scholar]
- 48.Steele SG, Booy R, Mor SM. Establishing research priorities to improve the one health efficacy of Australian general practitioners and veterinarians with regard to zoonoses: a modified Delphi survey. One Health 2018; 6: 7–15. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49.Helms C, Gardner A, McInnes E. Consensus on an Australian nurse practitioner specialty framework using Delphi methodology: results from the CLLEVER 2 study. J Adv Nurs 2017; 73: 433–447. [DOI] [PubMed] [Google Scholar]
- 50.Schofield R, Chircop A, Baker C, et al. Entry-to-practice public health nursing competencies: a Delphi method and knowledge translation strategy. Nurse Educ Today 2018; 65: 102–107. [DOI] [PubMed] [Google Scholar]
- 51.DelGiudice NJ, Street N, Torchia RJ, et al. Vitamin D prescribing practices in primary care pediatrics: underpinnings from the health belief model and use of web-based Delphi technique for instrument validity. J Pediatr Health Care 2018; 32: 536–547. [DOI] [PubMed] [Google Scholar]
- 52.Boulkedid R, Abdoul H, Loustau M, et al. Using and reporting the Delphi method for selecting healthcare quality indicators: a systematic review. PLoS One 2011; 6: e20476. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 53.Gill FJ, Leslie GD, Grech C, et al. Using a web-based survey tool to undertake a Delphi study: application for nurse education research. Nurse Educ Today 2013; 33: 1322–1328. [DOI] [PubMed] [Google Scholar]
- 54.Donohoe H, Stellefson M, Tennant B. Advantages and limitations of the e-Delphi technique. Am J Health Educ 2012; 43: 38–46. [Google Scholar]
- 55.Diamond IR, Grant RC, Feldman BM, et al. Defining consensus: a systematic review recommends methodologic criteria for reporting of Delphi studies. J Clin Epidemiol 2014; 67: 401–409. [DOI] [PubMed] [Google Scholar]
- 56.Powell C. The Delphi technique: myths and realities. J Adv Nurs 2003; 41: 376–382. [DOI] [PubMed] [Google Scholar]
- 57.Anderson W. 2007 National Statement on Ethical Conduct in Human Research. Intern Med J 2011; 41: 581–582. [DOI] [PubMed] [Google Scholar]
- 58.Yardley L, Morrison L, Bradbury K, et al. The person-based approach to intervention development: application to digital health-related behavior change interventions. J Med Internet Res 2015; 17: e30. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 59.Zhang B, Dafoe A. Artificial intelligence: American attitudes and trends. SSRN 3312874, 2019.
- 60.Vinuesa R, Azizpour H, Leite I, et al. The role of artificial intelligence in achieving the sustainable development goals. Nat Commun 2020; 11: 233. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 61.European Commission. Communication on enabling the digital transformation of health and care in the Digital Single Market; empowering citizens and building a healthier society. UK: European Commission, 2018. [Google Scholar]
- 62.Topol EJ. Preparing the healthcare workforce to deliver the digital future. UK: United Kingdom National Health Service, 2019. [Google Scholar]
- 63.Australian Government. The eHealth readiness of Australia’s Allied Health Sector. Australia: Commonwealth of Australia, 2011. [Google Scholar]
- 64.Australian Digital Health Agency. National digital health workforce and education roadmap. Sydney, NSW: Government of Australia, 2020. [Google Scholar]
- 65.World Health Organization. WHO guideline: recommendations on digital interventions for health system strengthening: web supplement 2: summary of findings and GRADE tables. Geneva: World Health Organization, 2019. [PubMed] [Google Scholar]
- 66.Selwyn N, Gallo Cordoba B, Andrejevic M, et al. AI for social good? Australian public attitudes toward AI and society. Clayton, Victoria: Monash University, 2020. [Google Scholar]
- 67.Dos Santos DP, Giese D, Brodehl S, et al. Medical students' attitude towards artificial intelligence: a multicentre survey. Eur Radiol 2019; 29: 1640–1646. [DOI] [PubMed] [Google Scholar]
- 68.Jindal A, Manishi B. Knowledge and education about artificial intelligence among medical students from teaching institutions of India: a brief survey. MedEdPublish 2020; 9: 200. [Google Scholar]
- 69.Chan KS, Zary N. Applications and challenges of implementing artificial intelligence in medical education: integrative review. JMIR Med Educ 2019; 5: e13930. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 70.Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J 2019; 6: 94–98. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 71.Ward R. The application of technology acceptance and diffusion of innovation models in healthcare informatics. Health Policy Technol 2013; 2: 222–228. [Google Scholar]
- 72.Zayyad MA, Toycan M. Factors affecting sustainable adoption of e-health technology in developing countries: an exploratory survey of Nigerian hospitals from the perspective of healthcare professionals. PeerJ 2018; 6: e4436. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 73.Venkatesh V, Morris MG, Davis GB, et al. User acceptance of information technology: toward a unified view. MIS Q 2003; 27: 425–478. [Google Scholar]
- 74.Nemeth LS, Feifer C, Stuart GW, et al. Implementing change in primary care practices using electronic medical records: a conceptual framework. Implement Sci 2008; 3: 3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 75.Homes S, Nugent BC, Augusto JC. Human-robot user studies in eldercare: lessons learned. In: Smart homes and beyond: ICOST 2006, p. 4.
- 76.Liyanage H, Liaw S-T, Jonnagaddala J, et al. Artificial intelligence in primary health care: perceptions, issues, and challenges. Yearb Med Inform 2019; 28: 41–46. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 77.Nilsen ER, Dugstad J, Eide H, et al. Exploring resistance to implementation of welfare technology in municipal healthcare services – a longitudinal case study. BMC Health Serv Res 2016; 16: 657. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 78.Patel VL, Shortliffe EH, Stefanelli M, et al. The coming of age of artificial intelligence in medicine. Artif Intell Med 2009; 46: 5–17. [DOI] [PMC free article] [PubMed] [Google Scholar]