Abstract
Artificial intelligence technology has advanced rapidly in recent years and has the potential to improve healthcare outcomes. However, technology uptake will be largely driven by clinicians, and there is a paucity of data on clinicians' attitudes toward this new technology. In June–August 2019 we conducted an online survey on artificial intelligence among fellows and trainees of three specialty colleges (ophthalmology, radiology/radiation oncology, dermatology) in Australia and New Zealand. There were 632 complete responses (n = 305, 230, and 97, respectively), equating to response rates of 20.4%, 5.1%, and 13.2% for the three colleges, respectively. The majority believed artificial intelligence would improve their field of medicine (n = 449, 71.0%) and that medical workforce needs would be impacted by the technology beyond the next decade (n = 542, 85.8%). Improved disease screening and the streamlining of monotonous tasks were identified as key benefits of artificial intelligence. The divestment of healthcare to technology companies and the implications for medical liability were the greatest concerns. Education was identified as a priority to prepare clinicians for the implementation of artificial intelligence in healthcare. This survey highlights parallels between the perceptions of different clinician groups in Australia and New Zealand about artificial intelligence in medicine. Artificial intelligence was recognized as a valuable technology that will have wide-ranging impacts on healthcare.
Subject terms: Health policy, Software, Health care, Health occupations, Medical research, Mathematics and computing
Introduction
Nomograms and clinical algorithms have long played a role in supporting clinical decision-making in medicine. More recently, major advances in research on artificial intelligence (AI) have been applied to medical image analysis with promising results1,2. This technology is now poised for clinical application. In order for AI to fully deliver its potential benefits for healthcare, medical practitioners need to understand and embrace the technology. Equally, patients need to entrust aspects of their healthcare to AI systems.
Numerous AI image analysis algorithms have been shown to achieve high-level performance, in some instances comparable to that of human experts, in the detection of a range of diseases in ophthalmology3–5, radiology6–8, dermatology9,10, and in other fields of medicine. A significant milestone in the clinical application of AI came in 2018 with the US Food and Drug Administration approval of the IDx-DR system for the autonomous detection of referable diabetic retinopathy—the first such approval in any field of medicine2. Although the application of this technology is nascent, it has the potential to transform aspects of healthcare11.
As is true for any new medical technology, the extent to which AI algorithms for screening, diagnosis or prognosis are adopted in medicine will be dependent upon the attitudes of clinicians and patients. Consideration of these attitudes and knowledge gaps will therefore be essential for health systems, medical educators, professional bodies, AI developers and regulators. Few studies have examined clinician perceptions of the impact of new AI technologies on healthcare provision and the clinical workforce. Those that have been conducted have surveyed the potential impacts of AI in samples of medical students12–14, radiology trainees15, radiologists16–18, pathologists19, psychiatrists20, general practitioners (GPs)21, as well as general physicians, surgeons, and trainees22,23, with contrasting results. Most surveys indicate that clinicians believe that AI will have a positive impact on their profession13,16–19,22, whereas psychiatrists and general practitioners indicated that AI would not affect their professions20,21. In contrast, surveys of medical students12 and radiology trainees15 have highlighted concerns about the implications of AI for training and employment prospects24–27.
Areas of medicine that are most reliant on imaging will be amongst the first to be impacted by advances in AI technologies28. These include ophthalmology, radiology/radiation oncology and dermatology. In this study we conducted a survey of fellows and trainees of these specialty colleges in Australia and New Zealand in order to ascertain their current use, understanding and perceptions of AI.
Results
Demographics
A total of 632 trainees and fellows from three specialty colleges (305 ophthalmology, 230 radiology/radiation oncology, and 97 dermatology) completed the survey (Supplementary Tables 1 and 2). Completed surveys were received from 20.4% of members of the Royal Australian and New Zealand College of Ophthalmologists (RANZCO) (1279 fellows, 212 trainees), 5.1% of members of the Royal Australian and New Zealand College of Radiologists (RANZCR) (4505 fellows and trainees) and 13.2% of members of the Australasian College of Dermatologists (ACD) (621 fellows, 113 trainees). RANZCR members included both radiologists (n = 199; 86.5% of RANZCR respondents) and radiation oncologists (n = 31; 13.5% of RANZCR respondents). ACD respondents were from Australia, whereas RANZCO and RANZCR respondents were from Australia and New Zealand. Incomplete surveys (n = 85/717, 11.9%) were excluded from analyses (Supplementary Table 3). Responses from completed surveys are summarised in Supplementary Table 4.
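As a point of reference, the response-rate arithmetic reported above can be reproduced with a minimal sketch like the one below. The eligible denominators are assumptions taken from the membership figures quoted in the text (fellows plus trainees where both are given); the exact eligible populations at the time of the survey may have differed slightly, so the rounded output may not match the published percentages in every case.

```python
# Response-rate arithmetic using the membership figures quoted in the text.
colleges = {
    "RANZCO": {"eligible": 1279 + 212, "completed": 305},
    "RANZCR": {"eligible": 4505, "completed": 230},
    "ACD": {"eligible": 621 + 113, "completed": 97},
}

for name, c in colleges.items():
    rate = 100 * c["completed"] / c["eligible"]
    print(f"{name}: {c['completed']}/{c['eligible']} = {rate:.1f}%")

# Complete-case exclusion: 632 complete responses out of 717 surveys returned.
returned, complete = 717, 632
excluded = returned - complete
print(f"Incomplete surveys excluded: {excluded}/{returned} = {100 * excluded / returned:.1f}%")
```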
Respondents predominantly practiced in metropolitan areas (n = 460, 72.8%), followed by a combination of metropolitan and rural practice (n = 102, 16.1%) and rural practice only (n = 70, 11.1%). Ophthalmologists were more likely than radiologists/radiation oncologists or dermatologists to spend at least part of their time in a rural setting (32.5%, 21.7%, and 23.7%, respectively; p = 0.064). Respondents from each group had similar years of professional experience (p = 0.608), and almost half of all respondents had been in practice for 20 years or more (n = 303, 47.9%) (see Supplementary Table 2).
Current knowledge and use of artificial intelligence in clinical practice
Almost half of respondents (n = 301, 47.6%) rated their knowledge of AI as average relative to their peers, with few rating their knowledge as excellent (n = 35, 5.5%) or very poor (n = 31, 4.9%) (Fig. 1; Supplementary Table 4). Responses were similar across the three professional groups (p = 0.542). Most respondents indicated that they had never used AI applications in their work as a clinician (n = 511, 80.9%). Ophthalmologists were more than twice as likely as radiologists/radiation oncologists or dermatologists to use AI in their daily clinical practice (15.7%, 6.1%, and 5.2%, respectively; p = 0.001) (Fig. 2). Among the 67 ophthalmologists who specified how AI was utilized in their clinics, applications included those for glaucoma progression analysis, optical coherence tomography assessment, diabetic retinopathy image assessment and intraocular lens (IOL) power calculation (Supplementary Table 5). Among the 40 radiologists/radiation oncologists who provided details as to how they use AI, applications included those for automated organ and lesion detection using computer-aided detection of computed tomography and magnetic resonance imaging (Supplementary Table 5). Among the 8 dermatologists who did so, the most common applications included those for skin lesion surveillance and voice transcription (Supplementary Table 5). Respondents who reported using AI had higher self-reported knowledge of AI than those who did not (p = 0.008).
Perceived impact of artificial intelligence on the profession
Most respondents predicted that the introduction of AI would improve their field of practice (n = 449, 71.0% agreed or strongly agreed, Fig. 3). Three fifths of respondents (n = 379, 60.0%) believed that it would be ≤ 5 years before AI had a noticeable impact on their specialty (Fig. 4). No respondents reported that AI would never have a noticeable impact on their profession. Respondents who reported using AI predicted that the impact would be apparent sooner than respondents who did not (p < 0.001).
The majority of respondents (n = 452, 71.1%) reported that AI would impact workforce needs ‘somewhat’ or ‘to a great extent’ in these specialty areas of medicine within the next decade. Most respondents (n = 542, 85.8%) believed that AI would impact workforce needs ‘somewhat’ or ‘to a great extent’ beyond the next decade. Radiologists/radiation oncologists were more likely than ophthalmologists or dermatologists to report that workforce needs would be impacted ‘to a great extent’ within the next decade (54.4%, 41.6%, and 43.3%, respectively; p = 0.04) (Supplementary Table 4; Fig. 5).
The majority of radiologists/radiation oncologists (n = 202, 87.8%) and dermatologists (n = 53, 54.6%) considered that their medical specialist workforce would be impacted by AI to a greater extent than other health professionals in their field of practice (such as radiographers or general practitioners). In contrast, ophthalmologists considered that the impact of AI on the workforce would be greater for optometrists than for general practitioners or ophthalmologists. Respondents who reported using AI were more likely than those who did not to report that the introduction of AI would lead to a need for increased workforce numbers (p = 0.006).
Acceptable artificial intelligence performance standards and clinical workflows
Most survey respondents considered that AI systems would need to achieve performance that was superior to the average performing specialist when applied to screening for disease (n = 405, 64.1%, Fig. 6A) or for diagnostic decision support (n = 506, 80.1%, Fig. 6B).
Radiologists/radiation oncologists and dermatologists were twice as likely as ophthalmologists to consider that AI screening systems should achieve performance superior to the best-performing specialist (21.7%, 20.6%, and 10.5%, respectively; p = 0.005, Fig. 6A).
Expectations for system performance were even higher when AI is used for diagnostic decision support by specialists. Radiologists/radiation oncologists and dermatologists were again each more likely than ophthalmologists to expect AI systems to be superior to the best-performing specialist when used for decision support (30.9%, 23.9%, and 19.0%, respectively; p = 0.035, Fig. 6B).
A hypothetical clinical workflow was proposed as follows: “patient clinical images undergo artificial intelligence analysis. A specialist subsequently reviews both the image and the artificial intelligence findings”. Most respondents would consider using this clinical workflow; however, significantly more ophthalmologists and radiologists/radiation oncologists than dermatologists were willing to consider it (82.0%, 82.6%, and 67.0%, respectively; p < 0.001).
Perceived advantages of the use of artificial intelligence
The top three ranked potential advantages of AI were (1) improved patient access to disease screening, (2) improved diagnostic confidence, and (3) reduced time spent by specialists on monotonous tasks. The top ranked advantage for ophthalmologists and dermatologists was ‘improved patient access to disease screening’ and the top ranked advantage for radiologists/radiation oncologists was ‘reduced time spent on monotonous tasks’ (Fig. 7).
Concerns about the use of artificial intelligence
The top three potential concerns about the use of AI were (1) concerns over the divestment of healthcare to large technology and data companies, (2) concerns over medical liability due to machine error, and (3) decreasing reliance on medical specialists for diagnosis and treatment advice. The top ranked concern for ophthalmologists and radiologists/radiation oncologists was ‘concerns over the divestment of healthcare to large technology and data companies.’ The top ranked concern for dermatologists was ‘concerns over medical liability due to machine error’ (Fig. 8).
Preparedness for the introduction of artificial intelligence in clinical practice
A minority of respondents (n = 87, 13.8%) felt that the specialist training colleges were adequately prepared for the introduction of AI into clinical practice. Qualitative analysis of a total of 632 open-ended responses was performed, with similar responses grouped according to theme. Examples of verbatim responses are provided for each of the top five themes in Supplementary Table 6. Many respondents (n = 285) highlighted a need for improved training and education of college members about AI via scientific meetings, seminars, and the training curriculum. Some respondents highlighted the need for the colleges to be proactive in their approach in order to safeguard member interests (n = 99), with one participant requesting the college “make sure any changes are in the best interests of patients and [the] profession”. Others endorsed the development of frameworks and/or guidelines for AI implementation by the colleges (n = 30). The formation of sub-committees and working groups with expertise in AI to develop position statements (n = 46) was seen by some as a priority. Other response themes related to colleges taking leadership in AI policy development and licensing (n = 74) and being closely involved in the trial and application of emerging AI systems (n = 69). Others called for the development of measures to ensure patient safety (n = 16) and the optimisation of clinical outcomes (n = 17).
Ensuring that patient care remains in the realm of the specialist (n = 39) was recognized as a priority. In addition, the implementation of measures to assist clinicians in adapting to changes in care delivery (n = 17) was reported as a goal, with one respondent stating that this will “assist trainees to prepare for the AI era”. Some respondents highlighted a need for colleges to support more AI research (n = 69) and to engage with industry, regulators and Government (n = 74). A need for clarity regarding the responsibilities of clinicians who will use the technology was also emphasised. Several respondents highlighted the need to redefine the professional scope of specialists and related healthcare provider groups as AI is deployed more widely in clinical practice.
Analysis according to demographic subgroups
No statistically significant differences were found for responses based on location of practice (metropolitan, rural or both, p > 0.149, Supplementary Table 7). In contrast, differences were noted based on years of clinical experience (Supplementary Table 8). Respondents currently in training were more than twice as likely as qualified specialists to consider that AI will impact workforce needs ‘to a great extent’ within the next decade (p < 0.001), despite no differences in self-reported knowledge of AI and its applications between these groups. Similarly, those still in training (n = 44, 66.7%) or with less than 5 years of experience as a specialist (n = 41, 66.1%) were more likely than those with 20–30 or more than 30 years of experience to believe that AI would impact practice ‘to a great extent’ beyond the next decade (p < 0.001). Those with more than 30 years of experience were less likely than those with fewer years of practice to believe that workforce needs would decrease (p = 0.007).
Comparison of responses between radiologists and radiation oncologists
Despite the distinct clinical roles of radiologists and radiation oncologists, no meaningful differences were detected in responses between the two professional groups (p ≥ 0.14), with the exception of knowledge of AI (radiation oncologists were more likely to rate their knowledge as below average compared to their peers, p = 0.02) and the number of years before AI has a noticeable effect on their field (radiation oncologists were more likely to believe that the effect will be noticeable within the next five years, p = 0.01). It is possible that responses from a larger sample of specialists in these fields may identify group differences.
Discussion
This survey was conducted to understand the perceptions of ophthalmologists, radiologists/radiation oncologists, and dermatologists about AI. These groups were selected as image analysis is a core work task for each profession and a variety of AI tools are being developed specifically for these specialties. To our knowledge, this survey is one of the first of its kind to investigate these specialist groups in parallel.
Most survey respondents perceived the introduction of AI technology in their respective fields as a positive advance. A recent survey of fellows and trainees of the Canadian Royal College of Physicians and Surgeons had similar findings: 72.2% of 3,919 respondents indicated that AI would have a positive impact on workflow and/or clinical practice and patient experience23. Only 17.2% of respondents in the Canadian survey indicated that AI would have either a negative impact on workflow or place their specialty at risk. These generally positive sentiments regarding AI have been echoed across a range of medical specialties in previous surveys13–16,19,22,23; however, surveys of general practitioners21 and of psychiatrists20 have indicated that the potential of AI may be limited for these groups. Similar to our findings, low rates of clinical AI use by radiologists/radiation oncologists have previously been reported, indicating that AI has not yet been widely adopted in this field15,16. However, it is possible that AI use is underreported in surveys such as this due to the lack of visibility of algorithms deployed within imaging platforms or to variations in the interpretation of what constitutes AI.
Most respondents considered that AI will have a noticeable impact on clinical practice within the next 5 years and on workforce needs within the next decade. In contrast, participants with more years of clinical experience were less likely to believe AI would have an impact on workforce needs within and beyond the coming decade, and current AI users were more likely to believe that workforce needs would increase. The current survey did not explore the basis for these diverging opinions. A survey of radiologists in the United States reported that AI would dramatically influence professional duties; however, the impact on workforce numbers was not ascertained15. Other surveys have indicated that workforce needs would remain stable or increase over the coming decade despite the introduction of AI16,18. Understanding the basis for these different perceptions of the impact of AI on the future medical workforce may be an interesting subject for further study.
Improved access to disease screening was reported by ophthalmologists and dermatologists as the greatest perceived advantage of the use of AI. Recognition of the need for increased disease screening capacity may be explained by the combined effects of workforce maldistribution between regional and rural areas and urban areas and projected increases in the burden of diseases, such as diabetic retinopathy and skin cancer, due to population ageing and growth29,30. The perceived advantage of reducing time spent on monotonous tasks by radiologists/radiation oncologists is not surprising given the large and growing volume of images viewed by these practitioners. Other studies have similarly identified reduced administrative burden and decreased image interpretation time as advantages of AI use for primary care physicians21 and pathologists19.
The adoption of AI tools by clinicians is likely to be influenced by the extent to which they can be integrated into clinical workflows, enhance efficiency and achieve acceptable levels of performance31. The standards set by regulators of AI applications for health may not be directly aligned with the expectations of clinicians in practice. Interestingly, dermatologists were less accepting of a proposed clinical workflow that included AI-assisted diagnosis. This may be attributable to the distinct clinical practices of dermatologists31. Survey respondents had consistently high expectations of AI system performance. There are many examples of AI outperforming humans1,2; however, until this study little was known about the level of performance clinicians expect from AI systems.
In keeping with high expectations for AI system performance, respondents were concerned about medical liability due to machine error. Legal processes for dealing with harms arising from new technologies have evolved in parallel with innovation; however, the legal, moral and ethical considerations become increasingly challenging as technologies become more autonomous32,33. Lessons learned from autonomous vehicle technologies may provide some guidance, but this remains an area for further research and broad-based consultation33,34.
An additional concern of respondents was a reduced reliance on medical specialists as a consequence of AI adoption. This concern is consistent with the impact of the technology on future workforce needs reported in this and other studies15,16,19. It is notable that the primary concern of respondents was the divestment of healthcare to large technology companies. General mistrust in large technology companies has been documented recently35 and specifically in relation to healthcare36. Accordingly, numerous survey respondents called for leadership from specialist colleges and representative bodies to educate their members and to contribute to frameworks for the development, adoption and regulation of AI technologies in healthcare. The recent survey of fellows and residents conducted as part of the Canadian Royal College of Physicians and Surgeons Taskforce Report on Artificial Intelligence and Emerging Digital Technologies similarly indicated that digital health literacy standards should be introduced and that governing bodies should take a proactive approach to AI23. Respondents called for improved training and education to increase awareness of AI and proficiency in data science and statistics. Training in the ethical and legal aspects of AI ranked second only to basic AI proficiency among the needs identified for fellows23.
The limitations of this survey warrant consideration. Volunteer response bias means that the results may not be broadly representative of the views of clinicians in Australia and New Zealand. Moreover, the findings may not be generalizable beyond these countries. As response rates from radiologists/radiation oncologists and dermatologists were low, it is not possible to ascertain whether the views of respondents are representative of others in these specialty groups. Finally, the survey design imposes limitations on the scope of response options, and thus the survey findings should not be regarded as a comprehensive account of the perceptions of respondents. This limitation was mitigated by the inclusion of open-ended questions.
In conclusion, this survey highlights major similarities between the perceptions of ophthalmologists, radiologists/radiation oncologists and dermatologists in Australia and New Zealand about the application of AI in medicine. Overall, AI was regarded as a means to improve patient access to care, as well as to enhance clinical efficiency and performance. Concerns were raised about the influence of large technology and data companies, implications for medical liability and reduced reliance on medical specialists. As the successful implementation of AI in healthcare is dependent on detailed understanding of clinician and patient expectations of the technology, further research in these domains is needed.
Methods
This prospective anonymous online survey was approved by the Human Research Ethics Committee of the Royal Victorian Eye and Ear Hospital (HREC 18-1408HL) and was conducted in accordance with the tenets of the Declaration of Helsinki and its subsequent revisions. Electronic informed consent was obtained from each participant online prior to survey commencement.
Survey design
Study data were collected and managed using REDCap electronic data capture tools hosted at the Centre for Eye Research Australia. Survey questions were developed after a review of previous survey literature and in consultation with ophthalmologists, radiologists and dermatologists to ensure face validity. Example scenarios and related occupational fields differed between surveys designed for each specialty. Draft surveys were circulated to executives from the RANZCO, the RANZCR and the ACD for testing. The survey consisted of 18 multiple choice and open-ended questions (Supplementary Table 1). An additional question was included for radiologists and radiation oncologists to enable differentiation between these professional groups. Questions focused on frequency of AI use, knowledge, workforce impact, preparedness for future AI implementation, acceptable levels of error, individual concerns and barriers. Survey invitations were emailed by college staff to all fellows and trainees of the colleges residing in Australia and New Zealand. Dermatology respondents were from Australia alone. Participation in this anonymous survey was voluntary and no incentives were provided.
Data analysis
Multiple choice questions were analysed using Stata (v15.1, StataCorp, College Station, Texas). A complete-case analysis was conducted (i.e., incomplete surveys were excluded from the analyses, see Supplementary Table 3). Responses were compared according to profession, location of practice and years of experience using Pearson’s χ2 test. Two-sided significance testing was conducted with an alpha of 5%. Thematic analysis of responses to “What do you think the college should do in preparation for the deployment of AI?” was undertaken using a ‘bottom up’ approach. Two authors (PR and XH) generated a list of initial codes after reviewing responses. Codes were built into broader categories and recurring themes were developed. Thematic discrepancies were resolved via discussion. Responses to questions regarding the advantages and disadvantages of AI were analysed as follows: the first preference of each respondent received a score of 3, the second preference a score of 2 and the third preference a score of 1. Scores were tallied for each response option and divided by the number of respondents to provide an overall score for each option; higher scores indicate higher rankings. Scores were displayed as radar plots (Figs. 7, 8).
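To make the ranking scheme concrete, the following is a minimal Python sketch (not the authors' Stata code) of the weighted scoring described above: each respondent's first preference earns 3 points, the second 2, the third 1, and per-option totals are divided by the number of respondents. The option labels and example rankings are hypothetical and shortened for illustration.

```python
from collections import Counter

WEIGHTS = {1: 3, 2: 2, 3: 1}  # preference position -> points

def preference_scores(rankings):
    """rankings: one list per respondent, holding up to three options in order of
    preference. Returns {option: total points / number of respondents}."""
    totals = Counter()
    for ranked in rankings:
        for position, option in enumerate(ranked, start=1):
            totals[option] += WEIGHTS.get(position, 0)
    n = len(rankings)
    return {option: points / n for option, points in totals.items()}

# Hypothetical example with three respondents ranking three advantages.
example = [
    ["screening access", "diagnostic confidence", "less monotonous work"],
    ["less monotonous work", "screening access", "diagnostic confidence"],
    ["screening access", "less monotonous work", "diagnostic confidence"],
]
print({k: round(v, 2) for k, v in preference_scores(example).items()})
# {'screening access': 2.67, 'diagnostic confidence': 1.33, 'less monotonous work': 2.0}
```

A score computed this way preserves the relative ordering of options even when respondents rank only their top three choices, which is how the weighted totals underlying the radar plots can be read.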
Acknowledgements
We gratefully acknowledge David Andrews, Iris Hui, Kirsten Fitzpatrick, Mark Nevin, Natalia Vukolova and Tim Wills for assisting with the dissemination of the survey to college fellows and trainees.
Author contributions
P.v.W. and S.K. conceived the study, engaged with stakeholders and developed the survey questions in consultation with all co-authors. M.M., P.R., J.S., P.v.W. and X.H. performed analysis and interpretation of results. J.S. and P.v.W. wrote the manuscript with support from P.R. and M.M. These above authors as well as H.S., M.J., J.C., L.O., and L.P. were all actively involved in reviewing and revising the manuscript.
Data availability
The authors declare that the data supporting the findings of this study are available within the paper and the supplementary information files. Raw data are available upon request.
Competing interests
The authors declare the following competing interests: HPS is a shareholder of MoleMap NZ Limited and e-derm consult GmbH and undertakes regular teledermatological reporting for both companies. HPS is a Medical Consultant for Canfield Scientific Inc., MetaOptima and Revenio Research Oy and also a Medical Advisor for First Derm. HPS holds an NHMRC MRFF Next Generation Clinical Researchers Program Practitioner Fellowship (APP1137127). The authors declare that there are no other competing interests. The Centre for Eye Research Australia receives Operational Infrastructure Support from the Victorian Government.
Footnotes
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
These authors contributed equally: Jane Scheetz and Philip Rothschild.
Supplementary Information
The online version contains supplementary material available at 10.1038/s41598-021-84698-5.
References
- 1. Haenssle HA, et al. Man against machine: Diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Ann. Oncol. 2018;29(8):1836–1842. doi: 10.1093/annonc/mdy166.
- 2. Abràmoff MD, Lavin PT, Birch M, Shah N, Folk JC. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. npj Digit. Med. 2018;1(1):1–8. doi: 10.1038/s41746-017-0008-y.
- 3. Li Z, et al. An automated grading system for detection of vision-threatening referable diabetic retinopathy on the basis of color fundus photographs. Diabetes Care. 2018;41(12):2509–2516. doi: 10.2337/dc18-0147.
- 4. Ting DSW, et al. Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes. JAMA. 2017;318(22):2211–2223. doi: 10.1001/jama.2017.18152.
- 5. Li Z, He Y, Keel S, Meng W, Chang RT, He M. Efficacy of a deep learning system for detecting glaucomatous optic neuropathy based on color fundus photographs. Ophthalmology. 2018;125(8):1199–1206. doi: 10.1016/j.ophtha.2018.01.023.
- 6. Lakhani P, Sundaram B. Deep learning at chest radiography: Automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology. 2017;284(2):574–582. doi: 10.1148/radiol.2017162326.
- 7. Halicek M, et al. Deep convolutional neural networks for classifying head and neck cancer using hyperspectral imaging. J. Biomed. Opt. 2017;22(6):060503. doi: 10.1117/1.JBO.22.6.060503.
- 8. Rajpurkar P, et al. CheXNet: Radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv preprint arXiv:1711.05225; 2017.
- 9. Haenssle HA, et al. Man against machine reloaded: Performance of a market-approved convolutional neural network in classifying a broad spectrum of skin lesions in comparison with 96 dermatologists working under less artificial conditions. Ann. Oncol. 2020;31(1):137–143. doi: 10.1016/j.annonc.2019.10.013.
- 10. Tschandl P, et al. Comparison of the accuracy of human readers versus machine-learning algorithms for pigmented skin lesion classification: An open, web-based, international, diagnostic study. Lancet Oncol. 2019;20(7):938–947. doi: 10.1016/S1470-2045(19)30333-X.
- 11. Topol EJ. High-performance medicine: The convergence of human and artificial intelligence. Nat. Med. 2019;25(1):44–56. doi: 10.1038/s41591-018-0300-7.
- 12. Gong B, et al. Influence of artificial intelligence on Canadian medical students’ preference for radiology specialty: A national survey study. Acad. Radiol. 2019;26(4):566–577. doi: 10.1016/j.acra.2018.10.007.
- 13. Dos Santos DP, et al. Medical students’ attitude towards artificial intelligence: A multicentre survey. Eur. Radiol. 2019;29(4):1640–1646. doi: 10.1007/s00330-018-5601-1.
- 14. Sit C, et al. Attitudes and perceptions of UK medical students towards artificial intelligence and radiology: A multicentre survey. Insights Imaging. 2020;11(1):14. doi: 10.1186/s13244-019-0830-7.
- 15. Collado-Mesa F, Alvarez E, Arheart K. The role of artificial intelligence in diagnostic radiology: A survey at a single radiology residency training program. J. Am. Coll. Radiol. 2018;15(12):1753–1757. doi: 10.1016/j.jacr.2017.12.021.
- 16. Waymel Q, Badr S, Demondion X, Cotten A, Jacques T. Impact of the rise of artificial intelligence in radiology: What do radiologists think? Diagn. Interv. Imaging. 2019;100(6):327–336. doi: 10.1016/j.diii.2019.03.015.
- 17. van Hoek J, et al. A survey on the future of radiology among radiologists, medical students and surgeons: Students and surgeons tend to be more skeptical about artificial intelligence and radiologists may fear that other disciplines take over. Eur. J. Radiol. 2019;121:108742. doi: 10.1016/j.ejrad.2019.108742.
- 18. European Society of Radiology. Impact of artificial intelligence on radiology: A EuroAIM survey among members of the European Society of Radiology. Insights Imaging. 2019;10(1):105. doi: 10.1186/s13244-019-0798-3.
- 19. Sarwar S, et al. Physician perspectives on integration of artificial intelligence into diagnostic pathology. npj Digit. Med. 2019;2(1):1–7. doi: 10.1038/s41746-018-0076-7.
- 20. Doraiswamy PM, Blease C, Bodner K. Artificial intelligence and the future of psychiatry: Insights from a global physician survey. Artif. Intell. Med. 2020;102:101753. doi: 10.1016/j.artmed.2019.101753.
- 21. Blease C, Kaptchuk TJ, Bernstein MH, Mandl KD, Halamka JD, DesRoches CM. Artificial intelligence and the future of primary care: Exploratory qualitative study of UK general practitioners’ views. J. Med. Internet Res. 2019;21(3):e12802. doi: 10.2196/12802.
- 22. Oh S, Kim JH, Choi S-W, Lee HJ, Hong J, Kwon SH. Physician confidence in artificial intelligence: An online mobile survey. J. Med. Internet Res. 2019;21(3):e12422. doi: 10.2196/12422.
- 23. Reznick RK, et al. Task Force Report on Artificial Intelligence and Emerging Digital Technologies. Royal College of Physicians and Surgeons of Canada. https://protect-au.mimecast.com/s/9uFBC3Q8MvCpVR922hYwd-G?domain=royalcollege.ca. Accessed 14/12/2020.
- 24. Pakdemirli E. Artificial intelligence in radiology: Friend or foe? Where are we now and where are we heading? Acta Radiol. Open. 2019;8(2):2058460119830222. doi: 10.1177/2058460119830222.
- 25. Rao VM. RSNA president calls for radiology leaders to explain AI. 2018. https://ai-med.io/rsna-president-calls-for-radiology-leaders-to-explain-ai. Accessed 14/12/2020.
- 26. Musa M. Opinion: Rise of the robot radiologists. The Scientist. 2018. https://www.the-scientist.com/news-opinion/opinion--rise-of-the-robot-radiologists-64356. Accessed 14/12/2020.
- 27. Chockley K, Emanuel E. The end of radiology? Three threats to the future practice of radiology. J. Am. Coll. Radiol. 2016;13(12):1415–1420. doi: 10.1016/j.jacr.2016.07.010.
- 28. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems; 2012.
- 29. Johnson C. National medical workforce strategy urgently needed. Aust. Med. 2018;30(7):8.
- 30. Hay M, et al. Selecting for a sustainable workforce to meet the future healthcare needs of rural communities in Australia. Adv. Health Sci. Educ. 2017;22(2):533–551. doi: 10.1007/s10459-016-9727-0.
- 31. Janda M, Soyer HP. Can clinical decision making be enhanced by artificial intelligence? Br. J. Dermatol. 2019;180(2):247–248. doi: 10.1111/bjd.17110.
- 32. O'Sullivan S, et al. Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery. Int. J. Med. Robot. Comput. Assist. Surg. 2019;15(1):e1968. doi: 10.1002/rcs.1968.
- 33. Awad E, et al. The Moral Machine experiment. Nature. 2018;563(7729):59–64. doi: 10.1038/s41586-018-0637-6.
- 34. Yang G-Z, et al. Medical robotics—Regulatory, ethical, and legal considerations for increasing levels of autonomy. Sci. Robot. 2017;2(4):eaam8638. doi: 10.1126/scirobotics.aam8638.
- 35. Stjernfelt F, Lauritzen AM. Trust busting the tech giants? In: Your Post Has Been Removed. Springer; 2020:217–239.
- 36. Hunter P. The big health data sale. EMBO Rep. 2016;17(8):1103–1105. doi: 10.15252/embr.201642917.